If you listen closely to how in-house legal teams talk about their work, you’ll notice a pattern emerging fairly quickly. The frustration isn’t usually about the law itself, but everything around it. The constant Slack messages, the repeated questions, the same clauses being rewritten, reviewed, and negotiated again and again. And the lingering sense that a lot of institutional knowledge exists, but never quite shows up when it’s needed.
For years, legal technology has tried to solve this with more tools. New dashboards, new AI assistants, and new ways to search documents faster. Some of those tools help, but many don’t stick. And most struggle with the same underlying problem: legal work doesn’t happen in isolation, and neither does legal knowledge.
That’s the context in which Sandstone has entered the conversation. In January this year, Sandstone announced a $10 million seed round led by Sequoia, and described itself as “the platform for AI-native legal departments.” It’s a bold phrase, and an intentionally structural one. Rather than promising faster answers or better drafts, Sandstone is making a different claim: that the real opportunity lies in turning institutional legal knowledge into something operational.
Not stored. Not searched. Used.

Since that January announcement, the initial hype has given way to something more telling: steady product development, deeper customer adoption, and a clearer articulation of what an AI-native legal department actually looks like in practice. So now that the dust has settled, we caught up with co-founder Jarryd Strydom to see what has endured.
From tools to operating models
The phrase “AI-native” is doing a lot of work in legal tech right now, with varying degrees of precision. In practice, it can mean anything from bolting a chatbot onto an existing product to redesigning workflows around automation from the ground up.
Sandstone’s interpretation leans firmly toward the latter. The platform is built around the idea that legal teams shouldn’t have to leave the systems they already use (think Slack, Salesforce, and email) to access legal context or trigger legal workflows. Instead, legal judgment should surface where the work is ordinarily happening, shaped by playbooks that reflect how a team operates in practice.
That framing reflects a broader shift underway in legal tech. As AI moves from experimentation to expectation, the differentiator is no longer whether a system can generate an answer, but whether it understands when and why that answer is needed, and what should happen next.
In-house teams feel this pressure most acutely. They sit at the intersection of legal risk and business velocity, often with limited headcount and growing demand. The result is a constant trade-off between responsiveness and rigor, one that has historically been difficult to resolve, even with the latest technologies.
Where institutional knowledge breaks down
Sandstone’s founding story is rooted in a familiar in-house experience. One of its co-founders, Jarryd Strydom, previously worked as in-house counsel at a fast-growing software company. His day-to-day reality of juggling procurement requests, redlines, and internal questions mirrors what many legal teams are forced to accept as normal.
The problem, as Sandstone sees it, isn’t that legal teams lack expertise. It’s that expertise is fragile. It lives in people’s heads, old documents, and half-remembered precedents. When someone leaves, changes roles, or simply gets busy, that context disappears. The team slows down. The same questions resurface. Risk tolerance becomes inconsistent.
This is where Sandstone positions itself differently from traditional knowledge management tools. Rather than treating knowledge as something to be catalogued and retrieved, it treats it as something to be executed: embedded into workflows that handle intake, triage, and decision-making in real time.
That distinction is important. It suggests a shift from documentation to infrastructure, and from reference material to living systems.
AI-native does not mean autonomous
It’s also worth being clear about what Sandstone is not claiming. This is not a vision of autonomous legal departments running on autopilot. Human judgment remains central, both practically and philosophically.
Instead, the platform is designed around agentic workflows that operate within defined boundaries, drawing on playbooks, applying context, and escalating decisions when judgment is required. In that sense, Sandstone aligns closely with where much of the legal AI conversation has landed heading into 2026: automation where it’s appropriate, oversight where it’s essential.
This approach also reflects a growing realism in the market. Legal teams are no longer impressed by broad claims about what AI might do someday. They are asking harder questions about reliability, accountability, and return on investment. Any system that can’t explain its reasoning, show its sources, or fit into existing workflows is increasingly viewed by the legal community as friction instead of progress.
Why in-house teams are the proving ground
One of the more interesting aspects of Sandstone’s positioning is its exclusive focus on in-house legal teams. That choice brings precision, but also constraint. In-house teams vary widely in size, maturity, and industry context. Designing for them requires opinionated trade-offs.
At the same time, in-house teams are often where new operating models emerge first. They are closer to the business. They feel inefficiency immediately and they are under constant pressure to justify legal’s role not just as a cost center, but as a strategic partner.
Seen through that lens, Sandstone’s emphasis on measurable outcomes (i.e. faster turnaround times, clearer visibility into bottlenecks, and consistent application of legal judgment) speaks to a broader redefinition of value in legal work. The question changes from how much time was spent, to how effectively that time was applied.
A system-level shift, not a feature race
Stepping back, Sandstone is best understood not as a response to the latest wave of AI capability, but as part of a wider system-level shift. Legal technology is moving away from isolated point solutions and toward platforms that attempt to coordinate work across people, processes, and data.
Whether Sandstone ultimately succeeds will depend on execution, adoption, and how well it navigates the tension between flexibility and control. But its underlying premise, that institutional legal knowledge should compound rather than stagnate, is one many in-house teams will recognize immediately.
The full conversation with Jarryd Strydom, included below, explores how these ideas translate into real product decisions, where the limits of automation lie, and what it actually takes to build an AI-native legal department in practice.

Q&A with Jarryd Strydom, Co-founder, Sandstone:
- You’ve described Sandstone as “AI-native,” which is a term that gets used a lot. In practical terms, what does that mean for an in-house legal team on a normal weekday?
Jarryd: “I totally agree that ‘AI-native’ has become a pretty notorious term across software verticals, and I do think it gets misused and misunderstood. Pragmatically, AI-native in the context of in-house tech means the capabilities of AI are woven into the legal team’s workflows end-to-end and actually do the work. Think of it this way: if you are working for your tool, it’s not AI-native. The tool should be working for you. Take redlining as an example. If you need to redline a document, you drag it into a point-solution tool, craft a prompt or playbook, click submit, review the outputs, then download the result, upload it to your CLM, and respond to the requester. That is not AI-native. What is AI-native is the request being analyzed at intake, the system figuring out the intent and the task at hand, and then handing it off to an agent that finds the appropriate context and playbook, runs the analysis, and brings the result back to the lawyer (the orchestrator) for final review before it is sent.
The day in the life of a modern in-house team running on an AI-native legal department looks rather different from the status quo. Lawyers check in on what work their agents have been doing and which work needs input or sign-off, and the rest of their day is spent on strategic work such as negotiating better legal and commercial postures and refining playbooks and workflows based on data. One of our investors did a great post on the ‘day-in-the-life’ after Sandstone.
I’ll also reference a post I did last year, where I took a first pass at defining an AI-native legal department: “One that designs its workflows, decision-making, and service delivery around an integrated intelligence layer, where human legal judgment works seamlessly with embedded AI systems, and automation functions as a core capability rather than an add-on.” This is going to be a big theme of 2026. We just had a dinner in Austin and Canva attended. They are thinking deeply about this too: See post.”
- Much of your founding story centers on context being lost across Slack, email, tickets, and documents. Why has that been so hard for in-house legal teams to solve historically?
Jarryd: “Business units choose tools that work for them, and they should! But Legal is a unique unit: it supports the entire business. It’s a tool fragmentation issue. There is no one-size-fits-all department tool, so Legal is stuck in the middle, connecting the dots. Fortunately, cross-system integrations have become very feasible, and LLMs are very good at making sense of unstructured data. So the power of an AI-native legal department system like Sandstone is that it sits across all the business systems, serving as a centralized intelligence layer that the lawyers control, pulling and pushing the context they need for the day-to-day without making business users change their existing behaviors.”
- Sandstone emphasizes operationalizing playbooks instead of just storing them. What tends to break when teams try to turn legal judgment into repeatable workflows?
Jarryd: “The issue with playbooks in their current state is that the instant a playbook is written down, it becomes outdated. Every time you use one, you learn something new: a new fallback or preferred position, the law changes, and so on. These updates all have to be tracked and applied across every document. With everything else on legal teams’ plates, this understandably gets deprioritized, and the gap only continues to grow.
Sandstone solves this problem by dynamically learning and suggesting updates every time you use a playbook, propagating those updates across your playbooks instantly. This doesn’t just save your team time; it ensures your legal department has a uniform risk tolerance, no matter who works on a matter.”
- You’ve been explicit that Sandstone is not about replacing lawyers. Where do you think automation genuinely helps, and where does it start to get risky?
Jarryd: “Not only are we not replacing lawyers, we are amplifying the way they work and making them more strategic partners. Our perspective is that when workload is eased, it creates more bandwidth for strategic work.
The low-hanging fruit is where the first obvious efficiency gains come from: repetitive Q&A on policies, the same contracts drafted on repeat, document reviews against playbooks. In-house legal will then shift to become system thinkers, optimizing playbooks based on data analytics, looking more broadly at commercial implications, and so forth.
Over time, automation will begin playing a role in even the more complex and strategic legal tasks. We are already thinking about this for contract reviews.”
- In-house teams often worry about adding yet another system. How do you think about adoption, especially when legal work already spans so many tools?
Jarryd: “We think about this in two ways. First, Sandstone integrates directly with your existing tech stack and is specifically designed to eliminate the cost of context switching between tabs and tools. Centralizing work and context into one view removes the need to jump between systems, so it doesn’t create the drag on adoption that other tools can. Second, our client success team is incredibly hands-on. For any question or adoption concern, they work directly alongside legal teams to answer it or solve the problem.”
- There’s growing pressure on legal teams to demonstrate business value, not just legal correctness. How should teams think about measuring success in an AI-supported environment?
Jarryd: “Success shouldn’t be measured in efficiency gains alone. While efficiency gains are great, they’re only part of the story. Success for AI-native legal departments should be the ability to help drive business unit success: faster hiring, procurement, and sales cycles, supported by legal’s ability to holistically understand the impact on the business.”
- From your experience, what’s the hardest mindset shift for legal teams moving from reactive support to a more proactive, systems-driven role?
Jarryd: “The largest shift from reactive to proactive is the issue of context. Without a full picture of information, it is impossible to be a true business partner. When legal has the full vision of the impact on the business, they are able to move with the speed and precision necessary to drive value.”
- Looking ahead, what do you think most people misunderstand about what it will take to build a truly AI-native legal department?
Jarryd: “Firstly, it’s a bold decision. Moving from traditional legal operations to an AI-native model isn’t a quick transition—it’s a structural shift. Getting it right depends on four critical factors:
Change management
This is less about tool adoption and more about identity. Lawyers need clarity on why the shift is happening, what success looks like, and how their role evolves rather than diminishes. Without deliberate change management, AI risks being treated as just another piece of software instead of a new operating model.
A mindset shift toward “thinking in systems”
The transition ultimately requires a fundamental change in how in-house legal teams work. Instead of a linear input → output model, lawyers need to design reusable systems that allow one legal decision or insight to have 1:many applications across the business. If lawyers are still working for their tools, the exponential efficiency gains of AI won’t be realized. In that sense, “orchestrators” is an apt description of the next generation of in-house lawyers.
Sunsetting legacy legaltech
Becoming AI-native will require letting go of large parts of the existing legaltech stack. Siloed point solutions that don’t integrate with the rest of the business will increasingly become a constraint rather than an asset. Instead of maintaining a separate pile of legal tools, legal teams will embed legal intelligence directly into core business systems—CRM, procurement, finance, HR—so legal guidance shows up where decisions are actually made.
Operating model and capability redesign
An AI-native legal function needs new workflows, incentives, and skills to support this shift. That includes redefining what “good” looks like (speed, leverage, consistency—not just precision), investing in legal ops and data fluency, and designing processes where AI is embedded by default rather than bolted on. Without this, even the right mindset won’t translate into sustained impact.”
