
CEO Perspectives: Introducing the AI Trust Stack

David Marquis, Chief Executive Officer, Caseware

In my last article, I wrote about why artificial intelligence will reshape professional services - not simply by improving individual tools, but by becoming embedded in the workflows where professional work happens. 

That argument raises an obvious follow-up question. If AI is going to work within audit, accounting, tax, and compliance processes, what is actually needed for it to be trustworthy in those environments? Not trustworthy in a general sense. Trustworthy in the specific, exacting sense that professionals demand. In many industries, the answer might simply be accuracy and efficiency. 

Professional services are different. 

The outputs of these professions are relied upon by clients, regulators, lenders, investors, and capital markets. The work has to be consistent, explainable, documented, and reviewable. 

In other words, intelligence alone is not enough. 

AI systems operating in professional environments must also support the conditions that allow work to be trusted. 

Over the past year, I’ve been sitting with this challenge. What I keep coming back to is that it’s not a single problem. It’s a layered one, and it requires multiple components working together to solve it properly. 

I’ve started calling those layers the AI Trust Stack. 

Why AI needs a trust framework

Most conversations about AI understandably focus on what the models can do – and what they can do is genuinely extraordinary. 

Large language models can generate text, analyze data, and reason across complex information in ways that were difficult to imagine even a few years ago. 

These advances represent an extraordinary leap in intelligence. 

But professional work does not rely on intelligence alone. 

For work to be trusted, professionals depend on something more than a capable output. They depend on systems that provide context about the work being performed, structure around how tasks are executed, documentation of evidence and decisions, and governance and oversight at every stage. 

These elements are the mechanism by which professional conclusions earn the right to be relied upon – by clients, by regulators, and by the markets that price risk against them. 

When AI begins participating in professional workflows, those same requirements apply. 

Which means the future of AI in professional services will be shaped not just by how good the models are, but by the systems that surround them. 

The layers of the AI Trust Stack

The AI Trust Stack is my way of mapping the components that need to be present for AI to operate effectively in professional environments. 

At the foundation is intelligence. The models, the reasoning systems, the analytical capabilities that allow AI to process information, generate insights, and perform tasks. This is where most of the current conversation sits, and rightly so – without strong intelligence, nothing else matters. 

But intelligence on its own is passive. It can generate answers, but it doesn’t act. So, the next layer is agency.

Agency is what allows AI to take action. To plan, to execute, to move work forward. It’s the difference between a system that responds to prompts and one that can carry out tasks within defined boundaries. In professional environments, this doesn’t mean autonomy without control - it means governed execution. AI agents that can operate within constraints, follow rules, and take meaningful action as part of a broader system.

But even agentic capability is not enough on its own. So above it sits workflow.

Professional work happens inside structured processes: planning engagements, collecting evidence, performing analysis, documenting findings, and conducting reviews. AI that operates outside these workflows as a standalone tool rather than a participant in the process will always be limited in what it can genuinely contribute. Agents need to be embedded directly into these workflows to create real value.

Above workflow sits context.

Professional judgment is deeply contextual. It draws on years of accumulated experience, professional methodologies, industry standards, firm-specific approaches, and prior engagements. AI systems that lack this context are working without the information that professionals consider essential. They may produce outputs that look impressive on the surface, but any experienced practitioner would recognize that they are missing something important.

At the top of the stack sits governance.

Professional work requires oversight, documentation, and accountability. It requires transparency into how conclusions were reached and how decisions were made. In regulated professions, governance isn’t a layer you add at the end - it’s structural. AI operating in these environments must support it from the outset.

Together, these five layers form the AI Trust Stack.

Each builds on the one below it, and each one is necessary. Remove any layer, and the stack doesn’t hold.
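The dependency between the five layers can be sketched in code. This is purely an illustration of the concept as described above, not any actual Caseware implementation; the layer names come from the article, and the `stack_holds` helper is a hypothetical function introduced only to show that removing any one layer breaks the whole:

```python
# Illustrative sketch of the AI Trust Stack: an ordered list of layers,
# each building on the one below it.
AI_TRUST_STACK = [
    ("intelligence", "models and reasoning that process information"),
    ("agency", "governed execution within defined boundaries"),
    ("workflow", "embedding in structured professional processes"),
    ("context", "methodologies, standards, and prior engagements"),
    ("governance", "oversight, documentation, and accountability"),
]

def stack_holds(present_layers):
    """The stack holds only when every layer is present."""
    return all(name in present_layers for name, _ in AI_TRUST_STACK)

# Remove any layer and the stack doesn't hold.
full = {"intelligence", "agency", "workflow", "context", "governance"}
print(stack_holds(full))                       # True
print(stack_holds(full - {"governance"}))      # False
```

The point the sketch makes is structural: trust is a conjunction, so a system that is merely very intelligent still fails the check.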

Why this matters

As AI adoption accelerates, most organizations are experimenting with tools that operate at the intelligence layer. These tools can provide impressive demonstrations of what AI can do. 

But in professional environments, the real challenge begins after the demonstration ends. 

  • How does the AI access the right information? 
  • Where does it perform its work? 
  • How are its outputs reviewed? 
  • How are decisions documented? 
  • How is accountability maintained? 

Without the surrounding layers of the AI Trust Stack, even powerful AI systems struggle to integrate into real professional workflows in any meaningful way. They remain impressive tools at the periphery of the process, rather than trusted participants within it. 

This is why I believe the next generation of professional software will be defined not just by the intelligence of its models, but by the quality of the platforms those models operate within. 

From assistants to agents

One of the developments I find most interesting right now is the emergence of AI agents - systems capable of performing tasks across workflows rather than simply responding to prompts. 

Agents can retrieve information, perform analysis, generate outputs, and move work forward through a process. 

The potential here is real and significant. But for agents to operate effectively in professional environments, they have to function inside the AI Trust Stack. 

They need access to the workflow, grounding in the context of prior work, and governance structures that ensure accountability at every step. 

Without these layers, AI remains an assistant. 

With them, AI begins to operate as a participant in professional workflows. 

The conversation ahead

Over the coming weeks, I’ll explore each layer of the AI Trust Stack in more detail. 

I’ll consider why the intelligence layer alone is not enough, how AI agents will begin operating inside professional workflows, why context is so critical to AI performance in practice, and how governance and transparency enable the kind of trusted AI systems this profession needs.  

Artificial intelligence will undoubtedly change how professional services operate. 

But the most important challenge ahead is not simply building more intelligent systems. 

It’s building systems where intelligence and trust work together.

That’s what the AI Trust Stack is ultimately about. 
