Human-in-the-Loop is Becoming an Assurance Control 

As artificial intelligence becomes embedded in audit and assurance workflows, the debate in Asia-Pacific is shifting. The question is no longer whether humans should remain involved in AI-driven decisions. Regulators, standard-setters and firms broadly agree on that principle. The more pressing issue is how that involvement is designed, evidenced and tested. 

This reflects a broader regulatory movement away from high-level AI principles towards demonstrable governance and accountability. For audit and assurance professionals, the challenge is familiar: translating intent into controls that are observable, repeatable and capable of independent review. 

Governance is lagging AI adoption, and the risk is real 

Insights shared by EY with audit and risk leaders at the CwX APAC conference in November 2025 illustrate the scale of the challenge. EY analysis shows that AI transformation is significantly outpacing governance maturity, with 99 per cent of organisations surveyed reporting financial losses linked to AI-related risks. Of those, 64 per cent experienced losses exceeding US$1 million. EY estimates the average loss at US$4.4 million among organisations that have experienced such risks, based on conservative midpoint assumptions. 

This governance gap is unfolding against a rapidly expanding regulatory landscape. EY research highlights more than 1,000 AI policy initiatives across nearly 70 countries, with more than 500 new AI-related regulatory developments reported globally in the past year. In APAC, new or revised frameworks span Singapore, Australia, Japan and South Korea. While approaches differ, they share a common emphasis on accountability, transparency and risk management. 

For assurance teams, this combination of rapid adoption, uneven governance and increasing regulatory scrutiny is creating pressure to demonstrate that AI-assisted decisions remain subject to professional judgement and effective control. 

From principles to proof 

Human-in-the-loop has featured prominently in global AI frameworks, including those issued by the OECD, and is reflected in guidance from bodies such as Singapore’s Infocomm Media Development Authority and the US National Institute of Standards and Technology. Increasingly, however, regulators and oversight bodies are focused on implementation rather than aspiration.

Audit standards have long required practitioners to obtain sufficient appropriate audit evidence and to apply professional judgement to all sources of information, whether those sources are human or automated. As AI systems influence risk assessments, population testing and anomaly detection, human-in-the-loop is being recast as a control mechanism rather than a philosophical safeguard.

In assurance terms, this means defined points in AI-assisted workflows where human review or approval is mandatory, overrides and escalations are governed by clear criteria, and decisions are documented with supporting rationale. Without this evidence, claims of human oversight risk becoming assertions rather than controls. 
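To make that idea concrete, the sketch below shows one way a mandatory review point might be captured as evidence. It is illustrative only: the field names, the decision categories and the record_review helper are hypothetical assumptions, not a description of any particular platform or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Decision(Enum):
    APPROVED = "approved"
    OVERRIDDEN = "overridden"   # the human replaces the AI output
    ESCALATED = "escalated"     # routed to a more senior reviewer


@dataclass
class ReviewRecord:
    """Evidence captured at a mandatory human review point (illustrative)."""
    ai_output_ref: str          # pointer to the AI-generated result under review
    reviewer_id: str
    decision: Decision
    rationale: str              # supporting rationale is a required field
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


def record_review(ai_output_ref: str, reviewer_id: str,
                  decision: Decision, rationale: str) -> ReviewRecord:
    """Refuse to create evidence without a documented rationale."""
    if not rationale.strip():
        raise ValueError("A review decision must include supporting rationale.")
    return ReviewRecord(ai_output_ref, reviewer_id, decision, rationale)
```

The design point is that the rationale is a required input, so the control produces documentation by construction rather than relying on reviewers to remember to record it.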

Lifecycle controls, not ad hoc intervention 

One of the more practical responses emerging in the market is the framing of human-in-the-loop across the entire AI lifecycle, rather than as a single review step. Frameworks presented by EY to APAC audit leaders outline how oversight can be embedded from use-case approval and data validation through to model deployment, ongoing monitoring and rollback decisions. 

Under this lifecycle approach, humans approve AI use cases and boundaries, data is validated for quality, consent and bias risk, models are reviewed for fairness and performance, and post-deployment monitoring includes drift detection, anomaly review and escalation. These activities are positioned as net new control measures, integrated with existing governance structures such as committees, lines of defence and validation processes. 
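As a rough illustration of how such lifecycle gates might be expressed, the sketch below models an ordered set of stages with human sign-off required before deployment. The stage names and the ready_to_deploy check are assumptions made for illustration, not a prescribed framework.

```python
from dataclasses import dataclass

# Hypothetical lifecycle stages, ordered from use-case approval to monitoring.
LIFECYCLE_STAGES = [
    "use_case_approval",           # humans approve the AI use case and its boundaries
    "data_validation",             # quality, consent and bias-risk checks
    "model_review",                # fairness and performance review before deployment
    "deployment",
    "post_deployment_monitoring",  # drift detection, anomaly review, escalation
]


@dataclass
class StageSignOff:
    stage: str
    approver_id: str
    approved: bool


def ready_to_deploy(sign_offs: dict[str, StageSignOff]) -> bool:
    """Deployment is blocked unless every earlier stage has a human sign-off."""
    pre_deployment = LIFECYCLE_STAGES[:LIFECYCLE_STAGES.index("deployment")]
    return all(
        stage in sign_offs and sign_offs[stage].approved
        for stage in pre_deployment
    )
```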

For auditors, the logic is familiar. What is new is the application of this discipline to systems that learn and adapt over time. 

Repeatability is the assurance test 

Regulators rarely focus on isolated examples of good practice. Their concern is whether controls operate consistently over time. Human-in-the-loop only functions as an assurance control if it is repeatable, triggered by defined thresholds, applied across engagements and capable of producing comparable evidence. 
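What a defined, repeatable trigger might look like is sketched below. The anomaly-score threshold, field names and evidence_entry helper are illustrative assumptions rather than recommended values; the point is that the same trigger and the same evidence fields are applied on every engagement, so the resulting records can be compared over time.

```python
# Illustrative threshold rule: a defined trigger rather than an ad hoc judgement call.
REVIEW_THRESHOLD = 0.8  # hypothetical anomaly-score threshold


def requires_human_review(anomaly_score: float,
                          threshold: float = REVIEW_THRESHOLD) -> bool:
    """Apply the same review trigger every time the control runs."""
    return anomaly_score >= threshold


def evidence_entry(engagement_id: str, item_id: str, anomaly_score: float) -> dict:
    """Every triggered review produces the same comparable evidence record."""
    return {
        "engagement_id": engagement_id,
        "item_id": item_id,
        "anomaly_score": anomaly_score,
        "threshold": REVIEW_THRESHOLD,
        "review_required": requires_human_review(anomaly_score),
    }
```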

This is particularly relevant in APAC, where audit firms operate across multiple jurisdictions with differing AI regimes. Repeatable human-in-the-loop workflows allow firms to apply consistent judgement standards even as regulatory detail varies. They also address a practical constraint. Senior judgement is finite. Well-designed controls ensure that human expertise is applied where risk is highest, rather than diffused across low-value review. 

Why assurance infrastructure matters 

As expectations around AI governance continue to take shape, the role of assurance-focused technology is becoming more prominent. Platforms designed for audit and assurance, rather than general AI development, are increasingly being used to embed control logic, evidence capture and workflow discipline into AI-assisted processes. 

Caseware’s relevance in this context lies in its long-standing focus on how assurance operates in practice. This includes designing workflows that embed review points, linking AI outputs to human decisions, and producing a single, defensible evidentiary trail. Such capabilities support consistency across teams, clients and reporting periods, and align with emerging expectations for demonstrable AI governance. 

An assurance response to an AI reality 

The regulatory direction is clear, even if the detail continues to evolve. Across APAC, policymakers are converging on expectations of accountability, transparency and demonstrable control over AI systems. For audit and assurance professionals, human-in-the-loop represents a practical bridge between emerging AI governance frameworks and long-standing assurance principles. 

It is no longer enough to say that humans remain responsible. The challenge lies in showing where that responsibility sits, how it is exercised and how it can be independently verified. 

For further context on AI governance, lifecycle controls and human-in-the-loop concepts discussed with audit leaders at CwX APAC 2025, readers can explore EY’s session recording on responsible AI and assurance frameworks.