Turning Audit AI Hype into Practical Governance
As artificial intelligence continues to shape the future of audit and assurance, auditors are facing a new set of challenges. From understanding what constitutes “AI” to ensuring compliance with regulatory frameworks, governance has emerged as a critical priority for firms adopting digital tools. In a recent April webinar hosted by Caseware and Chartered Accountants ANZ, Dr Kobi Leins, an expert in AI ethics and governance, laid out practical strategies to help the close to 900 accountants and auditors in attendance navigate this increasingly complex terrain.
One of the most compelling takeaways from the session was Dr Leins’ insistence on abandoning the idea of AI as a mystical or inherently transformative force. Instead, she encouraged auditors to view AI as a tool—no different from a calculator or spreadsheet—that demands scrutiny, structure, and oversight. “Would you call a toaster responsible?” she asked. “It’s a technology. What matters is whether it’s used in compliance with existing regulation and best practice international and Australian standards.”
Asking the right questions
Dr Leins emphasised that the foundation of good AI governance lies in asking the right questions. Whether evaluating a vendor product or developing an internal solution, auditors should interrogate the data sources, algorithmic design, and decision-making processes involved. Are the training datasets inclusive and free of bias? Is there clear documentation about how decisions are made? Can outcomes be traced and explained?
These questions are particularly salient in the audit profession, where even small miscalculations can have outsized impacts on stakeholders. Leins pointed to the infamous Zillow incident—where poor AI modelling led to hundreds of millions in losses—as a cautionary tale.
Five pillars of AI governance
To help firms build effective governance frameworks, Dr Leins proposed a structure based on five interlocking elements:
- Policies – AI policies should be integrated into broader company strategies, including codes of conduct and executive KPIs.
- Processes – Auditors must map where and how AI is used, particularly in high-risk areas like client data handling and automated analysis.
- People – Staff training is critical. Leins cautioned against assuming that users inherently understand AI risks, noting that many still feed confidential data into public platforms.
- Planning – Firms should avoid implementing AI just to “keep up” with competitors. Instead, AI should be deployed in line with specific business problems.
- Performance metrics – Success should be measured not just by usage, but by outcomes. Is the technology saving time? Enhancing quality? Empowering staff?
The auditor’s role
Auditors are uniquely positioned to act as stewards of responsible AI use. Their familiarity with compliance, documentation, controls and regular audits equips them to identify both risk and opportunity in emerging technologies. Yet according to polls taken during the webinar, over 25% of attendees had not yet begun using AI in any formal capacity.
This hesitation may be linked to the “black box” nature of some AI tools, which can make it difficult to verify outputs. Leins recommended engaging critically with outputs, validating results regularly, and maintaining human oversight at all times.
Caseware AiDA and industry certification
While not the focus of the webinar, Caseware’s own AI tool, AiDA, was mentioned in the discussion. AiDA recently received certification from Holistic AI, a third-party assessor of responsible AI systems. The certification reflects alignment with ISO/IEC 42001, the international AI management standard.
Rather than endorsing any single tool, Leins stressed the importance of firms selecting solutions that align with their specific risk appetite and business goals. The broader message was clear: the success of AI in audit depends less on the tool itself and more on how it is governed and applied.
Leveraging AI as an assistant
When used well, AI has the potential to function as an invaluable assistant—sifting through documents, highlighting anomalies, or generating first-draft summaries. However, auditors must remain vigilant about where it is appropriate to delegate tasks and where human judgement remains irreplaceable.
This approach aligns with a key piece of advice from Leins: always begin with your business problem, not the technology. “Start with the issue you’re trying to solve,” she urged. “Then decide whether AI is the right tool.”
Looking ahead
With regulatory scrutiny increasing and client expectations evolving, now is the time for audit firms to solidify their AI governance practices. Accountants and auditors looking to better understand these principles can download Caseware’s AI Guide for Accountants here.
If you missed the April session, we’re gearing up for the next event in Caseware’s Speaker Series, scheduled for July.
The series offers discussions with professionals across the audit and accounting field, focusing on current trends and practical insights.
Subscribe now to stay in the loop and be the first to know when registration opens.
For more information about strategic AI management, governance or compliance—whether policies, process or people uplift—reach out to Kobi@infosphereeducation.com or visit https://infosphereeducation.com/.