KPMG Australia partner fined for using AI to cheat in AI ethics test

A senior partner at KPMG Australia was fined A$10,000 (approximately US$7,000) for using generative AI tools to cheat on an internal training assessment on the responsible and ethical application of the technology.

The partner, a registered company auditor, uploaded a training manual into an external AI platform to generate answers for a mandatory assessment in July 2025, according to a Financial Times report.

The case is one of 28 instances of AI-related cheating identified at KPMG Australia since July, according to a report by the Aussie Corporate. While most other cases involved staff at or below the managerial level, the partner's involvement has drawn particular attention.

As registered company auditors, partners are subject to tougher requirements because of their critical role in protecting clients' financial data.

According to an Australian Financial Review report, partners are required to download a reference manual as part of the training course on the ethical use of AI. The partner broke the company's rules by submitting that reference material to an AI tool to answer a question.

The breach was flagged in August 2025 by KPMG’s internal AI monitoring tools. KPMG has upgraded its processes and policing to detect AI cheating following widespread misconduct on internal tests between 2016 and 2020.

After an internal investigation, KPMG imposed a fine of more than A$10,000, to be deducted from the partner's future income, and required the partner to retake the exam. The individual self-reported the incident to Chartered Accountants Australia and New Zealand, which has launched its own investigation.

Speaking to the Australian Financial Review, KPMG Australia chief executive Andrew Yates said the firm has struggled to keep pace with the rapid embrace of AI, particularly its use in internal training and testing.

“It’s a very hard thing to get on top of, given how quickly society has embraced it. As soon as we introduced monitoring for AI in internal testing in 2024, we found instances of people using AI outside our policy. We followed with a significant firm-wide education campaign and have continued to introduce new technologies to block access to AI during testing,” Yates added.

KPMG aims to establish a new transparency standard by pledging to report AI-related cheating in its annual results and verifying that staff self-report misconduct to professional bodies.
