Context

KPMG International hosted a roundtable discussion in June 2024 to explore the creation of an ethical framework for AI in tax. This event sought to take a global perspective and build on a previous scoping roundtable hosted in London earlier in the year.

How should we go about creating an ethical framework for revenue authorities, business and civil society, and what does AI mean for tax responsibility, transparency and accountability? Which principles should be adopted to reduce risks and build trust in tax systems? And how can we co-create systems that work for global parties with potentially conflicting interests?

The conversation was held under the Chatham House Rule (which means participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed) and was attended by eight expert participants (see below for a list of attendees). The write-up below summarizes the personal views of participants and does not necessarily reflect the view of any particular organization, including KPMG.

Executive Summary

  • There are many reasons to be optimistic about the potential of AI to bring positive changes to tax administration, including significantly enhancing the ability of tax administrations to detect fraud and streamline processes. For the Global South, AI offers the potential to accelerate the development of tax capacity.
  • However, careful management, ethical considerations, and robust oversight will be crucial to realizing these potential benefits.
  • Participants discussed the merits of risk-based vs. principle-based approaches to using AI in tax.
  • Concerns about data scarcity, especially in lower- and middle-income countries, and bias in AI systems were discussed. Independent oversight and robust data security frameworks were suggested to ensure fairness and accountability.
  • Trustworthiness and human oversight are crucial for AI systems in tax administration. Digital sandboxes for testing AI and measurable trustworthiness metrics were proposed to protect taxpayer rights and maintain public trust.
  • Dynamic regulatory frameworks were suggested to avoid leaving gaps in governance and accountability.
  • Emphasis was placed on revamping tax qualifications to include AI and digital skills, making them relevant and appealing to Gen-Z and thereby equipping the future workforce.

Risk- vs. Principle-Based Approaches to an Ethical Framework for AI in Tax

Participants discussed two primary approaches to integrating AI in tax administration: a risk-based and a principle-based approach. The risk-based approach focuses on categorizing and managing the risks associated with AI applications to ensure that AI systems used for tax purposes are safe and reliable, minimizing the chances of biased or unfair outcomes. It was highlighted that, under this approach, coercive enforcement activities (measures taken by tax authorities to compel compliance with tax laws and regulations) would be deemed high risk. A paper published by David Hadwick in 2022 argues that this leads to an issue of compliance with legal principles and a notable gap in safeguarding taxpayers' fundamental rights.1

On the other hand, a principle-based approach emphasizes the development of a broad ethical framework guided by core principles such as transparency, fairness, accountability, and trustworthiness. The difficulty of defining the principles and gaining consensus on them was raised as a potential problem in implementing this approach. AI assurance checks can be mapped to each trustworthy AI principle; however, it is not obvious what standards are required to implement a principle in full, given varied interpretations of the principles. Who should be responsible for ensuring and reporting on the implementation of the principles? Further research is needed to investigate how well this approach can work and how users could apply such frameworks in practice.

Data Security, Scarcity, and Bias

Participants highlighted significant concerns around data sharing, data scarcity, and bias when using AI in tax administration. They noted the "black box" nature of AI systems, which often leads to a lack of transparency and accountability.

The potential for AI to assist in closing the tax gap by efficiently identifying tax fraud and errors was raised, but participants stressed the importance of a well-defined framework for data security and ethical use of AI across different regions. A recent study2 exploring issues of fairness in the algorithms used by the United States Internal Revenue Service (IRS) to select tax audits was highlighted. It considers how tax burdens should be fairly distributed across taxpayers of different income levels. Using detailed, anonymized taxpayer data from 2010-2014, it found that more advanced machine learning techniques shift audit burdens from higher-income to middle-income taxpayers, and that attempts to address high audit rates among low-income taxpayers through fairness techniques often sacrifice performance accuracy.
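
To make the idea of measuring how an audit-selection policy distributes its burden more concrete, the sketch below uses purely synthetic data, an invented risk score and an invented fairness cap (it is not the cited study's data or methodology) to compare the share of a fixed audit budget falling on each income band under two hypothetical selection rules.

```python
# Illustrative only: synthetic data and invented selection rules, not the data
# or methodology of the study cited above.
import random
from collections import Counter

random.seed(0)

BANDS = ["low", "middle", "high"]

# Synthetic taxpayers: an income band and an arbitrary model risk score.
taxpayers = [
    {"band": random.choices(BANDS, weights=[0.5, 0.4, 0.1])[0],
     "risk_score": random.random()}
    for _ in range(10_000)
]

def audit_share_by_band(selected):
    """Share of the audit budget falling on each income band."""
    counts = Counter(t["band"] for t in selected)
    total = sum(counts.values())
    return {band: round(counts[band] / total, 3) for band in BANDS}

# Policy A: audit the top-scoring 5% of taxpayers.
budget = int(0.05 * len(taxpayers))
ranked = sorted(taxpayers, key=lambda t: t["risk_score"], reverse=True)
policy_a = ranked[:budget]

# Policy B: same budget, but cap any one band at 40% of audits -- a crude
# stand-in for a constraint on how the audit burden is distributed.
policy_b, band_counts = [], Counter()
for t in ranked:
    if len(policy_b) == budget:
        break
    if band_counts[t["band"]] < 0.4 * budget:
        policy_b.append(t)
        band_counts[t["band"]] += 1

print("Policy A audit shares:", audit_share_by_band(policy_a))
print("Policy B audit shares:", audit_share_by_band(policy_b))
```

Comparing the two printed distributions shows, in miniature, how a change in selection rule shifts the burden between bands and why such shifts need to be measured rather than assumed.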

Data scarcity, particularly in lower- and middle-income countries, exacerbates these issues, as these regions may lack the robust datasets needed to train reliable AI models.

Concerns were also raised about the ability of AI to deal with complex business activities and value creation, especially when interpreting data across borders. Participants suggested more research and dialogue will be needed to address these issues and ensure that AI systems are used responsibly and effectively.

Trustworthiness and Human Oversight

Trustworthiness and human oversight emerged as recurring themes in the discussion. Participants stressed that for AI systems in tax administration to be trusted, they must be transparent, fair, and subject to rigorous human oversight3. They cited examples such as Australia's ‘Robodebt’ scandal, which arose from an automated debt recovery system implemented by the Australian government between 2016 and 2020. The system used data-matching technology to identify discrepancies between income reported to the Australian Taxation Office (ATO) and income reported to Centrelink, the government agency responsible for welfare payments. Critics argued that the Robodebt system unfairly targeted vulnerable individuals, including low-income earners, pensioners, and people with disabilities, causing significant financial and emotional distress due to a lack of human oversight. It was pointed out that part of the problem was human error in how the systems were set up in the first place.
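
As a purely illustrative aside: a widely reported flaw in the Robodebt scheme was that annual income figures were averaged evenly across fortnights and compared against fortnightly welfare reporting. The minimal sketch below (with invented figures, not the actual system's logic or thresholds) shows how that kind of averaging can flag apparent discrepancies for someone whose income was concentrated in only part of the year.

```python
# Simplified illustration of income averaging; all figures are invented and
# this is not the actual scheme's logic or thresholds.
FORTNIGHTS = 26

def averaged_fortnightly(annual_income: float) -> float:
    """Spread an annual income figure evenly across every fortnight."""
    return annual_income / FORTNIGHTS

# Someone who earned well for half the year and relied on welfare payments
# for the other half, correctly reporting zero income in those fortnights.
actual_reported = [2000.0] * 13 + [0.0] * 13
annual_income = sum(actual_reported)              # 26,000 across the year
smoothed = averaged_fortnightly(annual_income)    # 1,000 in every fortnight

# Fortnights where the averaged figure exceeds what was actually reported look
# like under-reporting, even though every fortnight was reported correctly.
flagged = [i for i, actual in enumerate(actual_reported, start=1)
           if smoothed > actual]
print(f"Averaged fortnightly income: {smoothed:.0f}")
print(f"Fortnights flagged as apparent discrepancies: {len(flagged)} of {FORTNIGHTS}")
```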

The idea of ‘digital sandboxes’ was proposed, where AI systems could be tested in controlled environments before full implementation. The importance of measurable trustworthiness metrics was highlighted, suggesting that systems should be designed to allow for the evaluation of fairness and effectiveness.
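
One way to read the suggestion of measurable trustworthiness metrics is as a pre-deployment gate inside such a sandbox. The sketch below is a minimal illustration of that idea; the chosen metric (the gap in selection rates across groups) and the threshold are assumptions for the example, not an agreed standard.

```python
# Minimal sketch of a sandbox-style gate: evaluate a candidate decision rule on
# held-out cases and block rollout unless a measurable threshold is met.
# The metric (selection-rate gap across groups) and threshold are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    group: str       # e.g. an income band
    features: dict   # whatever inputs the decision rule consumes

def sandbox_gate(rule: Callable[[dict], bool], cases: list[Case],
                 max_rate_gap: float = 0.10) -> bool:
    """Return True only if selection rates across groups stay within the
    agreed gap; otherwise the system stays in the sandbox for rework."""
    selections: dict[str, list[bool]] = {}
    for case in cases:
        selections.setdefault(case.group, []).append(rule(case.features))
    rate_by_group = {g: sum(v) / len(v) for g, v in selections.items()}
    gap = max(rate_by_group.values()) - min(rate_by_group.values())
    print(f"Selection rate by group: {rate_by_group}, gap: {gap:.2f}")
    return gap <= max_rate_gap

# Toy rule: flag any case claiming deductions over 5,000.
def toy_rule(features: dict) -> bool:
    return features.get("deductions", 0) > 5_000

cases = [
    Case("low", {"deductions": 6_000}), Case("low", {"deductions": 1_000}),
    Case("high", {"deductions": 7_000}), Case("high", {"deductions": 8_000}),
]
print("Cleared for wider rollout:", sandbox_gate(toy_rule, cases))
```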

Ensuring that human oversight is not just a formality but a critical part of the process was deemed essential to mitigating the risks associated with AI decision-making.

Skills and Implications for Gen-Z in the Workforce

Participants suggested that the traditional pathways to becoming qualified as a tax professional (including tax education programs and qualifications) will need to be re-evaluated to stay relevant and appealing to the next generation.

Qualifications should be fit for purpose in an AI-driven future, focusing on AI and digital skills and on flexible learning modules that align with Gen-Z's preferences and the evolving nature of the profession. This also includes understanding the ethical implications of AI and developing the skills to work alongside AI technologies effectively.

The Role of Regulation

Participants underscored the current inadequacies in the regulation of AI in tax administration. They noted that existing regulations often fail to keep pace with technological advancements, leaving gaps in governance and accountability. The need for robust regulatory frameworks that can adapt to the fast-evolving AI landscape was emphasized.

Participants referred to Lord Holmes' recent AI Bill4 in the UK as a step towards demonstrating leadership in AI regulation. The Bill advocates for an agile and adaptable AI authority to play a coordinating role, ensuring existing regulators meet their obligations and identifying any gaps in the AI regulatory landscape. The proposed regulation in the Bill centres on the principles of trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability.

Participants argued for a regulatory approach that not only sets standards but also ensures that there are mechanisms in place for when things go wrong, protecting taxpayers and maintaining public trust. This includes the need for regulations that address the ethical use of AI, ensure transparency in AI decision-making processes, and provide for independent oversight to monitor and evaluate the performance of AI systems in tax administration.

The Brighter Side

Despite the challenges, participants were optimistic about the potential of AI to bring positive changes to tax administration. The potential for AI to improve efficiency, accuracy, and fairness in tax processes was recognized. Participants also discussed the importance of responsible AI use and the need for ongoing dialogue and collaboration among governments, tech developers, and other stakeholders to maximize the benefits of AI while mitigating its risks.

They emphasized that AI, when used correctly, can significantly enhance the ability of tax administrations to detect fraud, streamline processes, and ensure that tax policies are implemented fairly and effectively. There was recognition that careful management, ethical considerations, and robust oversight will be crucial to realizing these benefits.

KPMG International will be looking deeper into the future of tax and AI and how the world is building structures, cultures and practices that can serve business and social interests in this new digital world.

Contributors to the discussion included:

  1. Caroline Khene, Digital and Technology Cluster Lead, Institute of Development Studies
  2. Carsten Maple, Fellow at the Alan Turing Institute and Professor of Cyber Systems Engineering, The University of Warwick's Cyber Security Centre (CSC)
  3. Neal Lawson, Partner, Jericho
  4. Benita Mathew, Lecturer in AI and Fintech, University of Surrey
  5. Chris Morgan, Head of Global Responsible Tax Programme at KPMG International
  6. Grant Wardell-Johnson, Global Tax Policy Leader and Chair of the Global Tax Policy Leadership Group, KPMG International
  7. Helen Whiteman, Chief Executive, CIOT