The integration of AI technologies into tax systems has been heralded as a way to improve efficiency, accuracy and compliance. It has also been argued that AI has the potential to improve tax positions, resulting in significantly reduced tax liabilities and correspondingly increased net earnings, especially in scenarios involving complex decision-making. However, the sudden and rapid rise in the use of AI has caused anxiety over unintended consequences and unforeseen outcomes, and this paradigm shift brings with it a host of ethical and practical considerations that must be weighed carefully.

KPMG International hosted a roundtable discussion to explore the use of AI in tax, the concerns it raises and how to create an ethical framework for AI and tax. The discussion looked at how the integration of AI into taxation systems could align with principles of justice, equity and societal well-being, and anticipated issues of transparency, accountability, human supervision, fairness, algorithmic bias, trust and morale, data security and unintended consequences. The broad-ranging nature of the conversation highlighted the depth of complexity inherent in these issues and the work to be done to navigate a path forward.

The conversation was held under the Chatham House Rule, under which nothing said can be attributed to attendees. It was attended by twelve specialist participants and was shared for review with a small number of additional stakeholders who were unable to attend the event (see below for a list of contributors). The write-up below summarizes the personal views of participants and reviewers and does not necessarily reflect the view of any particular organization, including KPMG International.

Executive summary

  • AI is not a solution for all issues around tax. It is a complex technology, still in development, that carries both immense opportunities and real risks of harm. Key areas include data quality, governance, competition and how deployment is managed.
  • An ethical framework for data collection and management may be a useful starting point for this debate. The standardization of data has a significant role to play in feeding innovation, and already well-standardized tax systems are well placed for this. The concern that wider use of synthetic rather than collected data may become unavoidable also needs to be addressed.
  • Tax professionals question any assumption that AI in tax can compensate for deficiencies in tax legislation or policy.
  • An effective approach might be to regulate the deployment and commercialization of AI but not its development. The right governance structures could mitigate some potential problems. Alternatively, an ethical framework is required for both development and deployment of AI. Different principles may apply, or have different weight, at the various stages of development.
  • While a principles-based approach to regulating AI in tax holds promise, questions remain about how the key principles will be decided, how tax professionals can develop consistency in their definitions and how they can be implemented effectively.
  • An ethical framework should consider the implications of increased transparency (and how to mitigate any decrease in transparency) brought about by AI, the importance of narrative explanation to accompany the sharing of data and the risks presented by bad actors.
  • Justifying the use of AI in tax will become critical in terms of environmental sustainability and taxpayer rights.
  • Debate concerning the extent to which AI could be used as a decision-maker, rather than just an adviser, is likely to become crucial quickly and should be considered carefully in the design of an ethical framework for tax and AI.
  • AI is developing at pace, and tax is likely to be a crucial early sector in which its practical issues and issues of principle are fully explored.

Starting with the data

Ethical data collection and management, standardization, data hoarding versus sharing and real versus synthetic data.

Participants suggested that it is sensible not to lose sight of the role of the data in the debate around tax and AI. Questions were raised about the quality of the data being used to feed algorithms and methods of collection. Key considerations, it was suggested, should include the currency of the data (is there more up-to-date information available?), the authentication and reliability of the data sources, and the sufficiency and quality of the data (i.e. is the available data adequate?). An ethical framework for data collection and data management may be a sensible starting point in this debate and is worthy of further exploration.

The standardization of data formatting also has a significant role to play in the interchange of information and feeding innovation, it was suggested. Tax and accountancy are well-standardized sectors in terms of data collection guidelines and should be well-placed to provide and share data in a useful way to feed AI. However, data hoarding, in order to gain a competitive advantage, could pose problems.

Key aspects of an ethical framework for data collection and management, contributors suggested, should consider the following:

  • Anonymization, privacy and data protection - Whenever possible, personal data should be anonymized. All collection and processing activities must adhere to applicable data protection laws to ensure privacy and security.
  • Consent - Informed consent should be obtained prior to the data collection and individuals should understand why their data is being collected, how it will be used and who will have access.
  • Biases in data - The data collection process and algorithms used should attempt to prevent, identify and resolve biases.
  • Data minimization - The collection of personal data should be limited strictly to what is necessary for the stated purpose, avoiding any unnecessary data accumulation.
  • Security of the data - Strict controls must be in place to regulate access to data and to prevent unauthorized alterations and use.

The idea of an AI kitemark was raised, to indicate that AI had been used in the process and possibly to convey the quality of the data produced. Indeed, one issue raised was whether the very use of AI should be an ethical consideration and whether there should be pre-deployment rules on even using it.

The notion of a core data set or data silos being made available, via open access, to protect smaller firms against unfair competition was suggested as an area to explore further. One potential approach would be to have a single pool of data accessible to all eligible users and to focus competition on developing algorithms to use that data. Is there an independently controlled model available that would help to level the playing field and avoid a concentration of power? One participant noted, however, that many professional firms have confidentiality rules preventing them from sharing client data; they may be reluctant to share even redacted data in case AI could be used to trace sources.

Other participants highlighted the risk that if real-world data runs out, models may collapse or synthetic data may be used in their place. It is estimated that soon 60% of the data used to develop AI and analytics projects will be synthetically generated.1 Concerns were raised about the distortion synthetic data can produce in terms of bias and general inaccuracy. This issue will need to be addressed.
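The feedback loop behind these concerns (models trained on the synthetic output of earlier models) can be illustrated with a toy simulation. This is a deliberately simplified sketch, not any participant's model: the clipping of tail values stands in for the tendency of generative models to under-represent rare cases, and the numbers are arbitrary.

```python
import random
import statistics

random.seed(0)

# "Real" data: 2,000 draws from a wide distribution (stdev = 10).
data = [random.gauss(0, 10) for _ in range(2000)]
history = [statistics.stdev(data)]

# Each generation fits a simple model (mean, stdev) to the previous
# generation's output, then regenerates the training set from purely
# synthetic samples. The generator has a mild flaw: it clips rare tail
# values, as generative models often under-represent outliers.
for _ in range(5):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    data = [
        min(max(random.gauss(mu, sigma), mu - 1.2 * sigma), mu + 1.2 * sigma)
        for _ in range(2000)
    ]
    history.append(statistics.stdev(data))

# The spread of the data shrinks generation after generation: the
# synthetic pipeline quietly narrows what the model ever sees.
print([round(s, 2) for s in history])
```

Even this crude loop shows the measured spread of the data collapsing within a few generations, which is the kind of compounding distortion participants worried about when synthetic data replaces collected data at scale.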

The need for better legislation, regulation and governance

Is AI being used to compensate for the inefficiencies in tax law? What regulation is needed around the commercialization of AI in tax? Can the right governance structures mitigate some of the potential problems?

It was raised that there is a potential fallacy at play within tax policy: that the use of AI in tax compliance and enforcement can compensate for the deficiencies of the tax law.2 Although AI can bring many advantages and compliance-enhancing measures, it will be most effective when adopted in conjunction with a legal system that has been designed to minimize tax non-compliance. Tax law reform remains a critical element of increasing efficiency and fairness in tax systems.

Participants suggested that commercializing the use of AI to give tax advice will require additional regulation. This is an area to explore further, with questions raised about its efficacy. Are advisers getting the right answers from AI systems? Can they control how their data is being put to use? A concern was expressed that proprietary advice based on specific facts could be used by AI to generate new advice for similar but differing fact patterns. This could create issues over the integrity of advice and raise copyright or intellectual property infringement questions.

Some participants suggested that many of the issues being raised were, in fact, governance problems. In the current absence of legislation and regulation, it was suggested that some issues could be mitigated by putting an appropriate governance framework in place.

One example referenced was a study that analyzed various diagnostic tests using deep learning to detect age-related macular degeneration.3 The governance model created for this is a data institution (or foundation model) whereby trustees (a mix of patients, community representatives and medical professionals) decide to whom the data is made available and only allow it to be accessed for the purpose of medical research and advancement. A similar governance structure could be considered for the sharing of data used to develop algorithms for tax compliance.

As with the tax debate now, there is an issue of whether AI is best regulated at the national or global level.

Do tax professionals need a principles-based approach to an ethical framework for AI in tax administration?

How should the key principles be decided and how do we develop consistency in their definitions? How can they be implemented effectively?

Participants suggested that a principles-based approach to an ethical framework for tax and AI would be a sensible starting point. How should these principles be decided upon and who gets to choose them?

One participant commented that many frameworks for the trustworthy use of AI have already been developed across disciplines. Tax professionals should consider whether these frameworks have common aspects, and what differentiates them from one another. Some frameworks begin with overarching principles, suggest questions that practitioners must address and provide statements about risks that must be mitigated from end-user perspectives. The overlaps between governance frameworks for trustworthy AI will need to be condensed if a practitioner has to comply with frameworks at national, organizational and disciplinary levels.

A current literature review of these frameworks points towards agreement on principles of responsible and trustworthy AI. These include, but are not limited to, explainability, transparency, accountability, transferability and efficacy.

These common factors raise the question of whether it is possible to promote a trustworthy AI framework at a global level, as these challenges are not unique to users in any one country or region. It was suggested that the non-partisan nature of these principles is an implementation advantage, but the issues will center on finding common definitions for the principles and on how they are implemented in practice (across different cultures and jurisdictions) by the various actors.

These principles then need to provide practical guidance for practitioners and government employees to help ensure that risks are mitigated, with an assurance process that can be documented. Tax professionals need international projects looking at enhancing the trustworthy use of AI in tax administration, and at an international level it is important to bring in the diverse views of those working in administration to develop guidelines or define risks. Differing principles may apply, or carry greater or lesser weight, at each stage of the AI lifecycle. For example, the risk of bias affecting certain tools may be greatest at the early development or data preparation stage, where principles addressing bias should carry the most weight.

The ALTAI framework4 is an example of principle-based checks that can be used to test whether the relevant risks have been mitigated for a given AI tool. Further research is needed to establish how well this can work and how employee users can apply such frameworks in practice.

Considering the role of actors

Increased transparency, the importance of narrative explanation and the risk of bad actors

Participants suggested that when considering the structure of an ethical framework for tax and AI, it is vital to consider the role and motivations of individual actors.

It was highlighted that many organizations look to build trust through transparency and try to find the best routes to make data available in a responsible way. On the one hand, it was suggested that AI can be used to encourage more transparency and promote more open discussions. On the other hand, there is a risk that AI could assist actors who have an interest in misrepresenting or misinterpreting certain information and using the data for ulterior purposes. While narratives around the information shared can go some way to mitigate this, there is a risk that more information increases the opportunity for actors to craft messaging that suits their agenda and does not represent the facts fairly.

The importance of ‘when’ in regulation and ethical frameworks and the use of AI in tax

When should regulation and/or an ethical framework for tax and AI be applied? What justifies the use of AI in tax and should AI become a decision maker or an adviser?

When it comes to governance and regulation, the issue of ‘when’ was raised several times throughout the conversation as a point to consider carefully in the construction of an ethical framework for tax and AI.

Firstly, there was the matter of when regulation should be applied: at the point of deployment or during development. One participant suggested it is more practicable to apply regulation at the point when AI is implemented to help ensure it complies with the core principles and to minimize negative consequences by correcting errors. It was also suggested that attempting to apply regulation during development could stifle innovation and prove challenging to implement; tax professionals may not be able to practically regulate the development of algorithms, but regulation could be used to prevent their deployment.

However, others suggested that by the development stage, certain design decisions and conceptual frameworks are already established. Addressing compliance issues or ethical shortcomings discovered during development may require significant alterations to the system's architecture or algorithms, which can be costly and time-consuming. AI development with an eye on the agreed ethical standards would allow for more seamless deployment so there is much to debate in this space.

Secondly, there was the matter of when AI should be used in tax. An ethical framework, it was suggested, should consider in the first instance whether AI is necessary under the particular circumstances and expect justification for its use.

Furthermore, justifying the impact (or potential impact) on taxpayers' rights will be critical. Algorithmic biases and unintended consequences are of particular concern. A recent example is the ‘Dutch childcare benefit scandal’, where algorithms used ‘foreign-sounding names’ and ‘dual nationality’ as indicators of potential fraud.5

In addition, the ‘Robodebt scandal’ involved an automated debt recovery system implemented by the Australian government between 2016 and 2020, which used data-matching technology to identify discrepancies between income reported to the Australian Taxation Office (ATO) and income reported to Centrelink, the government agency responsible for welfare payments. Critics argued that the Robodebt system unfairly targeted vulnerable individuals, including low-income earners, pensioners and people with disabilities, causing significant financial and emotional distress.6

Participants raised the issue that one contributing factor in these cases was a lack of human oversight. Keeping a human involved in the process, it was suggested, may go some way to mitigating problems such as those described; the human perspective is still key to understanding the complexity involved and should not be underestimated.

The debate as described led to the final question of when, if at all, AI should become the decision-maker rather than an advisory tool. This, it was argued, should be dictated by the importance of the decision in question. This is another area for deeper debate.

Participants suggested that the cost pressures faced by public bodies and private organizations alike mean this area of debate may become critical quickly, so it is important that it is considered in this and future discussions concerning the development of an ethical framework for tax and AI.

Furthermore, one participant pointed out that the use of AI requires a huge amount of energy and water to run and cool computer equipment, and there could be many cases where AI is not needed and alternative solutions exist. There is a debate to be had about balancing the assumption that using AI will automatically be beneficial against its environmental impact.

Finally, for groups of companies with complex tax positions, the potential to identify improvements is significant, and the resulting reductions in tax liabilities for large corporations could come at the expense of public finances. This raises ethical questions about alignment with the concept of companies paying their 'fair share' of taxes.

The discussion was attended by:

  1. Adam Afriyie MP, Chair, APPG on FinTech
  2. Louise Burke, CEO, Open Data Institute
  3. Kurt Burrows, Group Head of Tax, Anglo American
  4. Alan Craven, Director, Phoenix TP
  5. Professor Nigar Hashimzade, Professor - Economics, Brunel University London
  6. Becky Holloway, Programme Director, Jericho Chambers
  7. Wendy Jephson, CEO & Co-Founder, Let's Think
  8. Alex Kuczynski, Executive Director, Corporate Services & General Counsel, Financial Reporting Council
  9. Neal Lawson, Partner, Jericho Chambers
  10. Benita Mathew, Lecturer in AI and Fintech, University of Surrey
  11. Caroline Miskin, Senior Technical Manager, Digital Taxation, ICAEW Tax Faculty
  12. Chris Morgan, Head of the KPMG Global Responsible Tax Project, KPMG International
  13. Ed Saperia, Dean, Newspeak House

Some or all of the services described herein may not be permissible for KPMG audit clients and their affiliates or related entities. The information contained herein is of a general nature and is not intended to address the circumstances of any particular individual or entity. Although we endeavor to provide accurate and timely information, there can be no guarantee that such information is accurate as of the date it is received or that it will continue to be accurate in the future. No one should act on such information without appropriate professional advice after a thorough examination of the particular situation. © 2024 Copyright owned by one or more of the KPMG International entities. KPMG International entities provide no services to clients. All rights reserved. The views and opinions of external contributors expressed herein are those of the interviewees and do not necessarily represent the views and opinions of KPMG International Limited or any KPMG member firm. The KPMG name and logo are trademarks used under license by the independent member firms of the KPMG global organization.