Friday Highlight  

'AI must be wielded cautiously in tax administration'


At the start of July, Turkey announced it was implementing artificial intelligence in its continuing efforts to tackle tax evasion, joining other countries such as the UK, US and Canada in seeking smarter ways to reduce the tax gap.

Within the EU this is not new: 18 member states make regular use of machine learning within their tax administrations, and some models have been in use since as early as 2004.

The EU has developed its own machine learning system to combat carousel fraud.


Machine learning is a great tool for analysing big data, finding commonalities between data sets, clustering information, and highlighting anomalous findings on a large scale.

The appeal is obvious: tax administrations can process data in scalable and efficient ways. But these tools are also changing the dynamic between authorities and their tax base.

AI concerns

The use of this technology at scale, however, raises several legitimate concerns.

One of these, where AI is deployed as an investigatory tool, is the risk of bias within the system.

A typical machine learning model arrives at the desired output after it has been trained on sample data and tweaked until it produces correct results.

The system's success depends on the quality of the data used, and on developers applying rigorous standards and procedures to ensure correct and consistent outcomes. Great care must be taken to ensure the training data is free of social (or other) bias.

This vigilance is not limited to the data; any human who manipulates or assesses the output while the model is being trained is also crucial to preventing biased results.
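To make that risk concrete, here is a minimal sketch in Python of the kind of audit a developer might run: it trains a simple fraud-flagging model on synthetic data, then compares how often honest taxpayers in each group are wrongly flagged. Everything in it (the data, the group attribute, the thresholds) is invented for illustration; it is not the system used by any tax authority.

```python
# A hypothetical bias audit for a fraud-flagging model. All data is
# synthetic and invented for illustration; no real tax system is shown.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                       # hypothetical sensitive attribute
income = rng.normal(30_000 - 5_000 * group, 8_000)  # group 1 earns less on average
claims = rng.normal(5, 2, n)
# True fraud depends only on claim volume, never on group membership.
fraud = (rng.random(n) < 0.03 + 0.02 * (claims > 7)).astype(int)

# The group attribute leaks into the features, a common source of bias.
X = np.column_stack([income, claims, group])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, fraud, group, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000, class_weight="balanced")
model.fit(X_tr, y_tr)
flags = model.predict(X_te)

# Audit: how often are honest taxpayers wrongly flagged, per group?
for g in (0, 1):
    honest = (y_te == 0) & (g_te == g)
    print(f"group {g}: honest taxpayers flagged = {flags[honest].mean():.1%}")
```

If the two rates diverge sharply, the model is penalising honest members of one group more than the other: precisely the failure mode described above.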

It might be assumed that the larger the training data set, the fairer and more representative the system will be, especially given the vast amounts of data tax authorities hold.

That assumption proved an unfortunate lesson for the tens of thousands of victims of the 'toeslagenaffaire', the Dutch childcare benefits scandal.

In 2013, the Dutch tax authority implemented a self-learning algorithm to help uncover benefit fraud at an early stage. Penalties were wrongly levied on tens of thousands of families, typically from lower-income households and ethnic minority backgrounds.

The impact was harrowing: families were forced into poverty, there were suicides, and more than a thousand children were taken into foster care.

This was not a fault of the system itself; blame surely lies with the government representatives who allowed it to go live without the relevant safeguards in place to protect honest taxpayers.

Transparency and explainability are pillars of the OECD AI principles, which state that AI actors should commit to transparency and responsible disclosure regarding AI systems, providing information that enables those adversely affected by a system's output to understand and challenge it. No such transparency was offered to those affected in the Netherlands.

The EU AI Act sets out a risk-based framework that regulates private companies operating high-risk AI systems (any system that profiles individuals by processing personal data on their behaviour, location and so on), and requires any such system to be designed with human oversight built in.