
Digital transition and the impact of AI

27 September 2024

The digital transition has been, in recent decades, a catalytic force for economic, social and cultural change at a global level. It represents the shift from traditional methods of managing information and processes to digital solutions based on emerging technologies such as cloud computing, the Internet of Things (IoT) and, more recently, artificial intelligence (AI). AI is defined as the ability of a machine to exhibit human-like capabilities such as reasoning, learning, planning and creativity, enabling technical systems to perceive their environment, deal with what they perceive, solve problems and act to achieve a given goal. To mention only a few areas of applicability in everyday life: analyzing large amounts of health data to discover patterns that could lead to new findings in medicine, improving individual diagnosis, increasing the safety, speed and efficiency of railway traffic, intelligent production systems, food and agriculture, and automated systems for administration and public services.

In this context, AI has become a central element of the digital transition, bringing significant benefits but also substantial challenges in terms of accountability and regulation. It has great potential for technological progress (digitizing existing processes and services, radically transforming how they work, creating new opportunities for innovation and economic growth) and it enables new business models in many sectors of the digital economy. Equally, depending on the circumstances of its specific application and use, it can generate risks and harm interests and rights protected by European Union law or domestic law (data security, user privacy, algorithmic discrimination, job losses due to automation). Against this background, it is difficult to determine who bears responsibility for damage caused by artificial intelligence.

To meet these challenges, the European Union has proposed a new directive, the AI Liability Directive, which addresses legal and ethical issues related to the use of AI. It aims to create a clear liability framework for the use of AI and to ensure that people harmed by systems using artificial intelligence benefit from the same level of protection as people harmed by other technologies. The proposal builds on the world's first comprehensive set of rules governing AI, the Artificial Intelligence Act (AI Act), adopted by the European Parliament in March 2024.

The main objectives of the AI Liability Directive:

1. Clarification of liability: The directive aims to establish who is responsible when damage is caused by AI. Whether it is AI producers, software developers or users, it creates a legal framework for determining the liability of each party.

2. Facilitating access to justice: Victims may find it difficult to prove that an AI system caused harm, given the technological complexity of these systems. The directive introduces rules that ease the burden of proof, including access to information about how the AI system operates.

3. Holding AI developers accountable: The directive encourages companies developing AI technologies to take stricter precautions by introducing higher safety and transparency standards. This includes the obligation to provide clear evidence of how an algorithm was created and tested.

4. Preventing algorithmic discrimination: The AI Liability Directive also aims to address ethical issues such as discrimination produced by AI algorithms. AI systems can amplify existing biases in the data they are trained on, generating discriminatory decisions, especially in sensitive areas such as recruitment, credit or justice (a toy sketch of this mechanism follows below).
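
As a purely illustrative sketch (not taken from the directive; it uses synthetic data, scikit-learn, and hypothetical variable names such as group, skill and zip_area), the following Python example shows how a model trained on historically biased decisions can reproduce that bias even when the protected attribute itself is never used as a feature:

```python
# Minimal sketch with synthetic data: a model trained on historically biased
# decisions reproduces the bias through a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # protected attribute (0/1)
skill = rng.normal(0, 1, n)              # genuinely job-relevant feature
zip_area = group + rng.normal(0, 0.3, n) # proxy that leaks the group

# Historical decisions were biased: group 1 was hired less often at equal skill.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the protected attribute -- only skill and the proxy.
X = np.column_stack([skill, zip_area])
model = LogisticRegression().fit(X, hired)

# Predicted hiring rates still differ between groups, even though 'group'
# was never a feature: the historical bias is recovered via the proxy.
for g in (0, 1):
    mask = group == g
    print(f"group {g}: predicted hiring rate = {model.predict(X[mask]).mean():.2f}")
```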

If we apply the above objectives to taxation and the use of AI by tax authorities in Member States, we can ask:

• To what extent will ANAF or other tax authorities accept data or reports generated by artificial intelligence? And what if ANAF is the institution that generates them? To what extent can they constitute evidence in court?
• How can taxpayers ensure the transparency of the processes carried out by the algorithms they use? How can they explain these processes to tax inspectors? And vice versa, in the situation where ANAF uses AI?
• ANAF, among other authorities, already collects huge amounts of data through all the electronic reporting obligations that have become mandatory (SAF-T, e-Invoice, e-Transport, e-VAT…). How does ANAF analyze this data? How does it use it? (A purely illustrative sketch of one such analysis follows this list.)
• Is digitalization useful for increasing state budget revenues? In my personal opinion, definitely yes. But is digitalization alone enough to increase tax revenues optimally? We are all aware of the deficit problem that is catching up with us and will force major fiscal "reforms" in the coming period.
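
The following Python sketch is purely hypothetical and does not describe ANAF's actual systems. Assuming pandas and simplified, made-up invoice records, it illustrates one plausible use of bulk e-reporting data: cross-checking the VAT implied by reported invoices against the VAT actually declared, and flagging large gaps as audit leads.

```python
# Illustrative only -- NOT a description of ANAF's real analysis.
import pandas as pd

# Hypothetical, simplified records (in practice: SAF-T / e-Invoice extracts).
invoices = pd.DataFrame({
    "taxpayer_id": ["RO1", "RO1", "RO2", "RO3", "RO3"],
    "net_amount":  [1000.0, 2000.0, 5000.0, 800.0, 1200.0],
    "vat_rate":    [0.19, 0.19, 0.19, 0.09, 0.19],
})
declared_vat = pd.DataFrame({
    "taxpayer_id": ["RO1", "RO2", "RO3"],
    "vat_declared": [570.0, 500.0, 300.0],  # RO2 declares far less than expected
})

# VAT implied by the invoice-level data.
invoices["vat_expected"] = invoices["net_amount"] * invoices["vat_rate"]
expected = invoices.groupby("taxpayer_id", as_index=False)["vat_expected"].sum()

# Cross-check: gaps above 10% of the expected VAT become potential audit leads.
check = expected.merge(declared_vat, on="taxpayer_id")
check["gap"] = check["vat_expected"] - check["vat_declared"]
print(check[check["gap"].abs() > 0.1 * check["vat_expected"]])
```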

The AI Liability Directive is an essential step towards a more responsible use of AI in the digital age. As AI becomes more and more integrated into everyday life, it is essential to have a clear and effective legal framework that protects the rights of users (in tax matters, the rights of taxpayers) and encourages responsible innovation. In this context, the digital transition does not only mean the implementation of new technologies, but also the development of a robust set of norms and values to ensure that these technologies are used ethically and safely.

 

Article published in Bursa.ro

Alina Andrei

Partner, Tax & Transfer Pricing
