Who is responsible for AI Agents?

Autonomous systems promise to gain ground in 2025, but accountability for their actions faces challenges

Eduardo Felipe Matias

The project was ambitious: to reinvent the portable heater, transforming it into a smart product, capable of automatically adjusting the temperature and optimizing energy consumption. The consultant hired by the company was given carte blanche to lead the development of the project, incorporating innovations that promised to revolutionize the market. He implemented advanced features, such as Wi-Fi connection for remote control and a thermal optimization system based on machine learning, which adapted to user habits and adjusted the heater in real time by accessing online weather data.

In record time, the product was on the market. Sales exploded. And so did many heaters...

To reduce costs, the consultant had opted for cheap components, whose limitations only became evident once the device was already in thousands of homes and critical failures began to appear. The Wi-Fi connectivity, implemented without adequate security, left the heater vulnerable to malicious hackers, who could control it remotely. The device that was supposed to be smart learned little, causing energy waste and domestic accidents.

When the authorities began their investigation, they discovered something disturbing. The consultant was not a human, but an AI agent, which had been given the mission to maximize results, with unrestricted autonomy.

AI agents are autonomous systems that perceive the environment, make decisions and perform actions based on specific objectives. And as this case shows – thankfully a fictitious one – they can produce adverse impacts.

The question is: who is responsible for the acts of AI systems? To illustrate this dilemma, consider another example: an autonomous car, operated at that moment exclusively by an AI, runs over a pedestrian. Even if someone was sitting in the driver's seat, it would be necessary to discuss who should be held responsible. The possibilities include the human, the company performing the tests, the developers of the AI system, or the manufacturers of the sensors and equipment used.

This scenario provokes ethical reflections on the role of developers in preventing their creations from causing harm or human loss. Suppose a car can be configured by its owner to exceed the speed limit. If this configuration results in an accident, who should be held responsible, the owner who programmed such behavior or the manufacturer who made this functionality available? For human drivers, exceeding the speed limit is a common practice and is often seen as inevitable. However, with autonomous vehicles, where the maximum speed can be determined in advance, the issue changes. As Harvard professor Jonathan Zittrain notes, in this situation, while there may seem to be a shortage of liability, there is actually an excess. Manufacturers, programmers, and vehicle owners, among others, could be held liable.

In machine learning-based systems, liability is even more complex, due to the autonomous and potentially unpredictable nature of these systems, which are designed to make independent decisions and find creative solutions. If courts consider it unfair to hold developers liable for this inherent unpredictability, victims may find themselves unable to obtain any compensation for their losses.

Other challenges can be mentioned. One of them, known as the "discreteness problem," refers to the fact that AI projects may use components and technologies that, considered in isolation, seem harmless, but that can pose significant risks when integrated. Then there is the "diffuseness problem": AI systems can be developed online by small teams using widely available tools. These individuals may be dispersed globally, with no formal links between them and located in different jurisdictions, which makes it difficult to identify and hold accountable those responsible.

One proposal to prevent AI systems from getting out of control is to always keep a “human in the loop” in the decision-making chains involving the action of algorithms. The European Union’s General Data Protection Regulation (GDPR) incorporated this concept in its article 22, guaranteeing people the right not to be subject to exclusively automated decisions that produce significant legal effects. This principle is also found in article 8 of the Bill approved at the end of last year in the Brazilian Senate (PL 2.238/2023), which provides for human supervision in high-risk AI systems, in order to intervene if they function inadequately.

2025 promises to be the year of AI agents, with significant advances in their integration into a wide range of activities, both personal and professional. Companies like Google and OpenAI are developing autonomous systems capable of performing complex tasks like travel planning and online shopping, ushering in a new era of productivity and cementing AI as an indispensable tool in everyday life. As these agents become increasingly present in our lives, it is essential to establish rules and mechanisms that ensure someone responds when something they do goes wrong.

Eduardo Felipe Matias is the author of the books "Humankind and its borders" and "Humankind against the ropes", winners of the Prêmio Jabuti, and coordinator of the book "Startups Legal Framework". He holds a PhD in International Law from the University of São Paulo, was a visiting scholar at Columbia University in New York and at Berkeley and Stanford in California, is a guest professor at Fundação Dom Cabral, and is a partner in the business law area of Elias, Matias Advogados.


Article originally published in Portuguese at Época Negócios magazine: Quem responde pelos agentes de IA? | Na Fronteir@ | Época NEGÓCIOS

#column #NaFronteira #EpocaNegocios #AI #artificialintelligence #AIagents #accountability #responsibility #technology #innovation #AIregulation

