Should the new stage of Artificial Intelligence be subject to ex-ante regulation?

In America, the strong neoliberal regulatory tradition has favored ex-post regulatory schemes: we regulate long after changes occur, which allows the market to adjust itself during implementation periods.

But although experience has shown that regulating afterwards allows for greater innovation, it is also true that it produces a notable absence of the State on crucial issues, as has happened in recent years with many developments derived from the Internet, such as Uber, Bitcoin, Airbnb or Netflix. For this reason, and in view of the risks that new developments in Artificial Intelligence entail, it is necessary to consider whether regulation in this field should be ex-ante, established before the changes fully unfold, rather than ex-post.

Why is it important to anticipate the regulation of the new AI?

By: Gabriel E. Levy B. – www.galevy.com

The term ex-ante is a neo-Latin expression meaning “before the event”. It is most commonly used in the business and regulatory world, where the results of a particular action, or series of actions, are anticipated before they occur. The opposite of ex-ante is ex-post, that is, after the event occurs [1].

Artificial Intelligence (AI) refers to processing, based on computer algorithms and carried out by a computational machine, that electronically imitates human cognitive functions such as perceiving, reasoning, learning and solving problems [2].
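To make that definition concrete, the following minimal Python sketch, a purely hypothetical example not drawn from the article's sources, shows what “learning” means in this algorithmic sense: the program adjusts its own internal parameters from examples instead of being explicitly programmed with the answer.

```python
# Illustrative sketch only (hypothetical example): a single "perceptron" that
# learns the logical AND function from examples, imitating in a very small way
# the "learning" cognitive function mentioned above.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]   # internal parameters the program adjusts by itself
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Return 1 if the weighted sum of inputs crosses the threshold, else 0."""
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# Training loop: nudge the parameters whenever a prediction is wrong.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

for x, target in examples:
    print(x, "->", predict(x), "(expected", target, ")")
```

The same principle, scaled up to millions of parameters and far larger sets of examples, underlies the AI systems discussed in the rest of this article.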

Artificial Intelligence is already part of the algorithmic systems with which the applications we use daily on our phones and computers have been designed, including programs present in many aspects of our everyday life.

In the next generation of technological developments, Artificial Intelligence will be present in practically all aspects of human life, and although in most cases it brings solutions that make the world we inhabit easier to live in, there is also a high risk associated with this technology. This is not only because it can be used for military and geopolitical purposes, but also because at some point this intelligence could become so autonomous that it turns against humanity itself, in the style of the argument raised by the Matrix trilogy [3].

This represents a great regulatory challenge around the world, one that, given its dimensions and scope, probably requires a model that anticipates risks: an ex-ante regulation that, building on what has already been developed but with a capacity for foresight and prospection, establishes rational limits on the high risks derived from implementing such a powerful and autonomous technology, while still allowing society to take the best advantage of those changes.

The Canadian case

Considering the high risks involved in Artificial Intelligence, as well as its huge promises, Canada has decided to start working on a regulatory route that apparently points towards ex-ante regulation of the new developments emerging in the field. For this reason, the Office of the Privacy Commissioner of Canada (OPC) is consulting with various sector stakeholders and the general public on how privacy principles should be applied to Artificial Intelligence (AI) [4].

Although the OPC recognizes AI's potential to improve the quality of computing services and processes, it is concerned about privacy risks and, above all, the bias and discrimination that could be unleashed if this technology were to get out of control and work against humanity itself.

At the same time, a lack of regulation could frustrate the democratization and optimization of many of the great promises and advantages that these systems can bring to society, the economy, culture and health.

An issue that worries experts in the field

For the Spanish expert Moises Barrio, there is an urgent need for regulation of Artificial Intelligence: States have so far left the issue in private hands and, given all the known precedents, there has not been enough intervention and regulation, leaving many aspects adrift.

“It is not entirely clear who should be held responsible if AI causes damage (for example, in an accident with an autonomous car or due to an incorrect application of an algorithm): the original designer, the manufacturer, the owner, the user or even the AI itself. If we apply solutions case by case, we risk uncertainty and confusion. The lack of regulation also increases the likelihood of knee-jerk, instinctive or even angry public reactions.” Moises Barrio in Retina, El País, Spain [5].

For Barrio, the risks of AI are multiple, and its great variety of possible applications generates an equally wide range of possible risks:

“AI systems already have the capacity to make difficult decisions that until now have been based on human intuition or on laws and court practices. Such decisions range from life-and-death issues, such as the use of autonomous killer robots in armies, to issues of economic and social importance, such as how to avoid algorithmic bias when AI decides, for example, whether to grant a license to a student or when a prisoner is granted parole. If a human being made these decisions, they would always be subject to a legal or ethical standard. No such rules exist for AI at present.” Moises Barrio in Retina, El País, Spain [6].
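To illustrate the kind of algorithmic bias Barrio alludes to, the following Python sketch, with invented data, group names and threshold (none of it taken from the sources cited here), shows one very simple check a regulator could demand before deployment: comparing the approval rates an automated decision produces for two groups, a basic “demographic parity” test.

```python
# Hypothetical sketch of a basic algorithmic-bias check (demographic parity):
# compare how often an automated decision says "yes" for two groups.
# The decisions and group labels below are invented for illustration only.

decisions = [  # (group, automated decision: 1 = approved, 0 = denied)
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

def approval_rate(records, group):
    """Fraction of cases in the given group that received a positive decision."""
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")
rate_b = approval_rate(decisions, "group_b")
gap = abs(rate_a - rate_b)

print(f"approval rate group_a: {rate_a:.2f}")
print(f"approval rate group_b: {rate_b:.2f}")
print(f"demographic parity gap: {gap:.2f}")

# An ex-ante rule might require this gap to stay below an agreed threshold
# before the system is allowed to operate (the threshold here is illustrative).
THRESHOLD = 0.20
print("within threshold" if gap <= THRESHOLD else "potential disparate impact")
```

The point of the sketch is not the metric itself, which is deliberately simplistic, but that such checks can be defined and required before a system is deployed, which is precisely what an ex-ante approach would make possible.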

Interests opposing regulation

Although at first glance it seems very logical to regulate the new generation of Artificial Intelligence, there are many interests working to prevent this from happening, since for some actors state intervention would limit the economic, military and political potential that can be derived from its implementation. In this regard, Moises Barrio shows how corporate interests in many cases take precedence over regulatory attempts, whether of civil or governmental origin, as is evident in other fields of the economy, such as the financial system:

“AI regulation is currently governed by corporate interests. This is not always desirable: one need only look at the global financial crisis of 2008 to see what happens when industry self-regulation gets out of control. Although States have intervened to require banks to hold better assets to back their loans, the world economy continues to suffer the repercussions of a deregulated regime.” Moises Barrio in Retina, El País, Spain [7].

In conclusion, although Artificial Intelligence represents a great advance for humanity that could significantly improve quality of life in many respects, it also represents an unprecedented risk: if we do not intervene urgently through adequate regulation, we run the risk of it getting out of control. This has already happened in other fields, such as the financial system, whose 2008 crisis revealed the danger of an absence of regulation, or in technology, with the conflicts that applications such as Uber, Airbnb, Blockchain and Netflix have raised globally.

[1] Definition of ex-ante in an economic encyclopedia

[2] Book: Russell, S. and Norvig, P., Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. ISBN 0-13-604259-7

[3] Wikipedia article about Matrix

[4] OPC Consultation on AI risk

[5] Article: Should States regulate Artificial Intelligence?

[6] Article: Should States regulate Artificial Intelligence?

[7] Article: Should States regulate Artificial Intelligence?

Disclaimer: The published articles correspond to contextual reviews or analyses on digital transformation in the information society, duly supported by reliable and verified academic and/or journalistic sources. The publications are NOT opinion articles, and therefore the information they contain does not necessarily represent the position of Andinalink, nor that of the authors or the entities with which they are formally linked, regarding the topics, persons, entities or organizations mentioned in the text.