Humanity’s great challenge in the face of artificial intelligence

New developments in artificial intelligence in recent years have triggered euphoria in the technology industry, promising an automated, convenient and simple future for human civilization. However, some scientists and other experts in the field do not share the same enthusiasm and warn of potential risks.

What are the biggest challenges underlying the development of artificial intelligence?

By Gabriel E. Levy B.

www.galevy.com

To address this complex issue, it is important to clarify that, in all known cases, the concept of artificial intelligence is tied to the concept of the algorithm.

In much simpler terms, an algorithm, or programming code, is a sequence of instructions given to a piece of software or a computer system so that it executes a certain task, often simulating autonomous behavior. Only a few decades into the reign of algorithms, these routines have become remarkably varied and versatile.
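To make the idea concrete, here is a minimal, hypothetical sketch in Python of what such a sequence of instructions looks like; the discount-banner rule is invented purely for illustration.

```python
# A minimal, hypothetical illustration: an "algorithm" is simply an
# explicit sequence of instructions. This one decides, step by step,
# whether an online store should show a discount banner to a visitor.

def should_show_discount(pages_viewed: int, is_returning: bool) -> bool:
    # Step 1: returning visitors who browse a lot are treated as likely buyers.
    if is_returning and pages_viewed >= 5:
        return True
    # Step 2: heavy first-time browsers get the banner too.
    if pages_viewed >= 10:
        return True
    # Step 3: everyone else does not see it.
    return False

print(should_show_discount(pages_viewed=7, is_returning=True))   # True
print(should_show_discount(pages_viewed=3, is_returning=False))  # False
```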

Algorithms, specifically those linked to databases and personal information systems, are responsible for building many of the profiles that assess cross-cutting aspects of contemporary society: our job performance, our economic situation, the advertising we receive, analyses of our health, records of the places we visit and the content we consume; in other words, data that summarize our lives.

As we have discussed extensively in other articles, although banks, insurance companies, governments and other organizations whose operations depend on handling personal information have been adopting algorithms over the years, they are not the main operators of these technologies. It is the large Internet companies that massively collect our information and process it through computer code; it can be said, almost with certainty, that they know us even better than we know ourselves.

Companies like Facebook, Alphabet Inc. (owner of Google), Apple, Amazon, Netflix, eBay and Tripadvisor, among many others, have access to aspects of our lives that even our friends and relatives are unaware of. They know what kind of content we consume, our political ideology, who our friends and family are, what places we visit, what food we like, what movies we watch, what kind of photos we share, who we spend time with, what opinions we hold on sensitive topics, how often we connect to the Internet, and what kind of devices we own and how we use them.

Of course, this information ends up influencing the type of advertising we receive, the information we are exposed to and the content offered to us on our devices, among other variables.

Our information acts as a bargaining chip: free services such as email or social networks are paid for through the use of our data for advertising purposes.

Algorithms obtain information from multiple sources, most of them related to our behavior on the Internet. Based on our online browsing, viewing, consumption and purchasing habits, a profile of us is built, which is then used to try to influence our behavior and to segment us for offers and advertising content.
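A heavily simplified sketch of how such profiling and segmentation might work appears below; the content categories and segment names are invented, and real systems rely on far richer signals and statistical models than this rule-based toy.

```python
# Hypothetical sketch: aggregate browsing events into a profile,
# then map the dominant interest to an advertising segment.
from collections import Counter

def build_profile(events: list[str]) -> Counter:
    # Count how often each content category was visited.
    return Counter(events)

def assign_segment(profile: Counter) -> str:
    # Map the most frequent interest to an (invented) ad segment.
    top_category, _ = profile.most_common(1)[0]
    segments = {
        "sports": "sports-gear-ads",
        "travel": "flight-and-hotel-ads",
        "cooking": "kitchenware-ads",
    }
    return segments.get(top_category, "generic-ads")

events = ["travel", "travel", "cooking", "travel", "sports"]
print(assign_segment(build_profile(events)))  # flight-and-hotel-ads
```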

Algorithmic discrimination

A study published in 2020 by George Washington University, in the United States, found that the algorithms of Uber and other ride-hailing applications apply higher rates when a trip's destination is an economically depressed area or one where low-income people live, especially neighborhoods inhabited by people of African descent or Latinos[1].

The researchers analyzed data from more than 100 million trips taken in the city of Chicago between November 2018 and December 2019. Each trip record contained a wide variety of information, such as the start and end points, duration, cost, whether the trip was shared or individual, and the customer's ethnicity.

“After analyzing transportation census data from the city of Chicago, we found that these companies charge a higher price per mile for trips whose destination is an area with a higher proportion of ethnic-minority residents than for trips to majority-white neighborhoods. Basically, if you go to a neighborhood with a large black population, you will pay a higher price for your trip,” said Aylin Caliskan, co-author of the study[2].
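The analytical idea behind the study can be illustrated with a toy computation: compare the average fare per mile of trips ending in majority-minority areas with that of trips ending elsewhere. The handful of trip records below is invented; the actual study worked with more than 100 million Chicago trips.

```python
# Toy disparate-pricing check with invented data: does the average
# fare per mile differ by the destination's demographic makeup?

trips = [
    # (fare_usd, miles, minority_share_of_destination)
    (14.0, 5.0, 0.8),
    (12.5, 5.0, 0.7),
    (10.0, 5.0, 0.2),
    (9.5, 5.0, 0.1),
]

def avg_fare_per_mile(records):
    return sum(fare / miles for fare, miles, _ in records) / len(records)

minority_dest = [t for t in trips if t[2] > 0.5]
white_dest = [t for t in trips if t[2] <= 0.5]

print(avg_fare_per_mile(minority_dest))  # 2.65 per mile in this toy data
print(avg_fare_per_mile(white_dest))     # 1.95 per mile
```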

COMPAS is a piece of software used in judicial proceedings by the justice system in at least ten U.S. states, where judges employ it as an aid when issuing sentences. The algorithm is based on criminal statistics, and on several occasions social organizations and trial lawyers have denounced that, if the offender is of Latino or black origin, the software tends to classify the suspect as posing a “high risk of committing new crimes”[3].

Along the same lines, an analysis of more than 10,000 defendants in the state of Florida, published in 2016 by the investigative journalism organization ProPublica, showed that black people were often rated as highly likely to reoffend, while white people were considered less likely to commit new crimes[4].
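The kind of audit ProPublica performed can be sketched as a comparison of false-positive rates: among defendants who did not in fact reoffend, how often was each group labeled high risk? The records below are invented for illustration.

```python
# Toy fairness audit with invented records: compare how often
# non-reoffenders were labeled "high risk" in each group.

def false_positive_rate(records):
    # records: (labeled_high_risk, actually_reoffended)
    non_reoffenders = [r for r in records if not r[1]]
    flagged = [r for r in non_reoffenders if r[0]]
    return len(flagged) / len(non_reoffenders)

group_a = [(True, False), (True, False), (False, False), (False, True)]
group_b = [(True, False), (False, False), (False, False), (False, True)]

print(round(false_positive_rate(group_a), 2))  # 0.67: flagged twice as often
print(round(false_positive_rate(group_b), 2))  # 0.33
```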

Similarly, a study by American researchers Gideon Mann and Cathy O'Neil, published in Harvard Business Review, found that the programs that pre-select résumés for companies and corporations in the United States are not free of typically human prejudices and biases; the selection is therefore not objective, and many profiles may be discarded simply because of a preconceived idea held by whoever programmed the code[5].
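A deliberately crude, hypothetical example of the mechanism: a single preconceived rule, written by whoever programmed the screener, becomes an automated filter that discards profiles for reasons unrelated to job performance. The gap-penalty rule below is invented.

```python
# Hypothetical resume screener: the hard-coded employment-gap rule
# reflects a programmer's preconception, not an objective measure,
# yet it silently eliminates candidates (e.g., after parental leave).

def passes_screen(resume: dict) -> bool:
    if resume["employment_gap_years"] > 1:
        return False  # the baked-in bias
    return resume["years_experience"] >= 3

candidates = [
    {"name": "A", "years_experience": 8, "employment_gap_years": 2},
    {"name": "B", "years_experience": 3, "employment_gap_years": 0},
]
print([c["name"] for c in candidates if passes_screen(c)])  # ['B']
```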

Daniel Innerarity’s critical view

In an interview published by the Spanish newspaper El País, the philosopher Daniel Innerarity asserts that “we are at a moment in the history of humanity in which it is still possible to negotiate, dissent, reflect on these technologies”, warning that in a few years it may be too late and the consequences for humanity unpredictable, especially since “democracy is not up to the complexity of the world”[6].

For Innerarity, the greatest difficulty in approaching artificial intelligence stems from the mistake of comparing it with human intelligence, since the two are very different.

“It seems to me a big mistake, the strategy of defining artificial intelligence in terms of humans: if humans have rights, machines should have them too; if we pay taxes, so should they; if we tell jokes, so will they. Because we are talking about two completely different intelligences. My hypothesis is that there is not going to be a substitution.”[7].

But Daniel Innerarity is not alone in this view. Kenneth Cukier, Viktor Mayer-Schönberger and Francis de Véricourt, authors of the book Framers: Human virtue in the digital age, agree that imagining alternative realities allows us to turn causal reasoning into something actionable, letting us analyze potential causes to determine their particular effects. The two elements, counterfactual thinking and causal reasoning, thus reinforce each other, and this is what makes us cognitively superior to other species and to machines, which do not possess, and are unlikely ever to possess, the capacity for counterfactual thinking.

“Without causality we would drown in a sea of events, stripped of meaning. Without counterfactuality we would be prisoners of what exists, stripped of options.”[8].

For Daniel Innerarity, there are many examples of how we have overestimated so-called artificial intelligence while at the same time having serious difficulty understanding its underlying dimensions. He attributes this overvaluation, in many cases, to an industry that wants to sell us a better future; in other cases, we instead undervalue it, out of the normal skepticism that any new technological development arouses.

“We have gone, in one year, from thinking that artificial intelligence was going to save politics to thinking, after Cambridge Analytica, that it is going to kill democracy. Why, in such a short period of time, have we gone from great over-enthusiasm to the opposite, as with the Arab Springs? That wave of democratization we expected from the internet has not happened, and now the word internet is associated with hate speech, disinformation and so on. When we have such different attitudes towards a technology, it means that we are not understanding it well. Because it is true that the internet horizontalizes the public space; it puts an end to the verticality that would turn us citizens into mere spectators or subordinates. But it is not true that it democratizes by itself, above all because technology does not solve the political part of political problems.”[9].

The difficult path to ensuring algorithm neutrality

Historically, when a technological development carries a potential risk of concentration or misuse, declaring that technology neutral has proven to be an effective regulatory mechanism. The Internet, for example, was declared a neutral network a few years after its consolidation, and this legal concept has been applied to many technologies in the telecommunications sector.

Algorithms could likewise be declared neutral, and artificial intelligence is no exception.

The issue of algorithm neutrality has been widely discussed by academics over the last decade; however, there has been little progress, most likely because the debate has not yet migrated from the academic to the legislative arena.

In the media, little is known about the subject; possibly the main point of reference is the scandal involving Facebook and the British firm Cambridge Analytica[10], which set off alarm bells in public opinion when it was revealed that our information could be used to favor private political and economic interests without our express authorization, in breach of confidentiality policies and agreements.

This is a good example of a situation in which algorithms cease to be neutral; that is, when they benefit one interest, person or organization to the detriment of others. This was documented in many political campaigns around the world, where the Cambridge Analytica algorithm sought to benefit one candidate over others through a mix of strategies that included using voters' information, especially their deepest fears and apprehensions, to expose them to content, much of it false, designed to steer their vote.

In other potentially harmful cases, algorithms have been shown to favor commercial interests. Such is the case of Amazon, which uses the code of its own online store to display its products in the top positions, above those of its competitors, or of Google, which displays results related to its own products first: if someone types the word “email” into the search engine, “Gmail” comes up as the first option.
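A hypothetical sketch of this self-preferencing pattern is shown below: a ranking function that boosts first-party results before sorting, so they land above comparable third-party results. It illustrates the practice described, not actual Amazon or Google code.

```python
# Hypothetical self-preferencing: first-party results get a fixed
# score boost before ranking, pushing them above more relevant
# third-party alternatives.

def rank(results):
    def score(r):
        boost = 0.3 if r["first_party"] else 0.0
        return r["relevance"] + boost
    return sorted(results, key=score, reverse=True)

results = [
    {"name": "third-party-mail", "relevance": 0.9, "first_party": False},
    {"name": "Gmail", "relevance": 0.7, "first_party": True},
]
print([r["name"] for r in rank(results)])  # ['Gmail', 'third-party-mail']
```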

For this type of practice, Google was fined $5 billion by the European Union, whose ruling found that the company “blocks innovation by abusing its dominant position,” including “through the configuration of its search engine algorithm”[11].

For Daniel Innerarity, the discussion should involve not only the neutrality of code but also the ownership of data; that is, regulating the capture, use and management of the information that companies and organizations obtain from their users via the Internet.

“There is a need for a renewal of concepts, and this is where philosophers have a role to play. For example, nowadays people often ask: whose data is it? It seems to me that ownership is a very inadequate concept to apply to data, which rather than a public good is a common good, something that cannot be appropriated, especially because the level of collection that I tolerate greatly conditions that of others. And now we are dealing with an idea of privacy that we have never had before, and with the concept of sovereignty, the concept of power… A philosophical reflection is needed on concepts that are being used inappropriately and deserve to be revised. There are many centers in the world reflecting on this from an ethical and legal point of view, and very few people reviewing it from a political point of view: what is the politics of algorithms, and what impact does this have on democracy?”[12]

In conclusion, although so-called artificial intelligence, which is nothing more than the sophistication of algorithms, can improve the quality of human life in many respects, it also represents a great risk, one that several authors have pointed out.

For artificial intelligence to work in favor of humanity rather than against it, regulatory action guaranteeing the neutrality of algorithms must become an urgent priority on government agendas, together with policies that define clear rules of the game for these software codes.

[1] Pandey, A. and Caliskan, A. (2021). Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy’s Price Discrimination Algorithms. George Washington University. Available at https://arxiv.org/pdf/2006.04599.pdf

[2] Ibid.

[3] Duarte, F. (2018). 5 algorithms that are already making decisions about your life that you may not have known about. In BBC.com. Available at https://www.bbc.com/mundo/noticias-42916502

[4] Ibid.

[5] Mann, G. and O’Neil, C. (2016). Hiring Algorithms Are Not Neutral. In Harvard Business Review. Available at https://hbr.org/2016/12/hiring-algorithms-are-not-neutral

[6] Salas, J. (July 4, 2022). Daniel Innerarity: “Algorithms are conservative and our freedom depends on them letting us be unpredictable”. In El País. Available at https://elpais.com/tecnologia/2022-07-05/daniel-innerarity-los-algoritmos-son-conservadores-y-nuestra-libertad-depende-de-que-nos-dejen-ser-imprevisibles.html

[7] Ibid.

[8] Véricourt, F.; Mayer-Schönberger, V. and Cukier, K. (2021). Framers: Human virtue in the digital age.

[9] Salas, J. (July 4, 2022). Daniel Innerarity: “Algorithms are conservative and our freedom depends on them letting us be unpredictable”. In El País. Available at https://elpais.com/tecnologia/2022-07-05/daniel-innerarity-los-algoritmos-son-conservadores-y-nuestra-libertad-depende-de-que-nos-dejen-ser-imprevisibles.html

[10] BBC Mundo. (March 20, 2018). 5 keys to understanding the Cambridge Analytica scandal that made Facebook lose US$37 billion in one day. Available at https://www.bbc.com/mundo/noticias-43472797

[11] Ibid.

[12] Salas, J. (July 4, 2022). Daniel Innerarity: “Algorithms are conservative and our freedom depends on them letting us be unpredictable”. In El País. Available at https://elpais.com/tecnologia/2022-07-05/daniel-innerarity-los-algoritmos-son-conservadores-y-nuestra-libertad-depende-de-que-nos-dejen-ser-imprevisibles.html