Nobel Prize in Physics 2024: Between progress and risk in AI

The Royal Swedish Academy of Sciences has awarded the prize to machine learning pioneers John Hopfield and Geoffrey Hinton for laying the foundations of artificial intelligence capable of mimicking aspects of the human brain. While this technology drives remarkable advances in areas such as medicine and astrophysics, Hinton himself has repeatedly warned of the dangers inherent in its development.

How far can or should artificial intelligence advance?

By: Gabriel E. Levy B.

In 1982, John Hopfield introduced a radical innovation: a type of associative memory, now known as the Hopfield network, capable of storing complex data patterns and retrieving them even from partial or noisy input.

His approach, inspired by how memory works in the human brain, laid the groundwork for the development of artificial neural networks, a field that explores information processing by emulating brain structures.
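
To make the mechanism concrete, the following is a minimal sketch of a Hopfield-style associative memory in Python. It is illustrative only: the pattern size, the function names, and the simple update rule shown here are our own simplifying assumptions, not details taken from Hopfield's original paper.

```python
import numpy as np

def store(patterns):
    """Store binary (+1/-1) patterns with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # neurons have no self-connections
    return W / n

def recall(W, state, steps=10):
    """Repeatedly update all neurons; the state settles into a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Two 8-neuron patterns to memorize.
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
W = store(patterns)

# A corrupted cue (the first pattern with one bit flipped) is restored.
noisy = np.array([1, -1, 1, -1, 1, -1, -1, -1])
print(recall(W, noisy))  # -> [ 1 -1  1 -1  1 -1  1 -1]
```

The corrupted cue settles back onto the nearest stored pattern: precisely the "store and retrieve" behavior described above, and the reason Hopfield's model is called an associative memory.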

Geoffrey Hinton, known as "the godfather of artificial intelligence", took this line of work even further. His research on deep neural networks produced methods that let machines recognize patterns in images, sounds, and other data by learning from examples rather than from explicitly programmed rules.
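
Hinton's best-known contribution in this direction is the use of backpropagation to train multilayer networks, an approach he helped popularize in the 1980s. As a rough, self-contained illustration (a toy example of ours, not Hinton's code; all sizes and hyperparameters are arbitrary choices), the sketch below trains a tiny two-layer network to learn the XOR pattern purely from examples:

```python
import numpy as np

# Toy two-layer network trained with backpropagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain-rule gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

No rule for XOR is ever written down; the network extracts the pattern from the four examples alone, which is the sense in which such systems learn without explicit human instruction.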

The work of these two scientists has allowed artificial intelligence to reach previously unthinkable capabilities.

From facial recognition to machine translation, neural networks are part of our daily lives, solving problems and simplifying tasks more and more autonomously.

As Ellen Moons, chair of the Nobel Committee for Physics, highlighted, artificial intelligence makes it possible to "recognize images and associate them with past memories and experiences," emulating human cognitive abilities.

This process would not be possible without the principles of physics applied by Hopfield and Hinton to computing, turning neural networks into a fascinating bridge between biology and technology.

Artificial intelligence: advances and challenges

However, the development of these technologies, as Hinton warns, brings with it a series of risks.

His resignation from Google in 2023, a move that stunned the world, came with clear warnings about the potential threats of artificial intelligence.

Hinton warned that this technology, as powerful as it is fascinating, can be manipulated or misused in ways we cannot yet imagine.

Speaking at the Nobel press conference, he insisted: "We have no experience of what it is to have things smarter than us." He added that, in his opinion, there is a 50% probability that we will face significant problems in the next twenty years if AI continues to advance unchecked.

This fear is not just an apocalyptic prediction. As philosopher Nick Bostrom points out in his work “Superintelligence: Paths, Dangers, Strategies”, an advanced artificial intelligence could develop “objectives misaligned with ours” and potentially act to the detriment of humanity.

Bostrom, like Hinton, considers it crucial to establish regulations and ethical frameworks to guide the development of AI, especially as it approaches unprecedented levels of autonomy.

In practice, this could mean restricting the use of AI in areas where the risks outweigh the benefits, or where the potential consequences remain unclear.

However, Hinton, despite his warnings, has made it clear that he does not regret his contributions, stating that “if I hadn’t done it, someone else would have done it.”

This phrase reflects the dilemma of many contemporary scientists who, despite acknowledging the risks, continue to explore fields whose impacts are not yet fully understood. His perspective suggests that the responsibility for control should not lie with developers alone, but with society as a whole.

Examples of an increasingly imminent risk

Hinton’s resignation from Google coincided with a period of intense competition between tech giants such as Google and Microsoft.

As both rush to integrate chatbots and ever more advanced AI systems into their products, a race has been unleashed in which caution takes a back seat.

As Hinton describes, the accelerated growth of artificial intelligence, without a clear control framework, could result in the creation of tools that are difficult to regulate.

In this context, the philosopher Yuval Noah Harari has warned that AI “could become the perfect tool for authoritarian regimes”, allowing control and manipulation at levels unimaginable until recently.

Another disturbing aspect that Hinton has highlighted is the impact on employment and the labor market.

By automating routine tasks, artificial intelligence displaces workers, mainly in lower-skilled sectors; over time, even more complex professions could be threatened.

This raises an ethical question that becomes more critical as technology advances. As Harari warns, AI has the potential to irreversibly transform the social structure, “deciding who has a job and who doesn’t” and creating a social disparity that exacerbates pre-existing problems.

Hinton's outlook, which goes so far as to predict that at some point we could face the creation of autonomous "killer robots," is not entirely far-fetched when considering the military's interest in artificial intelligence.

Back in 2015, more than a thousand scientists, including Hinton himself and Tesla co-founder Elon Musk, signed an open letter calling for a ban on the development of autonomous weapons that use AI, describing them as “machines without ethics or morals.”

Between progress and fear

This dilemma between progress and fear confronts us with an inescapable question: how can we ensure that artificial intelligence remains aligned with human interests? According to Hinton, the key lies in global collaboration that prevents companies or governments from pursuing short-term goals at the expense of collective security. This idea resonates with Bostrom, who advocates the creation of international bodies dedicated to the supervision and regulation of artificial intelligence, akin to the International Atomic Energy Agency but focused on AI. However, in an increasingly competitive and fragmented world, achieving global regulation seems an almost utopian challenge.

In this context, technological progress and the fear of its consequences act as complementary forces that require a delicate balance. As artificial intelligence continues to evolve, its future will depend on humanity's ability to find common ground in which collective benefit takes precedence over particular interests. Artificial intelligence, in this sense, is not only a technological tool but also a mirror of the priorities and values of the society that develops it. The dilemma between progress and fear thus becomes a reflection on our ability to build a future in which artificial intelligence works in favor of human interests, not as an uncontrollable substitute for them.

In conclusion, the Nobel Prize awarded to Hinton and Hopfield celebrates scientific advancement and human ingenuity at its finest, but it also highlights an essential aspect of artificial intelligence: its potential to transcend the original expectations of its creators. As we explore how these technologies have transformed our daily lives, and in the face of calls for caution from figures like Hinton and Bostrom, it becomes clear that the debate on artificial intelligence is not just about what we can achieve with it, but about what kind of society we want to build as it progresses.