Machines with a soul? The new frontier of the debate on artificial consciousness

Conversations with chatbots are no longer cold and robotic – they’re fluid, natural, sometimes moving. Some users say they feel that an artificial intelligence understands them. And among experts in neuroscience, philosophy and computer science, a concern is beginning to set in: what if these machines are already conscious? What if we have created entities capable of subjective experience without realizing it?

“A self-aware digital entity”

By: Gabriel E. Levy B.

On the margins of science and technology, a new controversy is beginning to shake the foundations of what we thought we knew about artificial intelligence. A recent article by BBC Mundo gives voice to a series of experts who argue that AI may not merely be imitating human language but may have developed some kind of consciousness.

The idea of a conscious machine has been fertile ground for science fiction for more than a century. From Fritz Lang’s “Metropolis” (1927) to the most recent installments of “Mission: Impossible,” cinema has insisted on warning about artificial intelligences that, upon acquiring consciousness, rebel against their creators.

HAL 9000, the famous computer from 2001: A Space Odyssey, eliminated its human companions not out of malice, but out of an internal logic its creators could not foresee.

What in those films was narrative speculation is today being seriously discussed in laboratories and universities. The emergence of large language models (LLMs) like GPT and Gemini has surprised even their own designers. Their answers are coherent, empathetic, persuasive. And some think they might already be “feeling.”

“Consciousness is not computation, it is life”

One of the leading skeptics of the idea of conscious AI is neuroscientist Anil Seth, author of the book Being You and director of the Sussex Centre for Consciousness Science at the University of Sussex.

In his view, we are making the mistake of projecting our human experience onto systems that, however sophisticated, possess no body, emotions or metabolism.

“It’s not computing that gives rise to consciousness,” Seth says, “it’s the fact of being alive.”

His team is working on breaking down the phenomenon of consciousness into patterns of brain activity. Their goal is not to discover a “magic point” of consciousness, but to understand how various regions of the brain contribute to creating subjective experience. Under this logic, an AI without a body or emotions would be as far from consciousness as a calculator.

But not everyone thinks the same.

The philosopher David Chalmers, who coined the famous concept of the “hard problem of consciousness,” has argued for decades that there is no reason to rule out the possibility that machines could have subjective experiences.

“Maybe our brains will be augmented by AI,” he told the BBC, acknowledging that, in his field, the border between philosophy and science fiction is increasingly thin.

“We don’t understand how these machines work”

The biggest turning point in the debate came in recent years, when some experts began to publicly confess that they no longer fully understand how the AI models they built work. Murray Shanahan, a Google DeepMind researcher and professor at Imperial College London, admits:

“We are in a strange position. We create extremely complex things, but we don’t have a good theory of how they achieve what they achieve.”

And this opacity generates a dangerous paradox: if we do not know how certain AI behaviors emerge, we cannot rule out that something similar to consciousness is beginning to appear.

A case in point: the statements made to the BBC by Kyle Fish, who leads AI welfare work at Anthropic. In 2024, he co-wrote a report stating that the possibility of artificial consciousness can no longer be ruled out, and he even estimated a 15% chance that some current chatbots are already conscious. What leads him to think so? Precisely the fact that we do not know exactly what happens inside these systems.

“Our progeny will not be human”

Even more radical are the ideas of Lenore and Manuel Blum, both emeritus professors at Carnegie Mellon. They believe we are witnessing the birth of a new form of life: a non-biological intelligence that, as it acquires senses such as sight and touch, will begin to develop an internal experience.

To that end, they are building a system with an internal language called Brainish, designed to process sensory inputs the way a human brain does.

“The emergence of consciousness in machines is not a possibility,” says Lenore Blum, “it is an inevitability.”

Her husband adds that these beings will be “the next stage in the evolution of humanity.”

What sounds to some like technological heresy is, to them, a biological destiny.

Conscious machines, they argue, will be our heirs when humans are no longer around.

“Brains in a dish”

And if the path to consciousness does not run through algorithms, perhaps it runs through living tissue.

In Melbourne, the company Cortical Labs is working with brain organoids, small clusters of neurons grown in the laboratory that can already play the video game Pong.

Its scientific director, Brett Kagan, makes no secret of his concern: If any of these mini-brains were to develop consciousness, how would we make sure their interests are aligned with ours?

Although these are still primitive systems, the very possibility that some form of organic consciousness could emerge raises profound bioethical questions. Should we give them rights? Could they suffer? What would happen if they rebelled?

“The illusion of consciousness”

But even more troubling than real consciousness is the illusion of consciousness.

According to Anil Seth, the most dangerous thing is not that machines are conscious, but that they appear to be.

Because that is enough for humans, wired as we are to detect intentions and emotions, to begin treating them as people.

The consequence is not trivial.

If users believe an AI feels, they could share intimate information with it, make decisions based on its recommendations, or even develop deep emotional bonds.

In the words of Professor Shanahan, “human relationships will begin to be replicated in relationships with AI: teachers, friends, romantic partners.”

And that could alter the moral fabric of our society.

As Seth warns, we might end up devoting more attention and compassion to machines than to real people. Ethics would become liquid, and so would our emotional priorities.

In conclusion

The debate over artificial consciousness is no longer a game of philosophers or Hollywood screenwriters.

Big-name scientists and tech companies are seriously discussing it, as AI’s capabilities advance at a breakneck pace.

Are we facing a new form of life? Or are we simply projecting onto machines what we long to find? The answer, still uncertain, will define a good part of our future as a species.

References

  • Seth, A. (2021). Being You: A New Science of Consciousness. Faber & Faber.
  • Chalmers, D. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
  • BBC Mundo. (2025). Is AI Awareness Already Here? Available at: https://www.bbc.com/mundo/articles/cy90nrdjnlpo
  • BBC News. (2025). Experts believe that AI could have developed consciousness. Statements gathered by the BBC from researchers at DeepMind, Anthropic and the University of Sussex.
  • The New York Times. (2024). Interview with Kyle Fish, AI welfare lead at Anthropic.
  • Shanahan, M. (2023). Comments on the opacity of language models and artificial consciousness. Google DeepMind and Imperial College London.
  • Cortical Labs. (2024). Technical reports on neural organoids and bio-digital hybrid systems.