Artificial intelligence has become a regular companion of human thought. GPT-4 responds, writes, solves, proposes.
And while its promised efficiency seems unbeatable, a new study raises a disturbing warning: relying on this tool for complex tasks can dull our mental edge.
The constant delegation of deep cognitive processes could be slowly anesthetizing our most valuable abilities.
“When we stop thinking, something inside us also stops”
By: Gabriel E. Levy B.
This is not the first time that humanity has externalized mental functions. Since the invention of writing, as Walter Ong noted, each technology of knowledge has meant a transformation in how we process the world.
We write things down so as not to memorize them, we navigate by GPS instead of orienting ourselves, and we ask search engines before consulting our own memory.
But never has a tool offered answers as complex, as convincing, as fast as GPT-4.
Nicholas Carr anticipated this in The Shallows (2010):
“What the Net seems to be doing is diminishing my ability to concentrate and contemplate.”
A recent study published in the Nature journal Humanities and Social Sciences Communications[1] showed that artificial intelligence is taking a new step in that direction.
The study, which analyzed 482 people, found that those who used GPT-4 on moral dilemmas or abstract reasoning tasks produced faster and apparently accurate results, but with less depth and clarity.
Is it possible that we are starting to think less, and we don’t even notice it?
“The machine thinks for me”
The paradox of using GPT-4 is not in its ability to help, but in our tendency to let it do everything. The phenomenon identified as “cognitive displacement” involves a gradual transfer of mental effort to the machine. It is a delegation that, as with any unexercised muscle, could lead to functional atrophy.
“Epistemic laziness,” another key finding of the study, refers to the mental shortcut that many users adopt. GPT-4 proposes, and the user nods.
The problem is not using the tool, but stopping questioning it. If everything is automated, even thought, what room is left for doubt, error, introspection?
The Danish researcher Svend Brinkmann, in his work Stand Firm, argued that deep thinking requires endurance, time, even discomfort.
GPT-4, by offering instant solutions, seems to erase that friction and, with it, the cognitive processes that forge critical thinking and autonomy.
“Imagining is not the same as solving”
The study does not condemn the use of AI, but it does introduce an important distinction: the impact varies depending on the type of task. In activities such as brainstorming or creative writing, GPT-4 can act as a spark of inspiration. But when it comes to formal logic, ethical dilemmas, or analytical reasoning, its intervention can erode human capabilities.
Why does this happen? Because creative tasks are divergent, open, and in that field, AI can feed new ideas.
On the other hand, logical and ethical thinking demands rigor, consistency, patient construction of arguments. The ease with which GPT-4 delivers answers can lead the user to confuse clarity with truth, and speed with depth.
The problem is twofold: not only are we delegating to AI, but we are also exercising our ability to grapple with complexity less and less.
Solving dilemmas requires something that AI doesn’t yet have: awareness, intuition, internal conflict. If we let it solve them for us, we are giving up the right to learn by thinking.
“Cases where thought was diluted”
The cases observed in the study provide revealing examples. One group had to solve the classic trolley problem: saving five people by sacrificing one. Those who used GPT-4 quickly proposed the utilitarian option, without exploring ethical nuances or considering values such as individual dignity or moral autonomy. Those who reasoned without AI offered more diverse answers, with arguments that reflected moral tension and an awareness of ambiguity.
In another task, participants were asked to justify why an algorithm should or should not decide access to medical treatments based on genetic history. AI users repeated arguments generated by GPT-4 without demonstrating an understanding of the risks of discrimination or the social implications. Their reasoning, although structured, lacked depth.
In contrast, creative writing tasks, such as writing a poem or starting a short novel, showed another pattern. AI amplified participants’ productivity, generated suggestive images, and stimulated new narrative forms. Here, the delegation did not extinguish creativity, but multiplied it.
The case is clear: it is not a question of banning or fearing artificial intelligence, but of understanding its limits and, above all, ours.
In conclusion, the constant use of GPT-4 can make intellectual life easier, but at the price of making it more superficial. If we stop exercising deep reasoning, critical decision-making, and autonomous problem-solving, we run the risk of losing what makes us truly thinking beings. Artificial intelligence is not an enemy, but it is a mirror: it reveals how willing we are to continue thinking for ourselves.
References:
- Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton, 2010.
- Ong, Walter. Orality and Literacy: The Technologizing of the Word. Fondo de Cultura Económica, 1987.
- Brinkmann, Svend. Stand Firm: Resisting the Self-Improvement Craze. Polity Press, 2017.
- Ahmad, S. F., Han, H., Alam, M. M., et al. Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications 10, 311 (2023). https://doi.org/10.1057/s41599-023-01787-8
[1] https://www.nature.com/articles/s41599-023-01787-8