The Karma of AI Evolution

Amid headlines proclaiming a “revolution” and presentations loaded with promise, OpenAI announced GPT-5, its latest artificial intelligence model.

The staging suggests a historic leap, but beneath the surface the improvements are more modest than the marketing implies: adjustments to speed, precision and multimodal handling that fine-tune what already exists.

More than a change of era, it is a move to reaffirm its presence in a technological race that allows no pauses.

“The Promise of the Synthetic Mind”

By: Gabriel E. Levy B.

The history of artificial intelligence is not measured in decades, but in leaps.

From the first chess programs that captivated the public by defeating human champions to today’s systems capable of writing novels, producing software code, or generating medical diagnoses, each advance has pushed the limit of what we understand by intelligence.

Marvin Minsky, one of the founding fathers of AI, warned as early as the 1980s that “every time AI solves something, we stop calling it artificial intelligence”.

The emergence of ChatGPT in 2022 marked a turning point.

GPT-3.5 and then GPT-4 extended their tentacles into education, programming, and creativity.

Now, GPT-5 seeks to fine-tune what already seemed fine-tuned: fewer errors, more speed, better contextualization. Its advanced multimodality, understanding not only words but also images, sound and video as a whole, recalls Alan Turing’s old dream: a machine capable of perceiving the world and not just processing symbols.

But the leap, though real in technical terms, is perceived differently.

This is not an “internet moment” or a “smartphone moment”, but a more subtle update.

Perhaps because, as sociologists Evgeny Morozov and Shoshana Zuboff point out, today’s technological innovation is caught between the pressure of competition and the need to show constant novelty, even if it is minimal.

“Moving forward so as not to be overtaken”

The competition for supremacy in AI is so dizzying that it feels more like a marathon without a finish line than a race with a purpose.

OpenAI, Google DeepMind, Anthropic and dozens of emerging laboratories are working on models that are ever faster, more efficient and more capable of “reasoning”.

GPT-5 arrives not only as a tool, but as a strategic response to not cede ground to Google’s Gemini or Anthropic’s Claude.

In this context, each new version has a double function: to improve the user experience and to reinforce the perception of leadership.

The unification of models, the study mode, the integration with services such as Gmail and Calendar, or the customization of the tone of voice are not only technical improvements, but also market messages.

OpenAI seems to be saying, “We’re here, we’re still leading, we’re not falling behind.”

However, the ethical and strategic dilemma is evident.

The speed of updating leaves little room to reflect on social impacts, risks of misuse or inequalities in access to these tools.

In other words, technical evolution runs faster than the evolution of regulatory frameworks and our own abilities to understand and manage what we have created.

“When progress becomes karma”

The concept of “karma” here is not spiritual, but cultural: the idea that past actions condition the present and the future.

In the AI race, each new model is born with the accumulated expectations and pressures of its predecessor.

No matter how much it improves, it will always be compared, evaluated, and judged through the prism of “the next one must be bigger than the last.”

This cycle can lead to a phenomenon known in economics as “diminishing returns on innovation.” Nicholas Carr, a critic of technological fetishism, formulates it this way:

“Technology does not stop, but the impact of each successive advance tends to be less than the previous one, even if the effort is greater.”

GPT-5 represents a technical leap, but not a cultural breakthrough.

And yet, the pressure to announce it as such is inevitable.

In this dynamic, the risk is twofold: for developers, who live in constant tension between improving and revolutionizing, and for users, who may lose their capacity for wonder and their critical eye.

If everything is “the most advanced in history,” how do we distinguish the truly transformative from the incremental?

“From theory to the field”

GPT-5’s use cases show both its power and its limits.

In programming, GPT-5 Pro significantly reduces errors and outperforms rivals such as Gemini and Claude, speeding up processes that previously required days of human labor.

In education, study mode allows students to receive personalized tutoring tailored to their learning style, something that could democratize access to knowledge, as long as it doesn’t rely on expensive subscription plans.

In the media, multimodality opens the door to richer analyses: a journalist could upload a video of a press conference, ask for a summary and also receive historical context and non-verbal language analysis.

However, the risk of these same capabilities being used to manufacture audiovisual disinformation is equally real.

In healthcare, GPT-5 can analyze medical images alongside clinical descriptions to suggest preliminary diagnoses, but no serious expert would recommend blindly trusting the machine: the margin of error, while narrow, has not disappeared.

Finally, in the creative realm, the customization of response “personalities,” from a cynical to a more robotic tone, opens up narrative possibilities, but also raises questions about the emotional manipulation of users and the authenticity of the interaction.

“It cracks at what is truly human”

Despite its technical prowess, GPT-5 stumbles on the most delicate point: that which makes intelligence deeply human.

The humour it produces is still rudimentary, its irony comes across as flat, its sarcasm is almost non-existent and, above all, its ability to understand complex contexts remains limited.

This lack feeds one of the most recurrent criticisms from academics and analysts: in these essential aspects, GPT-5 is not too far from GPT-4.5.

Such a perception seems to vindicate Kenneth Cukier, Viktor Mayer-Schönberger and Francis de Véricourt, authors of Framers, who have insisted that the human mind’s ability to contextualize, interpret and project scenarios cannot be replicated by any machine.

In conclusion, GPT-5 is a technical milestone, but also a mirror where the karma of technological evolution is reflected: the constant obligation to move forward so as not to go backwards, even when progress no longer involves a cultural leap. It reminds us that the true AI revolution will not be the one that adds more modes or reduces more errors, but the one that forces us to rethink what we want to do with these machines and, above all, what we do not want to delegate to them.

References

  • Minsky, Marvin. The Society of Mind. Simon & Schuster, 1986.
  • Carr, Nicholas. The Glass Cage: Automation and Us. W.W. Norton & Company, 2014.
  • Morozov, Evgeny. To Save Everything, Click Here. PublicAffairs, 2013.
  • Zuboff, Shoshana. The Age of Surveillance Capitalism. PublicAffairs, 2019.
  • Cukier, Kenneth; Mayer-Schönberger, Viktor; de Véricourt, Francis. Framers: Human Advantage in an Age of Technology and Turmoil. Dutton, 2021.