Outdated cybersecurity in the face of emerging threats

An article in the Harvard Business Review, published on January 11, 2026, issues a resounding warning: traditional cybersecurity paradigms are no longer enough.

This is not an isolated technical failure, but a structural shift in the balance of power between defenders and attackers in the digital environment.

The EchoLeak vulnerability, which affected Microsoft 365 Copilot, represents just the first visible crack in a system designed for a type of enemy that no longer exists.

“Security designed for threats that no longer exist”

By: Gabriel E. Levy B.

The digital world is in check. While companies are racing to implement artificial intelligence solutions to optimize processes, reduce costs and improve the customer experience, attackers do exactly the same thing: they use AI to break the systems that were supposed to protect us.

Until recently, most cybersecurity frameworks were designed to defend against humans.

Users who clicked on deceptive links, emails that impersonated identities and stolen passwords in underground forums.

It was a game of reaction: the attackers innovated and the defenders responded.

But the emergence of artificial intelligence has changed the rules.

AI systems, such as large language models (LLMs), process information at speeds that exceed human capabilities, and they do so through complex, opaque layers that can be manipulated without anyone noticing.

In this scenario, the case called EchoLeak, discovered in June 2025 by the company Aim Security, was a warning sign of what may happen in the future. Unlike other attacks, the victim did not need to click on anything or fall for a hoax. The attackers managed to extract sensitive information by taking advantage of the way a Microsoft artificial intelligence tool called Copilot worked internally.

Microsoft fixed the problem without users having to do anything. But the message was clear: today attacks no longer depend on people making mistakes, but aim directly at the inner workings of artificial intelligence systems.

The Harvard Business Review article puts it bluntly: the current ways of protecting ourselves are not prepared for a world where machines learn on their own and make decisions.

Expert Zayd Enam explains that, while it was previously thought that systems always worked the same, with artificial intelligence that is no longer the case.

As a result, attacks can be more difficult to foresee, change all the time, and often go undetected by traditional security systems.

An army of machines: more identities than humans

The changing nature of enterprise technology infrastructure adds a new level of complexity. A December 2025 report also published by Harvard Business Review reveals that companies now handle a ratio of 82 machine identities for every human employee. These automated identities, created so that bots and services can communicate with each other, have become a new minefield for security.

These identities, often without direct supervision, can be forged and used to trigger automated chain actions: access to internal systems, file transfer, financial movements or activation of cloud services. This implies that a single compromised identity can cause damage multiplied by tens or hundreds of automated processes.

Security firm GreyNoise, which monitors digital infrastructures through honeypots, recorded more than 91,000 attack sessions targeting AI platforms between October 2025 and January 2026. These campaigns not only exploited known vulnerabilities in tools such as Twilio or the Ollama modeling platform; they also directed their efforts at the LLMs themselves.

OpenAI’s GPT-4o, Anthropic’s Claude, Meta’s Llama, and Google’s Gemini were systematically probed through queries designed to evade detection systems.

The attackers were not trying to directly damage the models; They sought to understand their reactions, build response profiles, and map their internal mechanisms.

The extracted information allows for more targeted, silent, and dangerous subsequent attacks.

In the words of academic Nicolas Papernot, a security and machine learning researcher at the University of Toronto: “AIs not only learn from data, they can also be manipulated by it.”

“The enemy is invisible, but real”

The EchoLeak vulnerability was not an isolated case. The increasing sophistication of campaigns targeting AI infrastructures reveals a clear trend: attackers are professionalizing their tools, and artificial intelligence is no longer just a potential victim, but also a weapon.

Moody’s, the risk rating agency, recently warned of the proliferation of “adaptive malware” and “autonomous attacks” during 2026.

AI’s ability to generate deepfakes, compose personalized phishing emails in seconds, or automate the identification of backdoors in complex systems, elevates the threat to a new level. The possibility of attacks without human intervention, executed by machines that learn from each failed attempt, draws a future of constant confrontation in a terrain that evolves in real time.

In addition, experts warn about the fragility of current security filters. Many platforms rely on moderation systems based on fixed rules or detection models trained on previous patterns. But the attackers have already learned to get around them. The “innocuous” queries recorded by GreyNoise, for example, were designed to avoid any keywords that would trigger protection mechanisms. A kind of passive recognition that precedes surgical attacks.

In this new scenario, cybersecurity must stop thinking in terms of “perimeter” or “authorized user”. In a world governed by AI, security cannot depend on who accesses, but on how systems behave once they gain access.

“When the traps activate themselves”

On a practical level, recent cases show how even the most protected architectures can fail against AI-led attacks. The campaign recorded during Christmas 2025 exploited legitimate model download functions and Twilio webhooks to infiltrate malicious code.

The speed of the attack was such that, in 48 hours, 1,688 sessions were recorded, many of them chained from different locations, using previously compromised machine identities.

Another case, which occurred on December 28, deployed a massive reconnaissance operation on 73 access points to AI models.

Using two IP addresses with documented backgrounds, the attackers sent more than 80,000 queries, gathering information without activating defense mechanisms.

The researchers say that the goal was to build a detailed map of the attack surface, from which to launch more sophisticated actions later.

The great irony is that many of these attacks did not require technical vulnerabilities per se.

It was enough to understand how the models worked, predict their responses, and exploit them in favor of the attacker. This new paradigm, known as prompt injection or injection between prompts, uses AI systems themselves as attack tools, manipulating them through carefully designed natural language.

In an environment where traditional defense tools rely on familiar signatures or repetitive patterns, AI’s adaptability becomes its biggest threat.

In conclusion, cybersecurity is experiencing an identity crisis. Today’s threats don’t just come from malicious human hackers, but from automated systems that learn, adapt, and overcome defenses designed for a world that no longer exists. As evidenced by the case of EchoLeak and the recent campaigns detected by GreyNoise, artificial intelligence has redefined the battlefield. Businesses, governments, and users need to rethink not only how they defend themselves, but what it really means to be safe in an environment where even machines can lie, persuade, and attack.

References:

  • Harvard Business Review. (2026). AI-Powered Threats Break Traditional Cybersecurity Models.
  • Aim Security. (2025). CVE-2025-32711: EchoLeak Disclosure Report.
  • GreyNoise Intelligence. (2026). AI Threat Campaign Logs – Q4 2025.
  • Moody’s Analytics. (2025). Cyber Risk Outlook 2026.
  • Schneier, Bruce. (2023). A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back.
  • Papernot, Nicolas et al. (2021). Security and Privacy of Machine Learning. Proceedings of the IEEE.