Sora: the social network that shakes reality

A familiar face smiles at the camera, improvises a political speech, cries while talking about his childhood. It looks like a sincere confession. But none of it is real: not the video, not the tears, not the speech. It’s Sora. And millions of people have already believed it.

When the video ceased to be evidence

By: Gabriel E. Levy B.

Until a few years ago, the moving image was the ultimate proof that something had happened. “If it’s on video, it’s true,” people used to say.

But that certainty began to crumble with the first deepfakes, those rudimentary montages that showed swapped faces, cloned voices and clumsy movements. They seemed like a digital magic trick, limited to marginal or experimental environments.

However, as Hany Farid, a professor at the University of California, Berkeley and a pioneer in digital manipulation detection, explains, the evolution of artificial intelligence transformed those laboratory experiments into sophisticated visual production tools.

Deepfakes stopped being a trick and became an invisible art.

With Sora, that technology was democratized. Now, any user with access to the social network can create a hyper-realistic scene, from a fictional protest to a private conversation that never existed.

Researcher Britt Paris, a specialist in digital disinformation, warned as early as 2020 that the threat of deepfakes was not only technical but epistemological: “if any image can be manipulated, then all visual testimony becomes suspect.” Sora didn’t invent this problem, but it made it viral.

The Empire of the Plausible

One sentence is enough. A short text. A reference image.

And Sora generates a video. In seconds. The result is precise enough to deceive not only the human eye but also the algorithms trained to detect fakes. The phenomenon, widely documented in recent weeks by outlets such as Fast Company and The New York Times, has alarmed experts in cybersecurity, justice, politics and communication.
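To make concrete how low the barrier has fallen, the sketch below shows what prompt-to-video generation looks like through an OpenAI-style Python SDK. It is a minimal illustration, not a tutorial: the model name, method names and polling flow are assumptions based on the SDK’s general conventions, and the real interface may differ.

```python
# Hypothetical sketch of prompt-to-video generation through an OpenAI-style
# Python SDK. The model name, method names and response fields below are
# assumptions for illustration; consult current documentation before use.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One sentence is the entire "production" step.
job = client.videos.create(          # assumed endpoint name
    model="sora-2",                  # assumed model identifier
    prompt="A crowded street protest at dusk, handheld news-camera style",
)

# Generation is asynchronous: poll until the job leaves the queue.
while job.status in ("queued", "in_progress"):
    time.sleep(5)
    job = client.videos.retrieve(job.id)

if job.status == "completed":
    content = client.videos.download_content(job.id)  # assumed helper
    with open("generated.mp4", "wb") as f:
        f.write(content.read())
```

The exact signatures matter less than the shape of the workflow: a sentence goes in, a photorealistic clip comes out, with no camera, actors or editing in between.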

The “cameos” feature, for example, allows any user to be inserted into an AI-generated scene with just a facial check.

The problem is that, in practice, this verification was circumvented within hours of launch. Groups specializing in impersonation managed to insert the faces of celebrities into fictitious situations, bypassing the authentication barriers implemented by OpenAI.

One specific case involved actor Bryan Cranston, who publicly denounced the unauthorized use of his image in videos fabricated for promotional purposes, forcing the company to review its filters.

Beyond celebrities, the risk falls on the millions of anonymous users who have no legal team or media apparatus with which to defend their identity.

Sora turns reality into malleable material, where the line between true and false becomes blurred, manipulable and, most disturbingly, attractive.

Truth as a game of mirrors

Sora’s danger lies not only in the creation of false content, but in the systematic erosion of trust.

The so-called “liar’s dividend” accurately describes this new scenario: if everything can be fabricated, then even the true can be dismissed as fiction.

This notion, developed by researchers Chesney and Citron in their paper “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” reveals a disturbing paradox: technology that allows you to see more also allows you to doubt everything.

The videos generated with Sora circulate unchecked across social networks: they are edited, recontextualized, turned into memes, into evidence of nonexistent crimes, into hate campaigns or into tools of political manipulation.

During the recent municipal elections in the United States, at least twenty cases were detected in which deepfakes created with Sora were used to simulate false statements by candidates, according to a report by the Federal Election Commission.

Although platforms tried to block this content, many went viral before being removed.

Most worrying of all, the recommendation algorithms of social networks promote this type of video precisely because it holds viewers’ attention.

They are visually striking, emotionally engaging, easy to consume, and difficult to verify. In this context, the lie does not need to be perfect, it just needs to be more convincing than the truth.
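A toy sketch can make that dynamic concrete. The model below is invented for illustration (no platform publishes its ranking function): it scores clips purely on predicted watch time and shares, so a gripping fabrication outranks a dull truth even when an accuracy signal exists.

```python
# Toy sketch of retention-weighted feed ranking. All weights and fields are
# invented for illustration; no real platform's algorithm is implied.
from dataclasses import dataclass

@dataclass
class Clip:
    title: str
    predicted_watch_seconds: float  # how long viewers are expected to stay
    share_rate: float               # shares per impression
    verified_accurate: bool         # costly signal, often unknown at rank time

def feed_score(clip: Clip) -> float:
    # Engagement dominates; veracity does not enter the score at all,
    # because it is expensive to compute and rarely available when the
    # clip goes live. That omission is the whole point of the sketch.
    return 0.7 * clip.predicted_watch_seconds + 100 * clip.share_rate

clips = [
    Clip("Real council meeting (dull)", 8.0, 0.001, True),
    Clip("Synthetic 'confession' (gripping)", 45.0, 0.03, False),
]

# The fabricated clip ranks first despite being false.
for clip in sorted(clips, key=feed_score, reverse=True):
    print(f"{feed_score(clip):6.1f}  {clip.title}")
```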

A future without reliable witnesses

Sora’s legal and ethical implications are only beginning to be discussed. OpenAI says it has incorporated invisible watermarks, embedded provenance metadata and restrictions intended to prevent the platform from being used for political, sexual or violent purposes.
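The provenance metadata OpenAI refers to reportedly follows the C2PA standard (the “Content Credentials” used across the industry). As a hedged sketch of what verification looks like, the open-source c2patool command-line utility can dump whatever manifest a file carries; “clip.mp4” is a placeholder, and the JSON field names reflect c2patool’s output at the time of writing and may change.

```python
# Sketch: inspect a video's C2PA (Content Credentials) provenance manifest
# using the open-source c2patool CLI. "clip.mp4" is a placeholder file name;
# the JSON structure ("active_manifest", "manifests", "claim_generator")
# matches c2patool's output at the time of writing and may change.
import json
import subprocess

result = subprocess.run(
    ["c2patool", "clip.mp4"],  # prints the manifest store as JSON
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    # No manifest is NOT proof of fabrication: re-encoding strips metadata.
    print("No readable C2PA manifest: provenance unknown.")
else:
    manifest = json.loads(result.stdout)
    active = manifest.get("active_manifest", "")
    claim = manifest.get("manifests", {}).get(active, {})
    print("Claim generator:", claim.get("claim_generator", "unknown"))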

However, multiple media reports agree that these measures are easily circumvented or, at best, insufficient.

Regulation, as so often happens in technology, chases after the vertigo of innovation.

In the United States and Europe, legislators are debating bills that would require clear labels on synthetic content, force platforms to identify AI-generated videos, or sanction those who use these tools to impersonate others.

But these proposals are far from becoming effective laws, and in the meantime, the proliferation of false content does not stop.

Cases such as that of Martin Luther King Jr., whose image was used in an apocryphal video to promote ideas contrary to his legacy, illustrate the scope of the problem.

Although OpenAI removed the content and offered a public apology, the symbolic damage was already done.

Platforms don’t just reproduce images: they reproduce meanings, emotions, decisions. And when those images are fake, so are the consequences they trigger.

In Latin America, where institutional oversight is weaker and information polarization more acute, Sora could be a dangerous tool in the hands of political actors or extremist groups.

In Brazil, for example, fake videos were detected showing indigenous leaders supporting government proposals that they actually rejected.

These clips circulated widely on WhatsApp and Telegram, influencing key legislative debates.

In conclusion, Sora represents a new stage in the relationship between image, technology and truth. It’s not just a tool for creating content.

It is a device that reconfigures the very foundations of digital communication. By democratizing the production of deepfakes, it transforms suspicion into the norm, montage into spectacle and doubt into strategy.

Faced with this scenario, society needs more than just regulations. It needs a new visual literacy, an ethic of digital consumption and, above all, a renewed commitment to the truth.

References

Chesney, R., & Citron, D. K. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107(6), 1753–1820.

Farid, H. (2021). Digital forensics in a post-truth world. University of California Press.

Paris, B. (2020). Deepfakes and the epistemic crisis of the visual. Media and Communication, 8(3), 13–24.