Disinformation as a Political Weapon

By: Gabriel E. Levy B.

An analysis published by MIT Technology Review, the specialized publication of the Massachusetts Institute of Technology[1], found that as a consequence of the growth of deepfakes and the improvement of the technologies behind them, the biggest risk we will face in the next decade will not be the fake news itself, but “blaming deepfakes for making the real look fake”.

Are We Entering the Age of Disinformation?

The elections of Bolsonaro in Brazil, Macri in Argentina, and Trump in the United States, the Brexit referendum in the United Kingdom, and the electoral processes in the Caribbean islands of Grenada and Barbados have one element in common: all were manipulated by the British company Cambridge Analytica, which swayed the votes of millions of undecided people using fake news, tilting the electoral balance in favor of its clients [2].

The Cambridge Analytica scandal was the first incident to reveal the danger of manipulating information through sophisticated technology, so-called social engineering, and the use of social media users’ private data, exposing a latent risk for contemporary democracies. However, that episode was the genesis of a much worse phenomenon: the consolidation of deepfakes, a type of fake news that MIT Technology Review has defined as fakes that, using Artificial Intelligence (AI) and the most advanced technology, achieve an unprecedented level of realism, even in videos and highly complex pieces. They leave the common user unable to distinguish between original and manipulated information, and thereby pose a far greater risk.

The Relativization of Truth

According to experts cited by MIT Technology Review, in the run-up to the 2020 U.S. presidential election, deepfakes are becoming increasingly convincing and difficult for ordinary citizens to distinguish, generating fears about how this false content could influence public opinion and political positions and alter the course of the election, as happened in past contests. However, a report by Deeptrace Labs[3], a cybersecurity company specializing in deepfake detection, found no examples of deepfakes being used in disinformation campaigns so far in the current election cycle. Even so, ordinary people can no longer tell the difference, and they have grown suspicious of all information, regardless of whether it is true.

“Deepfakes do pose a risk to politics in terms of fake media appearing to be real, but right now the more tangible threat is how the idea of deepfakes can be invoked to make the real appear fake. The hype and rather sensational coverage speculating on deepfakes’ political impact has overshadowed the real cases where deepfakes have had an impact”.

Henry Ajder, Deeptrace Labs expert cited by MIT Technology Review

The Relativization of Evidence

Suppose a politician is caught receiving a bribe, and the main piece of evidence is a covertly recorded video with some technical flaws, in which the person can be identified and his voice heard. When the scandal breaks, the politician could claim that the video is a deepfake and that everything was a manipulation designed to destroy his reputation. That seed of doubt would be exploited by his followers to spread confusion on social media, and in the end nobody could know for sure whether the video is real.

The same could happen with any type of information or news, generating an unprecedented paradox: at the historical moment when we will have reached the greatest possible access to information, we will be experiencing the highest functional level of disinformation. Instead of an informed society, we will have a society that is confused and probably paranoid about the veracity of facts, triggering a relativization of objectivity that could erode the foundations of society itself.

At the beginning of this century, the distinguished scholar Ignacio Ramonet stated: “When we have reached the optimum of information, we will have reached the maximum of disinformation, due to saturation” [4]. It was a vision far ahead of its time, anticipating many current voices, especially from civil organizations and disinformation experts, who have expressed concern that most of the efforts made by regulators, governments, and technology companies have focused on evaluating “the ease by which technology can make fake things appear real” while ignoring the second problem raised in the report by Henry Ajder: “Although the limits to create deepfakes are rapidly disappearing, questioning the veracity of something does not require any technology at all”.

“It is another weapon for the powerful: to respond with ‘It is a deepfake’ to anything that people outside of power try to use to expose corruption”.

Differentiating between reality and fiction has always been difficult for citizens without advanced education, but the coming years will undoubtedly bring unprecedented confusion, one that goes beyond traditional ambiguity and affects not only the ordinary Internet user but also trained professionals. Even complex disciplines such as journalism will have to incorporate computer skills to ensure objectivity. The challenge facing governments and telecommunications companies will therefore be monumental.

A Joint Effort of Several Sectors

Governments, regulators, civil society organizations, research groups, and technology companies such as Google have been developing tools, software, and technology to detect deepfakes, including artificial intelligence systems. Likewise, solutions such as apps have emerged that allow any user to check whether a piece of information is true or false, while building comparative databases to analyze the versions of content circulating on the Internet.

Education as a Containment Mechanism

Although technology will probably give rise to many strategies for fighting deepfakes, the best tool will undoubtedly be education: training citizens and Internet users in the skills needed to rationally assess the veracity of information, and teaching them to use tools to efficiently cross-check and verify the content available in the cloud and in the media.

In conclusion, deepfakes will undoubtedly be one of the greatest problems and threats the contemporary world will have to face in the next decade. The danger lies not only in the large amount of fake content that will emerge in the coming years, but in the way such content will lead us to doubt every kind of information, mistrusting entirely the data accessible on the web and in the media. This would call into question any evidence that threatens the interests of the most powerful, making deepfakes an unprecedented potential threat to global democracy. Confronting it will require a joint effort by governments, civil organizations, technology companies, and experts to promote effective new regulations, develop technology to identify fraud and, above all, advance education in the responsible consumption of digital content, fostering a critical citizenry equipped with concepts, elements, and objective criteria for observing and consuming digital content.

[1] MIT Technology Review analysis on the risk of deepfakes

[2] Keys to understanding the Cambridge Analytica scandal

[3] Deeptrace Labs report on the 2020 U.S. elections

[4] Interview with Mario Kaplún on the reflections of Ignacio Ramonet