Neutrality and Transparency of Algorithms

The creator of the controversial documentary The Social Dilemma, Jeff Orlowski, recently asserted that “social media is ending democracy” [1]. This claim coincides with the position of Facebook co-founder Chris Hughes, who argues that the company should be dissolved because of the “uncontrollable power it holds” [2], at the same time that Donald Trump tries to ban TikTok and WeChat in the United States.

For its part, as we analyzed in our previous article, the United States Federal Trade Commission (FTC) is preparing a possible antitrust lawsuit against Facebook [3], while the British Competition & Markets Authority claims that Facebook and Google constrain competition [4], and the Australian authorities are engaged in an unprecedented legal and media dispute with these two tech giants [5].

Is Algorithm Neutrality the solution to the conflict with social media?

By: Gabriel E. Levy B. –

In simple terms, an algorithm is a sequence of instructions given to software or a computerized system to perform a specific task, simulating autonomous behavior. After just a few decades of the reign of algorithms, these routines have become so varied and versatile that they directly impact our lives.
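To make the definition concrete, here is a minimal illustrative sketch (the function and data are invented for this article): an algorithm is nothing more than a fixed sequence of instructions that turns an input into a result.

```python
# Hypothetical illustration: an "algorithm" is just a sequence of
# instructions applied step by step to an input.
def moving_average(values, window=3):
    """Smooth a series of numbers by applying one fixed rule repeatedly."""
    result = []
    for i in range(len(values) - window + 1):
        chunk = values[i:i + window]        # instruction 1: take a slice
        result.append(sum(chunk) / window)  # instruction 2: apply the rule
    return result

print(moving_average([10, 20, 30, 40]))  # → [20.0, 30.0]
```

The same structure, a loop over data applying fixed rules, underlies far more complex routines, from advertising targeting to content recommendation.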

Algorithms, especially those linked to database systems and personal information, are responsible for building many of the profiles that evaluate cross-cutting aspects of contemporary society: our job performance, our economic situation, the advertising we receive, our health, the places we visit and the content we consume; in short, data that summarizes our lives.

Although banks, insurance companies, governments and firms whose business is based on the handling of personal information have been implementing algorithms for years, they are not the main operators of these technologies. It is the big internet companies, popularly known as the “dot-coms”, that have been massively collecting our information and feeding it into multiple types of computer code, to the point that it can be said, almost with certainty, that they know us better than we know ourselves.

Companies such as Facebook, Alphabet Inc. (owner of Google), Apple, Amazon, Netflix, eBay and Tripadvisor, among many others, have access to aspects of our lives that even friends and family ignore. They know what type of content we consume, our political ideology, who our friends and family are, what places we visit, what food we like, what movies we watch, what type of photos we share, whom we meet, what our opinion is on sensitive issues, how often we connect to the internet, what devices we own and how we use them. Naturally, this information ends up influencing the type of advertising we receive and the content offered to us on our devices, among other variables.

Our information acts as a currency of exchange, since free services such as email and social networks are paid for with the use of our data for advertising purposes. However, recent scandals have demonstrated that it is not only used for advertising, but also for other purposes that have probably exceeded legal limits, as happened months ago with the Facebook and Cambridge Analytica scandal, a subject we discussed in the article The privacy crisis in the digital age [6].

Algorithms obtain information from multiple sources, most of them related to our behavior on the internet. Based on our browsing, viewing, consumption and purchase habits, they build a profile of us that is then used to make offers and attempt to manipulate our behavior.
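The profiling logic described above can be sketched in a few lines. This is a deliberately simplified illustration; the event categories, field names and the idea of targeting ads at the dominant interest are all invented for this example, while real systems combine far richer signals.

```python
from collections import Counter

# Hypothetical sketch of behavioral profiling: each event is a pair
# (action, category); the profile is just a tally of categories.
def build_profile(events):
    """Count what a user looks at and derive their dominant interest."""
    interests = Counter(category for _action, category in events)
    top_interest, _count = interests.most_common(1)[0]
    return {"interests": dict(interests), "targeted_ad": top_interest}

history = [("search", "travel"), ("video", "travel"),
           ("purchase", "electronics"), ("search", "travel")]
print(build_profile(history))
# → {'interests': {'travel': 3, 'electronics': 1}, 'targeted_ad': 'travel'}
```

Even this toy version shows the mechanism: the user never declares an interest in travel, yet their behavior reveals it, and the offers follow.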

Algorithm Neutrality

The topic of Algorithm Neutrality has been widely discussed by academics over the last decade. It was precisely the scandal involving Facebook and the British firm Cambridge Analytica that set off the alarms, revealing that our information could be used to favor private, political and economic interests without our consent, breaking privacy policies and confidentiality agreements. This is a perfect example of algorithms ceasing to be neutral, that is, starting to benefit an interest, person or organization to the detriment of other people, interests or organizations.

This was documented in political campaigns all over the world, where the Cambridge Analytica algorithm sought to benefit one candidate to the detriment of others, using a mix of strategies that included the exploitation of voters’ information, especially their deepest fears, to expose them to content, mostly false, designed to sway their vote [7].

In other potentially harmful cases, algorithms have been shown to serve commercial interests. Such is the case of Amazon, whose code displays its own products at the top of its virtual store, above those of its competitors, or the case of Google, whose search engine ranks results related to its own products first: if someone types the word “email” into the search box, “Gmail” appears as the first option. For practices of this kind, Google received a 5-billion-dollar fine from the European Union, which considered in its ruling that the company “blocks innovation by abusing its dominant position”, among other things “through the algorithm settings of its search engine” [8].
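This kind of self-preferencing is easy to express in code. The sketch below is purely illustrative (the product names, scores and boost constant are invented, not taken from any real platform): a single added term in the scoring function is enough to push the platform's own product above a better-matched competitor.

```python
# Hypothetical sketch of a non-neutral ranking: a fixed boost for the
# platform's own products overrides pure relevance.
OWN_BRAND_BOOST = 0.5  # invented constant for illustration

def rank(results, boost_own=True):
    def score(item):
        s = item["relevance"]
        if boost_own and item["own_brand"]:
            s += OWN_BRAND_BOOST  # the non-neutral step
        return s
    return sorted(results, key=score, reverse=True)

catalog = [
    {"name": "CompetitorMail", "relevance": 0.9, "own_brand": False},
    {"name": "PlatformMail",   "relevance": 0.6, "own_brand": True},
]
print([r["name"] for r in rank(catalog)])                   # own brand first
print([r["name"] for r in rank(catalog, boost_own=False)])  # neutral order
```

The point of the sketch is that non-neutrality need not be visible to the user: both orderings look like ordinary search results, and only access to the scoring code reveals the difference.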

Effective discrimination

But this neutrality is not only broken when an algorithm seeks to benefit one cause to the detriment of another or favors a commercial brand; it is also broken when the algorithm discriminates in any way. Such is the case of COMPAS, American-made software for judicial processes used by the justice system in at least ten states, where judges rely on it as an aid when issuing sentences. The algorithm is based on criminal statistics, and on several occasions social organizations and trial lawyers have denounced that, if the accused is Latino or Black, the software tends to classify them as at “high risk of committing new crimes” [9]. An analysis of more than 10,000 defendants in the state of Florida, published in 2016 by the ProPublica research group, showed that Black people were often rated as highly likely to reoffend, while white people were considered less likely to commit new crimes [10].
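The kind of disparity ProPublica measured can be illustrated with a simple metric: the false positive rate, the share of people who did not reoffend but were nevertheless labeled high risk. The records below are synthetic data constructed for this illustration (they are not the ProPublica dataset), though the resulting rates are deliberately close to the roughly 45% versus 23% disparity the group reported.

```python
# Hypothetical sketch: measuring disparate error rates between two groups.
def false_positive_rate(records):
    """Share of non-reoffenders who were nevertheless labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Synthetic records, invented for illustration: every person here did NOT
# reoffend; the only difference is how often each group was flagged.
group_a = [{"high_risk": True,  "reoffended": False}] * 45 + \
          [{"high_risk": False, "reoffended": False}] * 55
group_b = [{"high_risk": True,  "reoffended": False}] * 23 + \
          [{"high_risk": False, "reoffended": False}] * 77

print(false_positive_rate(group_a))  # 0.45 — flagged far more often
print(false_positive_rate(group_b))  # 0.23 — despite identical outcomes
```

A score can look statistically "accurate" overall and still distribute its errors unequally; that asymmetry is precisely what the discrimination complaints against COMPAS describe.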

Algorithm transparency

After President Donald Trump launched his campaign against Chinese companies, and very particularly against TikTok, a leading social network among children and adolescents, the company's CEO, Kevin Mayer, announced that the company would reveal the algorithms that guide its content moderation policies, for which he announced the launch of the Transparency and Accountability Center for moderation and data usage:

“We believe that all companies should reveal their algorithms, moderation policies and data flows to regulators. We won’t wait for regulation to come; instead TikTok takes the first step by launching the Transparency and Accountability Center for data moderation and practices. Experts can observe our moderation policies in real time, as well as examine the code that guides our algorithms. This puts us one step ahead of the industry and we encourage others to follow suit”, said Kevin Mayer, CEO of TikTok [11].

Following the statements by TikTok's CEO, Jack Dorsey, CEO of Twitter, announced that the blue-bird social network is evaluating a similar mechanism to guarantee greater transparency:

“We need to be open and transparent about how our algorithm works and how it is used. We may even allow people to choose or create their own algorithms to rank content. To be that open, I think, would be quite incredible”, said Dorsey during an interview on “The Daily”, a podcast hosted by Michael Barbaro at The New York Times [12].

In conclusion, the overwhelming growth of social media in recent years has become one of the biggest headaches for Western democracies, as well as an unquantifiable risk factor for favoring particular interests to the detriment of free competition, plurality and equitable access. Its development can promote, intentionally or accidentally, effective and prejudicial discrimination, which can severely violate the rights of specific social groups and undermine a fair social order.

For all of the above, governments and regulatory bodies around the world must treat Algorithm Neutrality as an urgent and priority issue on their agendas, promoting regulatory action and designing policies that define clear rules of the game for these software codes. The goal is to protect innovation in the sector, often blocked by the dominant position of some companies, while ensuring full respect for citizens’ civil rights, the market and healthy competition. Large technology companies should follow the example of TikTok and Twitter and make their code transparent, allowing users and experts to understand clearly how social media works.

The above could guarantee a healthy coexistence between networks, users, private companies, the State and citizens.

[1] Semana magazine interview with the creator of the documentary The Social Dilemma

[2] La República newspaper analysis of the opinion article by Chris Hughes in The New York Times.

[3] Andinalink article: Facebook in the sights of the authorities.

[4] Andinalink article about the British report against Facebook and Google

[5] El País article from Spain about the regulatory conflict in Australia

[6] Andinalink article: The privacy crisis in the Internet age

[7] BBC article about Cambridge Analytica scandal

[8] BBC article about European Union sanctions against Google

[9] BBC article about the impact of algorithms in the COMPAS case

[10] BBC article about the impact of algorithms in the COMPAS case

[11] Observacom analysis of statements made by the CEO of TikTok

[12] Observacom analysis of statements made by the CEO of Twitter

Disclaimer: This article corresponds to a review and analysis in the context of digital transformation in the information society, and is duly supported by reliable and verified academic and/or journalistic sources. This is NOT an opinion article, and therefore the information it contains does not necessarily represent the position of Andinalink, its authors, or the entities with which they are formally linked regarding the issues, persons, entities or organizations mentioned in the text.