By: Gabriel E. Levy B.
A 2020 study by George Washington University in the United States found that the algorithms of Uber and other ride-hailing applications charge higher rates when the destination of a ride is in an economically depressed area or one inhabited by people with low incomes, especially neighborhoods with large African-American or Latino populations [1].
The researchers analyzed data from more than 100 million trips that took place in the city of Chicago between November 2018 and December 2019. Each trip record contained information such as the pick-up point and destination, duration, cost, whether it was a shared or individual trip, and the customer’s ethnicity.
“After analyzing transportation census data from the city of Chicago, it was found that ride-hailing companies charge a higher price per mile for a trip if the pick-up point or destination is a neighborhood with a higher proportion of ethnic minority residents than for those with predominantly white residents. Basically, if you’re going to a neighborhood where there’s a large African-American population, you’re going to pay a higher fare price for your ride,” said Aylin Caliskan, co-author of the study.
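For readers curious about the mechanics, the comparison the researchers describe boils down to grouping trips by the demographics of their destination and contrasting the average price per mile. The sketch below illustrates that idea on a handful of made-up records; the column names and values are illustrative assumptions, not the study’s actual data or code.

```python
# Minimal sketch (not the study's code) of comparing fare per mile by the
# demographic profile of the destination area. Toy data and column names are
# assumptions for illustration only.
import pandas as pd

trips = pd.DataFrame({
    "fare_usd":            [12.5, 9.0, 14.0, 7.5, 11.0, 8.0],
    "distance_miles":      [4.0, 3.0, 4.5, 2.0, 3.5, 2.5],
    "dest_minority_share": [0.15, 0.70, 0.10, 0.80, 0.65, 0.20],  # share of minority residents in the destination tract
})

# Price per mile for each trip
trips["price_per_mile"] = trips["fare_usd"] / trips["distance_miles"]

# Split destinations into two groups and compare the averages
trips["dest_group"] = trips["dest_minority_share"].apply(
    lambda share: "majority-minority" if share > 0.5 else "majority-white"
)
print(trips.groupby("dest_group")["price_per_mile"].mean())
```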
COMPAS is judicial risk-assessment software used by the justice system in at least ten U.S. states, where judges rely on it as an aid when issuing sentences. The algorithm is based on criminal statistics, and on several occasions social organizations and trial lawyers have denounced that, if the defendant is Latino or African-American, the software tends to classify the suspect as at “high risk of committing new crimes” [2].
An analysis of more than 10,000 defendants in the state of Florida, published in 2016 by the investigative journalism organization ProPublica, showed that African-Americans were often rated as highly likely to reoffend, while white defendants were considered less likely to commit new crimes [3].
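ProPublica’s published analysis centered on error rates: in particular, how often defendants who did not reoffend were nevertheless labeled high risk, compared across groups. The sketch below shows how such a false-positive-rate comparison can be computed; the toy records and field names are assumptions for illustration, not the real COMPAS dataset or ProPublica’s code.

```python
# Hedged sketch of a false-positive-rate comparison across groups.
# The records and field names below are illustrative assumptions.
import pandas as pd

defendants = pd.DataFrame({
    "race":       ["African-American", "Caucasian", "African-American",
                   "Caucasian", "African-American", "Caucasian"],
    "high_risk":  [1, 0, 1, 1, 0, 0],   # label assigned by the risk tool: 1 = high risk
    "reoffended": [0, 0, 1, 1, 0, 0],   # observed outcome after the assessment
})

def false_positive_rate(group: pd.DataFrame) -> float:
    """Share of non-reoffenders who were nevertheless labeled high risk."""
    non_reoffenders = group[group["reoffended"] == 0]
    if len(non_reoffenders) == 0:
        return float("nan")
    return non_reoffenders["high_risk"].mean()

# A large gap between groups is the kind of disparity ProPublica reported
print(defendants.groupby("race").apply(false_positive_rate))
```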
A study by American researchers Gideon Mann and Cathy O’Neil, published in Harvard Business Review, found that the programs and algorithms in charge of pre-screening resumes for companies in the United States are not free of human biases and prejudices, so the selection is not objective and many profiles can be discarded simply because of preconceived ideas held by whoever programmed the code [4].
Sexist Algorithms
Research conducted at Boston University in the United States used data from Google News to build an automated learning system, which, after months of training, was tested on simple analogy problems, for example: “men are to computer programmers as women are to x.” The automatic answer was “x = housewife” [5].
Similarly, the study identified that the algorithm associated female names such as Sarah with words related to family, such as parents and wedding. In contrast, male names such as John had stronger associations with words related to work, such as professional and salary [6].
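Analogy and association tests of this kind are typically run on word embeddings by doing vector arithmetic and measuring cosine similarity. The sketch below shows one way to reproduce such tests with the gensim library and its publicly downloadable “word2vec-google-news-300” vectors, which were trained on Google News text; it illustrates the technique rather than reproducing the Boston University study’s own code, and it assumes the phrase token “computer_programmer” and the given first names are present in the model’s vocabulary.

```python
# Hedged sketch of embedding analogy and association tests using gensim.
# Downloads the pretrained Google News vectors on first use (large file).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Analogy test: man : computer_programmer :: woman : ?
# (assumes "computer_programmer" is a token in this model's vocabulary)
print(vectors.most_similar(positive=["woman", "computer_programmer"],
                           negative=["man"], topn=3))

# Association test: how close is each first name to work vs. family words?
for name in ["John", "Sarah"]:
    salary = float(vectors.similarity(name, "salary"))
    wedding = float(vectors.similarity(name, "wedding"))
    print(f"{name}: similarity to 'salary' = {salary:.3f}, to 'wedding' = {wedding:.3f}")
```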
In 2014, Amazon introduced with great fanfare a new algorithm that would be responsible for recruiting talent for the company. The system collected information on Amazon job applicants over a 10-year period and was trained to detect patterns. However, a Reuters investigation found that the code was significantly biased toward male profiles and reproduced sexist patterns in many respects. Five members of the team who worked on the development of this tool told Reuters that the algorithm “taught itself that male candidates were a preference” [7].
Other research, conducted by the University of Sheffield, found that the algorithm of Bing, Microsoft’s search engine, associated images of shopping and kitchens with women, inferring most of the time that “if it’s in the kitchen, it’s a woman.” In contrast, it associated images of physical training with men [8].
In 2019, the New York State Department of Financial Services (USA) opened an investigation into the bank Goldman Sachs for possible gender discrimination in the credit limits it set for its credit cards.
The investigation began after Internet entrepreneur David Heinemeier Hansson tweeted that the Apple Card issued by Goldman Sachs had given him a credit limit 20 times higher than his wife’s, even though they filed joint tax returns and she had a better credit rating [9].
Algorithm Neutrality
The issue of algorithm neutrality has been widely discussed by academics over the last decade, and it is a topic we have analyzed extensively in previous articles [10].
The scandal involving Facebook and the British company Cambridge Analytica is a perfect example of a situation in which algorithms cease to be neutral, that is, when they come to benefit one interest, person or organization to the detriment of other people, interests or organizations [11]. This was documented in numerous political campaigns around the world, in which Cambridge Analytica’s algorithm sought to benefit one candidate to the detriment of others, using a mix of strategies that included exploiting voters’ information, especially their deepest fears and apprehensions, in order to expose them to mostly false content designed to induce their vote [11].
In other potentially harmful cases, algorithms have been shown to be used to favor commercial interests. Such is the case of Amazon, which uses the code of its own virtual store to display its products in the top positions, ahead of its competitors, or of Google, which always displays results related to its own products first: if someone types the word “email” into the search engine, “Gmail” comes up as the first option.
For this type of practice, Google was fined 5 billion dollars by the European Union, which ruled that the company “blocks innovation by abusing its dominant position“, among other things “through the configuration of its search engine algorithm” [12].
Greater Regulation Is Required in This Field
At the end of 2020, the United Nations, through one of its committees, issued a recommendation urging the 182 countries party to the Convention on the Elimination of All Forms of Racial Discrimination to pay special attention to the biases that artificial intelligence and big data may create [13].
However, it is not only the United Nations asking for this: for several years, many human rights organizations around the world have been demanding that governments pay more attention to artificial intelligence systems and create regulations that favor the neutrality of these codes.
In conclusion, several major academic studies in the United States have shown that the development of algorithms and artificial intelligence systems can contribute to widening social gaps, whether when applying for a job, requesting credit, or even in something as basic as getting around a city.
This is why it is necessary, on the one hand, to promote the transparency and neutrality of algorithms globally and, on the other, to regulate Machine Learning and Artificial Intelligence systems more closely, incorporating effective control mechanisms to prevent this type of development from violating human rights and, especially, to ensure that the codes do not assume sexist, aporophobic, racist or xenophobic positions.
Photo by Portuguese Gravity on Unsplash.com
[1] Study published by George Washington University
[2] BBC article on the impact of algorithms on the COMPAS case
[3] BBC article on the impact of algorithms on the COMPAS case
[4] Research published in Harvard Business Review
[5] Study conducted by Boston University in the United States
[6] Study conducted by Boston University in the United States
[7] Reuters research on Amazon’s algorithm discrimination
[8] Study published by the University of Sheffield
[9] Article published by MIT on investigation against Goldman Sachs
[10] Andinalink article: Algorithm Neutrality
[11] BBC article on the Cambridge Analytica scandal
[12] BBC article on the European Union’s sanction against Google
[13] United Nations Recommendation on Algorithm Biases
Disclaimer: The published articles correspond to contextual reviews or analyses on digital transformation in the information society, duly supported by reliable and verified academic and/or journalistic sources. The publications are NOT opinion articles and therefore the information they contain does not necessarily represent Andinalink’s position, nor that of their authors or the entities with which they are formally linked, regarding the topics, persons, entities or organizations mentioned in the text.