By: Gabriel E. Levy B.
A 2020 study by George Washington University in the United States found that the algorithms of Uber and other ride-hailing applications apply higher fares when the destination of a ride is an economically depressed area, especially neighborhoods inhabited predominantly by African-American or Latino residents.
The researchers analyzed data from more than 100 million trips taken in the city of Chicago between November 2018 and December 2019. Each trip record included information such as the pick-up point and destination, the duration, the cost, whether the trip was shared or individual, and the customer’s ethnicity.
“After analyzing transportation census data from the city of Chicago, it was found that ride-hailing companies charge a higher price per mile for a trip if the pick-up point or destination is a neighborhood with a higher proportion of ethnic-minority residents than for those with predominantly white residents. Basically, if you’re going to a neighborhood where there’s a large African-American population, you’re going to pay a higher fare for your ride,” said Aylin Caliskan, co-author of the study.
COMPAS is a criminal risk-assessment software tool used by the justice system in at least ten U.S. states, where judges rely on it as an aid in sentencing. The algorithm is based on criminal statistics, and on several occasions social organizations and trial lawyers have denounced that, if the offender is Latino or African-American, the software tends to classify the suspect as at “high risk of committing new crimes.”
An analysis of more than 10,000 defendants in the state of Florida, published in 2016 by the investigative journalism organization ProPublica, showed that African-Americans were often rated as highly likely to reoffend, while whites were considered less likely to commit new crimes.
A study by American researchers Gideon Mann and Cathy O’Neil, published in Harvard Business Review, found that the programs and algorithms in charge of pre-screening résumés for companies and corporations in the United States are not free of human biases and prejudices. The selection is therefore not objective, and many profiles may be discarded simply because of the preconceived ideas of whoever programmed the code.
In research conducted at Boston University in the United States, the team used Google News text to train an automated learning system, which after months of training was tested on simple analogy problems such as: “man is to computer programmer as woman is to x.” The automatic answer was “x = housewife.”
Similarly, the study found that the algorithm associated female names such as Sarah with words related to the family, such as parents and wedding. In contrast, male names such as John had stronger associations with words related to work, such as professional and salary.
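The associations described above come from the standard vector-offset analogy test used on word embeddings: each word is represented as a vector of numbers, and the system solves “a is to b as c is to ?” by arithmetic on those vectors. The following is an illustrative sketch only, with invented three-dimensional toy vectors rather than the study’s actual model (real systems like word2vec learn 300-dimensional vectors from billions of words of news text):

```python
import numpy as np

# Hypothetical toy embeddings (invented for illustration): dimension 0
# loosely encodes "gender"; dimensions 1-2 encode occupational/domestic
# semantics absorbed from training text.
vocab = {
    "man":        np.array([ 1.0, 0.0, 0.0]),
    "woman":      np.array([-1.0, 0.0, 0.0]),
    "programmer": np.array([ 1.0, 0.9, 0.1]),
    "homemaker":  np.array([-1.0, 0.1, 0.9]),
    "doctor":     np.array([ 0.9, 0.8, 0.2]),
}

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' by finding the nearest vector
    (by cosine similarity) to b - a + c, excluding the query words."""
    target = vocab[b] - vocab[a] + vocab[c]
    best, best_sim = None, -2.0
    for word, vec in vocab.items():
        if word in (a, b, c):
            continue  # never return one of the query words itself
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "programmer", "woman"))  # → "homemaker"
```

The point of the sketch is that the biased answer is not programmed in anywhere: it falls out of plain vector arithmetic over associations the model absorbed from its training text, which is exactly why biased data yields biased outputs.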
In 2014, Amazon introduced with great fanfare a new algorithm that would be responsible for recruiting talent for the company. The system collected information on Amazon job applicants over a 10-year period and was trained to detect patterns. However, a Reuters investigation found that the code was significantly biased toward male profiles and reproduced sexist patterns of behavior in many respects. Five members of the team that developed the tool told Reuters that the algorithm “taught itself that male candidates were a preference.”
Other research, conducted by the University of Sheffield, found that the algorithm of Bing, Microsoft’s search engine, associated images of shopping and kitchens with women; most of the time it inferred that “if it’s in the kitchen, it’s a woman.” In contrast, it associated images of physical training with men.
In 2019, the New York State Department of Financial Services (USA) opened an investigation into Goldman Sachs for possible gender discrimination in the credit limits of the cards it issues.
The investigation began after Internet entrepreneur David Heinemeier Hansson tweeted that the Apple Card, issued by Goldman Sachs, had given him a credit limit 20 times higher than his wife’s, even though they filed joint tax returns and she had a better credit rating.
The issue of algorithmic neutrality has been widely discussed by academics over the last decade, and it is a topic we have analyzed extensively in previous articles.
The scandal involving Facebook and the British company Cambridge Analytica is a perfect example of algorithms ceasing to be neutral, that is, of being made to benefit one interest, person or organization to the detriment of others. This was documented in many political campaigns around the world, where the Cambridge Analytica algorithm sought to benefit one candidate over others, using a mix of strategies that exploited voters’ information, especially their deepest fears and apprehensions, in order to expose them to mostly false content designed to sway their vote.
In other potentially harmful cases, algorithms have been shown to favor commercial interests. Such is the case of Amazon, which uses the code of its own virtual store to display its products first, ahead of its competitors, or that of Google, which always displays searches related to its own products at the top: if someone types the word “email” into the search engine, “Gmail” comes up as the first option.
For this type of practice, Google was fined 5 billion dollars by the European Union, which ruled that the company “blocks innovation by abusing its dominant position,” among other things “through the configuration of its search engine algorithm.”
Greater regulation is required in this field
At the end of 2020, the United Nations, through one of its committees, issued a recommendation urging the 182 countries party to the Convention on the Elimination of All Forms of Racial Discrimination to pay special attention to the biases that artificial intelligence and big data may create.
However, it is not only the United Nations that is asking for this; for several years now, many human rights organizations around the world have been urging governments to pay more attention to artificial intelligence systems and to create regulations that favor the neutrality of these codes.
In conclusion, major academic studies in the United States have shown that algorithms and artificial intelligence systems can widen social gaps, whether in applying for a job, applying for credit, or even in something as basic as getting around a city.
This is why it is necessary, on the one hand, to globally promote the transparency and neutrality of algorithms and, on the other, greater regulation of machine learning and artificial intelligence systems, through the incorporation of effective control mechanisms that prevent these developments from violating human rights and, above all, ensure that the codes do not adopt sexist, aporophobic, racist or xenophobic positions.
Photo by Portuguese Gravity on Unsplash.com
Disclaimer: The published articles correspond to contextual reviews or analyses on digital transformation in the information society, duly supported by reliable and verified academic and/or journalistic sources. The publications are NOT opinion articles and therefore the information they contain does not necessarily represent Andinalink’s position, nor that of their authors or the entities with which they are formally linked, regarding the topics, persons, entities or organizations mentioned in the text.