Racism in technology – when artificial intelligence discriminates

Many decisions in our everyday lives are already made by artificial intelligence (AI), whether we realize it or not. But AI is far from perfect, and its errors are not just minor inconveniences: they can lead to racism and other forms of discrimination and bring serious problems with them.

***Trigger warning: This article discusses racist discrimination through artificial intelligence and illustrates it with examples.***

The question of what artificial intelligence (AI) means is not so easy to answer. In the search for an explanation, deeper questions arise: What does intelligence mean anyway? How do we define intelligence in relation to humans, the environment, and technology? Even though AI makes no claim to actual intelligence, the concept is difficult to grasp. One possible definition is that AI consists of computers taking over activities that humans are currently better at. Among other things, this includes the ability to learn.

One can distinguish between different approaches to AI as well as mixed forms. To understand racist AI, however, the connectionist approach is especially central. It focuses on programming artificial neural networks, whose distinguishing feature is their ability to learn, referred to as "machine learning".

Fundamental to the learning process of neural networks is pattern recognition. The data with which the AI is trained already contains the desired results. From this, the AI learns patterns in the features and how they correlate with certain outcomes. In deployment, the AI looks for similar patterns and outputs results that reflect the learned correlations.
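
To make this concrete, here is a minimal sketch of this kind of supervised pattern learning in Python; the library choice (scikit-learn), the feature values, and the labels are all invented for illustration:

```python
# Minimal supervised-learning sketch: the training data already contains
# the desired results (labels), and the model learns the correlations.
from sklearn.linear_model import LogisticRegression

X_train = [[0.2, 0.9], [0.8, 0.1], [0.3, 0.8], [0.9, 0.2]]  # toy features
y_train = [1, 0, 1, 0]  # the desired results included in the training data

model = LogisticRegression()
model.fit(X_train, y_train)  # the model extracts patterns from the data

# In deployment, the model applies the learned correlations to new inputs.
print(model.predict([[0.25, 0.85]]))  # -> [1], echoing the learned pattern
```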

Technically, this is made possible by the connections between neurons. Because these connections carry different weights, neurons activate one another to different degrees, creating activation patterns that reflect learned relationships. Neural networks adjust these weights on their own and learn in this way, which brings great advantages as well as numerous possible applications. Pattern recognition is not entirely positive, however; it also brings problems such as bias.
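
As a rough sketch of what one such weighted connection looks like computationally (all values invented for illustration):

```python
import numpy as np

# A single artificial neuron: inputs are combined via learned weights and a
# bias, and the sigmoid activation expresses how strongly the neuron fires.
def neuron(inputs, weights, bias):
    return 1 / (1 + np.exp(-(np.dot(inputs, weights) + bias)))

x = np.array([0.5, 0.1, 0.9])   # hypothetical input features
w = np.array([1.2, -0.7, 0.4])  # weights, adjusted during learning
print(neuron(x, w, bias=0.1))   # activation strength for this input pattern
```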

Prejudice and racism from AI

Bias in AI means that the patterns an AI recognizes and applies in its decisions can be discriminatory. This can happen because patterns are learned incorrectly from insufficient data, or because social biases contained in the training data are reproduced by the AI. As a result, marginalized groups are put at a disadvantage in decisions made with the help of AI. This is especially clear in the case of racism. Numerous examples show how racism is perpetuated or even reinforced by AI and what consequences this has for those affected.

For example, search engines have returned predominantly pornographic and stereotypical results for searches relating to black women. Text-generating algorithms likewise produce racist statements and stereotypical role assignments, associating black people predominantly with negative characteristics. Speech recognition systems like Siri and Alexa understand accents and dialects less well and thus discriminate by ethnicity: African-Americans, for example, are understood less well than white people.

However, racism from AI can have much more serious consequences for the people concerned. The extent of the problem becomes apparent in everyday situations. AI is used, for example, to determine who is shown which job and housing listings, which job applications are selected and accepted, and who is or is not creditworthy. In all these applications, People of Color (POC) are disadvantaged, and racial prejudices are reproduced by the AIs: POC receive fewer and worse offers and have lower chances of being accepted for jobs or loans. In medicine, too, algorithms in the U.S. have led to black individuals receiving poorer treatment recommendations more often than white individuals and being referred to specialists less often than white individuals with similar health conditions.

AI in the legal system has severe consequences

Major problems also exist in facial recognition. This became part of the public debate when an AI reconstructed a pixelated image of Barack Obama and, instead of the former president, produced a white person. Experts are well aware of the inaccuracy of facial recognition algorithms. The Google Photos app also showed racist failures in facial recognition: in some cases, black people were not recognized correctly and were categorized as animals.

But faulty facial recognition can also have criminal consequences. Police use facial recognition in their investigations, and there are several known cases of black people being falsely identified by facial recognition programs as wanted criminals and arrested. This is a particularly big problem because black people are most often targeted by these systems, yet it is precisely with black people that the systems make the most mistakes.

The police, as well as prisons, use numerous other AIs. Predictive policing applications are particularly controversial in this regard. Predictive policing is actually supposed to prevent crime, and due to the high incarceration rates in the USA, such systems are used there very often. One example is an AI that is supposed to assess the risk of a person becoming a repeat offender. The score the AI outputs helps determine the court's verdict, the sentence, how individuals are treated before trial, and their rehabilitation. A study by ProPublica, a non-profit newsroom for investigative journalism, shows that white people often falsely receive a low score, while black people are often falsely suspected and receive significantly higher scores than white people. Various civil rights organizations advocate against the use of this AI.
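
As a rough illustration of the kind of disparity analysis behind such findings, the following sketch compares false positive rates between groups. The records are invented and do not reflect the actual COMPAS data or ProPublica's full methodology:

```python
# Toy disparity analysis: a "false positive" is a high risk score given to
# someone who did not reoffend. All records below are invented.
from collections import defaultdict

# Each record: (group, risk_score_is_high, actually_reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("white", False, True), ("white", False, False), ("white", True, True),
]

false_positives = defaultdict(int)  # high score, but did not reoffend
negatives = defaultdict(int)        # everyone who did not reoffend

for group, high_score, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if high_score:
            false_positives[group] += 1

for group in negatives:
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false positive rate {rate:.0%}")
```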

Racist programs?

AIs have no consciousness; they cannot think or hold an opinion. They are programs that cannot take responsibility. So how does the systematic bias of technology arise?

AI does not emerge from nowhere. There are people behind the research and development who can and must take responsibility for their programs. In a TED Talk, AI developer Kriti Sharma criticized that the smartest minds are developing AI technology and can do so in any way they choose. AI developers tend to view their programming work as an objective process. Even when errors are not intentional, the affected groups suffer the negative consequences of the programmers' ignorance and subconscious biases. In addition, POC are severely underrepresented in computer science itself. The education system already provides POC with fewer opportunities for education and advancement, and very low recruitment and funding rates deny many POC opportunities in computer science. On top of that, many POC who do work in the field have painfully experienced that they are rarely welcomed with open arms.

So when white people implement technology that does not take into account its use by POC and the potential negative consequences for them, it usually goes unquestioned. That the technologies are "intelligent" is not the problem here, and racist technology is not new. Looking back a few decades, similar problems can be seen in the development of color film, which was initially optimized only for light skin tones: chemicals needed to render darker skin tones were simply not used in film development, resulting in less color nuance and insufficient quality in depicting darker skin.

These problems are also evident today. Fitness trackers, such as those from Fitbit and Samsung, use only green light to measure heart rate by detecting the changing amount of blood between heartbeats. However, green light is more strongly absorbed by darker skin, which leads to worse results for people with darker skin tones. The more accurate infrared light is rarely used due to cost. This affects POC not only in everyday life: fitness trackers are also a popular tool in medical studies. A problem that, once again, ends in racism.

However, racist decisions by AI can arise not only from how it is programmed, but also from how it has been trained. POC are often disproportionately represented in training data. Facial and speech recognition systems are trained predominantly on data from white individuals, so the AIs can only learn those patterns and later fail to recognize POC correctly. In the training data of predictive policing algorithms, by contrast, black persons are strongly overrepresented as repeat offenders.
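
The following toy sketch (entirely synthetic data, invented numbers) shows how a model trained mostly on one group can work noticeably worse for an underrepresented group:

```python
# Toy demonstration: 95% of training samples come from group A, 5% from
# group B, whose features differ systematically. The model fits group A
# well and performs far worse on group B. Purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two classes per group; 'shift' stands in for systematic feature
    # differences between groups (e.g., skin tone in photos).
    X = np.vstack([rng.normal(shift, 1.0, (n, 2)),
                   rng.normal(shift + 2.0, 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

Xa, ya = make_group(950, shift=0.0)  # well-represented group
Xb, yb = make_group(50, shift=5.0)   # underrepresented group
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced test sets for each group.
for name, shift in [("group A", 0.0), ("group B", 5.0)]:
    Xt, yt = make_group(500, shift)
    print(name, "accuracy:", model.score(Xt, yt))
```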

Structural racism in AI

Over-representation and under-representation are not the only problems in training data. It is also problematic when an AI is trained on existing case data that, regardless of its distribution, contains structural racism, which the AI then learns. In the process, certain decision patterns can not only be reproduced but demonstrably reinforced. The use of such AI rests on the assumption that certain patterns in the data profiles of repeat offenders are actually associated with higher risk. But can predictive policing algorithms really predict crime? They are far more likely to predict racist police practices.
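
A toy simulation can make this feedback loop visible. The sketch below, with invented numbers, shows how a historical skew in policing locks itself in when "predictive" allocation follows recorded rather than actual crime:

```python
# Toy feedback loop: patrols are allocated where the most crimes were
# *recorded*, but recording itself depends on patrol presence. Even with
# identical underlying crime, the historical skew perpetuates itself.
true_crime = {"district_a": 100, "district_b": 100}  # equal underlying crime
patrols = {"district_a": 8, "district_b": 2}         # historically skewed

for year in range(3):
    # Crimes are only recorded where officers are present to record them.
    recorded = {d: true_crime[d] * patrols[d] / 10 for d in true_crime}
    total = sum(recorded.values())
    # "Predictive" allocation: next year's patrols follow this year's records.
    patrols = {d: 10 * recorded[d] / total for d in recorded}
    print(f"year {year}: recorded={recorded}, next patrols={patrols}")
```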

Racialization itself is usually not an input feature, but features that correlate with it are used to determine risk scores, and these correlations often have racist origins. Such features are also used in algorithms for job and housing offers, hiring, and credit allocation. POC can be disadvantaged by AIs as a result, because individuals with similar characteristics were disadvantaged in the training data due to racist traditions in the respective institutions. Results of medical algorithms reflect structural racism by linking health status to health care expenditures; because black individuals face more difficult access to health care, they generally spend significantly less on it. Racist traditions in photography likewise lead to lower-quality photos of black people, which affects the training of facial recognition programs. And since text-generating AIs are trained on web content, they learn from racism on the web. AIs are just another tool in racist social systems that have existed for decades or centuries.
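
A small sketch with invented data illustrates such "proxy" features: the protected attribute is never handed to the model, yet a correlated feature lets the model reconstruct the biased historical outcome anyway:

```python
# Toy proxy-feature demonstration: an invented "postcode score" correlates
# with the protected group, so a model trained on biased historical outcomes
# reproduces the bias without ever seeing the group itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, n)             # protected attribute, NOT a model input
postcode = group + rng.normal(0, 0.3, n)  # feature correlated with the group
# Historically biased decisions (e.g., loan denials) that track the group:
past_denial = (group + rng.normal(0, 0.3, n) > 0.5).astype(int)

model = LogisticRegression().fit(postcode.reshape(-1, 1), past_denial)
acc = model.score(postcode.reshape(-1, 1), past_denial)
print(f"biased outcome predicted from the proxy alone: {acc:.0%} accuracy")
```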

AI as such is not racist. The more important questions are: Why are the biases of AI and their consequences ignored by so many? Why do researchers not take responsibility? Why are the safety of and benefits for POC not considered? And what can be done about it?

In search of solutions

Many structural problems underlie racism in AI. These problems are essential in the discussion about racism, and solving them is fundamental, not only for AI, on the way to an anti-racist society.

Looking specifically at AI development and deployment, a big issue is that AI failures are often overlooked or not taken seriously, which makes it harder to take action against them. AI is seen as objective. Affected people, as well as the companies and institutions that use AI, are often unaware of how AIs work and that they are affected by structural issues. In addition, there is a lack of transparency in the use of AI: people rarely find out when decisions about them are made with AI. This is also referred to as the AI black box.

Regulations on general application and disclosure, as well as specific anti-discrimination laws for AI development and use, are needed. Guidelines of this kind are recommended by the federal government's data ethics commission, among others. Fair training data sets are also important: facial and voice recognition should be trained on examples of POC as well. And since it is hard to understand which correlations AIs use for their decisions, they should be thoroughly tested for bias during the training phase and by independent parties. So far, this has happened only in isolated cases.
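
One test an independent audit might run is a comparison of selection rates between groups; the "four-fifths rule" used below is a common heuristic threshold, and the decision data is invented for illustration:

```python
# Toy bias audit: compare selection rates between groups and flag a
# possible disparate impact when the ratio falls below 0.8 (four-fifths).
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

decisions_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # hypothetical accept/reject outcomes
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],
}

rates = {g: selection_rate(d) for g, d in decisions_by_group.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates, "disparate impact ratio:", round(ratio, 2))
if ratio < 0.8:
    print("Warning: possible disparate impact; review the model and data.")
```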

Change is indispensable

In the development of AI, in addition to adequate training data, developers need to be aware of these issues. They should consider the consequences and impact on POC and develop a sense of responsibility for their programs. A next step would be to ensure better admission and funding opportunities for POC in computer science. Their perspectives, experiences, and expertise are important in reducing bias in AI and lead to better outcomes. This is vividly demonstrated by facial recognition AI developed in Asia, which is much better at recognizing Asian faces. Quite apart from the problems addressed here, more opportunities and a better environment for POC in computer science are desirable in any case.

Fundamental to the solution approaches is the question of what we want to use AI for. If AI is to help people, it must help everyone, and for that, investment, regulation and change are essential.

More on the topic: On the occasion of Juneteenth last year, Jordan Harrod talked about how AI perpetuates structural racism. Jordan Harrod is a doctoral student at Harvard University and MIT working on AI; her research includes brain-computer interfaces and machine learning in medicine.
