Algorithms often reproduce human prejudices

Computer: Artificial intelligence with firm prejudices

The current study is therefore also of great importance for robot research, according to Bryson. It has long been argued that robots need a body in order to really understand the world: "It was said: you can't get semantics without feeling the real world." She herself used to support this thesis. "But that is not necessary, as our study shows." Merely reading the Internet is evidently enough to conclude that insects are unpleasant and flowers pleasant - even though the computer has never sniffed a blossom or been bitten by mosquitoes.
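To make this finding more tangible, here is a minimal sketch of the kind of word-association measurement such studies rely on: pleasantness is estimated as a difference of cosine similarities between word vectors. The tiny hand-made vectors below only stand in for embeddings learned from billions of words of web text, so the numbers are purely illustrative.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two word vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings". Real studies use vectors with hundreds of
# dimensions, learned from billions of words of ordinary web text.
vectors = {
    "flower":     np.array([0.9, 0.1, 0.2]),
    "insect":     np.array([0.1, 0.9, 0.3]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.2]),
}

def association(word):
    # Positive: the word sits closer to "pleasant"; negative: closer to "unpleasant".
    return cosine(vectors[word], vectors["pleasant"]) - cosine(vectors[word], vectors["unpleasant"])

for w in ("flower", "insect"):
    print(w, round(association(w), 3))
# With embeddings trained on web text, "flower" scores clearly higher than
# "insect" - mirroring the human implicit-association results.
```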

Regardless of this, in light of the two studies, all AI methods that learn autonomously from training data are now under scrutiny. Black defendants in the USA, for whom a computer suggested longer prison terms than for white offenders, experienced what it means when an algorithm adopts and cements prejudices: it had learned from previous human decisions and taken over the judges' biases. It is actually very simple, says Margaret Mitchell of Google Research in Seattle: "If we put prejudices in, prejudices come out." These biases, however, are rarely obvious, which is why they often go unnoticed. "Today, thanks to the deep learning revolution, we have powerful technologies," says Mitchell - and this raises new questions, as it is slowly becoming clear what impact machine learning can have on society. "Such tendencies in the data sometimes only become visible in the output of the systems," says the researcher. But only developers who are aware of the problem will know to question the results.
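A small sketch can illustrate the "prejudices in, prejudices out" point. The data below are invented for illustration: historical labels that treat one group more harshly are fed to an ordinary classifier, which then reproduces exactly that pattern.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: each row is (offence_severity, group_flag). The historical
# labels are skewed - group 1 received the harsh outcome already at a lower
# severity than group 0.
X = np.array([[1, 0], [2, 0], [3, 0], [1, 1], [2, 1], [3, 1]])
y = np.array([0, 0, 1, 0, 1, 1])

model = LogisticRegression().fit(X, y)

# The model reproduces the pattern it was trained on: for identical severity,
# the predicted risk differs by group - "prejudices in, prejudices out".
print(model.predict_proba([[2, 0]])[0, 1])  # lower predicted probability
print(model.predict_proba([[2, 1]])[0, 1])  # higher predicted probability
```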

A filter against prejudice

Mitchell admits that there is as yet no technical solution for systematically identifying the prejudices in the data that can lead to discrimination: "We have to deal with that now, because these systems are the basis for the technologies of the future." She calls this the "evolution of artificial intelligence". Especially at the interface between image and text recognition, mishaps keep occurring: Google software, for example, once captioned a photo of two dark-skinned people as "Gorillas". Embarrassing enough for the company to now pay closer attention to this level of machine learning.

"Even systems trained on 'Google News' articles (i.e. newspaper articles; author's note) show gender stereotypes to a disturbing extent," write the authors working with Tolga Bolukbasi of Boston University in the article mentioned above. They propose to "de-bias" the models, that is, to remove tendencies and prejudices from them. Joanna Bryson considers that the wrong approach: "It will hardly be possible to take every prejudice out of the data." After all, very few are as obvious as racism and gender stereotypes.
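A simplified sketch of the core idea behind such "de-biasing": a bias direction (here, "he" minus "she") is estimated, and its component is removed from the vectors of words that should be gender-neutral. The published method involves several more steps; the toy vectors below are invented for illustration.

```python
import numpy as np

def remove_direction(vec, direction):
    # Neutralization: strip the component of `vec` that lies along `direction`.
    d = direction / np.linalg.norm(direction)
    return vec - np.dot(vec, d) * d

# Toy vectors; in practice these come from embeddings trained on news text.
he         = np.array([0.8, 0.1, 0.1])
she        = np.array([0.1, 0.8, 0.1])
programmer = np.array([0.6, 0.2, 0.7])   # leans toward "he" in this toy data

gender_direction = he - she
debiased = remove_direction(programmer, gender_direction)

# After neutralization the occupation word has no component left along the
# gender direction, so simple similarity tests no longer show the stereotype.
d = gender_direction / np.linalg.norm(gender_direction)
print(round(float(np.dot(programmer, d)), 3))  # clearly non-zero before
print(round(float(np.dot(debiased, d)), 3))    # approximately 0.0 after
```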

From her point of view, it is better to equip the systems with a kind of filter after training: programmed rules that prevent implicit prejudices from flowing into decisions or actions. Much like people, in fact, who do not translate every prejudice into action - possibly quite deliberately, because they have a fairer world in mind. "Society can change," says Bryson. But not if artificial intelligence built on data from the past keeps us stuck at a racist and sexist level forever.
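What such a post-training filter could look like is sketched below: a rule layer that vets a model's suggestion before it is acted on. The attribute names and the single rule are hypothetical, invented here purely to illustrate the idea of rules sitting between a trained system and its decisions.

```python
# Hypothetical rule layer placed between a trained model and its users; the
# attribute names and the check are invented purely for illustration.
PROTECTED_ATTRIBUTES = {"race", "gender", "religion"}

def filtered_decision(model_output: dict, features_used: set) -> dict:
    # Block suggestions that were influenced by a protected attribute.
    if features_used & PROTECTED_ATTRIBUTES:
        return {"decision": None,
                "reason": "blocked: protected attribute influenced the suggestion"}
    return model_output

# Usage: the trained model proposes a decision, the rule layer vets it first.
suggestion = {"decision": "longer_sentence", "score": 0.87}
print(filtered_decision(suggestion, features_used={"prior_convictions", "race"}))
```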



Interview: "Mistakes have consequences for the lives of real people"

Hanna Wallach from Microsoft Research explains in an interview why machines make racist decisions and why it is important to address this topic.