
AI In The Fight Against Discrimination


Artificial intelligence does many things better than humans. Algorithms help us recognize heart disease, draw pictures, search for missing children and hidden graves, and communicate with doctors, and this is far from the limit of their capabilities. But here is the trouble: these obedient machines discriminate against people, apparently picking up bad habits from their creators.

Tech companies Youth Laboratories and Insilico Medicine set out to put an end to this mess: first, by teaching AI to detect human prejudice, and second, by trying to rid the algorithms themselves of bias. The project recently presented its first results: a neural-network study of socio-cultural diversity in the leadership of the world's largest companies.

In 2014, scientists developed an algorithm that predicts which criminals are likely to break the law again and how high the risk of recidivism is. Later studies showed that the algorithm exaggerated the danger posed by African Americans: it was twice as likely to mistakenly flag them as a "serious threat," while whites were twice as likely to be incorrectly categorized as "low risk."
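The disparity described here is a difference in error rates between groups. As a rough illustration, the minimal sketch below (in Python, with toy data and column names invented for the example, not taken from the actual study) shows how such an audit works: compute the false-positive rate and false-negative rate separately for each demographic group and compare them.

```python
# A minimal sketch of the kind of audit that exposed this disparity:
# compute false-positive and false-negative rates per demographic group.
# Column names and data are illustrative, not from the actual study.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame) -> pd.DataFrame:
    """For each group, compute the false-positive rate (flagged as
    high risk but did not reoffend) and the false-negative rate
    (rated low risk but did reoffend)."""
    rows = []
    for group, g in df.groupby("group"):
        did_not_reoffend = g[g["reoffended"] == 0]
        reoffended = g[g["reoffended"] == 1]
        rows.append({
            "group": group,
            "false_positive_rate": (did_not_reoffend["predicted_high_risk"] == 1).mean(),
            "false_negative_rate": (reoffended["predicted_high_risk"] == 0).mean(),
        })
    return pd.DataFrame(rows)

# Toy data: a biased model shows a much higher false-positive rate for
# one group and a much higher false-negative rate for the other.
df = pd.DataFrame({
    "group":               ["A", "A", "A", "A", "B", "B", "B", "B"],
    "reoffended":          [0,   0,   1,   1,   0,   0,   1,   1],
    "predicted_high_risk": [1,   1,   1,   0,   0,   0,   0,   1],
})
print(error_rates_by_group(df))
```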

Another example comes from Youth Laboratories and Insilico Medicine themselves. The first company released the RYNKL app, which analyzes facial wrinkles and evaluates the effectiveness of medical and decorative cosmetics. The creators of RYNKL soon discovered that the app did not work well on dark skin because the training set contained few pictures of black people. Insilico Medicine introduced a service that estimates age from blood test results; alas, it is ineffective for the elderly, because the database lacks test results for healthy people over 60.
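Both failures trace back to the same cause: a subgroup underrepresented in the training data. A simple audit of subgroup coverage, run before training, can surface this. The sketch below assumes a hypothetical metadata file and label fields; it is an illustration of the idea, not either company's actual tooling.

```python
# A minimal sketch of a pre-training dataset audit: count how many
# examples each subgroup contributes and flag thin coverage.
# The file name and label fields are hypothetical.
import json
from collections import Counter

def subgroup_coverage(records, key, min_share=0.05):
    """Print each subgroup's share of the dataset and flag any
    subgroup that falls below `min_share` of all examples."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    for value, n in counts.most_common():
        share = n / total
        flag = "  <-- underrepresented" if share < min_share else ""
        print(f"{key}={value}: {n} examples ({share:.1%}){flag}")

with open("training_labels.json") as f:   # hypothetical metadata file
    records = json.load(f)                # e.g. [{"skin_tone": "dark", ...}, ...]

subgroup_coverage(records, "skin_tone")   # RYNKL's case: skin tone
subgroup_coverage(records, "age_group")   # Insilico's case: age bracket
```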

"Earlier, several works reported pronounced discrimination based on race and sex in artificial intelligence systems," comments Roman Yampolskiy, one of the authors of the study. "It has been suggested that the recorded distortions were not introduced intentionally but were an artifact of human-made datasets that already contained such biases. In this paper, we showcase an AI designed to recognize human bias. We hope that AI itself will soon be able to mitigate the use of polluted datasets for machine learning."

In the first pilot project, representatives of Youth Laboratories and Insilico Medicine worked with scientists from Russia, the United States, the United Kingdom, Denmark, and Switzerland to create an algorithm that analyzed the senior management of well-known companies by race and gender. The researchers selected 484 corporations from the Forbes Global 2000 list and collected photographs of their senior executives, more than 7,200 images in total.

They then taught algorithms to automatically recognize the age, race, and gender of the executives and compared the results with the gender and racial makeup of the country where each company is headquartered. In this way, they tried to determine how closely a corporation's leadership reflects the demographics of its home country. A sketch of this comparison step appears below.
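The sketch below shows the comparison step only. The study's actual attribute model is not public, so `classify_attributes` is a stub, and the function name, the `female_working_age_share` baseline, and the example figures are all placeholders introduced for illustration.

```python
# A hedged sketch of the study's comparison step: classify each
# executive photo, aggregate per board, and compare with a country
# baseline. The classifier itself is stubbed out.
from collections import Counter

def classify_attributes(image_path: str) -> dict:
    """Placeholder for a face-attribute model that returns estimated
    age, race, and gender for one executive photo."""
    raise NotImplementedError("plug in a trained classifier here")

def board_vs_country(image_paths, country_baseline):
    """Estimate the share of women on a board from photos and compare
    it with the share of working-age women in the company's country."""
    genders = Counter(classify_attributes(p)["gender"] for p in image_paths)
    board_share = genders["female"] / sum(genders.values())
    gap = board_share - country_baseline["female_working_age_share"]
    return {"board_female_share": board_share, "gap_vs_country": gap}

# Example call (paths and the 0.47 baseline are illustrative):
# board_vs_country(["ceo.jpg", "cfo.jpg"], {"female_working_age_share": 0.47})
```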

The results showed that only 21.2% of managers were women, and in almost all companies except Sweden's H&M, the proportion of female board members was lower than the proportion of working-age women in the country. In 23 corporations there were no women managers at all; most of these firms are located in Asia. Nearly 80% of all top executives were white, 3.6% were black, and 16.7% were Asian.

The largest number of black leaders was found in South Africa, where 80% of the population is black. Even there, however, the proportion of black leaders in the two South African companies studied was only 35% and 53%.

The situation in many American companies reflected the country's demographics: the share of black board members was at least 13%. However, in 30 corporations there were no black top managers at all. The average age of leaders across all 38 countries was 52.

"This work confirms that we live in a world full of prejudice," says data ethicist Dr. Sandra Wachter, who was not involved in the study. Konstantin Chekanov, team leader of Diversity AI at Youth Laboratories, believes that AI systems "can help recognize people's bias in various organizations, communities, and even in government." "In the future, artificial intelligence can be used to detect and prevent discrimination, harassment, and other misconduct," he says.

The algorithms of Youth Laboratories and Insilico Medicine are still imperfect, and the companies invite everyone to participate in the project. The data used in the new study is available on the Diversity AI website.
