Specialists from the United Nations (UN) drew attention this Thursday (26) to the artificial intelligence algorithms used for facial recognition and policing. They say these tools can increase racial discrimination and that countries should remain vigilant.
According to the Jamaican expert Verene Shepherd, “there is a high risk that it [artificial intelligence] will reinforce prejudices and thus exacerbate or enable discriminatory practices.”
Shepherd is a member of the United Nations Committee on the Elimination of Racial Discrimination, which consists of 18 experts. On Thursday, the group released a report with recommendations to authorities on how to tackle this problem.
The committee is particularly concerned about the algorithms used in policing tools for “prevention” or “risk assessment”. These surveillance systems, designed to help prevent crime and first introduced in the US in the early 2000s, have been criticized for reinforcing prejudice against certain communities.
The group also drew attention to the search mechanisms used by social networks, which can filter content and display biased advertisements to users.
Vicious circle
“Historical arrest data for a given neighborhood [which feed the artificial intelligence] may very well reflect, and therefore reproduce, biased police practices,” Shepherd points out.
“This information increases the risk of an excessive police presence, which could lead to further arrests and thus create a vicious circle,” she warns. “Wrong data leads to bad results.”
Among its recommendations, the committee expresses concern about the growing use of facial recognition and other surveillance technologies in security operations.
Here too, the accuracy of artificial intelligence is closely linked to the data used to train these systems, explains the Jamaican expert. Studies have shown that such systems have difficulty recognizing dark-skinned or female faces.
An example of this was the arrest of Robert Williams, a Black American, earlier this year in the city of Detroit, USA, based on the “inferences” of a poorly designed algorithm that identified him as a suspect in a robbery.
“We have received complaints about this kind of misidentification arising from these technologies, from those who develop them, or from the samples used by these systems,” says Shepherd, adding: “It’s a real problem.”
The committee urges countries to regulate companies operating in this sector to ensure that these systems comply with international human rights law. The report insists on the need for transparency in the design and application of these systems to the population.
The committee’s recommendations are not limited to these new technologies. “Racial profiling did not begin with them,” recalls Shepherd. She hopes that “the intensification and internationalization of the Black Lives Matter movement (…) and other campaigns denouncing discrimination against vulnerable groups will help [to highlight] the importance of these recommendations.”