
UN warns unchecked AI & machine-learning tech can ‘violate human rights, damage lives’ as it calls for safeguards against abuse

In a statement on Wednesday, UN High Commissioner for Human Rights Michelle Bachelet stressed the need for an outright ban on AI applications that do not comply with international human rights law, while also urging a pause on sales of certain technologies of concern.

Noting that AI and machine-learning algorithms now reach “into almost every corner of our physical and mental lives and even emotional states,” Bachelet said the tech has the potential to be “a force for good,” but could also have “negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights.”

Bachelet’s warning came as the UN Human Rights Office released a report that analyzed the impact of AI systems – such as profiling, automated decision-making and other machine-learning technologies – on various fundamental rights, including privacy, health, education, freedom of expression and movement.

The report highlights a number of worrying developments, including a “sprawling ecosystem of largely non-transparent personal data collection and exchanges,” as well as how AI systems have affected “government approaches to policing,” the “administration of justice” and “accessibility of public services.”

AI-driven decision-making could also be “discriminatory” if it relies on out-of-date or irrelevant data, the report added, while underscoring that the technology could be used to dictate what people see and share on the web.

However, the report noted that the most urgent need is “human rights guidance” with respect to biometric technologies – which measure and record unique bodily features and are able to recognize specific human faces – as they are “becoming increasingly a go-to solution” for governments, international bodies and tech firms for a variety of tasks.

In particular, the report warns about the increasing use of tools that attempt to “deduce people’s emotional and mental state” by analyzing facial expressions and other “predictive biometrics” to decide whether a person is a security threat. Technologies that seek to glean “insights into patterns of human behaviour” and make predictions on that basis also raise “serious questions,” the human rights body said.

Noting that such tech lacked a “solid scientific basis” and was susceptible to bias, the report cautioned that the use of “emotion recognition systems” by authorities – for instance, during police stops, arrests and interrogations – undermined a person’s rights to privacy, liberty and a fair trial.

“The risk of discrimination linked to AI-driven decisions – decisions that can change, define or damage human lives – is all too real,” Bachelet said, adding that the world could not “afford to continue playing catch-up” with rapidly developing AI technology.

© 2021, paradox. All rights reserved.
