Leading experts have spoken out strongly against the development of lethal autonomous weapons.
The world’s leading experts on artificial intelligence (AI), concerned that the technology could be used to kill, have signed a pledge not to participate in the creation of autonomous weapons.
An open letter published online on July 18 was signed by 2,400 scientists from 160 organizations and 36 countries. They include the founders of DeepMind (Demis Hassabis, Shane Legg and Mustafa Suleyman), Elon Musk, and other eminent entrepreneurs and scientists.
The signatories spoke out against the development of lethal autonomous weapons.
“These systems pose a serious threat to humanity; they have no place in our world,” the letter reads.
“There is an urgent need for citizens, politicians and leaders to distinguish between acceptable and unacceptable uses of AI.
The military use of AI is unacceptable, and we, the undersigned, agree that the decision to take a human life should never be delegated to a machine.
We call on technology companies and organizations, as well as leaders, politicians and others, to join us in this pledge.”
“We really want the development of AI technology to be positive, and not to lead to a terrible arms race or a dystopia in which flying robots kill people,” signatory Anthony Aguirre, a physics professor at the University of California, told CNN.
Flying killer robots and “smart” weapons are still science fiction today, but rapid progress in computer vision and machine learning is making their creation more realistic. CNN notes that the national defense strategy released by the Pentagon calls for increased investment in artificial intelligence.
“New technologies such as AI give us the opportunity to improve our ability to deter war, to protect citizens, to reduce civilian casualties and to cause less harm to civilian infrastructure,” Department of Defense spokesperson Michelle Baldanza told CNNMoney. “This initiative emphasizes the need for active dialogue between the Department of Defense, the AI research community, ethicists, sociologists and other affected communities. We need open discussions on ethics and security in the development and use of AI.”
Despite the opposition of leading researchers, the development of intelligent systems for military use is unlikely to stop.
“I don’t think it will fundamentally change how major powers such as the US, China and Russia approach AI technology,” says Paul Scharre, a fellow at the Center for a New American Security and author of “Army of None”, a book about autonomous weapons.
Still, the refusal of DeepMind Technologies, Element AI and other leading laboratories to “support the development, production, trade or use” of autonomous weapons sets a good example for other AI researchers.
The pledge against autonomous weapons was organized by the Future of Life Institute. The document was presented at the International Joint Conference on Artificial Intelligence (IJCAI), held in Stockholm from July 13 to 19.
© 2018, z-news.link. All rights reserved.