The Future of Life Institute made a major announcement at the International Joint Conference on Artificial Intelligence (IJCAI). More than 2,400 individuals and 160 companies and organizations have signed a pledge stating that they will "neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons."
The signatories also called on governments to pass laws against such weapons. Google DeepMind and the XPRIZE Foundation were among the organizations that took the pledge, while Elon Musk and DeepMind co-founders Demis Hassabis, Shane Legg and Mustafa Suleyman were among the individual signatories.
The pledge follows the backlash several companies have faced over their technologies and how they provide them to government agencies. Google drew criticism for its Project Maven contract with the Pentagon, which uses AI to help the military flag drone images that need additional human review.
Retail giant Amazon has also come under fire for sharing its facial recognition technology with law enforcement agencies, and Microsoft has been called out for offering services to Immigration and Customs Enforcement (ICE).
"Thousands of AI researchers agree that by removing the risk, attributability and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," the pledge reads. It adds that "the decision to take a human life should never be delegated to a machine."
"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," said Future of Life Institute president Max Tegmark. "AI has huge potential to help the world -- if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way."
Google has already put forward a set of principles to guide its ethics around the budding technology. The company's policy says it won't design or deploy AI for weapons, for surveillance that violates internationally accepted norms, or for technology "whose purpose contravenes widely accepted principles of international law and human rights."
Microsoft, for its part, has stated that its collaboration with ICE is restricted to email, calendar, and document management, and doesn't include any facial recognition technology. The company also plans to release a new set of guiding rules for the use of its facial recognition tech.