Technology always promises progress and a better quality of life. But lately, a realization is dawning that vigilance is needed, especially when it comes to military uses.
Artificial intelligence (AI) is poised to play an increasing role in military systems. And with it comes an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
The Future of Life Institute (FLI) is spearheading this movement. The FLI, based in the Boston area, is a charity and outreach organization working to ensure that tomorrow’s most powerful technologies are beneficial for humanity.
According to the FLI Mission statement, “With less powerful technologies such as fire, we learned to minimize risks largely by learning from mistakes. With more powerful technologies such as nuclear weapons, synthetic biology and future strong artificial intelligence, planning ahead is a better strategy than learning from mistakes, so we support research and other efforts aimed at avoiding problems in the first place.”
At the 2018 International Joint Conference on Artificial Intelligence (IJCAI), held this week in Stockholm, Sweden, the FLI launched its ‘Lethal Autonomous Weapons Pledge’, which has been signed by 164 organizations and 2,443 individuals, including scientists, tech leaders, and visionary entrepreneurs around the world.
Leading this fight against ‘killer robots’ is Elon Musk, founder of Tesla, alongside other scientists and tech leaders. Among them are Skype founder Jaan Tallinn and three cofounders of Google’s DeepMind subsidiary: Demis Hassabis, Shane Legg, and Mustafa Suleyman.
The late Stephen Hawking, Steve Wozniak, Bill Gates, and many other big names in science and technology have also expressed concern in the media about the risks posed by AI.
Elon Musk has been eloquent about the dangers of artificial intelligence. Four years ago, speaking at MIT, he called AI perhaps humanity’s “biggest existential threat.” He also called for international regulatory oversight to make sure that scientists would not do “something very foolish.”
For Musk, Artificial Intelligence was like “summoning the demon.”
He has carried this campaign to other forums ever since, and the FLI now says it is focused “on keeping artificial intelligence beneficial” and is “also exploring ways of reducing risks from nuclear weapons and biotechnology.”