Technology always promises progress and a better quality of life. Lately, however, it has become clear that vigilance is needed, especially when it comes to military applications.
Artificial intelligence (AI) is poised to play a growing role in military systems. With it comes an urgent need, and an opportunity, for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.
The Future of Life Institute (FLI) is spearheading this movement. The FLI, based in the Boston area, is a charity and outreach organization working to ensure that tomorrow’s most powerful technologies are beneficial for humanity.
According to the FLI Mission statement, “With less powerful technologies such as fire, we learned to minimize risks largely by learning from mistakes. With more powerful technologies such as nuclear weapons, synthetic biology and future strong artificial intelligence, planning ahead is a better strategy than learning from mistakes, so we support research and other efforts aimed at avoiding problems in the first place.”
At the 2018 International Joint Conference on Artificial Intelligence (IJCAI), held this week in Stockholm, Sweden, the FLI launched its ‘Lethal Autonomous Weapons Pledge’, which has been signed by 164 organizations and 2,443 individuals, including scientists, tech leaders, and entrepreneurs around the world.
Leading this fight against ‘killer robots’ is Tesla founder Elon Musk, joined by prominent scientists and tech leaders. Among the other signatories are Skype founder Jaan Tallinn and the three cofounders of Google’s DeepMind subsidiary: Demis Hassabis, Shane Legg, and Mustafa Suleyman.