Artificial Intelligence: The Good and the Evil
Vijay Sakhuja June 02, 2018
The jubilation over the opportunities presented by artificial intelligence is quite telling, and its use has found favour among a number of stakeholders. Researchers and proponents believe that future artificial intelligence-enabled machines will restructure many sectors of industry, such as transportation, health, science and finance, and automate human tasks in settings as varied as restaurants. Intelligent machines will be at the forefront, and according to Google’s director of engineering Ray Kurzweil, by 2019 ‘computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans’. In essence, technology developers are now working to teach machines and, through their own efforts, make artificial intelligence as good as or even better than human-level intelligence.
Amid this euphoria, there is also a strong belief that an uncontrolled, ‘runaway’ march of artificial intelligence towards final maturation could be catastrophic and invite dystopian problems. Elon Musk, CEO of Tesla and SpaceX, has cautioned that artificial intelligence is a ‘fundamental risk to the existence of human civilization’ and that “we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late.”
The military domain is also in the throes of a transformation led by disruptive technologies such as artificial intelligence, big data, quantum computing and deep machine learning, to name a few. Robots are believed to be a panacea for a number of military tasks and missions, including warfighting by killer robots deployed as fully autonomous weapons. Yet the adverse impact of fully autonomous weapons such as killer robots is not yet fully understood.
However, there have been some positive developments in this regard. For instance, nearly 4,000 employees of Google submitted a letter to their leadership in April 2018 stating that the company should not develop technologies that would take it into the ‘business of war’. They urged that the ongoing Project Maven be stopped and that the company should “draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”
Project Maven, formally known as the Algorithmic Warfare Cross-Functional Team, is a U.S. Department of Defense (DoD) program that uses artificial intelligence and machine learning to help analyze the huge amounts of surveillance footage captured by drones. The project will enable the Pentagon to “deploy machine-learning techniques that internet companies use to distinguish cats and cars to spot and track objects of military interest, such as people, vehicles, and buildings.” Further, it will be possible to “automatically annotate objects such as boats, trucks, and buildings on digital maps.” The DoD plans to deploy these image-analysis technologies onboard unarmed and armed drones, after which it will be “only a short step to autonomous drones authorized to kill without human supervision or meaningful human control.” The initial plan was to have the system ready by December 2017, but the project has run into difficulty after Google employees raised their objections.
In this context, the global movement against killer robots, led by the Campaign to Stop Killer Robots since 2013, has found favour among at least 28 countries. The campaign seeks an international treaty or instrument whereby human control exists over the lethal functions of any weapon system. Its voice has gained significant momentum during the last five years, and the global coalition against killer robots now comprises 64 international, regional, and national non-governmental organizations (NGOs) in 28 countries that call for a preemptive ban on fully autonomous weapons.
While that may be the shape of things to come, the fear is that technology developers may not be able to determine what is ‘good’ and what is ‘evil’. Issues of ethics and morality are fast taking precedence, and the Google employees’ call to rein in artificial intelligence and control its future development merits attention.
Last month, on May 14, scholars, academics, and researchers who study, teach about, and develop information technology came out in support of the Google employees and expressed concern that Google had “moved into military work without subjecting itself to public debate or deliberation, either domestically or internationally”. It is now reported that Google and its parent company Alphabet have taken note of these issues and are beginning to address some ethical questions related to the “development of artificial intelligence (AI) and machine learning, but, as yet, have not taken a position on the unchecked use of autonomy and AI in weapon systems.”
The question before technology developers is therefore not about their ability to produce high-end technology, but how to teach morality and ethics to machines. It is fair to argue that the uncontrolled coalescence of artificial intelligence and self-learning machines would cause great harm to society, particularly in the context of killer robots and drones, which have caught the fancy of a few militaries.
Dr Vijay Sakhuja is a founding member and trustee of The Peninsula Foundation (TPF), Chennai.