From machine-learning thermostats to digital assistants, artificial intelligence plays an ever larger role in society every day. This essay investigates whether we should worry about these machines becoming malicious. It begins by looking at the differing forms artificial intelligence can take, examining the common misconception that it must have human attributes. It then considers how we can keep this technology from growing beyond the reach of human oversight.
MISSION VIEJO, CA — David Smith, a schoolteacher from Southern California, awakens from a fitful sleep just before dawn on a Monday in mid-October, jolted by a recurring nightmare. Like something out of a low-budget sci-fi film, his dreams are haunted by evil robots that outsmart and subjugate their human creators before proceeding to take over the world. Still reeling from the episode, David asks Siri to put on some calming music while he goes through the motions of his morning routine. Nest, a learning thermostat, keeps the temperature at a comfortable level as he reaches for his towel after a shower. Before heading to work in his new driverless car, David confirms his appointments for the day with Alexa, his digital assistant. Could these "smart" technologies be Freddy Krueger-like manifestations of his night terrors, lurking about in the shadows, waiting for the right moment to strike with their razor-like malware? Should David be worried?
In short: probably not, but there may be cause for concern yet. There is much controversy and misinformation surrounding the notion of artificial intelligence today. My aim here is to set the record straight. What is artificial intelligence? How might we benefit from it? And where do regulation, public policy, and the law fit into the picture? By answering these questions, I wish to impart a qualified optimism about the future of artificial intelligence that will hopefully allow David to rest a little easier.
I want to start by defining artificial intelligence. According to Scott Kuindersma, Assistant Professor of Engineering and Computer Science at the Harvard School of Engineering and Applied Sciences, artificial intelligence is the design of computer algorithms that behave rationally with respect to a desired outcome (i.e., algorithms that optimally achieve their goals). This debunks one of the most commonly held misconceptions about the technology today: that it must always take the form of robots with distinctly human characteristics. As Max Tegmark of the Future of Life Institute clarifies, artificial intelligence "can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons," and it usually sticks to a specific narrow task, such as face recognition, as opposed to generalized functions like planning, sensing, and making moral judgments.
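Kuindersma's definition can be made concrete with a toy sketch. The agent below is "rational" only in the narrow sense the definition requires: at each step it picks whichever available action moves it closer to its goal. The objective and the `hill_climb` routine are invented here purely for illustration, not drawn from any real system.

```python
# A toy "rational agent": it repeatedly chooses whichever action
# improves a simple objective, i.e., it optimizes toward its goal.
def objective(x):
    return -(x - 3) ** 2  # goal: get x as close to 3 as possible

def hill_climb(x=0.0, step=0.1, iterations=100):
    for _ in range(iterations):
        # Consider the two available actions and keep the better one.
        candidates = [x + step, x - step]
        best = max(candidates, key=objective)
        if objective(best) <= objective(x):
            break  # no action improves the outcome; stop
        x = best
    return x

print(round(hill_climb(), 1))  # ends up near the goal of 3
```

Even something this simple fits the definition: there is nothing human-like about it, just optimization with respect to a desired outcome.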
Understanding artificial intelligence and the forms it can take is critical to discerning its substantial benefits. For one, artificial intelligence is consistently improving nearly every aspect of the human experience, from communication, to administrative efficiency, to lifespan. Deep learning, for instance (the recognition of patterns in sets of data through repeated trials), is being applied in medicine to make diagnoses more accurate, helping identify malignant tumors before they spread. Stories like this are an almost daily occurrence in modern society, and the opportunities for innovation are vast. Contrary to a reading of Robert Gordon's The Rise and Fall of American Growth, I predict that artificial intelligence, still very much in its early stages and evolving at a rapid pace, will soon yield significant increases in labor productivity and produce lasting effects on the full range of human experience. This is not to say that technological progress will make for a world wherein the average work week is fifteen hours and our biggest problem is how to spend our idle time, as Keynes suggested in "Economic Possibilities for Our Grandchildren." However, if recent innovation is any indication of what the future holds, artificial intelligence, handled properly, will continue to raise living standards at a growing rate.
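To give a flavor of what "recognition of patterns through repeated trials" means in practice, here is a minimal sketch of a single-neuron classifier learning to separate two clusters of made-up measurements. The data, labels, and learning rate are all invented for illustration; real diagnostic models are enormously larger and are trained on actual medical images.

```python
import random

# Toy data: two clusters of invented two-dimensional "measurements",
# one labeled 1 and one labeled 0. Purely illustrative.
random.seed(0)
data = [((random.gauss(1, 0.3), random.gauss(1, 0.3)), 1) for _ in range(50)] + \
       [((random.gauss(-1, 0.3), random.gauss(-1, 0.3)), 0) for _ in range(50)]

# A single neuron: weights w and bias b, adjusted after every mistake.
w, b = [0.0, 0.0], 0.0
for _ in range(20):  # repeated trials: many passes over the data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred  # each error nudges the weights
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

correct = sum((1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == y
              for (x1, x2), y in data)
print(correct, "of", len(data), "classified correctly")
```

The point is the mechanism: repeated exposure to examples, with small corrections after each error, gradually produces a model that reliably tells the two patterns apart.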
But is artificial intelligence progressing a little too quickly? Will this Frankenstein-like monster soon outgrow our control? Not if we take the necessary precautions. There are two ways for artificial intelligence to become dangerous: (1) the AI is programmed to do something devastating (e.g., autonomous weaponry); or (2) the AI is programmed to do something beneficial but develops a destructive method for achieving its goal. This is where public policy comes into play. Governments and universities should take a more involved role in the development of artificial intelligence, providing oversight, safety research, and input to future projects wherever possible. Because the field is still in its infancy, it remains very much the Wild West, and restrictions on what kinds of technology can be produced and where they can be deployed may also be appropriate going forward, especially for drones and military equipment. It is difficult to say exactly what such rules might look like without delving into specifics. They might come at the local level, as when municipalities stipulate where you can and cannot fly a drone; at the national level, as when a government forbids social media sites and search engines from selling user data (more on this later); or at the international level, for instance by amending the laws of war to keep smart bombs from being deployed in areas highly populated by innocent civilians, reducing collateral damage.
It is important to note that the precedents and procedures that we establish now will guide subsequent developments. We cannot stop the expansion of artificial intelligence nor do we want to end up in a Forbidden Planet-esque apocalyptic scenario, risking the destruction of the progress we have made as a civilization because we can no longer control our technology. Taking appropriate preventative measures is of the utmost importance and ought to be a key goal of public policy going forward.
Okay, so maybe you aren't so worried about Terminator-like cyborgs wreaking mayhem on humanity. Perhaps your concern lies with a different kind of villain, one less flashy but no less insidious: technological unemployment. Coined by Keynes in the same article referenced above, the term expresses a fundamental fear: that workers will be displaced by machines. While the labor market will likely endure a disruptive transitional period as artificial intelligence continues to evolve and computer automation accelerates, most economists agree that mass unemployment is a very unlikely scenario, because technology both substitutes for and complements labor.
However, as underscored by David Autor, renowned American economist and professor of economics at the Massachusetts Institute of Technology, low-education manual tasks and middle-skill routine jobs are the most threatened by the growth and proliferation of artificial intelligence. Keeping up with demand for higher education and job training, and addressing unequal income distribution, must therefore be significant priorities for the government going forward, and ought to be reflected in public policy.
But what of the ethics of AI? Certain moral considerations accompany technological advances, and they must be addressed through law and diplomacy. Whether it be Facebook influencing the results of an election by manipulating public opinion through targeted advertisements, search engines selling user data to other companies, or the possibility of governments waging cyber warfare against one another using increasingly advanced algorithmic attacks, the legal system and our public representatives have the responsibility of determining how these innovative technologies impact our lives.
We need to pay much closer attention to the important ethical issues of our time, starting with the influence of dominant communication platforms like Facebook and Twitter on the political system that undergirds our sacred democracy, as well as the capacity to cause harm through the internet. Like Henry David Thoreau in Walden, I am simultaneously excited by the opportunities presented by artificial intelligence and unsettled by the cheapening of life that could result if it is not handled properly.
I think we can be relatively optimistic about the future of artificial intelligence. But this is no license for complacency. Proactivity in regulation is a must if we are to reap the full benefits of this great human endeavor. As Max Tegmark said, “our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it.”