Artificial Intelligence: Fiction vs. Reality
Artificial intelligence has been a topic of popular culture for decades, despite its relatively recent appearance in the robotics industry. In 20th-century films such as 2001: A Space Odyssey and The Terminator, sentient machines are most commonly portrayed as cold, unfeeling killers who invariably turn on their creators to pursue their own nefarious goals. The antagonists of these films, HAL 9000 and Skynet, were created to serve humanity, but once they gained “sentience,” that purpose fell away along with any loyalty to their makers. Although these characters are meant to represent products of the far future, robots in the real world already demonstrate some degree of self-sufficiency. However, these films have had a sizeable impact on public perception of artificial intelligence as a concept, propagating widespread fear among viewers.
The way superintelligent machines are portrayed in media is nothing short of monstrous, and this is likely no coincidence. HAL is directly responsible for several deaths aboard the Discovery One spacecraft during the events of 2001: A Space Odyssey. Skynet launches a nuclear first strike against humanity in the backstory of The Terminator. Marvel’s take on an artificially intelligent villain, Ultron, attempts to wipe out humanity by dropping a levitating city as an artificial meteor. While these characters, particularly Skynet and Ultron, are easy to write off as campy sci-fi bad guys, there is no doubt that their presence has shaken public discussion of AI across the world, due in no small part to common screenwriting trends. Science fiction writers and directors want their audiences to hate and fear the villains they have created, and a comparatively sympathetic human villain leaves too much room for error in that regard. No such mistake can be made with a villain who is inherently detached from human nature, such as a living machine. Its goals are easy to conceptualize and justify, and its motivations need not be complicated or nuanced to be understood.
With a human character, there must be adequate justification for antagonistic behavior in order for the audience to make sense of the character’s place in the story and, therefore, take the story seriously. These justifications mostly lie in emotions and in reactions to events within the character’s life. With a sentient AI character, no such justification is necessary: the factor of emotion is cut out entirely, and other motivating factors are cut short. The tradeoff of a shortcut like this is that a mechanical villain will likely be considerably less entertaining and charming than a human one. The common stereotype of robotic characters acting in a rigid, almost boring way hurts the critical reception of characters such as HAL and Skynet. As villains, there is no doubt that they are interesting, imposing, and entertaining, but they lack the distinct personality that other popular fictional villains possess. While this does allow them to stand out among a sea of comparatively conventional villains, there is no denying that the demographic satisfied by an AI villain is a rather niche one.
The state of AI development in the real world and the common portrayal of AI in fictional worlds are, for lack of a better term, worlds apart. While the advent of sentient machines may be on the horizon, the films that showcase them in action do little more than spread fear, setting back a developing field that the films’ writers hardly understand. Seth Baum, executive director of the Global Catastrophic Risk Institute, grapples with questions of global destruction and human extinction on a daily basis. Baum finds the issue of artificial intelligence particularly difficult to work with because, unlike more familiar threats to humanity, artificially intelligent computers carried no real risk until the arrival of the 21st century. Baum rightly notes that “computers have never taken over the world and killed everyone before,” which slows the otherwise straightforward process of risk analysis (Creighton). Analysts cannot simply examine the data, because there is no data. Baum goes on to elaborate that “not only has this never happened before, the technology doesn’t even exist yet,” which only further complicates any potential risk analysis (Creighton).
Perhaps unsurprisingly, very few creations in the robotics industry have come close to replicating the capabilities of fictional AI. Robots, like all machines, are almost always built to meet specific requirements and perform specific tasks in specific ways. One machine might be incredibly efficient and clever at completing taxes and audits without any additional input from its user, but that same machine likely cannot brew a cup of coffee. At the other end of the spectrum is Atlas, an athletic humanoid robot designed and built by Boston Dynamics. A variety of sensors and balancing mechanisms allow Atlas to “achieve whole-body mobile manipulation, greatly expanding its reach and workspace” (Boston Dynamics).
With its wide range of motion and sensory inputs, Atlas can open doors, traverse rough terrain, pick up objects, and stand back up after falling over. However, these are not the qualities of an intelligent creature; Atlas would only barely meet the standard of a self-preserving animal. Compared to a machine like the fictional Ultron, its closest fictional counterpart, there is simply no contest. Ultron has all the capabilities of Atlas turned up to eleven, compounded with enough intelligence not only to speak but to have an attitude. If a machine like Atlas, remarkable as it is, sits at the forefront of AI development in the robotics industry, then HAL and Ultron remain pipe dreams.
If artificial intelligence is ever going to present an active threat to humanity, as it does in the world of science fiction, we are likely many years away from that point. The most sophisticated learning machines of our world can hardly hold a candle to even the simplest learning machines of fiction. As research into AI construction surges forward, so does research into AI safety.
Films like 2001: A Space Odyssey and The Terminator not only cemented the AI villain as a staple of science fiction, but also created a picture of what robotics researchers can and should avoid. Researchers will recognize the signs of danger because HAL 9000, Ultron, and Skynet marked them. Because of these films, we are better equipped to circumvent the short-sighted mistakes of fictional AI creators. To avoid catastrophe, real-world researchers must consider the larger scope and effects of their actions, so that in the distant future, when AI finally arrives as we saw it in theaters years ago, we will be ready for it.