The Beauty and Danger of Artificial Intelligence
Since the dawn of novels and television, artificial intelligence in the form of robots has been a recurring theme in science fiction. The overdramatization of hostile artificial intelligence in these fictional narratives has led the general public to form an aversion to the idea of further developing artificial intelligence. The inherent fear of the unknown has also contributed to this problem: people are afraid of creating a race that could potentially replace us as the dominant species, and of a future where technology and biology are indistinguishable from one another. As shown in a multitude of thrillers, artificial intelligence seemingly has the capability to do both.
To debate fairly whether we should pursue this type of higher technology, it is important to distinguish the kind of artificial intelligence that possesses the capability to go awry and dominate the human race from the kind that cannot. For the purposes of the argument presented in this paper, we assume this distinction lies predominantly in whether or not the AI is both sentient and sapient. Sentience is the capability to feel emotions and sensations. Sapience is the capacity to think critically and reason, an ability so far unique to human beings. If we were able to successfully imitate these two metaphysical qualities in technology, then it is possible that such machines could pose the dangers depicted in fiction.
On the other hand, there is ample reason to believe that the glorification of robots in books, television, and film has led to widespread miseducation of the public through inaccurate portrayal. Does artificial intelligence actually have the potential and capability to “take over the world”? Though it is arguably possible, much of that potential lies in the hands of its creator. What is the probability that these creations will go awry and end up harming the human race? Is that probability negligible? Furthermore, there are multifarious external factors that we must consider in order to fairly assess this ethical issue.
For these reasons, I surmise that the pros may outweigh the risks; it would therefore be easy to say that we should proceed with our advancement of artificial intelligence and work toward imitating human sapience. However, it would be unsophisticated to give a definitive answer to the question “should we or should we not create artificial intelligence?” without defining stringent conditions. Thus, the question is not “can we,” nor “should we,” but “what restrictions and rules should we place on both the creators and the creations?” Will we be able to reach a consensus on these stringent rules on the basis of ethics and morality?
First, we must explain why it is a good idea to develop AIs and refute the counterarguments. Consider the Catholic Church and other religions: many religious extremists have long opposed the advancement of technology, “playing God,” and extending a person’s life past “their time.” Many were afraid of the development of even simple medicine; to this day, some cultures refrain from medicine and hospitals, leaving the end of a human life entirely up to the deities they believe in. But being wary does not mean we should not venture into the unknown. History has shown that advanced research, though seemingly unethical at the time, has in turn drastically enhanced human life (e.g., stem cell research).
If artificial intelligence has the capability to improve our quality of life, then it is imperative that we continue advancing it. The fear of robots taking over the world begins to seem irrational when we consider that the human race is advancing rapidly as well. The general population is far more intelligent, educated, and robust than it has ever been. Our own development and betterment will accompany the development of artificial intelligence. To think that robots can and will take over dismisses the wondrous complexity, intelligence, and capability of human beings. Returning to the theme of film and television, there is always a hero or heroine who beats the odds. Furthermore, developing more complex and conscious artificial intelligence will help us understand these machines, and ultimately ourselves, far better. With how fast technology advances and our web of knowledge expands, it is hard even to predict what level we will reach within our own lifetimes.
While it is easy to list the merits of positive artificial intelligence scenarios, we must also consider and refute the drawbacks. A common concern is that robots will push humans out of the workforce. From an economic standpoint, this may be true in the short run, but it will ultimately push us in the right direction. The well-being and prosperity of a country is determined by its productivity: how efficiently it produces goods and services. With artificial intelligence taking up some human occupations, we will have more resources, tools, and time to refine and advance other areas.
The introduction of machines into factories has historically helped us advance as a species; the loss of jobs involving repetitive work, like screwing on a toothpaste cap or placing a label on an orange juice bottle, has pushed the general population to become more educated and to focus on jobs that require more brainwork. It has led to more of us being college educated than ever before. If sapient, conscious machines were able to take up more than simple factory jobs and move into fields such as counseling, education, and banking, it is likely that the human race would continue to advance as well and that new job markets would open up, even as many of the jobs we hold now become obsolete. It is also possible that this goes in a different direction: humans will have more leisure time, and the social aspect of our lives and our general happiness may improve as well.
If machines obtain the ability to do all these things, then the ethics of artificial intelligence becomes a very complex argument with a huge gray area. When the line becomes blurred and artificial intelligence becomes both sapient and sentient, what is the ethics of treating them however humans please? At that point, do they attain the same status as human beings?
To date, no machine has convincingly passed the Turing Test, proposed by Alan Turing (1912–1954), and thereby demonstrated sapience. Machines are unable to exercise free will, engage in creative activity, or display real feelings, and are therefore unable to deceive human beings into thinking they are human as well. While I disagree with developing an AI for the sole purpose of passing the Turing Test and outwitting human beings, I understand that overcoming this roadblock would allow humans to understand artificial intelligence better and move us toward the creation of helpful sapient AIs, such as a more advanced ELIZA (an early program designed to act as a counselor).
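To see why programs like ELIZA fall short of sapience, it helps to look at the technique they actually use: surface-level pattern matching, with no understanding behind the replies. The sketch below is a minimal illustration of that approach in Python; the keyword rules are invented for this example and are not Weizenbaum’s original script.

```python
import re

# A tiny ELIZA-style responder: keyword rules map patterns in the user's
# input to templated replies. The rules are illustrative, not the
# original ELIZA script.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."

def respond(utterance: str) -> str:
    """Return the reply for the first matching rule, else a default."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious about machines"))
# -> Why do you feel anxious about machines?
print(respond("The weather is nice"))
# -> Please tell me more.
```

The program merely echoes the user’s own words back in a template; it has no model of emotion or reasoning, which is exactly why such systems cannot pass a sustained Turing Test.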
Given a future in which we achieve sapient and sentient AI, whom are we to punish if an AI goes awry: the AI or its creator? If the inventor is unable to predict that her or his invention will take a turn for the worse, or an accident happens because of a bug, how severe should the punishment be? It should be severe enough to incentivize creators to take precautions in developing their machines, but where is that line drawn, just how severe is “severe,” and what is to happen to the machine that has gone rogue? To delve further into the ethics, we must also consider that machines will not necessarily malfunction on their own: the use of artificial intelligence can easily be corrupted. Whether human beings are inherently good or bad is another topic entirely, but it is inevitable that some humans and organizations would take advantage of artificial intelligence and use it in harmful ways to benefit themselves.
The scenario in which “robots go evil” is not a black-and-white case; “evil” is a broadly defined word that could mean something as severe and permanent as killing or as subtle as discriminating. For example, the article “The Ethics of Artificial Intelligence” by Nick Bostrom and Eliezer Yudkowsky describes an example in which a bank uses an AI to recommend mortgage applications; the bank claims that since it is a machine, it cannot be racially biased, yet statistics show the machine disproportionately denying black applicants. The Fair Housing Act of 1968 made discrimination in housing illegal, yet this exact discrimination based on skin color still occurs today. Minorities can no longer legally be prevented from obtaining citizenship, yet their basic human rights are infringed upon all the time.
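The bank example above is detectable with a very simple statistical audit: compare approval rates across groups. A minimal sketch follows; the decision records and group names are hypothetical, invented purely for illustration.

```python
# A sketch of auditing automated loan decisions for disparate impact:
# compare approval rates between two groups of applicants.
# The records below are hypothetical, not real data.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    """Fraction of applicants in `group` whose loans were approved."""
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")  # 3/4 = 0.75
rate_b = approval_rate(decisions, "group_b")  # 1/4 = 0.25
ratio = rate_b / rate_a                       # ~0.33

# The "four-fifths rule" from US employment-discrimination practice
# flags a ratio below 0.8 as evidence of adverse impact.
print(f"approval ratio: {ratio:.2f} -> {'flagged' if ratio < 0.8 else 'ok'}")
```

The point is that “the machine cannot be biased” is an empirically checkable claim: a model trained on biased data reproduces that bias, and a few lines of auditing can expose it.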
Racism still runs rampant in our country, despite discrimination being outlawed almost half a century ago. Minorities are still disadvantaged on every level: social, political, and economic. This example of a corrupted AI shows that corrupt people will continue to find ways to suppress others. Where does the blame lie in this case? Do we punish both the AI and its creator? And if we have the ability to punish “bad” artificial intelligence, then is it not moral to protect “good” artificial intelligence under the law?
Though older movies tend to depict robots in a menacing light, modern films like Her and television series like Black Mirror show very real issues that may arise from conscious, sapient artificial intelligence. Once we have machines that can successfully and consistently pass the Turing Test, we enter a gray area of ethics. What level of rights do they receive? It is unlikely that we will continue to treat machines that have emotions and free will as inanimate objects. It would feel wrong to take apart a sentient machine as we would an inanimate computer; when it has emotions and free will, doing so will seem more like execution than “simply taking apart.” We must then consider what level of rights such machines should receive. Can we treat them like animals, which also have emotions and free will? Or are they intelligent enough to be on the level of human beings? How thin is the line between these two, and what warrants a crossover from one side to the other?
While I do not have an exhaustive answer to these questions, raising these thought-provoking considerations supports the claim that it would be unsophisticated to simply say yes or no to the advancement of conscious artificial intelligence. Even if the pros greatly outweigh the cons, we must still consider ethics and define the line between human rights and artificial intelligence rights, if we choose to draw a line at all.