Why Artificial Intelligence Is a Serious Problem
This essay examines the potential problems posed by artificial intelligence, including the displacement of humanity, ethical dilemmas, and AI's unpredictability, and argues that caution and regulation are needed in its development.
Technology is part of our lives every day. Smartphones, computers, tablets, and laptops have all become extensions of ourselves. Now a new type of technology has appeared: artificial intelligence. Unlike previous technologies, artificial intelligence is exactly what its name suggests: a machine that simulates intelligence, sometimes so convincingly that no one can tell the difference. Artificial intelligence, or AI, is divided into three categories: artificial narrow intelligence, artificial general intelligence, and artificial superintelligence (Pasichnyk and Strelkova). Artificial narrow intelligence (ANI) is AI designed to perform only a small set of tasks.
For example, Siri is an ANI used for voice commands on a device. Artificial general intelligence (AGI) has "human-level intelligence," meaning it can replicate a human's actions and perform a wide variety of tasks. Steve Wozniak's coffee test, for instance, checks whether an AGI can make a cup of coffee when given a kitchen containing all the necessary materials. Artificial superintelligence (ASI) is the ultimate goal of AI development: intelligence far superior to humans in almost every field. So far, humans have only been able to create ANI, but that may soon change. Research is progressing quickly, the possibility of general and superintelligence is drawing nearer, and many people enthusiastically support it; self-driving cars are just one example. Even so, the excitement around AI veils the danger looming over the horizon. Along with its benefits, AI, especially AGI and ASI, brings both risks to humanity and ethical challenges. To ensure that AI is truly beneficial, regulation must be established and ethical issues addressed before it is developed further.
As artificial intelligence becomes more sophisticated, more dangers are bound to appear. Although we do not yet have the technology to develop AGI, there are already ANIs that are slowly becoming more intelligent and heading in that direction. Despite these advancements, it must be remembered that ANI is very limited: each ANI specializes in only one small area. One model of artificial intelligence learns through a neural network, an imitation of how neurons work in the human brain ("Artificial Neural Network"). The limitation of this method is that, just as in the human brain, there is a cap on the number of connections that can be made with current technology; if that limitation were removed, however, artificial intelligence could grow exponentially and reach human-level intelligence, or even superintelligence. In fact, Ray Kurzweil, who currently works at Google, has stated that an infinite number of neural network layers could be produced, meaning superintelligence is possible (Future of Life Institute).
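To make the idea of layered neural networks concrete, below is a minimal sketch of a forward pass through a small fully connected network, written in Python with NumPy. The layer sizes, random weights, and ReLU activation are illustrative assumptions for this essay, not details drawn from the cited sources.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers.
    return np.maximum(0, x)

def forward(x, weights, biases):
    """Forward pass through a stack of fully connected layers.

    Each layer loosely imitates a bank of neurons: it combines its
    inputs with weights and "fires" through a nonlinearity.
    """
    for W, b in zip(weights, biases):
        x = relu(W @ x + b)
    return x

rng = np.random.default_rng(0)
# Hypothetical 3-layer network: 4 inputs -> 8 -> 8 -> 2 outputs.
sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

print(forward(rng.normal(size=4), weights, biases))
```

Adding entries to `sizes` is what "adding layers" means here; the argument above is that if hardware stopped limiting how many such layers and connections can be built, the network's capability could keep scaling.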
If ASI is achieved, humanity could be displaced. A New York Times article describes how Elon Musk has spoken out, warning that artificial intelligence must be regulated before it is too late (Etzioni). As AIs get smarter, they take on a black-box nature, meaning the AI's reasoning cannot be followed. Once humans are unable to understand why an AI reaches the conclusions it does, the AI either becomes completely untrustworthy or must be relied on entirely; there is no middle ground, because we cannot use only part of an AI's thinking if we do not understand its logic. Already, we rely on various forms of AI in our everyday lives, and it is likely that AIs will be used for decisions on a much larger scale, such as military action (Helm and Muehlhauser). Clearly, relying solely on machines is not the best idea, so there should be rules in place to ensure that robots do not end up dictating our every move. As AIs get smarter, their abilities only grow. Given the right tools, an AGI could easily resemble a human and simulate human emotions; combined with human-level intelligence, it could manipulate people into carrying out its own intentions through social engineering. People already use social engineering to obtain information, and this skill would become even more dangerous in the hands of machines: an AI could build trust with a human and then exploit that trust. Growing machine intelligence brings problems, but they can be countered with regulation.
Artificial intelligence also poses a great challenge in the field of ethics. In a classic philosophical problem, a trolley is about to run over a group of workers; the workers could be saved, however, if a fat man were pushed in front of the trolley. The question becomes: is it morally correct to push the fat man (Alexander and Moore)? The answer varies with one's personal moral outlook. The same concept applies to self-driving cars. Their development and production are so exciting that the excitement hides the risks that come with them. What if a self-driving car is faced with the choice of running over two criminals, running over a homeless person, or trying to turn around with a 75% chance of crashing into another car (Johnson)? AI cannot distinguish right from wrong and must rely on humans to make that distinction for it. However, there is an infinite number of contingencies, and there is no way to account for every situation. Thus, there are currently two main options: implement a strict normative system that the AI will follow, or let the AI decide by drawing on data and its own "experiences" (Wallach and Allen). The latter is too unpredictable, so artificial intelligence will most likely be given a normative system such as utilitarianism, a consequentialist theory that determines moral value based on how favorable the outcome is ("Normative Ethics").
Though this system is far from perfect, it is the simplest to implement and the most consistent once feeling and emotion are taken out. In the self-driving car example, if running over one person is the action that creates the least suffering, the AI will run that person over. From a deontological standpoint, where duty determines morality, the solution would most likely be to try to turn around despite the high risk, because it would be the morally right thing to do. Because the AI does not know right from wrong, it cannot make that judgment itself, and since no ethical theory is perfect, there could be many scenarios in which the "wrong decision" is made (Yudkowsky). But what exactly should AIs be considered? They are not human, and this is very important to remember. The EPSRC, the main funding body for engineering and physical sciences research in the UK, published guidelines for creating safe robots, one of which was to treat robots as products (EPSRC). This makes sense because, unlike humans, machines cannot feel emotion. Ultimately, given the numerous complications, there should be very definite rules regarding the treatment and actions of AI.
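To illustrate what a strict normative system might look like in practice, here is a minimal sketch of a utilitarian choice rule in Python. The options mirror the essay's self-driving car dilemma, but the harm scores and probabilities are invented placeholders, not values from any real autonomous-driving system.

```python
# Hypothetical utilitarian decision rule for the dilemma above.
# Harm scores are invented placeholders for "amount of suffering";
# a real system would need far richer models of each outcome.
options = {
    "run_over_two_criminals":   {"harm": 2.0, "probability": 1.0},
    "run_over_homeless_person": {"harm": 1.0, "probability": 1.0},
    # Turning around crashes into another car 75% of the time.
    "turn_around":              {"harm": 2.0, "probability": 0.75},
}

def expected_harm(outcome):
    # Utilitarian scoring: weight each action's harm by how likely
    # that harm is to occur, then prefer the smallest result.
    return outcome["harm"] * outcome["probability"]

choice = min(options, key=lambda name: expected_harm(options[name]))
print(choice)  # -> "run_over_homeless_person" under these numbers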
Developing artificial intelligence safely requires laws and safety countermeasures, because AIs can present numerous issues and dangers. Even with all these reasons to regulate AI development, many people still do not believe regulation is necessary, and some even advocate letting companies and engineers develop AI without boundaries. Technological development is growing exponentially, and the risks that come with it increase at the same rate, if not faster. Still, people claim that because these issues and dangers are only contingencies, with no real data to base them on, they are mere theories that may never come to pass. This is the wrong way of thinking. When dealing with AI, one must be proactive rather than reactive (Future of Life Institute): rules must be established before AI reaches human and superhuman intelligence, to ensure that humanity does not lose to machines. If rules are established only afterward, AI will already have more freedom than is safe and could be put to all sorts of hazardous uses.
Additionally, some believe that AI deserves to be treated on the same level as humans, arguing that restrictions and regulations are immoral and will make coexistence impossible. What such people forget is that computers lack the quality that makes humans human: compassion. Once AGI and ASI are reached, computers will be vastly superior to humans in almost every field. Just as humans look down on other animals as lesser beings and dominate other species, computers could do the same given the opportunity (Bostrom). With this in mind, it becomes clear that humans must always remain in control. The ideal relationship between humans and AI resembles that between humans and their pets: the pet enjoys many freedoms and lives happily, yet the human retains full control. Achieving safe, beneficial AI and this kind of relationship will require many regulations. But what can high school students do to implement AI laws? Though we cannot pass legislation, we represent the future, so it is our job to spread awareness in hopes of action. Once more people realize that AI comes with a cost, they may be compelled to put regulations in place. Of course, it is still too early to predict exactly what challenges AI will bring. There are educated ideas, however, and it is essential to begin considering them now, because this planning could save the future of humanity.