Why Artificial Intelligence Must be Regulated


In the past decade, tremendous strides have been made in computing technology thanks to Moore's law, the observation that the density of transistors that can be manufactured on a microchip roughly doubles every two years. This has led to dramatically increased computing power and has allowed previously theoretical concepts, such as Neural Networks, to become practical in modern society. The negative impact these new technologies could have, however, is often disregarded in favor of uncontested innovation. Although some may argue that Artificial Intelligence poses no risk to the future of human society, experts warn that malicious uses of Artificial Intelligence are inevitable and outline the steps necessary to prevent such an outcome.

To better understand Artificial Intelligence’s significance and potential, it is necessary to understand how it works. In traditional Artificial Intelligence, a computer makes “decisions” based on previously decided guidelines. For example, a pathfinding AI might move forward until it encounters a wall, at which point it is programmed to turn to the left and repeat the same set of steps, also known as an algorithm. A Neural Network, however, operates entirely differently. A Neural Network is tabula rasa, or a blank slate, meaning it has no previous or starting knowledge; everything is learned through trial and error. This is significant because, while Neural Networks require far more time and computing power to train, along with a vast dataset, they end up much more accurate and can accomplish tasks that simply could not be programmed into traditional Artificial Intelligence (Hurrion 337-338). Solver describes Neural Networks as “relatively crude electronic networks of ‘neurons’ based on the neural structure of the brain.” Further, this source explains how they “process records one at a time, and ‘learn’ by comparing their classification of the record with the known actual classification of the record.” The Neural Network learns from its errors, as the “initial classification of the first record is fed back into the network, and used to modify the networks algorithm the second time around, and so on for many iterations” (Solver). This training method has been around since the early 1990s but, due to lackluster computing power, was not practical on large-scale problems until the early 2010s.
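The training loop Solver describes can be sketched in a few lines. The following is a minimal illustration, not code from any of the essay's sources: a single artificial "neuron" starts as a blank slate, processes records one at a time, compares its classification with the known label, and feeds the error back to adjust itself over many iterations.

```python
def train(records, labels, epochs=50, lr=0.1):
    weights = [0.0] * len(records[0])  # "blank slate": no starting knowledge
    bias = 0.0
    for _ in range(epochs):            # many iterations over the data
        for x, target in zip(records, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            predicted = 1 if activation > 0 else 0
            error = target - predicted  # compare with the known classification
            # feed the error back to modify the network for the next pass
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy dataset: learn the logical AND of two inputs through trial and error.
records = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(records, labels)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
         for x in records]
print(preds)  # → [0, 0, 0, 1]
```

This single-neuron learner is far simpler than a real Neural Network, but the feedback cycle — classify, compare, correct, repeat — is the same idea scaled down.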

Although Neural Networks are good at many things, such as pattern recognition and data analysis, “they’re not well-suited for tasks that require logical reasoning or putting many pieces of information together” (Hitchings). As outlined in the journal article Using a Neural Network to Enhance the Decision Making Quality of a Visual Interactive Simulation Model, a Neural Network can make decisions almost as accurately as a hard-wired AI: in the experiments reported there, the mean difference between the Neural Network and traditional Artificial Intelligence was 0.0417%. This result was described as “very encouraging,” showing “negligible bias between the simulation results and corresponding results obtained from the neural network” (Hurrion 337-338). While the slightly decreased accuracy is a disadvantage, it is a minuscule tradeoff considering that this technology can be applied to almost any problem that could not be solved another way, and that Neural Networks have several significant advantages over traditional Artificial Intelligence. One of the most significant is also outlined in Hurrion’s work, where he describes how the aforementioned result “confirms that the neural network has ‘learnt’ the response to the different configurations in the training set” (Hurrion 338). As previously mentioned, while substantial resources are required to train a Neural Network, the processing time once it is fully trained is orders of magnitude quicker: “The time to obtain a solution from the neural network took microseconds, while the time taken to confirm the solution using the simulation was close to 3 minutes per configuration” (Hurrion 338).
Although computers have gotten exponentially faster since 1992, when Hurrion’s paper was published, solution time is still an extremely important consideration, especially in time-sensitive applications such as self-driving technology. This enormous upside allows the processing of solutions that would otherwise take far too long with current resources to be practical.
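A bias figure like Hurrion's 0.0417% mean difference can be computed by comparing each network estimate with the corresponding simulation result and averaging the percentage differences. The sketch below illustrates the calculation only; the numbers are invented placeholders, not data from the paper.

```python
def mean_percent_difference(network_outputs, simulation_outputs):
    # Average absolute difference, expressed as a percentage of the
    # simulation result that the network is trying to reproduce.
    diffs = [abs(n - s) / s * 100.0
             for n, s in zip(network_outputs, simulation_outputs)]
    return sum(diffs) / len(diffs)

simulation = [120.0, 95.0, 210.0, 150.0]   # hypothetical simulated results
network    = [120.1, 94.9, 210.2, 149.9]   # hypothetical network estimates

print(round(mean_percent_difference(network, simulation), 4))  # → 0.0876
```

A value this small is what Hurrion means by "negligible bias": the trained network reproduces the simulation's answers almost exactly, only far faster.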

The United States Military has already realized the potential of Artificial Intelligence and Neural Networks within military technology. In a joint effort with Google, the Department of Defense launched Project Maven. During a presentation on the technology, Marine Corps Col. Drew Cukor, chief of the Algorithmic Warfare Cross-Function Team, said that by the end of the year, “the department will field advanced computer algorithms onto government platforms to extract objects from massive amounts of moving or still imagery” (Pellerin). This new technology will allow better differentiation of friend from foe and will make surveillance operators much more effective in distinguishing targets.

Military applications are just one use of AI’s current best skill: image recognition. Neural Networks provide nearly endless possibilities in modern society, proving extremely valuable for medical diagnosis, pattern recognition, and especially image recognition (Henderson). In the medical field, AI is being used to diagnose breast cancer at unprecedented rates. Researchers at Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, Massachusetts General Hospital, and Harvard Medical School worked together to create a machine learning algorithm that helps to “predict if a high-risk breast lesion identified on biopsy will be upgraded to cancer at surgery, or whether the lesion could be safely surveilled.” The model was then tested on 335 high-risk lesions and “correctly diagnosed 97% of the breast cancers as malignant.” It also reduced benign lesion surgeries by “more than 30%” compared to current methods (Massat). This is a prime example of technology and Artificial Intelligence being used to save lives, and it serves as an insight into the potential for Artificial Intelligence to work hand-in-hand with humanity.
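The kind of triage decision the MIT/MGH model supports can be sketched as follows. This is a hedged illustration, not the researchers' actual model: it assumes the model outputs a probability that a high-risk lesion will be upgraded to cancer, and that lesions below some threshold could be surveilled instead of excised. The probabilities, threshold, and cases below are invented for illustration.

```python
def triage(prob_malignant, threshold=0.05):
    # Hypothetical rule: operate only when the predicted upgrade
    # probability is at or above the threshold; otherwise surveil.
    return "surgery" if prob_malignant >= threshold else "surveillance"

# (predicted probability, actually malignant) pairs -- hypothetical cases
cases = [(0.90, True), (0.40, True), (0.02, False),
         (0.01, False), (0.30, False)]

decisions = [(triage(p), malignant) for p, malignant in cases]
caught  = sum(1 for d, m in decisions if m and d == "surgery")
avoided = sum(1 for d, m in decisions if not m and d == "surveillance")
print(caught, avoided)  # → 2 2
```

The clinical value lies in both counts: every malignancy in this toy set is sent to surgery, while some benign lesions avoid an unnecessary operation — the same tradeoff behind the reported 97% detection rate and 30% reduction in benign surgeries.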

Self-driving car technology is also one of the major focuses of Artificial Intelligence. While self-driving cars are certainly convenient, the main focus for most companies is safety technology. According to Jeff Schneider, senior engineering manager at Uber and a research professor at Carnegie Mellon University, “94% of car crashes” are caused by human error, with half of the mistakes due to “recognition errors,” such as a lack of awareness or attention by the driver, and the other half the result of a “decision error,” such as the driver going too fast or misreading the situation. According to Schneider, self-driving vehicles can address both kinds of error through a mixture of advanced hardware and carefully programmed software combined with large datasets. Recognition issues would be minimized by using sensors, radar, cameras, Lidar (a remote sensing system), and other devices. The cars map objects around them in 3D, receive 360-degree camera views, and have access to other data such as the velocities of nearby objects. This data is fed into complex models to better analyze the environment and make the correct driving decisions (Global Focus North America). These innovations will not only make humanity safer but will help further the goals and aspirations of mankind.
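The fusion idea Schneider describes can be sketched in simplified form. This is not Uber's actual pipeline: the sensor names, data shape, and braking rule below are invented to show how detections from several sensors might be merged into one picture of the road ahead before a driving decision is made.

```python
def plan(detections, safe_distance=20.0):
    """Each detection is (sensor, distance_ahead_m, closing_speed_mps)."""
    if not detections:
        return "maintain speed"
    # Fuse naively by trusting the nearest reported object in the lane.
    sensor, distance, closing_speed = min(detections, key=lambda d: d[1])
    if distance < safe_distance and closing_speed > 0:
        return "brake"
    return "maintain speed"

detections = [
    ("camera", 35.0, 0.0),   # vehicle far ahead, matching our speed
    ("lidar",  12.0, 4.0),   # obstacle 12 m ahead, closing at 4 m/s
    ("radar",  12.5, 4.1),   # radar confirms roughly the same obstacle
]
print(plan(detections))  # → brake
```

A production system fuses far richer data — full 3D maps, 360-degree camera views, object velocities — but the principle is the same: redundant sensors cross-check one another so that a recognition error by any single device does not become a decision error.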

While Artificial Intelligence has almost infinite potential benefits and uses, it is important to make sure that Artificial Intelligence doesn’t get out of control. Stephen Hawking is quoted as saying that “the development of full artificial intelligence could spell the end of the human race.” Bill Gates told Charlie Rose that “A.I. was potentially more dangerous than a nuclear catastrophe.” Nick Bostrom, an Oxford philosophy professor, warned that “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.” The best way to secure humanity’s fate alongside Artificial Intelligence is to introduce regulation ensuring that it doesn’t get out of control, potentially causing the demise of humanity (Dowd).

In February 2018, a revolutionary report was published. It was led by the University of Oxford but included leading researchers from around the world. The report, titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, focused on the legislation necessary to make Artificial Intelligence sustainable in the future. In their report, the researchers outlined four crucial recommendations, the first being that lawmakers should “collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.” This is a crucial step that will allow lawmakers to better understand the potential problems humanity faces from Artificial Intelligence, leading to more effective regulation that protects humanity without stifling the development and improvement of Artificial Intelligence. The responsibility, however, doesn’t fall entirely on the lawmakers. The second recommendation is for researchers and engineers studying Artificial Intelligence to “take the dual-use nature of their work seriously” and to consider possible misuses to “influence research priorities and norms,” as well as keeping politicians informed and “proactively reaching out to relevant actors when harmful applications are foreseeable.” Researchers need to be aware of the potential implications of their technology and take that into account instead of pursuing advancement at all costs, while also keeping other researchers informed. The third recommendation describes how knowledge from other, more mature sectors such as cybersecurity should be used to better strike the balance of the dual-use concerns of Artificial Intelligence.
The researchers’ fourth and final recommendation is that researchers should “actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.” This will allow those in power, as well as the general population, to better understand where Artificial Intelligence is headed and what steps need to be taken to make sure that Artificial Intelligence stays in the best interest of humanity. In short, researchers need to share their work with others and be cognizant of the potential impact their work may have on society.

The introduction of these new technologies makes it obvious that Artificial Intelligence already has, and will continue to have, a significant place in society. However, this new technology can easily get out of control if researchers aren’t careful. It is important that lawmakers and researchers alike listen to experts and always keep the best interests of humanity above everything else. Although Artificial Intelligence can, and should, be used for good to further humanity, it is important to introduce regulation before bad actors misuse the power of Artificial Intelligence, possibly to the demise of humanity.
