Regulation Within the Development of Artificial Intelligence
Artificial Intelligence is currently being developed in the academic and private sectors with no questions or interference from the government. This could prove to be an oversight on the part of local and federal governments. Advances in Artificial Intelligence will have a profound impact on our future, from voice and image recognition to a possible workforce of user-friendly machines and even self-driving vehicles. The development of this technology will affect our economy locally and globally, and the average user will be able to fully immerse themselves in it. However, the current development of this technology raises ethical concerns. How do we as humans check a superintelligent robot? First, we need to ensure that the research being done and the advances in the field are monitored with our best interests at heart. There is a present need for the federal and local governments to develop an agency to serve as a watchdog over the development of ethically sound technology. Moving forward, it will be important that you, the citizen, make your opinion heard on this topic through local as well as federal government channels, to ensure that this technology is not allowed to develop unchecked.
Can Artificial Intelligence really be that bad? It seems to offer major benefits through automation and image and voice recognition. From Alexa to Google Home to the self-driving and self-parking vehicles currently in production in the United States, these products have influenced the marketplace and seem, on the whole, to be a good thing. The average user can now ask questions in real time and receive an answer within a few seconds; they can also play music on demand, ask for a recipe, and even schedule appointments and pay bills from the comfort of the couch. Companies like Amazon and Google are seeing huge gains from these developments, not only from selling the equipment but from the services and applications they provide for the end user. Most of these services and applications are owned by the parent companies themselves, but surely these products are not harmful and pose no threat. Right? Thinking so would be an oversight. These devices tend to fall between the strong and weak categories of AI: they are not strongly driven to simulate human reasoning and thought, yet they are not so weak that they are built for a single task. Self-driving vehicles, by contrast, are designed specifically to monitor lane lines and distances from objects so they can safely drive down the road and park on their own.
However, Amazon Alexa and Google Home are run by artificial intelligence assistants. Take Google Home, for example: this service is run by Google Assistant, which allows users to activate and modify vocal commands in order to perform actions on their Android or Apple devices, or to configure it as a hub for home automation. These devices raise privacy concerns. Conversations with Google Assistant are recorded so the virtual assistant can analyze and respond to them, and the millions of vocal samples gathered from consumers are fed back into the algorithms of these virtual assistants, making these forms of AI smarter with each use. This is a huge gain for companies, if only for the ability to process raw data, which may be a company's most valuable source of information. Take Amazon Alexa, for example: not only does this device categorize interests for the end user, it also gives Amazon the ability to see what is selling, what the end user is buying or interested in buying, and what the end user's interests and hobbies are. This information is extremely valuable and gives the company the opportunity to fine-tune what it develops, sells, or markets to the end user.
The worry that privacy will be disrupted and information shared without consent from the end user is certainly part of the overall feeling of bringing a smart device into your home. It was all over the news in the United States in 2018 that Google Home and Alexa devices would soon be recording violent or potentially violent situations and reporting them to emergency services for help, but this has already happened. An Amazon Alexa device recorded an incident in Bernalillo County; as Aatif Sulleyman reported, "Police say the gadget overheard the incident and recognized one of the alleged attacker's remarks as a command, and proceeded to call 911." So what triggers the recording? In this case, the victim told Alexa to call 911. Alexa did not have the capability of actually making the call, but it did alert emergency services, who contacted the victim's phone moments later. So what is recorded? How can this information be used against someone, or in their favor? What ensures privacy in the home? How does the end user know for certain that the information retained by these devices will not be used in a harmful manner against them or someone else? The answer, currently, is that the end user doesn't. Aside from the assurances of the companies behind these products that information will not be shared, it is inevitable that we will see something harmful come from this side of the technology. Even technologies that simply replace existing actions can introduce ethical concerns. Self-driving vehicles, for instance, raise the question of how to choose what happens during an accident: should the vehicle prioritize the survival of its passengers above all else, or should it choose the fewest possible deaths? Another way this technology could be helpful yet harmful at the same time is through the automation of the workforce.
Should the developers of this technology reach a point where labor tasks can be automated, we could see a huge leap forward in the automation of labor. If machines become faster, cheaper, and more reliable than their human counterparts in many areas of employment, the current job market would likely collapse.
This form of intelligence could become better at everything, but that is only part of the problem. Sam Harris may have already summed up the most likely downfall of artificial intelligence when he stated, "given how valuable intelligence and automation are, we will continue to improve our technology if we are at all able to. How do we check a superintelligence? Now, this is often caricatured, as I have here, as a fear that armies of malicious robots will attack us. But that isn't the most likely scenario. It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us." This is an interesting thought, and he went on to elaborate: "So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So, you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?" How indeed? A good place to start would be with the regulation of the development of intelligent machines.
The United States currently regulates many things through administrations and agencies. The FDA oversees food and drugs, along with a host of related categories, and the FAA oversees air traffic in the U.S.; these and a few other organizations maintain regulation and are supposed to uphold the ethical side of growth and development. The tech sector working with artificial intelligence has no such watchdog, and many people have been clamoring for a government agency to ensure that the ethical concerns of the technology are not only upheld but considered at all. In the very first paragraph of the article "Should the government regulate artificial intelligence? It already is," authors Fonzone and Heinzelman note that "the likes of Elon Musk and Stephen Hawking argue that we must regulate now to slow down and develop general principles governing AI's development because of its potential to cause massive economic dislocation and even destroy the human civilization." At present, the morality of this technology's development rests solely in the hands of its creators. How are these developers allowed to work toward creating a superintelligent being, with godlike intelligence, without the oversight of Uncle Sam or of any of the other countries currently developing in this field?
The problem is that science fiction is interesting; it has even been said that death in science fiction is entertaining. In reality, as Sam Harris stated, "it is a failure to recognize a certain type of danger." Since the average person does not realize, or refuses to acknowledge, the dangers of artificial intelligence, you would be right in thinking that the government, educational, and private sectors should come together to create a safe environment for the development of a technology that could well be on the same scale as the first atomic bomb. To date, no such watchdog agency exists. You may be pleased to know, however, that the private sector and the educational side of this development have gotten the ball rolling on the topic, and even the government has held basic discussions on the matter. What would such an agency look like, and what would it govern? First and foremost, the development of intelligence lesser or greater than that of the human race, whether that intelligence is weak artificial intelligence developed for information processing or strong artificial intelligence capable of human reasoning and understanding. This agency would maintain ethical and moral guidelines for the development of these technologies and ensure that they do not, as mentioned earlier, lead to the next atomic weapon. Again, it is not that these machines will one day spontaneously become malevolent, but rather that one day our goals may not align.
Elon Musk illustrated this by stating, "it is just like, if we're building a road and an anthill just happens to be in the way, we don't hate ants, we're just building a road, and so, goodbye anthill." On the other hand, proponents of artificial intelligence argue that there is no consensus on what artificial intelligence is or what it can actually do. Regulating AI, these advocates claim, will simply stifle innovation and allow other countries to develop the technologies that have created such a stir in the U.S. economy. Leave it to the federal government not to be outdone, though: a plan for how to deal with these technologies is currently being discussed, but it is unlikely to produce an outcome until major breakthroughs in artificial intelligence have already happened. So, with the likelihood that governing bodies will miss the opportunity to provide oversight on this technology, what is the next step? The action required to get the ball rolling on a watchdog agency rests solely with the voters in their respective countries. It will fall to private citizens to raise awareness on the topic and to bring forth the questions needed to expose the gigantic oversights on the part of the development teams, the private sector funding these programs, and the federal and local governments. These developments must be monitored in order to keep the human race's primary interests at heart.
It is also believed that cooperation in the field, from company to company and nation to nation, could provide a stable platform to build upon. Sam Harris sums up the thought: "what would the Russians or the Chinese do if they heard that some company in Silicon Valley was about to deploy a super intelligent AI? This machine would be capable of waging war, whether terrestrial or cyber, with unprecedented power. This is a winner-take-all scenario. To be six months ahead of the competition here is to be 500,000 years ahead, at a minimum." This statement does not directly call for cooperation, but it offers an enlightening thought about the fear that could surround this technology should a form of superintelligent general AI come along. How would other countries be assured that the country that first developed this technology would not use it as a means of domination over everyone else?
Think about it: what if North Korea were to have a breakthrough in artificial intelligence? This would be neither a good nor a bad thing in itself; ultimately, it would depend on how the technology was used. But with the technology in the hands of a dictator, could anyone ever be certain of the path forward? The even scarier thought is that there would be nothing anyone could do about a dictator's decision to take over the world, or to destroy it. This is where legislation would come in handy. Developing this technology ethically, with multiple parties working on it, would not only help keep our interests at heart but would allow an easier transition as artificial intelligence is adopted into multiple economies. It would also provide the basis for cooperation with other countries to ensure that no one person controls the power. After all, the U.S. did indeed drop the atomic bomb, twice.
The development of artificial intelligence must be guided by a set of morals. These morals will vary from case to case and will need to be revisited again and again as new intelligent technologies, and new problems surrounding them, emerge. Because the government seems to be dragging its feet on establishing an agency for the ethical development of technology, private citizens will have to put in the effort to ensure the conversation progresses beyond the mere potential for something to happen. If the development of this technology does not stop, which is the most likely scenario, then in the U.S. alone we could see mass displacement of workers, record-high unemployment, and a growing gap between the wealthy and the poverty-stricken. End users must be included in the process of change, from education to placement into new employment, with a governing body to help people bridge the gap between being educated and being employed in their new fields. Having a watchdog agency on the ground floor of this and other intelligent technologies would not only provide the ability to regulate what is happening throughout the field but would give governments insight into what the private and educational sectors believe will happen, providing information on what may come and how to prepare, if preparation is possible at all.