Should the Innovative Evolution of Artificial Intelligence be Regulated?
Technology is rapidly advancing every year throughout the globe. This pace of advancement has prompted many governments to consider regulating the development of AI, also known as Artificial Intelligence (Scherer 2016). Industries, too, are becoming increasingly interested and involved in integrating AI into the daily lives of society. AI can impact a society through its economy, its environment, and its ethics. The near future will most likely bring a further evolution of AI (Scherer 2016) that can create both benefits and potential risks. The debate centers on how laws would regulate AI, and whether the effects of AI will damage or improve society.
The problem is that Artificial Intelligence is a contentious topic with two sides to agree or disagree with. Much of the concern stems from the belief that AI will enable computers to work and perform tasks just like humans (Etzioni, A., & Etzioni, O. 2018). Society finds this concern threatening because if machines can do tasks just like a human, jobs will be decimated. Machines will become the operators of most jobs because they will be programmed to perform tasks without imperfections, reducing the number of humans needed to work those jobs (Galston, W. A. 2018). Because of this discussion, regulation is a major concern: if AI is regulated, there are more restrictions on the extent to which AI can evolve.
How it works
AI has been used extensively throughout the world to perform tasks efficiently and quickly. This technology is impacting and influencing society more than ever because AI is seen everywhere you go.
Artificial Intelligence is defined as “the activity devoted to making machines intelligent, and can enable a machine to function appropriately with knowledge of its environment” (Etzioni, Amitai 2017). Throughout this paper, I will present and analyze two perspectives that dispute the use of AI and whether the innovative evolution of AI should be regulated.
One perspective is that AI should undoubtedly be regulated because of the damage it can cause society. For example, world-renowned individuals such as tech billionaire Elon Musk and physicist Stephen Hawking believe that AI should be regulated to prevent potential risks (Straub 2017). In his article titled “Does regulating artificial intelligence save humanity or just stifle innovation?”, Straub (2017) argues that he has seen how beneficial AI is through his own research. He states that most of us have encountered AI in many different circumstances, such as online shopping, homework help for students, or even airport equipment, making everyone’s lives easier to manage. He supports the view that this type of advanced technology should be seen as doing more good than harm because it can help humans instead of overtaking them.
This source is credible because of Straub’s background as an expert. He is an Assistant Professor of Computer Science at North Dakota State University, and he holds a Ph.D. in Scientific Computing. His argument is strong because he presents both sides of the issue. Moreover, he gives several examples of how AI can improve security, assist with human tasks, and remain easily accessible. Yet he also acknowledges that AI can reasonably be seen as a risk and understands why there might be regulation of AI. The article is from 2017, so it is current and offers a fresh perspective informed by present-day knowledge.
There have been increasing public revelations and concerns about the evolution of AI regarding its economic implications, ethical differences, and environmental effects. AI’s constant development worries many countries. In essence, debates have intensified due to public anxiety about the impact of AI on daily life as technology continues to develop.
Secondly, let us consider the opposing perspective, which holds that AI should not be regulated because of its positive impact and its role in advancing society. Bill Gates, the founder of Microsoft, and Mark Zuckerberg, Facebook’s chief executive, disagree with the regulation of AI and state that the technology is not advanced enough for any serious risks to take place (Straub 2017). A major concern is whether the regulation of AI will create economic or ethical issues that may alert governments to potential dangers.