The Rise of Explainable Artificial Intelligence
John McCarthy introduced the term Artificial Intelligence in his 1955 proposal for the Dartmouth Conference, describing it as the art of creating intelligent, self-thinking machines. Even before the term was coined, considerable research was under way in this field, and as a result we now have Amazon’s Alexa, Microsoft’s Cortana, Apple’s Siri, Tesla’s Autopilot system, Google’s DeepMind, IBM’s Watson, and many more. Artificial Intelligence is a concept that millennial kids were introduced to through science fiction movies rather than books or blogs. VIKI (Virtual Interactive Kinetic Intelligence), the AI-equipped supercomputer from the movie I, Robot that aimed for the extinction of humans; JARVIS, Iron Man’s AI assistant; or Skynet from The Terminator, which takes control of military operations: they all seemed so fascinating on the big screen. These movies are like complex numbers, with real and imaginary parts, the latter contributing more. Yet they carry a powerful message, a message of caution around these machines. With every second, we grow more reliant on and dependent upon machines than ever before. A study conducted by Times revealed that 66% of participants said they were uncomfortable relying on AI or sharing data with it.
Studies also suggest that in just 10 years, Artificial Intelligence will outdo humans. Artificial Intelligence has always been associated with two main questions: Will it make us jobless? And how much can we trust these systems? After many recent cases of AI failing, or acting in supposedly unexpected ways, I believe these questions are appropriately raised. After all, Artificial Intelligence is developed by humans; how can we expect it to be foolproof? Several cases have surfaced in the media and received enough publicity to raise trust issues with AI: the case of Alice and Bob, the Facebook AI chatbots that started communicating with each other in a secret language, or Google Home Minis secretly recording their owners’ audio. There are several more examples that discourage us from trusting AI.
How it works
A perfect example of an AI failing for lack of trust is IBM’s Watson for Oncology, meant for cancer care. Watson failed because doctors could not understand its functioning and decision-making algorithms. If I worked as a data analyst and received results from an AI, I would want to know how it retrieved the data or reached its conclusions. For example, one image-recognition AI detected horse images by looking at a copyright tag attached to them rather than by learning the visual features of a horse. Trust is the foundation of any relationship, whether between humans or between humans and machines. Computer scientists know very well that the key to the future of Artificial Intelligence is human trust. Gaining that trust requires transparency, so Artificial Intelligence could no longer remain a black box. This paved the way for the emergence of comparatively transparent AI called Explainable Artificial Intelligence. Explainable AI conveys the reasoning behind its decisions to humans. With AI, neural networks, and machine learning growing at breathtaking speed, it becomes complex to explain these decision-making processes in simple human terms.
It also calls for additional effort to make algorithms more explicable. Explainable AI likewise helps creators find faults or loopholes in their algorithms. In 1997, researchers investigated how AI could be used in healthcare and medicine. A neural network was trained to predict the mortality risk of pneumonia patients, and across 4,352 patients it predicted mortality risk with 98.5% accuracy. The interesting part of this case is that the AI suggested that pneumonia patients who also had asthma had a lower risk of death, and even that they did not need hospital treatment. This raised eyebrows and called for further explanation of the outcome.
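The pneumonia case above can be reproduced in miniature. The sketch below uses hypothetical simulated data (not the 1997 study’s actual dataset): asthma patients are, by assumption, always treated aggressively, and aggressive treatment lowers the chance of death. Any model fit to such data will learn the suspicious rule "asthma implies lower risk", and only a transparent comparison of the groups lets a human notice it.

```python
# Minimal sketch of how confounded data produces a suspicious rule.
# All numbers here are illustrative assumptions, not real clinical data.
import random

random.seed(0)

def simulate_patient():
    """One pneumonia patient: asthma triggers aggressive treatment,
    and aggressive treatment lowers the chance of death."""
    has_asthma = random.random() < 0.2
    treated_aggressively = has_asthma          # the hidden confounder
    base_risk = 0.15
    risk = base_risk * (0.3 if treated_aggressively else 1.0)
    died = random.random() < risk
    return has_asthma, died

patients = [simulate_patient() for _ in range(4352)]

def mortality(group):
    """Observed death rate within a group of (has_asthma, died) pairs."""
    return sum(died for _, died in group) / len(group)

asthma = [p for p in patients if p[0]]
no_asthma = [p for p in patients if not p[0]]

print(f"mortality with asthma:    {mortality(asthma):.3f}")
print(f"mortality without asthma: {mortality(no_asthma):.3f}")
# The asthma group shows lower *observed* mortality, so a model trained
# on this data would learn "asthma -> lower risk" -- exactly the kind of
# rule an explainable model exposes for clinicians to question.
```

The point is not the arithmetic but the visibility: a black box would silently apply the learned rule, while an explainable model surfaces it for review.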
Digging deeper, the researchers found that the patients with asthma had been treated aggressively and hence had low mortality rates. This quirk of historical human data led the AI to inappropriate conclusions. Transparency in AI will help us avoid such mistakes and help it serve humans satisfactorily and reliably. The AI giants understand that sustainability depends on exploring the explainability of AI. Companies like Twitter, Google, and Airbnb have already released transparency reports. Oracle has invested $5.4 billion and has a dedicated team of PhD computer scientists working on explainable AI at a research facility called Oracle Labs. I strongly believe that the rise of Explainable Artificial Intelligence is just the beginning of a new era of strong human-machine co-existence and interdependence.