Researchers across the globe are working extensively towards achieving an artificially intelligent system that can behave in an ethically and morally right manner. Morality, the ability to distinguish good from bad, is an important human trait that researchers are now striving to infuse into machines.

But why are humans obsessing over it? Is morality even a uniquely human trait? Hasn't history shown us that humans are capable of things worse than anything an AI could possibly do?

The Moral Machine

Concerns over morality often arise in discussions of AI in areas like self-driving cars. Who dies in the car crash? Should the car protect its passengers or passers-by? The Moral Machine, an initiative by the Massachusetts Institute of Technology, gathers human perspectives on moral decisions made by machine intelligence. As part of this initiative, participants could give their opinions on what AI in cars should do when confronted with a moral dilemma.

Some of the common questions asked in this initiative to 'crowdsource' morality were:

 Should the self-driving car run down a pair of joggers instead of a pair of children?
 Should it hit a concrete wall to save a pregnant woman or a child?
 Should it put the passenger's life at risk in order to save another human?

The researchers then created an AI based on this data, teaching it the most 'predictably' moral thing a human would do. The effort was led by a collaboration between Carnegie Mellon assistant professor Ariel Procaccia and one of MIT's Moral Machine researchers, Iyad Rahwan, who designed it to evaluate various moral situations an AI can encounter.

Though it sounds like an interesting concept, how can the reliability of a machine built on crowdsourced morality be ensured? It cannot simply be trusted with complex decisions such as those involving human lives.
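At its simplest, the "crowdsource morality" idea amounts to aggregating many individual judgments into a single policy, for instance by taking the majority answer to each dilemma. The sketch below illustrates that aggregation step only; the dilemma names and option labels are invented for illustration and are not from the Moral Machine itself.

```python
from collections import Counter

# Hypothetical respondent data: for each dilemma, a list of the options
# individual participants chose. Names are invented for this sketch.
responses = {
    "swerve_vs_stay": ["protect_passengers", "protect_pedestrians",
                       "protect_pedestrians", "protect_passengers",
                       "protect_pedestrians"],
    "wall_vs_crowd": ["hit_wall", "hit_wall", "avoid_wall"],
}

def majority_choice(votes):
    """Return the option picked by the most respondents."""
    winner, _count = Counter(votes).most_common(1)[0]
    return winner

# The "policy" simply adopts the crowd's majority answer per dilemma.
policy = {dilemma: majority_choice(votes)
          for dilemma, votes in responses.items()}
print(policy)
```

Even this toy version makes the critique above concrete: the resulting policy is only as ethical as the majority of the crowd, and a different sample of respondents can flip the decision entirely.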
As experts point out, deciding among hundreds of millions of variations based on the views of a few million people can hardly be the best way. Professor James Grimmelmann of Cornell Law School has said, "Crowdsourced morality doesn't make the AI ethical. It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical."

In a similar effort, Germany released the world's first ethical guidelines for the artificial intelligence of autonomous vehicles. Developed by the Ethics Commission at the German Ministry of Transport and Digital Infrastructure, the guidelines state that self-driving cars must prioritise human lives over animals, while also prohibiting them from making decisions based on age, gender or disability.

Why Humans Are Obsessed With 'Moral' AI

In an earlier survey carried out by MIT, many respondents agreed that a self-driving car facing a calamity should sacrifice its own passenger to save more lives, yet said they would not want to ride in such a car themselves. This ambiguity raises questions about how ethical an AI system can actually be when humans' own opinions are so disparate.

Morality is abstract by nature, while machines learn best from measurable metrics, which makes teaching morality to AI next to impossible. In fact, considering instances such as the one above, it is questionable whether humans themselves share a sound understanding of morality that all of us can agree upon; 'instinct' or 'gut feeling' takes precedence in many cases. An AI player can excel in games with clear rules and boundaries by learning to optimise its score, though it has to work much harder in deep strategy games such as chess and Go; even so, Alphabet's DeepMind was able to beat the best human players of Go.
But real-life situations pose far messier optimisation problems.

For example, teaching a machine to algorithmically overcome racial and gender biases, or designing an AI system with a precise conception of fairness, can be a daunting task. Remember Microsoft's AI chatbot that learnt to be misogynist and racist in less than a day? Teaching AI the nuances of being ethically and morally correct is definitely no cakewalk.

Can AI Be Moral?

If we assume a perfect moral system exists, we could try to derive it by collecting massive amounts of data on human opinions and analysing it to produce correct results. If we could record what each person thinks is morally correct, and track how those opinions change and evolve over time and across generations, we might have enough input to train AI on these massive datasets to be perfectly moral.

Though this offers hope of building moral AI, a system that relies on human input is susceptible to human imperfection. Unsupervised data collection and analysis could in fact produce undesirable consequences and result in a system that represents the worst of humanity.

On A Concluding Note

Despite fears voiced by the likes of legendary scientist Stephen Hawking, who argued that once humans develop full AI it will take off on its own and redesign itself at an ever-increasing rate, humans continue to engage in conversations about the importance of programming morality into AI. Elon Musk has also warned time and again that AI may constitute a "fundamental risk to the existence of human civilisation".

Though these fears seem reasonable, it cannot be denied that AI systems need to be implemented more ethically, in the hope that engineers can imbue autonomous systems with a sense of ethics.
It would only be fitting to have a moral AI that builds upon itself again and again, improving its moral capabilities as it learns from previous experience, just like humans do.