Open data is generally considered an aggregate of human activity, and it is used to ascertain things about the way people live.

Any firm that employs algorithm-based decision-making usually operates in a grey area. A data scientist collects the data, curates it and trains a model. Data curation is a tedious task, and making sense of noisy data is an accomplishment on its own. However, there is a bigger challenge even before the pipeline starts to build up: data collection and the methods involved.

The prospect of handing over the keys to increasingly autonomous systems induces paranoia in people, thanks to pop sci-fi. Questions have been raised about how society will respond to ever-improving machine learning systems as they displace an ever-expanding spectrum of careers, and whether the benefits of this technological revolution will be broadly distributed or accrue to a lucky few.

Call For Ethical Guidelines Intensifies

Pharmaceutical companies would like to know about the human genome and the conditions of disease, whereas a power company would like to know about power consumption and distribution in a certain area. These are a few genuinely legitimate uses for big data. However, the biggest use cases of big data today are in marketing.

Companies want to sell products through ludicrous advertising campaigns: barging into browsers, through SMS alerts and other recommendations.
And for this to happen, they need tonnes of personal data so that they can make their predictions more efficient.

We are already giving in to terms and conditions that allow third parties a sneak peek into our private lives through our mobile phones. It is only likely that we shall further entrust the management of our environment, economy, security, infrastructure, food production, healthcare and, to a large degree, even our personal activities to artificially intelligent computer systems.

Events like the role of Facebook's "Free Basics" in the Myanmar genocide and the Cambridge Analytica fiasco are only some of the dire situations where data has been exploited for horrendous ends. The blame game is only part of the post-mortem, and policy changes usually take place as a face-saving manoeuvre.

https://twitter.com/fperez_org/status/1141881681172819968

Recently, North American and European radiology societies issued a joint statement regarding the ethical practice of AI in medical applications. They believe that AI should respect human rights and freedoms, including dignity and privacy.

They further suggested that the radiology community should start now to develop codes of ethics and practice for AI that promote any use that helps patients and the common good, and should block the use of radiology data and algorithms for financial gain.

So, this great responsibility will eventually settle on policymakers and intellectual property owners. This raises the question of how to ensure that these systems respect ethical principles when they make decisions at speeds, and for rationales, that exceed our ability to comprehend.

People are coughing up data all the time, and one can only be so aware of where that data is being used. One plausible solution is to start at the data consumer's end.
Companies that train algorithms on vast amounts of data can run a use case past a team of ethics officers or data ethnographers, who will assess how important the data is for a given application.

Operationalising data ethics, in particular, requires a union of technical, philosophical and sociological components. This is not a surprise, as data science requires critical literacies (e.g., awareness of subjective design choices) as well as functional technical literacy (e.g., data munging, building models, etc.).

Future Direction

Successful implementation of ethical data science practices requires a cross-disciplinary investigation of the development and deployment of the opaque, complex adaptive systems that are increasingly in public and private use. This means exploring the proliferation of algorithmic decision-making, autonomous systems, and machine learning and explanation; the search for a balance between regulation and innovation; and the effects of AI on the dissemination of information, along with questions related to individual rights, discrimination and architectures of control.

In this way, there is a chance that businesses will learn how to holistically incorporate AI into their innovation processes, sustaining profits while honouring their purpose.

Of course, AI should be designed for maximum transparency and dependability. However, the ultimate responsibility and accountability for AI will remain with its human designers and operators for the foreseeable future.
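The use-case assessment that ethics officers might perform, asking how much personal data an application actually needs, can be made concrete. The following is a minimal, hypothetical sketch; the column names and the sensitivity list are illustrative assumptions, not any real standard or library.

```python
# Hypothetical pre-training check: flag personal-data columns that a
# given use case does not strictly need, in the spirit of the
# ethics-officer review described above. The sensitivity list and
# field names below are illustrative assumptions only.

SENSITIVE_FIELDS = {"name", "phone", "address", "genome", "location"}

def review_use_case(columns, required):
    """Return the sensitive columns that the use case could drop.

    columns:  all fields present in the collected dataset
    required: fields the application genuinely needs
    """
    return [c for c in columns
            if c in SENSITIVE_FIELDS and c not in required]

# A power-consumption model needs meter readings, not identities:
dataset_columns = ["meter_id", "kwh_used", "name", "address", "phone"]
flagged = review_use_case(dataset_columns,
                          required={"meter_id", "kwh_used"})
print(flagged)  # ['name', 'address', 'phone']
```

In practice such a check would sit alongside human review rather than replace it; the point is only that "how important is this data for this application" can be asked programmatically before a model ever sees the data.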