Data is generally considered to be an aggregate of human activities, and it is used to ascertain things about the way we live.
Any firm that employs algorithm-based decision making usually functions in a grey area. A data scientist collects the data, curates it, and trains a model. Data curation is a tedious task, and making sense of noisy data is an accomplishment on its own. However, there is a bigger challenge even before the pipeline starts to take shape: data collection and the methods involved.
The prospect of handing over the keys to increasingly autonomous systems induces paranoia in people, thanks to pop sci-fi. Questions have been raised about how society will respond to ever-improving machine learning systems as they displace an ever-expanding spectrum of careers, and about whether the benefits of this technological revolution will be broadly distributed or accrue to a lucky few.
Call For Ethical Guidelines Intensifies
Pharmaceutical companies would like to know about the human genome and the conditions of disease, whereas a power company would like to know about the power consumption and distribution of a certain area. These are a few legitimate uses for big data. However, the biggest use cases of big data today lie in the domain of marketing.
Companies want to sell products through relentless advertising campaigns, barging into browsers, SMS alerts and other recommendations. For this to happen, they need tonnes of personal data so that they can make their predictions more accurate.
We are already giving in to terms and conditions that allow third parties a sneak peek into our private lives through our mobile phones. It is only likely that we shall further entrust the management of our environment, economy, security, infrastructure, food production, healthcare, and to a large degree even our personal activities, to artificially intelligent computer systems.
Events like the role of Facebook’s “Free Basics” in the Myanmar genocide and the Cambridge Analytica fiasco are only some of the dire situations where data has been exploited for horrendous ends. The blame game is only a part of the post-mortem; the policy changes that follow usually take place as a face-saving manoeuvre.
"What should future statisticians, CEO, and senators know about the history and ethics of data?" a special seminar next Friday June 28th, 2-3pm in 1011 Evans Hall, at @UCBerkeley, by @chrishwiggins (Columbia prof., chief data scientist at the @nytimes and all around great guy). pic.twitter.com/f1gjYTXNX3
— Fernando Perez (@fperez_org) June 21, 2019
Recently, the North American and European associations of radiologists issued a joint statement on the ethical practice of AI in medical applications. They believe that AI should respect human rights and freedoms, including dignity and privacy.
They further suggested that the radiology community should start now to develop codes of ethics and practice for AI that promote uses that help patients and the common good, and that block the use of radiology data and algorithms for financial gain.
So, this great responsibility will eventually settle on policymakers and intellectual property owners. This raises the question of how to ensure that these systems respect ethical principles when they make decisions at speeds, and for rationales, that exceed our ability to comprehend.
People are coughing up data all the time, and one can only be so aware of where that data ends up being used. One plausible solution is to start at the data consumer’s end: companies that train algorithms on vast amounts of data can run each use case past a team of ethics officers or data ethnographers, who will assess how important the data is for a given application.
Operationalising data ethics, in particular, requires a combination of technical, philosophical, and sociological components. This is no surprise, as data science demands critical literacies (e.g., awareness of subjective design choices) as well as functional technical literacy (e.g., data munging, building models, etc.).
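To make the idea of “subjective design choices” concrete, here is a minimal sketch using entirely hypothetical data: even a mundane munging step such as handling missing values encodes a judgement about who is represented in the data that flows downstream.

```python
# Hypothetical survey records; the "income" field is missing for some people.
records = [
    {"age": 34, "income": 52_000},
    {"age": 29, "income": None},   # non-responder
    {"age": 45, "income": 88_000},
    {"age": 52, "income": None},   # non-responder
    {"age": 23, "income": 31_000},
]

# Choice A: drop incomplete records. Silently removes 40% of the
# population from every model trained afterwards.
complete = [r for r in records if r["income"] is not None]

# Choice B: impute the mean. Keeps everyone, but pretends each
# non-responder earns exactly the average, which can skew results
# if missingness is correlated with income.
mean_income = sum(r["income"] for r in complete) / len(complete)
imputed = [
    {**r, "income": r["income"] if r["income"] is not None else mean_income}
    for r in records
]

print(f"Choice A keeps {len(complete)}/{len(records)} records")
print(f"Choice B imputes income = {mean_income:.0f} for the rest")
```

Neither choice is “the correct one”; each bakes an assumption about the missing people into the data, which is exactly the kind of decision an ethics review would want surfaced rather than buried in a preprocessing script.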
Successful implementation of ethical data science practices requires a cross-disciplinary investigation of the development and deployment of the opaque, complex adaptive systems that are increasingly in public and private use. Such an investigation would explore the proliferation of algorithmic decision-making, autonomous systems, and machine learning and explanation; the search for a balance between regulation and innovation; and the effects of AI on the dissemination of information, along with questions of individual rights, discrimination, and architectures of control.
In this way, there is a chance that businesses will learn how to holistically incorporate AI into their innovation process to sustain profits while honouring their purpose.
Ideally, AI will be designed for maximum transparency and dependability. However, the ultimate responsibility and accountability for AI remain with its human designers and operators for the foreseeable future.