Artificial Intelligence has changed science, but can the social sciences impact AI in turn? At a time when social scientists and researchers are rallying for a “human touch” to make AI systems unbiased, we look at how the social sciences and humanities can contribute to the AI ecosystem and even out its imbalances and biases. Joseph Green, managing partner at In Trade Capital, posited in a blog that social sciences and diversity in AI teams can play a pivotal role in resolving the AI-bias problem.
AI and machine learning have made human lives easier, but there is no denying that when a state-of-the-art algorithm goes wrong, the result can be disastrous. From actual deaths and racism to monetary losses, algorithms have the power to create as well as destroy. Some of the implications of biased algorithms have played out in résumé screening, non-judicious granting of parole, and the denial of bank loans to minorities and people of colour. However, experts note that these biases are a product of careless algorithm design, not malice, since bias seeps in through historical data.
AI Biases Can Be Addressed With Social Sciences
Researchers have said that biases in AI can be traced back to similar cases in history. The algorithms are fed data that has been collected and analysed by humans, and so they absorb human biases automatically.
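One way such historical bias can be surfaced before any model is trained is to compare outcome rates across groups in the data itself. The sketch below is purely illustrative — the records, group labels and the “four-fifths” disparate-impact threshold are assumptions for the example, not anything from the article:

```python
def selection_rate(records, group):
    """Fraction of applicants in `group` with a positive outcome."""
    members = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

def disparate_impact(records, group_a, group_b):
    """Ratio of selection rates between two groups. Values below 0.8
    (the 'four-fifths' rule of thumb) are commonly read as evidence
    of adverse impact in the underlying data."""
    return selection_rate(records, group_a) / selection_rate(records, group_b)

# Hypothetical historical loan decisions
history = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

print(round(disparate_impact(history, "B", "A"), 2))  # prints 0.33
```

A ratio of 0.33 is far below 0.8: any model trained naively on this data would inherit the same skew.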
“Learned intermediaries will be technical personnel trained to evaluate the output of machine learning algorithms and detect biases on the margins and legitimate auditors who must conduct periodic reviews of the data algorithms with the objective of making them stronger and more privacy-protective. They should be capable of indicating appropriate remedial measures if they detect bias in an algorithm. For instance, a learned intermediary can introduce an appropriate amount of noise into the processing so that any bias caused over time due to a set pattern is fuzzed out,” said Rahul Matthan, a fellow with the Takshashila Institution, in his recently published paper Beyond Consent: A New Paradigm for Data Protection.
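Matthan’s paper does not prescribe a specific mechanism, but the idea of fuzzing out a set pattern with noise resembles differential-privacy-style noise injection. The sketch below is one possible reading, not his method: the function names, the `scale` parameter and the clamping of scores to [0, 1] are all assumptions made for illustration.

```python
import random

def laplace_noise(scale):
    # The difference of two Exp(1) draws, scaled by `scale`,
    # follows a zero-mean Laplace(0, scale) distribution.
    return scale * (random.expovariate(1.0) - random.expovariate(1.0))

def fuzz(score, scale=0.05):
    """Perturb an algorithm's output score with zero-mean noise so that
    any fixed pattern it has settled into is blurred at the margins,
    then clamp the result back to the valid [0, 1] range."""
    return min(1.0, max(0.0, score + laplace_noise(scale)))
```

Because the noise has zero mean, aggregate behaviour is preserved while individual outputs near a decision boundary become harder to game or to trace back to a learned pattern.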
A Redditor added, “Everything from humanities and social science to art and design has a place in AI. AI is not closed off to just maths and science the same way that journalism is not closed off to just book reviews. There are endless applications for AI that can be used to discuss human nature.”
Adding knowledge from social-science subjects such as anthropology, sociology, political science, psychology, economics and history leads to a richer understanding of data by the AI.
Tech giants such as Google and Microsoft now hire people from social-science and humanities backgrounds to work alongside engineering teams and help develop ethical AI. Social scientists contribute vastly to public policy research, and an up-and-coming area is ethical AI, where they examine the challenges facing the jobs and skills economy and how AI will reshape it. Russ Shaw, an angel investor and the co-founder of Tech London Advocates & Global Tech Advocates, said at a conference in March earlier this year, “…Let’s increase the diversity of AI coders to remove unconscious bias from algorithms; let’s introduce regulation to ensure the technology is fair and safe.”
How Social Sciences Can Make AI More ‘Human’
What makes humans ‘human’, and what any AI aspires to replicate, is our unique response to stimuli: predictable and unpredictable at the same time, blasé and responsive all at once.
The Nobel Prize-winning Israeli-American psychologist Daniel Kahneman has also worked on the question of why it is so hard to predict human behaviour. In his book Thinking, Fast and Slow (2011, Farrar, Straus and Giroux), he examined the two ways in which humans behave, think and act — a fast, instinctive and emotional way, and a slow, logical and more predictable way.
The implication for the AI field is that no matter how accurate the algorithms are, humans still rely on their “fast”, unpredictable mode of thinking when making decisions. This is where a deeper understanding of human psychology, not just neurology, comes into play.
“Philosophy — specifically focusing on logic — can serve as a foundation for AI. Although you can’t write formal code just by knowing logic, it will definitely make it easier to learn computer programming languages,” a Redditor summarised.
Intuition In Artificial Intelligence
In a paper titled Explanation in Artificial Intelligence: Insights from the Social Sciences, researcher Tim Miller explains that more and more AI practitioners have been calling for ‘explainable’ AI.
“It is fair to say that most work in explainable AI uses only the researchers’ intuition of what constitutes a ‘good’ explanation. There are vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations towards the explanation process,” says Miller in the paper.
In his paper, Miller argues that explainable AI should be built on relevant research from philosophy, cognitive psychology and social psychology. He draws out important findings from these fields and discusses ways they can be infused into work on explainable AI.
Realistically, the social sciences can be successfully applied to AI in areas such as public policy, human resources, criminology, marketing and advertising, and strategic planning in the public and private sectors.