This is the 12th article in our weekly Expert’s Opinion series, where we talk to academics who study AI or other emerging technologies and their impact on society and the world.
This week, we spoke to John Danaher, an academic and lecturer at the National University of Ireland (NUI), Galway, whose research focuses on the ethical, legal and social implications of new technologies.
Danaher maintains a blog called Philosophical Disquisitions and runs a podcast of the same title. He also writes for the Institute for Ethics and Emerging Technologies, where he is an affiliate scholar.
Analytics India Magazine caught up with Danaher to understand his latest research on the ethical implications of a personal AI assistant.
AIM: Why is AI considered more contentious when it handles the mental or cognitive side of human tasks?
Danaher: I presume it’s because it threatens or attacks something that is fundamental to our sense of who we are.
There is a theory, propounded by evolutionary psychologists and anthropologists, which claims that humans evolved to fill the cognitive niche.
In other words, the thing that is distinctive about human beings, and is responsible for our evolutionary success, is that we are able to use our minds to solve problems and build an environment that is more hospitable to us. By creating a technology that can replace or supplement human cognition, we seem to be trespassing upon the very thing that makes us unique.
AIM: You have argued that as long as you outsource only those cognitive tasks you did not enjoy doing anyway, there is little to lament. But given the inherently lazy nature of human beings, once an automated solution is available people might never stop using it, and markets will always come up with ways to automate tasks that are not yet automated. Where and how do we draw the line in such cases?
Danaher: I certainly think there is a risk that people will default to the easy option. This could be problematic. There are some scholars – Evan Selinger and Brett Frischmann, whose book Re-Engineering Humanity springs to mind – who worry a lot about this. They think that modern technology is turning humans into simple ‘stimulus-response’ machines. The technology presents us with signals and rewards and we just react without thinking. I would, however, like to push back against this pessimistic outlook. I think it assumes an overly negative view of humans and may reinforce a sense of helplessness or fatalism in the face of rampant automation.
The main goal of my paper is to defend a more nuanced and reflective approach to technology. The reality is that we are cognitively limited beings. We have limited time and limited energy. We cannot do all our own thinking for ourselves.
This has been true throughout human history. We have always relied on other human beings to do some of our thinking for us. For example, managers have relied on human assistants to schedule their appointments and meetings. Cognitive outsourcing of this kind is essential to human life and can be quite empowering. You free up the time and energy to think about other things and pursue other opportunities. Imagine what life would be like if you had to do everything for yourself.
I don’t see outsourcing to machines as being fundamentally different. It’s just a question of choosing to outsource the right things.
AIM: Even if we accept that this degeneration effect can be balanced out and that AI improves day-to-day functioning and cognitive ability, don’t you think it will amplify existing inequality? If yes, how can we avoid this?
Danaher: Ironically, I think AI might help to reverse some of this inequality. Go back to the example of the manager with the human assistant. Historically, it has been a privileged few who could afford to outsource significant parts of their cognitive labour in this way. Furthermore, this practice of outsourcing to humans often reinforces socio-economic inequalities. The advantage of AI assistants is that they can be cheaply and widely dispersed at virtually zero marginal cost (i.e. negligible additional cost per extra user). This might enable more people to take advantage of cognitive outsourcing.
This argument comes with three caveats though. First, many people will lose (and already have lost) their jobs as a result of this kind of automation: that is not good for inequality. Second, the wider use of this technology will increase the power of those that create and sustain it (i.e. AI assistant platform providers such as Google and Amazon). And third (and more fancifully), it is possible that someday AI assistants themselves will acquire a moral status that will make us worry about whether we are exploiting them through cognitive outsourcing.
AIM: Can a person who uses AI truly be autonomous? What are some of the major implications and consequences of using AI assistants, and how can we offset their negative impact?
Danaher: I don’t think any of us is ever fully autonomous. I think this is an ideal that is never realised in the real world. None of us is fully rational, none of us can consider every possible choice or opportunity, and none of us is fully independent. We are all products of our biology, culture and personal experiences. So I would resist any blanket claim that AI reduces or undermines our autonomy.

That said, there are certainly risks. AI assistants can reduce our autonomy in certain ways. Some kinds of app design, for example, can nudge us in the direction of certain choices, encourage us to ignore options or, in more extreme cases, effectively insist on us choosing a particular course of action. If, for example, you are planning a driving route using a mapping algorithm and it recommends a particular route as having a 99% chance of getting you to your destination quickest, it will be hard for you to ignore that.
To address these concerns we need to make sure that user interfaces aren’t overly coercive or manipulative. This means presenting humans with the information they need to make rational choices, retaining the space for humans to think through options, and allowing us to choose among several options for ourselves.
AIM: You mention that AI shouldn’t be used if it would only eliminate some ‘set of abilities with really high intrinsic value’, or where AI is ‘used for deceptiveness in relationships’. Who decides whether a particular ability has intrinsic value? And who decides the threshold for deceptiveness? What are the things to consider while developing applications in this context?
Danaher: I don’t believe I ever argue that AI can be used for deceptiveness in relationships. I believe what I argue is that some things that AI has been used for in relationships can be misclassified as being deceptive. The specific example had to do with automated messages being sent between romantic partners. Suppose I set up an app that will send my partner messages saying ‘I love you’ at random intervals during the week. At particular moments in time when she receives these messages, I will not have written them and may not even be thinking about her. Are the messages therefore deceptive? Not necessarily. They may truly represent my ongoing feelings for her and she may appreciate them. I don’t see this as being any different from sending someone a greeting card with flowers. The greeting card and flowers may arrive long after you composed the affectionate messages contained within them. But that doesn’t make them deceptive. Some uses of automated messaging might be deceptive. For instance, sending an affectionate message when you are actually conducting an illicit affair with someone else. But whether it is deceptive or not will depend on the context.
To your larger question, I guess I’m not sure if there are simple design guidelines for app developers. It is going to depend on what you are creating and on its possible uses. There are some things in human life where outsourcing cognition or behaviour to a machine is inappropriate.
To stick with the relationship example, imagine you are having a conversation with your partner via Zoom. You don’t want to listen to what they have to say and so you use a deepfake program to create a video of you that nods along and occasionally responds to them. That would be deceptive and that would be problematic. The value of the conversation lies in being fully present and engaged in it. Any app that encourages disengagement from an activity like this is problematic and its design should be reconsidered.
Of course, this is a general problem with all digital technology, particularly smartphones – they encourage us to ‘zone out’ from daily activities. I have been thinking about this a lot recently. It seems to me that many social media apps and services encourage us to instrumentalise our daily activities and hence overlook their intrinsic value. Instead of playing football with our children, we take videos of them to share with friends and family. In other words, we view our experiences as ‘content’ to be shared, liked and possibly even monetised. I think this is a bad thing. Often, the primary value of our daily activities is intrinsic to the activities themselves. They don’t need to be shared or liked by others.