Harnessing Human Emotions in Generative AI


Thought Experiment

Imagine you’re browsing items in a supermarket (virtual or physical), and an AI assistant is available for you to interact with to optimise your shopping for the day. You give it a budget and details of your household make-up (number of people, ages, dietary restrictions), and it generates a shopping list of relevant, available items.

Based on your style of evaluating products (e.g. reviews from shoppers like you, images, use in recipes, comparisons with local businesses), it also prioritises brands for you. It recognises when you’re looking lost in the store and steps in to guide you with a map, or quickly re-routes you if the smells, sights and sounds of the store overwhelm you. You have a fun, swift, just-like-you-like-it shopping experience and are on your way out sooner than expected. You can even thank it for reminding you to carry an umbrella on a suddenly cloudy day!

In this scenario, it’s easy to ease into the idea of an empathetic, caring and understanding companion that wants, and does, what’s best for you.

Let’s try another one on for size:

You happen to miss a credit card payment because you were travelling and it slipped your mind. The automated service centre notices the lapse and, while you’re away on vacation, starts sending you reminders. You ignore them because you don’t want to deal with them on holiday and you don’t mind the small late fee, but they continue to escalate in intensity:

Day 1: “Your credit score has been affected by your missed payment. Do you not care about your financial future?” (guilt-tripping)
Day 2: “Don’t be irresponsible, pay now!” (sense of obligation)
Day 3: “Do you not understand the importance of being on time?” (targeting character)
Day 4: “If you don’t pay now, you may not enjoy the same privileges with our bank…” (threat)

The automated service centre has studied your payment patterns and your chats and call transcripts with the bank, and it knows you value your reputation. It uses any means necessary to make you pay, because that’s what it’s been trained to do.

(If you’re reading this thinking ‘This is so extreme, this wouldn’t happen’, I’m happy-sad to tell you these were all inspired by my exchange with ChatGPT earlier today.)

In the pursuit of optimisation and automation, we may overlook both the best- and worst-case scenarios. In the best case, this integration could create psychological safety, reflect human needs that are both explicit and latent in experiences, and even become a desirable presence for non-task-based interactions. The worst case is much like the risk posed by anyone who knows you inside-out while you don’t know them very well: they hold the power to be emotionally manipulative, exploitative and aggressive, and to create unsafe environments for everyone, including decision-makers, users and consumers.

At this stage, I’d like to share a conversation starter with you: 

What kind of empathy and emotional response should we try to integrate in Generative AI solutions?

To explain, let’s break down the recognised types of empathy:

Cognitive empathy: the ability to understand how a person feels and what they might be thinking, which means exploring the why of the feeling.

Emotional empathy/Affective empathy: the ability to feel or embody what someone else is feeling, which is essentially what mirror neurons enable.

Behavioural empathy/Compassionate empathy: acting on what someone else is feeling and trying to alleviate their distress in a way that works for them, even if you don’t fully understand what they’re experiencing.

As a practitioner, my recommendation at this time is that Generative AI solutions should be trained to display Behavioural Empathy, without trying to develop Cognitive or Emotional empathy in them. This will allow AI to reflect human interest, emotion and need without developing tools to exploit them.
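To make that distinction concrete, here is a minimal sketch in Python of what a behavioural-empathy policy could look like: the system maps observable distress cues directly to helpful actions, without attempting to model why the user feels that way. The cue words and action names are hypothetical, chosen only to illustrate the shape of the idea.

```python
# Illustrative sketch: behavioural empathy as a cue -> action mapping.
# The system reacts to observable signals with a helpful action; it never
# tries to infer or simulate the user's internal emotional state.
# Cue words and action names below are invented for illustration.

DISTRESS_CUES = {
    "lost": "offer_store_map",
    "overwhelmed": "suggest_quieter_route",
    "confused": "simplify_options",
}

def behavioural_response(user_message: str) -> str:
    """Return a helpful action for any observable cue; otherwise stay neutral."""
    text = user_message.lower()
    for cue, action in DISTRESS_CUES.items():
        if cue in text:
            return action
    return "continue_normally"

print(behavioural_response("I'm feeling a bit lost in this aisle"))  # offer_store_map
print(behavioural_response("Where is the milk?"))                    # continue_normally
```

The point of the design is what it leaves out: there is no model of the person’s mind, only a repertoire of supportive actions triggered by what they actually express.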

A few principles for implementing and, more broadly, using emotions in Generative AI:

– If the emotional theory and the logical foundation are flawed, implementations that are technically accurate will still make for flawed user experiences.

– While there is growing evidence that AI now better understands sarcasm, explains humour, and can even write convincing dialogue mimicking consciousness, AI can still be very literal in its interactions, while humans don’t tend to express emotions in simple and straightforward ways.

– Treat automation in organisations and its adoption as change-management journeys; especially when considering business integration, make interactions more intuitive and aligned with employee expectations, address human concerns, and increase a sense of agency to reduce the fear of being ‘replaced’.

– Intervene proactively to balance untapped potential against negative exploitation: bring the perspectives of business ethics, regulation, policy research and antitrust into our solutions, and address misinformation.

– Identify latent biases and increase representation in current training data: OpenAI’s models are only as good as the data they’re trained on, so find ways to capture hidden bias in data and use cases, and account for emerging human identities. For example, several gender identities that would not have been captured a decade ago make up a significant portion of consumers today.

– Manage people’s mass response to AI’s perceived and displayed emotions: there is growing concern, and excitement, around AI developing ‘emotions’ of its own. While we may be far from sentient AI, the perception alone will deeply affect human engagement (for business leaders, consumer interactions, and employees).
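The literalness principle above is easy to demonstrate with a toy example in Python. A naive lexicon-based sentiment scorer counts surface words, so a sarcastic complaint reads as positive. The word lists are invented for illustration; real systems use far richer models, but the failure mode is the same in spirit.

```python
# Toy lexicon-based sentiment scorer, to illustrate literal interpretation.
# Word lists are hypothetical and deliberately tiny.
POSITIVE = {"love", "great", "wonderful"}
NEGATIVE = {"hate", "terrible", "awful"}

def naive_sentiment(text: str) -> str:
    """Score text by counting positive vs negative words, ignoring context."""
    words = text.lower().replace(",", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A sarcastic complaint reads as positive to a literal scorer:
print(naive_sentiment("Oh great, another 40-minute hold. I just love waiting."))  # positive
```

Humans hear the frustration instantly; the literal word-counter hears two happy words.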

As you go back into your work day, here are a few use cases you could consider as good candidates for experimentation:

Note: experiments like these are not absolute, foolproof solutions; they will be iterative, because human emotion is dynamic and will remain so even as we learn to codify some aspects of it.

– People analytics: Determining the dynamic staffing of projects based on different work styles.

– Gaming: Finding the right level of challenge and achievement to keep a player engaged, while also dealing with new additions.

– Mental Health and Access to Care: Curating experiences that are aligned to people’s present state including recognizing signs of burnout, depression and other conditions. 

– Education: Growth journeys in organisations for employees and in educational institutions for students mapped to individual styles of learning. 

– Chatbots for Customer Service: Ability to reflect customer needs efficiently and respond appropriately.

– Creative Arts & Tasks: Helping creatives with brainstorming, idea generation and inspiration.
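For the customer-service and mental-health use cases, a behavioural-empathy stance suggests one simple design choice worth experimenting with: when distress cues appear, hand off to a human rather than have the bot attempt emotional reasoning itself. The sketch below (Python; cue phrases and routing labels are hypothetical) shows that shape.

```python
# Illustrative sketch: a customer-service handler that escalates to a human
# when distress cues appear, instead of attempting emotional reasoning itself.
# Cue phrases and routing labels are invented for illustration.

ESCALATION_CUES = {"desperate", "can't cope", "urgent", "furious"}

def handle_message(message: str) -> str:
    """Route distressed messages to a person; let the bot answer the rest."""
    text = message.lower()
    if any(cue in text for cue in ESCALATION_CUES):
        return "route_to_human_agent"
    return "answer_with_bot"

print(handle_message("I'm furious, my order failed for the third time"))  # route_to_human_agent
print(handle_message("What are your opening hours?"))                     # answer_with_bot
```

Even a crude trigger like this reflects the recommendation earlier in the piece: act helpfully on what people express, without building machinery to interpret or exploit why they feel it.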


Shivani Gupta
Shivani is the Lead Behavior Architect at Fractal Dimension - the executive, cross-functional strategic unit at Fractal Analytics. She works at the intersection of Data, Engineering, Design & Behavioral Science for a diverse set of international clients with projects ranging from: reducing churn for a streaming service in the UK, developing the launch strategy for a cereal product for a leading CPG brand in the US, reducing biases in equity investor decision-making for an Indian fintech startup, to building coping mechanisms for survivors of sexual assault in a rural Indian village. Having applied her expertise across geographies, cultural contexts, sectors and problem statements, she firmly believes that the integration of these seemingly divergent practices afford greater value than any of them can deliver individually. Shivani previously worked at Studio 5B - Dr. Reddy's, an award-winning design and innovation lab and holds an MSc. in the Psychology of Individual Differences from the University of Edinburgh. Outside the office, she is a performance poet and dancer, who during her time in Edinburgh, also performed at the Fringe festival. Born in Mumbai, raised in Chennai, and recently relocated to Chicago, Shivani is sure to tell you where to get the best snacks, sauces and sandwiches in her city!
