Empathy as leverage for adoption of non-human agents
The study of empathy and emotional support in AI systems remained a prominent research topic throughout 2024 and the start of 2025, reinforcing the idea that empathy can be a strong driver for the adoption of non-human agents at scale. It also raises ethical questions that, while not new, remain central to discussions on responsible AI adoption. GPT-4, for example, has reportedly demonstrated empathy ratings that surpass human performance by 10%. From a design perspective, this prompts reflection on what it truly means to create human-centred AI interactions, given that emotional understanding involves nuances that are difficult to capture through standard metrics.
AI systems are evolving to detect and respond to emotional signals, generate appropriate responses, and maintain consistent emotional engagement. They are never tired, never judge, and are always available, making them, in some ways, the perfect listeners. While interacting with them, some of us might forget (intentionally or not) that we are engaging with an entity that performs empathy rather than experiences it. The variety of approaches to AI transparency shows that explainability and empathy do not have to be trade-offs. At the level of conversational touchpoints, an AI system can acknowledge emotional context ("I understand this is a difficult situation") while being transparent about its limitations ("I process patterns in data but don't truly feel emotions"). Whether this kind of honesty strengthens or diminishes trust is an ongoing discussion.
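To make that touchpoint concrete, the sketch below shows one possible way a conversational layer could pair an empathetic acknowledgement with an explicit statement of limitations. It is a minimal illustration under stated assumptions, not a recommended implementation: the names generate_reply, respond, EMOTIONAL_CUES, and TRANSPARENCY_NOTE are hypothetical, and the keyword check stands in for whatever emotion-detection step a real system would use.

```python
# Minimal sketch: attach an empathetic acknowledgement and a transparency
# disclosure to replies that appear emotionally charged. All names here are
# illustrative assumptions, not a real library or product API.

EMOTIONAL_CUES = {"sad", "anxious", "lonely", "grief", "scared", "overwhelmed"}

TRANSPARENCY_NOTE = (
    "I process patterns in data but don't truly feel emotions, "
    "so please treat this as support, not professional advice."
)

def generate_reply(message: str) -> str:
    # Placeholder for whatever model or service actually produces the answer.
    return "Here is some information that might help."

def respond(message: str) -> str:
    reply = generate_reply(message)
    # Add the acknowledgement and disclosure only when emotional cues appear,
    # so routine factual exchanges are not padded with boilerplate.
    if EMOTIONAL_CUES & set(message.lower().split()):
        return (
            "I understand this is a difficult situation. "
            f"{reply}\n\n({TRANSPARENCY_NOTE})"
        )
    return reply

if __name__ == "__main__":
    print(respond("I feel lonely and a bit overwhelmed lately."))
```

The design choice worth noting is that acknowledgement and disclosure travel together: the same trigger that produces the empathetic framing also produces the statement of limitations, which is one way to keep explainability and empathy from becoming a trade-off.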
In unpredictable or emotionally charged situations, people instinctively anthropomorphise, attributing human characteristics, behaviours, and emotions to non-human entities or objects. This helps us lend predictability to something that might otherwise cause psychological discomfort. When we are vulnerable or seeking connection, this drive becomes even stronger.
So, from the ethical (and legal) point of view, if someone shares their deepest troubles with an AI system of their choice, believing they are receiving genuine empathy, who bears responsibility if things go wrong? For now, the AI itself lacks intentionality and moral agency, so it cannot be held responsible. This leads back to the concept of the responsibility gap, a growing ethical challenge as AI systems become more emotionally integrated into our lives.
In 2021, Bender et al. warned about the challenges of managing AI systems and their simulation of empathy. They described LLMs as stochastic parrots that probabilistically reproduce viewpoints and normative biases encoded in their training corpora, an aspect often overlooked when users engage with these systems in moments of psychological vulnerability. This makes AI's role in emotional interactions a critical area requiring careful consideration in both system design and deployment.
Moving forward, the focus might involve balancing technical capability with ethical responsibility. Validating AI systems' technical performance must run in parallel with robust frameworks for evaluating societal impact and communicating limitations transparently. The fact that AI can outperform humans in displaying empathy raises particular sensitivities in areas such as mental health and, more broadly, adoption for medical diagnosis.
Examining the role of empathy in AI adoption extends beyond surface-level metrics toward a deeper understanding of both cognitive and emotional trust. The complexity of AI explainability and transparency is likely to remain at the forefront of product development, as professionals in the field continue to explore and monitor how to foster transparency of intent in interactions involving both human and AI agents.
Parts of this manuscript were drafted with the assistance of AI language models (specifically, Claude 3.7, ChatGPT 4.0, Google Gemini 2.0). The author used AI as a tool to enhance clarity and organisation of ideas, generate initial drafts of certain sections, and assist with language refinement. All AI-generated content was reviewed, edited, and verified by the author. The author takes full responsibility for the content, arguments, analyses, and conclusions presented. This disclosure is made in the interest of transparency regarding emerging research practices.