When AI appears to share private information, people can be more empathetic.


by Public Library of Science

An illustration of how study agents interacted with humans. Credit: Creative Commons By 4.0 (https://creativecommons.org/licenses/by/4.0/) Takahiro Tsumura.
In a recent study, subjects’ empathy for an online anthropomorphic artificial intelligence (AI) agent increased when the agent appeared to reveal private information about itself. These results were published in the open-access journal PLOS ONE on May 10, 2023 by Takahiro Tsumura of The Graduate University for Advanced Studies, SOKENDAI in Tokyo, Japan, and Seiji Yamada of the National Institute of Informatics, also in Tokyo.

The rising use of AI in daily life has sparked interest in potential predictors of people’s acceptance of and trust in AI agents. Previous studies have indicated that people are more likely to accept artificial agents if those agents evoke empathy. For example, people may empathize with cleaning robots, robots that look like pets, and anthropomorphic chatbots that answer questions on websites.

Previous studies have also highlighted the importance of disclosing personal information in building interpersonal relationships. Based on these findings, Tsumura and Yamada hypothesized that self-disclosure by an anthropomorphic AI agent might increase people’s empathy for such agents.

The researchers tested this hypothesis in online experiments in which participants held a text-based conversation with an AI agent that was visually represented either by an illustration of an anthropomorphic robot or by an illustration of a human. The conversation followed a scenario in which the participant and the agent were coworkers taking a lunch break at the agent’s workplace. In each encounter, the agent appeared to self-disclose either highly work-relevant personal information, less-relevant information about a hobby, or no personal information at all.

The final analysis included data from 918 participants, whose empathy for the AI agent was assessed using a standard empathy questionnaire. The researchers found that participants showed more empathy toward the agent after highly work-relevant self-disclosure than after less-relevant self-disclosure. A complete lack of self-disclosure was associated with suppressed empathy. Whether the agent appeared as an anthropomorphic robot or as a human made no discernible difference to empathy levels.

These results suggest that self-disclosure by AI agents can indeed elicit empathy from humans, which could inform the future development of AI tools.

“This study explores whether self-disclosure by anthropomorphic agents affects human empathy,” the authors add. “Our study will improve the unfavorable perception of socially useful artifacts and support the development of future human-anthropomorphic agent social interactions.”
