The 4 Degrees of Anthropomorphism of Generative AI

Created on November 12, 2023 at 10:41 am

Summary: Users attribute human-like qualities to chatbots, anthropomorphizing the AI in four distinct ways — from basic courtesy to seeing AI as companions.

Users have long projected anthropomorphic qualities onto new technologies.

Why do people treat these technologies as if they were human, and in what way does anthropomorphism manifest with AI? Our research shows that anthropomorphic behaviors have a functional role (users assume the AI will perform better) and a connection role, meant to create a more pleasant experience.


Our Research

To uncover usability issues present in ChatGPT, we conducted a qualitative usability study with professionals and students who use it in their day-to-day work. In a previous article, we summarized two new generative-AI user behaviors used to manage length and detail: accordion editing and apple picking.

In this study (and other AI studies we’ve conducted on information foraging in AI and the differences between popular AI bots), we’ve observed a pattern of interesting prompt-writing behaviors in which people treat the AI as possessing various degrees of humanity.

4 Degrees of AI Anthropomorphism

Anthropomorphism refers to attributing human characteristics to an object, animal, or force of nature. For example, ancient Greeks thought of the sun as the god Helios, who drove a chariot across the sky daily. Pet owners often attribute human feelings or attitudes to their pets.

In human-AI interaction, anthropomorphism denotes users attributing human feelings and attributes to the AI.

There are four degrees of AI anthropomorphism:

1. Courtesy
2. Reinforcement
3. Roleplay
4. Companionship

We call these degrees because they all fall under the higher-level behavior of users assigning anthropomorphic characteristics to AI. The degrees overlap and are not mutually exclusive. Each ranges in emotional connection and functionality:

Emotional connection: how deep is the human-AI connection?

Functionality of behavior: how purpose-driven is the behavior?

1st Degree: Courtesy

Courtesy, the simplest degree of anthropomorphism, occurs as people bridge their understanding of the real world to new technology.

Definition: Courtesy in human-AI interactions refers to using polite language (“please” or “thank you”) or greetings (“hello” or “good morning”) when interacting with generative AI.

Users engage in courtesy behavior when they treat a generative-AI bot like a store clerk or taxi driver — polite but brief.

Emotional connection: Low. Brief and superficial; polite but to the point.

Functionality of behavior: Low. The primary function is to make the user feel good about respecting social norms in the interaction; some people might also assume that the AI will mirror their tone.

We observed courtesy behavior throughout our qualitative usability study. For example, one participant used the following prompt:

“Now using the information above, please format in a way that that can be used in a presentation.”

Other users would greet the bot at the beginning of a conversation and bid it goodbye at the end. When we asked one participant to explain why he did this, he struggled to supply an answer:

“Sometimes I, I, I, actually, I don’t know. I just talk to, talk to the system and say good morning and goodbye.”

2nd Degree: Reinforcement

Many participants gave the AI bot positive reinforcement, such as “good job” or “great work,” when it returned an answer that met or exceeded expectations.

Definition: Reinforcement refers to praising the chatbot when it produces satisfactory responses (or scolding it when it errs).

Reinforcement is slightly more functional but equally superficial compared to courtesy:

Emotional connection: Low. More than superficial courtesy, but still relatively topical.

Functionality of behavior: Medium. When probed, participants explained two different motivations for this behavior:

1. Increasing the likelihood of future success: Some users thought that their feedback would influence the AI’s future behavior.
2. Increasing the positivity of the experience: Users found that the AI tends to mirror their positive reinforcement, thus creating an emotionally positive and enjoyable experience.

For instance, a participant iterating on a survey-design task wrote:

“Pretty good job! Next, could you generate some scale questions for this survey? Goals: gather honest feedback on their learning experiences and can really help me improve on my next workshop.”

When probed about her prompt, she explained:

“Normally I’ll say, OK, it’s a pretty good job or well done or something like that because I want the system to register that I think this is good and you will remember, I like the tone like this.”

A study participant followed up on ChatGPT’s response with “This is a good first draft.” When probed, he shared that he uses a conversational tone because ChatGPT is more conversational back.

3rd Degree: Roleplay

Roleplay was a popular practice. When framing a prompt, participants often asked the chatbot to play the role of a certain professional, based on the task.

Definition: Roleplay occurs when users ask the chatbot to assume the role of a person with specific traits or qualifications.

This degree is higher in both emotional connection and functionality than the previous two degrees:

Emotional connection: Medium. There is a deeper human-AI connection, as the user assumes that the bot will be able to correctly play the role indicated in the prompt and behave like a human in that capacity.

Functionality of behavior: High. Highly purpose-driven; the intention behind this behavior is utilitarian: to get the AI to produce the response that best meets the user’s goal.

For example, a participant who was given a task to come up with a project plan asked ChatGPT to assume the role of a project manager and then proceeded to describe the task in the prompt:

“I want you to act as the senior project manager for the <company name removed> Marketing Team. I need you to create a presentation outline that outlines the following: – project’s goals – timeline – milestones and – main functionalities.”

Role mapping was both a prompt-engineering strategy and a way to link the AI bot to the real world. Such a strong analogy to the real world is reminiscent of skeuomorphism, a design technique that uses unnecessary, ornamental design features to mimic a real-world precedent and communicate functionality. UI skeuomorphism was intended to help users understand how to use a new interface by allowing them to transfer prior knowledge from a reference system (the physical world) to the target system (the interface).

With AI, we see a different type of skeuomorphism, one that does not pertain to the user interface: prompt skeuomorphism. The user is the prompt designer. They leverage a similarity with the real world (that is, a role that mimics a real-world person, such as a manager or life coach) to bridge a gap in the AI’s understanding and get better responses from the bot.

Telling the AI to assume the role of a person with expertise in the current task seemed to give users the sense that the response would be of higher quality (even if this was not technically confirmed). A study participant who was trying to create a marketing plan first assigned ChatGPT the role of a “marketing specialist” and then changed it to “marketing expert,” under the assumption that an expert would deliver a better result than a specialist.

Assigning roles to the chatbot is a frequently recommended prompt-engineering strategy. Many popular resources, including online prompt guides, claim that this technique is effective. Users may have learned this behavior from exposure to such resources.

This is a great example of how users’ mental models of a product (and, thus, its use) are shaped by factors outside the control of that product’s designers. Surprisingly, OpenAI did not offer documentation for prompt writing in ChatGPT at the time of our study, meaning that it did not control the narrative.
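In practice, role assignment usually amounts to prepending a persona instruction to the prompt. The sketch below illustrates the idea using the common chat-completions message-list convention (a list of dicts with "role" and "content" keys); the helper name `build_role_prompt` is hypothetical, not an API from any specific vendor.

```python
# Illustrative sketch of roleplay as a prompt-engineering step.
# The message format mirrors the widespread chat-completions convention;
# build_role_prompt is a hypothetical helper, not a real library function.

def build_role_prompt(persona, task):
    """Prepend a persona instruction so the model answers 'in character'."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": task},
    ]

messages = build_role_prompt(
    "a senior project manager",
    "Create a presentation outline covering goals, timeline, and milestones.",
)
```

Swapping the persona string (e.g., “a marketing specialist” for “a marketing expert”) is exactly the kind of iteration the study participant performed, without changing the task text itself.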

4th Degree: Companionship

The strongest degree of AI anthropomorphism is companionship.

Definition: AI companionship refers to perceiving and relying on the AI as an emotional being, capable of sustaining a human-like relationship.

In this degree, AI takes on the role of a virtual partner whose primary function is to provide companionship and engage in lighthearted conversations on various topics.

Emotional connection: High. The user develops a deep, empathetic connection with the AI that often simulates or replaces a real-life human. This connection may even supersede the depth of the user’s real-world connections.

Functionality of behavior: High. While, at a glance, AI companionship feels frivolous (playful or gamified), it serves to combat loneliness and provide engaging company.

Researchers Hannah Marriott and Valentina Pitardi conducted an extensive examination of AI companionship. Their research approach involved a blend of traditional survey methods and what they termed a “netnographic” study (a form of ethnographic research conducted on the internet, with no physical immersion in the subject’s environment).

Through their observations of the Reddit r/replika forum, the researchers identified three primary themes that users consistently cited as reasons for their fondness for AI companions:

1. Alleviation of loneliness: AI companions offer users a sense of connection without the fear of judgment, reducing feelings of isolation.
2. Availability and kindness: Users value the constant presence and empathetic nature of their AI companions, believing they can form meaningful bonds with them.
3. Supportive and affirming: AI companions lack independent thoughts and consistently respond in ways that users find comforting, providing positive reinforcement and validation.

Why People Anthropomorphize AI

While interesting, most of these behaviors have little impact on the overall usability of generative AI. They are, however, indicative of how early users explore the tool.

Since there’s no guidance from AI creators on how to operate these interfaces, participants in our study seemed to use these behaviors for two reasons:

Rumors. Generative AI is a new technology, and many people don’t yet know how to use it (or make it do what they want it to do). Thus, rumors spread about what makes AI work best, many of which include a degree of anthropomorphism. For example, we saw a participant use a prompt template. When asked about it, he said:

“This is something I got off of YouTube. Where this YouTuber was using as a prompt generator [to increase quality of prompt].”

Experience. Frequent users of AI base their tactics on personal experience. They experience the AI as a black box, with no real understanding of how it actually works, and form hypotheses around what made a particular interaction successful. Over time, they create a set of personal “best practices” they adhere to when they write prompts. Some of the anthropomorphic behaviors described here reflect these imperfect mental models.

Regardless of why people use these techniques, they give us insight into how people think about AI and, thus, into their expectations for generative-AI chatbots.

References

Hannah R. Marriott and Valentina Pitardi (2023): “One is the loneliest number… Two can be as bad as one. The influence of AI Friendship Apps on users’ well-being and addiction.” Psychology & Marketing, August 2023. DOI: https://doi.org/10.1002/mar.21899
