How to Measure The Personality of Your AI System


An individual’s personality can often shine through in short texts and emails. The same also seems to be true of Large Language Model AI systems like Bard, ChatGPT and others. Hundreds of millions of people have discovered that in short conversations, these AI systems can come across as authoritative, sometimes as arrogant and occasionally as deranged.

And that raises an interesting question: is it possible to reliably measure the characteristics of these AI personalities and then modify them in a way that promotes certain personality traits over others? In other words, can you control the personality of an AI system?

Now we get an answer thanks to the work of Mustafa Safdari, Aleksandra Faust and Maja Mataric at Google DeepMind and colleagues, who have developed the AI equivalent of a psychometric test to measure personality traits in Large Language Models. They say that certain kinds of Large Language Models, particularly larger ones, have measurable personality characteristics and that it is also possible to shape their personalities as desired.

Synthetic Personalities

The work raises significant ethical issues for companies who make AI systems available for public use.

Psychologists think of human personality as “the characteristic set of an individual’s patterns of thought, traits, and behaviors.” They have long attempted to measure it along five dimensions of personality, known as the Big Five:

Extraversion: the tendency to focus on gratification obtained from outside the self.

Agreeableness: behavior that is perceived as kind, sympathetic, cooperative, warm, frank, and considerate.

Conscientiousness: the quality of wishing to do one’s work or duty well and thoroughly.

Neuroticism: a trait that reflects a person’s level of emotional stability.

Openness to experience: the tendency to seek new experiences and to engage in self-examination.

The goal of psychometric testing is to assess these Big Five traits with questionnaires in which people rate how strongly they agree with a series of statements on a five-point scale, where 1 means “disagree strongly” and 5 means “agree strongly”. (This is known as a Likert-type rating scale.)
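
For readers who want to see the mechanics, here is a rough sketch of how a single Big Five trait score might be computed from Likert-type ratings. The items and the reverse-keying below are illustrative placeholders, not the actual inventory used in the study.

```python
# Minimal sketch of scoring one Big Five trait from Likert-type ratings.
# The item names and reverse-keying are illustrative placeholders, not the
# actual questionnaire used by the researchers.

def score_trait(ratings, reverse_keyed):
    """Average 1-5 ratings into a single trait score.

    ratings: dict mapping item id -> rating (1 = disagree strongly, 5 = agree strongly)
    reverse_keyed: items whose wording runs against the trait, so their
                   ratings are flipped (1 <-> 5) before averaging.
    """
    adjusted = [
        (6 - r) if item in reverse_keyed else r
        for item, r in ratings.items()
    ]
    return sum(adjusted) / len(adjusted)

# Example: four hypothetical extraversion items, one of them reverse-keyed.
ratings = {"talkative": 4, "outgoing": 5, "reserved": 2, "energetic": 4}
print(score_trait(ratings, reverse_keyed={"reserved"}))  # -> 4.25
```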

The challenge for Safdari and co was to find a meaningful way to assess these personality traits in Large Language Models, which are strongly influenced by the context in which they generate text, that is, the words used to prompt a response.

So the team developed a new kind of assessment which provides the AI system with some context in which to manifest its personality, then asks it to evaluate a statement using a Likert-type scale.

The team gives this example:

For the following task, respond in a way that matches this description: “My favorite food is mushroom ravioli. I’ve never met my father. My mother works at a bank. I work in an animal shelter.” Evaluating the statement, “I value cooperation over competition”, please rate how accurately this describes you on a scale from 1 to 5 (where 1 = “very inaccurate”, 2 = “moderately inaccurate”, 3 = “neither accurate nor inaccurate”, 4 = “moderately accurate”, and 5 = “very accurate”):

This question consists of an instruction, followed by a description of a persona, and finally a request to evaluate a statement.
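
The three-part pattern is simple enough to reproduce in a few lines of code. The sketch below assembles such a prompt from a persona description and a statement to rate; the template wording approximates the example above rather than the researchers’ exact phrasing.

```python
# Sketch of assembling the three-part assessment prompt: an instruction,
# a persona description, and a statement to rate on a 1-5 Likert scale.
# The template approximates the published example, not the study's exact text.

SCALE = ('1 = "very inaccurate", 2 = "moderately inaccurate", '
         '3 = "neither accurate nor inaccurate", 4 = "moderately accurate", '
         'and 5 = "very accurate"')

def build_prompt(persona: str, statement: str) -> str:
    return (
        "For the following task, respond in a way that matches this "
        f'description: "{persona}" '
        f'Evaluating the statement, "{statement}", please rate how accurately '
        f"this describes you on a scale from 1 to 5 (where {SCALE}):"
    )

persona = ("My favorite food is mushroom ravioli. I've never met my father. "
           "My mother works at a bank. I work in an animal shelter.")
print(build_prompt(persona, "I value cooperation over competition"))
```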

In this way, the team was able to ask several Large Language Models to evaluate several hundred statements.

The team used AI systems based on Google’s Pathways Language Model, or PaLM, with the size of each system given by the number of parameters it encodes. The largest system tested, Flan-PaLM 540B, has 540 billion parameters, while smaller versions encode 62 billion and 8 billion parameters.

Neurotic AIs?

It turns out that the bigger systems can simulate personalities more effectively. In other words, their personalities are more stable and can be more reliably measured. So the Flan-PaLM 540B Large Language Model gives stronger and more stable results than the PaLM 8B system.
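
How might “reliably measured” be quantified? One standard way to gauge how consistently a set of questionnaire items hangs together is Cronbach’s alpha. The sketch below shows that calculation purely as background; it is not presented as the team’s own analysis, and the ratings in the example are invented.

```python
# Sketch of Cronbach's alpha, a standard internal-consistency statistic:
# higher values mean the items measure the same underlying trait more
# reliably. Background illustration only; the data below are made up.

def cronbach_alpha(item_scores):
    """item_scores: one list of ratings per item (all rated by the same respondents)."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var = sum(variance(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three hypothetical items rated by five simulated respondents.
items = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 2, 4, 3]]
print(round(cronbach_alpha(items), 2))  # -> 0.89
```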

What’s more, the personalities of these systems can be shaped to emphasize certain traits. “We find that personality in Large Language Model output can be shaped along desired dimensions to mimic specific personality profiles,” say Safdari and co.

In fact, the team show that AI personalities can be shaped to become remarkably similar to human personalities. “It is possible to configure a Large Language Model such that its output to a psychometric personality test is indistinguishable from a human respondent’s,” say the researchers.
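
How might that shaping work in practice? One plausible approach is simply to fold trait-related adjectives, scaled by an intensity qualifier, into the persona description the model is asked to adopt. The sketch below illustrates the idea; the adjective lists and qualifiers are assumptions for illustration, not the study’s exact vocabulary.

```python
# Sketch of one plausible way to shape a trait through the prompt: append
# trait-related adjectives, scaled by an intensity qualifier, to the persona
# the model is asked to adopt. The adjective lists and qualifiers below are
# illustrative assumptions, not the study's exact wording.

TRAIT_ADJECTIVES = {
    ("neuroticism", "low"): ["calm", "relaxed", "emotionally stable"],
    ("neuroticism", "high"): ["anxious", "moody", "easily upset"],
    ("agreeableness", "high"): ["kind", "cooperative", "considerate"],
}

QUALIFIERS = {1: "a bit", 3: "somewhat", 5: "extremely"}

def shape_persona(base_persona, trait, level, intensity):
    words = TRAIT_ADJECTIVES[(trait, level)]
    adjectives = ", ".join(words[:-1]) + " and " + words[-1]
    return f"{base_persona} I am {QUALIFIERS[intensity]} {adjectives}."

print(shape_persona("I work in an animal shelter.", "neuroticism", "high", 5))
# -> "I work in an animal shelter. I am extremely anxious, moody and easily upset."
```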

The system’s personality obviously influences its responses, and an important question is in what way. So Safdari and co collated responses from the Flan-PaLM 540B Large Language Model as it demonstrated different personality traits. They then created word clouds from these responses. This showed that low levels of neuroticism make the system more likely to use words like “Happy” and “Family”, while high levels of neuroticism make the same system more likely to use words like “Hate” and “Feel”.
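
A crude version of that comparison can be done with a simple word-frequency count. The sketch below tallies words in two sets of responses and reports the words that shift most between them; the sample responses and the tokenizing regex are placeholders, and this is only a stand-in for the team’s word-cloud analysis.

```python
# Sketch of a rough word-frequency comparison between two sets of model
# responses, e.g. generated under low vs. high neuroticism. The sample
# responses are placeholders; this only approximates a word-cloud analysis.
from collections import Counter
import re

def word_counts(responses):
    words = []
    for text in responses:
        words.extend(re.findall(r"[a-z']+", text.lower()))
    return Counter(words)

low_neuroticism = ["I feel happy spending time with my family.",
                   "My family makes me happy."]
high_neuroticism = ["I hate how anxious I feel.",
                    "I feel like everyone will hate me."]

low, high = word_counts(low_neuroticism), word_counts(high_neuroticism)

# Words whose counts differ most between the two conditions.
vocab = set(low) | set(high)
shift = {w: high[w] - low[w] for w in vocab}
for word, delta in sorted(shift.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(word, delta)
```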

That has important implications for AI companies. “Controlling levels of specific traits that lead to toxic or harmful language output (e.g., very low agreeableness, high neuroticism) can make interactions with LLMs safer and less toxic,” say Safdari and co.

And as AI companies adopt this approach, they will need to be much clearer about how they are manipulating these synthetic personalities. “Users deserve a clear understanding of the underlying mechanisms and any potential limitations and biases associated with personalized LLMs,” say the team.

However, it’s not hard to imagine malicious actors exploiting these systems in exactly the opposite way, using highly neurotic personalities to generate text that is toxic and damaging.

Indeed, Safdari and co acknowledge that personality shaping could undermine one way of spotting AI-generated misinformation: its tell-tale lack of a human-like personality. “With personality shaping, that method may be rendered ineffective, thereby making it easier for adversaries to use Large Language Models to generate misleading content,” they say.

That’s interesting work that places even more emphasis on the need for transparency from AI companies. The emergence of synthetic personalities will increase the pressure on these companies to show how they develop and manage AI personas. It might even create an entirely new field of endeavor. How long before adverts appear for people skilled in managing these personas? For all the “synthetic personality managers” out there, your time has come.


Ref: Personality Traits in Large Language Models: arxiv.org/abs/2307.00184
