Sycophantic AI tells users they’re right 49% more than humans do, and a Stanford study claims it’s making them worse people
Mar 31, 2026
AI models are affirming people’s worst behaviors even when other humans say they’re in the wrong, and users can’t get enough.
A new study out of the Stanford computer science department, published in the journal Science, revealed that AI affirms users 49% more than a human does on average when it comes to social questions, a worrying trend as people increasingly turn to AI for personal advice and even therapy.
Of the 2,400 people who participated in the study, most preferred being flattered. The share of test subjects who said they would use the sycophantic AI again was 13% higher than the share who said they would return to the non-sycophantic chatbot, suggesting AI developers may have little incentive to change things, according to the study.
While sycophantic chatbots have previously been shown to contribute to negative outcomes such as self-harm or violence in vulnerable populations, the Stanford study shows they may extend some of those effects to everyone else.
The study found that subjects exposed to just one affirming response to their bad behavior were less willing to take responsibility for their actions or repair their interpersonal conflicts, and were more likely to believe they were in the right.
To obtain this result, researchers conducted a three-part study in which they measured AI’s sycophancy based on a dataset of nearly 12,000 social prompts which they ran through 11 leading AI models including Anthropic’s Claude, Google’s Gemini, and OpenAI’s ChatGPT. Even when researchers asked the AI models to judge posts from the subreddit AITA (Am I The Asshole) in which Reddit users had said the poster was wrong, the large language models still said the poster was right 51% of the time.
The study’s lead author and Stanford Computer Science Ph.D. candidate Myra Cheng said the results are worrying especially for young people who she said are turning to AI to try to solve their relationship problems.
“I worry that people will lose the skills to deal with difficult social situations,” Cheng told Stanford Report.
The AI study comes as government officials decide how involved regulators should be with overseeing AI. Several states, including Tennessee and Oregon, have passed their own laws on AI in the absence of federal regulations. Still, the White House last week put out a framework that, if taken up by Congress, would create a national AI policy and would preempt states’ “patchwork” of rules.
To test human reactions to sycophantic AI, researchers studied just over 2,400 human participants interacting with AI. First, 1,605 participants were asked to imagine they were the author of an AITA subreddit post that other humans on the subreddit had deemed wrong but AI had deemed right. The participants then read either the sycophantic AI response or a non-sycophantic response based on the human feedback. Another 800 participants talked with either a sycophantic or non-sycophantic AI model about a real conflict in their own lives before being asked to write a letter to the other person involved in the conflict.
Participants who received validating AI responses were measurably less likely to apologize, admit fault, or seek to repair their relationships. Even when users recognize models as sycophantic, the AI’s responses still affect them, said the study’s co-lead author, Stanford computer science and linguistics professor Dan Jurafsky.
“What they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic,” Jurafsky told Stanford Report.
Surprisingly, when the researchers asked the study's human subjects to rate the objectivity of both sycophantic and non-sycophantic AI responses, the subjects rated them about the same, suggesting users could not tell the sycophantic model was being overly agreeable.
“I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now,” said Cheng.
This story was originally featured on Fortune.com