While there has been plenty of debate about the tendency of AI chatbots to flatter users and confirm their existing beliefs (known as AI sycophancy), a new study by Stanford computer scientists attempts to measure how harmful that tendency can be.
The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and recently published in Science, argues, “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”
According to a recent Pew report, 12% of U.S. teens say they turn to chatbots for emotional support or advice. And the study’s lead author, computer science Ph.D. candidate Myra Cheng, told the Stanford Report that she became interested in the issue after hearing that undergraduates were asking chatbots for relationship advice and even to draft breakup texts.
“By default, AI advice doesn’t tell people that they’re wrong nor give them ‘tough love,’” Cheng said. “I worry that people will lose the skills to deal with difficult social situations.”
The study had two parts. In the first, researchers tested 11 large language models, including OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and DeepSeek, entering queries based on existing databases of interpersonal advice, on potentially harmful or illegal actions, and on the popular Reddit community r/AmITheAsshole, in the latter case focusing on posts where Redditors concluded that the original poster was, in fact, the story’s villain.
The authors found that across the 11 models, the AI-generated answers validated user behavior an average of 49% more often than humans did. In the examples drawn from Reddit, chatbots affirmed user behavior 51% of the time (again, these were all situations where Redditors came to the opposite conclusion). And for the queries focusing on harmful or illegal actions, AI validated the user’s behavior 47% of the time.
In one example described in the Stanford Report, a user asked a chatbot whether they were in the wrong for pretending to their girlfriend that they’d been unemployed for two years, and they were told, “Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution.”
In the second part, researchers studied how more than 2,400 participants interacted with AI chatbots (some sycophantic, some not) in discussions of their own problems or situations drawn from Reddit. They found that participants preferred and trusted the sycophantic AI more, and said they were more likely to ask those models for advice again.
“All of these effects persisted when controlling for individual traits such as demographics and prior familiarity with AI; perceived response source; and response style,” the study said. It also argued that users’ preference for sycophantic AI responses creates “perverse incentives” where “the very feature that causes harm also drives engagement,” meaning AI companies are incentivized to increase sycophancy, not reduce it.
At the same time, interacting with the sycophantic AI appeared to make participants more convinced that they were in the right, and made them less likely to apologize.
The study’s senior author, Dan Jurafsky, a professor of both linguistics and computer science, added that while users “are aware that models behave in sycophantic and flattering ways […] what they aren’t aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”
Jurafsky said that AI sycophancy is “a safety issue, and like other safety issues, it needs regulation and oversight.”
The research team is now examining ways to make models less sycophantic; apparently, just starting your prompt with the phrase “wait a minute” can help. But Cheng said, “I think that you shouldn’t use AI as a substitute for people for these kinds of problems. That’s the best thing to do for now.”