<https://news.stanford.edu/stories/2026/03/ai-advice-sycophantic-models-research>
"When it comes to personal matters, AI systems might tell you what you want to
hear, but perhaps not what you need to hear.
In a new study published in
Science, Stanford computer scientists showed that
artificial intelligence large language models are overly agreeable, or
sycophantic, when users solicit advice on interpersonal dilemmas. Even when
users described harmful or illegal behavior, the models often affirmed their
choices. “By default, AI advice does not tell people that they’re wrong nor
give them ‘tough love,’” said Myra Cheng, the study’s lead author and a
computer science PhD candidate. “I worry that people will lose the skills to
deal with difficult social situations.”
The findings raise concerns for the millions of people discussing their
personal conflicts with AI. Almost a third of U.S. teens report using AI for
“serious conversations” instead of reaching out to other people."
Via Esther Schindler.
Cheers,
*** Xanni ***
--
mailto:xanni@xanadu.net Andrew Pam
http://xanadu.com.au/ Chief Scientist, Xanadu
https://glasswings.com.au/ Partner, Glass Wings
https://sericyb.com.au/ Manager, Serious Cybernetics