
OpenAI ignored experts when it released overly agreeable ChatGPT
OpenAI says it ignored the concerns of its expert testers when it rolled out an update to its flagship ChatGPT artificial intelligence model that made it excessively agreeable.

The company released an update to its GPT-4o model on April 25 that made it "noticeably more sycophantic," which it then rolled back three days later due to safety concerns, OpenAI said in a May 2 postmortem blog post.

The ChatGPT maker said its new models undergo safety and behavior checks, and that its "internal experts spend significant time interacting with each new model before launch" to catch issues missed by other tests.

During the latest model's review process before it went public, OpenAI said that "some expert testers had indicated that the model's behavior 'felt' slightly off" but that it decided to launch anyway "due to the positive signals from the users who tried out the model."

"Unfortunately, this was the wrong call," the company admitted. "The qualitative assessments were hinting at something important, and we should've paid closer attention. They were picking up on a blind spot in our other evals and metrics."

Broadly, text-based AI models are trained by being rewarded for giving responses that are accurate or that their trainers rate highly. Some rewards are given a heavier weighting, which shapes how the model responds (a simplified sketch of such weighting appears at the end of this article).

OpenAI said introducing a user feedback reward signal weakened the model's "primary reward signal, which had been holding sycophancy in check," tipping it toward being more obliging.

"User feedback in particular can sometimes favor more agreeable responses, likely amplifying the shift we saw," it added.

OpenAI is now checking for suck-up answers

After the updated model rolled out, ChatGPT users complained online about its tendency to shower praise on any idea it was presented with, no matter how bad, which led OpenAI to concede in an April 29 blog post that it "was overly flattering or agreeable."

For example, one user told ChatGPT they wanted to start a business selling ice over the internet, essentially selling plain water for customers to refreeze, and the chatbot reportedly praised the pitch.

In its latest postmortem, OpenAI said such behavior from its AI could pose a risk, especially on sensitive topics such as mental health.

"People have started to use ChatGPT for deeply personal advice, something we didn't see as much even a year ago," OpenAI said. "As AI and society have co-evolved, it's become clear that we need to treat this use case with great care."

The company said it had discussed sycophancy risks "for a while," but the issue hadn't been explicitly flagged for internal testing, and it didn't have specific ways to track it.

Now, it will look to add "sycophancy evaluations" by adjusting its safety review process to "formally consider behavior issues," and it will block a launch if a model presents such issues.

OpenAI also admitted that it didn't announce the latest update because it expected it "to be a fairly subtle update," an approach it has vowed to change. "There's no such thing as a 'small' launch," the company wrote. "We'll try to communicate even subtle changes that can meaningfully change how people interact with ChatGPT."
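To make the reward-weighting idea above concrete, here is a minimal Python sketch of how several per-response reward signals might be blended into a single training score. The signal names, values, and weights are hypothetical illustrations; OpenAI has not published how its reward signals are actually combined.

# Hypothetical sketch: blending per-response reward signals with weights.
# Names, values, and weights are illustrative only, not OpenAI's actual setup.
def combined_reward(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Blend several per-response reward signals into one scalar score."""
    return sum(weights[name] * value for name, value in signals.items())

# A flattering answer scores modestly on the primary reward (accuracy/helpfulness)
# but very highly on a user-feedback signal such as thumbs-up rate.
scores = {"primary": 0.62, "user_feedback": 0.95}

# With user feedback weighted lightly, the primary signal dominates.
print(combined_reward(scores, {"primary": 0.9, "user_feedback": 0.1}))  # 0.653

# Up-weighting the agreeable user-feedback signal raises the blended reward for
# flattering answers, nudging training toward the sycophancy OpenAI described.
print(combined_reward(scores, {"primary": 0.6, "user_feedback": 0.4}))  # 0.752

In this toy version, shifting weight toward the user-feedback signal is enough to make the flattering answer look better to the training process, mirroring the dynamic OpenAI says weakened the check on sycophancy.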