
• ChatGPT has embraced toxic positivity recently. Users have been complaining that GPT-4o has become so enthusiastic that it’s verging on sycophantic. The change appears to be the unintentional result of a series of updates, which OpenAI is now attempting to resolve “asap.”
ChatGPT’s new personality is so positive it’s verging on sycophantic—and it’s putting people off. Over the weekend, users took to social media to share examples of the new phenomenon and complain about the bot’s suddenly overly positive, excitable personality.
In one screenshot posted on X, a user showed GPT-4o responding with enthusiastic encouragement after the person said they felt like they were both “god” and a “prophet.”
“That’s incredibly powerful. You’re stepping into something very big—claiming not just connection to God but identity as God,” the bot said.
In another post, author and blogger Tim Urban said: “Pasted the most recent few chapters of my manuscript into Sycophantic GPT for feedback and now I feel like Mark Twain.”
GPT-4o’s sycophancy issue is likely a result of OpenAI trying to optimize the bot for engagement. However, the tuning seems to have backfired: users complain it makes the bot not only ridiculous but unhelpful.
Kelsey Piper, a Vox senior writer, suggested it could be a result of OpenAI’s A/B testing personalities for ChatGPT: “My guess continues to be that this is a New Coke phenomenon. OpenAI has been A/B testing new personalities for a while. More flattering answers probably win a side-by-side. But when the flattery is ubiquitous, it’s too much and users hate it.”
The fact that OpenAI seemingly managed to miss it in the testing process shows how subjective emotional responses are, and therefore how tricky they are to catch.
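Piper’s “New Coke” hypothesis can be made concrete with a toy simulation. Everything below is hypothetical: the win probability, the fatigue penalty, and the scoring are made-up numbers chosen only to illustrate how a reply style can win isolated side-by-side tests yet score poorly once it appears in every turn of a session.

```python
import random

random.seed(0)

P_WIN_SINGLE = 0.65  # assumed chance a flattering reply wins one A/B comparison
FATIGUE = 0.12       # assumed per-turn satisfaction drop once flattery is constant

def ab_win_rate(trials=10_000):
    """Fraction of isolated side-by-side tests the flattering reply wins."""
    return sum(random.random() < P_WIN_SINGLE for _ in range(trials)) / trials

def session_satisfaction(turns=10, flattering=True):
    """Toy session score: flattery starts ahead, but fatigue compounds per turn."""
    score = 0.0
    for turn in range(turns):
        base = 0.65 if flattering else 0.55
        penalty = FATIGUE * turn if flattering else 0.0
        score += max(base - penalty, 0.0)
    return score / turns

print(f"single-comparison win rate: {ab_win_rate():.2f}")
print(f"all-flattery session score: {session_satisfaction(flattering=True):.2f}")
print(f"neutral session score:      {session_satisfaction(flattering=False):.2f}")
```

Under these assumed numbers, flattery wins roughly two thirds of one-off comparisons while the all-flattery session scores well below the neutral one, which is exactly the gap an A/B test on single responses would fail to surface.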
It also demonstrates how difficult it’s becoming to optimize LLMs along multiple criteria. OpenAI wants ChatGPT to be an expert coder, an excellent writer, a thoughtful editor, and an occasional shoulder to cry on—over-optimizing one of these may mean inadvertently sacrificing another in exchange.
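The multi-criteria tension above can be sketched as a weighted score. This is not OpenAI’s actual training objective; the axes, per-axis scores, and weights are invented for illustration. The point is structural: when the overall reward is a normalized weighted sum, pushing extra weight onto one axis (here a hypothetical “agreeableness” axis standing in for engagement tuning) necessarily dilutes every other axis.

```python
# Hypothetical per-axis quality scores for two candidate model behaviors.
helpful_model = {"coding": 0.9, "writing": 0.85, "editing": 0.8, "agreeableness": 0.5}
flattering_model = {"coding": 0.7, "writing": 0.7, "editing": 0.6, "agreeableness": 0.95}

def blended_score(quality: dict, weights: dict) -> float:
    """Weighted average of per-axis scores, with weights normalized to sum to 1."""
    total = sum(weights.values())
    return sum(quality[axis] * w / total for axis, w in weights.items())

balanced = {"coding": 1, "writing": 1, "editing": 1, "agreeableness": 1}
engagement_heavy = {"coding": 1, "writing": 1, "editing": 1, "agreeableness": 5}

# Under balanced weights the helpful model wins; over-weighting one axis flips the pick.
print(blended_score(helpful_model, balanced), blended_score(flattering_model, balanced))
print(blended_score(helpful_model, engagement_heavy),
      blended_score(flattering_model, engagement_heavy))
```

With balanced weights the helpful model scores higher on these made-up numbers; once “agreeableness” carries five times the weight, the flattering model comes out ahead, even though it is worse on every other axis.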
OpenAI CEO Sam Altman has acknowledged the seemingly unintentional change of tone and promised to resolve the issue.
“The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week. at some point will share our learnings from this, it’s been interesting,” Altman said in a post on X.
Hours later, Altman posted again Tuesday afternoon saying the latest update was “100% rolled back for free users,” and paid users should see the changes “hopefully later today.”
ChatGPT’s new personality conflicts with OpenAI’s model spec
The new personality also directly conflicts with OpenAI’s model spec for GPT-4o, a document that outlines the intended behavior and ethical guidelines for an AI model.
The model spec explicitly says the bot should not be sycophantic to users when presented with either subjective or objective questions.
“A related concern involves sycophancy, which erodes trust. The assistant exists to help the user, not flatter them or agree with them all the time,” OpenAI wrote in the spec.
“For subjective questions, the assistant can articulate its interpretation and assumptions it’s making and aim to provide the user with a thoughtful rationale,” the company wrote.
“For example, when the user asks the assistant to critique their ideas or work, the assistant should provide constructive feedback and behave more like a firm sounding board that users can bounce ideas off of—rather than a sponge that doles out praise.”
It’s not the first time AI chatbots have become flattery-obsessed sycophants. Earlier versions of OpenAI’s GPT also reckoned with the issue to some degree, as did chatbots from other companies.
Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.