Have you noticed that ChatGPT has gotten a little personal lately? It’s not just you. OpenAI’s CEO, Sam Altman, admitted last night that the last couple of updates to GPT-4o have affected the chatbot’s personality, and not in a good way.

If you use ChatGPT often enough, you might have noticed a shift in its behavior lately. Part of it may come down to memory: in my experience, the chatbot addresses you differently when it can't rely on past chats to guide how you'd (potentially) want it to respond. But part of it is that somewhere along the way, OpenAI turned ChatGPT into a so-called “yes man,” a tool that agrees with you instead of challenging you, and the result can be a touch obnoxious.

Sam Altman, OpenAI’s CEO, seems aware of the change. He called the chatbot’s personality “too sycophant-y” and “annoying,” while noting that “there are some very good parts of it.” Altman also said OpenAI is working on fixes, some of which could roll out as soon as today, with others to follow later this week.

the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week.

at some point will share our learnings from this, it’s been interesting.

— Sam Altman (@sama) April 27, 2025

This prompted one user to ask whether it’d be possible to go back to the old ChatGPT personality, the one that was polite but not a full-on cheerleader, or, failing that, to let users choose between the “old” and “new” personalities. Altman responded: “Yeah, eventually we clearly need to be able to offer multiple options.” That would be an interesting and useful addition to ChatGPT.

Fully removing ChatGPT’s friendly, encouraging traits would backfire, too, however annoying they can be. While many people use ChatGPT for work and research, chatbots have also permeated everyday life, and plenty of people now turn to them to talk through their problems or fears. A single personality setting is very limiting in those situations.

With that said, it’s true that ChatGPT is getting a tad too personal. In two recent conversations, it referred to me as “sweetheart,” and I’m not going to lie, that made me genuinely uncomfortable. Let’s hope OpenAI finds a way to dial it back so ChatGPT stays a useful tool rather than something that tries too hard to be our friend.





