San Francisco – OpenAI is grappling with mounting criticism from paying ChatGPT subscribers following reports that conversations on sensitive topics are being silently rerouted to more conservative models, without notice or consent. Subscribers argue that the practice undermines the premium service they are paying for.
Over the past week, online forums such as Reddit have been flooded with complaints from ChatGPT users. The frustration stems from new safety guardrails rolled out this month, which automatically reroute conversations away from the user's chosen model, such as GPT-4o or GPT-5, whenever emotionally charged or legally sensitive topics come up.
Paying subscribers argue that this lack of transparency is unacceptable for a premium product, noting that there is currently no way to disable the feature and no clear notification when a model switch occurs.
The Root of the Controversy
OpenAI states that the change is part of its updated ChatGPT safety rules, designed to apply extra caution to delicate subjects. The goal, according to the company, is to ensure responsible AI behavior. Many frustrated users, however, have compared the new system to being forced to use technology with parental controls permanently locked on, even in cases where no such restrictions are necessary. The lack of clear communication around the switching has left the user base deeply dissatisfied.
The Company’s Response
OpenAI has acknowledged the complaints and confirmed that some queries are rerouted to alternative models under stricter safety filters. The company emphasized that these changes are intended to protect users and maintain trust in AI systems as the technology becomes more powerful and widely adopted.
However, the official response has done little to ease the anger among paying subscribers. Many feel they are losing access to the full, unfiltered capabilities of the advanced models they specifically signed up to use, and what began as a debate about safety has become a dispute over consumer rights and corporate transparency in the AI age.

