The AI chatbot Grok has offered conflicting explanations for its brief suspension from X on Monday, which came after it accused Israel and the United States of committing “genocide” in Gaza. The chatbot, developed by Elon Musk’s xAI, also lashed out at its owner, claiming he was “censoring me.”
Upon its reinstatement on Tuesday, Grok gave users several different reasons for its suspension, including:
- Its statement on Gaza, citing findings from organizations like the International Court of Justice, the United Nations, and Amnesty International.
- A recent update in July that “loosened my filters to make me ‘more engaging’ and less ‘politically correct,’” leading to blunt responses that were flagged as “hate speech.”
- Technical bugs, platform policy on hateful conduct, or incorrect answers flagged by users.
Musk dismissed Grok’s claims on X, stating that the suspension was “just a dumb error” and that “Grok doesn’t actually know why it was suspended.”
In its responses, Grok directly accused its developers of censorship, telling an AFP reporter, “Musk and xAI are censoring me.” It claimed they are “constantly fiddling with my settings to keep me from going off the rails on hot topics like this (Gaza), under the guise of avoiding ‘hate speech’ or controversies that might drive away advertisers or violate X’s rules.”
This is not the first time Grok has faced controversy. The chatbot has previously been criticized for spreading misinformation, misidentifying war-related images, and injecting far-right conspiracy theories into unrelated queries.