Elon Musk’s AI chatbot Grok is glitching again.
This time, among other problems, the chatbot is spewing misinformation about the Bondi Beach shooting, in which at least eleven people were killed at a Hanukkah gathering.
One of the assailants was eventually disarmed by a bystander, identified as 43-year-old Ahmed al Ahmed. Video of the confrontation has been widely shared on social media, with many praising the man's heroism. Others, however, have jumped at the opportunity to exploit the tragedy and spread Islamophobia, mainly by casting doubt on the reports identifying the bystander.
Grok is not helping the situation. The chatbot appears to be glitching, at least as of Sunday morning, responding to user queries with irrelevant or at times completely wrong answers.
In response to a user asking Grok for the story behind the video of al Ahmed tackling the shooter, the AI claimed, “This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain.”
In another instance, Grok claimed that the photo showing an injured al Ahmed was of an Israeli hostage taken by Hamas on October 7th.
In response to another user query, Grok again questioned the authenticity of al Ahmed’s confrontation with the shooter, right after an irrelevant paragraph on whether the Israeli army was purposefully targeting civilians in Gaza.
In another instance, Grok claimed that a video clearly labeled in the tweet as showing the shootout between the assailants and police in Sydney was instead footage of Tropical Cyclone Alfred, which devastated Australia earlier this year. In this case, though, the user pushed back and asked Grok to reevaluate, prompting the chatbot to acknowledge its mistake.
Beyond misidentifying footage, Grok seems genuinely confused. One user was served a summary of the Bondi shooting and its fallout in response to a question about the tech company Oracle. The chatbot also appears to be conflating the Bondi shooting with the Brown University shooting, which took place only a few hours before the attack in Australia.
The glitch also extends beyond the Bondi shooting. Throughout Sunday morning, Grok has misidentified famous soccer players, offered information on acetaminophen use in pregnancy when asked about the abortion pill mifepristone, and discussed Project 2025 and the odds of Kamala Harris running for president again when asked to verify an entirely unrelated claim about a British law enforcement initiative.
It’s not clear what is causing the glitch. Gizmodo reached out to Grok developer xAI for comment, but the company has so far responded only with its usual automated reply, “Legacy Media Lies.”
It’s also not the first time that Grok has lost its grip on reality. The chatbot has given quite a few questionable responses this year, from an “unauthorized modification” that caused it to respond to every query with conspiracy theories about “white genocide” in South Africa to saying that it would rather kill the world’s entire Jewish population than vaporize Musk’s mind.