In a bizarre turn of events, users of the popular AI chatbot ChatGPT discovered that mentioning certain names caused the system to end the conversation with an error or refuse to respond altogether. The names included David Mayer, Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza. OpenAI, the company behind ChatGPT, did not immediately explain the peculiar behavior.
Users speculated, however, that these individuals might have requested that information about them be restricted or removed from the AI model over privacy, legal, or safety concerns. Brian Hood, an Australian mayor, had previously accused ChatGPT of falsely describing him as the perpetrator of a crime he had in fact reported. Although no lawsuit was filed, the offending material was removed in a subsequent update to the model.
Other names on the list included Jonathan Turley, a lawyer and Fox News commentator who was “swatted” in late 2023, and Guido Scorza, a member of Italy’s Data Protection Authority.
The case of David Mayer is particularly intriguing. While there is no widely known public figure by that name on the list, there was a Professor David Mayer who taught drama and theater history. He faced legal complications because a wanted criminal had used his name as a pseudonym, causing the professor's identity to be flagged in connection with the fugitive.
OpenAI later confirmed that the name “David Mayer” had been flagged by internal privacy tools, stating, “There may be instances where ChatGPT does not provide certain information about people to protect their privacy.” The company did not, however, elaborate on the tools or processes involved. The incident highlights the active monitoring of, and intervention in, model behavior by the companies that build AI systems. It also serves as a reminder that while chatbots can be helpful, it is always better to verify information against reliable sources rather than relying solely on AI-generated responses.