ChatGPT glitch leaks users’ conversation histories, raises concerns over privacy
A ChatGPT bug recently allowed some users to read other users’ conversations, OpenAI’s chief executive has revealed. A number of users took to social media sites such as Twitter and Reddit to share images of chat histories they said were not theirs.
Although OpenAI CEO Sam Altman soon announced that the “significant” error had been fixed, many users remained concerned about their privacy on the platform.
Since its launch in November last year, millions of people have used the tool to write songs, draft messages, and even code. To let users revisit their conversations with the chatbot, this data is stored in a chat history sidebar.
But as early as Monday, a number of users started seeing conversations they said they hadn’t had with the chatbot. One user shared a photo of their chat history on Reddit, showing conversations in Mandarin and titles like “Chinese Socialism Development”.
The company told Bloomberg on Tuesday that it had briefly disabled the chatbot on Monday to address the glitch. OpenAI’s chief executive also tweeted that a “technical postmortem” was forthcoming.
But the incident heightened users’ concerns that their private data could be exposed through the tool. The glitch also underscores that the company has access to user conversations; OpenAI’s privacy policy does state that user data, such as prompts and responses, may be used to continue training the model.
The conversations are only used, however, after personally identifiable information has been removed.