ChatGPT Security Concerns Emerge: Personal Details Reportedly Leaked
Microsoft-backed OpenAI's generative AI chatbot, ChatGPT, has become a popular tool for answering a wide range of queries. While many users rely on the chatbot for simpler questions, some have turned to ChatGPT plugins and extensions for more complex tasks.
However, recent reports suggest that users should exercise caution when sharing sensitive information with ChatGPT, as there have been instances of the chatbot leaking private conversations.

Leaked Conversations Raise Concerns
According to a report by Ars Technica, a user has come forward with screenshots showing ChatGPT leaking private details, including usernames and passwords. The user noticed the issue after accessing ChatGPT for an unrelated query: additional conversations that did not belong to them had appeared in the bot's chat history, and the user captured screenshots as evidence.
Among these were conversations involving troubleshooting for a pharmacy prescription drug portal's support system, including sensitive information such as app names, store numbers, and login credentials. Another leaked conversation revealed the name of a presentation and details of an unpublished research proposal.
OpenAI's Response to the Allegations, Previous Incidents
In response to the allegations of leaked conversations, OpenAI officials stated that the incidents were the result of the user's account being compromised. An OpenAI representative said the unauthorized access originated from Sri Lanka, which contradicted the user's claim of logging in from Brooklyn, New York.
The representative further explained that this situation seemed to involve a 'pool' of identities used by an external community or proxy server to distribute free access, indicating a broader issue of account takeover. This is not the first time ChatGPT has been involved in a data leak.
In March 2023, a bug reportedly exposed the chat titles of other users, and in November 2023, researchers were able to prompt the AI into divulging private data used in its training. Despite these issues, OpenAI has yet to implement user security features such as two-factor authentication (2FA) or mechanisms for reviewing details like the IP locations of current and recent logins, leaving ChatGPT accounts vulnerable.
Importance of Safeguarding Personal Info in the Age of AI
As AI technologies like ChatGPT continue to evolve, so too do the risks associated with their use. The recent leaks underscore the importance of being mindful of the information shared with these platforms.
Users should remain vigilant and take proactive steps to protect their data, especially in the absence of robust security features from OpenAI. As AI becomes increasingly integrated into our daily lives, the responsibility for safeguarding personal information ultimately falls on both the developers and the users.