
ChatGPT Sometimes Shares Your Passwords and Chats With Other Users

By Dewey Olson
Illustrations of artificial intelligence bots and ChatGPT (Photo: Shutterstock)

ChatGPT, the popular AI language model created by OpenAI, has recently been accused of violating user privacy.

Unauthorized access to a user's account allegedly led to the exposure of sensitive information.

This came to light when a user named Chase Whiteside discovered that his account had been accessed from Sri Lanka and that his chat history contained conversations he never had, including unpublished research papers and other confidential data.


This revelation has caused widespread concern and raised questions about the security of AI-powered assistants.

Account Takeover from Sri Lanka Raises Alarms

Chase Whiteside, a ChatGPT user from Brooklyn, New York, found himself at the center of a potential data breach when he noticed unfamiliar conversations and private data in his chat history.

OpenAI officials quickly responded, attributing the incident to an account takeover and stating that the login activity was consistent with external communities or proxy servers that distribute free access to compromised accounts.

Successful logins traced to IP addresses in Sri Lanka coincided with the timeframe of the suspicious conversations.

User Doubts Account Compromise: Password Strength and Lack of Protections

Whiteside, who changed his password after the incident, expressed skepticism that his account had actually been compromised.

He pointed to the strength of his nine-character password, which incorporated upper- and lower-case letters and special characters, and said he used it exclusively for his Microsoft account.

However, the episode highlighted ChatGPT's lack of essential security features, such as two-factor authentication (2FA) and the ability to review the IP locations of recent logins, leaving accounts vulnerable to unauthorized access.

The account-takeover explanation challenges the initial suspicion that ChatGPT itself was leaking chat histories to unrelated users.


Despite the clarification, concerns persist about the platform's security measures, as it lacks standard protections that have long been established on other major platforms.

Original Story: ChatGPT's Unintended Data Exposure

The original discovery was made when an Ars Technica reader provided screenshots showing leaked conversations that included login credentials and personal details of unrelated users.

Two of the screenshots displayed multiple username-and-password pairs connected to a support system for a pharmacy prescription drug portal.

The leaked conversation showed an employee troubleshooting problems with the portal, venting frustration and revealing sensitive information along the way.

Horror Unveiled in Leaked Conversation

One leaked conversation captured the user's intense dissatisfaction with the system: “THIS is so f-ing insane, horrible, horrible, horrible.”

Beyond the venting, the conversation exposed the name of the troubled app, the store number, and detailed criticisms of the software.

The leaked chat also included a link to additional credential pairs beyond those in the initial redacted screenshot, underscoring the depth of the exposure.

User's Experience: An Unexpected Discovery

Chase Whiteside's experience added a personal dimension to the incident: he stumbled upon the leaked conversations right after using ChatGPT for an unrelated query. The conversations, which were not present in his history the previous night, appeared during a brief break from using the AI tool. This unexpected discovery raised questions about ChatGPT's security and privacy practices.

Diverse Private Conversations Exposed

The leaked conversations went beyond user dissatisfaction, including details about a presentation, an unpublished research proposal, and a script written in the PHP programming language. Each leaked conversation appeared to involve a different, unrelated user.

Notably, the conversation related to the pharmacy portal referenced the year 2020, suggesting the exposed data may stretch back years.

Historical Incidents and Ongoing Concerns

This incident echoes previous episodes of data exposure involving ChatGPT. In March 2023, the AI chatbot was taken offline after a bug revealed the titles of one user's chat history to unrelated users.

In November 2023, researchers reported using crafted queries to extract sensitive data that the model had memorized during training, raising concerns about the security of personal data handled by AI language models.

Corporate Responses: Apple's Restriction and Broader Implications

The fear of data leakage has prompted companies, including tech giant Apple, to restrict their employees' use of ChatGPT and similar AI services.

This cautious approach reflects the broader implications of such incidents for corporate data security.

The restriction aligns with a growing trend of companies reevaluating AI services amid concerns about the exposure of proprietary or private data.

System Errors and Middlebox Devices: Unraveling the Root Causes

The recurring nature of data exposure incidents prompts a closer look at the underlying causes.

These incidents often involve "middlebox" devices that sit between front-end and back-end systems and cache user data, including credentials, to improve performance.

When the caching process mismatches requests and responses, credentials from one account can be mapped to another, creating an avenue for data leaks.
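To make that failure mode concrete, here is a minimal, hypothetical Python sketch (not OpenAI's actual infrastructure) of a caching middlebox that keys cached responses by URL alone, so the first user's response is replayed to everyone who later requests the same path:

```python
# Hypothetical sketch of a credential-leaking cache, for illustration only.
# The bug: cached responses are keyed by URL alone, with no notion of which
# user (session or auth token) the response belongs to.

cache = {}  # maps URL -> cached back-end response

def backend_fetch(url: str, user: str) -> str:
    # Stand-in for the real back-end, which returns per-user data.
    return f"chat history for {user}"

def middlebox_get(url: str, user: str) -> str:
    # BUG: the cache key omits the user's identity, so the first response
    # cached for a URL is served to every subsequent requester of that URL.
    if url not in cache:
        cache[url] = backend_fetch(url, user)
    return cache[url]

print(middlebox_get("/api/history", user="alice"))  # "chat history for alice"
print(middlebox_get("/api/history", user="bob"))    # "chat history for alice" -- leaked!

# The fix is to include identity in the cache key, e.g.:
#   cache[(url, user)] = backend_fetch(url, user)
```

Real middleboxes fail in subtler ways, but the underlying mismatch is the same: cached data associated with one account ends up being served to another.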
