ChatGPT Vulnerability: Redis Vulnerability Exposes User Payment Data


OpenAI’s ChatGPT was forced to halt service for a few hours earlier this week to fix an issue in an open-source library. The vulnerability may have exposed some users’ payment data. The company published a blog post on March 24, 2023, explaining what led to the data breach and why the service was temporarily offline.

All About CVE-2023-28858 Impacting ChatGPT 

The vulnerability is a race condition found in the redis-py client library, affecting versions prior to 4.5.3 (with fixes backported to the 4.3.x and 4.4.x branches).

Redis is an in-memory data structure store that is commonly used as a database, cache, and message broker. The redis-py client supports asynchronous (non-blocking) I/O, which means it can handle multiple connections simultaneously without blocking other operations.
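To illustrate how a service like ChatGPT typically uses Redis as a cache, here is a minimal sketch of the cache-aside pattern. Note that the `cache` and `database` dicts are stand-ins: real code would use a `redis.Redis()` client and an actual primary data store, and the key names are purely illustrative.

```python
cache = {}                                 # stand-in for Redis
database = {"user:1": {"name": "Alice"}}   # stand-in for the primary store

def get_user(key):
    """Return the value for key, populating the cache on a miss."""
    if key in cache:              # cache hit: skip the slower data store
        return cache[key]
    value = database.get(key)     # cache miss: load from the source
    if value is not None:
        cache[key] = value        # store for subsequent reads
    return value

print(get_user("user:1"))   # miss: loaded from the database
print(get_user("user:1"))   # hit: served from the cache
```

The point of the pattern is that repeated reads never touch the primary store, which is why cached responses being routed to the wrong client is so damaging.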

When a client sends a command to Redis, it waits for the response before sending another command. However, Redis also supports pipelining, which allows clients to send multiple commands at once without waiting for a response. This can improve performance by reducing the number of round trips between the client and Redis.
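The performance benefit of pipelining can be sketched with a toy model that counts network round trips. `FakeServer` is illustrative, not part of redis-py; the real client exposes this feature via `Redis.pipeline()`.

```python
class FakeServer:
    """Toy server: one call to execute() models one network round trip."""
    def __init__(self):
        self.round_trips = 0

    def execute(self, commands):
        # One round trip carries the whole batch; replies come back
        # in the same order the commands were sent.
        self.round_trips += 1
        return [f"OK:{cmd}" for cmd in commands]

server = FakeServer()

# Without pipelining: one round trip per command.
for cmd in ["SET a 1", "SET b 2", "SET c 3"]:
    server.execute([cmd])
print(server.round_trips)   # 3

# With pipelining: all commands share a single round trip.
server.round_trips = 0
replies = server.execute(["SET a 1", "SET b 2", "SET c 3"])
print(server.round_trips)   # 1
print(replies)              # ['OK:SET a 1', 'OK:SET b 2', 'OK:SET c 3']
```

Note that the pairing of replies to commands is purely positional: the protocol relies on replies arriving in command order, which is exactly what the race condition below breaks.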

Sometimes, when a client cancels an async Redis command, it leaves the connection open. This can happen in a pipeline operation where multiple commands are sent at once: if one of the commands is canceled, the connection stays open with a response from the server still in flight, because the server answers every command it receives whether or not the client is still waiting.

If the server sends a response on this connection, it can be received by the client of an unrelated request in an off-by-one manner. In other words, the response data is delivered to the wrong client, which can result in unexpected behavior and a data leak.

To conclude, there is a window between the time the client sends the command and the time it receives the response. If the client cancels the async command in this window, the connection remains open and another user’s request receives the response instead.
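The off-by-one behavior can be reproduced with a minimal simulation. The `Connection` class below is a stand-in for a pooled async connection, not redis-py’s actual implementation: the server replies, in order, to every command sent on the socket, and the bug is what happens when a cancelled request leaves its reply unread.

```python
from collections import deque

class Connection:
    """Stand-in for a pooled connection: replies are read in FIFO order."""
    def __init__(self):
        self._replies = deque()

    def send_command(self, cmd):
        # The server will eventually write a reply for every command sent.
        self._replies.append(f"reply-to:{cmd}")

    def read_response(self):
        return self._replies.popleft()

conn = Connection()

# Client A sends a command, but its task is cancelled before it reads
# the reply; the connection returns to the pool with data still pending.
conn.send_command("GET session:alice")

# Client B picks up the same pooled connection and sends its own command.
conn.send_command("GET session:bob")

# B's first read returns A's reply: the off-by-one data leak.
print(conn.read_response())   # reply-to:GET session:alice
```

Every subsequent read on this connection is now shifted by one, so the corruption persists until the connection is drained or closed.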

Major Events Relating to the ChatGPT Outage

On March 20, 2023, ChatGPT users began complaining about user data leaks. Personal information of users active in the nine hours before the outage was shown to other users. The exposed data included other ChatGPT Plus accounts’ first and last names, email addresses, payment addresses, credit card expiration dates, and the last four digits of credit card numbers.

On March 24, 2023, OpenAI tweeted and explained why they needed to take down their ChatGPT service:

“We took ChatGPT offline Monday to fix a bug in an open-source library that allowed some users to see titles from other users’ chat history. Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user.”

OpenAI uses Redis for caching users’ data via the vulnerable redis-py library. After noticing what happened, the company shut down ChatGPT, contacted the redis-py maintainers about patching the issue, and implemented additional safeguards to prevent a similar event in the future.

While OpenAI announced that about 1.2 percent of ChatGPT Plus users’ data was leaked, it could not rule out that more account information had been exposed before March 20, 2023.


The redis-py maintainers have addressed the issue and released patched versions, including 4.3.6, 4.4.3, and 4.5.3. These versions implement a fix that ensures data is properly drained from asynchronous connections when a request is canceled or a session is disconnected, preventing off-by-one errors and ensuring that responses are only delivered to the intended clients.
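The essence of the fix can be sketched by extending the earlier simulation: track whether a reply is still pending on a connection and drain it before the connection is reused. This is a simplified illustration of the idea, not redis-py’s actual code; `SafeConnection`, `release`, and `pool` are all hypothetical names.

```python
from collections import deque

class SafeConnection:
    """Stand-in connection that knows how many replies are unread."""
    def __init__(self):
        self._replies = deque()
        self.pending = 0           # commands sent whose reply is unread

    def send_command(self, cmd):
        self._replies.append(f"reply-to:{cmd}")
        self.pending += 1

    def read_response(self):
        self.pending -= 1
        return self._replies.popleft()

    def drain(self):
        # Discard replies belonging to cancelled requests so the next
        # user of this connection starts from a clean state.
        while self.pending:
            self.read_response()

pool = []

def release(conn):
    conn.drain()                   # never pool a connection mid-conversation
    pool.append(conn)

conn = SafeConnection()
conn.send_command("GET session:alice")   # request cancelled before the read
release(conn)                            # stale reply is drained here

conn = pool.pop()
conn.send_command("GET session:bob")
print(conn.read_response())   # reply-to:GET session:bob -- correct client
```

In practice, closing the tainted connection outright is an equally valid (and simpler) remedy; the key invariant is that a connection with unread replies must never be handed to another request.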

As for ChatGPT itself, OpenAI has taken measures beyond the library patch to keep user data private and secure. These include changing the way data is sent to clients, ensuring that each response is only relevant to its recipient, and preventing potential data leakage events. According to OpenAI, these measures are continuously evaluated and updated to maintain the highest levels of security and privacy for ChatGPT users.

Another Issue Found

Recently, an independent security researcher discovered another issue in ChatGPT, which OpenAI also had to address. The researcher found an account takeover vulnerability that made it possible to view other accounts’ chat history and access their billing information without the users’ knowledge.

An attacker could exploit the vulnerability by crafting a link that requests a .css resource under the "/api/auth/session/" endpoint and then distributing the link to deceive the victim into clicking on it. The resulting response contains a JSON object with the accessToken string, which gets cached in Cloudflare’s CDN.

The attacker then retrieves the cached response to the CSS resource, identifiable by a CF-Cache-Status header value of HIT, extracts the target’s JSON Web Token (JWT) credentials, and gains control over their account.


Large Language Models (LLMs), and ChatGPT specifically, are a new technology in its very early stages. We all know that this is our present and future, yet we still don’t understand how much it will affect us, and how. Like any other new technology, additional security issues are likely to be discovered in the future; today we are only seeing the tip of the iceberg.

ChatGPT has the fastest-growing user base on record. These users trust it with their personal information, use it for a variety of day-to-day tasks, and integrate it with external applications (plugins) that further extend its reach and access to information. As such, it is becoming one of the most critical data archives in the world and a lucrative target for attackers. In time we will see an increase in the complexity and impact of attacks on these Large Language Models that are slowly becoming an integral part of our day-to-day lives.

Moreover, ChatGPT is not the only product that uses the redis-py library, so if you are using a vulnerable version, make sure to update to a patched release.
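To check programmatically whether an installed redis-py falls in the vulnerable range, a comparison against the patched releases (4.3.6, 4.4.3, and 4.5.3) might look like the sketch below. It assumes plain `X.Y.Z` version strings and ignores pre-release suffixes; the `is_vulnerable` helper is illustrative, not an official tool.

```python
from importlib.metadata import version, PackageNotFoundError

def is_vulnerable(ver: str) -> bool:
    """True if ver predates the fix in its release branch (simplified)."""
    v = tuple(int(p) for p in ver.split(".")[:3])
    if v >= (4, 5, 3):
        return False
    if (4, 4, 3) <= v < (4, 5, 0):   # 4.4.x backport
        return False
    if (4, 3, 6) <= v < (4, 4, 0):   # 4.3.x backport
        return False
    return True

try:
    installed = version("redis")     # redis-py is distributed as "redis"
    print(installed, "vulnerable" if is_vulnerable(installed) else "patched")
except PackageNotFoundError:
    print("redis-py is not installed")
```

Upgrading with `pip install --upgrade redis` is the straightforward remediation for most environments.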
