ChatGPT’s history bug may have also exposed payment info, says OpenAI

OpenAI has announced new details about why it took ChatGPT offline on Monday, and it’s now saying that some users’ payment information may have been exposed during the incident.

According to a post from the company, a bug in an open source library called redis-py created a caching issue that may have shown some active users the last four digits and expiration date of another user’s credit card, along with their first and last name, email address, and payment address. Users may also have seen snippets of other people’s chat histories.

This isn’t the first time caching issues have caused users to see other people’s data — famously, on Christmas Day in 2015, Steam users were served pages containing information from other users’ accounts. There is some irony in the fact that OpenAI puts a lot of focus and research into figuring out the potential security and safety ramifications of its AI, yet it was caught out by a very well-known class of security issue.

The company says the payment info leak may have affected around 1.2 percent of ChatGPT Plus subscribers who used the service between 4AM and 1PM ET on March 20th.

You were only affected if you were using the app during the incident.

There are two scenarios that could’ve caused payment data to be shown to an unauthorized user, according to OpenAI. If a user went to the My account > Manage subscription screen during that window, they may have seen information for another ChatGPT Plus user who was actively using the service at the time. The company also says that some subscription confirmation emails sent during the incident went to the wrong person and that those emails included the last four digits of a user’s credit card number.

The company says it’s possible both of these things happened before the 20th as well, but it has no confirmation that they ever did. OpenAI has reached out to users who may have had their payment information exposed.

As for how this all happened, it apparently came down to caching. The company has a full technical explanation in its post, but the TL;DR is that it uses a piece of software called Redis to cache user information, and it talks to Redis through the redis-py library mentioned above. Under certain circumstances, a canceled Redis request would result in corrupted data being returned for a different request (which shouldn’t happen). Usually, the app would get that data, say, “this isn’t what I asked for,” and throw an error.

But if the second request happened to be asking for the same type of data — if the user was loading their account page and the stray data was someone else’s account information, for example — the app decided everything was fine and showed it to them.

That’s why people were seeing other users’ payment info and chat history: they were being served cached data that was meant for someone else but never delivered because of a canceled request. That’s also why it only affected users who were active — people who weren’t using the app wouldn’t have had their data cached in the first place.
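To make that failure mode concrete, here’s a deliberately simplified Python sketch of this class of bug. It’s a toy stand-in, not redis-py’s actual code: a fake connection object plays the role of a pooled Redis connection, where replies come back in the order commands were sent, and a canceled request can leave its reply behind for the next caller.

```python
import asyncio

# Toy stand-in for a pooled Redis connection. Replies come back over
# one shared pipe in the order commands were sent.
# (Illustrative only -- this is not redis-py's actual code.)
class FakeConnection:
    def __init__(self):
        self.responses = asyncio.Queue()  # replies waiting to be read

    async def send_command(self, key):
        # Pretend the server immediately queues up the reply for `key`.
        await self.responses.put(f"cached data for {key}")

    async def read_response(self):
        return await self.responses.get()

async def get(conn, key):
    await conn.send_command(key)
    await asyncio.sleep(0)  # simulate waiting on the network...
    # ...if we're canceled while waiting, the reply is never read and
    # stays queued on the connection for the *next* caller.
    return await conn.read_response()

async def main():
    conn = FakeConnection()

    # User A's request gets canceled mid-flight.
    task = asyncio.create_task(get(conn, "user-a:billing"))
    await asyncio.sleep(0)  # let the command get written
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass

    # User B's request on the same connection reads A's stale reply.
    print(await get(conn, "user-b:billing"))
    # -> cached data for user-a:billing

asyncio.run(main())
```

Note that the type check the app relies on doesn’t help here: user B asked for billing data and got billing-shaped data back, so nothing looked wrong.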

What made things really bad was that, on the morning of March 20th, OpenAI made a change to its server that accidentally caused a spike in canceled Redis requests, upping the number of chances for the bug to return someone’s cached data to the wrong person.

OpenAI says that the bug, which appeared in one very specific version of the redis-py library, has now been fixed and that the people who work on the project have been “fantastic collaborators.” It also says that it’s making some changes to its own software and practices to prevent this type of thing from happening again, including adding “redundant checks” to make sure the data being served actually belongs to the user requesting it and reducing the likelihood that its Redis cluster will spit out errors under high loads.
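OpenAI hasn’t published what those redundant checks look like, but the idea is straightforward to sketch. Here’s a minimal, hypothetical Python version — the payload shape and field names are invented for illustration — in which the app refuses to serve a cache entry unless the entry itself says it belongs to the user who requested it:

```python
from dataclasses import dataclass

# Hypothetical cached payload -- the real schema isn't public.
@dataclass
class CachedAccount:
    user_id: str
    email: str
    card_last4: str

class CacheMismatchError(Exception):
    """Raised when a cache entry doesn't belong to the requesting user."""

def load_account(cache: dict[str, CachedAccount],
                 requesting_user_id: str) -> CachedAccount:
    entry = cache[requesting_user_id]
    # Redundant check: even though the entry was looked up by user ID,
    # verify the payload itself agrees before serving it. A bug like
    # the redis-py one can hand back *someone else's* entry for this
    # key, and this check is the last line of defense.
    if entry.user_id != requesting_user_id:
        raise CacheMismatchError(
            f"cache entry for {entry.user_id!r} returned for "
            f"{requesting_user_id!r}; refusing to serve it"
        )
    return entry
```

With a check like this in place, the redis-py bug would have produced an error instead of another user’s billing details.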

While I’d argue that those checks should’ve been there in the first place, it’s a good thing that OpenAI has added them now. Open source software is essential for the modern web, but it also comes with its own set of challenges; because anyone can use it, a single bug can affect a wide range of services and companies at once. And if a malicious actor knows what software a specific company uses, they can potentially target that software and try to deliberately introduce an exploit. There are checks that make doing so harder, but as companies like Google have shown, it’s best to work to make sure it doesn’t happen and to be prepared for it if it does.

https://www.theverge.com/2023/3/24/23655622/chatgpt-outage-payment-info-exposed-monday