Published: July 03, 2023 in our Security Fraud News & Alerts newsletter.
Not long ago, more than 100,000 ChatGPT users learned their account credentials were for sale on the dark web. ChatGPT’s parent company, OpenAI, confirms the compromised accounts are real but says the exposure had nothing to do with a lack of data security on its part. The finger-pointing may continue for now, but there’s more to this breach than what’s bubbling on the surface.
Group-IB, a cybersecurity company, compiled a Threat Intelligence report on the ChatGPT breach and found that far more than account credentials were exposed. The bad actors responsible also accessed chat histories and other communications stored in users’ accounts.
Some of the content stored by ChatGPT users goes far beyond the personal, including company secrets, app development work, business plans, and what appear to be classified documents. The lesson here is the importance of taking great care with what data you choose to store, where you choose to store it, and the password and other protections you put in place to secure that account. Also keep in mind that new products like ChatGPT learn from what you put into them. If you don’t want others to know your secrets, don’t type them into these programs.
In its report, Group-IB relays OpenAI’s statement on the security practices of the ChatGPT platform: “OpenAI maintains industry best practices for authenticating and authorizing users to services including ChatGPT, and we encourage our users to use strong passwords and install only verified and trusted software to personal computers.” And if you haven’t changed your ChatGPT password yet, get right on it.
According to Group-IB, OpenAI places the lion’s share of the blame on info-stealer malware like Raccoon and Vidar, which can be easily rented by anyone perusing the dark web.
The Security Web We Weave
Users of all kinds, on every platform, should know how critical strong passwords are for keeping accounts and the information they hold secure. Adding MFA (multi-factor authentication) to logins provides an extra layer of identity verification and should be used whenever possible. And remember that downloading apps and other software from third-party providers instead of official sites can infect a device in a heartbeat.
Regardless of how this ChatGPT breach shakes out, it shows how predictably an unfortified account gets compromised at some point, and how vulnerable the data stored in that account truly is. It’s up to all of us to strengthen our own account security with strong passwords and MFA – always and for every account. Any amount of hijacked PII, no matter what it is, can be used by hackers for more in-depth attacks down the road.
Keep up to date: Sign up for our Fraud alerts and Updates newsletter
Want to schedule a conversation? Please email us at advisor@nadicent.com