- ChatGPT, the leading AI chatbot, has become the latest victim of cyber fraud.
- More than 100,000 login credentials have been compromised in a recent attack.
As the AI revolution neared its peak, speculation of all kinds emerged. Critics said it would eat everyone’s jobs, and some even claimed it would bring doomsday for the human race. But no one predicted that it would expose us to a very contemporary and grave threat: data theft.
A chink in the AI giant’s armor
That’s right: Singaporean cybersecurity firm Group-IB has confirmed that more than 100,000 login credentials for the AI platform ChatGPT have been stolen. According to the firm, the information has been leaked and traded on the dark web.
In a blog post published on June 20, Group-IB revealed that hackers traded more than 101,000 compromised accounts between June 2022 and May 2023. The preliminary investigation found the credentials in the logs of info-stealing malware. The smoking gun in this case was a batch of roughly 27,000 credentials that suddenly appeared for sale on black markets.
The firm also found the highest concentration of stolen credentials in the Asia-Pacific region: nearly 40% of the breached accounts belong to APAC. Within the region, India accounts for the largest share, with close to 12,500 compromised accounts.
An analysis of possible risks
The United States stands in sixth place with 3,000 leaked accounts, while France, in seventh, has the highest number of compromised accounts in Europe. Cointelegraph contacted OpenAI for comment but did not receive a response. The apparent cause of the breach seems to be a login process that is not equipped with any advanced security measures.
Meanwhile, Group-IB said it had noticed a significant rise in the number of professionals using ChatGPT for work and issued a warning about the resulting risks of data theft. It also pointed out that the platform stores users’ chat histories in their accounts, which compounds the exposure.
The firm warned that the information users share about themselves and their work could lure cybercriminals to the platform. It advised users to secure their accounts with two-factor authentication and to change their passwords frequently. Interestingly, Group-IB confirmed that ChatGPT helped it prepare the press release.
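Two-factor authentication of the kind Group-IB recommends is typically implemented with time-based one-time passwords (TOTP, RFC 6238), the scheme behind most authenticator apps. A minimal sketch using only Python’s standard library (the function name and parameters are illustrative, not from any specific product):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, ts=None):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the shared secret as a Base32 string (what the QR code encodes).
    interval:   time step in seconds (30 is the common default).
    digits:     length of the resulting code.
    ts:         Unix timestamp; defaults to the current time.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of whole time steps since the Unix epoch, as a big-endian counter.
    counter = int((time.time() if ts is None else ts) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks a 4-byte window.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (Base32 below),
# timestamp 59 yields the 8-digit code "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, ts=59))  # → 94287082
```

Because the code is derived from a shared secret plus the current time, a stolen password alone is not enough to log in, which is exactly why security firms push 2FA after credential leaks like this one.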
It is hard to fathom that a firm dealing in information never considered this possibility. How the incident will affect ChatGPT’s operations is still unclear. Data theft, meanwhile, has become very common in recent years; even the biggest social media companies have fallen victim to it.
It will be interesting to see what the leading AI chatbot company does after this incident. The problem probably runs deeper than it appears: cybercrime has existed since the emergence of the internet, and although governments and organizations have tried to stop it, it keeps re-emerging in full force. Dark web actors find loopholes in systems and gain access to information one way or another. As technology advances, we will have to see whether AI itself can help prevent such incidents.