
OpenAI Bans Users Exploiting ChatGPT for Surveillance and Hacking

From spying on minorities to refining hacking techniques, foreign users are exploiting ChatGPT. OpenAI fights back with bans, but more action is needed.

[Image: Three people standing at a desk with two computers, one speaking into a microphone and wearing ID badges with red tags, in front of a wall banner reading "Russia imagine 2013".]


OpenAI recently revealed that users likely linked to foreign governments have been exploiting ChatGPT for potentially malicious purposes. The AI assistant has been used to aid surveillance, promote divisive tools, and even refine hacking techniques. This comes amid a global race for AI dominance, with the US and China leading the charge.

OpenAI reported that a user, seemingly connected to a Chinese government entity, asked ChatGPT to analyze travel movements and police records of members of the Uyghur minority and other 'high-risk' individuals. Another Chinese-speaking user sought help designing promotional materials for a tool that scans social media for political and religious content. Both users were subsequently banned by OpenAI.

Suspected hackers from Russia, North Korea, and China have been using ChatGPT to enhance their influence operations. They have used the AI to refine code, craft more convincing phishing links, and polish the language of their communications. Meanwhile, scammers based in Myanmar have been employing OpenAI's models for routine business tasks, even as ordinary users increasingly turn to ChatGPT to help identify scams.

The AI race between the US and China is heating up, with each country investing billions in new capabilities. This competition was highlighted when Chinese firm DeepSeek unveiled R1, a ChatGPT-like AI model built at a fraction of the cost, alarming US officials and investors.

The misuse of ChatGPT by foreign entities underscores the need for robust AI governance and security measures. As the AI race between the US and China intensifies, it's crucial for both countries to ensure that their technologies are used responsibly and do not fall into the wrong hands. OpenAI's ban on the offending users is a step in the right direction, but more needs to be done to protect users and safeguard AI's potential benefits.
