OpenAI terminates accounts of confirmed state-affiliated bad actors

The terminations are part of what's being called an "early detection effort."
By Chase DiBenedetto
The OpenAI and Microsoft logos reflecting each other on a dark screen.
Credit: Jonathan Raa/NurPhoto via Getty Images

OpenAI has confirmed that state-affiliated bad actors are using the company's tech for malicious purposes, a validation of what many have feared since the company's rise to prominence in the generative AI race.

The discovery comes as part of a collaboration with Microsoft Threat Intelligence, a community of thousands of security experts, researchers, and threat hunters who analyze and detect cyber threats.

Using the network's intelligence gathering, OpenAI discovered at least five confirmed state-affiliated actors that were using OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks, the company explained. The actors included two China-affiliated actors known as Charcoal Typhoon and Salmon Typhoon; an Iran-affiliated actor known as Crimson Sandstorm; a North Korea-affiliated actor known as Emerald Sleet; and a Russia-affiliated actor known as Forest Blizzard.

The accounts reportedly relied on OpenAI's services to bolster potential cyberattacks, but Microsoft did not detect any significant attacks employing the LLMs it monitors most closely.

"These include reconnaissance, such as learning about potential victims’ industries, locations, and relationships; help with coding, including improving things like software scripts and malware development; and assistance with learning and using native languages," Microsoft explained. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships."


Microsoft distinguished this announcement as an early-detection effort, intended to expose "early-stage, incremental moves that we observe well-known threat actors attempting."

The collaboration aligns with recent moves from the White House to require safety testing and government supervision for AI systems that could impact national and economic security, public health, and general safety. "While attackers will remain interested in AI and probe technologies’ current capabilities and security controls, it’s important to keep these risks in context," Microsoft wrote. "As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts."

While OpenAI admits that its current models are limited in their ability to detect cyberattacks, the company committed to future security investments, including:

  • Investments in technology and teams, including its Intelligence and Investigations and Safety, Security, and Integrity teams, to detect threats.

  • Collaborations with industry partners and other stakeholders to exchange information about malicious uses.

  • Continued public reporting of security threats and solutions.

"Although we work to minimize potential misuse by such actors, we will not be able to stop every instance," OpenAI wrote. "But by continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else."

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.
