OpenAI, in collaboration with its major investor Microsoft, disrupted five state-affiliated threat actors that were attempting to exploit the capabilities of large language models (LLMs) such as ChatGPT for malicious purposes.
Microsoft identified hacking groups affiliated with Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments that were exploring the use of LLMs in their cyber operations.
Disruption of threat actors
Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated.
We disrupted five state-affiliated malicious cyber actors’ use of our platform. Work done in collaboration with Microsoft Threat Intelligence Center. https://t.co/xpEeQDYjrQ
— OpenAI (@OpenAI) February 14, 2024
These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks. Specifically:
- Charcoal Typhoon used our services to research various companies and cybersecurity tools, debug code and generate scripts, and create content likely for use in phishing campaigns.
- Salmon Typhoon used our services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.
- Crimson Sandstorm used our services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.
- Emerald Sleet used our services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
- Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.
OpenAI’s disclosure coincided with its implementation of a blanket ban on state-backed hacking groups using its AI products. While the company disrupted these particular operations, it acknowledged the ongoing challenge of preventing all malicious use of its AI programs.
Although OpenAI has implemented safeguards to prevent ChatGPT from producing harmful responses, hackers have managed to bypass them, underscoring the evolving nature of the threat landscape. In response to heightened scrutiny and concerns following the launch of ChatGPT, OpenAI announced a $1 million cybersecurity grant program in June 2023, aimed at advancing AI-driven cybersecurity technologies.
The surge in AI-generated deepfakes and scams prompted policymakers to intensify their scrutiny of generative AI developers. Alongside companies such as Microsoft, Anthropic, and Google, OpenAI joined the United States AI Safety Institute Consortium (AISIC).
Formed in response to President Joe Biden’s October 2023 executive order on AI safety, the consortium focuses on promoting the safe development of AI, combating AI-generated deepfakes, and addressing cybersecurity challenges.