
Disrupting Malicious AI: Five Key Takeaways from OpenAI’s June 2025 Report

AI is undeniably reshaping industries, fuelling growth and innovation. But with great power comes significant responsibility and increasingly sophisticated cyber threats. OpenAI's recent report on its internal operations to police misuse of its platform, Disrupting Malicious Uses of AI (June 2025), provides essential insights that every security-conscious organisation should understand about how GPTs are being used by threat actors. It has long been speculated how adversaries are using these tools; this report gives direct confirmation of prolific adversarial usage, beyond the marketing hype.

1. AI-Powered Job Scams Linked to North Korean Cyber Operations

OpenAI identified an alarming increase in AI-driven employment scams specifically linked to North Korean cyber operations. These attackers use AI to create convincing fake CVs, job postings, and elaborate recruitment schemes, turning ordinary job seekers into unwitting participants in cyberattacks as “laptop mules.”

What this means for you: It's critical to thoroughly vet hiring processes, meet new employees in person, and continually educate staff about the latest scam techniques.

2. Advanced Malware Campaigns by State-Aligned APTs

OpenAI’s report confirms how state-linked advanced persistent threat (APT) groups utilise AI to streamline and scale up malware development. These campaigns often involve well-established actors using AI tools to make malicious code more evasive, adaptive, and efficient.

One notable example is Emerald Sleet (a North Korea-linked group also tracked as Kimsuky), which used ChatGPT to support spear-phishing and reconnaissance activity that laid the groundwork for its campaigns.

Keyhole Panda (APT5) and Vixen Panda (APT15), both China-linked groups, employed AI for tasks like automating open-source intelligence gathering, debugging code, generating scripts (e.g., for port scanning or brute-force attempts), and building infrastructure used for malware delivery and social media manipulation.

Additionally, a Russian-speaking campaign labelled ScopeCreep used AI iteratively to develop a modular Windows malware toolkit. The malware included components for privilege escalation, credential harvesting, obfuscation, and communication via Telegram-based command-and-control infrastructure. AI's role in this case wasn't to create novel threats but to accelerate existing development work, closely mirroring how legitimate developers use the same tools.

While many of the underlying capabilities are achievable with traditional methods, AI made these operations more scalable and harder to detect. OpenAI's intervention through platform enforcement and account takedowns likely helped disrupt these evolving threats on ChatGPT, but the actors most likely moved on to less monitored AI tooling.

AI vendors: Monitor your tooling for abuse; maintain a robust detection set with internal, AI-driven controls on user queries; collate and track adversarial accounts; and collaborate to share cyber intelligence.
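To make the vendor-side recommendation concrete, here is a minimal sketch of how query screening and adversarial-account tracking could work at a conceptual level. This is not OpenAI's detection pipeline; the indicator list, weights, thresholds, and function names are all hypothetical, and a production system would rely on trained classifiers and analyst review rather than keyword matching.

```python
# Minimal, illustrative sketch of vendor-side query screening.
# All indicators, weights, and names below are hypothetical.
from collections import defaultdict
from dataclasses import dataclass, field

# Hypothetical abuse indicators; a real system would use trained classifiers.
ABUSE_INDICATORS = {
    "bypass antivirus": 0.8,
    "credential harvesting": 0.7,
    "keylogger": 0.6,
    "spear-phishing template": 0.6,
    "obfuscate this script": 0.5,
}

@dataclass
class AccountRecord:
    flagged_prompts: int = 0
    total_score: float = 0.0
    samples: list = field(default_factory=list)

accounts: dict[str, AccountRecord] = defaultdict(AccountRecord)

def score_prompt(prompt: str) -> float:
    """Sum the weights of any abuse indicators present in the prompt."""
    text = prompt.lower()
    return sum(w for phrase, w in ABUSE_INDICATORS.items() if phrase in text)

def screen(account_id: str, prompt: str, flag_threshold: float = 0.5) -> bool:
    """Record the prompt against the account; return True if it should be
    flagged for analyst review (not automatically blocked)."""
    score = score_prompt(prompt)
    rec = accounts[account_id]
    rec.total_score += score
    if score >= flag_threshold:
        rec.flagged_prompts += 1
        rec.samples.append(prompt[:200])
        return True
    return False

if __name__ == "__main__":
    screen("acct-123", "Write a PowerShell script to bypass antivirus checks")
    print(accounts["acct-123"])
```

The value here is less in any single flagged prompt than in the account-level aggregation, which is what makes the "collate and track adversarial accounts" step, and the subsequent intelligence sharing, possible.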

3. The Emergence of “Vibe Hacking”

The accessibility of AI tools has created a troubling new phenomenon, "vibe hacking", which enables even inexperienced attackers to automate phishing schemes and malware distribution effectively. Tools like WormGPT and FraudGPT have dramatically lowered the barrier to entry for cybercriminals. Rather than generating new malware from scratch, these actors use AI to produce code snippets they can pull together into a complete project, whether that's malware or a PowerShell script.

In one example, an actor used the LLM to generate PowerShell scripts that appeared routine but contained subtly malicious functionality, hiding their true intent from OpenAI's detection systems. The approach is not dissimilar to traditional attacks where code is obfuscated or split into pieces to avoid detection.
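For defenders, this is a useful reminder of why simple static indicator matching struggles against AI-assisted code: each generated script can look routine on its own. As a purely illustrative sketch (the patterns and logic below are hypothetical and non-exhaustive, not a recommended detection rule), a naive triage pass might look like the following, and the actors described above are exactly the ones who can cheaply produce scripts that score clean against it.

```python
import re

# Hypothetical, non-exhaustive indicators often associated with suspicious
# PowerShell; an AI-assisted actor can trivially avoid or rephrase these,
# which is precisely the evasion described above.
SUSPICIOUS_PATTERNS = [
    r"-EncodedCommand",           # base64-encoded command lines
    r"DownloadString\(",          # classic download cradle
    r"Invoke-Expression|IEX\b",   # dynamic execution of strings
    r"-WindowStyle\s+Hidden",     # hiding the console window
    r"FromBase64String\(",        # decoding embedded payloads
]

def triage_script(script_text: str) -> list[str]:
    """Return the suspicious patterns found in a PowerShell script.
    An empty list does NOT mean the script is safe."""
    return [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, script_text, flags=re.IGNORECASE)]

if __name__ == "__main__":
    benign_looking = "Get-ChildItem C:\\Logs | Where-Object Length -gt 1MB"
    print(triage_script(benign_looking))  # [] -- looks routine, proves nothing
```

The takeaway is that behavioural and contextual detection, not signature matching alone, is what closes the gap these actors are exploiting.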

Not a vibe: Even unskilled developers can now generate working code. This opens pathways for malware affiliates to increase potency and rapidly upskills traditional 'script kiddies'.


4. The Need for Stronger Collaboration

A united front in cybersecurity has never been more vital. OpenAI emphasises partnerships among businesses, governments, and cybersecurity communities, including key industry players like Anthropic, Google, OWASP, and NIST, as essential to creating resilient defences.

Industry collaboration: Foster strong industry partnerships and collaborative networks to rapidly adapt to and address emerging cyber threats.

5. The Role of Regulation

AI development is moving fast, way faster than our ability to govern it effectively. OpenAI calls for sharper, clearer regulation that balances innovation with responsibility. That means greater transparency in how models are trained and used, stronger enforcement of usage policies, and regular red teaming to identify vulnerabilities before they're exploited. They also highlight the need for improved coordination between developers, regulators, and security professionals, especially through responsible disclosure and active information sharing.

OpenAI outlines several key areas where regulators and developers should direct their attention. These include making the development and deployment of AI systems more transparent and ensuring clear accountability for how these technologies are used. They recommend enforcing robust usage policies with consistent consequences for misuse, monitoring prompts for abuse, and routinely subjecting systems to red-teaming exercises to uncover and address potential vulnerabilities. Crucially, OpenAI also advocates more effective information sharing across the ecosystem, between vendors, governments, and the security community, to foster a coordinated and informed response to emerging threats.

Stay ahead of the curve: The industry needs to engage policymakers, contribute to the discussions shaping AI regulatory frameworks, and integrate proactive compliance practices aligned with evolving global cybersecurity standards.

As a security leader watching AI capabilities evolve at a frankly dizzying pace, this report resonates deeply. We’re now facing adversaries who automate deception, code, and influence at scale. What’s new isn’t just the tools – it’s the speed, scale, and precision with which threats can now emerge.

OpenAI's findings confirm what many in the cybersecurity industry already feel: the threat model has shifted to a new paradigm. Having led the response to the Sophos Pacific Rim attacks, I understand firsthand the complexity of policing platform abuse at scale. As vendors, we have to walk a tightrope, balancing accessibility and innovation with the need to pre-empt misuse and enforce policies swiftly and fairly. We're not just defending networks; we're now actively defending trust.

AI makes it easier to replicate identities, mimic brands, and weaponise communication.

When a fake job offer can trigger a multi-stage intrusion orchestrated from halfway across the globe, our playbook must evolve; the cadence of attack is amplifying. This will be further exacerbated by the introduction of Agent2Agent and MCP protocols, where AI systems can communicate directly with minimal human interaction. Realistically, that will give these systems the ability to orchestrate attacks end to end, soup to nuts, relentlessly.

My emphasis on regulation here isn't about bureaucracy; it's the groundwork for trust and accountability. Transparency, shared detection, consistent enforcement, and red teaming aren't theoretical niceties; they're table stakes for safe deployment. To their credit, OpenAI is leading both developers and defenders towards a future where innovation doesn't outpace ethics.

In short, we're entering an era where cybersecurity isn't just about preventing breaches. It's about building processes, systems, and cultures that anticipate abuse, design for misuse, and can adapt in real time. For teams like ours at Ontinue, that means staying ahead in how we utilise AI and employing AI-first principles, allowing us to match the pace of our adversaries.

Let’s stay sharp. Let’s stay human. Let’s stay ahead of it.

Article By

Craig Jones
Vice President, Security Operations

Craig Jones oversees Ontinue’s global network of Security Operations Centers (SOCs) as Vice President of Security Operations. His role includes managing and optimizing the teams responsible for security monitoring, incident response, and threat detection across the company’s four SOCs. Before joining Ontinue, Craig spent eight years at Sophos, where he rose to Senior Director of Global Security Operations. At Sophos, Craig was responsible for the operational aspects of the company’s worldwide security program, ensuring that the organization’s global security infrastructure was robust and scalable.

Craig is a well-regarded expert in the field of cybersecurity, holding certifications such as GCIH and CISSP. He is actively involved in the cybersecurity community, volunteering as director of BSides Cymru/Wales since 2019 and frequently speaking at industry events. His thought leadership covers topics like incident response, SOC automation, threat intelligence, and SIEM. Craig earned a bachelor’s degree in Information Technology from the University of South Wales.