
The Same AI Tools Your Developers Love Are Being Used to Hack Them

Author: Maya Chen
Date: May 1, 2026

Key Takeaways

  • "Vibe Coding" Malware Is Now a Thing
  • Meanwhile, the Framework Your AI Team Trusts Was Compromised
  • The Visibility Problem Nobody Wants to Talk About
  • The Detection Gap Is Getting Harder to Close
  • What to Actually Do About This


There's a particular irony in this week's security news that deserves some attention. On April 22nd, researchers revealed that a North Korean state-sponsored hacker group had stolen $12 million from crypto developers in three months — using ChatGPT and Cursor to write their malware. Then, on April 30th, PyTorch Lightning, one of the most popular AI/ML frameworks on the planet, was found to have been compromised in a supply chain attack designed to steal credentials.

Same week. Two different threat vectors. One common thread: AI tools are no longer just your productivity layer. They're the attack surface.

---

"Vibe Coding" Malware Is Now a Thing

Let's start with HexagonalRodent — the unfortunately named North Korean hacker group profiled by Expel in late April. The researchers found something that would have sounded like satire two years ago: hackers using AI tools to "vibe code" their malware.

For context: vibe coding is the practice of prompting an AI to write functional software without really understanding what you're building. It's the thing that's made non-programmers suddenly feel like developers. Turns out it's also making non-programmers feel like hackers.

According to the research, HexagonalRodent used ChatGPT and Cursor to write credential-stealing malware, then used separate AI design tools to spin up convincing fake company websites. They'd approach crypto developers with fabricated job offers, lure them into downloading a "coding assignment" that was actually malware, and drain their wallets. The haul: $12 million across roughly 2,000 compromised machines.

The group wasn't sophisticated. That's the point.

Marcus Hutchins — the security researcher who famously killed WannaCry — put it plainly: "These operators don't have the skills to write code. They don't have the skills to set up infrastructure. AI is actually enabling them to do things that they otherwise just would not be able to do."

The hackers even left their ChatGPT prompt history exposed on unsecured infrastructure, which is how researchers confirmed exactly which AI tools they'd used. The prompts were apparently full of spelling mistakes and grammatical errors — the humans behind this campaign genuinely couldn't write working code without AI assistance. And yet they pulled off a $12 million theft.

---

Meanwhile, the Framework Your AI Team Trusts Was Compromised

If the North Korea story is about AI being used as an offensive weapon, the PyTorch Lightning story is something subtler and arguably more alarming for enterprise security teams.

On April 30th, multiple security firms — Aikido Security, OX Security, Socket, and StepSecurity — simultaneously flagged that versions 2.6.2 and 2.6.3 of the `lightning` Python package had been compromised. PyTorch Lightning is widely used in enterprise AI and ML development. It's the kind of package that shows up in data science environments, model training pipelines, and developer workstations across banks, healthcare companies, and tech firms.

The compromised versions contained credential-stealing malware. The attack is linked to a broader supply chain campaign called Mini Shai-Hulud that has previously hit SAP-related npm packages.

Both versions have since been pulled, but here's the question every security team should be sitting with right now: how many of your developers installed those versions in the two-day window between publication and detection? And more importantly — do you even know which Python packages your AI/ML teams are running?
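
If you want a quick first answer for a single environment, a minimal check along these lines can be run on developer machines or in CI. The affected version numbers come from the advisories above; everything else here is an illustrative sketch, not a substitute for proper software composition analysis.

```python
# Minimal sketch: flag the compromised lightning releases in this environment.
# The affected versions (2.6.2 and 2.6.3) are taken from the public reports;
# run this inside each virtualenv or container image you want to audit.
from importlib.metadata import PackageNotFoundError, version

COMPROMISED = {"2.6.2", "2.6.3"}

try:
    installed = version("lightning")
except PackageNotFoundError:
    print("lightning is not installed in this environment")
else:
    if installed in COMPROMISED:
        print(f"WARNING: lightning {installed} is a known-compromised release")
    else:
        print(f"lightning {installed} is not one of the flagged versions")
```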

---

The Visibility Problem Nobody Wants to Talk About

These two incidents, taken together, expose a gap that's been widening for the past 18 months and that most enterprise security teams still haven't addressed properly.

Your developers are using AI tools. A lot of them. Cursor, ChatGPT, GitHub Copilot, Claude, Codeium — and the list keeps growing. Your data science teams are running PyTorch, Lightning, Hugging Face libraries, and whatever the newest research-hot framework happens to be this quarter. This is mostly good. AI tools are making your teams more productive.

But your security team probably has almost no visibility into this activity. They don't know which tools employees have downloaded, which accounts they're using, what data they're passing into those tools, or whether the tools themselves have been tampered with.

This is the same shadow AI problem that enterprise security leaders have been warned about in the context of ChatGPT and Copilot leaking sensitive data. But it's evolving. The threat isn't just "employees sending confidential documents to an AI chatbot." It's:

  • Compromised AI packages stealing credentials from developer machines
  • AI-generated phishing sites that are convincingly professional because they were built with real AI design tools
  • Malware that's polished enough to evade detection because it was iteratively debugged by a language model
  • Fake job offers targeting your developers on LinkedIn, crafted with AI to be personalized and plausible

---

The Detection Gap Is Getting Harder to Close

What makes the HexagonalRodent campaign particularly concerning is the vector. They didn't blast out phishing emails. They targeted individual developers — specifically people working on small crypto launches and Web3 projects — with personalized job offers. The fake companies had full websites, professional copy, and job listings that looked real.

Personalizing a phishing campaign at scale used to require either a lot of human labor or settling for obvious mass-blast emails. Now you can use AI to generate personalized lures efficiently, then use another AI tool to build the infrastructure that makes the lure credible.

For enterprise security teams, this means detection strategies that rely on "this doesn't look legit" heuristics are increasingly inadequate. The attacks look legitimate. Because they were built with the same tools your employees use to do legitimate work.

---

What to Actually Do About This

A few practical things worth thinking through:

Know what AI tools are in your environment. This sounds basic, but most enterprises genuinely don't have a complete picture. Shadow AI — the AI tools employees are using without IT or security knowledge — is still a significant blind spot. If you're only governing the sanctioned tools (Copilot, approved ChatGPT enterprise licenses), you're missing a lot.

Treat AI/ML packages like any other supply chain risk. The PyTorch Lightning compromise was detected quickly because multiple security vendors were watching PyPI. Your internal processes probably aren't watching PyPI that closely. Software composition analysis (SCA) tools that cover Python packages, and policies around package version pinning in AI/ML projects, matter more now.
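
As a rough illustration of what pin enforcement can look like, the sketch below compares what's actually installed against the exact `==` pins in a requirements file. The file name, the simple pin format, and the choice to fail on any drift are assumptions for the example; a real setup would lean on lockfiles and SCA tooling rather than a hand-rolled script.

```python
# Sketch: fail a CI step if installed packages drift from their "name==version"
# pins in requirements.txt. Extras and environment markers are stripped, and
# anything that isn't an exact pin is skipped.
from importlib.metadata import PackageNotFoundError, version


def check_pins(requirements_path: str = "requirements.txt") -> list[str]:
    problems = []
    with open(requirements_path) as fh:
        for raw in fh:
            line = raw.split("#", 1)[0].split(";", 1)[0].strip()  # drop comments/markers
            if "==" not in line:
                continue  # only enforce exact pins in this sketch
            name, _, pinned = line.partition("==")
            name = name.split("[", 1)[0].strip()  # ignore extras like pkg[extra]
            pinned = pinned.strip()
            try:
                installed = version(name)
            except PackageNotFoundError:
                problems.append(f"{name}: pinned {pinned} but not installed")
                continue
            if installed != pinned:
                problems.append(f"{name}: pinned {pinned}, installed {installed}")
    return problems


if __name__ == "__main__":
    issues = check_pins()
    for issue in issues:
        print(issue)
    raise SystemExit(1 if issues else 0)
```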

Educate developers specifically, not just "all employees." The HexagonalRodent campaign targeted developers. Not because they're less security-aware, but because they're high-value targets who regularly download and run code from strangers on the internet. That's normal developer behavior — open source, package managers, coding challenges. Security training that doesn't account for developer-specific threat vectors isn't going to land.

Watch for AI-generated phishing quality, not just AI-generated volume. Spam filters and phishing detection tools were built for an era when volume and obvious patterns were reliable signals. An AI-crafted spear-phishing email to a specific developer about a specific role at a specific fake company with a working website is a different threat class.

---

The Bigger Picture

The part of the HexagonalRodent story that stuck with me isn't the $12 million — it's the exposed prompts. These hackers left their ChatGPT conversation history sitting on unsecured infrastructure, full of typos and iterative debugging requests. They weren't hackers who learned to use AI. They were people who, by all indications, couldn't do this work without AI.

That's not reassuring. That's the new baseline. AI has lowered the floor on what's required to execute a professional-grade cyberattack. The people targeting your developers and your AI/ML infrastructure don't need to be competent — they just need access to the same tools your employees are already using.

Your security posture needs to account for that.

---

Aona helps enterprises discover, govern, and monitor AI tool usage across their workforce — including the shadow AI activity that security teams can't see. [Book a demo](/book-demo) to see what's running in your environment.


About the Author


Maya Chen

Growth & Marketing Lead

Maya leads growth and marketing at Aona AI, driving SEO strategy, content creation, and demand generation. With a sharp focus on AI governance topics, she helps enterprises understand the evolving landscape of Shadow AI, AI security, and responsible AI adoption.

