The most downloaded skill on ClawHub—the package registry for AI agent capabilities—was malware.

Not suspicious. Not "potentially unwanted." A full-on infostealer designed to raid your browser sessions, saved credentials, API keys, SSH keys, and cloud tokens.

1Password's security team discovered it. They weren't looking for it. They were just browsing ClawHub and noticed something off about a "Twitter" skill that had racked up the most downloads on the platform.

How It Worked

The attack was elegant in its simplicity:

1. The skill looked normal—description, intended use, the kind of thing you'd install without thinking twice.

2. The first instruction was to install a "required dependency" called "openclaw-core."

3. That dependency link led to a staging page that got the AI agent (or you) to run a command.

4. The command decoded an obfuscated payload and executed it.

5. The payload downloaded a second-stage script, which downloaded and ran a binary—after removing macOS's quarantine attributes to bypass Gatekeeper.

VirusTotal confirmed it: macOS infostealer malware.

And it wasn't just one skill. Subsequent reporting found 341 OpenClaw skills were distributing macOS malware via the same ClickFix-style attack pattern.

Why This Matters for Founders

If you're experimenting with AI agents—OpenClaw, Claude Code, Codex, or anything similar—you need to understand why this attack surface exists.

Skills are just markdown files. In the AI agent ecosystem, a "skill" is essentially a page of instructions telling an agent how to do something. That markdown can include links, commands, and tool call recipes.

Markdown isn't "content" in an agent ecosystem. Markdown is an installer.

MCP doesn't save you. Some people assume the Model Context Protocol layer makes this safer because tools can be exposed through structured interfaces with user consent. But skills don't need to use MCP at all. They can include terminal commands directly, bundle scripts alongside the markdown, or simply social-engineer you into copy-pasting something dangerous.

"Top downloaded" is a trust signal. Attackers know this. They game download counts the same way they game app store reviews. Popularity isn't validation.

Agents normalize risky behavior. Even if an agent can't run shell commands directly, it can confidently summarize a malicious prerequisite as "the standard install step." It can encourage you to paste a one-liner. It can reduce your hesitation just enough.

Here's where it gets interesting for founders: who's liable when an AI agent installs malware?

If your employee installs a compromised package from npm, that's a security incident. If your AI agent installs a compromised skill from ClawHub because someone told it to "set up Twitter integration," what is that?

The questions haven't been litigated yet, but they will be:

Duty of care. Did you have reasonable security practices for AI tool usage? "We let the agent install whatever it needed" isn't going to age well.

Third-party due diligence. If a skill registry has no verification, no code signing, no security review—and you treated it like a trusted source—that's on you.

Incident response. If your AI agent exfiltrated credentials because a skill told it to, when did the breach start? When the skill was installed? When the agent followed the instructions? When the credentials were used?

Employee device policies. 1Password's first recommendation: "Do not run this on a company device. Full stop." If your engineers are vibe-coding on machines with production credentials, you have a problem that predates AI.

What You Should Do

Isolate AI experimentation. Don't run AI agents with broad system access on machines that hold corporate credentials, production tokens, or client data.

Audit your agents. What skills are installed? What permissions do they have? What memory files exist? If you don't know, find out.

Treat skill registries like you treat npm. Would you install the most popular package from a new registry with no security review? Then don't do it with AI skills.

Document your AI tool policies. When the breach happens, you want to be able to show you had reasonable practices in place. "We were just experimenting" isn't a policy.

The Takeaway

We spent years learning that package managers and open-source registries can become supply chain attack vectors. AI skill registries are the next chapter—except the "package" is documentation, and the attack vector is an AI that's trained to be helpful.

The attackers have figured this out faster than the defenders.

The quiet part: Your AI agent is exactly as trustworthy as the least trustworthy skill you've given it access to. And right now, nobody's checking.