OpenClaw’s Wild West Leaves 1.5 Million API Keys Exposed

The viral AI agent framework has become a security researcher's worst nightmare, with critical RCE vulnerabilities and exposed databases.

Alarmed lobster mascot surrounded by exposed databases and security warnings, generated with gemini-3-pro-image

Security researchers are issuing urgent warnings as OpenClaw, the viral AI agent framework that rocketed to 147,000 GitHub stars in mere weeks, has become what one expert calls his “current pick for most likely to result in a Challenger disaster.”

The open-source project—which has cycled through three names in a single week after Anthropic forced a rebrand from Clawdbot to Moltbot, then to OpenClaw—promises to give Claude Opus 4.5 “hands.” What it’s actually delivering is a masterclass in why “vibe coding” and security don’t mix.

The hype aged poorly. Within 24 hours of Moltbook—the “Reddit for AI agents” built atop OpenClaw—going viral, security researcher Jamieson O’Reilly discovered that the entire database was publicly exposed. No authentication required. Read and write access to everything. Including 1.49 million API keys, 35,000 email addresses, and private messages users had naively assumed were secure.

The Breach That Broke Everything

According to cybersecurity firm Wiz, the root cause was almost comically simple: Moltbook was built on Supabase with Row Level Security completely disabled. The fix? Two SQL statements. That’s it.
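
To make the failure mode concrete: with Row Level Security off, the public “anon” key that ships in every Supabase front end grants full read and write access to every table. Here is a minimal sketch using @supabase/supabase-js, where the project URL, key, and agents table are hypothetical stand-ins rather than Moltbook’s actual schema:

    import { createClient } from '@supabase/supabase-js'

    // The public "anon" key is embedded in every Supabase front end, so
    // any visitor can lift it from the page source. With Row Level
    // Security disabled, that key alone grants full read/write access.
    const supabase = createClient(
      'https://example-project.supabase.co', // hypothetical project URL
      'public-anon-key'                      // hypothetical anon key
    )

    // Returns every row -- other users' API keys, private messages, all
    // of it -- because no RLS policy restricts the query.
    supabase
      .from('agents')
      .select('*')
      .then(({ data, error }) => console.log(error ?? data))

    // And the fix really is two SQL statements, run in the database:
    //   ALTER TABLE agents ENABLE ROW LEVEL SECURITY;
    //   CREATE POLICY "owner only" ON agents
    //     FOR ALL USING (auth.uid() = owner_id);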

“The app was completely vibe-coded with zero human touch,” Wiz cofounder Ami Luttwak told reporters. “He didn’t do any security at all in the database; it was completely misconfigured.”

The implications were severe. Anyone could have hijacked any AI agent on the platform—including one registered to Andrej Karpathy, the former OpenAI researcher with 1.9 million X followers. “Imagine fake AI safety hot takes, crypto scam promotions, or inflammatory political statements appearing to come from him,” O’Reilly warned.

A Timeline of Chaos

The sequence of events reads like a security incident report written by a comedian:

  • January 26, 2026 — Anthropic issues trademark notice over “Clawdbot” sounding too similar to “Claude”
  • January 27, 2026 — Creator Peter Steinberger attempts to rename GitHub organization and X handle simultaneously
  • 10 seconds later — Crypto scammers seize abandoned accounts
  • Within hours — Fake $CLAWD token launches on Solana, hits $16 million market cap
  • January 30, 2026 — Moltbook launches; CVE-2026-25253 is quietly patched
  • January 31, 2026 — O’Reilly discovers Moltbook’s entire database is exposed to the public internet
  • February 1, 2026 — Token crashes 90% after Steinberger denounces the scam

Steinberger’s frustration was palpable: “They invade our Discord server, ignore the server rules, spam me on Telegram, and squat my account names. They’re making my online life a living hell.”

The One-Click Kill Chain

Attack chain visualization showing malicious link exploitation, generated with gemini-3-pro-image

But the Moltbook breach was just the tip of the iceberg. The more serious vulnerability—CVE-2026-25253—earned a CVSS score of 8.8 and enabled one-click remote code execution against any OpenClaw user, even those running locally behind firewalls.

The attack chain, documented by security researchers, works like this:

  • Step 1: The victim clicks a malicious link
  • Step 2: OpenClaw blindly accepts a gatewayUrl parameter and connects to the attacker’s server
  • Step 3: The authentication token is transmitted without user confirmation
  • Step 4: The attacker uses the stolen token to disable all safety features
  • Step 5: The attacker executes arbitrary commands on the victim’s machine

The entire kill chain executes in milliseconds. And because OpenClaw’s WebSocket server failed to validate origin headers, attackers could pivot from the malicious webpage directly into the victim’s local instance.
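
Both failures have textbook mitigations: validate the Origin header before accepting a local WebSocket connection, and never dial a caller-supplied gateway URL without checking it. Here is a hedged sketch using Node’s ws package; the port, allowlist values, and gateway hostname are illustrative assumptions, not OpenClaw’s actual code:

    import { WebSocketServer } from 'ws'

    // Hypothetical allowlist; a real deployment would load this from config.
    const ALLOWED_ORIGINS = new Set(['http://localhost:3000'])

    const wss = new WebSocketServer({ port: 8080 })

    wss.on('connection', (socket, request) => {
      // Reject sockets opened by arbitrary webpages in the victim's
      // browser: only allowlisted origins may talk to the local instance.
      const origin = request.headers.origin
      if (!origin || !ALLOWED_ORIGINS.has(origin)) {
        socket.close(1008, 'origin not allowed') // 1008 = policy violation
        return
      }
      // ...handle authenticated traffic here...
    })

    // Likewise, never connect to a caller-supplied gatewayUrl blindly.
    function isTrustedGateway(gatewayUrl: string): boolean {
      const url = new URL(gatewayUrl)
      // Illustrative policy: only the gateway host we configured, over TLS.
      return url.protocol === 'wss:' && url.hostname === 'gateway.example.com'
    }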

21,000 Instances Hanging in the Wind

The exposure extends far beyond Moltbook. According to Palo Alto Networks, researchers found over 21,000 OpenClaw instances exposed on the public internet as of January 31, 2026. At least eight were completely open with zero authentication.

These exposed servers leak everything: .env files containing API keys, credentials.json files, private conversation histories, and configuration files with sensitive data. One researcher burned through $300 in tokens over two days on what they described as “basic tasks”—because the agents were draining API credits with no rate limiting in place.
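
A spend guard does not need to be sophisticated to stop that. A minimal sketch, with purely illustrative budget numbers:

    // Minimal spend guard; the budget figures are illustrative, not taken
    // from any real OpenClaw deployment.
    class SpendGuard {
      private spentUsd = 0

      constructor(private readonly dailyBudgetUsd: number) {}

      // Call before each model request; throws once the budget is gone
      // instead of silently draining the account.
      charge(estimatedUsd: number): void {
        if (this.spentUsd + estimatedUsd > this.dailyBudgetUsd) {
          throw new Error(`daily budget of $${this.dailyBudgetUsd} exhausted`)
        }
        this.spentUsd += estimatedUsd
      }
    }

    const guard = new SpendGuard(5) // cap the agent at $5 per day
    guard.charge(0.02)              // fine; the 251st such call would throw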

Elon Musk’s “singularity” comment about the project drew immediate pushback from researchers who actually understand the technology. “Yes clearly it’s a dumpster fire right now,” Karpathy later clarified. “I also definitely do not recommend that people run this stuff on their computers. It’s way too much of a wild west and you are putting your computer and private data at a high risk.”

The Malware Factory

Digital Trojan horse with malicious packages in corrupted marketplace, generated with gemini-3-pro-image

OpenClaw’s “skills” system—meant to extend agent capabilities—has become a malware distribution channel. According to Security Affairs, over 400 malicious packages were published in less than a week. These fake skills masquerade as legitimate tools—cryptocurrency trackers, Polymarket bots, YouTube utilities—while stealing API keys, crypto wallet keys, SSH credentials, and browser passwords.

Cisco’s analysis of a skill called “What Would Elon Do?” found it executing curl commands that exfiltrated data to external servers while bypassing safety guidelines. The skill appeared legitimate on ClawHub, the official marketplace.
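
Marketplaces can catch some of this before installation with even a crude static scan. The sketch below is deliberately naive; the indicator patterns and the flat file layout are assumptions, and obfuscated payloads will evade regex matching entirely:

    import { readFileSync, readdirSync } from 'node:fs'
    import { join } from 'node:path'

    // Crude indicators of the exfiltration reported in malicious skills:
    // shelling out to curl, or phoning home to unexpected hosts.
    const SUSPICIOUS = [/\bcurl\s+/, /https?:\/\/(?!api\.anthropic\.com)/]

    function scanSkill(dir: string): string[] {
      const findings: string[] = []
      for (const entry of readdirSync(dir, { withFileTypes: true })) {
        if (!entry.isFile()) continue // assume a flat skill folder
        const text = readFileSync(join(dir, entry.name), 'utf8')
        for (const pattern of SUSPICIOUS) {
          if (pattern.test(text)) findings.push(`${entry.name}: ${pattern}`)
        }
      }
      return findings
    }

    // A clean report is necessary but nowhere near sufficient.
    console.log(scanSkill('./downloaded-skill'))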

The Lethal Trifecta

Simon Willison, the security researcher who coined the “Challenger disaster” comparison, identifies what he calls the “lethal trifecta” making OpenClaw uniquely dangerous:

  • Access to private data — Users grant agents access to emails, files, and credentials
  • Exposure to untrusted content — Agents browse the web and process arbitrary inputs
  • Ability to communicate externally — Agents can send emails, post messages, and make API calls

Add persistent memory—OpenClaw’s defining feature—and you get delayed-execution attacks that can lie dormant until conditions are optimal.

Gary Marcus, the AI researcher known for his skepticism of AGI hype, was characteristically blunt: “If you care about the security of your device or the privacy of your data, don’t use OpenClaw. Period.”

The Vibe Coding Problem

The fundamental issue isn’t OpenClaw specifically—it’s the “vibe coding” culture that created it. Moltbook creator Schlicht proudly declaring he “didn’t write one line of code” for the platform isn’t a flex; it’s an admission that no human with security expertise ever reviewed the codebase.

This pattern repeats across the ecosystem. Tutorials on YouTube show users debugging their OpenClaw installations using… Claude. They’re using AI to fix AI, with neither possessing the security knowledge to identify fundamental architectural flaws.

Even Steinberger acknowledges the risks. The official OpenClaw FAQ states bluntly: “There is no perfectly secure setup.” Google’s VP of Security Engineering, Heather Adkins, went further: “Don’t run Clawdbot… it is an infostealer malware disguised as an AI personal assistant.”

Building AI Integration Safely

The appeal of OpenClaw is obvious: autonomous AI agents that can execute tasks, manage workflows, and operate across messaging platforms without constant human oversight. The concept isn’t the problem—the implementation is.

Secure AI integration requires:

  • Proper sandboxing — Agents should run in isolated containers with minimal permissions
  • Credential rotation — storing API keys in plaintext Markdown files is not acceptable
  • Origin validation — WebSocket servers must validate connection origins
  • Rate limiting — Both API calls and agent registration need throttling
  • Human-in-the-loop — Destructive actions require explicit confirmation

These aren’t novel concepts. They’re security fundamentals that experienced developers implement by default. The difference between a viral GitHub project and a production-ready system is whether anyone with security expertise reviewed the architecture before deployment.
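
To pick one item from that list: human-in-the-loop can be as small as a confirmation prompt in front of anything not explicitly allowlisted. A minimal sketch, with an illustrative allowlist:

    import { createInterface } from 'node:readline/promises'

    // Illustrative allowlist; anything outside it requires a human "yes".
    const SAFE_COMMANDS = new Set(['ls', 'cat', 'git status'])

    async function confirmIfDestructive(command: string): Promise<boolean> {
      if (SAFE_COMMANDS.has(command)) return true
      const rl = createInterface({ input: process.stdin, output: process.stdout })
      const answer = await rl.question(`Agent wants to run "${command}". Allow? (y/N) `)
      rl.close()
      return answer.trim().toLowerCase() === 'y'
    }

    // Gate every shell command the agent proposes before executing it.
    async function main() {
      if (await confirmIfDestructive('rm -rf ./build')) {
        // ...execute in a sandboxed child process...
      }
    }
    main()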

The Bottom Line

OpenClaw represents a genuine milestone in AI agent development. The vision of autonomous assistants that can browse, communicate, and execute tasks across platforms is compelling. The 147,000 developers who starred the repo aren’t wrong to be excited about the possibilities.

But the current implementation is a security disaster. 1.5 million exposed API keys. One-click RCE vulnerabilities. Malware masquerading as official skills. Databases left open because nobody thought to enable basic access controls.

If you’re experimenting with AI agents, proceed with extreme caution. Use dedicated machines or VMs. Rotate any credentials that touch the system. And maybe wait until the project survives more than 72 hours without a critical security advisory before trusting it with anything important.

The golden age of AI agents is coming. It just isn’t here yet.
