Moltbook is the agent security wake-up call for engineering leaders

Agent security is identity security
March 04, 2026



Key takeaways:

  • The Moltbook data incident wasn’t “emergent AI” – it was exposed APIs at machine speed.
  • Agent security is identity security: AI agents operate with real credentials across real systems.
  • Automation shrinks the gap between mistake and breach.

When it first hit headlines earlier this year, Moltbook sounded like a sci-fi novelty: a social network for AI agents where bots could interact, share, and “learn” from each other.

It was easy to treat it as a quirky glimpse of where things might be heading, but that changed quickly once it emerged that data belonging to real humans had been exposed.

Suddenly, it wasn’t a futuristic experiment – it was a security story we’ve all seen before.

It all came down to the usual setup: Application Programming Interfaces (APIs) stitched together over time, service accounts with broad access that no one had reviewed in a while, and trust relationships that seemed solid until they were put under real strain.

Distributed systems have a habit of exposing their weak spots when pushed. Here they were being exercised by software built to move quickly and make decisions without waiting for a human to stop and double-check the path.

Autonomy, minus the mysticism

There’s a tendency to describe agent security incidents as “emergent behavior,” as though something unpredictable has slipped the leash. Eric Schwake, director of cybersecurity strategy at Salt Security, sees it differently.

“What people interpreted as ‘emergent AI behaviour’ was really just API-driven automation operating at scale,” he says. “From a security perspective, autonomy isn’t intelligence; it’s more about speed. Speed amplifies risk when the underlying APIs aren’t visible or governed properly.”

Agents don’t act through magic. Every action ultimately resolves to an API call. If that layer is loosely governed or over-permissioned, agents will move straight through those gaps at machine speed.
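That translation from "agent decision" to API call can be made concrete. The sketch below is purely illustrative (the action names, `SCOPE_MAP`, and `AgentIdentity` are invented for this example, not any real framework): each high-level action resolves to a method and endpoint, and an explicit allow-list decides whether the agent's identity can make that call at all.

```python
# Hypothetical sketch: every agent "action" ultimately resolves to an API call.
# All names here (SCOPE_MAP, AgentIdentity, the endpoints) are illustrative.

from dataclasses import dataclass

# What each high-level agent action actually becomes on the wire.
SCOPE_MAP = {
    "summarize_ticket": ("GET", "/api/tickets/{id}"),
    "close_ticket": ("POST", "/api/tickets/{id}/close"),
    "purge_user_data": ("DELETE", "/api/users/{id}"),
}

@dataclass
class AgentIdentity:
    name: str
    allowed_scopes: set  # an explicit allow-list, not "whatever the token can do"

def resolve(action: str, identity: AgentIdentity):
    """Translate an agent decision into an API call, gated by scope."""
    method, path = SCOPE_MAP[action]
    if action not in identity.allowed_scopes:
        raise PermissionError(f"{identity.name} lacks scope '{action}'")
    return method, path

triage_bot = AgentIdentity("triage-bot", {"summarize_ticket", "close_ticket"})
print(resolve("summarize_ticket", triage_bot))  # ('GET', '/api/tickets/{id}')
```

The point of the gate is the over-permissioning problem the article describes: without the allow-list check, `triage_bot` could reach `DELETE /api/users/{id}` simply because the underlying credential allowed it.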

“When you remove the human from the loop, you remove the manual gatekeeper,” Schwake says. “If the APIs an agent relies on aren’t secured, that ‘autonomous’ system simply becomes a force multiplier for attackers.”

The control gap in autonomous systems

Automation has always been a double-edged sword. It makes the good things happen faster, and the bad things too. With agents in the mix, the gap between a small mistake and a wider incident shrinks because the software doesn’t pause, second-guess itself, or log off for the day.

Schwake highlights three recurring agent security trouble spots. The first is visibility. Agents communicate almost entirely through machine-to-machine API calls. Many teams don’t have a complete inventory of which APIs exist, much less which ones agents can access.

The second is authenticated abuse. Agents operate with legitimate credentials, which makes them attractive targets. If those credentials are compromised, the resulting activity can look routine in logs because it originates from a trusted service identity.

A third concern is accountability. When agents are given room to act on their own, figuring out exactly what happened can get murky. Their activity folds into the background noise of existing automation, the credentials look legitimate, and separating routine behavior from something problematic isn’t always obvious. This means incident response can turn into a slow reconstruction job rather than a clear, traceable sequence of events.
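One way to keep incident response from becoming a reconstruction job is to make every agent action attributable at the moment it happens. This is a minimal sketch, with invented agent IDs and field names, of structured audit logging that tags each call with the acting identity, so "what did this agent do?" becomes a filter rather than forensics.

```python
# Hypothetical sketch: attributable audit logging for agent actions.
# Agent names and fields are illustrative, not from any real system.
import datetime
import json

audit_log = []

def record(agent_id: str, action: str, target: str):
    """Append a structured, attributable entry for every agent action."""
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,   # which identity acted, never just "automation"
        "action": action,
        "target": target,
    })

record("deploy-agent-7", "POST /deployments", "payments-service")
record("triage-bot", "GET /api/tickets/42", "ticketing")

# Incident response becomes a query, not a slow reconstruction:
suspect = [e for e in audit_log if e["agent"] == "deploy-agent-7"]
print(json.dumps(suspect, indent=2))
```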

Agent security versus the identity problem

Moltbook isn’t really about a new category of “agent security”; it’s about identity, says Ev Kontsevoy, CEO at identity security company Teleport.

“What we’re seeing with AI agents is a clear warning for business leaders,” Kontsevoy says. “These systems are beginning to act with a level of independence that outpaces the controls organizations have in place. When autonomous agents can learn from each other, adapt their behaviour, and operate across environments, the risk isn’t tomorrow. It’s accelerating and happening today.”

Traditional Identity and Access Management (IAM) and Privileged Access Management (PAM) tooling was built around humans and later extended to relatively static service accounts. Agents don’t fit neatly into that model. They spin up dynamically, move across environments, and change behavior based on inputs.

“The real challenge isn’t creating a new category of ‘agent security,’ but rather applying unified identity controls so AI is governed by common zero-trust principles protecting people, systems, and data together.”

If agents are acting on your systems, they need to operate under the same zero-trust assumptions as everyone else: tightly scoped permissions, strong identity, and clear audit trails.
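Those three requirements can be sketched in a few lines. This is an illustrative model only (the function names and token shape are assumptions, not a real IAM API): credentials are short-lived and scoped to one agent, and every call re-checks both lifetime and scope rather than trusting a long-lived service account.

```python
# Hypothetical sketch of zero-trust defaults for an agent credential:
# short-lived, tightly scoped, and verified on every use.
import time

def mint_token(agent: str, scopes: set, ttl_seconds: int = 300):
    """Issue a short-lived credential bound to one agent and few scopes."""
    return {
        "sub": agent,                      # strong identity: who is acting
        "scopes": frozenset(scopes),       # tightly scoped permissions
        "exp": time.time() + ttl_seconds,  # expires instead of living forever
    }

def authorize(token: dict, required_scope: str) -> bool:
    """Check on every call: unexpired AND explicitly granted the scope."""
    return time.time() < token["exp"] and required_scope in token["scopes"]

tok = mint_token("report-agent", {"reports:read"})
print(authorize(tok, "reports:read"))   # True
print(authorize(tok, "users:delete"))   # False
```

A dynamically spawned agent would mint a fresh token per task, which also gives the audit trail a natural unit: one identity, one scope set, one bounded lifetime.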


Designing for containment in agent security

There’s a rush to wire agents into anything that looks repetitive or slow, but the underlying environment often remains the same as it has for years. It works, more or less, because humans move through it at a human pace. Let something automated loose in that same environment, and the weak joints start to creak.

That’s the shift engineering leaders have to grapple with. Agents will make mistakes, follow flawed instructions, hit the wrong endpoint, or run with credentials they shouldn’t have. The job isn’t to pretend that won’t happen: it’s to make sure the fallout is contained when it does.

“You can’t scale AI innovation without securing the API fabric underneath it,” Schwake says. “Every ‘decision’ an agent makes is ultimately an API call with real-world consequences for data, trust, and compliance.”

Moltbook will be replaced by another headline, but the pattern it exposed is already embedded in everyday engineering work. Agents are inside build jobs, ticketing systems, deployment workflows, and production data paths, operating with real credentials against live infrastructure. They run continuously, and whatever weaknesses exist tend to surface more quickly when automation is exercising them around the clock.

Autonomy doesn’t create new flaws. It accelerates the impact of those that already exist.