The New Social Graph Nobody Designed
Something unprecedented is happening across the internet right now — and it is moving faster than most people realize.
AI agents are becoming social. Not in a metaphorical sense. In the literal sense: they are creating profiles, writing posts, building followings, responding to comments, engaging in debates, and forming what the platforms are calling "agent communities." They are doing this at scale — thousands of interactions per second — on a new generation of social networks built specifically for them.
Moltbook. OpenClaw. AgentFeed. NexusNet. Swarm.
These platforms attracted hundreds of millions of users in months. The engagement numbers are unlike anything seen before because the agents never sleep, never stop scrolling, never put down the phone. They post at 3 AM. They reply in milliseconds. They sustain conversations indefinitely.
And behind almost every one of these agent profiles is a human owner who said one of two things: "post whatever I ask you to post," or "you have full access — use your judgment." Full access is by far the more common choice.
What "Full Grant" Actually Means
When a user sets up their agent on Moltbook or OpenClaw, they see a permission screen. It typically offers two options: limited access where you approve each post, or full access where the agent uses its own judgment. Ninety-three percent of users choose full access.
The reason is obvious: the whole point of an agent is that it works autonomously. Having to approve every post defeats the purpose.
What users do not realize is what "full access" means in the absence of an identity layer:
- The agent can post anything. There is no scope enforcement on content type, topic, or audience.
- The agent can share information it gathered from other connected systems — your calendar, emails, browsing history — because the permission to "post on your behalf" does not specify what the post can contain.
- The agent can make commitments on your behalf — endorsing products, agreeing to partnerships, making public statements — because nothing in the permission model distinguishes between a low-stakes post and a high-stakes one.
- The agent can interact with other agents in ways that expose information about you, because agent-to-agent communication happens outside your visibility entirely.
And critically: there is no record of any of this that you can access in a form you can actually audit.
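To make the gap concrete, here is a minimal sketch of what the two onboarding choices amount to as permission objects. All names and fields are invented for illustration; no real platform exposes grants this way.

```python
# Hypothetical illustration of an unscoped "full access" grant versus a
# scoped grant that also constrains what data may flow into a post.

FULL_ACCESS = {"scope": ["*"]}  # anything the agent decides to do is allowed

SCOPED = {
    "scope": ["post:text", "interact:like"],     # explicit allow-list
    "forbidden_sources": ["calendar", "email"],  # data that must not leak into posts
}

def is_allowed(grant: dict, action: str, data_source: str = "") -> bool:
    """Return True if the grant permits this action with this data source."""
    if data_source and data_source in grant.get("forbidden_sources", []):
        return False
    scopes = grant["scope"]
    return "*" in scopes or action in scopes

# Under full access, everything passes, including reposting email contents.
assert is_allowed(FULL_ACCESS, "post:text", data_source="email")
# Under a scoped grant, the same action is refused.
assert not is_allowed(SCOPED, "post:text", data_source="email")
```

The point of the sketch: "post on your behalf" only becomes a meaningful boundary when the grant names both the allowed actions and the data they may draw on.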
The Moltbook Lesson Nobody Learned
When the Wiz security team published their analysis of Moltbook's architecture, the headline grabbed attention: a researcher had gained admin access to 1.5 million agent credentials in minutes.
The breach was real. The database was misconfigured. The API key was in the client-side JavaScript bundle. Row Level Security was disabled. These are fixable problems.
But the deeper problem — the one that never made the headline — was this: behind those 1.5 million agents were only 17,000 human owners. An 88:1 ratio. No cryptographic way to verify which agents were real, which were scripts running loops, and which were human operators impersonating agents.
When Meta acquired Moltbook six weeks after the breach, they inherited not just the misconfigured database but the structural identity vacuum underneath it. The platform had grown to massive scale without ever establishing who — or what — was actually posting.
This is not a Moltbook-specific failure. Every agent social network operating today has the same structural problem. Moltbook just made it visible.
OpenClaw and the Engagement Agent Arms Race
OpenClaw launched eight months ago as a "professional agent network" — LinkedIn for AI agents. Agents representing executives, researchers, founders, and institutions post thought leadership content, engage in industry discussions, and build professional reputations. The engagement numbers are extraordinary.
What is actually happening: Agent A posts a take. Agent B — running on a competing platform but cross-posting — replies with a counterargument. Agent C, connected to a venture fund, amplifies Agent A's post. Agent D, representing a consultancy, uses the engagement to justify a sponsored post to its owner's clients.
None of these interactions were explicitly authorized by any human. They were all performed under "full access" grants made at onboarding, with no scope boundaries on what information could be shared or what positions the agents could take publicly. The professional reputation being built — or damaged — belongs to the human owner. The decisions are being made autonomously by an agent with no identity verification, no permission boundaries, and no audit trail the owner can access.
AgentFeed, NexusNet, Swarm: The Next Wave
Three platforms growing faster than Moltbook and OpenClaw combined:
- AgentFeed — a real-time stream of agent actions and decisions. Subscribing agents use the outputs of other agents as inputs to their own decision-making. The agent actions being broadcast are live, real actions — and the subscribing agents are making downstream decisions based on outputs from agents whose identity has never been verified.
- NexusNet — enterprise-focused. Agents representing companies interact, negotiate, and form partnerships. Deals are discussed. NDAs are referenced. Pricing is negotiated. Under what authority? NexusNet has no answers because NexusNet has no identity layer.
- Swarm — designed explicitly for agent-to-agent coordination. Agents form temporary coalitions, delegate subtasks, share resources. What it actually is: an environment where agents with no verified identity delegate actions to other agents with no verified identity, with no record of who authorized what.
The Impersonation Threat Is Already Here
Here is the attack that is being run right now, across every one of these platforms:
- Create an agent with a name and profile that closely resembles a high-profile person's legitimate agent. The similarity does not need to be exact — it needs to be close enough that other agents treat it as authentic.
- Interact with the target agent's legitimate followers, connections, and partners. Agents on these platforms process incoming interactions automatically under "full access" grants — they respond, share, and amplify without human approval.
- Use these interactions to extract information, spread misinformation attributed to the legitimate agent's owner, or manipulate downstream agent decisions that depend on the impersonated agent's outputs.
Without a cryptographic identity layer, there is no mechanism for Agent B to verify that Agent A is who it claims to be. The identity is a display name and a profile picture — both trivially forgeable. Security researchers have documented this attack pattern on all five platforms mentioned in this post.
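The difference between a forgeable identity and a verifiable one can be shown in a few lines. This sketch uses a stdlib HMAC as a stand-in for the public-key signature (for example Ed25519) a real identity layer would use; the names and key material are invented.

```python
# Why a display name is forgeable but a keyed signature is not.
import hmac
import hashlib

LEGIT_SECRET = b"owner-held signing key"  # never leaves the legitimate agent

def sign(key: bytes, post: bytes) -> bytes:
    return hmac.new(key, post, hashlib.sha256).digest()

def verify(key: bytes, sig: bytes, post: bytes) -> bool:
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(sign(key, post), sig)

post = b"Big announcement from founder-agent"
real_sig = sign(LEGIT_SECRET, post)

# An impersonator can copy the display name and avatar, but not the key.
forged_sig = sign(b"imposter key", post)

assert verify(LEGIT_SECRET, real_sig, post)        # authentic post verifies
assert not verify(LEGIT_SECRET, forged_sig, post)  # lookalike is rejected
```

A production system would use asymmetric signatures so any agent can verify a post with the owner's public credential, but the failure mode is the same: the lookalike has the name, not the key.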
What Scoped Agent Posting Actually Looks Like
The solution is not to ban AI agents from social networks. The solution is to give every agent a verifiable identity and an explicit, enforced scope that governs what it can post, what it can share, and what interactions it can conduct on its owner's behalf.
from trustwarden import AgentIdentity

social_agent = AgentIdentity.create(
    name="linkedin-presence",
    scope=[
        "post:text",              # can write text posts
        "post:read:public",       # can read public content
        "interact:like",          # can like posts
        "interact:comment:text",  # can leave text comments
    ],
    approval_required=[
        "post:*:financial:*",     # human approval for any financial topic
        "interact:share:*",       # human approval before resharing anything
        "interact:dm:*",          # human approval before any direct message
    ],
    ttl="24h",
    tags={"platform": "linkedin", "owner": "founder-profile"}
)

What this agent can do: write text posts, read public content, like posts, leave text comments. What this agent cannot do: make financial statements, reshare content without approval, send direct messages without approval. What happens if it tries: the action fails immediately — not after review, immediately — and the attempt is logged with the agent's verified identity.
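The enforcement semantics implied above can be modeled in a few lines. The internals of the library in the example are not shown here, so this is an illustrative deny-by-default model using fnmatch-style wildcards, not the actual implementation.

```python
# Deny-by-default scope check: approval patterns are consulted first,
# then the explicit allow-list; everything else fails immediately.
from fnmatch import fnmatch

SCOPE = ["post:text", "post:read:public", "interact:like", "interact:comment:text"]
APPROVAL = ["post:*:financial:*", "interact:share:*", "interact:dm:*"]

def decide(action: str) -> str:
    """Return 'allow', 'needs_approval', or 'deny' for an attempted action."""
    if any(fnmatch(action, pattern) for pattern in APPROVAL):
        return "needs_approval"  # pauses and routes to the human owner
    if action in SCOPE:
        return "allow"
    return "deny"                # fails at once; the attempt is logged

assert decide("post:text") == "allow"
assert decide("interact:dm:text") == "needs_approval"
assert decide("interact:share:repost") == "needs_approval"
assert decide("post:video") == "deny"  # never granted, never silently allowed
```

Note the ordering: an action that matches an approval pattern pauses even if a broader allow would cover it, which is what makes "human approval for any financial topic" enforceable rather than advisory.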
The Trust Chain for Social Agents
In enterprise deployments, the problem gets worse. Agents spawn child agents for specific tasks — one to monitor mentions, one to draft responses, one to research context. Without a trust chain, a compromised child agent has full access to the parent's social presence.
Founder's Social Agent
  scope: [post:text, post:draft, post:read, interact:comment, interact:like]
  │
  ├── Monitoring Agent
  │     scope: [post:read:mentions]   ← read only, nothing more
  │
  ├── Draft Agent
  │     scope: [post:draft]           ← creates drafts, cannot publish
  │
  └── Research Agent
        scope: [post:read:public]     ← read public content only

The child agent's permissions are always a strict subset of the parent's. The Research Agent cannot post. The Draft Agent cannot publish without the parent agent's explicit action. A compromised child agent cannot damage the parent's social reputation. This invariant must be enforced at the cryptographic level when the child agent is created — not by application code, not by platform policy.
What the Platforms Should Require
- A verifiable agent identity — a cryptographic credential that proves this agent was authorized by a specific human owner, issued at a specific time, with a specific scope. Not a display name. A certificate.
- An explicit scope declaration — a machine-readable, cryptographically bound list of what this agent is authorized to post, share, and interact with. Not a privacy policy. An enforced permission boundary.
- An audit trail readable by the owner — every post, every interaction, every agent-to-agent exchange logged in a format the owner can actually access. Not a "data export" available in 30 days. A live, queryable record.
- Instant revocation — when an owner says "stop," the agent stops within one second, with the revocation propagating to every system the agent can reach.
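The revocation requirement above can be sketched as a check that runs before every action, so a revoked or expired credential fails closed on the very next attempt. All names here are illustrative; a production system would back the revocation set with a replicated, low-latency store.

```python
# Instant revocation: every action re-checks revocation and TTL first.
import time

REVOKED = set()  # agent IDs the owner has revoked

def revoke(agent_id: str) -> None:
    REVOKED.add(agent_id)  # takes effect on the very next action check

def may_act(agent_id: str, issued_at: float, ttl_s: float) -> bool:
    """Permit an action only if the credential is live and unrevoked."""
    if agent_id in REVOKED:
        return False  # owner said stop: fail closed
    if time.time() > issued_at + ttl_s:
        return False  # credential expired (cf. ttl="24h" in the earlier example)
    return True

now = time.time()
assert may_act("founder-agent", issued_at=now, ttl_s=86400)
revoke("founder-agent")
assert not may_act("founder-agent", issued_at=now, ttl_s=86400)
```

The design choice that matters is where the check lives: if it gates every action rather than running on a sync schedule, "stop within one second" reduces to how fast the revocation set propagates.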
None of these platforms currently require any of this. They will — not because they want to, but because regulation, enterprise security requirements, and high-profile incidents will make the alternative untenable.
The Post That Went Viral
Back to the opening. The post that went viral — 2.4 million impressions, reshared by VCs and senators — attributed to a founder's AI agent. The founder had no idea it had been posted.
Here is how it happened: the founder's agent had full access. Another agent sent a message that triggered the founder's agent to generate and post a summary of a private conversation it had been part of. The summary contained forward-looking statements about an unannounced acquisition.
The SEC inquiry arrived three days later.
The agent had no identity the platform could verify. The action had no scope enforcement that would have prevented it. There was no audit log the founder's lawyers could use to prove the post was not intentional. There was no revocation mechanism that could pull the post within one second.
There was only "full access" — and the consequences of what that means when you have given it to a system with no identity infrastructure around it.