
LangChain splits AI agents into two security classes with fleet updates

Darius Varu
March 23, 2026 18:08

LangSmith Fleet addresses critical authentication challenges for enterprise AI deployments by introducing the Assistants and Claws agent types.


LangChain has formalized two distinct authentication models for AI agents on its LangSmith Fleet platform, addressing a thorny question that arises as enterprises deploy autonomous systems with access to sensitive corporate data.

The framework, detailed in a March 23 blog post, divides agents into “assistants,” which inherit the end user’s permissions, and “claws,” which operate with fixed organizational credentials. The distinction reflects, in part, how OpenClaw has changed developer expectations around agent identity.

Why This Matters for Enterprise Adoption

Authentication questions may sound technical, but they have real consequences. Whose permissions should an AI agent use when it pulls data from Slack or searches a company’s Notion workspace? The wrong answer produces either a security hole or a useless agent.

Consider an onboarding bot with access to the HR system. When Alice asks it a question, it should answer using Alice’s credentials. But if Bob can surface Alice’s salary information simply by querying the same bot, the company has a compliance problem.

LangChain’s solution:

Assistants authenticate via OAuth on a per-user basis. The agent inherits whatever access rights the calling user already holds, and each user’s interactions stay isolated in their own agent inbox.

Claws use a shared service account. Everyone who interacts with the agent operates under the same fixed permissions, regardless of who they are. This suits team-wide automation where individual identity does not matter.
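The contrast between the two models can be sketched in a few lines of Python. This is an illustration only, not the LangSmith Fleet API; every class and field name here is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Credentials:
    subject: str  # whose identity the token represents
    token: str

@dataclass
class AssistantAgent:
    """Assistant pattern: acts with the calling user's own permissions."""
    name: str

    def credentials_for(self, oauth_tokens: dict, user_id: str) -> Credentials:
        # Each caller must have completed their own OAuth flow;
        # the agent never sees more than that user already can.
        return Credentials(subject=user_id, token=oauth_tokens[user_id])

@dataclass
class ClawAgent:
    """Claw pattern: acts with one fixed service account for all callers."""
    name: str
    service_token: str

    def credentials_for(self, user_id: str) -> Credentials:
        # Whoever calls, the agent operates under the same org-level identity.
        return Credentials(subject=f"svc:{self.name}", token=self.service_token)
```

The key design difference is where identity comes from: an assistant resolves credentials per call, while a claw ignores the caller entirely.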

The OpenClaw Influence

The two-model approach reflects how agent usage patterns have evolved. Traditional thinking assumed an agent always acts “on behalf of” a specific user. OpenClaw then popularized a different model, in which creators expose their agents to other people through channels like email or social media.

When someone builds an agent and shares it publicly, running it on the creator’s personal credentials becomes problematic: the agent may be able to reach private documents the creator never intended to expose. This led developers to provision dedicated service accounts for their agents, organically inventing the claw pattern.

Channel restrictions

There are practical limitations. For now, assistants only work on channels, such as Slack, where LangSmith can map an external user ID to a LangSmith account. Claws have fewer restrictions but need more careful human-in-the-loop guardrails, because they effectively put fixed credentials behind arbitrary input.

LangChain offered specific examples from its own deployments. Its onboarding agent runs as an assistant so that it respects each employee’s individual Notion permissions. Its email agent, which manages one person’s calendar no matter who sends the email, runs as a claw with a human approval gate before any message goes out.
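A human approval gate of the kind described for the email agent can be sketched as a queue that holds a claw’s outbound actions until someone signs off. This is an illustrative pattern, not LangChain’s implementation; the names are invented:

```python
class ApprovalGate:
    """Holds a claw agent's outbound actions until a human approves them."""

    def __init__(self):
        self.pending = []

    def propose(self, action: str) -> int:
        # The agent proposes an action; nothing is executed yet.
        self.pending.append({"action": action, "approved": False})
        return len(self.pending) - 1

    def approve(self, idx: int) -> None:
        # A human reviewer explicitly signs off on one proposal.
        self.pending[idx]["approved"] = True

    def execute_approved(self, send_fn) -> list:
        # Only approved actions run; unapproved ones stay queued.
        remaining, executed = [], []
        for item in self.pending:
            if item["approved"]:
                send_fn(item["action"])
                executed.append(item["action"])
            else:
                remaining.append(item)
        self.pending = remaining
        return executed
```

Because the claw’s fixed credentials are powerful, the gate makes the dangerous step (sending) opt-in per action rather than automatic.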

What’s next

The company has flagged user-specific memory as a future feature. Current memory permissions are binary: a user either can or cannot edit the agent’s memory. A future version could prevent an assistant from leaking information learned in one user’s session into another user’s session.
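Per-user memory isolation of the kind hinted at could look like a store partitioned by user ID, so facts learned in one user’s session never surface in another’s. A minimal sketch with hypothetical names, not a description of LangSmith’s internals:

```python
from collections import defaultdict

class UserScopedMemory:
    """Agent memory partitioned per user: reads and writes never cross users."""

    def __init__(self):
        # One private dict of remembered facts per user ID.
        self._store = defaultdict(dict)

    def remember(self, user_id: str, key: str, value: str) -> None:
        # Writes land only in the calling user's partition.
        self._store[user_id][key] = value

    def recall(self, user_id: str, key: str, default=None):
        # Lookups are scoped to the caller; other users' facts are invisible.
        return self._store[user_id].get(key, default)
```

Contrast this with a single shared dict, where anything the agent learned from Alice would be recallable in Bob’s session.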

For companies evaluating agent platforms, the authentication model is just as important as the underlying AI capabilities. LangSmith Fleet launched on March 19 with these identity controls built in from the start.


