langgraph-docs
Skill from [langchain-ai/deepagentsjs](https://github.com/langchain-ai/deepagentsjs)
Generates and maintains documentation for LangGraph using automated documentation generation techniques and agent-based workflows.
# Deep Agents
Using an LLM to call tools in a loop is the simplest form of an agent.
This architecture, however, can yield agents that are "shallow" and fail to plan and act over longer, more complex tasks.
Applications like "Deep Research", "Manus", and "Claude Code" have gotten around this limitation by implementing a combination of four things:
a planning tool, sub agents, access to a file system, and a detailed prompt.
> Tip: Looking for the Python version of this package? See [langchain-ai/deepagents](https://github.com/langchain-ai/deepagents)

deepagents is a TypeScript package that implements these in a general purpose way so that you can easily create a Deep Agent for your application.
Acknowledgements: This project was primarily inspired by Claude Code, and began largely as an attempt to understand what makes Claude Code general purpose and to make it even more so.
[npm](https://www.npmjs.com/package/deepagents) | [MIT License](https://opensource.org/licenses/MIT) | [TypeScript](https://www.typescriptlang.org/)
[Documentation](https://docs.langchain.com/oss/javascript/deepagents/overview) | [Examples](./examples) | [Report Bug](https://github.com/langchain-ai/deepagentsjs/issues) | [Request Feature](https://github.com/langchain-ai/deepagentsjs/issues)
## Overview
Using an LLM to call tools in a loop is the simplest form of an agent. However, this architecture can yield agents that are "shallow" and fail to plan and act over longer, more complex tasks.
Applications like Deep Research, Manus, and Claude Code have overcome this limitation by implementing a combination of four key components:
- Planning Tool - Strategic task decomposition
- Sub-Agents - Specialized agents for subtasks
- File System Access - Persistent state and memory
- Detailed Prompts - Context-rich instructions
Deep Agents is a TypeScript package that implements these patterns in a general-purpose way, enabling you to easily create sophisticated agents for your applications.
## Features
- Task Planning & Decomposition - Break complex tasks into manageable steps
- Sub-Agent Architecture - Delegate specialized work to focused agents
- File System Integration - Persistent memory and state management
- Streaming Support - Real-time updates, token streaming, and progress tracking
- LangGraph Powered - Built on the robust LangGraph framework
- TypeScript First - Full type safety and IntelliSense support
- Extensible - Easy to customize and extend for your use case
## Installation
```bash
# npm
npm install deepagents
# yarn
yarn add deepagents
# pnpm
pnpm add deepagents
```
## Usage
To run the example below, you will also need the Tavily integration: `npm install @langchain/tavily`.
Make sure to set `TAVILY_API_KEY` in your environment. You can generate an API key [here](https://www.tavily.com/).
```typescript
import { tool } from "langchain";
import { TavilySearch } from "@langchain/tavily";
import { createDeepAgent } from "deepagents";
import { z } from "zod";

// Web search tool
const internetSearch = tool(
  async ({
    query,
    maxResults = 5,
    topic = "general",
    includeRawContent = false,
  }: {
    query: string;
    maxResults?: number;
    topic?: "general" | "news" | "finance";
    includeRawContent?: boolean;
  }) => {
    const tavilySearch = new TavilySearch({
      maxResults,
      tavilyApiKey: process.env.TAVILY_API_KEY,
      includeRawContent,
      topic,
    });
    return await tavilySearch.invoke({ query });
  },
  {
    name: "internet_search",
    description: "Run a web search",
    schema: z.object({
      query: z.string().describe("The search query"),
      maxResults: z
        .number()
        .optional()
        .default(5)
        .describe("Maximum number of results to return"),
      topic: z
        .enum(["general", "news", "finance"])
        .optional()
        .default("general")
        .describe("Search topic category"),
      includeRawContent: z
        .boolean()
        .optional()
        .default(false)
        .describe("Whether to include raw content"),
    }),
  },
);

// System prompt to steer the agent to be an expert researcher
const researchInstructions = `You are an expert researcher. Your job is to conduct thorough research, and then write a polished report.
You have access to an internet search tool as your primary means of gathering information.
\`internet_search\`
Use this to run an internet search for a given query. You can specify the max number of results to return, the topic, and whether raw content should be included.
`;

// Create the deep agent
const agent = createDeepAgent({
  tools: [internetSearch],
  systemPrompt: researchInstructions,
});

// Invoke the agent
const result = await agent.invoke({
  messages: [{ role: "user", content: "What is langgraph?" }],
});
```
See [examples/research/research-agent.ts](examples/research/research-agent.ts) for a more complex example.
The agent created with `createDeepAgent` is just a LangGraph graph, so you can interact with it (streaming, human-in-the-loop, memory, Studio) in the same way you would with any LangGraph agent.
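For example, here is a minimal streaming sketch using LangGraph's standard `.stream()` API, reusing the `agent` from the Usage example above; with `streamMode: "values"` each chunk is the full graph state after a step.

```typescript
// Stream full state snapshots as the agent works (LangGraph "values" mode).
const stream = await agent.stream(
  { messages: [{ role: "user", content: "What is langgraph?" }] },
  { streamMode: "values" },
);

for await (const chunk of stream) {
  // Each chunk is the accumulated graph state; log the most recent message.
  const lastMessage = chunk.messages[chunk.messages.length - 1];
  console.log(lastMessage?.content);
}
```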
## Core Capabilities
### Planning & Task Decomposition
Deep Agents include a built-in `write_todos` tool that enables agents to break down complex tasks into discrete steps, track progress, and adapt plans as new information emerges.
### Context Management
File system tools (`ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`) allow agents to offload large context to memory, preventing context window overflow and enabling work with variable-length tool results.
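As an illustration, here is a hedged sketch of seeding the agent's virtual file system and reading results back out. It assumes the JS package mirrors the Python package's `files` state key (a path-to-content record); verify the exact state shape against the deepagents docs.

```typescript
// Sketch only: assumes the agent state exposes a `files` record (path -> content).
const fsResult = await agent.invoke({
  messages: [
    { role: "user", content: "Summarize notes.txt and write the summary to report.md" },
  ],
  files: {
    "notes.txt": "LangGraph is a low-level framework for building stateful agents...",
  },
});

// Files the agent wrote with write_file should appear back in the returned state.
console.log(Object.keys(fsResult.files ?? {}));
```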
### Subagent Spawning
A built-in `task` tool enables agents to spawn specialized subagents for context isolation. This keeps the main agent's context clean while still going deep on specific subtasks.
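For illustration, here is a sketch of registering a custom subagent alongside the built-in general-purpose one. The option and field names (`subagents`, `name`, `description`, `systemPrompt`, `tools`) are assumptions about the `createDeepAgent` API; check the documentation for the exact shape.

```typescript
// Hypothetical critique subagent; field names are assumptions, not confirmed API.
const critiqueSubAgent = {
  name: "critique-agent",
  description: "Critiques a draft report and suggests concrete improvements",
  systemPrompt: "You are a meticulous editor. Critique the report you are given.",
  tools: [],
};

const agentWithCritique = createDeepAgent({
  tools: [internetSearch],
  systemPrompt: researchInstructions,
  subagents: [critiqueSubAgent],
});
```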
### Long-term Memory
Extend agents with persistent memory across threads using LangGraph's Store. Agents can save and retrieve information from previous conversations.
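A heavily hedged sketch of wiring in a LangGraph store for cross-thread memory: `InMemoryStore` is LangGraph's in-memory store implementation, but the `store` option on `createDeepAgent` shown below is an assumption, so consult the deepagents documentation for the actual wiring.

```typescript
import { InMemoryStore } from "@langchain/langgraph";
import { createDeepAgent } from "deepagents";

// In-memory store for demonstration; swap in a persistent store for production.
const store = new InMemoryStore();

const memoryAgent = createDeepAgent({
  systemPrompt: "You are a helpful assistant that remembers prior conversations.",
  store, // assumption: option name and wiring may differ in the real API
});
```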
## Customizing Deep Agents
There are several parameters you can pass to `createDeepAgent` to create your own custom deep agent.
### `model`
By default, deepagents uses `claude-sonnet-4-5-20250929`. You can customize this by passing any [LangChain model object](https://js.langchain.com/docs/integrations/chat/).
```typescript
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatOpenAI } from "@langchain/openai";
import { createDeepAgent } from "deepagents";

// Using Anthropic
const agent = createDeepAgent({
  model: new ChatAnthropic({
    model: "claude-sonnet-4-20250514",
    temperature: 0,
  }),
});

// Using OpenAI
const agent2 = createDeepAgent({
  model: new ChatOpenAI({
    model: "gpt-5",
    temperature: 0,
  }),
});
```
### `systemPrompt`
Deep Agents come with a built-in system prompt. It is a relatively detailed prompt, heavily based on and inspired by [attempts](https://github.com/kn1026/cc/blob/main/claudecode.md) to [replicate](https://github.com/asgeirtj/system_prompts_leaks/blob/main/Anthropic/claude-code.md) Claude Code's system prompt, but made more general purpose. The default prompt contains detailed instructions for how to use the built-in planning tool, file system tools, and sub-agents.
Each deep agent tailored to a use case should include a custom system prompt specific to that use case as well. The importance of prompting for creating a successful deep agent cannot be overstated.
```typescript
import { createDeepAgent } from "deepagents";
const researchInstructions = `You are an expert researcher. Your job is to conduct thorough research, and then write a polished report.`;

const agent = createDeepAgent({
  tools: [internetSearch],
  systemPrompt: researchInstructions,
});
```