Building Your First AI Agent with Tool Access
A step-by-step tutorial for building an AI agent that can search the web, execute code, and send emails using MCP tools.
This tutorial walks you through building an AI agent that can use external tools — searching the web, running code in a sandbox, and sending emails. We will use MCP-compatible tools from the 4agent catalog.
Prerequisites
- Node.js 20+
- An API key from your preferred LLM provider
- Basic familiarity with TypeScript
Step 1: Choose Your Tools
Every agent needs the right set of tools for its task. For our example agent — a research assistant that can find information and summarize it — we need:
- Web search — to find current information
- Code execution — to process and analyze data
- Email — to deliver the results
Step 2: Set Up Your Project
```shell
mkdir my-agent && cd my-agent
npm init -y
npm install @anthropic-ai/sdk @e2b/code-interpreter
```

Create a tsconfig.json with ESM module resolution and an src/index.ts entry point.
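One minimal tsconfig.json that satisfies this (these settings are one reasonable choice, not the only valid configuration):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "strict": true,
    "outDir": "dist"
  },
  "include": ["src"]
}
```

Also set `"type": "module"` in package.json so Node treats the compiled output as ESM.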
Step 3: Define Your Agent Loop
The core of any agent is the tool-use loop:
- Send a message to the LLM with available tools
- The LLM decides which tool to call (if any)
- Execute the tool and return the result
- Repeat until the LLM has a final answer
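The loop below references a `toolDefinitions` array that the tutorial has not yet defined. Here is a minimal sketch following the Anthropic Messages API `tools` shape (name, description, and a JSON Schema `input_schema`); the descriptions are illustrative:

```typescript
// Tool schemas the model can choose from. The property names ("query",
// "code") match the handlers implemented in Step 4.
const toolDefinitions = [
  {
    name: "search_web",
    description: "Search the web and return results for a query.",
    input_schema: {
      type: "object" as const,
      properties: { query: { type: "string" } },
      required: ["query"],
    },
  },
  {
    name: "run_code",
    description: "Execute Python code in a sandbox and return its output.",
    input_schema: {
      type: "object" as const,
      properties: { code: { type: "string" } },
      required: ["code"],
    },
  },
];
```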
```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

async function runAgent(task: string) {
  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: task },
  ];

  while (true) {
    const response = await client.messages.create({
      model: "claude-sonnet-4-6",
      max_tokens: 4096,
      tools: toolDefinitions,
      messages,
    });

    // Check if the model wants to use a tool
    if (response.stop_reason === "tool_use") {
      const toolUse = response.content.find(
        (c): c is Anthropic.ToolUseBlock => c.type === "tool_use"
      );
      if (!toolUse) throw new Error("stop_reason was tool_use but no tool_use block found");

      const result = await executeTool(
        toolUse.name,
        toolUse.input as Record<string, unknown>
      );

      // Echo the assistant turn back, then attach the tool result
      messages.push(
        { role: "assistant", content: response.content },
        {
          role: "user",
          content: [
            { type: "tool_result", tool_use_id: toolUse.id, content: result },
          ],
        },
      );
    } else {
      // Final response
      return response.content;
    }
  }
}
```

Step 4: Implement Tool Execution
Each tool needs a handler that translates the LLM's request into an API call:
```typescript
async function executeTool(
  name: string,
  input: Record<string, unknown>
): Promise<string> {
  switch (name) {
    case "run_code":
      return await runCodeInSandbox(input.code as string);
    case "search_web":
      return await searchWeb(input.query as string);
    default:
      return `Unknown tool: ${name}`;
  }
}
```

Code Execution with E2B
```typescript
import { Sandbox } from "@e2b/code-interpreter";

async function runCodeInSandbox(code: string) {
  const sandbox = await Sandbox.create();
  try {
    const result = await sandbox.runCode(code);
    return JSON.stringify(result);
  } finally {
    // Always release the sandbox, even if execution throws
    await sandbox.kill();
  }
}
```

Step 5: Run Your Agent
```typescript
const result = await runAgent(
  "Find the top 5 trending AI tools this week and create a summary table."
);
console.log(result);
```

Your agent will now search the web, process results in a sandbox, and return a structured answer.
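One loose end: the tutorial never defines `searchWeb`, which Step 4's handler calls. A hypothetical sketch is below; the endpoint, its query parameter, and the response shape are all invented placeholders you would replace with a real search API:

```typescript
// Placeholder endpoint -- substitute your actual search provider's URL.
const SEARCH_API_URL = "https://example.com/search";

// Pure helper so URL construction is easy to test in isolation.
function buildSearchUrl(query: string): string {
  const url = new URL(SEARCH_API_URL);
  url.searchParams.set("q", query);
  return url.toString();
}

async function searchWeb(query: string): Promise<string> {
  const res = await fetch(buildSearchUrl(query));
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  const data = await res.json();
  // Return a compact string the model can read as a tool_result
  return JSON.stringify(data);
}
```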
Next Steps
- Add memory so your agent can remember context across conversations
- Add error handling and retries for tool failures
- Explore more tools in the 4agent catalog to expand your agent's capabilities
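Tool calls can fail transiently (network errors, sandbox startup). One way to add retries is a small backoff wrapper around `executeTool`; `withRetries` and its delay numbers are illustrative, not part of the tutorial's code:

```typescript
// Retry an async operation with exponential backoff, rethrowing the
// last error once all attempts are exhausted.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 250ms, 500ms, 1000ms, ... between attempts
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Usage inside the agent loop:
// const result = await withRetries(() =>
//   executeTool(toolUse.name, toolUse.input as Record<string, unknown>)
// );
```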
The full source code for this tutorial is available on GitHub. Star the repo and follow along with each step.