
Introduction

The Claude Agent SDK allows you to build powerful AI agents that can use tools and make decisions autonomously.
This integration uses Helicone’s Model Context Protocol (MCP) to provide seamless AI Gateway access to your Claude agents.

Integration Steps

1. Create a Helicone Account

Sign up at helicone.ai and generate an API key. Make sure you have credits available in your Helicone account before making requests (or bring your own provider keys, BYOK).
2. Install the Helicone MCP Package

npm install @helicone/mcp
3. Configure the MCP Server in Your Application

You can configure the MCP server either in Claude Desktop (for local development) or directly in the Claude Agent SDK (shown in step 4).

For Claude Desktop, add the following to your configuration file:
  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "helicone": {
      "command": "npx",
      "args": ["@helicone/mcp@latest"],
      "env": {
        "HELICONE_API_KEY": "sk-helicone-xxxxxxx-xxxxxxx-xxxxxxx-xxxxxxx"
      }
    }
  }
}
The Helicone MCP tools will be automatically available in Claude Desktop.
4. Test the Integration

import { query } from '@anthropic-ai/claude-agent-sdk';

const result = query({
  prompt: 'Use the use_ai_gateway tool to generate a creative story about AI using gpt-4o with temperature 0.8',
  options: {
    mcpServers: {
      helicone: {
        command: 'npx',
        args: ['@helicone/mcp'],
        env: {
          HELICONE_API_KEY: process.env.HELICONE_API_KEY
        }
      }
    },
    allowedTools: ['mcp__helicone__use_ai_gateway']
  }
});

// Get the response
for await (const message of result) {
  if (message.type === 'result' && message.result) {
    console.log(message.result);
  }
}
The agent will automatically call the use_ai_gateway tool to route the request through the Helicone AI Gateway.

Available MCP Tools

use_ai_gateway

Make requests to any LLM provider through Helicone AI Gateway with automatic observability. Parameters:
  • model (required): Model name (e.g., gpt-4o, claude-sonnet-4, gemini-2.0-flash; see Supported Models for the full list)
  • messages (required): Array of conversation messages
  • max_tokens (optional): Maximum tokens to generate
  • temperature (optional): Response randomness (0-2)
  • sessionId (optional): Session ID for request grouping
  • sessionName (optional): Human-readable session name
  • userId (optional): User identifier for tracking
  • customProperties (optional): Custom metadata for filtering
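Put together, the arguments an agent passes to use_ai_gateway might look like the following sketch. All values here are illustrative (a hypothetical session, user, and prompt), not required settings:

```javascript
// Hypothetical use_ai_gateway arguments, built from the
// parameters documented above. Only model and messages are required.
const args = {
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a concise assistant.' },
    { role: 'user', content: 'Summarize the benefits of request observability.' }
  ],
  max_tokens: 256,
  temperature: 0.7,            // 0-2; higher is more random
  sessionId: 'demo-session-1', // groups related requests together
  sessionName: 'observability-demo',
  userId: 'user-123',
  customProperties: { feature: 'docs-example' }
};

console.log(JSON.stringify(args, null, 2));
```

Optional fields like sessionId and customProperties surface in the Helicone dashboard, so it is worth having the agent set them whenever requests belong to a larger workflow.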

query_requests

Query historical requests for debugging and analysis with filters, pagination, and sorting.

query_sessions

Query conversation sessions with filtering, search, and time range capabilities.

Complete Working Examples

Basic Agent with Session Tracking

import { query } from '@anthropic-ai/claude-agent-sdk';

// Configure MCP server
const mcpConfig = {
  helicone: {
    command: 'npx',
    args: ['@helicone/mcp'],
    env: {
      HELICONE_API_KEY: process.env.HELICONE_API_KEY
    }
  }
};

// Make a request with session tracking
const sessionId = `chat-${Date.now()}`;
const result = query({
  prompt: `Use the use_ai_gateway tool to ask Claude Sonnet: "Plan a 3-day trip to Japan"

Use these settings:
- sessionId: "${sessionId}"
- sessionName: "travel-planning"
- customProperties: {"topic": "travel", "destination": "japan"}`,
  options: {
    mcpServers: mcpConfig,
    allowedTools: ['mcp__helicone__use_ai_gateway']
  }
});

// Extract response
for await (const message of result) {
  if (message.type === 'result' && message.result) {
    console.log('Travel Plan:', message.result);
  }
}

Multi-Model Comparison

import { query } from '@anthropic-ai/claude-agent-sdk';

const sessionId = `comparison-${Date.now()}`;
const result = query({
  prompt: `Compare responses from multiple models on: "Explain quantum computing in simple terms"

1. Use GPT-4o-mini (fast, cost-effective)
2. Use Claude Sonnet (high quality)
3. Use GPT-4o (balanced)

Use sessionId: "${sessionId}" for all requests so I can compare them later.`,
  options: {
    mcpServers: {
      helicone: {
        command: 'npx',
        args: ['@helicone/mcp'],
        env: {
          HELICONE_API_KEY: process.env.HELICONE_API_KEY
        }
      }
    },
    allowedTools: ['mcp__helicone__use_ai_gateway']
  }
});

// Get comparison results
for await (const message of result) {
  if (message.type === 'result' && message.result) {
    console.log('Comparison:', message.result);
  }
}

Self-Analyzing Agent

import { query } from '@anthropic-ai/claude-agent-sdk';

const result = query({
  prompt: `Perform a task and then analyze your own performance:

1. Use the use_ai_gateway tool to generate a haiku about AI
2. Then use query_requests to check how much the request cost
3. Use query_sessions to see your recent activity
4. Provide a summary of your performance and costs`,
  options: {
    mcpServers: {
      helicone: {
        command: 'npx',
        args: ['@helicone/mcp'],
        env: {
          HELICONE_API_KEY: process.env.HELICONE_API_KEY
        }
      }
    },
    allowedTools: [
      'mcp__helicone__use_ai_gateway',
      'mcp__helicone__query_requests',
      'mcp__helicone__query_sessions'
    ]
  }
});

// Get self-analysis
for await (const message of result) {
  if (message.type === 'result' && message.result) {
    console.log('Self-Analysis:', message.result);
  }
}

Next Steps