Introduction
LangGraph is a framework for building stateful, multi-agent applications with LLMs. The integration with the Helicone AI Gateway is nearly identical to the LangChain integration, with the addition of agent-specific features.
This integration requires only two changes to your existing LangGraph code: updating the base URL and the API key. See the LangChain AI Gateway docs for full feature details.
Quick Start
Follow the same setup as the LangChain AI Gateway integration, then create your agent:
TypeScript - OpenAI
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { MemorySaver } from "@langchain/langgraph";

const model = new ChatOpenAI({
  model: "gpt-4.1-mini",
  apiKey: process.env.HELICONE_API_KEY,
  configuration: {
    baseURL: "https://ai-gateway.helicone.ai/v1",
  },
});

const agent = createReactAgent({
  llm: model,
  tools: yourTools,
  checkpointer: new MemorySaver(),
});
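Because this agent is built with a MemorySaver checkpointer, LangGraph expects a thread_id in the per-call config so it knows which conversation's state to restore. A minimal sketch of that config (the id value and variable name here are illustrative, not part of the API):

```typescript
import { randomUUID } from "node:crypto";

// Each distinct conversation gets its own thread_id; reusing the same id
// on a later call lets MemorySaver restore that thread's prior messages.
const threadConfig = {
  configurable: { thread_id: randomUUID() },
};

// Passed as the second argument when running the agent, e.g.:
// const result = await agent.invoke({ messages: [...] }, threadConfig);
```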
Migration Example
Before (Direct Provider)
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
});

const agent = createReactAgent({
  llm: model,
  tools: myTools,
});
After (Helicone AI Gateway)
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const model = new ChatOpenAI({
  model: "gpt-4.1-mini", // 100+ models supported
  apiKey: process.env.HELICONE_API_KEY, // Your Helicone API key
  configuration: {
    baseURL: "https://ai-gateway.helicone.ai/v1", // Add this!
  },
});

const agent = createReactAgent({
  llm: model,
  tools: myTools,
});
You can add custom properties when calling your agent with invoke():
import { HumanMessage } from "@langchain/core/messages";
import { v4 as uuidv4 } from "uuid";

const result = await agent.invoke(
  { messages: [new HumanMessage("What is the weather in San Francisco?")] },
  {
    options: {
      headers: {
        "Helicone-Session-Id": uuidv4(),
        "Helicone-Session-Path": "/weather/query",
        "Helicone-Property-Query-Type": "weather",
      },
    },
  }
);
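If you tag many agent calls this way, it can help to build the Helicone headers in one place. A minimal sketch, assuming only the header names shown above (the heliconeHeaders helper is our own illustration, not part of any Helicone or LangChain SDK):

```typescript
import { randomUUID } from "node:crypto";

// Builds the per-request Helicone headers: a fresh session id, the session
// path, and one "Helicone-Property-*" header per custom property.
function heliconeHeaders(
  sessionPath: string,
  properties: Record<string, string> = {}
): Record<string, string> {
  const headers: Record<string, string> = {
    "Helicone-Session-Id": randomUUID(),
    "Helicone-Session-Path": sessionPath,
  };
  for (const [key, value] of Object.entries(properties)) {
    headers[`Helicone-Property-${key}`] = value;
  }
  return headers;
}

// Usage with the invoke() call above:
// await agent.invoke(
//   { messages },
//   { options: { headers: heliconeHeaders("/weather/query", { "Query-Type": "weather" }) } }
// );
```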
AI Gateway Overview: Learn about Helicone's AI Gateway features and capabilities
Provider Routing: Configure intelligent routing and automatic failover
Model Registry: Browse all available models and providers
LangChain Integration: Full AI Gateway feature documentation
Sessions: Track multi-turn conversations and agent workflows
Custom Properties: Add metadata to track and filter your requests