Introduction
Semantic Kernel is Microsoft's open-source SDK for building AI agents and orchestrating LLM workflows across multiple languages (.NET, Python, Java). By integrating Helicone AI Gateway with Semantic Kernel, you can:
- Route to different models & providers with automatic failover through a single endpoint
- Consolidate billing with pass-through billing or bring your own keys
- Monitor all requests with automatic cost tracking in one dashboard
This integration requires only a one-line change to your existing Semantic Kernel code: adding the AI Gateway endpoint.
Integration Steps
Create an account + Generate an API Key
Sign up at helicone.ai and generate an API key. You'll also need to configure your provider API keys (OpenAI, Anthropic, etc.) at Helicone Providers for BYOK (Bring Your Own Keys).
Set environment variables
# Your Helicone API key
export HELICONE_API_KEY=<your-helicone-api-key>
Create a .env file in your project:
HELICONE_API_KEY=sk-helicone-...
Add the AI Gateway endpoint to your Semantic Kernel configuration
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using DotNetEnv;

// Load environment variables
Env.Load();
var heliconeApiKey = Environment.GetEnvironmentVariable("HELICONE_API_KEY");

// Create the kernel builder
var builder = Kernel.CreateBuilder();

// Add OpenAI chat completion with the Helicone AI Gateway endpoint
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4.1-mini",                                // Any model from the Helicone registry
    apiKey: heliconeApiKey,                                  // Your Helicone API key
    endpoint: new Uri("https://ai-gateway.helicone.ai/v1")   // Helicone AI Gateway
);

var kernel = builder.Build();
The only change from a standard Semantic Kernel setup is adding the endpoint parameter. Everything else stays the same!
Use the chat service normally
Your existing Semantic Kernel code continues to work without any changes:
using Microsoft.SemanticKernel.ChatCompletion;
// Get the chat service
var chatService = kernel.GetRequiredService<IChatCompletionService>();

// Create the chat history
var chatHistory = new ChatHistory();
chatHistory.AddUserMessage("What is the capital of France?");

// Get the response
var response = await chatService.GetChatMessageContentAsync(chatHistory);
Console.WriteLine(response.Content);
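Streaming works the same way, since the gateway forwards streamed chunks like the direct OpenAI endpoint. A minimal sketch, reusing the kernel from the setup above and Semantic Kernel's standard IChatCompletionService streaming method:

// Stream tokens to the console as they arrive
var streamingService = kernel.GetRequiredService<IChatCompletionService>();
var streamingHistory = new ChatHistory();
streamingHistory.AddUserMessage("Write a haiku about gateways.");

await foreach (var chunk in streamingService.GetStreamingChatMessageContentsAsync(streamingHistory))
{
    Console.Write(chunk.Content);
}
Console.WriteLine();

Streamed requests are logged in the Helicone dashboard the same way as non-streamed ones.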
View requests in the Helicone dashboard
All your Semantic Kernel requests are now visible in your Helicone dashboard:
- Request/response bodies
- Latency metrics
- Token usage and costs
- Model performance analytics
- Error tracking
Migration Example
Here's what migrating an existing Semantic Kernel application looks like:
Before (Direct OpenAI)
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4o-mini",
    apiKey: openAiApiKey
);
var kernel = builder.Build();
After (Helicone AI Gateway)
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4.1-mini",                                // Use Helicone model names
    apiKey: heliconeApiKey,                                  // Your Helicone API key
    endpoint: new Uri("https://ai-gateway.helicone.ai/v1")   // Add this line!
);
var kernel = builder.Build();
That's it! Just one additional parameter and you're routing through Helicone's AI Gateway.
Complete Working Example
Here's a full example that tests multiple models:
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using DotNetEnv;

// Load environment
Env.Load();
var apiKey = Environment.GetEnvironmentVariable("HELICONE_API_KEY");
if (string.IsNullOrEmpty(apiKey))
{
    Console.WriteLine("❌ HELICONE_API_KEY not found in environment");
    return;
}

Console.WriteLine("🚀 Testing multiple models through Helicone AI Gateway\n");

// Test different models
await TestModel("gpt-4.1-mini", "OpenAI GPT-4.1 Mini");
await TestModel("claude-opus-4-1", "Anthropic Claude Opus 4.1");
await TestModel("gemini-2.5-flash-lite", "Google Gemini 2.5 Flash Lite");

Console.WriteLine("\n✅ All models tested!");
Console.WriteLine("📊 Check your dashboard: https://us.helicone.ai/dashboard");

async Task TestModel(string modelId, string modelName)
{
    try
    {
        var builder = Kernel.CreateBuilder();

        // Configure with the Helicone AI Gateway
        builder.AddOpenAIChatCompletion(
            modelId: modelId,
            apiKey: apiKey,
            endpoint: new Uri("https://ai-gateway.helicone.ai/v1")
        );

        var kernel = builder.Build();
        var chatService = kernel.GetRequiredService<IChatCompletionService>();

        var chatHistory = new ChatHistory();
        chatHistory.AddUserMessage("Say hello in one sentence.");

        Console.Write($"🤖 Testing {modelName}... ");
        var response = await chatService.GetChatMessageContentAsync(chatHistory);
        Console.WriteLine("✅");
        Console.WriteLine($"   Response: {response.Content}\n");
    }
    catch (Exception ex)
    {
        Console.WriteLine("❌");
        Console.WriteLine($"   Error: {ex.Message}\n");
    }
}