
Introduction

Vercel AI SDK is a TypeScript toolkit for building AI-powered applications with React, Next.js, Vue, and more.
The Helicone provider for Vercel AI SDK is available as a dedicated package: @helicone/ai-sdk-provider.

Integration Steps

1. Create a Helicone account

Sign up at helicone.ai and generate an API key.
You’ll also need to configure your provider API keys (OpenAI, Anthropic, etc.) at Helicone Providers for BYOK (Bring Your Own Keys).

2. Set your Helicone API key as an environment variable

HELICONE_API_KEY=sk-helicone-...

3. Install the Helicone AI SDK provider

pnpm add @helicone/ai-sdk-provider ai
4. Configure Vercel AI SDK with Helicone

import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

// Initialize Helicone provider
const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY
});

// Use any model from 100+ providers
const result = await generateText({
  model: helicone('claude-4.5-haiku'),
  prompt: 'Write a haiku about artificial intelligence'
});

console.log(result.text);

You can switch between 100+ models without changing your code. Just update the model name.
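Because the model is just a string, the swap can be a config change rather than a code change. A minimal sketch — the registry, the helper, and the use-case names below are hypothetical illustrations, not part of the provider's API:

```typescript
// Hypothetical registry mapping use cases to model ids on the Helicone gateway.
const MODEL_IDS = {
  chat: 'claude-4.5-haiku',
  summarize: 'gemini-2.5-flash-lite',
  code: 'gpt-4o',
} as const;

type UseCase = keyof typeof MODEL_IDS;

// Call sites ask for a use case instead of hardcoding a model name.
function modelFor(useCase: UseCase): string {
  return MODEL_IDS[useCase];
}

// Usage with the provider configured above:
//   const { text } = await generateText({
//     model: helicone(modelFor('chat')),
//     prompt: 'Hello!'
//   });
```

Switching every "chat" call to a different model then means editing one entry in the registry.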

Complete Working Examples

Basic Text Generation

import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY
});

const { text } = await generateText({
  model: helicone('gemini-2.5-flash-lite'),
  prompt: 'What is Helicone?'
});

console.log(text);

Streaming Text

import { createHelicone } from '@helicone/ai-sdk-provider';
import { streamText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY
});

// streamText returns immediately; the stream is consumed below
const result = streamText({
  model: helicone('deepseek-v3.1-terminus'),
  prompt: 'Write a short story about a robot learning to paint',
  maxTokens: 300
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

console.log('\n\nStream completed!');

Provider Selection

By default, Helicone’s AI gateway automatically routes to the cheapest provider. You can also manually select a specific provider:

import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY
});

// Automatic routing (cheapest provider)
const autoResult = await generateText({
  model: helicone('gpt-4o'),
  prompt: 'Hello!'
});

// Manual provider selection
const manualResult = await generateText({
  model: helicone('claude-4.5-sonnet/anthropic'),
  prompt: 'Hello!'
});

// Fallback chain: the first model/provider is tried; if it fails, the next one is used, and so on.
const fallbackResult = await generateText({
  model: helicone('claude-4.5-sonnet/anthropic,gpt-4o/openai'),
  prompt: 'Hello!'
});

With Custom Properties and Session Tracking

import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY
});

const result = await generateText({
  model: helicone('claude-4.5-haiku', {
    extraBody: {
      helicone: {
        sessionId: 'my-session',
        userId: 'user-123',
        properties: {
          environment: 'production',
          appVersion: '2.1.0',
          feature: 'quantum-explanation'
        }
      }
    }
  }),
  prompt: 'Explain quantum computing'
});

Tool Calling

import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText, tool } from 'ai';
import { z } from 'zod';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY
});

const result = await generateText({
  model: helicone('gpt-4o'),
  prompt: 'What is the weather like in San Francisco?',
  maxSteps: 2, // allow a second step so the tool result becomes a final text answer
  tools: {
    getWeather: tool({
      description: 'Get weather for a location',
      parameters: z.object({
        location: z.string().describe('The city name')
      }),
      execute: async (args) => {
        return `It's sunny in ${args.location}`;
      }
    })
  }
});

console.log(result.text);

Helicone Prompts Integration

Use prompts created in your Helicone dashboard instead of hardcoding messages in your application:

import { createHelicone } from '@helicone/ai-sdk-provider';
import type { WithHeliconePrompt } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY
});

const result = await generateText({
  model: helicone('gpt-4o', {
    promptId: 'sg45wqc',
    inputs: {
      customer_name: 'Sarah Johnson',
      issue_type: 'billing',
      account_type: 'premium'
    },
    environment: 'production',
    extraBody: {
      helicone: {
        sessionId: 'support-session-123',
        properties: {
          department: 'customer-support'
        }
      }
    }
  }),
  messages: [{ role: 'user', content: 'placeholder' }]
} as WithHeliconePrompt);

When using promptId, you must still pass a placeholder messages array to satisfy the Vercel AI SDK’s validation. The actual prompt content will be fetched from your Helicone dashboard, and the placeholder messages will be ignored.
Benefits of using Helicone prompts:
  • 🎯 Centralized Management: Update prompts without code changes
  • 👩🏻‍💻 Perfect for non-technical users: Create prompts using the Helicone dashboard
  • 🚀 Lower Latency: Single API call, no message construction overhead
  • 🔧 A/B Testing: Test different prompt versions with environments
  • 📊 Better Analytics: Track prompt performance across versions
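The A/B-testing point above can be sketched as a deterministic traffic split that feeds the `environment` option. Everything here (the helper name, the hashing scheme, and the split percentages) is a hypothetical illustration, not part of the provider's API:

```typescript
// Hypothetical helper: route a fixed percentage of users to a candidate prompt environment.
// A stable hash of the user id keeps each user on the same variant across requests.
function promptEnvironment(userId: string, candidatePercent: number): 'staging' | 'production' {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple 32-bit rolling hash
  }
  return hash % 100 < candidatePercent ? 'staging' : 'production';
}

// Usage: pass the chosen environment when resolving the prompt:
//   model: helicone('gpt-4o', {
//     promptId: 'sg45wqc',
//     environment: promptEnvironment(userId, 10), // ~10% of users see the staging prompt
//     inputs: { /* ... */ }
//   })
```

Because the hash is stable per user, a given user always sees the same prompt version, which keeps session-level analytics in Helicone comparable across variants.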

Additional Examples

For more comprehensive examples, check out the GitHub repository:

Additional Resources