
Introduction

PostHog is a product analytics platform that helps you understand user behavior and product performance. With Helicone's PostHog integration, your LLM requests are automatically exported to PostHog as events, so you can analyze them alongside the rest of your product data.
1. Get your API keys

Sign up at helicone.ai and generate a Helicone API key.

Create a PostHog account if you don't have one, and get your Project API Key from your PostHog project settings. Set both keys as environment variables:

HELICONE_API_KEY=sk-helicone-...
POSTHOG_PROJECT_API_KEY=phc_...

# Optional: PostHog host (defaults to https://app.posthog.com)
# Only needed if using self-hosted PostHog
# POSTHOG_CLIENT_API_HOST=https://app.posthog.com
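Because a missing key silently breaks the export, it can help to fail fast at startup. The following `requireEnv` helper is an illustrative sketch (not part of Helicone or PostHog), shown here as one way to validate the variables above before constructing the client:

```typescript
// Sketch: fail fast when a required environment variable is unset.
// `requireEnv` is a hypothetical helper, not a Helicone or PostHog API.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage before building the OpenAI client:
// const heliconeKey = requireEnv("HELICONE_API_KEY");
// const posthogKey = requireEnv("POSTHOG_PROJECT_API_KEY");
```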
2. Install the OpenAI SDK
npm install openai
# or
yarn add openai
3. Configure the OpenAI client with the Helicone AI Gateway

import { OpenAI } from "openai";
import dotenv from "dotenv";

dotenv.config();

const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
  defaultHeaders: {
    "Helicone-Posthog-Key": process.env.POSTHOG_PROJECT_API_KEY,
    "Helicone-Posthog-Host": process.env.POSTHOG_CLIENT_API_HOST,
  },
});
4. Make requests as usual

Your existing OpenAI code continues to work without any changes. Events will automatically be exported to PostHog.
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello, world!" }],
  temperature: 0.7,
});

console.log(response.choices[0]?.message?.content);
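To segment these events in PostHog (by feature, plan, user cohort, and so on), you can attach Helicone custom properties to individual requests via `Helicone-Property-*` headers. The helper below is an illustrative sketch; the property names (`Feature`, `Plan`) are made up for the example:

```typescript
// Sketch: build per-request Helicone custom-property headers.
// Helicone forwards any "Helicone-Property-<Name>" header as request metadata.
function heliconeProperties(
  props: Record<string, string>
): Record<string, string> {
  const headers: Record<string, string> = {};
  for (const [key, value] of Object.entries(props)) {
    headers[`Helicone-Property-${key}`] = value;
  }
  return headers;
}

// Per-request headers via the OpenAI SDK's request options (second argument):
// const response = await client.chat.completions.create(
//   { model: "gpt-4o-mini", messages: [{ role: "user", content: "Hi" }] },
//   { headers: heliconeProperties({ Feature: "onboarding", Plan: "free" }) }
// );
```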
5. View your events in PostHog
  1. Go to your PostHog Events page
  2. Look for events with the helicone_request event name
  3. Each event contains metadata about the LLM request including:
    • Model used
    • Token counts
    • Latency
    • Cost
    • Request/response data
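Beyond the Events page, you can also fetch these events programmatically. The sketch below assumes PostHog's REST events endpoint (`/api/projects/{id}/events/`) authenticated with a personal API key (not the project key); verify both against PostHog's API documentation before relying on them:

```typescript
// Sketch: build a URL for querying helicone_request events from PostHog's
// events API. Endpoint path and auth scheme are assumptions — check
// PostHog's API docs.
function eventsUrl(host: string, projectId: string, eventName: string): string {
  const base = host.replace(/\/$/, ""); // strip a trailing slash, if any
  return `${base}/api/projects/${projectId}/events/?event=${encodeURIComponent(eventName)}`;
}

// Usage (POSTHOG_PERSONAL_API_KEY is a hypothetical variable name):
// const res = await fetch(
//   eventsUrl("https://app.posthog.com", "12345", "helicone_request"),
//   { headers: { Authorization: `Bearer ${process.env.POSTHOG_PERSONAL_API_KEY}` } }
// );
// const { results } = await res.json();
```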