Quickstart

Logging calls to custom models is currently supported via the Helicone NodeJS SDK.

1. To get started, install the `@helicone/helpers` package:

npm install @helicone/helpers
2. Set `HELICONE_API_KEY` as an environment variable:

export HELICONE_API_KEY=sk-<your-api-key>

You can also set the Helicone API key directly in your code (see the next step).
3. Create a new `HeliconeManualLogger` instance:

import { HeliconeManualLogger } from "@helicone/helpers";

const heliconeLogger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY, // Can also be set as an env variable
  headers: {}, // Additional headers to be sent with every logging request
});

4. Log your request:

const reqBody = {
  model: "text-embedding-ada-002",
  input: "The food was delicious and the waiter was very friendly.",
  encoding_format: "float",
};

const res = await heliconeLogger.logRequest(
  reqBody,
  async (resultRecorder) => {
    const r = await fetch("https://api.openai.com/v1/embeddings", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify(reqBody),
    });

    const resBody = await r.json();
    resultRecorder.appendResults(resBody); // record the response body for logging
    return resBody; // this value is returned by logRequest
  },
  {
    // Additional headers to be sent with the logging request
  }
);
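
Note that `logRequest` returns whatever your operation returns, so `res` above holds the parsed embeddings response, and the same request and response are logged to Helicone.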

API Reference

HeliconeManualLogger

class HeliconeManualLogger {
  constructor(opts: IHeliconeManualLogger);
}

type IHeliconeManualLogger = {
  apiKey: string;
  headers?: Record<string, string>;
  loggingEndpoint?: string; // defaults to https://api.hconeai.com/custom/v1/log
};

logRequest

logRequest<T>(
  request: HeliconeLogRequest,
  operation: (resultRecorder: HeliconeResultRecorder) => Promise<T>,
  additionalHeaders?: Record<string, string>
): Promise<T>

Parameters

  1. request: HeliconeLogRequest - The request object to log

type HeliconeLogRequest = ILogRequest | HeliconeCustomEventRequest; // ILogRequest is the request type for custom model logging

// The name and structure of the prompt field depend on the model you are using.
// E.g., for chat models it is named "messages"; for embedding models it is named "input".
// Hence, the only enforced property is `model`; you still need to add the prompt property
// appropriate for your model (see the chat-style sketch after this list).
// You may also add more properties (e.g., temperature, stop reason, etc.).
type ILogRequest = {
  model: string;
  [key: string]: any;
};
  2. operation: (resultRecorder: HeliconeResultRecorder) => Promise<T> - The operation to be executed and logged
class HeliconeResultRecorder {
  private results: Record<string, any> = {};

  appendResults(data: Record<string, any>): void {
    this.results = { ...this.results, ...data };
  }

  getResults(): Record<string, any> {
    return this.results;
  }
}
  3. additionalHeaders?: Record<string, string> - Optional additional headers to send with the logging request
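
For instance, a chat-style request simply mirrors the shape of your model's own API. Here is a minimal sketch; the model name, message content, and `callYourModel` helper are all hypothetical placeholders:

const chatReqBody = {
  model: "my-custom-chat-model", // hypothetical model name
  messages: [{ role: "user", content: "Hello!" }], // chat models use "messages" as the prompt field
  temperature: 0.7, // extra properties are passed through and logged as-is
};

const chatRes = await heliconeLogger.logRequest(chatReqBody, async (resultRecorder) => {
  const r = await callYourModel(chatReqBody); // placeholder for however you invoke your model
  resultRecorder.appendResults(r); // record the model's response for logging
  return r;
});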

Token Tracking

Helicone supports token tracking for custom model integrations. To enable it, include token usage information in your `providerResponse.json` payload in one of the supported formats below:

OpenAI-style Format

{
  "providerResponse": {
    "json": {
      "usage": {
        "prompt_tokens": 10,
        "completion_tokens": 20,
        "total_tokens": 30
      }
      // ... rest of your response
    }
  }
}

Anthropic-style Format

{
  "providerResponse": {
    "json": {
      "usage": {
        "input_tokens": 10,
        "output_tokens": 20
      }
      // ... rest of your response
    }
  }
}

Google-style Format

{
  "providerResponse": {
    "json": {
      "usageMetadata": {
        "promptTokenCount": 10,
        "candidatesTokenCount": 20,
        "totalTokenCount": 30
      }
      // ... rest of your response
    }
  }
}

Alternative Format

{
  "providerResponse": {
    "json": {
      "prompt_token_count": 10,
      "generation_token_count": 20
      // ... rest of your response
    }
  }
}

If your model returns token counts in a different format, you can transform the response to match one of these formats before logging to Helicone. If no token information is provided, Helicone will still log the request but token metrics will not be available.
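
For instance, if your model reports counts under its own key names, you can map them into the OpenAI-style shape before recording the result. A minimal sketch, assuming a hypothetical raw response with `tokens_in`/`tokens_out` fields:

const raw = await r.json(); // e.g. { output: "...", tokens_in: 10, tokens_out: 20 } (hypothetical)

const normalized = {
  ...raw,
  usage: {
    prompt_tokens: raw.tokens_in,
    completion_tokens: raw.tokens_out,
    total_tokens: raw.tokens_in + raw.tokens_out,
  },
};

resultRecorder.appendResults(normalized); // Helicone reads the OpenAI-style usage object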

Streaming Responses

For streaming responses, token counts should be included in the final message of the stream. If you’re experiencing issues with cost calculation in streaming responses, please refer to our streaming usage guide for additional configuration options.
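
One way to do this with `logRequest` is to accumulate the stream yourself and record the final usage once the stream completes. A rough sketch, where `modelEndpoint` and `parseStream` are placeholders for your own endpoint and stream parser:

const streamed = await heliconeLogger.logRequest(reqBody, async (resultRecorder) => {
  const r = await fetch(modelEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(reqBody),
  });

  let fullText = "";
  let usage; // populated from the stream's final message, if the provider sends one

  for await (const chunk of parseStream(r.body)) { // parseStream: your own SSE/chunk parser
    if (chunk.usage) usage = chunk.usage; // e.g. OpenAI-style usage in the last chunk
    else fullText += chunk.delta ?? "";
  }

  // Record the assembled response plus the final usage so token metrics are available
  resultRecorder.appendResults({ content: fullText, usage });
  return fullText;
});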