Quickstart

Logging calls to custom models is currently supported via the Helicone Node.js SDK.

import { HeliconeManualLogger } from "@helicone/helicone";

const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY
});

const reqBody = {
  model: "text-embedding-ada-002",
  input: "The food was delicious and the waiter was very friendly.",
  encoding_format: "float"
};

// Register the request with Helicone before calling the model provider
logger.registerRequest(reqBody);

const r = await fetch("https://api.openai.com/v1/embeddings", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
  },
  body: JSON.stringify(reqBody)
});

const res = await r.json();
console.log(res);

// Send the provider's response body to Helicone to complete the log
logger.sendLog(res);
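
The same two calls work for any custom model: register the request body before you call your own endpoint, then send the response once it returns. The sketch below is a minimal illustration, assuming a hypothetical self-hosted model served at http://localhost:8000/generate; the payload fields other than `model` are placeholders for whatever your endpoint expects.

// Minimal sketch of the same flow against a self-hosted model.
// The endpoint URL and request/response shapes below are hypothetical.
const customReqBody = {
  model: "my-custom-llm-v1", // any identifier you want to appear in Helicone
  prompt: "Summarize the following text: ...",
  max_tokens: 256
};

logger.registerRequest(customReqBody);

const customRes = await fetch("http://localhost:8000/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(customReqBody)
}).then((r) => r.json());

// If your model's raw output does not match the expected response shape,
// reshape it first (see "Sending Log" below).
logger.sendLog(customRes);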

Structure of Objects

Registering Request

A typical request will have the following structure:

type BaseRequest = {
  model: string,
  temperature?: number;
  max_tokens?: number;
  stream?: boolean;
}

// The name and structure of the prompt field depend on the model you are using.
// E.g., for chat models it is named "messages"; for embedding models it is named "input".
// Hence, the only enforced property is `model`; you still need to add the prompt property appropriate to your model.
// You may also add other properties (e.g., temperature, stop reason, etc.).
type ILogRequest = {
  model: string;
} & {
  [key: string]: any;
};
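
For example, registering a chat-style request could look like the following sketch (the payload is illustrative; the `ILogRequest` annotation assumes the type above is importable from the SDK and can simply be dropped otherwise):

const chatReqBody: ILogRequest = {
  model: "gpt-3.5-turbo",
  // For chat models the prompt property is "messages" rather than "input"
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "What is Helicone?" }
  ],
  temperature: 0.7
};

logger.registerRequest(chatReqBody);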

Sending Log

To submit logs, extract the response from the model and send it to Helicone. The response should follow the structure below:

type ILogResponse = {
  id: string;
  object: string;
  created: string;
  model: string;
  choices: Array<{
    index: number;
    finish_reason: string;
    message: {
      role: string;
      content: string;
    };
  }>;
  usage?: {
    prompt_tokens: number;
    completion_tokens: number;
    total_tokens: number;
  };
}
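
If your model's raw output does not already match this shape, map it before calling sendLog. The sketch below assumes a hypothetical custom model that returns a single generated string plus token counts under different names; the `ILogResponse` annotation can be dropped if the type is not importable:

// Hypothetical raw output from a custom model
const rawOutput = {
  generation: "Helicone provides observability for LLM applications.",
  input_tokens: 12,
  output_tokens: 9
};

const logResponse: ILogResponse = {
  id: `custom-${Date.now()}`, // any unique identifier for this completion
  object: "text_completion",
  created: new Date().toISOString(), // ILogResponse expects a string here
  model: "my-custom-llm-v1",
  choices: [
    {
      index: 0,
      finish_reason: "stop",
      message: { role: "assistant", content: rawOutput.generation }
    }
  ],
  usage: {
    prompt_tokens: rawOutput.input_tokens,
    completion_tokens: rawOutput.output_tokens,
    total_tokens: rawOutput.input_tokens + rawOutput.output_tokens
  }
};

logger.sendLog(logResponse);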