TypeScript Manual Logger

Logging calls to custom models is supported via the Helicone Node.js SDK.

1. To get started, install the `@helicone/helpers` package:

npm install @helicone/helpers
2. Set `HELICONE_API_KEY` as an environment variable:

export HELICONE_API_KEY=sk-<your-api-key>

You can also set the Helicone API key in your code (see below).
3. Create a new HeliconeManualLogger instance:

import { HeliconeManualLogger } from "@helicone/helpers";

const heliconeLogger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY, // Can also be set as an env variable
  headers: {}, // Additional headers to be sent with the request
});

4. Log your request:

const reqBody = {
  model: "text-embedding-ada-002",
  input: "The food was delicious and the waiter was very friendly.",
  encoding_format: "float",
};

const res = await heliconeLogger.logRequest(
  reqBody,
  async (resultRecorder) => {
    const r = await fetch("https://api.openai.com/v1/embeddings", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify(reqBody),
    });
    const resBody = await r.json();
    resultRecorder.appendResults(resBody);
    return resBody; // this will be returned by the logRequest function
  },
  {
    // Additional headers to be sent with the request
  }
);

API Reference

HeliconeManualLogger

class HeliconeManualLogger {
  constructor(opts: IHeliconeManualLogger);
}

type IHeliconeManualLogger = {
  apiKey: string;
  headers?: Record<string, string>;
  loggingEndpoint?: string; // defaults to https://api.hconeai.com/custom/v1/log
};
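
For illustration, a logger configured with the optional fields might look like the following sketch (the header value and endpoint override are placeholders, not required settings):

import { HeliconeManualLogger } from "@helicone/helpers";

const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY!,
  // Sent with every logging request made by this instance
  headers: { "Helicone-User-Id": "user-123" },
  // loggingEndpoint can be overridden, e.g. for a self-hosted deployment;
  // otherwise it defaults to the endpoint noted above.
});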

logRequest

logRequest<T>(
    request: HeliconeLogRequest,
    operation: (resultRecorder: HeliconeResultRecorder) => Promise<T>,
    additionalHeaders?: Record<string, string>
  ): Promise<T>

Parameters

  1. request: HeliconeLogRequest - The request object to log
type HeliconeLogRequest = ILogRequest | HeliconeCustomEventRequest; // ILogRequest is the type for the request object for custom model logging

// The name and structure of the prompt field depends on the model you are using.
// E.g., for chat models it is named "messages"; for embedding models it is named "input".
// Hence, the only enforced field is `model`; you still need to add the appropriate prompt
// property for your model (a chat-model example follows the parameter list below).
// You may also add more properties (e.g. temperature, stop reason, etc.)
type ILogRequest = {
  model: string;
  [key: string]: any;
};
  2. operation: (resultRecorder: HeliconeResultRecorder) => Promise<T> - The operation to be executed and logged
class HeliconeResultRecorder {
  private results: Record<string, any> = {};

  appendResults(data: Record<string, any>): void {
    this.results = { ...this.results, ...data };
  }

  getResults(): Record<string, any> {
    return this.results;
  }
}
  3. additionalHeaders?: Record<string, string> - Optional additional headers to include with the log request
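
As noted above, the prompt field name depends on the model. A request object for a chat model, for example, might look like this sketch (the model name and extra properties are purely illustrative):

const chatRequest: ILogRequest = {
  model: "gpt-4o-mini", // the only required field
  messages: [{ role: "user", content: "Hello!" }], // chat models use "messages" instead of "input"
  temperature: 0.7, // additional properties are allowed
};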

Available Methods

The HeliconeManualLogger class provides several methods for logging different types of requests and responses. Here’s a comprehensive overview of each method:

logRequest

Used for logging non-streaming requests and responses with full control over the operation.

logRequest<T>(
  request: HeliconeLogRequest,
  operation: (resultRecorder: HeliconeResultRecorder) => Promise<T>,
  additionalHeaders?: Record<string, string>
): Promise<T>

Parameters:

  • request: The request object to log
  • operation: A function that performs the actual API call and records the results
  • additionalHeaders: Optional additional headers to include with the log request

Example:

const result = await helicone.logRequest(
  requestBody,
  async (resultRecorder) => {
    const response = await llmProvider.createCompletion(requestBody);
    resultRecorder.appendResults(response);
    return response;
  },
  { "Helicone-User-Id": userId }
);

logStream

Used for logging streaming operations with full control over stream handling.

logStream<T>(
  request: HeliconeLogRequest,
  operation: (resultRecorder: HeliconeStreamResultRecorder) => Promise<T>,
  additionalHeaders?: Record<string, string>
): Promise<T>

Parameters:

  • request: The request object to log
  • operation: A function that performs the streaming API call and attaches the stream to the recorder
  • additionalHeaders: Optional additional headers to include with the log request

Example:

const stream = await helicone.logStream(
  requestBody,
  async (resultRecorder) => {
    const response = await llmProvider.createChatCompletion({
      stream: true,
      ...requestBody,
    });
    const [stream1, stream2] = response.tee();
    resultRecorder.attachStream(stream2.toReadableStream());
    return stream1;
  },
  { "Helicone-User-Id": userId }
);

logSingleStream

A simplified method for logging a single ReadableStream without needing to manage the operation.

logSingleStream(
  request: HeliconeLogRequest,
  stream: ReadableStream,
  additionalHeaders?: Record<string, string>
): Promise<void>

Parameters:

  • request: The request object to log
  • stream: The ReadableStream to consume and log
  • additionalHeaders: Optional additional headers to include with the log request

Example:

const response = await llmProvider.createChatCompletion({
  stream: true,
  ...requestBody,
});
const stream = response.toReadableStream();
const [streamForUser, streamForLogging] = stream.tee();

helicone.logSingleStream(requestBody, streamForLogging, {
  "Helicone-User-Id": userId,
});

return streamForUser;

logSingleRequest

Used for logging a single request with a response body without needing to manage the operation.

logSingleRequest(
  request: HeliconeLogRequest,
  body: string,
  additionalHeaders?: Record<string, string>
): Promise<void>

Parameters:

  • request: The request object to log
  • body: The response body as a string
  • additionalHeaders: Optional additional headers to include with the log request

Example:

const response = await llmProvider.createCompletion(requestBody);
await helicone.logSingleRequest(requestBody, JSON.stringify(response), {
  "Helicone-User-Id": userId,
});

Streaming Examples

Using the Async Stream Parser

Helicone provides an asynchronous stream parser for efficient handling of streamed responses. This is particularly useful when working with custom integrations that support streaming.

Here’s an example of how to use the async stream parser with a custom integration:

import { HeliconeManualLogger } from "@helicone/helpers";

// Initialize the Helicone logger
const heliconeLogger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY!,
  headers: {}, // You can add custom headers here
});

// Your custom model API call that returns a stream
// (customModelAPI, prompt, and requestBody below are placeholders for your own code)
const response = await customModelAPI.generateStream(prompt);

// If your API supports splitting the stream
const [stream1, stream2] = response.tee();

// Log the stream to Helicone using the async stream parser
heliconeLogger.logStream(requestBody, async (resultRecorder) => {
  resultRecorder.attachStream(stream1);
});

// Process the stream for your application
for await (const chunk of stream2) {
  console.log(chunk);
}

The async stream parser offers several benefits:

  • Processes stream chunks asynchronously for better performance
  • Reduces latency when handling large streamed responses
  • Provides more reliable token counting for streamed content

Using Vercel’s after Function with Streaming

When building applications with Next.js App Router on Vercel, you can use the `after` function to log streaming responses without blocking the client response:

import { HeliconeManualLogger } from "@helicone/helpers";
import { after } from "next/server";
import Together from "together-ai";

export async function POST(request: Request) {
  const { prompt } = await request.json();

  const together = new Together({ apiKey: process.env.TOGETHER_API_KEY });
  const helicone = new HeliconeManualLogger({
    apiKey: process.env.HELICONE_API_KEY!,
  });

  // Example with non-streaming response
  const nonStreamingBody = {
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages: [{ role: "user", content: prompt }],
    stream: false,
  };

  const completion = await together.chat.completions.create(nonStreamingBody);

  // Log non-streaming response after sending the response to the client
  after(
    helicone.logSingleRequest(nonStreamingBody, JSON.stringify(completion))
  );

  // Example with streaming response
  const streamingBody = {
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages: [{ role: "user", content: prompt }],
    stream: true,
  };

  const response = await together.chat.completions.create(streamingBody);
  const [stream1, stream2] = response.tee();

  // Log streaming response after sending the response to the client
  after(helicone.logSingleStream(streamingBody, stream2.toReadableStream()));

  return new Response(stream1.toReadableStream());
}

For a comprehensive guide on using the Manual Logger with streaming functionality, check out our Manual Logger with Streaming cookbook.