Using the Helicone SDK (recommended)

Walkthrough: logging chat completions

1. To get started, install the `@helicone/helpers` package:

npm install @helicone/helpers
2. Set up the logger:

import { HeliconeManualLogger } from "@helicone/helpers";

// The logger needs a Helicone API key; here it is read from the environment
const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY
});
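
One common way to supply that key during local development is a `.env` file loaded with the `dotenv` package. This is only an illustration and assumes `dotenv` is installed; it is not something the SDK requires:

// .env (kept out of version control):
//   HELICONE_API_KEY=<your Helicone API key>

// Load the variable into process.env before constructing the logger
import "dotenv/config";
import { HeliconeManualLogger } from "@helicone/helpers";

const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY
});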
3. Call your LLM and log the request:

const reqBody = {
  model: "phi3:mini",
  messages: [{
    role: "user",
    content: "Why is the sky blue?"
  }],
  stream: false
};

const res = await logger.logRequest(reqBody, async (resultRecorder) => {
  // Call Ollama's local chat endpoint
  const r = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(reqBody)
  });

  const resBody = await r.json();
  // Record the response body so Helicone logs it alongside the request
  resultRecorder.appendResult(resBody);
  return resBody;
});
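
Because the callback returns `resBody`, `res` above should hold Ollama's chat response body. With `stream: false`, the assistant's reply should be available under `message.content` (an assumption based on Ollama's standard non-streaming chat response shape):

// Print the assistant's reply from the logged, non-streaming chat response
console.log(res.message.content);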
4. Go to the Helicone Requests page and see your request!

Example: logging a completion request

The example above uses the `phi3:mini` model with Ollama's chat interface (`/api/chat`). The same can be done with a regular completion request to `/api/generate`:

// ...
const reqBody = {
  model: "llama3.1",
  prompt: "Why is the sky blue?",
  stream: false,
};

const res = await logger.logRequest(reqBody, async (resultRecorder) => {
  // Call Ollama's local completion endpoint
  const r = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(reqBody)
  });

  const resBody = await r.json();
  resultRecorder.appendResult(resBody);
  return resBody;
});
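
As before, `res` holds the body returned by the callback. For the non-streaming `/api/generate` endpoint, the generated text should be in the `response` field (again an assumption based on Ollama's standard response shape):

// Print the generated text from the non-streaming completion response
console.log(res.response);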

Resources