This integration method is maintained but no longer actively developed. For the best experience and latest features, use our new AI Gateway with unified API access to 100+ models.
Using the Helicone SDK (recommended)
Walkthrough: logging chat completions
To get started, install the `@helicone/helpers` package:
```bash
npm install @helicone/helpers
```
Set up the logger
```typescript
import { HeliconeManualLogger } from "@helicone/helpers";

const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY,
});
```
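If you want extra metadata attached to every logged request, the logger can also be constructed with additional Helicone headers. A minimal sketch, assuming the `headers` constructor option and `Helicone-Property-*` custom-property headers work here as they do elsewhere in the Helicone docs:

```typescript
import { HeliconeManualLogger } from "@helicone/helpers";

// Assumption: the `headers` option forwards these Helicone-Property-* headers
// with every logged request; verify against the @helicone/helpers package.
const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY,
  headers: {
    "Helicone-Property-Environment": "dev",
  },
});
```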
Call your LLM and log the request
```typescript
const reqBody = {
  model: "phi3:mini",
  messages: [
    {
      role: "user",
      content: "Why is the sky blue?",
    },
  ],
  stream: false,
};

// logRequest records the request body, runs the operation, and sends the
// result captured by resultRecorder to Helicone.
const res = await logger.logRequest(reqBody, async (resultRecorder) => {
  const r = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(reqBody),
  });

  const resBody = await r.json();
  resultRecorder.appendResult(resBody);
  return resBody;
});
```
Go to the Helicone Requests page and see your request!
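In a real application you will likely wrap this pattern in a small helper so every Ollama call is logged the same way. Here is a minimal sketch of that idea; the `askOllama` helper and `OLLAMA_CHAT_URL` constant are illustrative, not part of the SDK:

```typescript
import { HeliconeManualLogger } from "@helicone/helpers";

// Hypothetical helper: funnels every chat call through logger.logRequest so
// it shows up on the Helicone Requests page.
const OLLAMA_CHAT_URL = "http://localhost:11434/api/chat";

async function askOllama(logger: HeliconeManualLogger, prompt: string) {
  const reqBody = {
    model: "phi3:mini",
    messages: [{ role: "user", content: prompt }],
    stream: false,
  };

  return logger.logRequest(reqBody, async (resultRecorder) => {
    const r = await fetch(OLLAMA_CHAT_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(reqBody),
    });
    const resBody = await r.json();
    resultRecorder.appendResult(resBody);
    return resBody;
  });
}

// Usage:
// const answer = await askOllama(logger, "Why is the sky blue?");
```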
Example: logging a completion request
The example above uses the `phi3:mini` model and the chat completion interface. The same can be done with a regular completion request:
```typescript
// ...

const reqBody = {
  model: "llama3.1",
  prompt: "Why is the sky blue?",
  stream: false,
};

const res = await logger.logRequest(reqBody, async (resultRecorder) => {
  const r = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
    },
    body: JSON.stringify(reqBody),
  });

  const resBody = await r.json();
  resultRecorder.appendResult(resBody);
  return resBody;
});
```
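The value resolved by `logRequest` is whatever the callback returns, so the model's output can be used directly. A short usage sketch, assuming Ollama's non-streaming response shapes (`response` for `/api/generate`, `message.content` for `/api/chat`):

```typescript
// `res` is the parsed Ollama response returned from the logRequest callback.
console.log(res.response); // /api/generate puts the generated text here
```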
Resources