Ollama JavaScript Integration
Use Helicone’s JavaScript SDK to log your Ollama usage.
Using the Helicone SDK (recommended)
Walkthrough: logging chat completions
1. To get started, install the `@helicone/helicone` package:
npm install @helicone/helicone
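If you use Yarn or pnpm instead, the equivalent install commands are:

yarn add @helicone/helicone
pnpm add @helicone/helicone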
2. Set up the logger:
import { HeliconeManualLogger } from "@helicone/helicone";

// Create a manual logger with your Helicone API key
const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY,
});
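The snippet above reads the API key from `process.env`. If your runtime does not load `.env` files automatically, one common option (an assumption about your setup, not a requirement of the SDK) is the `dotenv` package:

// Load .env into process.env before constructing the logger
import "dotenv/config";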
3. Register the request and call your LLM:
const reqBody = {
  model: "phi3:mini",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
  stream: false,
};

// Register the request with Helicone before calling the model
logger.registerRequest(reqBody);

const r = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(reqBody),
});
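Ollama must be running locally (default port 11434) and the model must already be pulled for this call to succeed. A minimal guard before decoding, using plain fetch error handling rather than anything Helicone-specific:

// Fail fast on non-2xx responses instead of parsing them as JSON
if (!r.ok) {
  throw new Error(`Ollama request failed: ${r.status} ${r.statusText}`);
}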
4. Decode the response and send the log:
// Parse the JSON body and send it to Helicone
const res = await r.json();
console.log(res);
logger.sendLog(res);
5. Go to the Helicone Requests page and see your request!
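Putting the steps together, a complete script might look like the following sketch (assuming an ES module context with top-level await, Ollama serving `phi3:mini` on its default port, and `HELICONE_API_KEY` set in the environment):

import { HeliconeManualLogger } from "@helicone/helicone";

const logger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY,
});

const reqBody = {
  model: "phi3:mini",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
  stream: false,
};

// Register the request, call Ollama, then log the decoded response
logger.registerRequest(reqBody);
const r = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(reqBody),
});
const res = await r.json();
logger.sendLog(res);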
Example: logging a completion request
The example above uses the `phi3:mini` model and the Chat Completions interface. The same can be done with a regular completion request:
// ...
const reqBody = {
  model: "llama3.1",
  prompt: "Why is the sky blue?",
  stream: false,
};

logger.registerRequest(reqBody);

const r = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify(reqBody),
});

// Decode the response before logging it
const res = await r.json();
logger.sendLog(res);
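To avoid repeating the register → fetch → log sequence for every call, you could wrap it in a small helper that reuses the `logger` from step 2. This is a sketch, not part of the SDK; `loggedOllamaCall` is a hypothetical name:

// Hypothetical helper: registers the request, calls Ollama, logs the response
async function loggedOllamaCall(path, reqBody) {
  logger.registerRequest(reqBody);
  const r = await fetch(`http://localhost:11434${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(reqBody),
  });
  const res = await r.json();
  logger.sendLog(res);
  return res;
}

// Usage with the completion endpoint from the example above
const answer = await loggedOllamaCall("/api/generate", {
  model: "llama3.1",
  prompt: "Why is the sky blue?",
  stream: false,
});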
Resources
- See Logging Custom Models to learn how to log any model.
- Ollama API Reference