Integrations
Custom Model (Beta)
We recently added support for custom models such as Llama or GPT-Neo, allowing you to use your own models with Helicone. This feature is currently in beta, so please let us know if you run into any issues.
NodeJS
Add HELICONE_API_KEY to your environment variables.
export HELICONE_API_KEY=sk-<your-api-key>
# You can also set it in your code (see below)
import {
HeliconeAsyncConfiguration,
HeliconeLogBuilder,
HeliconeLogger,
ResponseBody,
} from "helicone";
const heliconeApiKey = process.env.HELICONE_API_KEY;
const config = new HeliconeAsyncConfiguration({
heliconeMeta: {
apiKey: heliconeApiKey,
},
});
const logger = new HeliconeLogger(config);
const llmArgs = {
model: "llama-2",
prompt: "Say hi!",
};
const builder = new HeliconeLogBuilder(llmArgs);
// Call your model however you normally would, e.g.:
// const result = await callToLLM(llmArgs);
const result: ResponseBody = {
text: "This is my response",
usage: {
total_tokens: 13,
prompt_tokens: 5,
completion_tokens: 8,
},
};
builder.addResponse(result);
builder.addUser("test-user");
const response = await logger.submit(builder);
if (response.status !== 200) {
throw new Error(response.data);
}
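When adapting your own model's output to the ResponseBody shape used above, keep the usage counts consistent: total_tokens should equal prompt_tokens plus completion_tokens. A minimal sketch of such an adapter (the toResponseBody helper is hypothetical, not part of the Helicone SDK):

```typescript
// The response shape passed to HeliconeLogBuilder.addResponse, as shown above.
interface ResponseBody {
  text: string;
  usage: {
    total_tokens: number;
    prompt_tokens: number;
    completion_tokens: number;
  };
}

// Hypothetical adapter: turns a raw model reply plus token counts into a
// ResponseBody, deriving total_tokens so the counts can never disagree.
function toResponseBody(
  text: string,
  promptTokens: number,
  completionTokens: number
): ResponseBody {
  return {
    text,
    usage: {
      total_tokens: promptTokens + completionTokens,
      prompt_tokens: promptTokens,
      completion_tokens: completionTokens,
    },
  };
}
```

For example, toResponseBody("This is my response", 5, 8) produces the same object as the result literal in the snippet above.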