Generic Gateway
Custom Model Integration
Integrate any custom LLM, including open-source models like Llama and GPT-Neo, with Helicone. This step-by-step guide shows how to connect your proprietary or open-source models using the Helicone NodeJS SDK.
Quickstart
Logging calls to custom models is currently supported via the Helicone NodeJS SDK.
1. Install the `@helicone/helpers` package (for example, with `npm install @helicone/helpers`).
2. Set `HELICONE_API_KEY` as an environment variable. You can also set the Helicone API key in your code (see below).
3. Create a new `HeliconeManualLogger` instance.
4. Log your request with the logger's `logRequest` method (steps 3 and 4 are shown in the sketch below).
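Below is a minimal sketch of steps 3 and 4. It assumes the `HeliconeManualLogger` constructor accepts an options object with an `apiKey` field, that the `HeliconeResultRecorder` passed to the operation exposes an `appendResults` method, and that `logRequest` resolves with the operation's return value; the model name, endpoint URL, and request body are illustrative placeholders.

```typescript
import { HeliconeManualLogger } from "@helicone/helpers";

// Step 3: create the logger.
// Assumption: the constructor accepts an options object with an `apiKey` field;
// the key is read here from the HELICONE_API_KEY environment variable.
const heliconeLogger = new HeliconeManualLogger({
  apiKey: process.env.HELICONE_API_KEY!,
});

// An illustrative request body for a self-hosted, open-source model.
const requestBody = {
  model: "llama-3-8b-instruct", // placeholder model name
  prompt: "Explain the difference between supervised and unsupervised learning.",
};

async function main() {
  // Step 4: log the request. The operation callback performs the actual model call
  // and records the result through the HeliconeResultRecorder it receives.
  const response = await heliconeLogger.logRequest(
    requestBody,
    async (resultRecorder) => {
      // Call your own model endpoint (placeholder URL).
      const res = await fetch("http://localhost:8000/v1/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(requestBody),
      });
      const body = (await res.json()) as Record<string, any>;

      // Assumption: the recorder exposes an appendResults method for the response payload.
      resultRecorder.appendResults(body);

      // Assumption: logRequest resolves with the value returned by the operation.
      return body;
    },
    {} // additionalHeaders; see the API Reference below
  );

  console.log("Model response:", response);
}

main();
```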
API Reference
HeliconeManualLogger
logRequest
Parameters
- `request` (`HeliconeLogRequest`): the request object to log
- `operation` (`(resultRecorder: HeliconeResultRecorder) => Promise<T>`): the operation to be executed and logged
- `additionalHeaders` (`Record<string, string>`): additional headers to send with the request; useful for features such as session management and custom properties
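As an example of `additionalHeaders`, the sketch below reuses `heliconeLogger` and `requestBody` from the quickstart and attaches session and custom-property headers. The header names assume Helicone's usual `Helicone-Session-Id` and `Helicone-Property-*` conventions, and all values are placeholders.

```typescript
// Reuses heliconeLogger and requestBody from the quickstart sketch above.
const result = await heliconeLogger.logRequest(
  requestBody,
  async (resultRecorder) => {
    const res = await fetch("http://localhost:8000/v1/completions", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(requestBody),
    });
    const body = (await res.json()) as Record<string, any>;
    resultRecorder.appendResults(body); // assumed recorder method, as above
    return body;
  },
  {
    // Assumed Helicone feature headers; values are placeholders.
    "Helicone-Session-Id": "session-1234",      // group related requests into one session
    "Helicone-Property-Environment": "staging", // arbitrary custom property
  }
);

console.log(result);
```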