Overview
Adding a new provider to Helicone involves several key components:
- Authors: Companies that create the models (e.g., OpenAI, Anthropic)
- Models: Individual model definitions with pricing and metadata
- Providers: Inference providers that host models (e.g., OpenAI, Vertex AI, DeepInfra, Bedrock)
- Endpoints: Model-provider combinations with deployment configurations
Prerequisites
- OpenAI-compatible API (recommended for simplest integration)
- Access to provider’s pricing and inference documentation
- Model specifications (context length, supported features)
- API authentication details
Step 1: Understanding the File Structure
All model support configurations are located in the packages/cost/models directory:
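The rough layout, assembled from the files referenced throughout this guide (the real tree may contain additional files):

```
packages/cost/models/
├── providers/            # provider definitions, one file per provider, plus index.ts
├── authors/              # model creators (authors), one directory per author
├── registry.ts           # aggregated model registry
└── registry-types.ts     # registry type definitions
```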
Step 2: Create Provider Definition
We will use DeepInfra as our example.
For OpenAI-Compatible Providers
Create a new file in packages/cost/models/providers/[provider-name].ts:
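A minimal sketch of what that file could contain for an OpenAI-compatible provider; the import path and field names here are illustrative, so mirror an existing provider file for the exact BaseProvider interface:

```typescript
// packages/cost/models/providers/deepinfra.ts -- field names are illustrative.
import { BaseProvider } from "./base"; // adjust to the real import path

export class DeepInfraProvider extends BaseProvider {
  readonly displayName = "DeepInfra";
  // DeepInfra exposes an OpenAI-compatible endpoint; verify the base URL in their docs.
  readonly baseUrl = "https://api.deepinfra.com/v1/openai";
  // "api-key" tells BaseProvider to send Authorization: Bearer ${apiKey}.
  readonly auth = "api-key";
  readonly pricingPages = ["https://deepinfra.com/pricing"];
  readonly modelPages = ["https://deepinfra.com/models"];
}
```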
The BaseProvider class handles the standard Bearer ${apiKey} authentication pattern automatically when you set auth = "api-key", which is the common pattern for OpenAI-compatible APIs.
For Non-OpenAI Compatible Providers
For non-OpenAI-compatible providers, you’ll need to override additional methods. You can find the available options by reviewing the BaseProvider definition.
Step 3: Add Provider to Index
Update packages/cost/models/providers/index.ts:
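A sketch of the registration, assuming the index exports a map of provider instances (copy whatever shape the existing entries use):

```typescript
// packages/cost/models/providers/index.ts -- shape is illustrative.
import { DeepInfraProvider } from "./deepinfra";

export const providers = {
  // ...existing providers
  deepinfra: new DeepInfraProvider(),
} as const;
```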
Step 4: Add Provider to the Web’s Data
Update web/data/providers.ts to include the new provider:
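A hedged sketch of the kind of entry this file holds; the field names are illustrative, so copy the shape of the existing entries:

```typescript
// web/data/providers.ts -- field names are illustrative only.
export const providers = [
  // ...existing providers
  {
    id: "deepinfra",
    name: "DeepInfra",
    docsUrl: "https://deepinfra.com/docs",
  },
];
```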
Step 5: Define Authors (Model Creators)
Create author definitions in packages/cost/models/authors/[author-name]/:
Folder Structure
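Based on the files described in the rest of this step and in Step 6, the author directory looks roughly like this (using mistral-nemo as the example model family):

```
packages/cost/models/authors/[author]/
├── index.ts          # author registry (Step 6)
├── metadata.ts       # author metadata (Step 6)
└── mistral-nemo/
    ├── models.ts     # model definitions
    └── endpoints.ts  # model-provider endpoint combinations
```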
models.ts
Include the model within the models object. This object can contain all model versions within that model family; in this case, the mistral-nemo model family.
Make sure to research each value, and include the tokenizer in the Tokenizer interface type if it is not there already.
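A hedged sketch of a models.ts entry; the property names and numbers are placeholders, so copy an existing author’s models.ts for the exact schema and verify every value against the model’s documentation:

```typescript
// packages/cost/models/authors/[author]/mistral-nemo/models.ts
// Property names and numbers below are placeholders -- mirror an existing models.ts.
export const models = {
  "mistral-nemo": {
    name: "Mistral Nemo",
    contextLength: 128_000,
    maxOutputTokens: 4_096,
    tokenizer: "Mistral",
    description: "Mistral Nemo instruct model.",
  },
} as const;
```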
endpoints.ts
Now, update the packages/cost/models/authors/[author]/[model-family]/endpoints.ts file with model-provider endpoint combinations.
Make sure to review the provider’s own pricing page, since the inference cost changes per provider.
Make sure the endpoint key, "mistral-nemo:deepinfra", is human-readable and friendly. It’s what users will call!
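A hedged sketch of an endpoints.ts entry; the property names are illustrative (see the pricing sketch under Pricing Configuration below for the cost fields):

```typescript
// packages/cost/models/authors/[author]/mistral-nemo/endpoints.ts -- illustrative shape.
export const endpoints = {
  "mistral-nemo:deepinfra": {
    provider: "deepinfra",
    // The provider-side model identifier; check DeepInfra's model page for the exact ID.
    providerModelId: "mistralai/Mistral-Nemo-Instruct-2407",
    pricing: {
      // see "Pricing Configuration" below
    },
    contextLength: 128_000,
  },
} as const;
```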
Some providers have multiple deployment regions.
Pricing Configuration
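As a hedged illustration, per-token pricing generally looks something like this; the numbers are placeholders and the field names should be taken from an existing endpoints.ts (providers usually quote prices in USD per million tokens):

```typescript
// Illustrative pricing block for an endpoint entry -- placeholder numbers and names.
const pricing = {
  prompt: 0.02 / 1_000_000,     // USD per prompt token ($0.02 per 1M, placeholder)
  completion: 0.04 / 1_000_000, // USD per completion token ($0.04 per 1M, placeholder)
};
```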
Step 6: Add model to Author registries (if needed)
If the model family hasn’t been created, you will need to add it within the AI Gateway’s registry.
index.ts
Update packages/cost/models/authors/[author]/index.ts to include the new model family.
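A hedged sketch, assuming the author index aggregates each model family’s models and endpoints (copy the pattern from an existing author):

```typescript
// packages/cost/models/authors/[author]/index.ts -- shape is illustrative.
import { models as mistralNemoModels } from "./mistral-nemo/models";
import { endpoints as mistralNemoEndpoints } from "./mistral-nemo/endpoints";

export const authorModels = {
  ...mistralNemoModels,
} as const;

export const authorEndpoints = {
  ...mistralNemoEndpoints,
} as const;
```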
You don’t need to update anything if the model family has already been created.
metadata.ts
Update packages/cost/models/authors/[author]/metadata.ts to fetch the models.
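A minimal sketch, assuming the metadata file exposes author-level information alongside the model list; every field name here is illustrative:

```typescript
// packages/cost/models/authors/[author]/metadata.ts -- illustrative fields only.
import { authorModels } from "./index";

export const metadata = {
  author: "mistralai",
  models: Object.keys(authorModels),
};
```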
You don’t need to update anything if the author has already been created.
Step 7: Update the Model Registry & its Types
Add your new model to packages/cost/models/registry-types.ts:
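The registry types presumably include unions of known model and provider names; a hedged sketch of the kind of change involved (the existing union members shown are illustrative):

```typescript
// packages/cost/models/registry-types.ts -- extend the existing unions.
export type ModelName =
  | "existing-model-a"
  | "existing-model-b"
  | "mistral-nemo"; // new model

export type ProviderName =
  | "openai"
  | "anthropic"
  | "deepinfra"; // new provider, if not already present
```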
Then update packages/cost/models/registry.ts:
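A hedged sketch, assuming the registry aggregates each author’s models and endpoints; the mistralai path is only an example, so follow the existing entries for the real shape:

```typescript
// packages/cost/models/registry.ts -- illustrative; follow the existing pattern.
import { authorModels, authorEndpoints } from "./authors/mistralai";

export const registry = {
  models: {
    // ...existing models
    ...authorModels,
  },
  endpoints: {
    // ...existing endpoints
    ...authorEndpoints,
  },
};
```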
Step 8: Create Tests
Create test files in worker/tests/ai-gateway/ for the author.
You can use the tests there as a base example. Make sure to include all edge cases and error scenarios.
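A hedged Jest-style sketch of what such a test might look like; the gatewayRequest helper and the file name are hypothetical, so base your tests on the existing files in that directory:

```typescript
// worker/tests/ai-gateway/mistralai.test.ts -- helper names are hypothetical.
import { describe, expect, it } from "@jest/globals";
import { gatewayRequest } from "./helpers"; // hypothetical test helper

describe("mistral-nemo via DeepInfra", () => {
  it("routes chat completions through the gateway", async () => {
    const res = await gatewayRequest({
      model: "mistral-nemo:deepinfra",
      messages: [{ role: "user", content: "Hello" }],
    });
    expect(res.status).toBe(200);
  });
});
```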
We map out the provider errors so we don’t pass them to the user directly. You’ll find the correct error codes in the createErrorResponse function in worker/src/lib/ai-gateway/SimpleAIGateway.ts.
Step 9: Snapshots
Make sure to rerun snapshots before deploying by running this command in your console:
cd <your-path-to-the-repo>/helicone/helicone/packages && npx jest --updateSnapshot __tests__/cost/registrySnapshots.test.ts
Common Issues & Solutions
Issue: Complex Authentication
Solution: Override the auth() method with custom logic:
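A hedged sketch of a custom auth() override; the method signature shown is an assumption, so check BaseProvider for the real one (other required provider fields from Step 2 are omitted here):

```typescript
import { BaseProvider } from "./base"; // adjust to the real import path

export class CustomAuthProvider extends BaseProvider {
  // Hypothetical signature -- e.g. a provider that wants a custom header
  // instead of a Bearer token.
  override auth(apiKey: string): Record<string, string> {
    return {
      "x-api-key": apiKey,
      "x-api-version": "2024-01-01",
    };
  }
}
```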
Issue: Non-Standard Request Format
Solution: Override the buildBody() method:
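A sketch assuming buildBody() receives an OpenAI-style request body and returns the provider’s expected shape; the signature is an assumption, so verify it against BaseProvider:

```typescript
import { BaseProvider } from "./base"; // adjust to the real import path

export class CustomBodyProvider extends BaseProvider {
  // Hypothetical signature -- e.g. a provider that expects `prompt` instead of `messages`.
  override buildBody(body: Record<string, unknown>): Record<string, unknown> {
    const messages = (body.messages as { content: string }[] | undefined) ?? [];
    return {
      prompt: messages.map((m) => m.content).join("\n"),
      max_tokens: body.max_tokens,
    };
  }
}
```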
Issue: Multiple Pricing Tiers
Solution: Use threshold-based pricing:
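A hedged illustration of threshold-based pricing, where the per-token rate changes once the prompt crosses a size threshold; the field names and numbers are placeholders, so check how existing endpoints model tiered pricing:

```typescript
// Illustrative tiered pricing -- placeholder numbers and field names.
const pricing = {
  tiers: [
    { upToPromptTokens: 128_000, prompt: 0.3 / 1_000_000, completion: 0.6 / 1_000_000 },
    { upToPromptTokens: Infinity, prompt: 0.6 / 1_000_000, completion: 1.2 / 1_000_000 },
  ],
};
```

Deployment Checklist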
- Provider class created with correct authentication
- Models defined with accurate specifications
- Endpoints configured with correct pricing
- Registry types updated
- Tests written and passing
- Snapshots updated
- Documentation updated
- Pass-through billing tested (if applicable)
- Fallback behavior verified