GET /v1/public/model-registry/models

Returns a comprehensive list of all AI models with their configurations, pricing, and capabilities.
curl --request GET \
  --url https://api.helicone.ai/v1/public/model-registry/models
{
  "models": [
    {
      "id": "claude-opus-4-1",
      "name": "Anthropic: Claude Opus 4.1",
      "author": "anthropic",
      "contextLength": 200000,
      "endpoints": [
        {
          "provider": "anthropic",
          "providerSlug": "anthropic",
          "supportsPtb": true,
          "pricing": {
            "prompt": 15,
            "completion": 75,
            "cacheRead": 1.5,
            "cacheWrite": 18.75
          }
        }
      ],
      "maxOutput": 32000,
      "trainingDate": "2025-08-05",
      "description": "Most capable Claude model with extended context",
      "inputModalities": [
        null
      ],
      "outputModalities": [
        null
      ],
      "supportedParameters": [
        null,
        null,
        null,
        null,
        null,
        null,
        null
      ]
    }
  ],
  "total": 150,
  "filters": {
    "providers": [
      {
        "name": "anthropic",
        "displayName": "Anthropic"
      },
      {
        "name": "openai",
        "displayName": "OpenAI"
      },
      {
        "name": "google",
        "displayName": "Google"
      }
    ],
    "authors": [
      "anthropic",
      "openai",
      "google",
      "meta"
    ],
    "capabilities": [
      "audio",
      "image",
      "thinking",
      "caching",
      "reasoning"
    ]
  }
}
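The `pricing` block in each endpoint entry can be used to estimate per-request cost. The sketch below assumes the pricing values are USD per 1M tokens (a common convention, but not stated on this page), and the `estimate_cost` helper is illustrative, not part of any Helicone SDK:

```python
# Hedged sketch: estimate request cost from an endpoint's pricing block.
# Assumption (not confirmed by this page): pricing values are USD per 1M tokens.

def estimate_cost(pricing: dict, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one request under the per-1M-token assumption."""
    prompt_cost = prompt_tokens * pricing["prompt"] / 1_000_000
    completion_cost = completion_tokens * pricing["completion"] / 1_000_000
    return prompt_cost + completion_cost

# Pricing taken from the claude-opus-4-1 example response above.
pricing = {"prompt": 15, "completion": 75, "cacheRead": 1.5, "cacheWrite": 18.75}
print(estimate_cost(pricing, 1_000, 500))  # 0.015 + 0.0375 = 0.0525
```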


This endpoint returns the complete catalog of AI models and provider endpoints that the Helicone AI Gateway can route to. The gateway uses this registry to determine which providers support a requested model and how to intelligently route requests for maximum reliability and cost optimization. When you request a model through the AI Gateway (like gpt-4o-mini), the gateway consults this registry to find all providers offering that model, then applies routing logic to select the best provider based on your configuration, availability, and pricing.
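The lookup step described above can be mirrored client-side: fetch the registry, then collect the providers whose endpoints serve a given model id. A minimal sketch using the field names from the example response (`models`, `id`, `endpoints`, `provider`); the `providers_for_model` helper is illustrative, not a Helicone API:

```python
# Sketch of the gateway's first routing step: given a model id,
# list all providers in the registry that offer it.

def providers_for_model(registry: dict, model_id: str) -> list[str]:
    """Return provider names whose endpoints serve the given model id."""
    for model in registry["models"]:
        if model["id"] == model_id:
            return [ep["provider"] for ep in model["endpoints"]]
    return []

# A trimmed-down registry shaped like the example response above.
registry = {
    "models": [
        {
            "id": "claude-opus-4-1",
            "endpoints": [{"provider": "anthropic", "supportsPtb": True}],
        }
    ]
}
print(providers_for_model(registry, "claude-opus-4-1"))  # ['anthropic']
```

In a real client the `registry` dict would come from `GET https://api.helicone.ai/v1/public/model-registry/models`; the gateway then layers its own selection logic (availability, cost, your configuration) on top of this provider list.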

Response

200 - application/json

Complete model registry with models and filter options

data (object, required)

error (enum<number> | null, required)
Available options: null
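The schema lists a required `data` object and a required but nullable `error` field, while the example body above shows the model fields directly; a hedged sketch of handling the enveloped form described by the schema (the `unwrap` helper is illustrative):

```python
# Minimal sketch of handling the documented 200 response envelope:
# a required `data` object plus a required, nullable `error` field.

def unwrap(payload: dict) -> dict:
    """Return `data` if `error` is null; otherwise raise with the error code."""
    if payload.get("error") is not None:
        raise RuntimeError(f"registry error code: {payload['error']}")
    return payload["data"]

print(unwrap({"data": {"total": 150}, "error": None}))  # {'total': 150}
```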