Helicone OSS LLM Observability

    Quick Start

    Learn how developers build AI applications. Explore our docs to see how to integrate Helicone into your application and discover features that accelerate your team's development.

    Providers

    Gateway

    The recommended way to integrate with Helicone is through our Gateway.

    OpenAI

    Python, Node, cURL, Langchain

    Anthropic

    Python, Node, cURL, Langchain

    Azure

    Python, Node, cURL, Langchain

    Anyscale

    OpenRouter

    LiteLLM
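
    Gateway integration amounts to pointing your client at Helicone's base URL and authenticating with a Helicone API key alongside your provider key. As a minimal sketch for the OpenAI provider (the `helicone_openai_config` helper and the placeholder keys are illustrative, not part of any SDK):

    ```python
    import os

    # Illustrative placeholders; set real values in your environment.
    HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "sk-helicone-placeholder")
    OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "sk-placeholder")

    def helicone_openai_config(helicone_key: str, openai_key: str) -> dict:
        """Build the base URL and headers that route OpenAI calls through
        Helicone's gateway, so every request is logged."""
        return {
            # Requests go to Helicone's gateway instead of api.openai.com
            "base_url": "https://oai.helicone.ai/v1",
            "headers": {
                # Provider auth is unchanged
                "Authorization": f"Bearer {openai_key}",
                # Helicone auth identifies your Helicone project
                "Helicone-Auth": f"Bearer {helicone_key}",
            },
        }

    config = helicone_openai_config(HELICONE_API_KEY, OPENAI_API_KEY)
    print(config["base_url"])
    ```

    With the OpenAI Python SDK, the same idea is typically expressed by passing `base_url` and `default_headers` when constructing the client; the per-provider integration pages above cover the exact setup for Node, cURL, and Langchain.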

    Features

    Custom Properties

    Label and segment your requests

    Feedback

    Provide user feedback

    Caching

    Save cost and improve latency

    Streaming

    Usage statistics for streamed responses

    Rate Limiting

    Easily rate-limit power users

    Retries

    Smartly retry requests

    Fine Tuning

    Fine-tune a model on your logs

    Customer Portal

    Offer Helicone dashboards to your customers

    Key Vault

    Securely store your provider API keys

    Jobs

    Visualize chained requests

    Omit Logs

    Omit requests and responses from your logs

    User Metrics

    Insights into your users
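
    Most of the features above are toggled per request via Helicone headers sent through the Gateway. A minimal sketch of a header set enabling caching, retries, user metrics, and a custom property (the helper function and the example values are illustrative):

    ```python
    def helicone_feature_headers(user_id: str, environment: str) -> dict:
        """Per-request headers that toggle Helicone features for one call.
        Header names follow Helicone's header conventions; the values
        passed here are example data, not required settings."""
        return {
            "Helicone-Cache-Enabled": "true",       # Caching: serve repeated requests from cache
            "Helicone-Retry-Enabled": "true",       # Retries: smartly retry failed requests
            "Helicone-User-Id": user_id,            # User Metrics: attribute requests to a user
            # Custom Properties: any Helicone-Property-<Name> header labels the request
            "Helicone-Property-Environment": environment,
        }

    headers = helicone_feature_headers("user-123", "staging")
    ```

    These headers are merged into the same request you already send through the Gateway, so enabling a feature is a one-line change per call.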
