    Introduction

    Helicone is how developers build AI applications. Explore our docs to learn how to integrate Helicone into your application and discover the features that will accelerate your team's development.

    Who Are We?

    We are a team of passionate developers committed to building best-in-class tooling for the AI community. We believe the future of AI lies in large language models (LLMs), and we are dedicated to making LLMs more accessible to developers and enterprises. Our goal is to simplify the use and management of LLMs so that developers can focus on building the next generation of AI-driven applications.

    Getting Started

    Get Started

    Quickly integrate Helicone into your application and start monitoring your LLM requests.

    Integrations

    Gateway

    The recommended way to integrate with Helicone is through our Gateway.
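
    Below is a minimal sketch of routing OpenAI traffic through the Helicone gateway with the openai Python SDK. It assumes OPENAI_API_KEY and HELICONE_API_KEY are set in your environment; the model name is illustrative.

    ```python
    import os
    from openai import OpenAI

    # Point the client at Helicone's OpenAI gateway instead of api.openai.com.
    # Helicone proxies the call to OpenAI and logs it for observability.
    client = OpenAI(
        api_key=os.environ["OPENAI_API_KEY"],
        base_url="https://oai.helicone.ai/v1",
        default_headers={
            # Authenticates the request with your Helicone account.
            "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        },
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello, Helicone!"}],
    )
    print(response.choices[0].message.content)
    ```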

    OpenAI

    Python, Node, cURL, LangChain

    Anthropic

    Python, Node, cURL, LangChain

    Azure

    Python, Node, cURL, LangChain

    Anyscale

    OpenRouter

    LiteLLM

    Features

    Prompts

    Effortlessly monitor prompt versions and inputs
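
    As a sketch, you can tag a request with the Helicone-Prompt-Id header so runs of the same prompt are grouped and versioned together; the prompt ID value is illustrative, and the snippet reuses the gateway client from the example above.

    ```python
    # Tag the request with a prompt ID so Helicone can track versions
    # and inputs for this prompt over time.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarize: ..."}],
        extra_headers={"Helicone-Prompt-Id": "article-summarizer"},
    )
    ```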

    Custom Properties

    Label and segment your requests
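
    Custom properties are attached as Helicone-Property-<Name> request headers, each becoming a label you can filter and segment requests by; the property names and values below are illustrative, reusing the gateway client above.

    ```python
    # Each Helicone-Property-<Name> header becomes a custom property
    # on the logged request.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hi"}],
        extra_headers={
            "Helicone-Property-Environment": "production",
            "Helicone-Property-Feature": "onboarding-chat",
        },
    )
    ```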

    Feedback

    Collect user feedback on your LLM responses

    Caching

    Save cost and improve latency
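
    A minimal sketch of enabling Helicone's response cache with the Helicone-Cache-Enabled header, reusing the gateway client above: repeated identical requests are then served from the cache.

    ```python
    # Serve identical requests from Helicone's cache instead of
    # re-calling the provider, saving cost and latency.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is Helicone?"}],
        extra_headers={"Helicone-Cache-Enabled": "true"},
    )
    ```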

    Streaming

    Usage statistics for streamed responses

    Rate Limiting

    Easily rate-limit power users
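
    As a sketch, rate limits are declared with the Helicone-RateLimit-Policy header using the [quota];w=[window-seconds];s=[segment] format; the policy below (10 requests per 60 seconds per user) and the user ID are illustrative, reusing the gateway client above.

    ```python
    # Enforce 10 requests per 60-second window, segmented per user.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hi"}],
        extra_headers={
            "Helicone-RateLimit-Policy": "10;w=60;s=user",
            "Helicone-User-Id": "user-123",  # segment key (illustrative)
        },
    )
    ```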

    Retries

    Smartly retry requests
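
    A hedged sketch of enabling gateway-side retries with exponential backoff via Helicone's retry headers, reusing the gateway client above; the retry count and backoff factor values are illustrative.

    ```python
    # Retry failed requests (e.g. rate-limit errors) at the gateway
    # with exponential backoff.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hi"}],
        extra_headers={
            "Helicone-Retry-Enabled": "true",
            "Helicone-Retry-Num": "3",     # max retries (illustrative)
            "Helicone-Retry-Factor": "2",  # backoff multiplier (illustrative)
        },
    )
    ```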

    Fine Tuning

    Fine-tune a model on your logs

    Customer Portal

    Manage your customers and share usage dashboards with them

    Key Vault

    Securely store and manage your provider API keys

    Jobs

    Visualize chained requests

    Omit Logs

    Omit request and response bodies from your logs
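
    A minimal sketch using the Helicone-Omit-Request and Helicone-Omit-Response headers to keep sensitive bodies out of your logs while still recording metadata such as latency, cost, and status; it reuses the gateway client above.

    ```python
    # Log the request's metadata but omit the request and response bodies.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Sensitive content"}],
        extra_headers={
            "Helicone-Omit-Request": "true",
            "Helicone-Omit-Response": "true",
        },
    )
    ```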

    User Metrics

    Insights into your users
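
    As a sketch, per-user metrics are keyed off the Helicone-User-Id header, reusing the gateway client above; the user ID value is illustrative.

    ```python
    # Associate the request with a user so Helicone can aggregate
    # per-user metrics such as request counts and cost.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hi"}],
        extra_headers={"Helicone-User-Id": "alice@example.com"},
    )
    ```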
