Welcome
Helicone Documentation
Hundreds of organizations use Helicone's monitoring and tooling to make their large language model (LLM) operations more efficient. We experienced first-hand the pain of building internal tooling and monitoring for LLMs at scale, so we built Helicone to solve these problems for you.

Get Started

Quickly integrate Helicone into your application.
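To give a feel for what "integrating" means, here is a minimal, stdlib-only sketch of the proxy-based approach: requests are pointed at Helicone's OpenAI proxy base URL and carry a `Helicone-Auth` header alongside the usual provider key (see the Quick Start and Proxy pages for the supported integration paths). The model name and environment-variable names below are illustrative assumptions; the request is only built here, not sent.

```python
import json
import os
import urllib.request

# Route OpenAI traffic through the Helicone proxy by swapping the base URL
# and attaching a Helicone-Auth header. Helicone logs the request/response
# as it passes through; everything else about the call stays the same.
HELICONE_BASE_URL = "https://oai.helicone.ai/v1"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a proxied chat-completions request."""
    body = json.dumps({
        "model": "gpt-4o-mini",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{HELICONE_BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Provider key, forwarded through the proxy unchanged:
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
            # Helicone authenticates the proxy hop with its own key:
            "Helicone-Auth": f"Bearer {os.environ.get('HELICONE_API_KEY', '')}",
        },
    )

req = build_chat_request("Hello!")
print(req.full_url)  # https://oai.helicone.ai/v1/chat/completions
```

Because only the base URL and one header change, this pattern works with any OpenAI-compatible client as well; the async (logging-based) integration described under Proxy vs Async avoids the proxy hop entirely.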

Features

Learn about the features Helicone provides to help you manage your LLM usage.

Roadmap

Create and vote on feature requests to help us prioritize what to build next.

Join Discord

Have a question? Join our Discord community to get help from our team and other users.
