
    FAQ - Using Helicone

    Questions about how to best use Helicone’s platform features.

    Incorrect cost calculation while streaming?

    Learn how to accurately calculate costs when using streaming features.
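A common cause of wrong streaming costs is that token usage is typically reported only in the final chunk of a stream (for the OpenAI API, you must opt in with `stream_options={"include_usage": True}`), so cost must be computed from that last usage payload rather than per chunk. A minimal sketch, with a simulated stream and illustrative (not real) per-1k-token prices:

```python
# Hedged sketch: chunk shapes mimic OpenAI-style streaming responses, and the
# prices below are placeholders, not actual provider rates.

def extract_usage(chunks):
    """Return the usage dict from the last chunk that carries one.

    Content chunks in a stream usually have usage=None; only the final
    chunk (when usage reporting is enabled) includes token counts.
    """
    usage = None
    for chunk in chunks:
        if chunk.get("usage"):
            usage = chunk["usage"]
    return usage

def estimate_cost(usage, prompt_price_per_1k, completion_price_per_1k):
    """Estimate request cost in USD from token counts and per-1k rates."""
    return (usage["prompt_tokens"] / 1000 * prompt_price_per_1k
            + usage["completion_tokens"] / 1000 * completion_price_per_1k)

# Simulated stream: two content chunks without usage, then a final
# usage-bearing chunk.
stream = [
    {"choices": [{"delta": {"content": "Hel"}}], "usage": None},
    {"choices": [{"delta": {"content": "lo"}}], "usage": None},
    {"choices": [], "usage": {"prompt_tokens": 12, "completion_tokens": 2}},
]

usage = extract_usage(stream)
cost = estimate_cost(usage, prompt_price_per_1k=0.0005,
                     completion_price_per_1k=0.0015)
```

If usage reporting is not enabled, the stream never yields token counts, and any downstream cost calculation falls back to estimates; enabling it is the usual fix.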
