# Helicone OSS LLM Observability

## Docs

- [LLM Caching](https://docs.helicone.ai/features/advanced-usage/caching.md)
- [Custom Properties](https://docs.helicone.ai/features/advanced-usage/custom-properties.md)
- [Custom LLM Rate Limits](https://docs.helicone.ai/features/advanced-usage/custom-rate-limits.md): Set custom rate limits for model provider API calls. Control usage by request count, cost, or custom properties to manage expenses and prevent unintended overuse.
- [User Feedback](https://docs.helicone.ai/features/advanced-usage/feedback.md)
- [LLM Security](https://docs.helicone.ai/features/advanced-usage/llm-security.md): Enable robust security measures in your LLM applications to protect against prompt injections, detect anomalies, and prevent data exfiltration.
- [Moderations](https://docs.helicone.ai/features/advanced-usage/moderations.md): Enable OpenAI's moderation feature in your LLM applications to automatically detect and filter harmful content in user messages.
- [Prompt Assembly](https://docs.helicone.ai/features/advanced-usage/prompts/assembly.md): Understand how prompts are compiled from templates and runtime parameters.
- [Prompt Management Overview](https://docs.helicone.ai/features/advanced-usage/prompts/overview.md): Compose and iterate on prompts, then easily deploy them in any LLM call with the AI Gateway.
- [SDK Integration](https://docs.helicone.ai/features/advanced-usage/prompts/sdk.md): Use prompts directly via the SDK without the AI Gateway.
- [Eval Scores](https://docs.helicone.ai/features/advanced-usage/scores.md)
- [Token Limit Exception Handlers](https://docs.helicone.ai/features/advanced-usage/token-limit-exception-handlers.md): Automatically handle requests that exceed a model's context window using truncate, middle-out, or fallback strategies.
- [User Metrics & Analytics](https://docs.helicone.ai/features/advanced-usage/user-metrics.md): Understand user behavior, track engagement patterns, and optimize AI experiences with detailed user analytics.
- [Alerts](https://docs.helicone.ai/features/alerts.md): Get notified when your LLM applications hit error thresholds or cost limits.
- [Datasets](https://docs.helicone.ai/features/datasets.md): Curate and export LLM request/response data for fine-tuning, evaluation, and analysis.
- [HQL (Helicone Query Language)](https://docs.helicone.ai/features/hql.md): Query your Helicone analytics data directly using SQL, with row-level security and built-in limits.
- [Editor](https://docs.helicone.ai/features/prompts-legacy/editor.md): Design, version, and manage your prompts collaboratively, then [effortlessly deploy them across your app](/features/prompts/generate).
- [Generate API](https://docs.helicone.ai/features/prompts-legacy/generate.md): Deploy your [Editor](/features/prompts/editor) prompts effortlessly with a lightweight, modern package.
- [Reports](https://docs.helicone.ai/features/reports.md): Get automated weekly summaries of your LLM usage, costs, and performance delivered to email or Slack.
- [Sessions](https://docs.helicone.ai/features/sessions.md)
- [Webhooks](https://docs.helicone.ai/features/webhooks.md)
- [Webhooks Local Testing](https://docs.helicone.ai/features/webhooks-testing.md)
- [Context Editing](https://docs.helicone.ai/gateway/concepts/context-editing.md): Automatically manage conversation context by clearing old tool uses and thinking blocks for long-running AI agent sessions.
- [Error Handling & Fallback](https://docs.helicone.ai/gateway/concepts/error-handling.md): How the Helicone AI Gateway handles errors and automatically falls back between billing methods.
- [Image Generation](https://docs.helicone.ai/gateway/concepts/image-generation.md): Generate images through Helicone's AI Gateway using models with native image output, like Nano Banana Pro.
- [Prompt Caching](https://docs.helicone.ai/gateway/concepts/prompt-caching.md): Cache frequently used context across LLM providers for reduced costs and faster responses.
- [Reasoning](https://docs.helicone.ai/gateway/concepts/reasoning.md): Enable reasoning through a unified API on Helicone's AI Gateway.
- [Responses API](https://docs.helicone.ai/gateway/concepts/responses-api.md): Use the OpenAI Responses API format through the Helicone AI Gateway with your Helicone API key.
- [Claude Agent SDK Integration](https://docs.helicone.ai/gateway/integrations/claude-agent-sdk.md): Use Helicone AI Gateway with the Claude Agent SDK for building AI agents with automatic observability.
- [OpenAI Codex](https://docs.helicone.ai/gateway/integrations/codex.md): Use the OpenAI Codex CLI and SDK with Helicone AI Gateway to log your coding agent interactions.
- [DSPy](https://docs.helicone.ai/gateway/integrations/dpsy.md): Integrate Helicone AI Gateway with DSPy to access 100+ LLM providers with unified observability and optimization.
- [LangChain Integration](https://docs.helicone.ai/gateway/integrations/langchain.md): Integrate Helicone AI Gateway with LangChain to access 100+ LLM providers with unified observability.
- [Langfuse Integration](https://docs.helicone.ai/gateway/integrations/langfuse.md): Integrate Helicone AI Gateway with Langfuse to access 100+ LLM providers with observability and LLM tracing.
- [LangGraph Integration](https://docs.helicone.ai/gateway/integrations/langgraph.md): Integrate Helicone AI Gateway with LangGraph to build multi-agent workflows with access to 100+ LLM providers.
- [LiteLLM Integration](https://docs.helicone.ai/gateway/integrations/litellm.md): Use Helicone AI Gateway with LiteLLM to get top-tier observability for your LLM requests.
- [LlamaIndex Integration](https://docs.helicone.ai/gateway/integrations/llamaindex.md): Use the Helicone LLM for LlamaIndex to route OpenAI-compatible requests through the Helicone AI Gateway with full observability.
- [n8n Integration](https://docs.helicone.ai/gateway/integrations/n8n.md): Use the Helicone Chat Model node in n8n workflows to route LLM requests through the AI Gateway with full observability.
- [OpenAI Agents Integration](https://docs.helicone.ai/gateway/integrations/openai-agents.md): Integrate Helicone AI Gateway with the OpenAI Agents SDK to build AI agents with tools and full observability.
- [Integrations Overview](https://docs.helicone.ai/gateway/integrations/overview.md): Integrate Helicone AI Gateway with popular frameworks and tools to access 100+ LLM providers with top-tier observability.
- [PostHog Integration](https://docs.helicone.ai/gateway/integrations/posthog.md): Integrate Helicone AI Gateway with PostHog to automatically export LLM request events to your PostHog analytics platform for unified product analytics.
- [Semantic Kernel Integration](https://docs.helicone.ai/gateway/integrations/semantic-kernel.md): Integrate Helicone AI Gateway with Microsoft Semantic Kernel to access 100+ LLM providers with unified observability.
- [Vercel AI SDK Integration](https://docs.helicone.ai/gateway/integrations/vercel-ai-sdk.md): Integrate Helicone AI Gateway with the Vercel AI SDK to access 100+ LLM providers with full observability.
- [Zapier Integration](https://docs.helicone.ai/gateway/integrations/zapier.md): Use the Helicone Zapier app to run Chat Completions via the AI Gateway, no provider keys required.
- [AI Gateway Overview](https://docs.helicone.ai/gateway/overview.md): Use any LLM provider through a single OpenAI-compatible API with intelligent routing, fallbacks, and unified observability.
- [Prompt Management](https://docs.helicone.ai/gateway/prompt-integration.md): Deploy and iterate on prompts through the AI Gateway without code changes.
- [Provider Routing](https://docs.helicone.ai/gateway/provider-routing.md): Automatic model routing across 100+ providers for reliability and performance.
- [Web Search](https://docs.helicone.ai/gateway/web-search.md): Enable web search capabilities for Anthropic models through Helicone's Gateway using the :online suffix.
- [Anyscale Integration](https://docs.helicone.ai/getting-started/integration-method/anyscale.md): Connect Helicone with any LLM deployed on Anyscale, including Llama, Mistral, Gemma, and GPT.
- [Crew AI Integration](https://docs.helicone.ai/getting-started/integration-method/crewai.md): Integrate Helicone with Crew AI, a multi-agent framework supporting multiple LLM providers. Monitor AI-driven tasks and agent interactions across providers.
- [Deepinfra Integration](https://docs.helicone.ai/getting-started/integration-method/deepinfra.md): Connect Helicone with OpenAI-compatible models on Deepinfra. Simple setup using a custom base_url for seamless integration with your Deepinfra-based AI applications.
- [DeepSeek AI Integration](https://docs.helicone.ai/getting-started/integration-method/deepseek.md): Connect Helicone with DeepSeek AI, a platform that provides powerful language models including MoE and Code models for various AI applications.
- [Hyperbolic Integration](https://docs.helicone.ai/getting-started/integration-method/hyperbolic.md): Integrate Helicone with Hyperbolic, a platform for running open-source LLMs. Monitor and analyze interactions with any Hyperbolic-deployed model using a simple base_url configuration.
- [LiteLLM Integration with Callbacks](https://docs.helicone.ai/getting-started/integration-method/litellm.md): Connect Helicone with LiteLLM using callbacks to log and monitor API calls across various AI models.
- [Manual Logger - cURL](https://docs.helicone.ai/getting-started/integration-method/manual-logger-curl.md): Integrate any custom LLM with Helicone using cURL. Step-by-step guide for direct API integration to connect your proprietary or open-source models.
- [Manual Logger - Go](https://docs.helicone.ai/getting-started/integration-method/manual-logger-go.md): Integrate any custom LLM with Helicone using the Go Manual Logger. Step-by-step guide for Go implementations to connect your proprietary or open-source models.
- [Manual Logger - Python](https://docs.helicone.ai/getting-started/integration-method/manual-logger-python.md): Integrate any custom LLM with Helicone using the Python Manual Logger. Step-by-step guide for Python implementations to connect your proprietary or open-source models.
- [Manual Logger - TypeScript](https://docs.helicone.ai/getting-started/integration-method/manual-logger-typescript.md): Integrate any custom LLM with Helicone using the TypeScript Manual Logger. Step-by-step guide for Node.js implementations to connect your proprietary or open-source models.
- [Mistral AI Integration](https://docs.helicone.ai/getting-started/integration-method/mistral.md): Connect Helicone with Mistral AI, a platform that provides state-of-the-art language models including Mistral-Large and Mistral-Medium for various AI applications.
- [Nebius Token Factory Integration](https://docs.helicone.ai/getting-started/integration-method/nebius.md): Connect Helicone with Nebius Token Factory, a platform that provides powerful AI models including text and multimodal models, embeddings and guardrails, and text-to-image models.
- [Novita AI Integration](https://docs.helicone.ai/getting-started/integration-method/novita.md): Connect Helicone with Novita AI, a platform that provides powerful LLMs including DeepSeek, Llama, Mistral, and more.
- [OpenLLMetry Async Integration](https://docs.helicone.ai/getting-started/integration-method/openllmetry.md): Log LLM traces directly to Helicone, bypassing our proxy, with OpenLLMetry. Supports OpenAI, Anthropic, Azure OpenAI, Cohere, Bedrock, Google AI Platform, and more.
- [OpenRouter Integration](https://docs.helicone.ai/getting-started/integration-method/openrouter.md): Integrate Helicone with OpenRouter, a unified API for accessing multiple LLM providers. Monitor and analyze AI interactions across various models through a single, streamlined interface.
- [Perplexity AI Integration](https://docs.helicone.ai/getting-started/integration-method/perplexity.md): Connect Helicone with Perplexity AI, a platform that provides powerful language models including Sonar and Sonar Pro for various AI applications.
- [PostHog Integration](https://docs.helicone.ai/getting-started/integration-method/posthog.md): Combine Helicone's LLM analytics with PostHog, a comprehensive product analytics platform. View LLM insights alongside other application data for holistic performance analysis.
- [Together AI Integration](https://docs.helicone.ai/getting-started/integration-method/together.md): Connect Helicone with Together AI, a platform for running open-source language models. Monitor and optimize your AI applications using Together AI's powerful models through a simple base_url configuration.
- [Vercel AI SDK Integration](https://docs.helicone.ai/getting-started/integration-method/vercelai.md): Integrate the Vercel AI SDK with Helicone to monitor, debug, and improve your AI applications.
- [Platform Overview](https://docs.helicone.ai/getting-started/platform-overview.md): Understand how Helicone solves the core challenges of building production LLM applications.
- [Quickstart](https://docs.helicone.ai/getting-started/quick-start.md): Get your first LLM request logged with Helicone in under 2 minutes using the AI Gateway.
- [Docker](https://docs.helicone.ai/getting-started/self-host/docker.md): Deploy Helicone using Docker. Quick setup guide for running a containerized instance of the LLM observability platform on your local machine or server.
- [Kubernetes Self-Hosting](https://docs.helicone.ai/getting-started/self-host/kubernetes.md): Deploy Helicone using Kubernetes and Helm. Quick setup guide for running a containerized instance of the LLM observability platform on your Kubernetes cluster.
- [Self-Hosting Helicone](https://docs.helicone.ai/getting-started/self-host/overview.md): Comprehensive guides to help you deploy and manage your own instance of Helicone.
- [Building and Monitoring AI Agents with Helicone](https://docs.helicone.ai/guides/cookbooks/ai-agents.md): Learn how to build autonomous AI agents, then monitor and optimize their performance using Helicone's Sessions.
- [Cost Tracking & Optimization](https://docs.helicone.ai/guides/cookbooks/cost-tracking.md): Monitor LLM spending, optimize costs, and understand unit economics across your AI application.
- [Debugging LLM Applications](https://docs.helicone.ai/guides/cookbooks/debugging.md): Helicone provides an efficient platform for identifying and rectifying errors in your LLM applications, offering insights into where and why they occur.
- [Environment Tracking](https://docs.helicone.ai/guides/cookbooks/environment-tracking.md): Effortlessly track and manage your development, staging, and production environments with Helicone.
- [ETL / Data Extraction](https://docs.helicone.ai/guides/cookbooks/etl.md): Extract, transform, and load data from Helicone into your data warehouse using our CLI tool or REST API.
- [How to Run LLM Prompt Experiments](https://docs.helicone.ai/guides/cookbooks/experiments.md): Run experiments with historical datasets to test, evaluate, and improve prompts over time while preventing regressions in production systems.
- [How to fine-tune LLMs with Helicone and OpenPipe](https://docs.helicone.ai/guides/cookbooks/fine-tune.md): Learn how to fine-tune large language models with Helicone and OpenPipe to optimize performance for specific tasks.
- [Retrieving Sessions](https://docs.helicone.ai/guides/cookbooks/getting-sessions.md): Use the Request API to retrieve session data, allowing you to analyze conversation threads.
- [Getting User Requests](https://docs.helicone.ai/guides/cookbooks/getting-user-requests.md): Use the Request API to retrieve user-specific requests, allowing you to monitor, debug, and track costs for individual users.
- [Integrating Helicone with GitHub Actions](https://docs.helicone.ai/guides/cookbooks/github-actions.md): Automate the monitoring and caching of LLM calls in your CI pipelines with Helicone.
- [Helicone Evals with Ragas](https://docs.helicone.ai/guides/cookbooks/helicone-evals-with-ragas.md): Evaluate your LLM applications with Ragas and Helicone.
- [How to Label Your Request Data](https://docs.helicone.ai/guides/cookbooks/labeling-request-data.md): Label your request data to make it easier to search and filter in Helicone. Learn about custom properties, feedback, and scores.
- [Manual Logger with Streaming](https://docs.helicone.ai/guides/cookbooks/manual-logger-streaming.md): Learn how to use Helicone's Manual Logger to track streaming LLM responses.
- [Logging OpenAI Batch API Requests with Helicone](https://docs.helicone.ai/guides/cookbooks/openai-batch-api.md): Learn how to track and monitor OpenAI Batch API requests using Helicone's Manual Logger for comprehensive observability.
- [How to build a chatbot with OpenAI structured outputs](https://docs.helicone.ai/guides/cookbooks/openai-structured-outputs.md): This step-by-step guide covers function calling, response formatting, and monitoring with Helicone.
- [Predefined Request IDs](https://docs.helicone.ai/guides/cookbooks/predefining-request-id.md): Learn how to predefine Helicone request IDs for advanced tracking and asynchronous operations in your LLM applications.
- [How to Prompt Thinking Models](https://docs.helicone.ai/guides/cookbooks/prompt-thinking-models.md): Learn how to effectively prompt thinking models like DeepSeek R1 and OpenAI o1/o3 for optimal results.
- [Replaying LLM Sessions](https://docs.helicone.ai/guides/cookbooks/replay-session.md): Learn how to replay and modify LLM sessions using Helicone to optimize your AI agents and improve their performance.
- [Using Custom Properties to Segment Data](https://docs.helicone.ai/guides/cookbooks/segmentation.md): Derive powerful insights into costs and user behaviors using custom properties in Helicone. Learn to track environments, user types, and more.
- [How to Build a Multi-Model AI Assistant with Vercel AI Gateway and Helicone](https://docs.helicone.ai/guides/cookbooks/vercel-ai-gateway.md): Build a customer support assistant that switches between AI models based on query complexity while tracking costs.
- [Build an AI Debate Simulator with Vercel AI Gateway](https://docs.helicone.ai/guides/cookbooks/vercel-ai-gateway-demo.md): Create an interactive debate app that showcases different ways to integrate Vercel AI Gateway with Helicone observability.
- [Helicone Guides](https://docs.helicone.ai/guides/overview.md): Guides for building, optimizing, and analyzing LLM applications with Helicone.
- [Be specific and clear](https://docs.helicone.ai/guides/prompt-engineering/be-specific-and-clear.md): Be specific and clear in your prompts to improve the quality of the responses you receive.
- [Implement few-shot learning](https://docs.helicone.ai/guides/prompt-engineering/implement-few-shot-learning.md): Provide the model with a few examples of the desired output to guide it toward responses that closely align with your expectations.
- [Leverage role-playing](https://docs.helicone.ai/guides/prompt-engineering/leverage-role-playing.md): Assign a specific role or persona to the model as a system prompt to set the style, tone, and content of the output.
- [Overview](https://docs.helicone.ai/guides/prompt-engineering/overview.md): Prompt engineering is the strategic crafting of prompts to guide large language models to produce accurate and desired outputs.
- [Use Chain-of-Thought prompting](https://docs.helicone.ai/guides/prompt-engineering/use-chain-of-thought-prompting.md): By encouraging the model to generate intermediate reasoning steps before arriving at a final answer, you can achieve more accurate and insightful responses.
- [Use constrained outputs](https://docs.helicone.ai/guides/prompt-engineering/use-constrained-outputs.md): Set clear boundaries and rules for the model's responses to improve accuracy, consistency, and utility.
- [Use Least-to-Most prompting](https://docs.helicone.ai/guides/prompt-engineering/use-least-to-most-prompting.md): Break down complex problems into smaller parts, starting with the least amount of information.
- [Use Meta-Prompting](https://docs.helicone.ai/guides/prompt-engineering/use-meta-prompting.md): Use large language models (LLMs) to create and refine prompts dynamically.
- [Use structured formats](https://docs.helicone.ai/guides/prompt-engineering/use-structured-formats.md): Format the generated output to make it easier to interpret and parse the information.
- [Use Thread-of-Thought prompting](https://docs.helicone.ai/guides/prompt-engineering/use-thread-of-thought-prompting.md): Maintain a coherent line of reasoning between LLM interactions by building on previous ideas.
- [Helicone Header Directory](https://docs.helicone.ai/helicone-headers/header-directory.md): Comprehensive guide to all Helicone headers. Learn how to access and implement various Helicone features through custom request headers.
- [Claude Code](https://docs.helicone.ai/integrations/anthropic/claude-code.md): Integrate Helicone to log your Claude Code interactions.
- [Anthropic cURL Integration](https://docs.helicone.ai/integrations/anthropic/curl.md): Use cURL to integrate Anthropic with Helicone to log your Anthropic LLM usage.
- [Anthropic JavaScript SDK Integration](https://docs.helicone.ai/integrations/anthropic/javascript.md): Use Anthropic's JavaScript SDK to integrate with Helicone to log your Anthropic LLM usage.
- [Anthropic LangChain Integration](https://docs.helicone.ai/integrations/anthropic/langchain.md): Use LangChain to integrate Anthropic with Helicone to log your Anthropic LLM usage.
- [Anthropic Python SDK Integration](https://docs.helicone.ai/integrations/anthropic/python.md): Use Anthropic's Python SDK to integrate with Helicone to log your Anthropic LLM usage.
- [Azure OpenAI with cURL](https://docs.helicone.ai/integrations/azure/curl.md): Use cURL to integrate Azure OpenAI with Helicone to log your Azure OpenAI usage.
- [Azure OpenAI with JavaScript](https://docs.helicone.ai/integrations/azure/javascript.md): Use JavaScript to integrate Azure OpenAI with Helicone to log your Azure OpenAI usage.
- [Azure OpenAI with LangChain](https://docs.helicone.ai/integrations/azure/langchain.md): Use LangChain to integrate Azure OpenAI with Helicone to log your Azure OpenAI usage.
- [Azure OpenAI with Python](https://docs.helicone.ai/integrations/azure/python.md): Use Python to integrate Azure OpenAI with Helicone to log your Azure OpenAI usage.
- [AWS Bedrock JavaScript SDK Integration](https://docs.helicone.ai/integrations/bedrock/javascript.md): Learn how to integrate AWS Bedrock with Helicone using JavaScript.
- [AWS Bedrock Python SDK Integration](https://docs.helicone.ai/integrations/bedrock/python.md): Learn how to integrate AWS Bedrock with Helicone using Python.
- [Custom Logs with cURL](https://docs.helicone.ai/integrations/data/curl.md): Log any custom operations to Helicone using cURL for complete observability across your application stack.
- [Custom Logs with the Logger SDK](https://docs.helicone.ai/integrations/data/logger-sdk.md): Log any custom operations using Helicone's Logger SDK for complete observability across your application stack.
- [Gemini AI cURL Integration](https://docs.helicone.ai/integrations/gemini/api/curl.md): Use cURL to integrate Gemini AI with Helicone to log your Gemini AI usage.
- [Gemini JavaScript SDK Integration](https://docs.helicone.ai/integrations/gemini/api/javascript.md): Use Gemini's JavaScript SDK to integrate with Helicone to log your Gemini AI usage.
- [Gemini Python SDK Integration](https://docs.helicone.ai/integrations/gemini/api/python.md): Use Gemini's Python SDK to integrate with Helicone to log your Gemini AI usage.
- [Vertex AI cURL Integration](https://docs.helicone.ai/integrations/gemini/vertex/curl.md): Use cURL to integrate Vertex AI with Helicone to log your Vertex AI usage.
- [Vertex AI JavaScript SDK Integration](https://docs.helicone.ai/integrations/gemini/vertex/javascript.md): Use Vertex AI's JavaScript SDK to integrate with Helicone to log your Vertex AI usage.
- [Vertex AI Python SDK Integration](https://docs.helicone.ai/integrations/gemini/vertex/python.md): Use Vertex AI's Python SDK to integrate with Helicone to log your Vertex AI usage.
- [Groq cURL Integration](https://docs.helicone.ai/integrations/groq/curl.md): Use cURL to integrate Groq with Helicone to log your Groq usage.
- [Groq JavaScript SDK Integration](https://docs.helicone.ai/integrations/groq/javascript.md): Use Groq's JavaScript SDK to integrate with Helicone to log your Groq usage.
- [Groq Python SDK Integration](https://docs.helicone.ai/integrations/groq/python.md): Use Groq's Python SDK to integrate with Helicone to log your Groq usage.
- [Instructor JavaScript SDK Integration](https://docs.helicone.ai/integrations/instructor/javascript.md): Use Instructor's JavaScript SDK to log your LLM calls in Helicone.
- [Instructor Python SDK Integration](https://docs.helicone.ai/integrations/instructor/python.md): Use Instructor's Python SDK to log your LLM calls in Helicone.
- [Llama cURL Integration](https://docs.helicone.ai/integrations/llama/curl.md): Use cURL to integrate Llama with Helicone to log your Llama LLM usage.
- [Llama JavaScript SDK](https://docs.helicone.ai/integrations/llama/javascript.md): Use the OpenAI JavaScript SDK to integrate with Llama via Helicone to log your Llama usage.
- [Llama Python SDK](https://docs.helicone.ai/integrations/llama/python.md): Use the OpenAI Python SDK to integrate with Llama via Helicone to log your Llama usage.
- [Nvidia NIM cURL Integration](https://docs.helicone.ai/integrations/nvidia/curl.md): Use cURL to integrate Nvidia NIM with Helicone to log your Nvidia LLM usage.
- [Nvidia Dynamo Integration](https://docs.helicone.ai/integrations/nvidia/dynamo.md): Use Nvidia Dynamo with Helicone for comprehensive logging and monitoring.
- [Nvidia NIM JavaScript SDK](https://docs.helicone.ai/integrations/nvidia/javascript.md): Use the OpenAI JavaScript SDK to integrate with Nvidia NIM via Helicone to log your Nvidia usage.
- [Nvidia NIM Python SDK](https://docs.helicone.ai/integrations/nvidia/python.md): Use the OpenAI Python SDK to integrate with Nvidia NIM via Helicone to log your Nvidia usage.
- [Ollama JavaScript Integration](https://docs.helicone.ai/integrations/ollama/javascript.md): Use Helicone's JavaScript SDK to log your Ollama usage.
- [OpenAI with cURL](https://docs.helicone.ai/integrations/openai/curl.md): Use cURL to integrate OpenAI with Helicone to log your OpenAI usage.
- [OpenAI JavaScript SDK](https://docs.helicone.ai/integrations/openai/javascript.md): Use OpenAI's JavaScript SDK to integrate with Helicone to log your OpenAI usage.
- [OpenAI with LangChain](https://docs.helicone.ai/integrations/openai/langchain.md): Use LangChain to integrate OpenAI with Helicone to log your OpenAI usage.
- [OpenAI with LlamaIndex](https://docs.helicone.ai/integrations/openai/llamaindex.md): Use LlamaIndex to integrate with Helicone to log your LlamaIndex usage.
- [OpenAI Python SDK](https://docs.helicone.ai/integrations/openai/python.md): Use OpenAI's Python SDK to integrate with Helicone to log your OpenAI usage.
- [OpenAI Realtime API](https://docs.helicone.ai/integrations/openai/realtime.md): Integrate OpenAI's Realtime API with Helicone to monitor and analyze your real-time conversations.
- [OpenAI Responses API](https://docs.helicone.ai/integrations/openai/responses.md): Integrate the OpenAI Responses API with Helicone to monitor and analyze your model's responses.
- [Legacy Integrations](https://docs.helicone.ai/integrations/overview.md): Legacy proxy-based integrations for existing codebases.
- [Trace Tools with cURL](https://docs.helicone.ai/integrations/tools/curl.md): Log responses from any external tools used in your LLM applications to Helicone using cURL.
- [Trace Tools with the Logger SDK](https://docs.helicone.ai/integrations/tools/logger-sdk.md): Log responses from any external tools used in your LLM applications using Helicone's Logger SDK.
- [Helicone MCP Server](https://docs.helicone.ai/integrations/tools/mcp.md): Query your Helicone observability data directly from MCP-compatible AI assistants using the Helicone MCP server.
- [Xcode Integration (AI Gateway)](https://docs.helicone.ai/integrations/tools/xcode.md): Configure Xcode's Intelligence model provider to route through Helicone's AI Gateway for observability.
- [Vector DB Tracing with cURL](https://docs.helicone.ai/integrations/vectordb/curl.md): Log any vector DB interactions to Helicone using cURL.
- [Trace Any Vector DB Interactions](https://docs.helicone.ai/integrations/vectordb/logger-sdk.md): Log any vector DB interactions using Helicone's Logger SDK.
- [xAI cURL Integration](https://docs.helicone.ai/integrations/xai/curl.md): Use cURL to integrate xAI with Helicone to log your xAI LLM usage.
- [xAI with OpenAI JavaScript SDK](https://docs.helicone.ai/integrations/xai/javascript.md): Use the OpenAI JavaScript SDK to integrate with xAI via Helicone to log your xAI usage.
- [xAI with OpenAI Python SDK](https://docs.helicone.ai/integrations/xai/python.md): Use the OpenAI Python SDK to integrate with xAI via Helicone to log your xAI usage.
- [Dify](https://docs.helicone.ai/other-integrations/dify.md): Dify is an open-source LLM app development platform. Its intuitive interface combines AI workflows, RAG pipelines, agent capabilities, model management, observability features, and more, letting you quickly go from prototype to production. Here is how to get observability and logs for your Dify instance.
- [LangGraph Integration](https://docs.helicone.ai/other-integrations/langgraph.md): Use LangGraph to integrate Helicone with your LLM workflows.
- [Ragas Integration](https://docs.helicone.ai/other-integrations/ragas.md): Integrate Helicone with Ragas, an open-source framework for evaluating Retrieval-Augmented Generation (RAG) systems. Monitor and analyze the performance of your RAG pipelines.
- [Availability and Reliability](https://docs.helicone.ai/references/availability.md): Helicone ensures high availability for your LLM applications using Cloudflare's global network. Learn about our deployment practices and how we maintain reliability.
- [Data Security & Privacy](https://docs.helicone.ai/references/data-autonomy.md): Helicone ensures top-tier data security and privacy through our SOC 2 compliant cloud solution, with options for enhanced control and data ownership.
- [How We Calculate Cost](https://docs.helicone.ai/references/how-we-calculate-cost.md): Learn how Helicone calculates the cost per request for nearly all models, including both streamed and non-streamed requests. Detailed explanations and examples provided.
- [Latency Impact](https://docs.helicone.ai/references/latency-affect.md): Helicone minimizes latency for your LLM applications using Cloudflare's global network. Detailed benchmarking results and performance metrics included.
- [Open Source](https://docs.helicone.ai/references/open-source.md): Understanding Helicone's open-source status and how to contribute.
- [How to Integrate a Model Provider to the AI Gateway](https://docs.helicone.ai/references/provider-integration.md): A tutorial on integrating a new model provider into the AI Gateway.
- [Proxy vs Async Integration](https://docs.helicone.ai/references/proxy-vs-async.md): Compare Helicone's Proxy and Async integration methods. Understand the features, benefits, and use cases for each approach to choose the best fit for your LLM application.
- [Get Models](https://docs.helicone.ai/rest/ai-gateway/get-v1models.md): Returns all available models supported by the Helicone AI Gateway (OpenAI-compatible endpoint).
- [Get Multimodal Models](https://docs.helicone.ai/rest/ai-gateway/get-v1models-multimodal.md): Returns all available multimodal models supported by the Helicone AI Gateway (OpenAI-compatible endpoint).
- [Chat Completions (Gateway)](https://docs.helicone.ai/rest/ai-gateway/post-v1-chat-completions.md): Create chat completions via the AI Gateway.
- [Responses (Gateway)](https://docs.helicone.ai/rest/ai-gateway/post-v1-responses.md): Create responses via the AI Gateway.
- [Query Dashboard Scores](https://docs.helicone.ai/rest/dashboard/post-v1dashboardscoresquery.md): Retrieve and filter dashboard scoring metrics.
- [Get Evaluation Scores](https://docs.helicone.ai/rest/evals/get-v1evalsscores.md): Retrieve scoring metrics for evaluations.
- [Create Evaluation](https://docs.helicone.ai/rest/evals/post-v1evals.md): Create a new evaluation for a specific request.
- [Query Evaluations](https://docs.helicone.ai/rest/evals/post-v1evalsquery.md): Search and filter through evaluation results.
- [Query Score Distributions](https://docs.helicone.ai/rest/evals/post-v1evalsscore-distributionsquery.md): Analyze the distribution of evaluation scores.
- [Get Model Registry](https://docs.helicone.ai/rest/models/get-v1public-model-registry-models.md): Returns all models and endpoints supported by the Helicone AI Gateway.
- [Delete Prompt](https://docs.helicone.ai/rest/prompts/delete-v1prompt-2025-promptid.md): Delete an entire prompt and all its versions.
- [Delete Prompt Version](https://docs.helicone.ai/rest/prompts/delete-v1prompt-2025-promptid-versionid.md): Delete a specific version of a prompt.
- [Get Prompt Count](https://docs.helicone.ai/rest/prompts/get-v1prompt-2025-count.md): Get the total number of prompts.
- [Get Environments](https://docs.helicone.ai/rest/prompts/get-v1prompt-2025-environments.md): Get all available environments across your prompts.
- [Get Prompt](https://docs.helicone.ai/rest/prompts/get-v1prompt-2025-id-promptid.md): Retrieve a specific prompt by ID.
- [Get Prompt Inputs](https://docs.helicone.ai/rest/prompts/get-v1prompt-2025-id-promptid-versionid-inputs.md): Get the inputs used for a specific prompt version in a request.
- [Get Prompt Body](https://docs.helicone.ai/rest/prompts/get-v1prompt-2025-promptversionid-prompt-body.md): Retrieve the full prompt body (messages, tools, etc.) for a specific prompt version.
- [Get Prompt Tags](https://docs.helicone.ai/rest/prompts/get-v1prompt-2025-tags.md): Retrieve all available prompt tags.
- [Update Prompt Tags](https://docs.helicone.ai/rest/prompts/patch-v1prompt-2025-id-promptid-tags.md): Update the tags for a prompt.
- [Create Prompt](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025.md): Create a new prompt with an initial version.
- [Rename Prompt](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-id-promptid-rename.md): Rename an existing prompt.
- [Query Prompts](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-query.md): Search and filter prompts with pagination.
- [Get Prompt Version by Environment](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-query-environment-version.md): Retrieve a prompt version for a specific environment.
- [Get Production Version](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-query-production-version.md): Retrieve the production version of a specific prompt.
- [Get Prompt Version Counts](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-query-total-versions.md): Get version count statistics for a specific prompt.
- [Get Prompt Version](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-query-version.md): Retrieve a specific prompt version with its content.
- [Get Prompt Versions](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-query-versions.md): Retrieve all versions of a specific prompt.
- [Update Prompt](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-update.md): Create a new version of an existing prompt.
- [Set Version Environment](https://docs.helicone.ai/rest/prompts/post-v1prompt-2025-update-environment.md): Set the environment for a specific prompt version.
- [Query Properties](https://docs.helicone.ai/rest/property/post-v1propertyquery.md): Query properties for a specific user.
- [Get Single Request](https://docs.helicone.ai/rest/request/get-v1request.md): Retrieve a single request visible in the request table at Helicone.
- [Get Request Inputs](https://docs.helicone.ai/rest/request/get-v1request-inputs.md): Retrieve the prompt template inputs (variables) used for a specific request made through AI Gateway prompt management.
- [Submit Request Assets](https://docs.helicone.ai/rest/request/post-v1request-assets.md): Submit assets for a specific request. If you don't know what this is, you probably don't need it.
- [Submit Feedback](https://docs.helicone.ai/rest/request/post-v1request-feedback.md): Submit feedback for a specific request.
- [Submit Score](https://docs.helicone.ai/rest/request/post-v1request-score.md): Submit a score for a specific request.
- [Get Requests (Point Queries)](https://docs.helicone.ai/rest/request/post-v1requestquery.md): Retrieve all requests visible in the request table at Helicone.
- [Get Requests](https://docs.helicone.ai/rest/request/post-v1requestquery-clickhouse.md): Retrieve all requests visible in the request table at Helicone.
- [Get Requests by IDs](https://docs.helicone.ai/rest/request/post-v1requestquery-ids.md): Retrieve all requests visible in the request table at Helicone.
- [Upsert Request Property](https://docs.helicone.ai/rest/request/put-v1request-property.md): Create or update a property of a specific request.
- [Add Session Feedback](https://docs.helicone.ai/rest/session/post-v1session-feedback.md): Submit feedback for a specific session.
- [Query Session Metrics](https://docs.helicone.ai/rest/session/post-v1sessionmetricsquery.md): Search and analyze session performance metrics.
- [Query Sessions](https://docs.helicone.ai/rest/session/post-v1sessionquery.md): Search and filter through session data.
- [Log Trace](https://docs.helicone.ai/rest/trace/post-v1tracelog.md): Log a trace to the Helicone API.
- [Query User Metrics Overview](https://docs.helicone.ai/rest/user/post-v1usermetrics-overviewquery.md): Get an overview of aggregated user metrics.
- [Query User Metrics](https://docs.helicone.ai/rest/user/post-v1usermetricsquery.md): Search and filter through user-specific metrics.
- [Get User Data](https://docs.helicone.ai/rest/user/post-v1userquery.md): Retrieve user data based on specified user IDs and time filters.
- [Delete Webhook](https://docs.helicone.ai/rest/webhooks/delete-v1webhooks.md): Delete a webhook.
- [Get Webhooks](https://docs.helicone.ai/rest/webhooks/get-v1webhooks.md): Get all webhooks.
- [Create Webhook](https://docs.helicone.ai/rest/webhooks/post-v1webhooks.md): Create a new webhook.

## OpenAPI Specs

- [swagger](https://docs.helicone.ai/swagger.json)
- [ai-gateway.openapi](https://docs.helicone.ai/ai-gateway.openapi.json)
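## Quick Examples

The index above is navigational; the sketches below illustrate the two most common entry points. First, calling the AI Gateway with the OpenAI SDK, as described in the [Quickstart](https://docs.helicone.ai/getting-started/quick-start.md) and [AI Gateway Overview](https://docs.helicone.ai/gateway/overview.md). This is a minimal sketch, not the canonical snippet: the `https://ai-gateway.helicone.ai` base URL, the `HELICONE_API_KEY` environment variable, and the `gpt-4o-mini` model slug are assumptions to verify against the Quickstart page.

```typescript
// Minimal sketch: route a chat completion through the Helicone AI Gateway
// using the OpenAI SDK (the gateway is OpenAI-compatible). The base URL and
// model slug below are assumptions; check the Quickstart for exact values.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai", // assumed gateway base URL
  apiKey: process.env.HELICONE_API_KEY,      // your Helicone API key
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini", // any model slug the gateway supports
  messages: [{ role: "user", content: "Hello from the AI Gateway!" }],
});

console.log(response.choices[0].message.content);
```

Once this runs, the request should appear in the Helicone request table with no further configuration.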
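Second, the legacy proxy-based integrations (e.g. [OpenAI JavaScript SDK](https://docs.helicone.ai/integrations/openai/javascript.md)) enable features such as [LLM Caching](https://docs.helicone.ai/features/advanced-usage/caching.md) and [Custom Properties](https://docs.helicone.ai/features/advanced-usage/custom-properties.md) through request headers listed in the [Helicone Header Directory](https://docs.helicone.ai/helicone-headers/header-directory.md). The sketch below carries the same caveat: the property name `Helicone-Property-Environment` and its value are illustrative, not prescribed.

```typescript
// Minimal sketch: proxy OpenAI traffic through Helicone and opt into
// caching and a custom property via headers. Header names follow the
// Header Directory; the property name and value are illustrative.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://oai.helicone.ai/v1", // Helicone's OpenAI proxy
  apiKey: process.env.OPENAI_API_KEY,    // your provider key
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "Helicone-Cache-Enabled": "true",           // serve repeat requests from cache
    "Helicone-Property-Environment": "staging", // segment analytics by property
  },
});
```

Requests made with this client are logged with the attached property, which the [Using Custom Properties to Segment Data](https://docs.helicone.ai/guides/cookbooks/segmentation.md) cookbook builds on for cost and behavior analysis.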