Telemetry
Monitor AI Gateway application performance, routing decisions, and system health with OpenTelemetry
The AI Gateway provides comprehensive application monitoring through OpenTelemetry (OTel), enabling deep visibility into how the gateway processes requests, makes routing decisions, and performs under load. With built-in trace propagation and multiple exporter options, you can seamlessly integrate with your existing monitoring stack.
Getting Started
Why Monitor the AI Gateway?
AI Gateway application monitoring provides:
- Routing decision visibility - See which providers were selected and why
- Gateway performance tracking - Monitor request processing latency and throughput
- System health insights - Track application health, errors, and resource usage
Built-in Instrumentation - The AI Gateway instruments itself out of the box, emitting distributed traces, metrics, and structured logs about its own operation through OpenTelemetry.
For complete configuration options, see the Configuration Reference.
Telemetry Levels
The recommended telemetry level is `info,ai_gateway=debug` for development and `info` for production.
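The filter string uses the familiar `level,target=level` directive format: a default level plus comma-separated, module-scoped overrides. As a minimal sketch, assuming the level is set through a `telemetry.level` key in the gateway's YAML configuration (a hypothetical key; see the Configuration Reference for the exact schema):

```yaml
telemetry:
  # Development: info-level output everywhere, debug for the gateway's own modules
  level: "info,ai_gateway=debug"
  # Production: drop the override and run at plain info
  # level: "info"
```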
Configuration Examples
Use case: Local development with console output for immediate debugging.
Result: Pretty-printed logs to console with full trace information and no external dependencies.
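A sketch of such a configuration, using hypothetical `telemetry` keys (the exporter type and log format names are assumptions; check the Configuration Reference for the real ones):

```yaml
telemetry:
  level: "info,ai_gateway=debug"
  # Write spans, metrics, and logs straight to stdout -- no collector or agent needed
  exporter: stdout
  logs:
    format: pretty   # human-readable output that includes trace and span IDs
```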
Use case: Production deployment with complete observability stack integration.
Result: OTLP export to collector with structured telemetry data and cross-service correlation.
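A corresponding production sketch, again with hypothetical key names, pointing the gateway's OTLP exporter at a collector reachable from the deployment (the endpoint below is a placeholder):

```yaml
telemetry:
  level: "info"
  exporter: otlp
  otlp:
    # gRPC endpoint of your OpenTelemetry Collector
    endpoint: "http://otel-collector.observability.svc:4317"
    protocol: grpc
  # A stable service name lets traces correlate across services
  service_name: "ai-gateway"
```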
Use case: Local testing with the provided Grafana + Prometheus + Tempo stack.
Setup:
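Roughly (key names are assumptions, as above): start the bundled stack with Docker Compose, as shown under Grafana Stack below, then point the gateway at the local collector on port 4317 and restart it:

```yaml
telemetry:
  level: "info,ai_gateway=debug"
  exporter: otlp
  otlp:
    # The bundled OpenTelemetry Collector listens on localhost:4317
    endpoint: "http://localhost:4317"
```

Traces, metrics, and logs should then show up in Grafana at http://localhost:3010.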
Reference
Grafana Stack (Included)
Use the included Docker Compose setup for complete local observability:
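For example (run the command from wherever the bundled compose file lives in your checkout; the exact path is not assumed here):

```bash
# Start Grafana, Prometheus, Loki, and the OpenTelemetry Collector in the background
docker compose up -d

# Confirm the services are up and the collector is listening on port 4317
docker compose ps
```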
Included services:
- Grafana (port 3010) - Dashboards and visualization
- Prometheus (port 9090) - Metrics storage and querying
- Loki (port 3100) - Log aggregation and search
- OpenTelemetry Collector (port 4317) - Telemetry ingestion
Pre-built Dashboard: The setup includes a comprehensive Grafana dashboard for AI Gateway monitoring that you can import into your own Grafana instance.
Custom Monitoring Stack
The AI Gateway can integrate with any OpenTelemetry-compatible monitoring solution. Simply configure the telemetry endpoint in your configuration file to point to your existing monitoring infrastructure.
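As one illustration, a minimal OpenTelemetry Collector configuration that accepts OTLP from the gateway and forwards it to an existing backend could look like the following; the backend endpoint and exporter choice are placeholders for your own infrastructure:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch: {}

exporters:
  # Swap in the exporter your backend expects (otlphttp, prometheusremotewrite, ...)
  otlphttp:
    endpoint: https://observability.example.com:4318

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```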