Monitor AI Gateway application performance, routing decisions, and system health with OpenTelemetry
The AI Gateway provides comprehensive application monitoring through OpenTelemetry (OTel), enabling deep visibility into how the gateway processes requests, makes routing decisions, and performs under load. With built-in trace propagation and multiple exporter options, you can seamlessly integrate with your existing monitoring stack.
AI Gateway application monitoring provides:
- Built-in Instrumentation - The AI Gateway automatically instruments its own application performance with distributed tracing, metrics collection, and structured logging using OpenTelemetry.
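As an illustration, a telemetry section in the gateway's configuration file might look like the sketch below. The table and key names here are hypothetical placeholders, not the gateway's actual schema; the Configuration Reference is the authority.

```toml
# Hypothetical sketch of a telemetry section; the actual key names are
# defined in the Configuration Reference.
[telemetry]
service_name = "ai-gateway"   # service name attached to all emitted telemetry

[telemetry.tracing]
enabled = true                # distributed traces for request processing

[telemetry.metrics]
enabled = true                # request, latency, and routing metrics

[telemetry.logs]
enabled = true                # structured logs correlated with traces
```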
For complete configuration options, see the Configuration Reference.
The recommended telemetry level is `info,ai_gateway=debug` for development and `info` for production.
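The `info,ai_gateway=debug` value follows the common `default,target=level` filter pattern: `info` everywhere, with `debug` detail for the gateway's own code. Expressed in the same hypothetical configuration sketch as above, that might look like:

```toml
# Hypothetical key names; the filter strings themselves are the
# documented recommended values.
[telemetry.logs]
level = "info,ai_gateway=debug"   # development: extra detail from the gateway itself
# level = "info"                  # production: default level only
```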
Use case: Local development with console output for immediate debugging.
Result: Pretty-printed logs to console with full trace information and no external dependencies.
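A hypothetical configuration for this scenario might look like the following; the exporter table and keys are assumptions, not the documented schema:

```toml
# Hypothetical sketch: console-only exporter for local development,
# no collector or external services required.
[telemetry.exporters.stdout]
enabled = true
format = "pretty"   # human-readable output with trace and span IDs inline
```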
Use case: Production deployment with complete observability stack integration.
Result: OTLP export to collector with structured telemetry data and cross-service correlation.
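A hypothetical configuration for this scenario might look like the following, again using placeholder key names; only the port is standard (4317 is the conventional OTLP gRPC port):

```toml
# Hypothetical sketch: OTLP export to a collector in production.
[telemetry.exporters.otlp]
enabled = true
endpoint = "http://otel-collector:4317"   # 4317 is the standard OTLP gRPC port
protocol = "grpc"
```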
Use case: Local testing with the provided Grafana + Prometheus + Tempo stack.
Setup:
Use the included Docker Compose setup for complete local observability:
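For example (assuming the compose file sits at the repository root; the exact path may differ in your checkout):

```sh
# Start the bundled observability stack in the background.
docker compose up -d
```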
Included services: Grafana (dashboards and visualization), Prometheus (metrics), and Tempo (distributed traces).
Pre-built Dashboard: The setup includes a comprehensive Grafana dashboard for AI Gateway monitoring that you can import into your own Grafana instance.
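As one option, a dashboard JSON export can be pushed to another Grafana instance through Grafana's standard dashboard API; the dashboard file name, host, and token below are placeholders for your environment:

```sh
# Grafana's standard dashboard API; the JSON file name and API token
# are placeholders for your environment.
curl -X POST "http://your-grafana:3000/api/dashboards/db" \
  -H "Authorization: Bearer $GRAFANA_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d "{\"dashboard\": $(cat ai-gateway-dashboard.json), \"overwrite\": true}"
```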
The AI Gateway can integrate with any OpenTelemetry-compatible monitoring solution. Simply configure the telemetry endpoint in your configuration file to point to your existing monitoring infrastructure.
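For example, exporting directly to a hosted OTLP-compatible backend typically only means changing the endpoint and supplying an auth header, sketched here with the same hypothetical keys as above (the environment-variable interpolation is also an assumption):

```toml
# Hypothetical sketch: point the OTLP exporter at any OTLP-compatible
# backend and pass its auth header.
[telemetry.exporters.otlp]
enabled = true
endpoint = "https://otlp.example-vendor.com:4317"   # placeholder vendor endpoint

[telemetry.exporters.otlp.headers]
"x-api-key" = "${VENDOR_API_KEY}"   # placeholder credential
```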