CrewAI Integration
Integrate Helicone with CrewAI, a multi-agent framework supporting multiple LLM providers. Monitor AI-driven tasks and agent interactions across providers.
Introduction
CrewAI is a multi-agent framework that supports multiple LLM providers through its LiteLLM integration. By routing requests through Helicone's proxy, you can track and optimize your AI model usage across providers from a unified dashboard.
Quick Start
Create an Account & Generate an API Key
- Log into Helicone (or create a new account)
- Generate a write-only API key
- Store your Helicone API key securely (e.g., in environment variables)
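For example, keys can be kept out of source code by exporting them in your shell or a `.env` file (a sketch; the variable names follow common convention but are not mandated by Helicone):

```shell
# Keep API keys in the environment, never in source control
export HELICONE_API_KEY=<your-helicone-write-only-key>
export OPENAI_API_KEY=<your-openai-key>
```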
Set OPENAI_BASE_URL
Configure your environment to route API calls through Helicone:
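A minimal sketch, assuming Helicone's key-in-path proxy URL format for OpenAI (verify the exact path against Helicone's current documentation):

```shell
# Route all OpenAI SDK traffic through Helicone's proxy; embedding the
# Helicone API key in the URL avoids having to set custom headers
export OPENAI_BASE_URL="https://oai.helicone.ai/v1/$HELICONE_API_KEY"
```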
This points OpenAI API requests to Helicone’s proxy endpoint.
See Advanced Provider Configuration for other LLM providers.
Verify in Helicone
Run your CrewAI application and check the Helicone dashboard to confirm requests are being logged.
Advanced Provider Configuration
CrewAI supports multiple LLM providers. Here’s how to configure different providers with Helicone:
OpenAI (Alternative Method)
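Instead of environment variables, the proxy can be configured on the LLM object itself. A sketch, assuming CrewAI's `LLM` forwards `extra_headers` through to LiteLLM:

```python
import os

from crewai import LLM

openai_llm = LLM(
    model="gpt-4o",  # example model name
    base_url="https://oai.helicone.ai/v1",  # Helicone's OpenAI proxy
    api_key=os.environ["OPENAI_API_KEY"],
    extra_headers={
        # Authenticates the request with Helicone for logging
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
    },
)
```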
Anthropic
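A sketch of the same pattern pointed at Helicone's Anthropic proxy (the model name is an example; `extra_headers` forwarding to LiteLLM is assumed):

```python
import os

from crewai import LLM

anthropic_llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",  # LiteLLM-style model string
    base_url="https://anthropic.helicone.ai",  # Helicone's Anthropic proxy
    api_key=os.environ["ANTHROPIC_API_KEY"],
    extra_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
    },
)
```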
Gemini
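Gemini can be routed through Helicone's generic gateway, which needs a target URL header telling Helicone where to forward the request (a sketch; the model name is an example):

```python
import os

from crewai import LLM

gemini_llm = LLM(
    model="gemini/gemini-1.5-pro",  # example model name
    base_url="https://gateway.helicone.ai",  # Helicone's generic gateway
    api_key=os.environ["GEMINI_API_KEY"],
    extra_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        # Tells the gateway which upstream API to forward to
        "Helicone-Target-URL": "https://generativelanguage.googleapis.com",
    },
)
```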
Groq
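A sketch for Groq via a dedicated Helicone proxy domain (check Groq's page in the Helicone docs for the exact base URL; the model name is an example):

```python
import os

from crewai import LLM

groq_llm = LLM(
    model="groq/llama-3.1-70b-versatile",  # example model name
    base_url="https://groq.helicone.ai/openai/v1",  # Helicone's Groq proxy (verify)
    api_key=os.environ["GROQ_API_KEY"],
    extra_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
    },
)
```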
Other Providers
CrewAI supports many LLM providers through LiteLLM integration. If your preferred provider isn’t listed above but is supported by CrewAI, you can likely use it with Helicone. Simply:
- Check the provider integrations in the sidebar for your specific provider
- Configure your CrewAI LLM using the same base URL and headers structure shown in the provider’s Helicone documentation
For example, if a provider’s Helicone documentation shows:
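For instance, a provider's page typically documents a proxy base URL plus an auth header, along these lines (the host below is a placeholder, not a real endpoint):

```python
import os

HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "<your-helicone-api-key>")

# Hypothetical values from a provider's Helicone integration page
base_url = "https://provider.helicone.ai/v1"
headers = {"Helicone-Auth": f"Bearer {HELICONE_API_KEY}"}
```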
You would configure your CrewAI LLM like this:
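A sketch of the corresponding CrewAI configuration, reusing the provider's documented base URL and header structure (placeholder host and env var names; assumes `extra_headers` is forwarded to LiteLLM):

```python
import os

from crewai import LLM

llm = LLM(
    model="<provider>/<model-name>",  # LiteLLM-style model string
    base_url="https://provider.helicone.ai/v1",  # from the provider's Helicone page
    api_key=os.environ["PROVIDER_API_KEY"],  # the provider's own API key
    extra_headers={
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
    },
)
```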
Helicone Features
Request Tracking
Add custom properties to track and filter requests:
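Any header of the form `Helicone-Property-<Name>` becomes a filterable property on the request in the Helicone dashboard. A sketch (the property names and values here are illustrative):

```python
import os

HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "<your-helicone-api-key>")

# Custom properties show up as filters in the Helicone dashboard
helicone_headers = {
    "Helicone-Auth": f"Bearer {HELICONE_API_KEY}",
    "Helicone-Property-Environment": "production",  # example property
    "Helicone-Property-Agent": "researcher",        # example property
}

# Pass these to your LLM, e.g. (assuming extra_headers reaches LiteLLM):
# llm = LLM(model="gpt-4o", base_url="https://oai.helicone.ai/v1",
#           extra_headers=helicone_headers)
```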
Learn more about Custom Properties.
Caching
Enable response caching to reduce costs and latency:
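Caching is enabled per request via a Helicone header, so identical requests can be served from Helicone's cache instead of hitting the provider again. A sketch:

```python
import os

HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "<your-helicone-api-key>")

cache_headers = {
    "Helicone-Auth": f"Bearer {HELICONE_API_KEY}",
    # Identical requests are answered from Helicone's cache
    "Helicone-Cache-Enabled": "true",
}

# e.g. (assuming extra_headers reaches LiteLLM):
# llm = LLM(model="gpt-4o", base_url="https://oai.helicone.ai/v1",
#           extra_headers=cache_headers)
```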
Learn more about Caching
Prompt Management
Track and version your prompts:
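Tagging requests with a prompt id lets Helicone group and version every request made with that prompt. A sketch (the id below is a hypothetical example):

```python
import os

HELICONE_API_KEY = os.environ.get("HELICONE_API_KEY", "<your-helicone-api-key>")

prompt_headers = {
    "Helicone-Auth": f"Bearer {HELICONE_API_KEY}",
    # Groups requests under this prompt in the Helicone dashboard
    "Helicone-Prompt-Id": "crew-research-prompt",  # hypothetical id
}

# e.g. (assuming extra_headers reaches LiteLLM):
# llm = LLM(model="gpt-4o", base_url="https://oai.helicone.ai/v1",
#           extra_headers=prompt_headers)
```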
Learn more about Prompts
Multi-Agent Example
Create agents using different LLM providers:
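A sketch of a two-agent crew where each agent uses a different provider, both proxied through Helicone (model names, roles, and tasks are illustrative; assumes `extra_headers` is forwarded to LiteLLM):

```python
import os

from crewai import Agent, Crew, Task, LLM

helicone_auth = {"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"}

# One agent on OpenAI, one on Anthropic, both logged in the same dashboard
openai_llm = LLM(
    model="gpt-4o",
    base_url="https://oai.helicone.ai/v1",
    api_key=os.environ["OPENAI_API_KEY"],
    extra_headers=helicone_auth,
)
anthropic_llm = LLM(
    model="anthropic/claude-3-5-sonnet-20241022",
    base_url="https://anthropic.helicone.ai",
    api_key=os.environ["ANTHROPIC_API_KEY"],
    extra_headers=helicone_auth,
)

researcher = Agent(
    role="Researcher",
    goal="Gather background material on the given topic",
    backstory="A meticulous analyst.",
    llm=openai_llm,
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short summary",
    backstory="A concise technical writer.",
    llm=anthropic_llm,
)

research_task = Task(
    description="Research the topic: {topic}",
    expected_output="Bullet-point research notes",
    agent=researcher,
)
write_task = Task(
    description="Write a summary from the research notes",
    expected_output="A one-paragraph summary",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
result = crew.kickoff(inputs={"topic": "LLM observability"})
```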
Additional Resources