Easily log your LiteLLM API calls to Helicone using OpenLLMetry.
Install Helicone Async
pip install helicone-async
Initialize Logger
from helicone_async import HeliconeAsyncLogger
from litellm import completion

# Initialize the Helicone async logger with your Helicone API key
logger = HeliconeAsyncLogger(
    api_key=HELICONE_API_KEY,
)
logger.init()

# OpenAI call via LiteLLM
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}],
    metadata={
        "Helicone-Property-Hello": "World"
    }
)

# Cohere call via LiteLLM
response = completion(
    model="command-r",
    messages=[{"role": "user", "content": "Hi 👋 - i'm cohere"}],
    metadata={
        "Helicone-Property-Hello": "World"
    }
)
print(response.choices[0])
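The snippet above assumes HELICONE_API_KEY is already defined in your script. As a minimal sketch (assuming the keys are exported as environment variables before the script runs), you can read the Helicone key with os.environ and let LiteLLM pick up provider keys such as OPENAI_API_KEY or COHERE_API_KEY from the environment:

import os

from helicone_async import HeliconeAsyncLogger
from litellm import completion

# Sketch only: assumes HELICONE_API_KEY and OPENAI_API_KEY are set in the environment.
logger = HeliconeAsyncLogger(api_key=os.environ["HELICONE_API_KEY"])
logger.init()

# LiteLLM reads provider keys (e.g. OPENAI_API_KEY) from the environment,
# so no separate client object is needed for the call.
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from LiteLLM"}],
    metadata={"Helicone-Property-Hello": "World"},
)
print(response.choices[0])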