LiteLLM
LiteLLM is a model I/O library to standardize API calls to Azure, Anthropic, OpenAI, etc. Here's how you can log your LLM API calls to Helicone from LiteLLM.
Approach 1: Use Callbacks
1 line integration
Add HELICONE_API_KEY to your environment variables.
export HELICONE_API_KEY=sk-<your-api-key>
# You can also set it in your code (See below)
Tell LiteLLM you want to log your data to Helicone
litellm.success_callback=["helicone"]
Complete code
import os
import litellm
from litellm import completion

## set env variables
os.environ["HELICONE_API_KEY"] = "your-helicone-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"
# set callbacks
litellm.success_callback=["helicone"]
# openai call
response = completion(model="gpt-3.5-turbo", messages=[{"role": "user", "content": "Hi 👋 - i'm openai"}])

# cohere call
response = completion(model="command-nightly", messages=[{"role": "user", "content": "Hi 👋 - i'm cohere"}])
Approach 2: [OpenAI + Azure only] Use Helicone as a proxy
Helicone provides advanced functionality, such as caching, which it supports for OpenAI and Azure (a sketch of enabling caching follows the complete code below).
If you want to use Helicone to proxy your OpenAI/Azure requests, you can:
2 line integration
litellm.api_base = "https://oai.hconeai.com/v1" # set the base url to Helicone's proxy
litellm.headers = {"Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}"} # set your headers
Complete code
import os
import litellm
from litellm import completion
litellm.api_base = "https://oai.hconeai.com/v1"
litellm.headers = {"Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}"}
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "how does a court case get to the Supreme Court?"}]
)
print(response)
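As a sketch of the caching mentioned above: Helicone enables caching through its Helicone-Cache-Enabled request header, which you can add alongside the auth header. The header name comes from Helicone's documentation; everything else in this snippet is illustrative.

import os
import litellm

litellm.api_base = "https://oai.hconeai.com/v1"
litellm.headers = {
    "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
    "Helicone-Cache-Enabled": "true", # ask the Helicone proxy to cache identical requests
}

# the first call is forwarded to OpenAI; an identical follow-up call
# can then be served from Helicone's cache
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "how does a court case get to the Supreme Court?"}]
)
print(response)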
Feel free to check it out and tell us what you think 👋