You can follow their documentation here: https://docs.fireworks.ai/getting-started/quickstart

Gateway Integration

1

Create a Helicone account

Log in to Helicone or create an account. Once you have an account, you can generate an API key.

2

Create a FireworksAI account

Log in to www.fireworks.ai or create an account. Once you have an account, you can generate an API key.

3

Set HELICONE_API_KEY and FIREWORKS_API_KEY as environment variables

HELICONE_API_KEY=<your API key>
FIREWORKS_API_KEY=<your API key>
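On macOS or Linux, these can be exported in your shell session, as a quick sketch (the placeholder values below are illustrative; substitute your actual keys):

```shell
# Placeholder values for illustration only; substitute your real keys.
export HELICONE_API_KEY="<your Helicone API key>"
export FIREWORKS_API_KEY="<your FireworksAI API key>"
```

Note that plain `export` only lasts for the current shell session; add the lines to your shell profile to make them persistent.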
4

Modify the base URL and add Auth headers

Replace the FireworksAI domain in your request URL with the Helicone Gateway domain, keeping the rest of the path unchanged:

https://api.fireworks.ai/inference/v1/chat/completions -> https://fireworks.helicone.ai/inference/v1/chat/completions

Then add the following authentication headers:

Helicone-Auth: `Bearer ${HELICONE_API_KEY}`
Authorization: `Bearer ${FIREWORKS_API_KEY}`
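In code, the gateway URL and both headers can be assembled from the environment variables set in step 3. A minimal Python sketch (the helper name `gateway_headers` is illustrative, not part of either API):

```python
import os

# The Helicone Gateway domain replaces api.fireworks.ai; the path is unchanged.
GATEWAY_BASE = "https://fireworks.helicone.ai/inference/v1"

def gateway_headers() -> dict:
    """Build the auth headers for a request through the Helicone Gateway.

    Assumes HELICONE_API_KEY and FIREWORKS_API_KEY are set (see step 3).
    """
    return {
        "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
        "Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}",
        "Content-Type": "application/json",
    }
```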

Now you can access all the models on FireworksAI with a single HTTP request:

Example

curl \
  --header 'Authorization: Bearer <FIREWORKS_API_KEY>' \
  --header 'Helicone-Auth: Bearer <HELICONE_API_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "accounts/fireworks/models/llama-v3-8b-instruct",
    "prompt": "Say this is a test"
  }' \
  --url https://fireworks.helicone.ai/inference/v1/completions
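The same request can be issued from Python with only the standard library. The sketch below constructs the request object without sending it; the placeholder keys would be replaced with real values (or read from the environment) before dispatching:

```python
import json
import urllib.request

# Placeholder keys for illustration; in practice read them from the
# environment variables set in step 3.
payload = json.dumps({
    "model": "accounts/fireworks/models/llama-v3-8b-instruct",
    "prompt": "Say this is a test",
}).encode("utf-8")

req = urllib.request.Request(
    url="https://fireworks.helicone.ai/inference/v1/completions",
    data=payload,
    method="POST",
    headers={
        "Authorization": "Bearer <FIREWORKS_API_KEY>",
        "Helicone-Auth": "Bearer <HELICONE_API_KEY>",
        "Content-Type": "application/json",
    },
)

# urllib.request.urlopen(req) would send the request and return the response.
```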

For more information on how to use headers, see the Helicone Headers docs. For more information on how to use FireworksAI, see the FireworksAI docs.
