Connect Helicone with Together AI, a platform for running open-source language models. Monitor and optimize your AI applications using Together AI’s powerful models through a simple base_url configuration.
You can seamlessly integrate Helicone with OpenAI-compatible models deployed on Together AI. The integration closely mirrors the proxy approach; the only difference is that the base_url is changed to point to the dedicated Together AI endpoint, https://together.helicone.ai/v1.
```python
base_url="https://together.helicone.ai/v1"
```
Double-check that the base_url is set exactly as shown; an incorrect value will prevent requests from being routed through Helicone.
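To make the proxy setup concrete, here is a minimal raw-HTTP sketch using only Python's standard library. The `build_request` helper and the environment-variable names are illustrative assumptions; in practice you would pass the same base_url (and a Helicone-Auth header carrying your Helicone API key) to your OpenAI-compatible SDK instead of building requests by hand:

```python
import json
import os
import urllib.request

# Helicone's dedicated Together AI proxy endpoint
BASE_URL = "https://together.helicone.ai/v1"

def build_request(body: dict) -> urllib.request.Request:
    """Build a chat-completion request routed through the Helicone proxy.

    Assumes TOGETHER_API_KEY and HELICONE_API_KEY are set in the environment.
    """
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Together AI credentials, forwarded by the proxy
            "Authorization": f"Bearer {os.environ.get('TOGETHER_API_KEY', '')}",
            # Helicone credentials, consumed by the proxy for logging
            "Helicone-Auth": f"Bearer {os.environ.get('HELICONE_API_KEY', '')}",
        },
        method="POST",
    )

req = build_request({
    "model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    "messages": [{"role": "user", "content": "Hello"}],
})
# Sending the request (e.g. with urllib.request.urlopen) requires valid keys,
# so it is omitted here.
```

The only change relative to calling Together AI directly is the host in BASE_URL plus the extra Helicone-Auth header; the request body and the Together AI Authorization header are unchanged.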
Helicone now provides enhanced support for streaming with Together AI through our improved asynchronous stream parser. This allows for more efficient and reliable handling of streamed responses.
Here’s an example of how to use Helicone’s manual logging with Together AI’s streaming functionality:
```typescript
import Together from "together-ai";
import { HeliconeManualLogger } from "@helicone/helpers";

export async function main() {
  // Initialize the Helicone logger
  const heliconeLogger = new HeliconeManualLogger({
    apiKey: process.env.HELICONE_API_KEY!,
    headers: {}, // You can add custom headers here
  });

  // Initialize the Together client
  const together = new Together();

  // Create your request body
  const body = {
    model: "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
    messages: [{ role: "user", content: "Your question here" }],
    stream: true,
  } as Together.Chat.CompletionCreateParamsStreaming & { stream: true };

  // Make the request
  const response = await together.chat.completions.create(body);

  // Split the stream into two for logging and processing
  const [stream1, stream2] = response.tee();

  // Log the stream to Helicone using the async stream parser
  heliconeLogger.logStream(body, async (resultRecorder) => {
    resultRecorder.attachStream(stream1.toReadableStream());
  });

  // Process the stream for your application
  const textDecoder = new TextDecoder();
  for await (const chunk of stream2.toReadableStream()) {
    console.log(textDecoder.decode(chunk));
  }

  return stream2;
}
```
This approach allows you to:
- Log all of your Together AI streaming requests to Helicone
- Process the stream in your application at the same time
- Benefit from Helicone's improved async stream parser for better performance