Use the `await` keyword when calling `openai.ChatCompletion.acreate`, and use an `async for` loop to iterate over the streamed response.
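The same consume-as-it-arrives pattern applies in TypeScript, where streaming SDK responses are typically async iterables read with `for await`. Below is a minimal runnable sketch; `fakeCompletionStream` is a stand-in for a real SDK stream (for example, the `openai` package with `stream: true`), not part of any library:

```typescript
// Stand-in for an SDK streaming response: an async iterable of chunks.
// Real clients yield richer chunk objects; only the shape needed here
// is modeled.
async function* fakeCompletionStream(): AsyncGenerator<{ content: string }> {
  for (const piece of ["Hello", ", ", "world", "!"]) {
    yield { content: piece };
  }
}

// Consume the stream with `for await`, appending each delta as it
// arrives instead of waiting for the whole completion.
async function collectStream(): Promise<string> {
  let text = "";
  for await (const chunk of fakeCompletionStream()) {
    text += chunk.content;
  }
  return text;
}
```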
The `HeliconeManualLogger` class now includes enhanced methods for working with streams:
- `logBuilder`: The recommended method for handling streaming responses, with better error handling and a simplified workflow
- `logStream`: Logs a streaming operation with full control over stream handling
- `logSingleStream`: A simplified method for logging a single `ReadableStream`
- `logSingleRequest`: Logs a single request with a response body

The `logBuilder` method provides a more streamlined approach to working with streaming responses, with better error handling:
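To make the stream-logging methods above concrete, here is a small self-contained sketch (not the Helicone implementation, and not its API) of the underlying pattern they all rely on: tee a `ReadableStream` so one branch goes back to the caller untouched while the other is drained for logging. All names here (`streamFromChunks`, `drainToString`, `logStreamSketch`) are hypothetical:

```typescript
import { ReadableStream } from "node:stream/web";

// Build a ReadableStream from in-memory chunks (stands in for an
// upstream model response).
function streamFromChunks(chunks: string[]): ReadableStream<string> {
  return new ReadableStream<string>({
    start(controller) {
      for (const c of chunks) controller.enqueue(c);
      controller.close();
    },
  });
}

// Drain a stream to a single string.
async function drainToString(stream: ReadableStream<string>): Promise<string> {
  const reader = stream.getReader();
  let out = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) return out;
    if (value !== undefined) out += value;
  }
}

// Tee the stream: one branch is returned to the caller untouched, the
// other is drained in the background and handed to the log callback.
function logStreamSketch(
  stream: ReadableStream<string>,
  log: (body: string) => void
): ReadableStream<string> {
  const [toClient, toLogger] = stream.tee();
  void drainToString(toLogger).then(log);
  return toClient;
}
```

A `logSingleStream`-style helper can be understood as this tee-and-drain pattern plus an HTTP call that ships the drained body to the logging endpoint.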
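The exact `logBuilder` signature is best taken from the Helicone reference. As an illustration of the builder workflow it describes, here is a minimal local sketch exposing the same three operations (`setError`, `toReadableStream`, `sendLog`); the class and its shapes are hypothetical, not the library's API:

```typescript
import { ReadableStream } from "node:stream/web";

// Minimal local sketch of a log-builder workflow (NOT the Helicone API):
// accumulate the streamed body, record any error, send one log at the end.
class SketchLogBuilder {
  private body = "";
  private error: unknown = null;

  constructor(private request: { model: string; prompt: string }) {}

  setError(err: unknown): void {
    this.error = err;
  }

  // Wrap the upstream stream so every chunk is recorded as it passes
  // through on its way to the client.
  toReadableStream(upstream: AsyncIterable<string>): ReadableStream<string> {
    const self = this;
    return new ReadableStream<string>({
      async start(controller) {
        try {
          for await (const chunk of upstream) {
            self.body += chunk;
            controller.enqueue(chunk);
          }
          controller.close();
        } catch (err) {
          self.setError(err);
          controller.error(err);
        }
      },
    });
  }

  // In a real logger this would POST the accumulated record to a
  // logging endpoint; here it just returns it.
  async sendLog(): Promise<{ request: object; body: string; error: unknown }> {
    return { request: this.request, body: this.body, error: this.error };
  }
}
```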
The `logBuilder` approach offers several advantages:

- Simplified error handling with the `setError` method
- Conversion of the response to a web `ReadableStream` with `toReadableStream`
- Explicit control over when the log is sent with `sendLog`
To capture token usage for streamed responses, do one of the following:

- Include `stream_options: { include_usage: true }` in your request body
- Add the `helicone-stream-usage: true` header to your request

On Vercel, you can use the `after` function to log streaming responses without blocking the response to the client:
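A runnable sketch of that deferred-logging shape follows. The local `after` and `pendingWork` here are stand-ins so the example runs on its own; in a real Next.js route on Vercel you would import `after` from `next/server` instead, and `handleRequest` is a hypothetical handler:

```typescript
// Local stand-in for a deferred-work scheduler: collects tasks to run
// once the response has been sent. On Vercel with Next.js, use `after`
// from "next/server" instead.
const pendingWork: Promise<void>[] = [];
function after(task: () => Promise<void>): void {
  pendingWork.push(task());
}

// Return the response chunks to the client immediately and defer the
// logging call so it never blocks the response.
async function handleRequest(
  chunks: string[],
  log: (body: string) => Promise<void>
): Promise<string[]> {
  after(() => log(chunks.join("")));
  return chunks;
}
```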