What is User Feedback?

User feedback is a feature that allows users to evaluate the responses generated by the LLM. This feedback can be either positive or negative, offering crucial insights into the effectiveness and relevance of the LLM’s outputs.

Benefits of User Feedback

User feedback allows you to:

  • Gauge the efficacy of the LLM’s responses.
  • Enhance the quality of interactions by modifying prompts or models based on the received feedback.
  • Identify trends in feedback to make informed decisions about model training or fine-tuning.
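The last point can be made concrete with a tiny aggregation sketch: given a batch of logged ratings (`true` for positive, `false` for negative), compute the overall positive rate. The `ratings` array here is hypothetical sample data standing in for feedback you have already collected.

```javascript
// Hypothetical sample of logged feedback ratings:
// true = positive, false = negative.
const ratings = [true, true, false, true, false, true];

// Share of positive ratings across the batch.
const positiveRate = ratings.filter(Boolean).length / ratings.length;

console.log(`Positive feedback rate: ${(positiveRate * 100).toFixed(1)}%`);
```

Tracking this number over time (or per prompt version) is one way to decide when a prompt or model change actually helped.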

Logging Feedback using the Node Package

Below is a simplified example of how to log feedback using our Node package:

const {
  HeliconeProxyOpenAIApi,
  HeliconeProxyConfiguration,
  HeliconeFeedbackRating,
} = require("helicone");

// Configuration for the OpenAI client
const config = new HeliconeProxyConfiguration({
  apiKey: process.env.OPENAI_API_KEY,
  heliconeMeta: {
    apiKey: process.env.MY_HELICONE_API_KEY,
  },
});

// Instantiate the OpenAI client
const openAi = new HeliconeProxyOpenAIApi(config);

// Generate a chat completion and log feedback
const result = await openAi.createChatCompletion({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Say hi!" }],
});

const heliconeId = result.headers[openAi.helicone.heliconeIdHeader];

// Log feedback (either Positive or Negative)
const rating = HeliconeFeedbackRating.Positive; // or HeliconeFeedbackRating.Negative
await openAi.helicone.logFeedback(heliconeId, rating);

Logging Feedback using Fetch

In some packages or scenarios, you may not be able to read the response headers to retrieve the helicone-id. In that case, you can still log feedback by generating a UUID yourself and supplying it as the helicone-id on the original request.

For more information, refer to the Helicone API docs.

If you’re not using our package, you can still log feedback using the Fetch API. Here’s a simple example:

import OpenAI from "openai";

// Initialize the OpenAI client with Helicone integration
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.hconeai.com/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

// Generate a chat completion
const { data: completion, response } = await openai.chat.completions
  .create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Say hi!" }],
  })
  .withResponse();

// Retrieve the heliconeId header
const heliconeId = response.headers.get("helicone-id");

// Log feedback
const options = {
  method: "POST",
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    rating: true, // true for positive, false for negative
  }),
};

const feedbackResponse = await fetch(
  `https://api.hconeai.com/v1/${heliconeId}/feedback`,
  options
);
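The feedback call above can also be wrapped in a small reusable helper that surfaces failures instead of ignoring them. This is a sketch only, reusing the endpoint URL and HELICONE_API_KEY environment variable from the example; adjust both to your setup.

```javascript
// Sketch of a reusable feedback helper around the fetch call above.
// Throws if the Helicone API does not return a 2xx status, so auth or
// id mistakes are caught immediately rather than silently dropped.
async function logFeedback(heliconeId, rating) {
  const res = await fetch(
    `https://api.hconeai.com/v1/${heliconeId}/feedback`,
    {
      method: "POST",
      headers: {
        "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ rating }), // true = positive, false = negative
    }
  );
  if (!res.ok) {
    throw new Error(`Feedback request failed with status ${res.status}`);
  }
}
```

With a helper like this, logging feedback from anywhere in your app is a one-liner, e.g. `await logFeedback(heliconeId, true);`.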