This feature is currently in Beta. Use it at your own risk.

Prerequisites

You must have Helicone set up in proxy mode, via the Gateway or one of our custom packages. If you are not using the proxy, please refer to our Quick Start Async instead.

Read Proxy vs Async to learn more about the differences between proxy and async integrations.

Quick starts are available for TypeScript / JavaScript, packageless (curl), and other integrations; the TypeScript / JavaScript version is shown below.

TS/JS Quick Start

// 1. Add this line
import { hprompt } from "@helicone/helicone";

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      {
        role: "user",
        // 2. Add hprompt to any string, and nest each variable in an extra set of braces `{}`
        content: hprompt`Write a story about ${{ scene }}`,
      },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    // 3. Add Prompt Id Header
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
    },
  }
);
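The quick start above assumes an openai client already routed through the Helicone proxy. A minimal configuration sketch follows; the base URL and the Helicone-Auth header reflect Helicone's standard proxy setup, but verify them against your own account and deployment:

```typescript
// Minimal declaration so this compiles without Node type definitions.
declare const process: { env: Record<string, string | undefined> };

// Proxy configuration for the OpenAI client. The baseURL and the
// Helicone-Auth header are assumptions based on Helicone's standard
// proxy setup; adjust them for your deployment.
const heliconeConfig = {
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
};

// Usage with the official SDK:
// import OpenAI from "openai";
// const openai = new OpenAI(heliconeConfig);
```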

Step by Step

1. Add hprompt

import { hprompt } from "@helicone/helicone";
2. Replace any text input

Using JavaScript's tagged template literals, add hprompt in front of your backtick string. This formats your text so that Helicone can determine where your variables are.

Then nest each interpolated variable in an extra set of curly braces {}. This turns the interpolation into an object, which is how Helicone determines the input key (e.g. scene).

content: hprompt`Write a story about ${{ scene }}`,
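Under the hood, ${{ scene }} interpolates an object ({ scene: scene }) rather than a bare string, which is what lets the tag function recover the variable's name. The toy tag below illustrates the mechanism; it is not Helicone's actual implementation, and the <input> markup is invented for the example:

```typescript
// Toy tagged template: wraps each keyed input in a marker so a parser
// could recover both the key and the value. The <input> markup is
// purely illustrative, not Helicone's real wire format.
function demoPrompt(
  strings: TemplateStringsArray,
  ...values: Record<string, unknown>[]
): string {
  return strings.reduce((out, str, i) => {
    const value = values[i - 1];
    const key = Object.keys(value)[0];
    return out + `<input key="${key}">${String(value[key])}</input>` + str;
  });
}

const scene = "a rainy harbor";
const formatted = demoPrompt`Write a story about ${{ scene }}`;
// formatted === 'Write a story about <input key="scene">a rainy harbor</input>'
```

Because the tag receives { scene: "a rainy harbor" } instead of just the string, both the key name and the runtime value are recoverable.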
3. Tag your prompt and assign an id

Assign a Helicone-Prompt-Id header to your LLM request.

Assigning an id lets Helicone associate your prompt with its future versions and manage versioning automatically on your behalf.

Depending on the package you are using, you will need to add a header. For more information on adding headers to packages, please see Helicone Headers.

headers: {
  "Helicone-Prompt-Id": "prompt_story",
},
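Without the TS/JS package ("packageless"), the same headers travel on a raw HTTP request and the input markup is written by hand. The sketch below builds such a request; the endpoint, the Helicone-Auth header, and the helicone-prompt-input tag are assumptions based on Helicone's proxy and packageless docs, so verify the exact markup before relying on it:

```typescript
// Minimal declaration so this compiles without Node type definitions.
declare const process: { env: Record<string, string | undefined> };

// Building a packageless request by hand. Endpoint, auth headers, and
// the inline helicone-prompt-input tag are assumptions from Helicone's
// proxy/packageless docs; adjust for your deployment.
function buildStoryRequest(scene: string) {
  return {
    url: "https://oai.helicone.ai/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
        "Helicone-Prompt-Id": "prompt_story",
      },
      body: JSON.stringify({
        model: "gpt-3.5-turbo",
        messages: [
          {
            role: "user",
            // Hand-written input tag, standing in for what hprompt emits.
            content: `Write a story about <helicone-prompt-input key="scene">${scene}</helicone-prompt-input>`,
          },
        ],
      }),
    },
  };
}

// Usage:
// const { url, init } = buildStoryRequest("a rainy harbor");
// const res = await fetch(url, init);
```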