What is Prompt Management?

Helicone’s prompt management provides a seamless way to version, track, and optimize the prompts used in your AI applications.

Example: A prompt template designed for a course generation application.

Why manage prompts in Helicone?

Once you set up prompts in Helicone, your incoming requests will be matched to a Helicone-Prompt-Id, allowing you to:

  • version and track iterations of your prompt over time.
  • maintain a dataset of inputs and outputs for each prompt.

Quick Start

Prerequisites

Please set up Helicone in proxy mode using one of the methods in the Starter Guide.

Not sure if proxy is for you? We’ve created a guide to explain the difference between Helicone Proxy vs Helicone Async integration.
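For reference, a minimal proxy-mode setup with the OpenAI SDK looks like the sketch below (assuming Node.js, the openai package, and OPENAI_API_KEY/HELICONE_API_KEY set in your environment):

import OpenAI from "openai";

// Route requests through Helicone's proxy and authenticate to Helicone
// with the Helicone-Auth header; OpenAI auth is unchanged.
const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://oai.helicone.ai/v1",
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});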

Create prompt templates

As you modify your prompt in code, Helicone automatically tracks the new version and maintains a record of the old prompt. Additionally, a dataset of input/output keys is preserved for each version.

1. Import hpf

import { hpf } from "@helicone/prompts";
2. Add `hpf` and identify input variables

Prefixing your prompt with hpf and enclosing your input variables in an additional {} allows Helicone to easily detect your prompt and inputs. We’ve designed for minimal code changes to keep Prompts as easy as possible to use.

const location = "space";
const character = "two brothers";
const promptInput = hpf`
Compose a movie scene involving ${{ character }}, set in ${{ location }}
`;
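Under the hood, hpf wraps each input in tags so Helicone can recover the template from the logged request. The string actually sent to the LLM looks roughly like this (exact attribute formatting may vary by package version):

Compose a movie scene involving <helicone-prompt-input key="character">two brothers</helicone-prompt-input>, set in <helicone-prompt-input key="location">space</helicone-prompt-input>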

Static Prompts with hpstatic

In addition to hpf, Helicone provides hpstatic for creating static prompts that don’t change between requests. This is useful for system prompts or other constant text that you don’t want to be treated as variable input.

To use hpstatic, import it along with hpf:

import { hpf, hpstatic } from "@helicone/prompts";

Then, you can use it like this:

const systemPrompt = hpstatic`You are a helpful assistant.`;
const userPrompt = hpf`Write a story about ${{ character }}`;

The hpstatic function wraps the entire text in <helicone-prompt-static> tags, indicating to Helicone that this part of the prompt should not be treated as variable input.
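For example, the systemPrompt above is sent as:

<helicone-prompt-static>You are a helpful assistant.</helicone-prompt-static>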

Change input name

To rename your input or use a custom input name, change the key in the key-value pair passed to the string formatter function:

content: hpf`Write a story about ${{ "my_magical_input": character }}`,
3. Assign an id to your prompt

Assign a Helicone-Prompt-Id header to your LLM request.

Assigning an id allows Helicone to associate your prompt with its future versions and automatically manage versioning on your behalf.

headers: {
  "Helicone-Prompt-Id": "prompt_story",
},

Put it together

Let’s say we have an app that generates a short story, where users are able to input their own character. For example, the prompt is “Write a story about a secret agent”, where the character is “a secret agent”.

// 1. Add these lines
import { hpf, hpstatic } from "@helicone/prompts";

const chatCompletion = await openai.chat.completions.create(
  {
    messages: [
      {
        role: "system",
        // 2. Use hpstatic for static prompts
        content: hpstatic`You are a creative storyteller.`,
      },
      {
        role: "user",
        // 3. Add hpf to any string, and nest any variable in additional brackets `{}`
        content: hpf`Write a story about ${{ character }}`,
      },
    ],
    model: "gpt-3.5-turbo",
  },
  {
    // 4. Add Prompt Id Header
    headers: {
      "Helicone-Prompt-Id": "prompt_story",
    },
  }
);

Create prompts from the UI

In Helicone, you can create a prompt without touching the codebase, making it easier for technical and non-technical teammates to collaborate.

1. Click on `Create Prompt`

2. Toggle on `I'm not technical`


3. Click on `Save prompt`

Use prompts created in the UI in code

If you’ve created a prompt on the UI, you can easily pull this prompt into your codebase by calling the following API endpoint:

import OpenAI from "openai";

// Minimal response shapes so this example type-checks; swap in your
// own types if you already have them.
type Result<T, E> = { data: T | null; error: E | null };
type PromptVersionCompiled = { filled_helicone_template: any };

const YOUR_HELICONE_API_KEY = process.env.HELICONE_API_KEY;

export async function getPrompt(
  id: string,
  variables: Record<string, any>
): Promise<any> {
  const getHeliconePrompt = async (id: string) => {
    // Ask Helicone to compile the stored template with the given inputs.
    const res = await fetch(
      `https://api.helicone.ai/v1/prompt/${id}/template`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${YOUR_HELICONE_API_KEY}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          inputs: variables,
        }),
      }
    );

    return (await res.json()) as Result<PromptVersionCompiled, any>;
  };

  const heliconePrompt = await getHeliconePrompt(id);
  if (heliconePrompt.error) {
    throw new Error(heliconePrompt.error);
  }
  return heliconePrompt.data?.filled_helicone_template;
}

async function pullPromptAndRunCompletion() {
  const prompt = await getPrompt("my-prompt-id", {
    color: "red",
  });
  console.log(prompt);

  const openai = new OpenAI({
    apiKey: "YOUR_OPENAI_API_KEY",
    baseURL: `https://oai.helicone.ai/v1/${YOUR_HELICONE_API_KEY}`,
  });
  const response = await openai.chat.completions.create(
    prompt satisfies OpenAI.Chat.Completions.ChatCompletionCreateParamsStreaming
  );
  console.log(response);
}

Run experiments

Once you’ve set up prompt management, you can use Helicone’s Experiments feature to test and improve your prompts.

Local testing

In development, you'll often want to test your prompt locally before deploying it to production, without Helicone tracking each iteration as a new prompt version.

To do this, set the Helicone-Prompt-Mode header to testing in your LLM request. This prevents Helicone from tracking new prompt versions.

headers: {
  "Helicone-Prompt-Mode": "testing",
},
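A practical pattern is to derive the header from your environment, so local runs are marked as testing while production traffic still creates new versions. A sketch, assuming NODE_ENV distinguishes your environments:

const headers: Record<string, string> = {
  "Helicone-Prompt-Id": "prompt_story",
  // Mark non-production runs as testing so local iteration doesn't
  // create new prompt versions.
  ...(process.env.NODE_ENV !== "production"
    ? { "Helicone-Prompt-Mode": "testing" }
    : {}),
};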

FAQ