
Introduction

The Helicone Chat Model is a community node for n8n that provides a LangChain-compatible chat model for AI workflows, routing requests to any LLM provider through the Helicone AI Gateway. It plugs directly into n8n’s AI chain functionality.
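The gateway speaks the OpenAI chat completions format, so changing providers usually amounts to changing the model name. The TypeScript sketch below illustrates the kind of call the node makes on your behalf; the endpoint URL and the HELICONE_API_KEY environment variable are assumptions to adapt to your setup.

// Hedged sketch of a direct gateway call. The endpoint URL below is an
// assumption; confirm the exact base URL in Helicone's AI Gateway docs.
const GATEWAY_URL = "https://ai-gateway.helicone.ai/v1/chat/completions";

const response = await fetch(GATEWAY_URL, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.HELICONE_API_KEY}`, // sk-helicone-...
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4.1-mini", // swap to e.g. "claude-opus-4-1" to route to another provider
    messages: [{ role: "user", content: "Hello from n8n!" }],
  }),
});

console.log((await response.json()).choices[0].message.content);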

Prerequisites

  • An n8n instance where you can install community nodes
  • A Helicone account and API key

Integration Steps

Step 1: Install the Helicone community node

From your n8n interface:
  1. Click the user menu (bottom left corner)
  2. Select Settings
  3. Go to Community Nodes
  4. Click Install a community node
  5. Enter the package name: n8n-nodes-helicone
  6. Click Install
Wait ~30 seconds for installation. The node will appear in your nodes panel.
Learn more about installing community nodes in the n8n documentation.

Step 2: Configure Helicone credentials

Add your Helicone API key to n8n:
  1. Go to Settings → Credentials
  2. Click Add Credential
  3. Search for “Helicone” and select Helicone LLM Observability
  4. Enter your Helicone API key
  5. Click Save

Step 3: Add the Helicone Chat Model node to your workflow

  1. Create a new workflow or open an existing one
  2. Click “+” to add a node
  3. Search for “Helicone Chat Model”
  4. Configure the node:
    • Credentials: Select your saved Helicone credentials
    • Model: Choose any model from the model registry (e.g., gpt-4.1-mini, claude-3-opus-20240229)
    • Options: Configure temperature, max tokens, and other model parameters
The Helicone Chat Model node outputs a LangChain-compatible model that can be used with other AI nodes in n8n.
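Conceptually, that output behaves like a LangChain.js chat model pointed at the gateway. The sketch below is only an illustration of the idea, not the node’s actual implementation, and the base URL is an assumption.

import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

// Illustration only: roughly what a LangChain-compatible chat model routed
// through the Helicone AI Gateway looks like. The community node's internals
// may differ; the base URL is an assumption.
const model = new ChatOpenAI({
  model: "gpt-4.1-mini",
  temperature: 0.7,
  apiKey: process.env.HELICONE_API_KEY,
  configuration: { baseURL: "https://ai-gateway.helicone.ai/v1" },
});

const reply = await model.invoke([new HumanMessage("Summarize this ticket for me.")]);
console.log(reply.content);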
Step 4: Use in AI chains

The Helicone Chat Model node is designed to work with n8n’s AI chain functionality:
  1. Connect the node to other AI nodes that accept ai_languageModel inputs
  2. Build complex AI workflows with Chat nodes, Chain nodes, and other AI processing nodes
  3. All requests are automatically logged to Helicone
Example workflow: Chat Input → Helicone Chat Model → Chat Output
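As a rough mental model, such a chain is the LangChain.js equivalent of piping a prompt template into the chat model. A minimal, hedged sketch (same assumed base URL as above):

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Rough LangChain.js equivalent of: Chat Input → Helicone Chat Model → Chat Output.
const model = new ChatOpenAI({
  model: "gpt-4.1-mini",
  apiKey: process.env.HELICONE_API_KEY,
  configuration: { baseURL: "https://ai-gateway.helicone.ai/v1" }, // assumed base URL
});

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful support assistant."],
  ["human", "{input}"],
]);

const chain = prompt.pipe(model).pipe(new StringOutputParser());
console.log(await chain.invoke({ input: "How do I reset my password?" }));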
Step 5: View requests in Helicone dashboard

Open your Helicone dashboard to see:
  • All workflow requests logged automatically
  • Token usage and costs per request
  • Response time metrics
  • Full request/response bodies
  • Session tracking for multi-turn conversations
  • Custom properties for filtering and analysis

Node Configuration

Required Parameters

  • Model: Any model supported by the Helicone AI Gateway. Examples: gpt-4.1-mini, claude-opus-4-1, gemini-2.5-flash-lite. See all available models in Helicone’s model registry

Model Options

  • Temperature (0-2): Controls randomness in responses
  • Max Tokens: Maximum tokens to generate
  • Top P (0-1): Nucleus sampling parameter
  • Frequency Penalty (-2 to 2): Reduces repetition
  • Presence Penalty (-2 to 2): Encourages new topics
  • Response Format: Text or JSON
  • Timeout: Request timeout in milliseconds
  • Max Retries: Number of retry attempts on failure
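As a hedged illustration, these options map onto standard OpenAI-style request fields roughly as follows; the node’s internal field names may differ, and Timeout and Max Retries are applied client-side rather than sent with the request.

// Illustrative mapping of the node's options onto OpenAI-style request fields.
const modelOptions = {
  model: "gpt-4.1-mini",
  temperature: 0.7,        // 0-2: higher values give more varied output
  max_tokens: 1024,        // upper bound on generated tokens
  top_p: 1,                // nucleus sampling cutoff
  frequency_penalty: 0,    // -2 to 2: penalizes repeated tokens
  presence_penalty: 0,     // -2 to 2: nudges toward new topics
  response_format: { type: "text" }, // or { type: "json_object" }
};
// Timeout (ms) and Max Retries are client-side settings, not request parameters.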

Example Workflows

Basic Chat Workflow

[Chat Input] → [Helicone Chat Model] → [Chat Output]
  1. Add a Chat Input node (triggers on user message)
  2. Add the Helicone Chat Model node
    • Model: gpt-4.1-mini
    • Temperature: 0.7
  3. Add a Chat Output node to display the response

Multi-Step AI Chain

[Webhook] → [Helicone Chat Model] → [Extract Data] → [Helicone Chat Model] → [Response]
  1. Receive data via webhook
  2. First Helicone Chat Model analyzes the input
  3. Extract structured data
  4. Second Helicone Chat Model generates a response
  5. Both requests appear in the Helicone dashboard with session tracking (see the sketch below)
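Helicone groups related requests into a session via request headers. The hedged sketch below shows that grouping for a direct gateway call; whether the node exposes equivalent session fields is worth confirming under its Helicone Options (the endpoint URL is an assumption).

// Hedged sketch: Helicone session headers that tie multi-step calls together.
const sessionHeaders = {
  "Helicone-Session-Id": "run-42-0f1e2d3c",   // share one ID across both model calls
  "Helicone-Session-Name": "webhook-analysis",
  "Helicone-Session-Path": "/analyze",        // e.g. "/respond" for the second call
};

await fetch("https://ai-gateway.helicone.ai/v1/chat/completions", { // assumed URL
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.HELICONE_API_KEY}`,
    "Content-Type": "application/json",
    ...sessionHeaders,
  },
  body: JSON.stringify({
    model: "gpt-4.1-mini",
    messages: [{ role: "user", content: "Analyze this webhook payload." }],
  }),
});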

Workflow with Custom Properties

Configure the node with custom properties to track workflow metadata:
  1. Open the Helicone Chat Model node
  2. Expand Helicone Options → Custom Properties
  3. Add a JSON object:
{
  "workflow_name": "customer-onboarding",
  "environment": "production",
  "version": "2.1.0"
}
All requests from this node will include these properties in Helicone.
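Under the hood, custom properties reach Helicone as Helicone-Property-* request headers. For reference, the JSON above corresponds to headers like the sketch below; the node adds them for you, so you would only set them manually when calling the gateway directly.

// How the custom properties above map onto Helicone request headers.
const propertyHeaders = {
  "Helicone-Property-workflow_name": "customer-onboarding",
  "Helicone-Property-environment": "production",
  "Helicone-Property-version": "2.1.0",
};
// Merge these into the headers of a direct gateway request; the node does this automatically.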

Troubleshooting

Node Installation Issues

  • Node not appearing: Wait 30 seconds after installation, then refresh n8n
  • Installation failed: Check your n8n instance has internet access
  • Version conflicts: Ensure you’re running a compatible n8n version (>= 1.0)

Authentication Errors

  • Invalid API key: Verify your Helicone API key starts with sk-helicone-
  • 403 Forbidden: Ensure your API key has write access enabled
  • Provider not configured: Check that the model name exactly matches the model ID the gateway expects. If you’ve added your own provider keys, make sure they are configured correctly in your Helicone dashboard

Model Errors

  • Model not found: Check the exact model name at Helicone’s model registry
  • Model unavailable: Verify provider access in your Helicone account
  • Different naming: Providers use different conventions (e.g., OpenAI uses gpt-4o-mini, while the gateway uses gpt-4.1-mini)

Getting Help