Helicone Guides
Guides for building, optimizing, and analyzing LLM applications with Helicone.
Cookbooks
Practical, hands-on guides for building and optimizing your LLM applications with Helicone.
LLM Application Development
How to run prompt experiments
Test your LLM prompts with real-world data and prevent regressions.
How to replay LLM sessions
Optimize your AI agents and improve their performance with Helicone's Sessions feature.
How to debug your LLM app
Identify and debug errors in your LLM app with Helicone.
Data Management and Analytics
ETL / data extraction
Extract data from your LLM app with Helicone’s ETL.
Segmenting data with Custom Properties
Derive insights into costs and user behavior; track environments, user types, and more.
Labeling request data
Make it easier to search and filter your request data in Helicone.
Getting user requests
Retrieve user-specific requests to monitor, debug, and track costs for individual users.
Getting Sessions
Retrieve session data to analyze and replay conversation threads.
Integration and Environment Management
Environment tracking
Track and manage your development, staging, and production environments.
Integrating Helicone with GitHub Actions
Automate the monitoring and caching of LLM calls in your CI pipelines.
Knowledge Base
Educational resources to deepen your understanding of LLM concepts and best practices.
Prompt Engineering
Be specific and clear
Create concise prompts for better LLM responses.
Use structured formats
Format the generated output for easier parsing and interpretation.
Role-playing
Assign specific roles in system prompts to set the style, tone, and content.
Few-shot learning
Provide examples of desired outputs to guide the LLM towards better responses.
Use constrained outputs
Set clear rules for the model’s responses to improve accuracy and consistency.
Chain-of-thought prompting
Encourage the model to generate intermediate reasoning steps before arriving at a final answer.
Thread-of-thought prompting
Build on previous ideas to maintain a coherent line of reasoning between interactions.
Least-to-most prompting
Break down complex problems into smaller parts, gradually increasing in complexity.
Meta-prompting
Use LLMs to create and refine prompts dynamically.
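Several of the techniques above combine naturally in one request. The sketch below is illustrative only (the helper name and example data are hypothetical, not part of Helicone): it assembles a role-playing system message, few-shot examples, and a chain-of-thought instruction into a chat-style message list that could be sent to any chat completion API.

```python
def build_prompt(role, examples, question):
    """Assemble a chat-style message list combining three techniques:
    role-playing, few-shot learning, and chain-of-thought prompting."""
    # Role-playing: a system message sets style, tone, and content.
    messages = [{"role": "system", "content": role}]
    # Few-shot learning: prior user/assistant pairs demonstrate the
    # desired output format.
    for user_msg, assistant_msg in examples:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    # Chain-of-thought: ask for intermediate reasoning before the answer.
    messages.append({
        "role": "user",
        "content": f"{question}\nThink step by step before giving the final answer.",
    })
    return messages

messages = build_prompt(
    role="You are a precise math tutor.",
    examples=[("What is 2 + 2?", "2 + 2 = 4.")],
    question="What is 17 * 6?",
)
```

The resulting list plugs directly into the `messages` parameter of most chat APIs; adding or removing few-shot pairs is a single-line change, which makes this structure convenient for the prompt experiments described above.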