
FAQ - Fine-tuning

Questions about fine-tuning in Helicone.

How to use the OpenAI fine-tuning API?

A step-by-step guide to using the OpenAI fine-tuning API.

Fine-tuning Duration

Understand the time requirements for fine-tuning large language models.

RAG vs Fine-tuning

Compare Retrieval Augmented Generation (RAG) and fine-tuning approaches.
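As background for the OpenAI fine-tuning topic above: the OpenAI fine-tuning API expects a JSONL training file where each line is a JSON object with a "messages" list in chat format. A minimal sketch of preparing and validating such a file is below (the example records and file name are invented for illustration; they are not from Helicone's guide):

```python
import json
import os
import tempfile

# Hypothetical training examples in the chat format the OpenAI
# fine-tuning API expects: one {"messages": [...]} object per record.
examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "How do I enable caching?"},
        {"role": "assistant", "content": "Send the Helicone-Cache-Enabled: true header."},
    ]},
    {"messages": [
        {"role": "system", "content": "You are a concise support assistant."},
        {"role": "user", "content": "Can I omit request logs?"},
        {"role": "assistant", "content": "Yes, set the Helicone-Omit-Request header."},
    ]},
]

def write_jsonl(records, path):
    """Write one JSON object per line -- the layout fine-tuning uploads use."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

# Invented file name, written to the system temp directory.
path = os.path.join(tempfile.gettempdir(), "train_sketch.jsonl")
write_jsonl(examples, path)

# Sanity check: every line parses back as a dict with a "messages" key.
with open(path, encoding="utf-8") as f:
    lines = [json.loads(line) for line in f]
assert all("messages" in rec for rec in lines)
```

Once a file like this is prepared, it is uploaded through OpenAI's Files API and referenced when creating the fine-tuning job; the linked guide covers those steps.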
