Prompt Engineering
Overview
Prompt engineering is the strategic crafting of prompts to guide Large Language Models to produce accurate and desired outputs.
Before prompt engineering
- Have a first draft of your prompt
- Know the audience you are tailoring your prompt to
- Have a benchmark to measure prompt improvements against
- Have example inputs and desired outputs to test your prompts with
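The last two items can be combined into a simple evaluation loop. A minimal sketch is below; `call_model` is a hypothetical placeholder (here it just uppercases its input) that you would replace with a real LLM API call, and the example data is illustrative.

```python
def call_model(prompt: str, example_input: str) -> str:
    # Placeholder: a real implementation would send `prompt` and
    # `example_input` to an LLM API and return its completion.
    return example_input.strip().upper()

# Example inputs paired with desired outputs (illustrative data).
examples = [
    {"input": "hello", "expected": "HELLO"},
    {"input": "world", "expected": "WORLD"},
]

def score_prompt(prompt: str, examples) -> float:
    """Return the fraction of examples where the model output
    exactly matches the desired output."""
    hits = sum(
        call_model(prompt, ex["input"]) == ex["expected"]
        for ex in examples
    )
    return hits / len(examples)

accuracy = score_prompt("Convert the input to uppercase.", examples)
print(f"accuracy: {accuracy:.0%}")
```

Running the same scoring function before and after a prompt change gives you the benchmark number the checklist asks for.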
Prompt engineering techniques
- Be specific and clear
- Use structured formats
- Leverage role-playing
- Implement few-shot learning
- Use constrained outputs
- Use chain-of-thought prompting
- Use thread-of-thought prompting
- Use least-to-most prompting
- Use meta-prompting
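As one concrete illustration of the list above, few-shot learning means showing the model a handful of input/output pairs before the real input. A minimal sketch of assembling such a prompt for sentiment classification (the examples and labels are illustrative):

```python
# Illustrative labeled examples to include in the prompt.
few_shot_examples = [
    ("The movie was fantastic!", "positive"),
    ("I want a refund.", "negative"),
]

def build_few_shot_prompt(examples, new_input: str) -> str:
    """Assemble a few-shot prompt: an instruction, labeled
    demonstrations, then the unlabeled input for the model to complete."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The prompt ends mid-pattern so the model fills in the label.
    lines.append(f"Review: {new_input}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(few_shot_examples, "Great service and fast shipping.")
print(prompt)
```

Ending the prompt at `Sentiment:` constrains the model to continue the established pattern, which also pairs naturally with the "constrained outputs" technique.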
When should prompt engineering be used?
- From the beginning. It’s never too early to think about how your prompt will affect the output.
- When refining model outputs to meet your expectations.
- When expanding features and needing the model to adapt to new use cases.
- When optimizing cost and performance. Prompt engineering can reduce token usage, lower latency, and improve output quality.
Why prompt engineering is important
- Get more accurate and relevant responses.
- Get responses that follow specific instructions, styles, or formats.
- Lower API costs by decreasing the number of tokens used.
- Avoid inappropriate or biased outputs.
- Get consistent and reliable responses across different interactions.
- Improve user experience with more helpful and concise responses.