Prompt Engineering
Prompt engineering is the practice of crafting inputs to large language models (LLMs) so that they reliably produce the output you want.
Core Principles
Be Specific About Format
Vague prompts produce vague outputs. If you want JSON, ask for JSON and show the schema.
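A minimal sketch of what "ask for JSON and show the schema" can look like in practice. The schema, field names, and review text below are illustrative, not taken from any particular API:

```python
# Illustrative format-specific prompt: state the output format explicitly
# and include the schema the model must match.
schema = """{
  "sentiment": "positive | negative | neutral",
  "confidence": <float between 0 and 1>
}"""

prompt = (
    "Classify the sentiment of the review below.\n"
    "Respond with JSON only, matching this schema exactly:\n"
    f"{schema}\n\n"
    "Review: The battery died after two days."
)
print(prompt)
```

The key moves are "JSON only" (suppresses prose around the answer) and an exact schema (pins down field names and value types).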
Few-Shot Examples
Showing 2–3 examples of input/output pairs dramatically improves accuracy on structured tasks.
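One way to assemble a few-shot prompt for a structured extraction task. The task, examples, and JSON fields here are made up for illustration:

```python
# Illustrative few-shot prompt: two worked input/output pairs, then the
# real input with a trailing "Output:" cue for the model to complete.
examples = [
    ("Meeting with Dana at 3pm Friday", '{"person": "Dana", "time": "3pm Friday"}'),
    ("Call Ravi tomorrow morning", '{"person": "Ravi", "time": "tomorrow morning"}'),
]

prompt_parts = ["Extract the person and time as JSON.\n"]
for text, output in examples:
    prompt_parts.append(f"Input: {text}\nOutput: {output}\n")
prompt_parts.append("Input: Lunch with Mei at noon\nOutput:")
prompt = "\n".join(prompt_parts)
print(prompt)
```

Ending the prompt with a bare `Output:` nudges the model to continue the established pattern rather than explain itself.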
Chain of Thought
For reasoning tasks, ask the model to “think step by step” before giving the final answer.
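A sketch of the two halves of this pattern: a prompt that requests step-by-step reasoning with a marked final answer, and code that extracts that answer. The question, answer-line convention, and the stand-in model response are all illustrative:

```python
# Illustrative chain-of-thought prompt: ask for reasoning first, then a
# final answer on a clearly marked line that is easy to parse.
cot_prompt = (
    "Q: A store sells pens in packs of 12. I need 30 pens. "
    "How many packs should I buy?\n"
    "Think step by step, then give the final answer on a line "
    "starting with 'Answer:'."
)

# Stand-in for a model reply (made up here), showing why the marker helps:
response = (
    "30 / 12 = 2.5, and partial packs are not sold, so round up.\n"
    "Answer: 3 packs"
)
final = next(line for line in response.splitlines() if line.startswith("Answer:"))
print(final)  # prints "Answer: 3 packs"
```

Asking for a marked answer line means the reasoning can be as verbose as the model likes without complicating extraction.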
Prompt Structure Template
A reliable prompt structure for most tasks:
- Role: “You are an expert at…”
- Task: “Your job is to…”
- Constraints: “Do not…”, “Always…”
- Format: Explicit output format with examples
- Input: The actual data to process
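The five parts above can be assembled mechanically. The helper below is a hypothetical sketch (the function name, ticket example, and JSON format are all illustrative):

```python
# Hypothetical helper that assembles the Role/Task/Constraints/Format/Input
# template into a single prompt string.
def build_prompt(role, task, constraints, fmt, data):
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Your job is to {task}.\n\n"
        f"Constraints:\n{constraint_lines}\n\n"
        f"Output format:\n{fmt}\n\n"
        f"Input:\n{data}"
    )

prompt = build_prompt(
    role="an expert at summarizing support tickets",
    task="produce a one-sentence summary and a priority label",
    constraints=["Do not invent details.", "Always include a priority."],
    fmt='{"summary": "<one sentence>", "priority": "low | medium | high"}',
    data="Customer reports the app crashes on login since yesterday's update.",
)
print(prompt)
```

Keeping the sections in a fixed order makes prompts easier to diff and iterate on as requirements change.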
Common Pitfalls
- Instruction conflict: Multiple instructions that contradict each other cause unpredictable behavior.
- Ambiguous pronouns: “it”, “this”, “that” confuse models when the referent is not clear.
- Too many constraints: Models tend to drop or ignore constraints once a prompt stacks up more than about 5–7 of them; keep the list short and prioritize the constraints that matter most.
- No examples for unusual formats: If you need a non-standard output format, always include an example.
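To illustrate the last pitfall: the sketch below pairs a non-standard output format (a made-up pipe-delimited record) with one worked example, rather than describing the format in words alone:

```python
# Illustrative prompt for a non-standard format: the single worked
# example removes ambiguity about field order and delimiters.
prompt = (
    "Extract the fields as a single pipe-delimited line: name|city|age\n\n"
    "Example:\n"
    "Text: Ana, 34, lives in Lima.\n"
    "Output: Ana|Lima|34\n\n"
    "Text: Bo, 41, lives in Oslo.\n"
    "Output:"
)
print(prompt)
```

Note that the example also settles a question the format spec alone leaves open: the output order (`name|city|age`) differs from the order the fields appear in the text.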