Prompt Engineering

Prompt engineering is the practice of crafting inputs to large language models (LLMs) so they reliably produce the output you want.

Core Principles

Be Specific About Format

Vague prompts produce vague outputs. If you want JSON, ask for JSON and show the schema.
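As a minimal sketch of this principle, the snippet below builds a prompt that names the target format and embeds the schema directly. The schema fields and the review text are illustrative, not tied to any particular API:

```python
import json

# Illustrative schema: the model sees exactly which keys and value
# types are expected, instead of inferring them from a vague request.
schema = {
    "title": "string",
    "sentiment": "positive | negative | neutral",
    "confidence": "float between 0 and 1",
}

prompt = (
    "Extract the sentiment of the review below.\n"
    "Respond with ONLY a JSON object matching this schema:\n"
    f"{json.dumps(schema, indent=2)}\n\n"
    "Review: The battery life is excellent but the screen scratches easily."
)
print(prompt)
```

Embedding the schema verbatim (rather than describing it in prose) removes most of the ambiguity about key names and value types.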

Few-Shot Examples

Showing 2–3 examples of input/output pairs dramatically improves accuracy on structured tasks.
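A sketch of a few-shot prompt for a structured task, here date normalization; the task and example pairs are illustrative:

```python
# Three worked input/output pairs, then the real input with an
# empty "Output:" slot for the model to complete.
examples = [
    ("March 5th, 2021", "2021-03-05"),
    ("12/31/1999", "1999-12-31"),
    ("1 Jan 2024", "2024-01-01"),
]

shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
prompt = (
    "Convert each date to ISO 8601 (YYYY-MM-DD).\n\n"
    f"{shots}\n"
    "Input: July 4, 1776\nOutput:"
)
print(prompt)
```

Ending the prompt with a bare `Output:` cues the model to continue the established pattern rather than explain it.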

Chain of Thought

For reasoning tasks, ask the model to “think step by step” before giving the final answer.
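One practical wrinkle: once the model emits its reasoning, the caller needs to separate it from the final answer. A sketch, assuming a simple `ANSWER:` delimiter convention (the question and helper name are illustrative):

```python
question = "A train travels 120 km in 1.5 hours. What is its average speed?"

prompt = (
    f"{question}\n\n"
    "Think step by step. Show your reasoning, then give the final answer "
    "on its own line prefixed with 'ANSWER:'."
)

def extract_answer(response: str) -> str:
    """Pull the final answer out of a chain-of-thought response."""
    for line in response.splitlines():
        if line.startswith("ANSWER:"):
            return line[len("ANSWER:"):].strip()
    return response.strip()  # fall back to the whole response
```

Asking for a fixed prefix makes the response machine-parseable without suppressing the reasoning that improves accuracy.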

Prompt Structure Template

A reliable prompt structure for most tasks:

  1. Role: “You are an expert at…”
  2. Task: “Your job is to…”
  3. Constraints: “Do not…”, “Always…”
  4. Format: Explicit output format with examples
  5. Input: The actual data to process
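The five parts above can be sketched as a small builder function; the parameter names and the sample contract clause are illustrative:

```python
def build_prompt(role, task, constraints, output_format, data):
    """Assemble the role / task / constraints / format / input template."""
    parts = [
        f"You are {role}.",
        f"Your job is to {task}.",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        f"Output format:\n{output_format}",
        f"Input:\n{data}",
    ]
    return "\n\n".join(parts)

prompt = build_prompt(
    role="an expert at summarizing legal documents",
    task="summarize the contract clause in one sentence",
    constraints=["Do not give legal advice.", "Always use plain language."],
    output_format='One sentence, e.g. "The tenant must give 30 days notice."',
    data="Clause 4.2: The lessee shall provide written notice of termination.",
)
print(prompt)
```

Keeping the five sections in a fixed order makes prompts easier to diff and debug when a task starts misbehaving.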

Common Pitfalls

  • Instruction conflict: Multiple instructions that contradict each other cause unpredictable behavior.
  • Ambiguous pronouns: “it”, “this”, “that” confuse models when the referent is not clear.
  • Too many constraints: Beyond roughly 5–7 constraints, models start silently dropping some of them.
  • No examples for unusual formats: If you need a non-standard output format, always include an example.
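To illustrate the last pitfall's fix, here is a sketch of a prompt requesting a non-standard pipe-delimited format with a worked example inline; the listing and field names are illustrative:

```python
# Without the "Example:" line, models often substitute a more common
# format (JSON, a bulleted list) for the pipe-delimited one requested.
prompt = (
    "Extract the product name, price, and rating from the listing.\n"
    "Output format: name | price | rating\n"
    "Example: Acme Toaster | $29.99 | 4.5\n\n"
    "Listing: The WidgetPro 3000 costs $49.00 and has a 4.8-star rating."
)
print(prompt)
```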