Self-Reflection / Internalized Critique
A single-prompt technique in which the AI generates, reviews, and improves its own response before presenting it
What Is This Pattern?
The Self-Reflection pattern is a single-prompt technique that instructs the AI to generate a response, critically review it, identify flaws, and improve it before presenting the final answer. Unlike the iterative Critique & Improve pattern, the entire cycle happens in a single pass, making it faster and easier to automate while producing a higher-quality first-pass response.
How It Works
You structure the prompt to instruct the AI to: (1) generate an initial response, (2) critically review its own output for errors, style problems, and missed improvements, (3) apply those improvements, and (4) present only the final, improved version. This internalizes the quality-control loop within a single request.
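As a minimal sketch, the four steps can be folded into one prompt template and sent in a single request. The snippet below assumes the official `openai` Python package and uses `gpt-4o` purely as an illustrative model name; the template itself works with any chat-completion client.

```python
from openai import OpenAI  # assumes the official openai package is installed

# Self-reflection prompt template: generate, review, improve, then return
# only the final version -- all within a single request.
SELF_REFLECTION_TEMPLATE = """\
{task}

Before answering, work through these steps internally:
1. Draft an initial response.
2. Critically review the draft for {criteria}.
3. Revise the draft to fix every issue you found.
4. Output only the final, improved version. Do not show the draft or the review.
"""

def self_reflect(task: str, criteria: str = "errors, unclear wording, and missed edge cases") -> str:
    """Send a single self-reflection prompt and return the improved answer."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    prompt = SELF_REFLECTION_TEMPLATE.format(task=task, criteria=criteria)
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

Because the draft, the review, and the revision all happen inside one generation, the pattern adds no extra API calls, which is what makes it cheap enough for automated pipelines.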
When To Use This Pattern
- You need high-quality output in a single pass
- Building automated systems where iterative loops are costly
- Code generation where correctness is critical
- Technical documentation that must be accurate
- Any task where self-correction improves quality
- You want to reduce the need for human review
Example
Generate the Python code. Then, perform a self-reflection, checking for bugs, style guide violations, and inefficiencies. Provide only the final, improved code.
Best Practices
- Define specific criteria for the self-review (bugs, style, performance)
- Instruct the AI to provide only the final output, not the reflection process
- Use for critical outputs where correctness matters
- Combine with the persona pattern for expert-level review (see the sketch after this list)
- Specify the improvement areas explicitly
- Use for code, technical documentation, or analytical tasks
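To show how a persona and explicit review criteria combine, here is one hypothetical way to extend the earlier template; the persona wording and the criteria list are illustrative, not canonical choices.

```python
# Hypothetical helper that layers a reviewer persona and explicit criteria
# onto the self-reflection template from the "How It Works" sketch.
def build_code_review_prompt(task: str) -> str:
    persona = "You are a senior Python engineer and meticulous code reviewer."
    criteria = [
        "bugs and unhandled edge cases",
        "PEP 8 style violations",
        "inefficient algorithms or unnecessary allocations",
    ]
    steps = "\n".join(f"{i}. {c}" for i, c in enumerate(criteria, start=1))
    return (
        f"{persona}\n\n"
        f"{task}\n\n"
        "After drafting the code, silently review it for:\n"
        f"{steps}\n"
        "Fix everything you find and output only the final code."
    )

print(build_code_review_prompt("Write a function that deduplicates a list while preserving order."))
```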
Common Mistakes to Avoid
- Not being specific about what to review
- Asking the AI to show its reflection (adds noise)
- Using for simple tasks where it adds unnecessary overhead
- Not testing whether the reflection actually improves quality (a simple A/B check is sketched after this list)
- Expecting perfect results (still validate)
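To check whether the reflection step earns its overhead, one lightweight approach is an A/B comparison: run the same tasks with and without the reflection instructions and score both outputs. In the sketch below, `generate` and `score_output` are placeholders you would supply (a thin LLM client wrapper and a task-appropriate quality check such as unit tests, a linter, or a rubric).

```python
# Minimal A/B harness: compare plain vs. self-reflection prompts on the same tasks.
from typing import Callable

def compare_prompts(
    tasks: list[str],
    generate: Callable[[str], str],        # placeholder: wrapper around your LLM client
    score_output: Callable[[str], float],  # placeholder: higher is better (tests, linter, rubric)
) -> tuple[float, float]:
    """Return (average plain score, average self-reflection score)."""
    plain_scores, reflected_scores = [], []
    for task in tasks:
        plain_scores.append(score_output(generate(task)))
        reflected = generate(
            task
            + "\n\nReview your answer for errors and fix them before responding. "
              "Output only the corrected final answer."
        )
        reflected_scores.append(score_output(reflected))
    n = len(tasks)
    return sum(plain_scores) / n, sum(reflected_scores) / n
```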