Prompt Engineering for AI Systems
Prompt Engineering — the discipline of constructing inputs to an LLM for predictable, high-quality results. It covers request structuring, context management, technique selection (Chain-of-Thought, Few-Shot, ReAct), parameter tuning, and iterative calibration.
Key Principles
Specificity: Unclear instructions produce unclear results. «Write a good description» vs. «Write 80–100 words, focus on customer benefits, use a friendly tone, avoid technical jargon».
Role and Context: «You are [role]. Your task is [goal]. Context: [conditions].»
Output Format: An explicit format (JSON, Markdown, a numbered list) eliminates ambiguity.
Constraints: What NOT to do is as important as what to do.
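The four principles above can be sketched as a prompt builder. This is a minimal illustration, not a library API; the role, product details, and word limits in the usage example are hypothetical.

```python
# Minimal sketch: assemble a prompt from role, task, context, format,
# and explicit constraints (what NOT to do).

def build_prompt(role, goal, context, output_format, constraints):
    """Combine role, task, context, output format, and constraints into one prompt."""
    constraint_lines = "\n".join(f"- Do NOT {c}" for c in constraints)
    return (
        f"You are {role}. Your task is {goal}.\n"
        f"Context: {context}\n"
        f"Output format: {output_format}\n"
        f"Constraints:\n{constraint_lines}"
    )

# Hypothetical usage for an e-commerce product description.
prompt = build_prompt(
    role="a marketing copywriter",
    goal="to write a product description of 80-100 words",
    context="an e-commerce listing for a noise-cancelling headset",
    output_format="a single paragraph of plain text",
    constraints=["use technical jargon", "exceed 100 words"],
)
print(prompt)
```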
Basic Techniques
- Template-based prompting: role, task, rules, output format.
- A/B testing of prompts with an LLM-as-judge.
- Verification of answers against the provided context.
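A/B testing with an LLM-as-judge can be sketched as follows. `call_llm` is a hypothetical stand-in for a real model API, stubbed here so the example runs offline; the eval task and answers are invented.

```python
# Minimal LLM-as-judge sketch: a judge prompt compares two candidate
# answers and the wins are tallied over an eval set.

def call_llm(prompt):
    # Hypothetical model call, stubbed: "prefers" the answer that
    # follows the specific instructions.
    return "B" if "80-100 words" in prompt else "A"

JUDGE_TEMPLATE = (
    "You are an impartial judge. Compare two answers to the same task.\n"
    "Task: {task}\nAnswer A: {a}\nAnswer B: {b}\n"
    "Reply with exactly one letter: A or B."
)

def judge(task, answer_a, answer_b):
    verdict = call_llm(JUDGE_TEMPLATE.format(task=task, a=answer_a, b=answer_b))
    return verdict.strip()

wins = {"A": 0, "B": 0}
eval_set = [("Describe the headset", "short blurb", "80-100 words, benefit-led copy")]
for task, out_a, out_b in eval_set:
    wins[judge(task, out_a, out_b)] += 1
print(wins)
```

In practice the judge verdict should be parsed defensively (models do not always reply with a bare letter), and positions A/B should be swapped across runs to control for position bias.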
Hallucination Management
Anti-hallucination addendum: «Answer only based on the provided context. If the answer is not in the context, say 'I don't have data on this.'»
Self-verification: the LLM checks whether every claim in its answer is supported by the context.
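The two techniques can be combined into a grounded-answer wrapper: a first pass answers from context only, a second pass verifies the claims. `call_llm` is again a hypothetical stub so the sketch runs offline.

```python
# Minimal anti-hallucination sketch: grounded answering plus a
# self-verification pass over the same context.

GROUNDED_TEMPLATE = (
    "Answer only based on the provided context. If the answer is not in the "
    "context, say 'I don't have data on this.'\n"
    "Context: {context}\nQuestion: {question}"
)

VERIFY_TEMPLATE = (
    "Context: {context}\nAnswer: {answer}\n"
    "Is every claim in the answer supported by the context? Reply YES or NO."
)

def call_llm(prompt):
    # Hypothetical stub: returns a canned verdict for the verification
    # prompt and a canned answer otherwise.
    return "YES" if "Reply YES or NO" in prompt else "The warranty is 2 years."

def grounded_answer(context, question):
    answer = call_llm(GROUNDED_TEMPLATE.format(context=context, question=question))
    verdict = call_llm(VERIFY_TEMPLATE.format(context=context, answer=answer))
    return answer if verdict.strip() == "YES" else "I don't have data on this."

print(grounded_answer("Warranty: 2 years.", "How long is the warranty?"))
```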
Practical Advice: Iterative Process
- Baseline → test on 20 examples → measure quality
- Add specificity → test → measure improvement
- Add Few-Shot examples → test
- Fine-tune constraints → final test
Each iteration must show a measurable improvement or be rolled back. Without metrics, prompt engineering degenerates into intuition.
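The measure-or-rollback loop above can be sketched in a few lines. `evaluate` is a hypothetical stub returning fixed accuracies for three prompt versions, standing in for a real run over the eval set.

```python
# Minimal sketch of the iterative loop: keep a candidate prompt only
# if it beats the current best on the eval set; otherwise roll back.

SCORES = {"baseline": 0.62, "specific": 0.74, "few_shot": 0.71}  # hypothetical

def evaluate(prompt_version, eval_set):
    """Hypothetical: fraction of the eval set answered correctly."""
    return SCORES[prompt_version]

eval_set = [f"example_{i}" for i in range(20)]  # ~20 test cases per iteration
best_version, best_score = "baseline", evaluate("baseline", eval_set)

for candidate in ["specific", "few_shot"]:
    score = evaluate(candidate, eval_set)
    if score > best_score:  # keep the change only if it measurably helps
        best_version, best_score = candidate, score
    # otherwise: roll back to the previous best version

print(best_version, best_score)
```

With these (invented) scores the loop keeps the "specific" version and discards the Few-Shot change, which regressed the metric.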
Timeline
- Basic prompt for use case: 1–3 days
- A/B testing with eval set: 3–5 days
- Production prompt with verification: 1 week