Prompt engineering isn't magic—it's a skill. The difference between a mediocre AI response and an exceptional one often comes down to how you phrase your request.
After writing thousands of prompts for ChatGPT, Claude, and other AI tools, I've identified 15 techniques that consistently produce better results. These aren't theoretical—I use them daily.
Why Most Prompts Fail
Bad prompt: “Write a blog post about marketing”
This will get you generic, unfocused content. Why? Too vague, no context, no constraints, no examples.
Good prompt: “Write a 1,500-word blog post about email marketing for SaaS startups. Target audience: founders with 0-100 employees. Focus on growing from 0 to 1,000 subscribers in 90 days. Include 5 specific tactics with real examples. Tone: actionable and encouraging, not salesy. Format with H2 headers, bullet points, and a 5-question FAQ section.”
See the difference? Specific, contextualized, constrained, and formatted.
The 15 Prompt Engineering Techniques
1. Role Assignment: Tell AI Who to Be
Start prompts with: “You are a [specific role]…”
Examples:
- “You are a senior email marketing strategist with 10 years at B2B SaaS companies…”
- “You are a Python developer specializing in data science…”
- “You are a skeptical investor reviewing pitch decks…”
Why it works: AI adopts the expertise, perspective, and communication style of that role.
2. Context Loading: Give Background Information
Bad: “Help me write an email”
Good: “I'm reaching out to a prospect who visited our pricing page 3 times but hasn't responded to my previous email from 5 days ago. Our product is a $99/month project management tool for remote teams. Write a follow-up email…”
The more relevant context, the better the output.
3. Format Specification: Exact Output Structure
Tell AI exactly how to format the response:
- “Format as: Title (H1), 3 main sections (H2), 5 bullet points per section”
- “Output as JSON with keys: title, summary, action_items”
- “Create a table with columns: Feature, Our Product, Competitor A, Competitor B”
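When you request structured output like JSON, you can also validate the reply in code before using it. A minimal Python sketch, assuming the title/summary/action_items keys from the example above (the reply string here is simulated, not a real API response):

```python
import json

def parse_model_json(reply: str) -> dict:
    """Parse a model reply that was prompted to return JSON with
    keys: title, summary, action_items. Raises if a key is missing."""
    data = json.loads(reply)
    for key in ("title", "summary", "action_items"):
        if key not in data:
            raise KeyError(f"missing expected key: {key}")
    return data

# Simulated model reply (in practice this comes from the API response):
reply = '{"title": "Q3 Plan", "summary": "Focus on retention.", "action_items": ["audit churn", "launch onboarding emails"]}'
parsed = parse_model_json(reply)
print(parsed["title"])  # → Q3 Plan
```

Validating up front means a malformed reply fails loudly instead of silently corrupting whatever consumes it downstream.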
4. Examples (Few-Shot Prompting)
Show AI 2-3 examples of what you want:
“Generate product descriptions in this style:
Example 1: [your example]
Example 2: [your example]
Now write one for: [new product]”
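The few-shot structure above is easy to assemble programmatically, which keeps your examples consistent across many requests. A minimal Python sketch (the function name and example strings are illustrative, not from any library):

```python
def build_few_shot_prompt(instruction: str, examples: list[str], new_item: str) -> str:
    """Assemble a few-shot prompt: instruction, numbered examples,
    then the new task at the end."""
    lines = [instruction, ""]
    for i, example in enumerate(examples, start=1):
        lines.append(f"Example {i}: {example}")
    lines.append("")
    lines.append(f"Now write one for: {new_item}")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Generate product descriptions in this style:",
    ["Sleek, minimal desk lamp for small workspaces.",
     "Rugged, waterproof backpack built for daily commutes."],
    "a noise-cancelling travel pillow",
)
print(prompt)
```

Storing your best examples in one place like this is also the first step toward the prompt library mentioned at the end of this guide.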
5. Constraints: Set Boundaries
Limit what AI can do:
- “Maximum 150 words”
- “Use only data from 2023-2024”
- “Do not use jargon or technical terms”
- “Write at an 8th-grade reading level”
- “Include exactly 5 examples”
6. Chain of Thought: Ask for Reasoning
Add: “Think step-by-step” or “Show your reasoning”
This forces AI to work through logic before answering, improving accuracy on complex tasks.
7. Negative Instructions: Say What NOT to Do
- “Do NOT use these words: revolutionary, game-changing, synergy”
- “Avoid generic advice—be specific”
- “Don't mention competitors by name”
8. Audience Specification
Define who will read/use the output:
- “Target audience: CTOs at enterprise companies”
- “Reader: Complete beginner with no coding experience”
- “For busy executives who skim content”
9. Tone and Style Direction
Be specific about voice:
- “Tone: Professional but warm, like talking to a colleague”
- “Style: Direct, no fluff, action-oriented”
- “Voice: First person, conversational, enthusiastic but not salesy”
10. Iterative Refinement
Don't expect perfection on the first try. Use follow-up prompts:
- “Make it 30% shorter”
- “Add more specific examples”
- “Change tone to be more urgent”
- “Focus more on benefits, less on features”
11. Multi-Step Instructions
Break complex tasks into steps:
“First, analyze the data and identify top 3 trends.
Second, for each trend, find 2 supporting examples.
Third, write a summary paragraph for each.
Finally, create action recommendations.”
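The steps above can be sent as one combined prompt, or one step per chat turn so each step builds on the previous reply. A sketch of both, assuming an OpenAI-style messages format (role/content dicts); the system message is an illustrative choice:

```python
steps = [
    "First, analyze the data and identify the top 3 trends.",
    "Second, for each trend, find 2 supporting examples.",
    "Third, write a summary paragraph for each.",
    "Finally, create action recommendations.",
]

# Option A: all steps in a single prompt
single_prompt = "Complete the following steps in order:\n" + "\n".join(steps)

# Option B: chat-style messages, one step per turn
# (in a real loop you would call the API after each user message,
# append the assistant reply, then append the next step)
messages = [{"role": "system", "content": "You are a data analyst."}]
messages.append({"role": "user", "content": steps[0]})
```

Option A is simpler; Option B gives you a checkpoint after each step, so you can correct course before the model moves on.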
12. Comparison Requests
Ask AI to evaluate options:
- “Compare approach A vs approach B across these criteria: cost, speed, quality”
- “Rank these 5 solutions from best to worst and explain why”
13. Perspective Shifting
Ask AI to analyze from different angles:
- “Evaluate this marketing campaign from: 1) Customer perspective, 2) Competitor perspective, 3) Investor perspective”
14. Quality Checks Built-In
Have AI verify its own work:
- “After writing, check for: grammatical errors, factual claims without sources, vague statements”
- “Review and score your output 1-10 on: clarity, actionability, creativity”
15. Temperature Control (Advanced)
In API calls, adjust temperature parameter:
- 0.0-0.3: Factual, consistent, predictable (analytics, data, code)
- 0.7-0.9: Creative, varied, interesting (marketing, storytelling)
- 1.0+: Highly creative, sometimes chaotic (brainstorming)
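A sketch of how this might look in practice, assuming an OpenAI-style Chat Completions payload; the model name and the task-to-temperature mapping are illustrative choices based on the ranges above:

```python
# Map task types to temperature settings from the ranges above
TEMPERATURE_BY_TASK = {
    "code": 0.2,           # factual, consistent, predictable
    "analytics": 0.2,
    "marketing": 0.8,      # creative, varied
    "brainstorming": 1.0,  # highly creative, sometimes chaotic
}

def build_request(task: str, prompt: str, model: str = "gpt-4o") -> dict:
    """Build a chat completion payload with a task-appropriate temperature.
    Unknown task types fall back to a balanced 0.7."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": TEMPERATURE_BY_TASK.get(task, 0.7),
    }

payload = build_request("brainstorming", "List 20 unconventional growth ideas.")
```

Centralizing the mapping means you tune temperatures once, per task type, instead of hard-coding them in every call.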
The Ultimate Prompt Template
Combine these techniques into one master template:
[ROLE]
You are a [specific expert role with relevant experience].
[CONTEXT]
Here's the situation: [relevant background information]
[TASK]
Your task is to [specific action] that [achieves specific goal].
[AUDIENCE]
Target audience: [who will use/read this]
[CONSTRAINTS]
- Length: [word/character count]
- Tone: [specific tone description]
- Format: [exact structure needed]
- Do NOT: [things to avoid]
[EXAMPLES] (optional)
Here are examples of the quality I expect:
[example 1]
[example 2]
[OUTPUT FORMAT]
Provide your response in this format:
[exact structure specification]
[QUALITY CHECK]
After generating, verify: [quality criteria]
Real Examples: Before vs After
Example 1: Email Subject Lines
Before: “Write email subject lines”
After: “You are a direct response copywriter. Create 10 email subject lines for a webinar about AI automation for small business owners. Goal: 40%+ open rate. Constraints: Under 50 characters, create curiosity without clickbait, include a number or question in at least 5. Tone: Professional but friendly. Avoid: ‘revolutionary’, ‘game-changing’, all caps.”
Example 2: Code Debugging
Before: “Fix this code: [code]”
After: “You are a senior Python developer. Here's code that should fetch data from an API and save to CSV, but it's returning a 403 error. Debug the issue, explain what's wrong in simple terms, provide the corrected code with inline comments, and suggest 2 ways to make it more robust. Code: [code]”
Common Prompt Engineering Mistakes
1. Too vague
“Write about marketing” → Specify what aspect, for whom, how long, what tone
2. Expecting mind-reading
AI doesn't know your business context unless you provide it
3. No examples
Show, don't just tell. Examples dramatically improve output quality
4. Accepting first output
Refine iteratively. First try is rarely best try
5. Not setting constraints
AI needs boundaries: length, tone, format, what to avoid
Tool-Specific Tips
ChatGPT
- Use Custom Instructions for consistent context
- Create Custom GPTs for repeated tasks
- Upload files for context (PDFs, CSVs, images)
Claude
- Handles longer, more detailed prompts well
- Excellent at analysis and reasoning tasks
- Can handle 200k token context (vs GPT-4's 128k)
Midjourney
- Use --ar for aspect ratio, --v 6 for the latest model
- Describe style, lighting, camera angle, mood
- Reference artists/art styles for consistency
Practice Exercise: Improve This Prompt
Bad prompt: “Create a landing page for my product”
Your turn: Rewrite using 5+ techniques from this guide. What role? What context? What constraints? What format?
Post your improved version in the comments!
Next Steps
- Bookmark the Ultimate Prompt Template above
- Try it on your next 5 AI requests
- Track which techniques work best for your use cases
- Build your own library of winning prompts
Want ready-made prompts? Browse our Prompts Library with 100+ tested templates for every use case.

