Why Generic Output Happens
Generic AI output is almost never a model failure. It is a prompt failure. When you give Claude a vague input, it produces the average of all reasonable responses to that type of vague input. That average is generic by definition.
"Write a blog post about AI" produces the average of all blog posts about AI. "Write a 900-word AEO-optimized post for a non-technical marketer who has heard about AI but does not know where to start, with a direct answer in the first paragraph and three specific actionable steps with real tool names" produces something specific and usable.
The rule: the specificity of the output equals the specificity of the input. No exceptions. If the output is generic, the prompt was generic. The fix is in the prompt, not in regenerating the output.
The Four-Part Framework
Role: What is Claude acting as? "Act as my content strategist" or "You are reviewing this as a skeptical buyer who is on the fence about the price."
Context: The specific situation. Who is the audience? What do they already know? What problem are we solving? What has already been tried?
Task: Exactly what to produce. Not "write something about X" but "write a 600-word AEO blog post with a direct answer in the first 75 words, three H2 sections, and a five-question FAQ."
Constraints: What to include, what to exclude, format requirements, length, tone, words to avoid. Constraints are what separate a specific, useful output from a reasonable but generic one.
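The four parts can be assembled mechanically. Here is a minimal sketch in Python: the `build_prompt` helper and all its example values are hypothetical, not part of any official SDK, but the structure mirrors the framework above (role, then context, then task, then constraints as an explicit list).

```python
def build_prompt(role: str, context: str, task: str, constraints: list[str]) -> str:
    """Assemble the four framework parts into one prompt string."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"{role}\n\n"
        f"Context: {context}\n\n"
        f"Task: {task}\n\n"
        f"Constraints:\n{constraint_lines}"
    )

# Example values are illustrative, drawn from the section above.
prompt = build_prompt(
    role="Act as my content strategist.",
    context=(
        "Audience: a non-technical marketer who has heard about AI "
        "but does not know where to start."
    ),
    task=(
        "Write a 600-word AEO blog post with a direct answer in the "
        "first 75 words, three H2 sections, and a five-question FAQ."
    ),
    constraints=[
        "Name real tools in the actionable steps.",
        "Avoid jargon.",
        "Keep the tone direct and practical.",
    ],
)
print(prompt)
```

Keeping the four parts as separate arguments makes the weak spot visible: if any argument is a one-liner like "write something about X," the prompt is generic before it is ever sent.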