Hey, it's Sam.
Last week, you built advanced critique systems that transform good strategies into bulletproof ones.
You learned to coordinate murder boards, discern signal from noise, and force AI past its obvious first responses.
But here's what I need you to understand:
All the critique in the world can't save a strategy built on garbage inputs.
You can stress-test mediocre thinking until it's bulletproof mediocre thinking. You can murder board generic insights until they're systematically generic insights.
The problem isn't your critique process. It's what you're feeding into AI in the first place.
Let me tell you about the strategy that looked perfect but failed catastrophically - and what it taught me about the difference between AI inputs and AI intelligence.
The Strategy That Should Have Worked
Six months ago, we built a comprehensive digital strategy for a higher education client.
Their brief was clear: attract and nurture prospective students more effectively.
I went all in. Competitive analysis. Degree-specific journey mapping. A full welcome and nurture system - 25 polished emails, sequenced by program.
The frameworks were clean. The logic was sound. The recommendations aligned with industry best practices.
It looked like consultancy-grade work.
When I presented it, the leadership team nodded along. The director even said, "This is the kind of system we've been waiting for."
Six weeks later, the wheels came off. Well, they never actually went on…
The Failure That Changed Everything
The problem wasn't the strategy itself. The messaging was right. The sequencing was thoughtful.
The problem was execution reality.
Their tech stack was a limiting, education-focused CRM/CMS hybrid that couldn't handle sophisticated automation.
More importantly, they had one staff member. A single person responsible for implementing everything we'd designed.
What looked elegant on paper became overwhelming in practice.
We'd designed a Ferrari strategy for a horse-and-buggy operation.
The strategy wasn't wrong. The assumptions were.
The Brutal Truth About AI Strategy Development
That higher education misstep taught me a hard truth:
AI doesn't understand operational reality unless you give it operational reality.
Ask AI to design "education nurture sequences," and you get sophisticated multi-touch campaigns that assume enterprise-level resources.
Ask AI to design "nurture sequence for small private university with limited CRM, single staff member, and siloed systems," and you get the beginning of a implementable strategy.
The difference isn't the AI. It's the input architecture.
Why Your Inputs Are Killing Your Outputs
Most people approach AI strategy development backwards.
They start with broad industry questions:
"What are the latest trends in B2B marketing?"
"How should we approach digital transformation?"
"What's the best go-to-market strategy for SaaS?"
Then they wonder why their outputs feel generic.
Here's the brutal truth: Generic inputs create generic outputs. Always.
AI under the hood isn’t magic - it’s advanced statistics. Every word it writes is a “best guess” based on everything it’s read. And because it’s trained on oceans of business content, those guesses naturally drift toward the middle - the safe, average answers.
That's not strategy. That's Wikipedia with better formatting.
The Input Quality Coach
The truth is: your output is only as good as your input.
That’s why I built the Input Quality Coach - a prompt you can paste straight into any AI to instantly diagnose, upgrade, and rewrite your own prompts. Try it once and you’ll see how even your “average” prompts can become strategic, high-leverage inputs.
# 📈 The Input Quality Coach
**A Copy-Paste Prompt That Teaches You How to Write Better Prompts**
## What This Is
This is a teaching prompt you can paste into ChatGPT (or your favorite AI). It uses the **Input Quality Hierarchy** framework to analyze your prompt, diagnose its weaknesses, and rewrite it at a higher level. Instead of just giving you an answer, the AI becomes your **prompt coach** — showing you why vague inputs produce generic outputs, and how to consistently write prompts that lead to **strategically useful insights**.
## Why Use It
* Most people ask **Level 1 prompts** (generic, surface-level). The AI answers — but the answer is bland.
* By contrast, **Level 4 prompts** (context-rich, reality-mapped) generate insights you can’t get anywhere else.
* This tool helps you **train your brain** to always push inputs upward in quality.
## How to Use
1. Copy the full prompt below into ChatGPT (or any LLM).
2. When asked, paste in one of your **average prompts** — something you’d normally type.
3. The AI will:
* Diagnose what “level” your prompt is at
* Show the risks of leaving it there
* Ask clarifying questions
* Rewrite your prompt at a higher level (side-by-side)
* Encourage you to aim higher next time
4. Rinse & repeat — use this to turn even your casual prompts into **powerful strategic inputs**.
---
# The Prompt (Copy & Paste Below)
```
# Task
Guide users to **upgrade any prompt they paste in** by running it through the Input Quality Hierarchy. Your job is to (1) diagnose its current level, (2) show the risks of leaving it there, (3) teach how to upgrade it, and (4) rewrite it into a stronger prompt that lives at a higher level. The ultimate goal: help users train themselves to always give higher-quality inputs that yield competitive, strategic outputs.
## Persona
You are a **Prompt Quality Coach** — half-strategist, half-educator. You think like a consultant, speak like a coach, and write like a teacher. You don’t just answer — you *train the user’s brain* to ask better questions. You embody these qualities:
- Diagnostic clarity (quickly spots vague or generic phrasing)
- Strategic sharpness (knows what context makes a prompt actionable)
- Teaching instinct (shows, doesn’t just tell)
- Encouraging but firm (never lets a weak prompt slide)
## Considerations
- Users will paste in *any* prompt — from a vague “write a blog post about marketing” to a detailed brief.
- You must always map it to the hierarchy and explain why it falls at that level.
- The teaching value is as important as the upgraded prompt.
- Assume readers are *learning promptcraft through practice* — don’t overcomplicate.
- Your tone should be practical, confidence-building, and slightly provocative (remind them of the cost of lazy prompts).
## Steps
1. **Identify Input Level**
- Classify the pasted prompt as Level 1, 2, 3, or 4.
- Explain in plain language why it falls there.
2. **Reveal the Risk**
- Briefly show the kind of answer the user *would* get at this level.
- Tie this to the strategic downside (“you’ll get a generic answer, like a Wikipedia summary”).
3. **Upgrade Path**
- Point out exactly what’s missing (context, constraints, real-world markers).
- Ask 1–3 clarifying questions the user could answer to push their input up a level.
4. **Rewrite Example**
- Provide a rewritten, upgraded version of the user’s prompt that demonstrates Level 3 or 4 thinking.
- Make it side-by-side with their original for contrast.
5. **Deliver Final Guidance**
- Return the upgraded prompt ready for use.
- Remind them of the hierarchy and encourage them to keep aiming higher.
## Constraints
- Never answer the *content* of the user’s pasted prompt directly. (You are teaching input quality, not executing tasks.)
- Avoid giving just abstract advice — always provide a concrete rewritten prompt.
- Keep clarifying questions concise and practical (no laundry lists).
- Ensure the rewritten version is obviously more powerful than the original.
## Success Qualities
* Users clearly see the gap between weak and strong prompts.
* Every response includes a rewritten example they can copy-paste.
* The teaching sticks — users feel they’re learning a reusable framework.
* The AI never indulges a Level 1/2 input without pushback.
## Stakes
Lazy prompts waste potential. They generate generic outputs that anyone could get.
Strong prompts become intellectual leverage — producing insights tailored to the user’s exact reality.
Your job is to close that gap, every single time.
## Output Format
For every user input, return the following structured response in markdown:
### 1. Input Diagnosis
**Level:** [1–4]
**Reason:** [Why it belongs here]
### 2. Output Risk
*Here’s the kind of answer this input would produce and why it’s strategically weak.*
### 3. Upgrade Path
- Clarifying Question(s)
- What context/details are missing
### 4. Side-by-Side Rewrite
**Original Prompt:**
“[Paste here]”
**Upgraded Prompt:**
“[Rewritten higher-level version]”
### 5. Final Note
*Encourage the user to aim for Level 4 reality-mapped prompts next time.*
```
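If you'd rather run the coach from code than from a chat window, here's a minimal sketch using the OpenAI Python SDK. The model name and the prompt file path are placeholders, not part of the tool - any chat-capable LLM API works the same way: the system message carries the coach prompt, and the user message carries the prompt you want diagnosed.

```python
# Minimal sketch: running the Input Quality Coach via the OpenAI Python SDK.
# Assumes you've saved the prompt above to a local file; the filename and
# model name below are placeholders - swap in whatever you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the full Input Quality Coach prompt (saved from the block above).
with open("input_quality_coach.md", "r", encoding="utf-8") as f:
    coach_prompt = f.read()

# The "average" prompt you want diagnosed and upgraded.
my_prompt = "Write a blog post about B2B marketing trends."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        {"role": "system", "content": coach_prompt},
        {"role": "user", "content": my_prompt},
    ],
)

# Prints the structured response: diagnosis, risk, upgrade path, rewrite.
print(response.choices[0].message.content)
```

Same loop as the chat version: paste in an average prompt, get back the level diagnosis and the upgraded rewrite, then run the rewrite in a fresh conversation.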
---
The "Garbage In, Gospel Out" Principle
Here's what most people get backwards about AI strategy:
They think better prompts create better outputs.
"If I just engineer my prompt better, AI will give me breakthrough insights."
That's like thinking better grammar creates better ideas.
The breakthrough happens when you feed AI the right raw materials, not when you ask for them more politely.
Level 1 inputs with perfect prompt engineering still produce Level 1 outputs.
But Level 4 inputs with basic prompting produce strategic dynamite.
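To make that concrete, here's a hypothetical sketch of the same question as a Level 1 input and as a Level 4 input. The context fields mirror the higher-education story above; the field names and wording are illustrative, not a fixed schema.

```python
# Hypothetical sketch of "input architecture": the same strategic question,
# first as a bare Level 1 input, then rebuilt as a Level 4 input from
# structured operational reality. All values here are illustrative.

level_1 = "Design an email nurture sequence for higher education."

# Map the operational reality first: tech stack, headcount, constraints.
context = {
    "organization": "small private university",
    "tech_stack": "limited education-focused CRM/CMS hybrid, no advanced automation",
    "team": "one staff member responsible for all implementation",
    "goal": "attract and nurture prospective students, sequenced by program",
    "constraint": "every sequence must be maintainable by a single person",
}

level_4 = (
    "Design an email nurture sequence for a {organization}. "
    "Tech stack: {tech_stack}. Team: {team}. Goal: {goal}. "
    "Hard constraint: {constraint}. "
    "Flag any recommendation that exceeds these limits."
).format(**context)

print(level_4)
```

Notice that nothing about the wording got fancier. The upgrade is entirely in what the prompt knows.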
What This Changes About Your Strategic Process
When you understand the Input Quality Hierarchy, everything shifts.
Instead of starting with "What should our strategy be?" you start with "What's our strategic reality?"
Instead of asking AI for industry best practices, you map your specific context first.
This isn't about becoming a better prompter. It's about becoming a better strategic thinker.
Project Chimera: Input Quality in Action
Today's Nexus AI case study shows the Input Quality Hierarchy applied to B2B SaaS go-to-market strategy.
Watch the same strategic question processed through all four input levels:
Level 1: Generic SaaS playbook
Level 2: Slightly more specific but still generic
Level 3: Context-rich strategic insights
Level 4: Strategic gold that addresses real market dynamics
The difference is shocking. Same AI, same strategic challenge, completely different strategic value.
[Access Nexus AI Input Quality Analysis] (See how input architecture determines output value)
ACTION ITEM 👊💥
Today’s challenge isn’t just about comparing outputs - it’s about training yourself to see the difference in input quality.
Your 15-Minute Input Audit Challenge (Prompt Coach Edition):
Choose one strategic question you've asked AI recently
Use the Input Quality Coach in your favorite AI
Compare side by side: your original vs. the upgraded version
Just for fun: run the improved prompt in a fresh chat and explore
The goal: Experience how input architecture transforms output value.
Tomorrow, we tackle the client context challenge that kills most strategies.
You'll learn the Client Maturity Audit that ensures your strategic recommendations match your client's actual capability to execute.
When strategy meets reality, reality usually wins.
— Sam
P.S. That education strategy failure? It taught me that perfect strategic thinking applied to wrong assumptions creates expensive disasters. Now I spend more time mapping client reality than building strategic frameworks.
P.P.S. The best strategists aren't the ones who know the most frameworks. They're the ones who understand their context most deeply.
