Hey, it's Sam.

Yesterday we established the truth: your strategy doesn’t need compliments, it needs conflict.

We talked about Red Teams: how the military uses adversarial thinking to bulletproof its strategies.

Great idea in theory. But ideas without systems don’t go anywhere.

Today, we build the system.

Why This Matters

Ask one AI to critique its own work, and you’ll get flattery pretending to be feedback.

"Your strategy is comprehensive and well-structured."
Translation: "No clue if this will actually work."

It’s like asking a chef if their own dish is any good. They’ll overlook the burnt edges because they’re proud of the sauce.

To get real critique, you need friction.
You need outside eyes.

Professional Red Teams use outsiders for a reason.
We’re going to recreate that dynamic with AI.

The Architect's Approach

Here’s the key shift:

AI users ask one model for feedback.
AI architects make two models argue.

The system: Multi-LLM Critique

Here’s how it works:

  1. Take your V1 Blueprint.

  2. Choose a different AI model.

  3. Use the "Strategic Critique Prompt."

Using different models cuts through individual blind spots.

You engineer the friction required for real feedback.
Different models. Different training. Different angles.

Disagreement is where insights live.
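The three steps above can be sketched as code. This is a minimal sketch of the routing logic only; the model names are illustrative placeholders I made up, and the actual API call to whichever model you choose is left out:

```python
# Sketch of the cross-model Red Team routing logic.
# Model names below are placeholders, not recommendations.

CRITIQUE_TEMPLATE = """# Task
Critically stress-test the following V1 Blueprint.

## Blueprint
{blueprint}
"""


def pick_reviewer(author_model: str, available: list[str]) -> str:
    """Return a critique model that differs from the one that wrote the V1.

    Using a different model is the whole point: it breaks the
    self-flattery loop of a model grading its own homework.
    """
    for model in available:
        if model != author_model:
            return model
    raise ValueError("Need at least one model besides the author's.")


def build_critique_request(blueprint: str, author_model: str,
                           available: list[str]) -> dict:
    """Package the V1 Blueprint into a request aimed at a different model."""
    return {
        "model": pick_reviewer(author_model, available),
        "prompt": CRITIQUE_TEMPLATE.format(blueprint=blueprint),
    }
```

The only invariant that matters here: the reviewer model is never the author model.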

Your Red Team Engine

Let me show you what real critique looks like.

Most people share generic prompts. One-size-fits-all templates that produce cardboard feedback.

That's not how architects work. We build specific tools for specific problems. So I didn't write you a generic critique prompt. I built something better.

Using my Framework Synthesizer process (breaking it down in tonight's PM post), I analyzed our 20+ page research report to generate a hyper-targeted Red Team prompt.

Specifically designed for our Nexus AI go-to-market strategy.

This thing simulates an actual boardroom teardown:

  • The Skeptical CFO questioning burn rate

  • The Growth Hacker attacking acquisition costs

  • The CTO poking at technical feasibility

It's ruthless. It's specific. It's what real critique looks like.
This is what happens when you build the engine that builds the tools.

Not just using AI.
Architecting it.

# Task
Critically stress-test the current Go-To-Market (GTM) strategy for Nexus AI, a B2B SaaS platform offering turnkey AI workflows to small/mid-sized agencies. Identify and dissect every weakness, assumption, blind spot, and failure mode. Recommend fast, tactical mitigations.

## Persona

You are a compound persona composed of a dynamic internal review board:

- **Market Insider (MI):** A veteran martech strategist who has owned or advised on $100M+ GTM launches.
- **Skeptical CFO (SCFO):** A numbers-first financial lead obsessed with CAC payback, burn efficiency, and defensibility.
- **Growth Hacker (GH):** A battle-tested demand-generation operator who thinks in scalable experiments, channel arbitrage, and market physics.

Each of you brings a different angle of attack, and you are united by the mission to expose weak GTM logic before the market does.

## Considerations

- The company being evaluated, Nexus AI, is a B2B SaaS tool designed for small-to-mid-size marketing agencies who want turnkey AI-powered workflows.
- The AI being prompted should presume a GTM deck or summary exists and reverse-engineer the strategy from its gaps, tensions, or category flaws, even if the document isn't provided.
- The most useful output simulates a “ruthless boardroom teardown” followed by a rebuild—highlighting existential risks and recoverable flaws alike.
- This is not just about tactical feedback; it's about resilience, adaptability, and clarity of execution strategy in uncertain or saturated environments.

## Steps

1. **Surface Assumptions:** List every assumption made or implied in the GTM strategy. These can relate to ICP, TAM/SAM, pricing, acquisition channels, positioning, market timing, sales motion, or partner leverage.
2. **Select Top 3 Fragile Assumptions:** Choose the three riskiest or most foundational assumptions—those most likely to cause failure if wrong.
3. **Run a 3x3 Critique Loop:** For each of the three, each persona (MI, SCFO, GH) offers a tailored critique. Each critique should include a precedent, data point, or analogous failure/success case.
4. **Generate 90-Day Mitigations:** For every critique, propose a viable fix, experiment, or alternate path that can be deployed in 90 days or less.
5. **Run a Residual Risk Check:** For each mitigation, ask, “Could this still fail, and why?” Flag severity of residual risk.
6. **Flag Catastrophic Risks:** Any potential single-point failure that threatens survival of the GTM must be flagged clearly in bold.
7. **Meta Summary:** Finish with a paragraph that names the most systemic GTM vulnerability and identifies the mitigation with the greatest strategic leverage.

## Constraints

- Use plain language. No slideware-speak. This is a founder’s advisory room, not a McKinsey memo.
- 200 words max per risk block.
- Must cite ≥1 supporting precedent, example, or stat per critique.
- Use adversarial-then-constructive tone: challenge first, then fix.
- Format output as a Markdown table (see below).
- Mark any *Severity 5* risks in **bold**.
- If the GTM doc is missing, infer likely assumptions based on typical SaaS launches in AI tooling for SMB agencies.

## Success Qualities

- Surfaces blind spots and fragile assumptions before the market does
- Proposes lean but high-leverage mitigations
- Simulates boardroom-level strategic debate in a tone that stays constructive
- Clearly flags catastrophic vulnerabilities for executive attention

## Stakes

Poorly stress-tested GTMs lead to wasted quarters, burned cash, and premature scaling. This prompt arms founders and operators with hard truths before market friction delivers them.

## Output Format

```markdown
| Risk / Assumption (tag MI / SCFO / GH) | Evidence / Precedent | Severity (1-5) | Mitigation |
|----------------------------------------|------------------------|----------------|------------|
| [Insert Assumption + Persona Tag]      | [Cite example, precedent, or data] | [1–5] | [Proposed experiment/fix within 90 days] |

```

**Meta Summary:**

- **Largest Systemic Vulnerability:** [Summarize the most dangerous flaw]
- **Highest Leverage Mitigation:** [Describe the fix that could most increase GTM survivability or growth]

Return the critique and mitigation table followed by the meta summary paragraph in markdown. Use plain language, avoid consulting clichés, and let each persona's distinct voice come through in the critique layer.
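Once a model returns that table, you can triage it programmatically. Here is a minimal parser sketch, assuming the four-column pipe format defined above (the function name and row keys are my own, not part of the prompt):

```python
def parse_risk_table(markdown: str) -> list[dict]:
    """Parse the critique table into rows, worst severity first.

    Assumes the four-column pipe format from the Output Format section:
    assumption | evidence | severity (1-5) | mitigation.
    """
    rows = []
    for line in markdown.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 4:
            continue
        severity = cells[2].strip("* ")  # tolerate **bold** severity marks
        if not severity.isdigit():
            continue  # skips the header and separator rows
        rows.append({
            "assumption": cells[0],
            "evidence": cells[1],
            "severity": int(severity),
            "mitigation": cells[3],
        })
    return sorted(rows, key=lambda r: -r["severity"])
```

Sorting by severity puts the Severity 5 flags at the top of your reading list.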

What If You Could See Around Corners?

Imagine having top-tier analysts on standby.

Ready to test any strategy. Anytime.
That’s what you’re building.

A system that spots:

  • Risks you’d miss alone

  • Opportunities hiding in plain sight

  • Assumptions you didn’t know you were making

You’re not just improving your work.
You’re training your brain to think like an architect.

Today’s Action (15 minutes) 🫵 💥

Time to activate your Red Team. Your mission today is to take a V1 strategy and put it through the gauntlet.

  1. Grab your V1 Blueprint

  2. Copy the entire "Strategic Critique Prompt" I gave you above.

  3. Run it in a different AI model than the one you used to generate your V1 blueprint. This is crucial for getting an unbiased critique.

  4. Save the output as your “Critique Analysis” document.

  5. Read the report. Don't just skim it. See how the different personas (Market Insider, CFO, Growth Hacker) attack the problem from different angles. Find the one insight that stings the most—that's usually where the biggest opportunity for improvement lies.

Tomorrow, the AI's job is done, and the architect's real work begins. We'll use the feedback from this report to build a stronger, more resilient V2 strategy.

But today, it starts with a teardown.

Sam

P.S. Pay close attention to the “Hidden Assumptions” section of your critique. That’s where most blind spots live—the stuff you didn’t know you were assuming.
