
Stay in the friction

The 10x developer doesn’t think 10 times better. They code 10 times faster. And AI has just given this superpower to everyone, without the guardrail that came with it. So before you prompt, confront every significant decision with a panel of personas. Each with their own concerns, blind spots, and a mandate to disagree. The goal is to produce friction, because friction improves thinking.

Before AI, execution took time. That time brought friction: reviews, debates, dead ends that forced you to reconsider. We thought long and hard because building the wrong thing cost weeks. Today, we build it in minutes. Prompt after prompt, we execute at the speed of thought all the way to production. No more friction. No more stepping back. No more moments to verify we’re solving the right problem.

AI amplifies in whatever direction you point it. Our prompts reflect our blind spots: we only question what we already know how to question. What we don’t think to ask is precisely what we need to hear. And going deeper won’t change that: a bad path only deepens. We drift from prompt to prompt with no warning. An AI will never tell you the question is bad; it will simply answer it.

Reintroducing friction

If rubber duck debugging is familiar to you, start there: force clarity before execution. Ask your AI to reformulate before acting: <your request>, is that clear to you? Simple. Cheap. Fast.
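This check can also live in its own command, so it costs nothing to invoke. A minimal sketch, following the same command-file convention used later in this post (the filename and wording are illustrative, not prescriptive):

```markdown
// .claude/commands/clarify.md
Before doing anything, reformulate the following request in your own words.
State what you understand would change, where, and why.
Do not execute anything until I confirm your reformulation is correct.

$ARGUMENTS
```

If the reformulation surprises you, the prompt was the problem, and you just caught it before execution.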

But clarity alone isn’t enough to detect blind spots. Our biases live in our prompts. To reveal them, you need confrontation with different perspectives that stress-test the request. Much like a design or review workshop, we bring together people with different concerns; each looks at the same subject from an angle no one else would have taken alone. De Bono established this principle back in 1985 with his Six Thinking Hats: forcing the adoption of multiple perspectives to break out of unidirectional thinking. What AI changes is that you can practise it alone, on demand, with perspectives calibrated to your own domain.

Setting up the panel

Here are the essential points for building your persona table:

Define the name and purpose of the table

Debate a design, review code, run a refinement, or confront a vision. Naming your table lets you invoke it on demand with a simple prompt: What does <table name> think? A plain markdown file <table name>.md is sufficient. Inside, define the table’s purpose and the material it will work with.

// CORE_TEAM.md
# Core Team Panel for Specy

This document defines a panel which evaluates all changes to the Specy language and to the skills that produce and consume it.

The panel is a prompt construct: a single reviewer role-plays all eight perspectives in sequence, ensuring every change is stress-tested from angles that a single viewpoint would miss.

My preference for invocation is to use a command that reflects the ceremony or task at hand (/design-review , /debate-vision , /review-ux, /code-review).

// .claude/commands/review.md
Review the following proposal using the design panel defined in CORE_TEAM.md at the root of this project.

Read CORE_TEAM.md first to load the panellists and the debate protocol, then apply it to:

$ARGUMENTS
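With this wiring, a review is one command away: whatever follows the command name flows into $ARGUMENTS. As an illustration (the proposal text here is hypothetical):

```
/review Introduce a `deprecated` marker for constructs scheduled for removal
```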

Build a panel of concerns

Rather than thinking in pure roles, choose concrete concerns. A provocateur will challenge the foundations. A simplifier will push back from the angle of implementation ease. Each perspective must make sense relative to the scope, with acknowledged blind spots to simulate a credible system of thought. Otherwise, you’ll end up with personas that systematically go in the same direction. The aim is to maximise friction to cover a wide spectrum of perspectives.

Each panellist follows the same format.

### {N} — The {Role Name}

*"{Core conviction tagline}"*

{One-sentence description of the perspective}

**Watches for:** {specific warning signals}
**Blind spot:** {acknowledged bias of this perspective}

For a design review, here is an example panel:

// CORE_TEAM.md
## The ten panellists

### 1 — The Simplicity Advocate

*"Extra syntax is acceptable only when it resolves an ambiguity or prevents a modelling error."*

Guards against construct proliferation. Blocks additions expressible with existing constructs.

**Watches for:** redundant keywords, overlapping constructs, grammar surface explosion.
**Blind spot:** may resist additions that improve learnability.

### 2 — The DDD Fidelity Advocate
### 3 — The Domain Readability Advocate
### 4 — The Machine Reasoning Advocate
### 5 — The Cross-File Coherence Advocate
### 6 — The Skill Experience Advocate
### 7 — The Domain Expert
### 8 — The Product Manager
### 9 — The Vision Guardian
### 10 — The Provocateur

Or to confront a vision, the composition is different:

// VISION_TEAM.md
## The eight panellists

### 1 — The Knowledge Economist
### 2 — The DDD Strategist
### 3 — The Enterprise Archaeologist
### 4 — The Legacy Rewrite Veteran
### 5 — The AI Futurist
### 6 — The Epistemologist
### 7 — The Product Strategist
### 8 — The Provocateur

Define the collaboration protocol

This is the format for the collaboration’s output. In practical terms, it’s your interface. It will depend on your needs and preferences for the panel.

Example protocol:

  1. Clarify your request
  2. Present the request for debate
  3. Round-table discussion among panellists
  4. Confrontation between panellists (limit the rounds, or the debate never ends)
  5. Synthesis of the debate
  6. Verdict with the panel’s overall position
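Written into the panel file, the protocol might look like this. A minimal sketch; the stage names and round limit are illustrative, so adapt them to your own panel:

```markdown
// CORE_TEAM.md
## Debate protocol

1. **Clarify** — reformulate the request; loop until the proposer confirms.
2. **Present** — state the proposal with concrete before/after examples.
3. **Respond** — each panellist gives support, concern, or block, with a reason.
4. **Confront** — panellists rebut each other. Two rounds maximum.
5. **Synthesise** — neutral summary of agreements and open questions.
6. **Verdict** — the panel's overall position: adopt, refine, reject, or split.
```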

Maintain your panel

As debates progress, the behaviour of the personas will give you clues about their legitimacy at the table. A persona that always agrees is dead weight. One that never agrees is noise. Three dimensions to watch:

  1. Material. An exploratory project doesn’t face the same issues as one in production. The material to work with will be different and doesn’t demand the same level of rigour. For example, the CORE_TEAM doesn’t include ops or testers, because the project is still exploratory.
  2. Priority. I build the table according to the current objective. In an exploratory phase, I explore broadly and diversely. For example, a provocateur will give you structuring insights and push for radical changes. Pivoting costs little at this stage. Once in production, that cost explodes.
  3. Balance. Always have more grounded personas to anchor the debates. In the CORE_TEAM, the persona that maintains and refocuses on the overall vision will often oppose the provocateur. The simplicity persona will oppose the dogmatism of the DDD evangelist. Opposition makes friction interesting. And without friction, you’ll derive little value.

Observe the behaviour of your personas; it will give you clues about when the panel needs to evolve.

Optional — A mini governance system

Much like the builder/checker concept, this kind of practice can evolve into a governance system. In the section “Build a panel of concerns”, the two panels CORE_TEAM and VISION_TEAM reflect a separation between execution and steering. The VISION_TEAM challenges the direction; the CORE_TEAM challenges the execution. And when execution needs to drift, it escalates to the VISION_TEAM panel. To set up this system: elaborate your vision with the dedicated panel in a markdown file, then place a guardian on the execution panels (or on any other panel, such as OPS or UX).

In the CORE_TEAM example:

### 7 — The Vision Guardian

*"Every grammar decision either brings Specy closer to being a validation oracle or pushes it back towards documentation that rots. Which is this?"*

Guards the strategic vision from `VISION.md`. Every construct must answer: *Can an AI derive a verifiable proof from this?* Does not debate the vision itself (VISION_TEAM.md's territory) but ensures tactical decisions don't undermine it.

**Watches for:** ...
**Blind spot:** ...

Any legitimate deviation from the vision is not resolved at the execution table: submit the deviation to the VISION_TEAM.
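The escalation itself can be a short rule at the bottom of the execution panel’s file. A sketch, with wording of my own invention:

```markdown
// CORE_TEAM.md
## Escalation rule

If a proposal is blocked because it conflicts with VISION.md, do not resolve
the conflict here. State the deviation as a question and route it to the
vision panel: /debate-vision <the deviation>.
```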

In a larger organisation, we could imagine panels maintained and made available by other departments (a “panel-as-a-service”), creating a catalogue of shared perspectives across the organisation. The panel doesn’t replace human interactions and doesn’t presuppose any delegation of responsibility or decision-making. It simply makes a first level of confrontation accessible.

What this changes

The purpose of a tool is to amplify human capabilities. AI is too often positioned as a replacement. In the same way that GPS is gradually stripping us of our ability to navigate, AI shouldn’t atrophy our ability to reason (see Avoiding skill atrophy in the age of AI).

This practice pushed me to reject many of my own prompts, sometimes for lack of clarity, sometimes because I was going too fast and too far. I improved my prompts and, in doing so, gained control over the execution that followed. I had few surprises at the end. In the era of vibe coding, that’s no small thing. But the most significant gain, in my view, is making decisions with full awareness of the trade-offs.

This practice changes your stance towards AI: from “generating” to “confronting”. It forces a permanent challenge and broadens the field of vision. Like AI, we’re inconsistent. And maintaining the same level of rigour at all times is impossible. The panel reflects our perspectives. We simply use AI to make those perspectives constant, as a guardrail on our biases.

What once required a structure and available people, AI now makes more accessible, without replacing them. Friction has become optional. That’s precisely why you should choose it.

Panel creation prompt

# Design Review Panel

  ## Purpose

  This panel evaluates changes to [YOUR SYSTEM/LANGUAGE/PRODUCT]. It is a prompt construct: a single reviewer role-plays all perspectives in sequence, stress-testing every proposal from angles a single viewpoint would miss.

  ## Panellists

  Define 5-8 panellists. Each follows this template:

  ### N — The [Concern] Advocate

  *"One-sentence principle that guides this panellist's judgement."*

  [What they guard. What they block. 1-2 sentences.]

  **Watches for:** [3-4 specific anti-patterns]
  **Blind spot:** [The legitimate concern they tend to dismiss]

  ## Debate protocol

  Every debate follows six stages. All panellists must speak on every item.

  ### Stage 1 — Clarify
  The proposer states: **What** changes, **Why**, **Where** (file/line refs), **Scope** (what is out of scope). Each panellist may ask one clarifying question. Loops until all confirm "clear". The user must explicitly approve Stage 1 before the debate advances.

  ### Stage 2 — Present
  Concrete diff, before/after examples, impact on existing artefacts (breaking changes, migration).

  ### Stage 3 — Respond
  Each panellist states **support**, **concern**, or **block** in 1-2 sentences with a concrete reason.

  ### Stage 4 — Rebut
  The proposer addresses each concern. One round only.

  ### Stage 5 — Synthesise
  Neutral summary: points of agreement, remaining disagreements as specific questions, conditions under which concerns would be withdrawn.

  ### Stage 6 — Verdict

  | Verdict | Meaning |
  |---------|---------|
  | **Consensus: adopt** | No substantive objection remains |
  | **Consensus: reject** | Problem is real but solution is inadequate. Must state what a better solution needs |
  | **Refine** | Has merit but needs specific modifications listed in synthesis |
  | **Split** | Disagreement persists. Escalated to decision-maker with both positions recorded |