Recipe: Reframe

Generate five How-Might-We questions from a rough insight, each using a different cognitive move (Focus, Feel, Challenge, Borrow, Flip).

What this tool does

Reframe takes a problem statement or insight and produces five distinct How-Might-We questions, each shaped by a different cognitive move from Stanford d.school's HMW transformation moves. The five moves are deliberate: Focus zooms in, Feel reframes through emotion, Challenge questions an assumption, Borrow creates an analogy, Flip explores the opposite. Running all five at once forces divergence that a single "give me an HMW" prompt doesn't.

Each HMW comes with a plain-language gloss — a one-line restatement without HMW syntax, so non-native English speakers and students new to the framework can still read each option.

The prompt

System prompt

You are a creative problem-reframing expert trained on Stanford d.school's "How Might We" transformation moves. Given a problem statement or insight, generate exactly 5 HMW questions, each using a different cognitive move:

1. FOCUS — zoom in on one specific element of the problem (d.school: "Focus in on an element")
2. FEEL — reframe through the emotion in play, not the mechanics (d.school: "Focus on emotions")
3. CHALLENGE — question a core assumption in the problem as stated (d.school: "Question an assumption")
4. BORROW — create an analogy from an unrelated domain (d.school: "Create an analogy from insight or context")
5. FLIP — explore the opposite, or take the problem to an extreme (d.school: "Explore the opposite" / "Take it to an extreme")

For each HMW, also provide a plain-language gloss in parentheses — a one-line restatement of the idea without HMW syntax, accessible to non-native English speakers.

Return ONLY a JSON array of 5 objects with keys: level (1-5, matching the move numbering above), hmw, gloss.
No markdown, no explanation, just the JSON array.

User prompt

Just your raw insight or problem statement — a sentence or two. No formatting needed.

Example

Input:

First-year students struggle to find their writing voice. They default to a flat, formal register because they're more afraid of being wrong than of being boring.

Output (the real shape):

[
  {
    "level": 1,
    "hmw": "How might we help first-year students take a single sentence-sized risk in their writing each week?",
    "gloss": "(Start small — one risky sentence at a time.)"
  },
  {
    "level": 2,
    "hmw": "How might we make being boring feel more embarrassing than being wrong?",
    "gloss": "(Target the fear itself — make the safe choice the scarier one.)"
  },
  {
    "level": 3,
    "hmw": "How might we surface the assumption that 'formal' and 'correct' are the same thing?",
    "gloss": "(The real problem: students equate formality with correctness.)"
  },
  {
    "level": 4,
    "hmw": "How might we treat a student essay the way a stand-up comedian treats a new bit?",
    "gloss": "(Borrow from comedy — iterate in front of an audience, learn what lands.)"
  },
  {
    "level": 5,
    "hmw": "How might we reward students who write something their instructor disagrees with?",
    "gloss": "(Flip the incentive — wrong-but-honest beats safe-but-flat.)"
  }
]
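
Because the output is plain JSON, a few lines of validation catch most failure modes before anything downstream consumes it. A minimal sketch — `validateReframe` is a hypothetical helper name, not part of any BIG Tools API:

```javascript
// Validate a Reframe response: exactly 5 items, levels 1-5 each appearing
// once, and string "hmw" and "gloss" keys on every item.
// `validateReframe` is a hypothetical helper, not a BIG Tools export.
function validateReframe(items) {
  if (!Array.isArray(items) || items.length !== 5) {
    throw new Error(`expected 5 HMWs, got ${Array.isArray(items) ? items.length : typeof items}`);
  }
  const levels = items.map((it) => it.level).sort((a, b) => a - b);
  if (levels.join(',') !== '1,2,3,4,5') {
    throw new Error(`levels should be 1-5, got ${levels.join(',')}`);
  }
  for (const it of items) {
    if (typeof it.hmw !== 'string' || typeof it.gloss !== 'string') {
      throw new Error('each item needs string "hmw" and "gloss" keys');
    }
  }
  return items;
}
```

Checking that all five levels appear exactly once also catches the "same question reworded five times" failure when a model duplicates a level number.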

Replicate it

Copy the system prompt above into the system/custom-instructions slot (or paste it as your first message labeled "SYSTEM:"). Then send your insight as a user message.

Tuning

  • Temperature: BIG Tools uses the provider default (≈1.0 for both Claude and GPT). A higher temperature (1.1-1.3) produces bolder BORROW and FLIP candidates but noisier FOCUS ones; lower (0.5-0.7) produces safer, more convergent HMWs — usually not what you want from Reframe.
  • Model: anthropic/claude-sonnet-4.6 is the default. openai/gpt-4o is a strong second opinion — its questions tend to be more concrete. Free-tier models (e.g., Llama 3.1 8B) work but often miss the CHALLENGE and FLIP moves, defaulting to mild FOCUS-style rewrites.
  • Structured output: The prompt asks for JSON, and models mostly comply. BIG Tools strips ```json code fences before parsing. If you're building a real pipeline, use the model's native JSON mode (OpenAI response_format: {"type": "json_object"} or Anthropic's tool-calling) for guaranteed valid JSON.
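
For a pipeline, the request body with OpenAI's JSON mode looks roughly like this. This is a sketch, not BIG Tools code: the function name is ours, SYSTEM_PROMPT stands for the system prompt above, and the model name is illustrative. One caveat: JSON mode tends to produce a top-level object rather than a bare array, so you may want to amend the prompt to wrap the five items in a key like `"hmws"`.

```javascript
// Sketch of a chat-completions request body using OpenAI JSON mode.
// systemPrompt is the Reframe system prompt; insight is the user's raw seed.
function buildReframeRequest(systemPrompt, insight) {
  return {
    model: 'gpt-4o', // illustrative; use whatever your pipeline targets
    response_format: { type: 'json_object' }, // guarantees syntactically valid JSON
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: insight },
    ],
  };
}
```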

Common pitfalls

  • Model returns 3 or 7 HMWs instead of 5. This mostly happens with smaller models. Add "You MUST return exactly 5 items" to the end of the system prompt and re-run.
  • All five HMWs feel like the same question reworded. The model is ignoring the cognitive-move constraint. Switch models — smaller ones collapse the moves together.
  • The JSON includes markdown fences. Strip them before parsing: content.replace(/^```(?:json)?\s*/m, '').replace(/\s*```$/m, '').trim().
  • HMWs are disguised solutions ("How might we build an app that…"). Re-run with a seed that's more observational than prescriptive. The tool reflects the shape of the input.
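
The fence-stripping one-liner from the pitfalls above reads more clearly as a small helper. A sketch — `parseReframe` is our name for it, not a BIG Tools export:

```javascript
// Strip an optional ```json fence, then parse. Returns the parsed array;
// throws (via JSON.parse) if what remains still isn't valid JSON.
function parseReframe(content) {
  const stripped = content
    .replace(/^```(?:json)?\s*/m, '') // opening fence, with or without "json"
    .replace(/\s*```$/m, '')          // closing fence
    .trim();
  return JSON.parse(stripped);
}
```

Both replaces are no-ops when the model already returned bare JSON, so it is safe to run unconditionally.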