
Durable Questions


You have a totally normal everyday question.

Something like: why does nobody seem to know what’s going on? Or dude, where’s my car?

Depending on what you do, they might look like…

  • If you’re in sales:

    Which enterprise accounts have open support tickets and active renewal conversations right now?

  • An engineer might ponder:

    Which teams are still using the deprecated version of the shared component?

  • If you’re a normal human being (congrats):

    Can you just tell me where we are at with this thing? What’s done and not done?

The question changes. The problem doesn’t.

You pull the data together. You find the answer. You share it.

Three weeks later, someone asks again.

You do it again.

This isn’t a data problem. It’s not a tooling problem.
It’s not a “we need better dashboards” problem.
It’s a what-you-decided-to-keep problem.

You kept the answer. You should have kept the method.

TL;DR

When the answer lives in more than one place, the method is worth more than the answer.

Describe the question in plain English to an AI, let it build the method, push back until it’s right.

Treat the output as disposable - it’s just what things looked like today.

Save the method. Throw away the answer. Run it again whenever you need a fresh one.

[Diagram: the method (owned by the team, context aware, improves over time - SAVE) runs to produce the output (a moving target, accurate today, stale tomorrow, run on demand - DISPOSABLE).]

You’ll Do It Again

Here’s what usually happens.

Someone asks which enterprise accounts have open support tickets
and active renewal conversations at the same time.
You pull a Salesforce export, cross-reference the support queue, build a list.
You find six. You send the slide. You move on.

Three weeks later, someone asks again.

It happens in engineering too.
Someone asks whether any downstream app is using a deprecated export from the shared framework.

You pull down a handful of repos, run some searches, build up a picture in your head.
You find three places. You report back. You move on.

What you built is a point-in-time snapshot that lives nowhere.
The data moved on the moment you stopped looking.
There’s no record of what you checked, how you combined it, or what things looked like when you did.

The answer is ephemeral. The question isn’t.

Some questions are genuinely recurring - who depends on what, which accounts are at risk, whether two systems are in sync. They come back. The answer evaporates every time. The work starts over.

This isn’t just an engineering problem. Sales, RevOps, finance, marketing - same deal. The data is scattered across tools, the question comes back every quarter, and the answer from last time is already stale.

How It Works

Three parts. Simple ones. Don’t let that fool you.

Contexts are your data sources - a GitHub repo, a Salesforce export, a local directory, an API spec. Declare them in a manifest. Run one command to sync them locally before each run. The synced copies are throwaway. The manifest isn’t.

contexts.json

```json
{
  "contexts": [
    { "name": "mfe-checkout", "type": "git", "url": "git@github.com:org/mfe-checkout.git" },
    { "name": "mfe-account",  "type": "git", "url": "git@github.com:org/mfe-account.git" },
    { "name": "mfe-catalog",  "type": "git", "url": "git@github.com:org/mfe-catalog.git" }
  ]
}
```

Scripts are the files that walk your contexts and produce an answer. Write them yourself, or describe the question in plain English to an AI, push back until the approach is right, and let it write the code. Either way, they’re committed.

Artifacts are the output - a table, a list, a report. Gitignored. Disposable. Run the script again any time and you get a fresh one.

The scripts are committed. The artifacts are not. That distinction is the whole idea.
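In git terms, that distinction is two lines of .gitignore:

```
# disposable - regenerated on demand
artifacts/
contexts/
```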

[Diagram: data sources (files, repos, exports, APIs, docs; synced fresh each run - EPHEMERAL) feed the method (AI builds, you validate; owned by the team; improves over time - COMMITTED), which produces the output (accurate today, stale tomorrow; run again any time - DISPOSABLE).]

Here’s what that looks like on disk:

```
your-project/
├── contexts.json              # tracked - the manifest
├── Makefile                   # tracked - sync commands
├── scripts/                   # tracked - committed questions
│   └── legacybutton-audit.sh
├── artifacts/                 # gitignored - disposable output
└── contexts/                  # gitignored - synced local copies
    └── mfe-checkout/
```

You run make sync before each run to pull current state. Then make clean when you’re done. That’s the entire sync layer. It’s not clever. That’s the point.
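The sync layer really can be that small. As a hedged sketch (not the actual Makefile), handling only git-type contexts from a manifest in the shape shown above, `make sync` could boil down to a loop like this:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of what `make sync` could run: clone each git
# context on the first run, fast-forward it on later runs. Parses the
# flat contexts.json manifest with grep/sed so it needs nothing but git.
set -euo pipefail

sync_contexts() {
  local manifest="${1:-contexts.json}"
  mkdir -p contexts
  # Pair up the name/url fields of each entry in the flat manifest.
  paste <(grep -o '"name": *"[^"]*"' "$manifest" | sed 's/.*"\([^"]*\)"$/\1/') \
        <(grep -o '"url": *"[^"]*"'  "$manifest" | sed 's/.*"\([^"]*\)"$/\1/') |
  while IFS=$'\t' read -r name url; do
    if [ -d "contexts/$name/.git" ]; then
      git -C "contexts/$name" pull --ff-only   # later runs: update in place
    else
      git clone "$url" "contexts/$name"        # first run: clone fresh
    fi
  done
}
```

A matching `clean` target is just `rm -rf contexts artifacts`.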

With contexts synced, open your AI and describe the question in plain English. The key: ask it to show its approach before writing the script.

Given the repos synced into contexts/, I need to know which micro-frontends are still importing LegacyButton from the design system, how many files in each, and where the first occurrence is.

Show your approach first, then write the script.

The AI proposes its approach - what it’ll search for, how it’ll handle edge cases, what the output will look like. You push back, iterate, refine. This is the creative phase. The AI is designing the instrument. Once you’re happy with the approach, it writes the script.

scripts/legacybutton-audit.sh

```bash
#!/usr/bin/env bash
# Which MFEs are still importing LegacyButton from the design system?
echo "| MFE | Files | First occurrence |"
echo "| --- | ----- | ---------------- |"
for dir in contexts/*/; do
  name=$(basename "$dir")
  count=$(grep -rl "LegacyButton" "$dir/src" 2>/dev/null | wc -l | tr -d ' ')
  if [ "$count" -gt 0 ]; then
    first=$(grep -rn "LegacyButton" "$dir/src" 2>/dev/null | head -1 | sed "s|$dir||")
    printf "| \`%s\` | %s | \`%s\` |\n" "$name" "$count" "$first"
  fi
done
```

You run it. The output is actual file paths and counts pulled from actual code. Not prose. Not a summary. Not an AI opinion.

| MFE | Files | First occurrence |
| --- | ----- | ---------------- |
| `mfe-checkout` | 3 | `src/components/Form.tsx:14` |
| `mfe-account` | 1 | `src/pages/Profile.tsx:8` |
| `mfe-catalog` | 7 | `src/components/ProductCard.tsx:23` |

Now you know. Not approximately. You know.

It Compounds

Take the accounts question: “Which enterprise accounts have open support tickets and active renewal conversations?” It comes up before every board meeting, every QBR, every time someone asks about churn risk. Once you’ve built the method to cross-reference those two data sources, the answer is always one run away - and always current.

The same in engineering. “Which repos are consuming packages from our shared component library?” comes up during every dependency upgrade, every breaking change conversation, every planning cycle. Once you’ve built the method, it’s one command. Always fresh, because you sync before you run.

A follow-up question:

“Who has taken something from that library and customized or forked it?”

This is harder. Shadow implementations, internal layers built on top of shared primitives, style overrides.
The method for that took twenty minutes to build with AI. Now it runs in seconds and covers every consumer simultaneously.
Without it, answering that manually would take most of a day and leave you uncertain you’d caught everything.

Most methods aren’t sophisticated. They’re just: pull this, look for this, show what you find. The power isn’t in the analysis. It’s in the repeatability.

Questions worth turning into methods:

  • Which accounts have open support tickets and active renewal conversations?
  • Which repos are still on the legacy version vs the new version?
  • Which features have high usage but no documentation or test coverage?
  • Which contracts have SLA commitments the system isn’t currently meeting?
  • Which consumers have customized or forked a shared component?
  • Which surfaces are not yet using the new auth flow?

Each becomes a method. Each one runs clean. Each one answers something you couldn’t know without pulling every source by hand.
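None of these needs more machinery than the LegacyButton audit. The legacy-vs-new version question, for instance, might look like this - a sketch assuming npm-style repos, with `@org/shared-components` standing in for your actual package name:

```bash
#!/usr/bin/env bash
# Sketch: which synced repos still depend on the legacy 1.x line of a
# shared package? The package name is hypothetical; point it at yours.
version_audit() {
  local pkg="${1:-@org/shared-components}"
  local dir name ver
  for dir in contexts/*/; do
    [ -f "$dir/package.json" ] || continue
    name=$(basename "$dir")
    # Pull the pinned version string for the package, if any.
    ver=$(grep -o "\"$pkg\": *\"[^\"]*\"" "$dir/package.json" |
          sed 's/.*"\([^"]*\)"$/\1/' || true)
    [ -n "$ver" ] || continue
    case "$ver" in
      1.*|\^1.*|~1.*) echo "$name $ver legacy" ;;
      *)              echo "$name $ver current" ;;
    esac
  done
}
```

Pull this, look for this, show what you find - nothing more.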

Own the Method

The first version might not be right. Run it. Look at the output. Ask: does this match what I’d expect?

The AI doesn’t know that one team’s repo follows a different convention. It doesn’t know about the exception made three years ago that was never documented. It doesn’t know what your stakeholders actually mean when they ask the question, as opposed to what the words literally say. That tribal knowledge lives with you - not in any data source you’ve synced.

Review the output before sharing it. Take it to someone who knows the domain. If something seems off, go back to the method and fix it there - not in the output. Every fix gets baked in permanently, for everyone who runs it next.

What you’re protecting against is toxic data - output that looks authoritative, is structured, and is wrong. A confident table full of gaps gets shared, cited, acted on. That’s worse than no answer at all.

The discipline isn’t just “save the method.” It’s “own the method.” Understand what it does. Validate what it produces. Make sure it’s answering the right question before you treat the output as truth.

The Trap

Here’s where this breaks down.

You answer a question, share the output, and don’t commit the script
because “it was just a quick one-off.”
Maybe it was. But now you have an artifact without provenance - a text file that says something was true on a given day, with no way to regenerate it or verify it against the current state of the world.

The output looks like a record. It isn’t. It’s a snapshot without the camera.

The test for whether a script is worth committing is simple:
is this a question I’d want answered again?
If yes, commit the script.
If no, delete the whole thing - script and artifact both.
What you don’t want is a graveyard of outputs from undocumented one-off searches that future you will mistake for authoritative records.

The scripts are the thing of value. The output is just what they said today.

Why This Works With AI

The setup does something useful for AI that most setups don’t: it gives the model actual context.

You ask a question. The AI reads the context manifest, sees what’s available, looks at a few existing scripts to understand the structure, and writes a new one. You run it. You get an answer. If the question was worth asking, you commit the script.

The institutional knowledge isn’t locked in anyone’s head.
It’s not in a Slack thread from eight months ago.
It’s in scripts/, documented by what the file does, discoverable by anyone who looks.

“Which apps are still on the old API version?” is a Slack question.

bash scripts/api-version-audit.sh is an answer.

The difference isn’t just speed. It’s that the second one is still accurate tomorrow.

What This Is. What This Isn’t.

A few things people conflate this with.

“It’s just reporting.”
Reporting owns the output - a slide, a dashboard, an export from last Tuesday.
This owns the question. The output is explicitly disposable; you throw it away.
Anyone can run it. Any time. Against current data.

“It’s like a dashboard.”
Dashboards are maintained surfaces. Someone has to keep them alive.
This has no surface. You run it when you need it and move on.
A dashboard is a window someone installed. This is the ability to look out any window.

“It’s just automation.”
Automation runs on a schedule whether you need it or not.
This doesn’t run until you ask. It doesn’t move anything. It answers a question.

“It’s a script.”
A script you ran once and deleted is a one-off. A committed method is institutional property.
It improves over time. Anyone on the team can run it.
A script you ran once is a memory. A committed method is a capability.

What it also is: a context layer.
Once you have committed methods describing the state of your systems,
you can pipe that output directly into other AI workflows.
Run a context script, then hand it to a PR review skill or a renewal prep workflow.
The methods give AI what it’s usually missing: accurate, current, specific context.

What You’re Actually Building

Organizations accumulate questions faster than they accumulate answers.

Most cross-cutting knowledge - who depends on what, where the patterns diverge, what’s consistent and what isn’t - lives in the heads of the people who’ve been around long enough to know.

When they leave, it goes with them.

Repeatable methods don’t have that problem. They’re readable, runnable, and not attached to any particular person. The question, once committed, belongs to the team.

This isn’t a framework. It’s barely a pattern - it’s more like a discipline.
Keep the method.
Throw away the output.
Ask the question again whenever you need a fresh answer.

The answer is what it said today. The question is yours to keep.