You have a TOTALLY NORMAL … everyday question.
Something like: why does nobody seem to know what’s going on? … or dude, where’s my car?
Depending on your use case or your role at a company, those questions could look like…
- If you’re in sales: *Which enterprise accounts have open support tickets and active renewal conversations right now?*
- An engineer might ponder: *Which teams are still using the deprecated version of the shared component?*
- If you’re a normal human being (congrats): *Can you just tell me where we’re at with this thing? What’s done and not done?*
The question changes. The problem doesn’t.
You pull the data together. You find the answer. You share it.
Three weeks later, someone asks again.
You do it again.
This isn’t a data problem. It’s not a tooling problem.
It’s not a “we need better dashboards” problem.
It’s a what-you-decided-to-keep problem.
You kept the answer. You should have kept the method.
TL;DR
- When you ask a question whose answer lives in more than one place, a repeatable way of answering it is more valuable than the answer itself.
- Describe the question in plain English to an AI, let it build the method, push back until it’s right.
- Treat the output as disposable - it’s just what things looked like today.
- Save the method. Throw away the answer. Run it again whenever you need a fresh one.
You’ll Do It Again
Here’s what usually happens.
Someone asks which enterprise accounts have open support tickets
and active renewal conversations at the same time.
You pull a Salesforce export, cross-reference the support queue, build a list.
You find six. You send the slide. You move on.
Three weeks later, someone asks again.
It happens in engineering too.
Someone asks whether any downstream app is using a deprecated export from the shared framework.
You pull down a handful of repos, run some searches, build up a picture in your head.
You find three places. You report back. You move on.
What you produced - in both cases - is a point-in-time snapshot that lives nowhere.
The data moved on the moment you stopped looking.
There’s no record of what you checked, how you combined it, or what state things were in when you did.
The answer is ephemeral. The question isn’t.
That mismatch is the problem.
Some questions are genuinely recurring - about who depends on what,
about which accounts are at risk, about whether two systems are in sync.
Those questions deserve a more durable answer than a one-off search.
And this isn’t just an engineering thing.
If you work in sales, RevOps, finance, or marketing, you have versions of this too.
The accounts, the campaigns, the contracts, the pipelines -
the data is scattered and the question comes back every quarter like a bad penny.
The answer evaporates. The work starts over.
The Pattern
The fix has three parts. They’re simple. Don’t let the simplicity fool you.
Contexts are any external sources you want to analyze - a GitHub repo, a Salesforce export, a local directory, a remote API spec. You declare them in a manifest file. When you need to answer a question, you sync the relevant ones locally with a single command. The synced copies are ephemeral - temporary local copies, nothing you’d track. But the declarations aren’t. The manifest is tracked.
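As a sketch, a manifest like this could declare those sources. The schema here is invented purely for illustration - the names, types, and URLs are all assumptions, not any real tool’s format:

```json
{
  "contexts": [
    { "name": "design-system", "type": "git",  "url": "git@github.com:acme/design-system.git" },
    { "name": "checkout-app",  "type": "git",  "url": "git@github.com:acme/checkout-app.git" },
    { "name": "sf-accounts",   "type": "file", "path": "~/exports/salesforce-accounts.csv" },
    { "name": "billing-api",   "type": "http", "url": "https://api.example.com/openapi.json" }
  ]
}
```

The shape doesn’t matter much. What matters is that the declarations are tracked, so anyone on the team can see what sources a method depends on.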
Scripts are the files that walk the synced contexts and produce an answer. You can write them yourself - or, and this is where AI earns its keep, describe the question in plain English, ask it to propose an approach, push back until it’s right, and let it write the code. Either way, scripts are committed.
Artifacts are the output - a table, a list, a report. They’re gitignored. They’re disposable. Running the script again at any point produces a fresh one.
The scripts are committed. The artifacts are not. That distinction is the whole idea.
Three things are tracked. Two things are not:
| Path | Status |
|---|---|
| `contexts.json` | tracked - the manifest |
| `Makefile` | tracked - sync commands |
| `scripts/` | tracked - the committed questions |
| `artifacts/` | gitignored - disposable output |
| `contexts/` | gitignored - synced local copies |
You run `make sync` before each run to pull current state. Then `make clean` when you’re done.
That’s the entire sync layer. It’s not clever. That’s the point.
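A sync layer that small could be as plain as a Makefile like this. Everything in it is hypothetical - the repo names, the shallow clones, the single curl - it’s just one way to make “sync” mean something concrete:

```make
# Hypothetical sync layer. The Makefile is tracked; contexts/ is not.

sync:
	mkdir -p contexts
	git clone --depth 1 git@github.com:acme/design-system.git contexts/design-system
	git clone --depth 1 git@github.com:acme/checkout-app.git contexts/checkout-app
	curl -fsSL https://api.example.com/openapi.json -o contexts/billing-api.json

clean:
	rm -rf contexts artifacts

.PHONY: sync clean
```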
With contexts synced, open your AI and describe the question in plain English. The key: ask it to show its approach before writing the script.
> Given the repos synced into contexts/, I need to know which micro-frontends are still importing LegacyButton from the design system, how many files in each, and where the first occurrence is.
>
> Show your approach first, then write the script.
The AI proposes its approach - what it’ll search for, how it’ll handle edge cases, what the output will look like. You push back, iterate, refine. This is the creative phase. The AI is designing the instrument. Once you’re happy with the approach, it writes the script.
You run it. The output is actual file paths and counts pulled from actual code.
Not prose. Not a summary. Not an AI opinion.
Now you know. Not approximately. You know.
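For the LegacyButton question above, the script the AI lands on might look something like this minimal Python sketch. The `contexts/` layout, the file extensions, and the plain substring match are all assumptions - a real pass would parse imports properly, which is exactly the kind of refinement you push for in the approach phase:

```python
#!/usr/bin/env python3
"""Which apps under contexts/ still import LegacyButton, and where?"""
from pathlib import Path

def find_imports(root, symbol="LegacyButton"):
    """Return {app_name: (file_count, first_occurrence)} for files under
    `root` that mention `symbol`. The top-level directory is the app name."""
    results = {}
    for path in sorted(Path(root).rglob("*")):
        if path.suffix not in {".js", ".jsx", ".ts", ".tsx"}:
            continue  # skip directories and non-source files
        if symbol in path.read_text(errors="ignore"):
            app = path.relative_to(root).parts[0]
            # keep the first occurrence we saw; bump the file count
            count, first = results.get(app, (0, str(path)))
            results[app] = (count + 1, first)
    return results

if __name__ == "__main__":
    for app, (count, first) in find_imports("contexts").items():
        print(f"{app}: {count} file(s), first at {first}")
```

Twenty lines, nothing clever - but committed, it answers the question forever.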
It Compounds
Take the accounts question.
“Which enterprise accounts have open support tickets and active renewal conversations?”
comes up before every board meeting, every QBR, every time someone asks about churn risk.
Once you’ve built the method to cross-reference those two data sources, the answer is always one run away - and always current.
The same in engineering.
“Which repos are consuming packages from our shared component library?”
comes up during every dependency upgrade, every breaking change conversation,
every planning cycle.
Once you’ve built the method, it’s one command. Always fresh, because you sync before you run.
A follow-up question:
“Who has taken something from that library and customized or forked it?”
This is harder. Shadow implementations, internal layers built on top of shared primitives, style overrides.
The method for that took twenty minutes to build with AI. Now it runs in seconds and covers every consumer simultaneously.
Without it, answering that manually would take most of a day and leave you uncertain you’d caught everything.
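One crude way to start on that harder question: after syncing, look for consumer files that define their own component with the same name as a design-system export. Everything here is a heuristic and an assumption - the names, the regex, the layout - and it will miss clever forks and flag some false positives, which is precisely what the push-back-and-iterate phase is for:

```python
#!/usr/bin/env python3
"""Heuristic: find local re-definitions of shared component names."""
import re
from pathlib import Path

def find_shadow_definitions(root, shared_names):
    """Return {component_name: [files]} where a file under `root` defines
    a component whose name collides with one exported by the shared library."""
    hits = {}
    for path in sorted(Path(root).rglob("*")):
        if path.suffix not in {".js", ".jsx", ".ts", ".tsx"}:
            continue
        text = path.read_text(errors="ignore")
        for name in shared_names:
            # matches `function LegacyButton`, `const LegacyButton =`,
            # `class LegacyButton` - but not plain imports of the name
            if re.search(rf"\b(function|const|class)\s+{name}\b", text):
                hits.setdefault(name, []).append(str(path))
    return hits
```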
The methods don’t have to be sophisticated. Most of them are just: pull this data, look for this pattern, show what you find. The power isn’t in the analysis. It’s in the repeatability.
Questions worth turning into methods:
- Which accounts have open support tickets and active renewal conversations?
- Which repos are still on the legacy version vs the new version?
- Which features have high usage but no documentation or test coverage?
- Which contracts have SLA commitments the system isn’t currently meeting?
- Which consumers have customized or forked a shared component?
- Which surfaces are not yet using the new auth flow?
Each becomes a method. Each one runs clean. Each one tells you something you genuinely couldn’t know without pulling every source by hand.
Keep Your Hand on the Wheel
The first version of a method might not be right.
Run it. Look at the output. Ask: does this match what I’d expect?
The AI doesn’t know that one team’s repo follows a different convention.
It doesn’t know about the exception made three years ago that was never documented.
It doesn’t know what your stakeholders actually mean when they ask the question,
as opposed to what the words literally say.
That tribal knowledge lives with you - not in any data source you’ve synced.
Review the output before sharing it. Take it to someone who knows the domain. If something seems off, go back to the method and fix it there - not in the output. Every fix gets baked in permanently, for everyone who runs it next.
What you’re protecting against is toxic data - output that looks authoritative because it came from a structured process, but is wrong in ways nobody caught. A confident table full of gaps gets shared, cited, acted on. That’s worse than no answer at all.
The discipline isn’t just “save the method.” It’s “own the method.” Understand what it does. Validate what it produces. Make sure it’s answering the right question before you treat the output as truth.
The Trap
Here’s where this breaks down.
You answer a question, share the output, and don’t commit the script
because “it was just a quick one-off.”
Maybe it was. But now you have an artifact without provenance - a text file that says something was true on a given day, with no way to regenerate it or verify it against the current state of the world.
The output looks like a record. It isn’t. It’s a snapshot without the camera.
The test for whether a script is worth committing is simple:
is this a question I’d want answered again?
If yes, commit the script.
If no, delete the whole thing - script and artifact both.
What you don’t want is a graveyard of outputs from undocumented one-off searches that future you will mistake for authoritative records.
The scripts are the thing of value. The output is just what they said today.
Why This Works With AI
The setup does something useful for AI that most setups don’t: it gives the model actual context.
You ask a question.
The AI reads the context manifest, sees what’s available,
looks at a few existing scripts to understand the structure, and writes a new one.
You run it. You get an answer. If the question was worth asking, you commit the script.
The institutional knowledge isn’t locked in anyone’s head.
It’s not in a Slack thread from eight months ago.
It’s in scripts/, documented by what the file does, discoverable by anyone who looks.
“Which apps are still on the old API version?” is a Slack question.
`bash scripts/api-version-audit.sh` is an answer.
The difference isn’t just speed. It’s that the second one is still accurate tomorrow.
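The committed version of that audit might be no more than this - sketched here in Python rather than the shell script named above, with the `/api/v1/` marker and file types invented for illustration:

```python
#!/usr/bin/env python3
"""Which synced apps still call the old API version?"""
from pathlib import Path

OLD_MARKER = "/api/v1/"  # hypothetical: whatever identifies the legacy endpoints

def apps_on_old_api(root, marker=OLD_MARKER):
    """Return sorted app names under `root` with at least one source file
    containing `marker`."""
    apps = set()
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in {".js", ".ts", ".py", ".go"}:
            if marker in path.read_text(errors="ignore"):
                apps.add(path.relative_to(root).parts[0])
    return sorted(apps)

if __name__ == "__main__":
    for app in apps_on_old_api("contexts"):
        print(app)
```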
What This Is. What This Isn’t.
A few things people conflate this with.
“It’s just reporting.”
Reporting owns the output - a slide, a dashboard, an export from last Tuesday.
This owns the question. The output is explicitly disposable; you throw it away.
Anyone can run it. Any time. Against current data.
“It’s like a dashboard.”
Dashboards are maintained surfaces. Someone has to keep them alive.
This has no surface. You run it when you need it and move on.
A dashboard is a window someone installed. This is the ability to look out any window.
“It’s just automation.”
Automation runs on a schedule whether you need it or not.
This doesn’t run until you ask. It doesn’t move anything. It answers a question.
“It’s a script.”
A script you ran once and deleted is a one-off. A committed method is institutional property.
It improves over time. Anyone on the team can run it.
A script you ran once is a memory. A committed method is a capability.
What it also is: a context layer.
Once you have committed methods describing the state of your systems,
you can pipe that output directly into other AI workflows.
Run a context script, then hand it to a PR review skill or a renewal prep workflow.
The methods give AI what it’s usually missing: accurate, current, specific context.
The Bigger Thing
Organizations accumulate questions faster than they accumulate answers.
Most cross-cutting knowledge - who depends on what, where the patterns diverge, what’s consistent and what isn’t - lives in the heads of the people who’ve been around long enough to know.
When they leave, it goes with them.
Repeatable methods don’t have that problem. They’re readable, runnable, and not attached to any particular person. The question, once committed, belongs to the team.
This isn’t a framework. It’s barely a pattern - it’s more like a discipline.
Keep the method.
Throw away the output.
Ask the question again whenever you need a fresh answer.
The answer is what it said today. The question is yours to keep.