You figured something out about working with AI. Maybe you built a prompt that saves you an hour a week. Maybe you invented a little ritual - always give it context first, always check the output against the original ask. Maybe you wrote a document that tells AI how you work, what you care about, what words you use.
It felt like your idea. Maybe it was.
But there’s a good chance that right now, across every industry, thousands of other people are building the exact same thing. And there’s an equally good chance that six months from now, it’s just a feature.
That’s not a bad thing. That’s how good ideas work. But it’s worth understanding, because the people who get the most out of this moment are the ones who know when to keep building and when to let go.
TL;DR
Good ideas emerge everywhere, at roughly the same time - nobody coordinating, everyone arriving independently. The best ones get shared, named, and absorbed into the tools. That’s not the idea dying. That’s the idea becoming the floor everyone stands on.
The skill worth building: discover early, use it, share it freely. When it gets absorbed into the standard, clean up your version and build from the new height - because you’re standing on ground that didn’t exist before your idea did. The novelty doesn’t disappear. It becomes infrastructure. And infrastructure is what the next thing gets built on.
Hold your experiments loosely. Not because they don’t matter. Because the goal is to keep climbing, not to guard the last rung.
A Word That Actually Fits
There’s a concept worth borrowing here: emergence.
Watch a murmuration of starlings - one of those massive flocks that moves like a single fluid shape, contracting and expanding across the sky. It looks like someone is choreographing it. Nobody is. Computational models of flocking behavior show that collective motion like this can emerge from a small set of local rules: stay close, don’t collide, match your neighbors’ speed.1 No leader. No plan. And yet something beautiful and coordinated appears, from the bottom up, without anyone designing it.
Emergence is when a system of simple actors - each doing their own thing, each following their own instincts - produces behavior or patterns that nobody planned and nobody could have predicted by looking at any one of them.
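The three local rules can be made concrete in a few lines of code. This is a hedged sketch of a boids-style update step, not Reynolds' actual 1987 implementation; the neighbor radius and rule weights are made-up illustrative parameters.

```python
# Minimal boids-style update step in 2D, illustrating the three local rules:
# cohesion (stay close), separation (don't collide), alignment (match speed).
# All parameters are illustrative, not taken from Reynolds' original paper.

import math

def step(boids, radius=5.0, min_dist=1.0,
         w_cohesion=0.01, w_separation=0.05, w_alignment=0.05):
    """Advance every boid one tick. Each boid is a tuple (x, y, vx, vy)."""
    new = []
    for i, (x, y, vx, vy) in enumerate(boids):
        # Each boid only looks at neighbors within its local radius.
        neighbors = [b for j, b in enumerate(boids)
                     if j != i and math.hypot(b[0] - x, b[1] - y) < radius]
        if neighbors:
            n = len(neighbors)
            cx = sum(b[0] for b in neighbors) / n   # local center of mass
            cy = sum(b[1] for b in neighbors) / n
            avx = sum(b[2] for b in neighbors) / n  # local average velocity
            avy = sum(b[3] for b in neighbors) / n
            # Rule 1: cohesion - steer toward the local center of mass.
            vx += (cx - x) * w_cohesion
            vy += (cy - y) * w_cohesion
            # Rule 2: separation - steer away from boids that are too close.
            for bx, by, _, _ in neighbors:
                if math.hypot(bx - x, by - y) < min_dist:
                    vx -= (bx - x) * w_separation
                    vy -= (by - y) * w_separation
            # Rule 3: alignment - drift toward the neighbors' average velocity.
            vx += (avx - vx) * w_alignment
            vy += (avy - vy) * w_alignment
        new.append((x + vx, y + vy, vx, vy))
    return new
```

Notice there is no flock-level logic anywhere: each boid sees only its neighbors, yet running `step` repeatedly on a few hundred boids produces the coordinated, fluid motion of the whole.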
That’s what’s happening right now with AI.
Not inside the AI, but around it. Hundreds of thousands of people, in every industry, in every role, figuring out how to work with these tools in real time. Each one experimenting. Each one discovering what works. Nobody coordinating. And from all of that, patterns are surfacing.
The Same Idea, Everywhere at Once
Here’s what emergence looks like in practice.
From 2020 on, people across every industry were discovering that if you give AI a little background before asking it to do something, the output gets dramatically better. A salesperson pasting in CRM notes before writing a follow-up email. A marketer including their brand guide before asking for copy. An engineer describing the codebase before asking for code.
These aren’t stories from any single paper - they’re the kind of thing that spreads through Slack messages and blog posts and “wait, have you tried…” conversations at lunch. Practitioners and researchers were arriving at the same insight from different directions.
The pattern spread because it worked. It got named - “context” - and got built into the tools. System prompts. Memory features. The little box that says “tell us about yourself.” What started as a workaround is now just… how the software works.
Or: people found that assigning AI a specific role - “act as a senior copywriter” or “you are a financial analyst reviewing this for risk” - improved output quality noticeably. The informal discovery spread through blogs and word of mouth long before it got a name. Now it’s custom GPTs, AI personas, built-in role settings in every major tool.
Or: people learned that asking AI to check its own work helps. Then they learned that asking a different AI to check the work helps more. That pattern is showing up in products too - built-in review steps, critique modes, second-pass features.
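The three patterns - context first, role assignment, second-pass review - can be sketched as plain chat messages. This assumes the OpenAI-style `role`/`content` message shape; the function names and wording are illustrative, not from any particular product, and no API call is made here.

```python
# Sketch of three emergent prompting patterns as plain chat messages.
# The "role"/"content" message shape is an assumption (OpenAI-style APIs);
# function names and prompt wording are illustrative.

def build_messages(context: str, role: str, request: str) -> list[dict]:
    """Compose a request that bakes in role assignment and context-first."""
    return [
        {"role": "system", "content": f"You are {role}."},       # role assignment
        {"role": "user", "content": f"Background:\n{context}"},  # context first
        {"role": "user", "content": request},
    ]

def build_review_messages(original_ask: str, draft: str) -> list[dict]:
    """Second pass: ask a (possibly different) model to critique the draft."""
    return [
        {"role": "system", "content": "You are a careful reviewer."},
        {"role": "user", "content": (
            f"Original ask:\n{original_ask}\n\n"
            f"Draft:\n{draft}\n\n"
            "Check the draft against the original ask and list any gaps."
        )},
    ]
```

Every one of these moves started life as someone's manual habit before it became a memory feature, a persona setting, or a built-in critique mode.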
The ideas weren’t wrong. They were right. That’s exactly why they got codified.
Why This Happens
The mechanism is simpler than it sounds.
Everyone working in AI is using tools with the same constraints, solving the same underlying problems. You can only fit so much in a prompt - so batching emerges, grouping many small asks into one request. Output is inconsistent - so role assignment emerges. Nothing persists between sessions - so context documents emerge. The friction points are shared, so the solutions are shared.
And once one person names what they found - a blog post, a Slack message, an offhand “wait, have you tried…” - it travels fast. Because other people have already half-discovered the same thing. The recognition is immediate. That’s what makes it spread.
This isn’t purely independent invention and it isn’t purely copying. It’s shared constraints producing parallel discovery, and shared platforms letting those discoveries spread the moment someone puts a name to them. The terrain rewards the same moves because it’s the same terrain.
And when a pattern is visible enough, someone builds it into a product. Suddenly what was a workaround is a checkbox. What was a prompt template is a settings field. What was a clever habit is onboarding documentation.
This is happening across every tool, every workflow, every industry right now. Fast.
The Trap
Here’s where people get stuck.
You built something. It took real effort. It works. You’re proud of it - and you should be, because you developed real judgment to build it. Then the tool ships a feature that does the same thing, more elegantly, maintained by a team of engineers instead of you.
The trap is staying attached to your version.
Not out of stubbornness - usually out of familiarity, sunk cost, or just not noticing the feature shipped. But now you’re maintaining two things: your version - a document, a habit, a system - and the built-in one. They drift. They conflict. You spend energy reconciling them instead of moving forward.
The way to avoid this is to treat your experiments as experiments from the start. Not permanent infrastructure. Not “the way we do things here.” Working hypotheses, running until something better arrives - whether that’s your next iteration or a feature that ships. When you frame things that way, the question becomes obvious when the moment comes: does this replace mine, or does mine do something the feature doesn’t?
“Oh, interesting - this is just a feature now. Let me clean up my version.”
That’s not admitting you wasted time. You didn’t. You built the judgment that lets you recognize a good thing when it gets productized. You figured it out before the feature existed. Now you don’t have to maintain it anymore. That’s a win.
What to Actually Do
Keep experimenting. The people figuring out novel patterns right now are developing judgment that matters. You can’t shortcut that by waiting for features to ship - you have to build the intuition yourself.
But build in a way that’s easy to audit and easy to replace. Know what your version does and why. That way, when the feature ships, the decision is clean.
You’ll know something has been codified when the tool now has a setting for it, a tutorial covers exactly what you invented, or a new colleague can replicate your workflow just from the product’s own docs. You don’t need to track changelogs or follow AI news obsessively. The signal will find you - usually in the form of “oh, interesting, they shipped that.”
Don’t be precious about your version. The goal was never to maintain a clever workaround forever. The goal was to work better. If the standard now does that better than your version, the standard won. Clean up and move on.
And when you find something genuinely new - something that isn’t a feature yet, that you had to invent - share it. You’re probably not the only one who found it. Putting it into words helps the whole group figure out whether it’s a pattern worth keeping.
The Bigger Picture
Nobody is in charge of this. Nobody has the full picture. Everyone is doing their own experiments, following their own instincts, discovering their own moves. And from all of that, the way people work with AI is taking shape. The murmuration is forming.
You’re in it. Some of what you’ve built is already part of the pattern. Some of it will get absorbed and become someone else’s floor. Some of it will get replaced by something better - and that’s fine, because it means the floor rose.
All of it is useful for building the thing that matters most right now, which isn’t a great prompt or a smart workflow.
It’s judgment. The sense of what’s good, what’s new, what’s noise, and when to let go.
That one doesn’t get codified. That one’s yours.
Footnotes
1. Boids on Wikipedia covers Craig Reynolds' 1987 simulation, which showed that flocking behavior emerges from three simple local rules. Real starlings turn out to be more complex - each bird tracks roughly six or seven nearest neighbors by topological position, not a fixed spatial radius (Ballerini et al., PNAS, 2008) - but the emergence is real.