The Demo Is Not the Definition
Why we keep confusing what something can do with what something is — and why it matters
I’ve been thinking about the name “Artificial Intelligence,” and the more I sit with it, the less it holds up.
We named this technology after what we think it is — intelligent. But almost every conversation I hear about it describes what it does: it writes drafts, it builds websites, it analyzes data, it generates images, it autocompletes your sentences when you’re too tired to finish them yourself.
If we named it by what it actually does, we’d call it something like Automated Labor. Or Digital Task Completion. Or, if we’re being honest about most people’s Monday morning use case, Fancy Autocomplete That Sometimes Lies.
Drop the “Artificial” entirely, and you could call it Digital Assistance. Automated Pattern Matching. Computational Draft Generation. Each of those names is more accurate than “intelligence” — but none of them would have attracted $100 billion in venture capital, so here we are.
The name matters because it front-loads an answer to a question nobody has agreed on. Call it “intelligence,” and every conversation that follows inherits an assumption: this thing thinks. Now half the room is excited, the other half is terrified, and both reactions are responses to the name, not the tool. The person who built a website with it in seventeen days and the manager who thinks it’s a crutch for weak employees are both reacting to the word “intelligence,” and they’re arriving at opposite conclusions from the same four syllables. Neither of them is wrong, given the information they have.
That gap — between what something does and what we’ve decided it is — runs deeper than AI. It shows up every time a demonstration gets mistaken for a definition. And it happens in almost every room I study.
The conflation
There’s a move that shows up in conversations about technology, medicine, policy, leadership, really any domain where a concept carries weight and a demonstration carries force. Someone asks, “What is this thing?” and someone else answers by showing what it can do. A product demo replaces a definition. A capability display substitutes for an explanation. A personal success story stands in for a general claim.
The audience nods. The question feels answered. And nobody notices that the actual question — what is this thing, what does it mean, who gets to define its role — was never addressed.
This conflation, treating what something does as identical to what something is, operates quietly in almost every high-stakes conversation.
A surgeon demonstrates a new technique that cuts procedure time in half. Impressive. But “what can this technique do in one surgeon’s hands?” is a different question than “what is this technique’s role in standard care?” The demonstration doesn’t answer the second question. Adoption protocols, failure modes, training requirements, and patient selection criteria do.
A team lead shows a quarterly dashboard where every metric is green. The room concludes the team is healthy. But green metrics are what the team produced. Whether the team is healthy depends on questions the dashboard can’t answer: how sustainable is the pace, who is burning out, and what conversations are being avoided to keep the numbers clean.
An employee shows five ways they used AI to finish a project faster. The manager concludes the employee is dependent on a crutch. A different manager concludes the employee is a visionary. Same demonstration, opposite definitions — because neither manager is responding to the demo. Both are responding to what “AI” already means in their heads, and the demo just gave each of them permission to feel more certain.
Why this happens
Alfred Korzybski identified the root of this problem nearly a century ago. In Science and Sanity (1933), he warned against what he called the “is of identity” — the tendency to collapse a thing with its description, a map with its territory, a word with the object it points to. When someone says “AI is a game-changer,” Korzybski would flag that sentence. The word “is” performs an act of identification: it treats the label and the thing as the same. But “AI” is a label covering thousands of different tools, methods, capabilities, and contexts. Saying “AI is a game-changer” skips every question that matters: which AI, for whom, under what conditions, by what measure.
Korzybski’s broader point was that humans routinely confuse levels of abstraction. A demonstration lives at one level — concrete, specific, bounded by context. A definition lives at a higher level — abstract, general, meant to travel across contexts. When a demonstration gets treated as a definition, the concrete swallows the abstract. The specific case becomes the general rule. And the room moves forward on a foundation that feels solid but isn’t.
Psychologist Edward Thorndike identified the same pattern from a different angle. In 1920, he documented what he called the halo effect: the tendency for a strong impression in one area to color judgment in unrelated areas. Commanding officers who rated their soldiers as physically impressive also rated the same men as more intelligent and better leaders, qualities no physique can reveal. One visible trait radiated outward and shaped the assessment of everything else.
Thorndike’s finding has been replicated across domains for over a century. A 1977 study by Nisbett and Wilson showed that college students who watched a warm, friendly lecturer rated him higher on physical appearance and accent — traits that had nothing to do with his warmth. The initial impression didn’t just influence related judgments. It rewired unrelated ones.
The same pattern appears in technology. A study from the Nielsen Norman Group found that websites with high visual appeal received high satisfaction ratings from users even when the task-failure rate on those same sites exceeded 50%. Users liked how the site looked, and that impression bled into their assessment of how well it worked — even when it demonstrably didn’t work. “Beautiful” became a stand-in for “usable.” The demo replaced the definition.
Where it shows up in teams
In the conversations I study — teams under pressure, organizations making high-stakes decisions, groups trying to coordinate across different assumptions — the is/does conflation creates a specific and recurring failure pattern.
Someone shows impressive results. The room treats the results as proof of a larger claim. The larger claim goes unexamined because the demonstration felt like enough. Decisions get made. And when those decisions break down later, nobody can trace the failure back to the moment when a definition was needed and a demo was offered instead.
Here’s how it sounds in practice:
“We rolled out the new system and productivity jumped 15% in the first quarter.” That’s what the system did. What the system is — its actual role, its fit with existing workflows, its long-term maintenance burden, its impact on the people using it — requires a different conversation.
“Our culture is strong. Look at our engagement scores.” That’s what the survey produced. What “strong culture” means on this team, in this building, under these specific conditions — that question is still open.
“AI is transforming everything. Look what I built in two weeks.” That’s what happened in one person’s hands, with one set of skills, on one project. What “AI” is — as a category, as a policy question, as a set of decisions your organization needs to make — remains undefined. And every person in the room is filling in that definition with their own assumptions, silently, while nodding at the same screenshots.
The repair
The fix is simple to describe and hard to practice, which is the definition of infrastructure.
When someone shows you an impressive result, train yourself to notice the moment your brain wants to leap from “that’s what it did” to “that’s what it is.” That leap feels natural. It feels like a conclusion. In reality, it’s a shortcut — and the gap it jumps over is where most coordination failures begin.
The question that closes the gap: “That’s a strong result. Now — what does this mean for us, in our context, with our constraints?”
That question moves the conversation from demonstration to definition. It honors the evidence without letting the evidence do work it can’t do. It creates space for the room to build a shared understanding instead of leaving with five separate private interpretations of the same demo.
A few other versions of the same move, depending on the context:
“That shows what it can do. What do we think it is — for this team, right now?”
“Impressive demo. What would need to be true for that result to hold across our full operation?”
“I can see the capability. What’s our definition of success for this, and does this demo match it?”
Each of these separates the is from the does. Each one costs about ten seconds. Each one prevents a room full of people from walking away with a shared experience and no shared meaning.
Why it matters now
We are living through a period where demonstrations are abundant and definitions are scarce. New tools produce visible, shareable, impressive outputs at unprecedented speed. Screenshots travel faster than analysis. A build log can go viral while the question “what does this tool mean for how we work?” remains unasked.
That asymmetry — demonstrations outpacing definitions — is the engine of most organizational confusion around technology. Teams adopt tools before they agree on what the tools are for. Leaders see a capability demo and assume alignment that doesn’t exist. Individuals have transformative personal experiences with a tool and can’t understand why others don’t share their certainty.
The certainty is real. The experience is real. The demonstration is real. What’s missing is the shared definition — the agreement about what this thing means in this room, for these people, under these conditions.
Until that definition exists, every person in the room is watching the same demo and seeing a different thing.
This essay is part of the What Do You Mean? series on What Time Binds, where I study what happens when people use the same words and mean different things — and what to do about it.
I’m building a 10-module course called Meaning Repair for High-Stakes Teams on this Substack. Module 1 is completely free. If the pattern in this essay felt familiar, that’s where to start: what-time-binds.com