The Scale Doesn't Matter, The Mistake Is the Same
I work with teams inside a major sales organization at a Redmond-based technology company. Dedicated AI program, six-figure tooling budget, enterprise clients deploying agents at scale. I also work with a 50-person manufacturer that had never touched an AI tool. Both asked me the same question, just using different words.
The enterprise team asked: "How do we agentify this process?"
The manufacturer asked: "Where do we even start with AI?"
Different vocabulary. Different budgets. Different levels of sophistication. Same structural mistake underneath.
The Enterprise Version
I wrote about this in detail in Your AI Agent Shouldn't Have Your Employee's Job Description, but here's the short version.
A team inside a large enterprise took an existing process and handed it to an agent, step by step. Someone fills out a form, which triggers an email, which leads to manual data entry in a spreadsheet, which requires a lookup in a separate system. The agent followed the exact same sequence, hit the exact same checkpoints, and waited at the exact same handoff points.
The agent worked. But it didn't deliver the value anyone expected, because it inherited every constraint that shaped the original workflow. The email routing existed because two systems couldn't talk to each other. The spreadsheet existed because the reporting was limited. The approval chain existed because of organizational trust boundaries, not because the decision required human judgment. The agent faithfully reproduced all of it.
The symptom: the AI investment produced marginal efficiency gains on a process that still carried every legacy constraint.
The SMB Version
I wrote about this side in Where Do I Even Start? (The Wrong First Question), but the compressed version is this.
The manufacturer came asking where to start with AI. Good leadership, solid business, about 50 employees. They'd heard from every conference and vendor pitch that they needed AI, but they had zero hands-on exposure to what it actually did. Their instinct was to scan their existing processes and look for a place to plug it in.
They couldn't find one. Not because there was no opportunity, but because they were evaluating a technology they didn't understand against processes designed before that technology existed.
The symptom: paralysis. They couldn't see where AI fit because they were looking through the wrong lens.
Why Bolt-On Wins by Default
These look like opposite problems. The enterprise team had the knowledge and budget to do this right. The manufacturer was stuck at the starting line. But underneath, the same thing is happening: the existing process wins by default. Not because it's better, but because redesign requires something the organization hasn't built yet.
For the enterprise team, the barrier is comfort with the known. The existing process is a known quantity. It has known owners, known accountability structures, and known outcomes. Redesigning it means questioning decisions made by people who are still in the room. It means telling the department that owns the approval chain that their approval chain might not need to exist. That's not a technology conversation. That's a power and ownership conversation. So the team defaults to the safe move: keep the process, swap in an agent. The current workflow survives not because it's the right design, but because changing it is organizationally expensive.
For the manufacturer, the barrier is not being able to see the alternative. They couldn't redesign their processes because they couldn't yet imagine what a redesigned process would look like. It's not resistance to change. It's that the design space is invisible when you don't understand the technology. You can't design for a capability you haven't experienced. So they default to scanning existing workflows for a place to plug AI in, because that's the only frame they have.
Both organizations end up in the same place: bolt-on. The enterprise team bolts AI onto a process they understand and control. The manufacturer tries to bolt AI onto a process because they can't envision anything else. The existing process wins by default in both cases.
This is why bolt-on isn't a technical mistake or a strategy mistake. It's the path of least organizational resistance. Process-native thinking requires something most organizations haven't invested in: either the willingness to challenge existing structures, or the foundational understanding to imagine different ones. Usually both.
What Process-Native Actually Requires
Knowing the right question isn't the hard part. The hard part is building the organizational capacity to act on it.
For teams with knowledge but inertia, the work is political before it's technical. You have to create permission to challenge existing process ownership. The enterprise team's real breakthrough wasn't building a better agent. It was getting the organization to accept that the agent's process didn't need to look like the human's process. That the approval chain could be restructured. That the spreadsheet workaround could be eliminated. That meant conversations with the people who built and owned those steps.
For teams without exposure, the work is educational before it's strategic. The manufacturer couldn't redesign what they couldn't imagine. They needed hands-on experience with the technology before the design space opened up. That's why their path started with building understanding, then piloting in a contained process with real measurements, and only then evaluating which processes should look entirely different.
The design sequence itself is the same at any scale:
Define the outcome you need. Not "automate this workflow" but "what result does this process exist to produce?" The enterprise team needed validated requests routed for approval. The manufacturer needed delivered proposals. Start there.
Design the process as if you were starting from scratch today with agents available. This is the step most organizations skip because it's where organizational resistance lives. They take the existing workflow as a given and try to speed it up. The redesign is where the real value is, and it's where the real difficulty is.
Identify what requires human judgment versus what agents can own. Some decisions are high-stakes, ambiguous, or irreversible. Those stay with humans. Everything else is a candidate for agent execution. The Organizational Capability Map provides a framework for making that call.
Encode organizational intent into agent objectives. A well-designed process with a poorly aimed agent still fails. What should the agent optimize for? What constraints should it respect? This is where process design meets goal clarity, and it's the bridge between this series and the work I've written about in The Intent Gap and Goal Translation Infrastructure. A small sketch of what this can look like follows the sequence.
Build, measure, iterate. The manufacturer got this right from the start: pick a contained process, define what you're measuring, and use the pilot to build organizational understanding. The enterprise team learned it after the fact: their first implementation showed them what needed to be redesigned.
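To make the third and fourth steps concrete, here's a minimal sketch of what encoding intent into an agent charter might look like. Everything in it is hypothetical: the names, thresholds, and escalation rule are illustrations of the idea, not a prescription or a real system.

```python
# A hypothetical sketch: an agent "charter" that states the outcome the process
# exists to produce, what the agent optimizes for, the constraints it must
# respect, and when a decision stays with a human. Names and numbers are
# illustrative only.

from dataclasses import dataclass, field


@dataclass
class AgentCharter:
    outcome: str                      # the result the process exists to produce (step 1)
    optimize_for: list[str]           # what the agent should optimize (step 4)
    constraints: list[str]            # hard limits, regardless of efficiency
    escalate_when: dict[str, float] = field(default_factory=dict)  # human-judgment triggers (step 3)

    def needs_human(self, decision: dict) -> bool:
        """True if the decision is high-stakes, low-confidence, or irreversible."""
        return (
            decision.get("dollar_value", 0)
            > self.escalate_when.get("max_dollar_value", float("inf"))
            or decision.get("confidence", 1.0)
            < self.escalate_when.get("min_confidence", 0.0)
            or decision.get("irreversible", False)
        )


# Example: the enterprise team's process restated as an outcome, not a sequence
# of legacy steps.
charter = AgentCharter(
    outcome="Validated requests routed for approval",
    optimize_for=["time from request to routed approval", "validation accuracy"],
    constraints=["never approve spend directly", "log every routing decision"],
    escalate_when={"max_dollar_value": 50_000, "min_confidence": 0.8},
)

print(charter.needs_human({"dollar_value": 75_000, "confidence": 0.95}))  # True: above spend threshold
print(charter.needs_human({"dollar_value": 2_000, "confidence": 0.92}))   # False: agent can own it
```

The value isn't the code. It's that writing the charter down forces the questions in steps one, three, and four into something explicit enough for the people who own the process to argue about.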
The Mistake That Scales
Scale doesn't protect you from this mistake. Budget doesn't protect you. Technical sophistication doesn't protect you. A Fortune 100 team with dedicated AI engineers and a 50-person manufacturer with no AI experience ended up in the same place, for reasons that look different on the surface but share the same structure underneath.
The enterprise team knew what AI could do and still defaulted to bolt-on because the existing process was safe, understood, and owned. The manufacturer couldn't get past bolt-on because they hadn't yet built the understanding to see any other option.
The only thing that protects you is investing in the capacity to redesign, not just the tools to execute. For some organizations, that means building the political will to challenge existing process ownership. For others, it means building the foundational understanding to imagine what's possible. For most, it means both.
The question isn't "where does AI fit in what we do?" It's "what would this look like if we designed it today?" But the real work isn't asking that question. It's building an organization that's ready to act on the answer.