Where Do I Even Start? (The Wrong First Question)
"Where do I even start with AI?" is the most common question I hear from small and mid-size businesses. It's also the wrong first question.
Not wrong because it's bad. Wrong because it skips a step that determines whether everything after it works or wastes your time.
The Instinct That Gets You Stuck
The natural move is to scan your existing processes and look for places where AI "fits." You pull up your workflows, find the ones that feel slow or manual, and ask: could AI do this faster?
This feels productive. It feels like you're being strategic. But you're evaluating a technology you don't yet understand against processes that were designed before the technology existed. You're looking at what you have and trying to find a place to plug something in. That's bolt-on thinking, and it's the same trap that catches enterprise teams with ten times your budget.
The difference is that enterprise teams get further down the wrong path before they realize it. They build the agent, deploy it, and then discover it inherited every inefficiency in the original process. You're stuck earlier, at the "where do I start?" phase. That's actually a better place to be.
A Manufacturer's Version of This Problem
I worked with a mid-size manufacturer that came to me with exactly this question. About 50 employees, solid business, good leadership. They kept hearing they needed AI. Every conference, every trade publication, every vendor pitch told them the same thing: you need to be using AI or you'll fall behind.
So they asked: where do we start?
Their instinct was to look at their current processes and find somewhere to add it. Maybe automate some data entry. Maybe speed up reporting. They were scanning their workflows for an "AI opportunity."
The problem was more fundamental than that. This team had zero hands-on exposure to what AI actually did. Not the marketing version. Not the demo version. The actual mechanics of how it works, what it's good at, and where it falls apart. You can't evaluate where a technology fits if you don't understand the technology.
So we didn't start with their processes. We started with building that understanding.
The Right Sequence
Once the team understood what AI could actually do, we picked a contained process to pilot: their proposal workflow. When an RFP came in, the team would pull together past project work, reference previous proposals, draft a response, route it through engineering for technical review, and deliver. It worked, but it was slow and heavily manual.
For the pilot, we built a process where an AI agent carries institutional knowledge: what past projects looked like, what the engineering team's guidance was, what winning proposals had in common, and what losing ones got wrong. The agent helps the team build proposals by drawing on that accumulated context instead of starting from scratch every time.
The measurement is specific: time from RFP receipt to proposal delivery. The pilot is still running, so I don't have final numbers. But that's not the point of this story.
The point is the sequence:
First, build understanding. Not hype, not demos, not vendor pitches. Real exposure to what the technology does and doesn't do well. Without this, every decision you make about where to apply AI is a guess.
Second, pilot with measurement. Pick a contained process where you can define what success looks like before you start. The proposal workflow was right for this team because it had a clear input (RFP), a clear output (delivered proposal), and a measurable gap (time). The pilot isn't just about whether AI "works." It's about building organizational understanding of how to work with it.
Third, evaluate which processes should look entirely different. This is where the real value is. Once your team understands the capability and has experience working with it, the conversation changes. Instead of "where does AI fit in what we do?" it becomes "which of our processes should look completely different now that we know what's possible?"
But you can't get to step three without steps one and two.
The Anxiety Is Real. The Rush Isn't.
If you're a leader at a small or mid-size company and you feel behind on AI, I understand the pressure. The noise is constant. Every week there's a new tool, a new capability, a new headline about how AI is changing everything. It feels like everyone else has figured this out and you're standing still.
Here's what I see from the other side of these engagements: the organizations that jumped in fastest aren't ahead of you. Many of them are now unwinding implementations that automated the wrong thing. They bolted AI onto existing processes, inherited every inefficiency, and are now doing the process redesign work they should have done first.
You're not behind. You're at the starting line, and most organizations that think they're ahead just started running in the wrong direction.
So don't rush to "add AI" to your current workflows. Start with understanding. Pick one process for a measured pilot. Use the pilot to build organizational knowledge, not just to check a box. Then ask the real question: knowing what we now know, which of our processes should look entirely different?
Where This Connects
This isn't just a small business problem. I recently wrote about how enterprise teams with dedicated AI programs and six-figure tooling budgets make the same structural mistake. They take an existing process, hand it to an agent step by step, and inherit every constraint that only existed because humans were the execution layer. Different scale. Different budget. Same wrong question.
Once you've redesigned a process, you still need a framework for deciding which parts agents should own versus where humans stay in the loop. The Organizational Capability Map provides that structure. And getting clear on what agents should actually optimize for is a separate problem entirely, one I covered in The Intent Gap.
The question isn't where to start. It's how to start. And how is: understand first, pilot second, redesign third. The scale doesn't matter. A Fortune 100 team with unlimited budget and a dedicated AI program made the same structural mistake as a 50-person manufacturer. More on that next.