You Might Be Using AI Wrong. Here's a Fix.

I've been in my career since 2000 (yikes). That means I've lived through every iteration of "this new scary thing is going to replace us." Digital transformation in the mid-2000s. Automation in manufacturing. And now AI. Here's what I've noticed: the panic is always the same. And it's typically misplaced, but not for the reason people think.

When I was at Kaiser Permanente during the shift from paper to electronic medical records, patients called in to complain. They felt like their doctors weren't talking to them anymore; they were talking to a screen. Some of them actually said, "I'm not thriving," a pointed dig at our company tagline at the time.

Look at us now. Think about how strange it would feel if a doctor walked into an exam room carrying a paper chart.

When I worked in manufacturing, the fear was that automation would eliminate experienced factory floor workers. And yes, in some cases, headcount reduction was the objective. But in many more cases, it created an entirely different opportunity. Instead of performing tedious, repetitive assembly tasks, those same operators were being trained to program and operate machinery. To identify defects with precision. I watched factory employees get promoted because they leaned into the technology instead of away from it.

AI is the same story. With one important difference.

The problem isn't that people are afraid of AI. It's that many people who aren't afraid are using it wrong.

Let's be honest about something other consultants aren't saying out loud: AI is replacing jobs. Not metaphorically — actually. And pretending otherwise doesn't serve anyone.

What I've observed across two decades of helping organizations navigate transformation is that the question was never "will this technology change things?" It always does. The EMR transition changed healthcare. Automation changed manufacturing. The question that actually matters is: what do you do with the change?

The factory workers who leaned into automation didn't escape disruption. They navigated it by becoming the people who could program the machinery, identify the defects, operate at a level the automation couldn't reach. They moved up precisely because they stopped competing with the tool and started using it.

AI is the same dynamic. The productivity gains are real. The displacement is real. What's also real is that every efficiency gain AI creates generates a corresponding need for someone who can do what AI can't: read the room, qualify the output, and know when the first draft is wrong. That's not a consolation prize. That's the actual work.

Here's the data point that stopped me mid-scroll this week: OpenAI, the company that arguably kicked off this entire era, is reportedly planning to nearly double its workforce from 4,500 to 8,000 employees. A significant chunk of those new roles? "Technical ambassadors": humans hired specifically to help other humans use AI tools effectively. The most advanced AI company on the planet is betting on people to close the adoption gap. That tells you everything you need to know about where the real work still lives. And for most people, the real work starts with the prompting.

I see it consistently. Someone opens an AI tool, types a question the same way they'd Google it, gets a mediocre output, and either concludes that AI is overhyped, or worse, uses the mediocre output anyway. The issue isn't the tool. It's the approach. Out of years of watching this pattern repeat, I developed a framework for AI prompting that I use with my clients and in my own practice. I call it AEQ.

Act. Engage. Qualify.

  • Act (The Expertise): Define the AI's persona before you ask your question. Don't just ask for a summary; give direction. "Act as a data-driven advisor to an executive audience" produces a fundamentally different output than "summarize this." You're not searching. You're directing.

  • Engage (The Art): Provide the specific context the AI needs to be useful. Program data, goals, industry constraints, stakeholder dynamics. Feed the AI the technical details you need translated into actionable steps. The quality of your output is directly proportional to the quality of your context.

  • Qualify (The Science): This is the step most people skip, and it's the most important one.

Review the output through the lens of what you actually know. Your knowledge of the team. Your read on the stakeholders. Your understanding of what's operationally feasible versus what looks good on paper. Here's a real example from a presentation I gave — Mastering the Art of Technical Program Management Without a Technical Background:

I used AEQ to generate an initial risk register for a program that had just kicked off. The output identified UAT depth deficiency, zero-buffer logic, and data migration overlap as high-priority risks — all legitimate, all data-driven. Then I applied the Qualify step. I knew the customer had internal expertise and the resources to stress-test under a tight deadline. So I fed that context back in and asked for an updated risk assessment. The output shifted entirely. UAT depth deficiency became defect resolution latency. Zero-buffer logic became expert bias and tunnel vision. The risks were now calibrated to the actual environment, not just a generic program template.

That's the difference between using AI and using AI well.
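For readers who reach AI through an API or a script rather than a chat window, the three steps can be sketched as a small prompt assembler. This is purely illustrative: the function names `build_aeq_prompt` and `qualify_followup` and all the example strings are mine, not part of the AEQ framework itself, and the risk-register details echo the example above only loosely.

```python
def build_aeq_prompt(act: str, engage: list[str], request: str) -> str:
    """Assemble a prompt in AEQ order: persona first, then context,
    then the actual request. (Illustrative sketch, not a real API.)"""
    context = "\n".join(f"- {item}" for item in engage)
    return (
        f"{act}\n\n"                 # Act: define the persona up front
        f"Context:\n{context}\n\n"   # Engage: supply the specifics
        f"Request: {request}"        # The actual ask comes last
    )

def qualify_followup(known_context: str) -> str:
    """Qualify happens after the model responds: feed back what you
    actually know and ask for a recalibrated answer."""
    return (
        f"Additional context from my own knowledge: {known_context}\n"
        "Revise your previous answer with this in mind, and flag "
        "anything that no longer holds."
    )

prompt = build_aeq_prompt(
    act="Act as a data-driven advisor to an executive audience.",
    engage=[
        "Program kicked off two weeks ago",
        "Hard go-live date with zero schedule buffer",
        "Customer has deep internal expertise",
    ],
    request="Draft an initial risk register for this program.",
)
```

The order is the point: persona shapes the voice, context shapes the substance, and the qualify step is deliberately a second round-trip, because it depends on judgment the model doesn't have.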

AEQ is built on an honest reality: not the hope that AI won't come for your workflow, but the belief that the people who know how to direct it, interrogate it, and qualify its output will always have something irreplaceable to offer. AI gives you a first draft. Your experience is what makes it real.

And that qualification step, the one most people skip, is what separates a productivity tool from a liability.

