AI Won’t Replace Developers Who Can Think Critically

I use AI every day. It’s become part of how I work, and I’m not embarrassed to say I’m reliant on it. But the more I use it, the more I notice something that doesn’t get talked about enough: knowing how to use AI well is a skill in itself, and it’s one that depends heavily on already knowing what you’re doing.

There’s a lot of talk right now about “vibe coding” and “one-shot prompts.” If I’m honest, those terms frustrate me a bit. Not because the ideas are wrong, but because they flatten something complicated into something that sounds easy. And easy is dangerous when you’re building software.

What Does Vibe Coding Actually Mean?

I’ve heard the term thrown around a lot, but when you dig into it, it seems to cover two very different things.

There’s the developer who knows their craft, gets into a flow state with AI, and uses it to move faster. They’re steering the conversation, catching mistakes, and making judgment calls along the way. That’s productive. That’s genuinely useful.

Then there’s the person who doesn’t write code, sits down with an AI, and asks it to build something from scratch. They’re along for the ride. The AI is doing the thinking, and they’re accepting the output because they don’t have the experience to question it.

These are not the same thing. Calling them both “vibe coding” hides a problem that’s going to bite us.

The “Yes, You’re Absolutely Right” Problem

If you’ve worked with AI for any length of time, you’ll know this pattern. You spot something off in the code it’s produced. You push back. And the AI responds with some variation of: “Yes, you’re absolutely right, I apologise for the oversight.”

It’s almost too agreeable. But here’s the thing: you wouldn’t have caught that mistake if you didn’t already know what to look for. That moment of pushback, where you question what the AI has done and redirect it, is where the real value of experience shows up.

A junior developer might not catch it. Someone with no coding background almost certainly won’t. And the AI isn’t going to flag its own blind spots. It’ll confidently serve up code that looks right, passes a surface-level check, and quietly introduces a problem that doesn’t show up until it’s in production.

Context Is a Box

One thing I’ve noticed is that AI works brilliantly within the boundaries you give it. But that’s also its limitation.

When you set up a project with configuration files, markdown docs, and clear instructions… the AI treats all of that as its blueprint. It follows the rules. It stays inside the lines. And most of the time, that’s exactly what you want.

But sometimes, the right move is to scrap the plan. Rethink the approach. Realise that the architecture you started with isn’t going to hold up, and you need to come at the problem from a different angle.

Humans do this naturally. We step back, question assumptions, and sometimes throw out work we’ve already done because we’ve spotted a better path. AI doesn’t do that. It works with what you’ve given it, and if what you’ve given it is flawed, it’ll build confidently on top of those flaws.

That’s not a criticism of the technology. It’s just a reality of how it works right now. And it means someone needs to be in the room who can think outside the box… because the AI won’t.

A Real Example

Here’s something that happened to me recently. I was working on a WordPress project where house listings were being filtered by date. The performance was poor, and I asked the AI to trace the data flow and tell me where the bottleneck was.

It did a solid job. It identified that when transient caches were cold, every house in the results was triggering two separate HTTP requests to an external booking API… sequentially.
With 15 to 20 houses, that’s 30 to 40 requests in series. That’s the kind of thing that brings a page load to its knees!
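To make the shape of that bottleneck concrete, here’s a minimal Python sketch. The function and endpoint names are hypothetical stand-ins for the real booking API, and the latency figure is simulated; the point is just that serial calls add up linearly:

```python
import time

HOUSES = 20          # houses on a typical results page
CALLS_PER_HOUSE = 2  # the two HTTP requests per house the trace found
LATENCY = 0.01       # simulated round trip; a real API is more like 0.2-0.5s

def fetch_from_booking_api(house_id: int, endpoint: str) -> dict:
    """Stand-in for a real HTTP call; sleeps to model network latency."""
    time.sleep(LATENCY)
    return {"house": house_id, "endpoint": endpoint}

start = time.time()
for house in range(HOUSES):
    for endpoint in ("availability", "pricing"):  # hypothetical endpoint names
        fetch_from_booking_api(house, endpoint)
elapsed = time.time() - start

# 40 calls in series: total time scales linearly with per-call latency.
print(f"{HOUSES * CALLS_PER_HOUSE} sequential calls took {elapsed:.2f}s")
```

At real-world latencies of 200 to 500 milliseconds per call, those 40 serial requests add ten to twenty seconds before the page can render, which matches the symptom.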

The AI then offered three options:

  1. Client-side progressive loading – JavaScript in the browser.
  2. Pre-warming caches via cron – a fast local database query.
  3. Batching the pricing calls – more advanced JavaScript in the browser.

On the surface, option one, progressive loading, sounds reasonable. Shift the work to the client, show the cards first, fetch prices after. But I knew from running the old production site that during peak traffic, every user session would still be hammering the external API. You’re just moving the bottleneck, not removing it.

The better answer was option two. A cron job that pre-warms the cache on a schedule, so no user request ever touches the external API. Every visitor reads from the local database. The AI agreed immediately when I pointed this out (obviously!) but it didn’t arrive there on its own because it didn’t have the context of real-world traffic patterns and years of dealing with exactly this kind of problem.
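The pattern is worth spelling out, because it generalises beyond WordPress. Here’s a hedged Python sketch of the idea: a scheduled job does all the API work, and the request path only ever reads a local cache. The function names and the in-memory dict are illustrative stand-ins for WordPress transients and WP-Cron:

```python
import time

# Toy local cache standing in for WordPress transients / the database.
cache: dict = {}

def fetch_price_from_api(house_id: int) -> dict:
    """Stand-in for the slow external booking API (hypothetical)."""
    return {"house": house_id, "price": 100 + house_id, "fetched_at": time.time()}

def prewarm_cache(house_ids: list) -> None:
    """Run on a schedule (e.g. WP-Cron or system cron), never per request."""
    for house_id in house_ids:
        cache[house_id] = fetch_price_from_api(house_id)

def render_listing(house_id: int) -> dict:
    """Request path: reads only the local cache, never touches the API."""
    return cache.get(house_id, {"house": house_id, "price": None})

prewarm_cache(list(range(20)))                    # the cron job's work
page = [render_listing(h) for h in range(20)]     # a visitor's page load
print(all(p["price"] is not None for p in page))  # no cold cache on request
```

The design choice is the point: progressive loading moves the slow calls into each visitor’s browser session, while pre-warming moves them off the request path entirely, so peak traffic never touches the external API.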

That’s the gap. The AI can analyse code, trace logic, and suggest solutions. But the judgment call, the one informed by experience, production incidents, and knowing how systems behave under load… that still sits with the developer.

How I Actually Work With AI

I’ve become more disciplined about this over time. My process now looks something like this:

  1. Discovery first. I have a conversation with the AI just to explore the problem. No code gets written. No commits happen. We just talk through what we’re trying to solve.
  2. Plan on paper. I ask the AI to dump a markdown file into the root of the project outlining the tasks we discussed. That gives me a sanity check. I can read through it, make sure nothing’s gone sideways, and catch any assumptions I don’t agree with.
  3. One task at a time. I work through the plan in small, testable steps. Each one gets reviewed either with a test or by manually checking the result. No big-bang deployments. No hoping it all hangs together.

This sounds slow, but it’s actually faster. You catch problems early. You don’t end up three hours into a session wondering why everything’s broken and trying to untangle a mess of changes that were all made at once.

The Echo Chamber Risk

If you throw everything at an AI in one go (“here’s my project, here’s what I want, go build it”), you’re setting up an echo chamber. The AI will work within the constraints you’ve given it, and if those constraints are wrong or incomplete, it’ll just keep building on top of them.

Worse, if you’ve told it “don’t do X” early in the session and then later discover that X is actually what you need, you’ve got a context problem. The AI is still operating under instructions that no longer apply, and unpicking that mid-session is messy.

Breaking work into small, focused tasks avoids this. Each step is a checkpoint. Each checkpoint is a chance to change direction if something isn’t working.

The Real Threat

I’ve written before about the impact of AI on jobs, and my position hasn’t changed. AI isn’t coming for experienced developers. It’s coming for the work that people who aren’t developers are now trying to do with AI tools.

That sounds harsh, but think about it. If someone with no coding background uses AI to build an application, who’s checking the output? Who’s catching the security holes, the performance bottlenecks, the architectural decisions that won’t scale? Who’s pushing back when the AI gets it wrong?

Nobody. And that’s a time bomb.

We’re going to end up with a wave of AI-generated software that looks finished but isn’t. It’ll work in demos. It’ll pass a casual glance. But under real load, with real users, the cracks will show. And fixing those problems will still require someone who understands the fundamentals.

Keep Going

It took me decades to get to the point where I can look at a problem and instinctively know something’s off. I look at code I wrote six months ago and wonder why I approached it that way. That’s not a sign of weakness, it’s a sign of growth. Your judgment keeps sharpening if you let it.

There’s a lot of anxiety in the industry right now. People are worried. They’re in transition. They’re not sure where they fit. I get it.

But I keep coming back to the same thought: AI can write code faster than I can. It can’t think like I can. It can’t question its own assumptions. It can’t draw on twenty years of watching things go wrong in ways that nobody predicted.

So my advice is simple. Keep using AI. Get better at it. Learn its strengths, learn where it falls short, and sharpen your own judgment in the process. The developers who treat AI as a tool, not a replacement for thinking, are the ones who’ll still be here when the dust settles.

The ones who hand over the keys and stop paying attention? They’re the ones who should be worried.

The best use of AI isn’t getting it to do your thinking for you. It’s getting it to help you think better.


For more on working with AI in real-world development, or to explore how tools like Claude Code fit into a practical workflow, the conversation is just getting started.
