In my recent talk, “Dancing with AI – A Developer’s Journey,” one of the most important topics I covered was security: AI-generated code can make it alarmingly easy to overlook critical vulnerabilities. This isn’t just a WordPress problem; it applies to any framework. The principle I keep coming back to is human protection over AI security.
AI can be incredibly helpful in speeding up development, whether you’re scaffolding a plugin, writing a function, or querying a database. But speed can come at the cost of scrutiny. AI doesn’t intuitively understand the nuanced responsibilities of a developer, especially when it comes to data integrity, access control, and secure input handling.
One of the most overlooked areas? Security best practices.
The problem? AI doesn’t have intuition. It doesn’t understand context, data sensitivity, or ethical responsibility. You might get working code that appears solid, but dig deeper and you’ll often find missing sanitization, raw user input, or database queries without proper parameters. It’s a ticking time bomb disguised as convenience.
During the talk, I shared a key insight: AI-generated code often “works” on the surface, but that doesn’t mean it’s safe or production-ready. For example, I’ve been handed AI code that inserts user data into a database without any escaping: a classic case of SQL injection waiting to happen. No warnings, no hints, just code that executes successfully while quietly introducing risk.
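To make the risk concrete, here is a minimal, framework-agnostic sketch of the same class of bug using Python’s built-in sqlite3 module (the talk’s examples were WordPress, where `$wpdb->prepare()` plays the role shown here; the table and input values are invented for illustration):

```python
import sqlite3

# Throwaway in-memory database for the demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "Alice'); DROP TABLE users;--"

# UNSAFE: interpolating user input straight into SQL -- the pattern
# AI-generated code often produces because it "works" for benign input:
#   conn.execute(f"INSERT INTO users (name) VALUES ('{user_input}')")

# SAFE: a parameterized query treats the input as data, never as SQL.
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))

row = conn.execute("SELECT name FROM users").fetchone()
print(row[0])  # the malicious string is stored literally, not executed
```

The fix costs nothing at runtime; the only thing it requires is a reviewer who knows to look for it.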
Human protection over AI security is non-negotiable. Developers must remain the gatekeepers.
To drive this point home, I often challenge the AI directly. I’ll ask:
“What security considerations might I be missing here?”
That simple question can turn a passive suggestion into a more thoughtful exchange and sometimes highlight risks I hadn’t considered yet.
In fact, I recently demonstrated this exact process with a real-world example: not from my slides, but from a working session with ChatGPT just this morning, in which I interrogated AI-generated logic. The screenshot below shows the result of that process: a list of missing security measures that were only uncovered after I pushed back on the AI’s initial suggestions.

What was missing? CSRF protection. Capability checks. Proper input sanitization and output escaping. Protection against direct file access and REST endpoint abuse. None of it was in the original code, and yet the AI considered the job “done.”
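Two of those missing pieces, CSRF protection and output escaping, can be sketched in a few lines. This is a hypothetical, framework-agnostic Python illustration of the pattern (in WordPress the equivalents are `wp_create_nonce()`, `wp_verify_nonce()`, and `esc_html()`; the function names and session ID here are invented for the example):

```python
import hashlib
import hmac
import html
import secrets

# Per-application secret (hypothetical setup; real apps load this from config).
SECRET = secrets.token_bytes(32)

def make_token(session_id: str) -> str:
    """Issue a CSRF token tied to the user's session."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def handle_form(session_id: str, token: str, comment: str) -> str:
    """The checks AI output often omits: verify intent, then escape output."""
    # 1. CSRF check: constant-time comparison against the expected token.
    if not hmac.compare_digest(make_token(session_id), token):
        raise PermissionError("CSRF token invalid")
    # 2. Output escaping: never echo raw user input back into a page.
    return html.escape(comment)

sid = "session-123"
safe = handle_form(sid, make_token(sid), "<script>alert(1)</script>")
print(safe)  # &lt;script&gt;alert(1)&lt;/script&gt;
```

Neither check is exotic; the point is that none of this appeared in the AI’s first draft, and nothing in that draft hinted it was missing.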
This is why human oversight is not optional.
I’ve built a mental checklist to review AI-generated output, especially when it touches user input, permissions, or external data sources. For WordPress work, I ensure all form handling includes nonces, all input is sanitized and escaped correctly, and every action checks for capability and intent.
The goal isn’t to distrust AI; it’s to collaborate with it responsibly. Think of AI as a junior developer that never sleeps but also never reads documentation unless you explicitly ask. It’s powerful, but it needs direction.
So the next time AI gives you code, don’t just paste it in and move on. Interrogate it. Refactor it. Validate it. Ask it what’s missing. And most importantly, don’t let it erode your own instincts and skills as a developer. That’s where the real danger lies.
Let’s not forget: in the evolving world of development, human protection over AI security is the standard we must uphold. Because in this partnership, we’re still the ones leading the dance.