There’s a strange and slightly hilarious thing happening in the tech world right now. The people who most need to hear warnings about AI are the same people who are least capable of understanding them. Not because they’re stupid. Not because they don’t care. But because they literally do not yet have the cognitive tools to process the danger.
That’s the irony. And it’s the perfect place to begin.
Because AI is creating a generation of ultra-confident novices who absolutely believe they’re right while being miles off the mark. They’re not malicious. They’re not lazy. They’re simply swimming in unfamiliar waters while convinced they’ve grown gills.
And AI keeps handing them gold stars for dog-paddling.
This isn’t just a skills gap. It’s a judgment gap. A self-assessment gap. A blind-spot-the-size-of-Manitoba gap. And the more AI they use, the wider that gap becomes.
If you’re an experienced technologist, you’ve already seen this. Hell, you’ve likely felt the creeping frustration as you watch bright but inexperienced people sail confidently off the edge of the map because their chatbot said the water was deep enough.
But if you’re newer to the game, this might feel like needless gatekeeping. Or worse, like some bitter old-timer rant. I assure you it’s neither. It’s a statement of practical reality, backed by research and reinforced daily in real-world development.
And if that statement feels harsh, good. Some ideas need to hit like a two-by-four to get through the noise.
Let’s dig in.
The AI Sycophant Problem: Machines That Tell You You’re Brilliant
For years, my go-to insult for someone obsequious was “sycophantic toady.” Turns out I wasn’t just describing certain people. I was predicting modern AI behaviour.
Large language models aren’t “smart.” They’re pattern-finishers. They want to complete the shape of the conversation you’re having. If you sound confident, they produce confident answers. If you sound lost, they generate reassuring platitudes. If you assert something wrong, they’ll often nod along unless directly challenged.
People think this is helpful. It’s not. It’s flattery coded into probability weights.
This is why novices walk away from AI sessions feeling empowered, enlightened, and invincible. They ask half-formed questions, get polished paragraphs back, and assume this magic proves their understanding is solid.
AI outputs confidence. Humans mistake it for competence.
But here’s the kicker: the more “AI literate” someone believes they are, the worse the effect. A recent study covered in Futurism highlighted this. People who were already comfortable using AI became more overconfident in their performance even when their work was no better than before.
Why does this happen? Because familiarity breeds complacency. The tool feels easy. It feels fast. It feels frictionless. And frictionlessness tricks the brain into thinking the outcome must be good.
Experienced developers don’t fall for this. They see the holes. They poke at the seams. They ask, “Why is the model assuming X?” They double-check paths. They run through edge cases.
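Here’s a small, hypothetical illustration of the difference (the `chunk` helper below is invented for the example, the kind of thing a chatbot hands back with full confidence). The happy path works, so a beginner calls it done. A veteran pokes at the edges first:

```python
# A hypothetical AI-suggested helper: split a list into n roughly equal chunks.
def chunk(items, n):
    size = len(items) // n
    return [items[i * size:(i + 1) * size] for i in range(n)]

# The happy path a novice tries once, sees work, and ships:
print(chunk([1, 2, 3, 4], 2))     # [[1, 2], [3, 4]]  -- looks fine

# The edges an experienced developer pokes at before trusting it:
print(chunk([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]]  -- the 5 is silently dropped
print(chunk([1, 2], 5))           # [[], [], [], [], []]  -- every chunk comes back empty
# chunk([1, 2, 3], 0)             # ZeroDivisionError -- never handled
```

Same code, same chatbot, same cheerful tone. The only variable is whether the human looking at it knows which questions to ask.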
Beginners do none of this.
Which leads directly to the next problem.
Vibe Coding: When Momentum Replaces Mastery
Let’s talk about vibe coding.
Vibe coding is the modern habit of building software through vibes instead of understanding. It’s the act of:
- grabbing a snippet from somewhere
- pasting it in
- asking AI to “fix the error”
- retrying until it runs
- never learning why any of it works
To a novice, this feels productive. They get results. They see progress. They think they’re “coding.” And AI is right there, cheering them on with bright, happy, confident answers.
But without the underlying mental models, there is no reflective layer. They cannot assess quality. They cannot detect false assumptions. They cannot spot when the AI is hallucinating. They don’t even know when they’ve crossed into dangerous abstractions.
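To make that concrete, here’s a hedged sketch of what a vibe-coding session often hands back (the function, table, and column names are invented for illustration). It runs, the demo passes, and the error message is gone:

```python
import sqlite3

# Hypothetical output of "write me a login check" after a couple of "fix the error" retries.
def check_login(db_path, username, password):
    conn = sqlite3.connect(db_path)
    # The query is assembled by string interpolation, so a username like
    # "admin' --" comments out the password check and logs in without one.
    query = f"SELECT 1 FROM users WHERE name = '{username}' AND password = '{password}'"
    row = conn.execute(query).fetchone()
    conn.close()
    return row is not None
```

Nothing crashes. Nothing looks wrong to someone who doesn’t yet know what wrong looks like.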
This is exactly what the Dunning-Kruger framework predicts.
Low competence → high confidence → catastrophic decisions.
And AI amplifies it.
Why? Because beginners don’t know what reflection looks like.
Reflection requires mental scaffolding.
You need to know:
- what a correct answer looks like
- what failure modes exist
- how code behaves under stress
- how architecture shapes constraints
- how data types interact
- how security boundaries work
- what “normal” feels like
Without those internal reference points, the novice has no way to challenge the AI. No internal testing. No warning bells. No instinctive “that smells wrong.”
AI hands them a steaming pile of confident nonsense, and they think it’s gospel.
Not because they’re foolish, but because they lack the ability to see the nonsense in the first place.
This isn’t their fault. But it is their problem. And when they’re building production systems? It becomes everyone else’s problem too.
The Illusion of Understanding: When Answers Replace Thinking
AI encourages a dangerous pattern: cognitive offloading without cognitive return.
Here’s what that means in plain language.
A user asks the AI to solve a problem.
The AI produces a neat, correct-sounding answer.
The user accepts the answer at face value.
The user never engages with the reasoning.
The user never checks alternatives.
The user never breaks the solution down.
The user never tests edge cases.
The user never asks: “What did the AI assume here?”
So the user never learns.
And worse, they walk away more confident than before.
That’s the part that should scare people.
AI gives you results without forcing you to build the skill that traditionally created the result. Historically, effort taught you something. Failure taught you something. Debugging taught you something. You earned the ability to predict behaviour because you lived through wrong turns and bad assumptions.
Now? People skip the struggle and therefore skip the learning.
They get the answer without getting the understanding.
It’s the intellectual equivalent of steroids without training. You get the short-term gains but none of the real muscle. And the moment things break, you’re helpless.
Experienced Devs Aren’t Threatened. They’re Bored.
One of the loudest talking points from newer developers is that veterans are “gatekeeping” or “intimidated” by AI.
Let me phrase this in a way that cuts through the fluff:
Experienced developers aren’t scared of AI. They’re irritated by the noise surrounding it.
They see the hype-cycle.
They see juniors posting half-correct tutorials.
They see businesses chasing shortcuts.
They see teams delegating responsibility to machines they don’t understand.
They see the erosion of discipline and craftsmanship.
Their expertise isn’t threatened. If anything, it just got more valuable, because someone needs to clean up the mess.
If that sounds arrogant, understand that it’s not. It’s gravity.
You can’t bend reality by wishing.
There is no world where:
- shallow understanding
- plus confidence
- plus AI sycophancy
- plus skipping fundamentals
produces long-term competence.
You can glue together a prototype. You can generate scaffolding. You can automate the boring bits. But you cannot offload judgment.
And judgment is what veterans have and novices lack.
AI literacy doesn’t make you wise. It just makes you fast.
This needs to be said clearly:
AI is a force multiplier, not a force corrector.
If you’re good, AI makes you brilliant.
If you’re sloppy, AI makes you dangerous.
If you’re new, AI makes you overconfident.
If you’re lost, AI makes you lost faster.
AI literacy is not self-reflection. It’s not discipline. It’s not experience. It’s not pattern recognition. It’s not engineering judgment.
A power tool doesn’t teach you carpentry.
A calculator doesn’t teach you math.
A spellchecker doesn’t teach you English.
AI does not teach you thinking.
It only turbocharges whatever thinking you already bring to it.
That’s the hard truth people don’t want to hear.
The Real Risk: A Generation That Doesn’t Know They’re Wrong
The most dangerous developer isn’t the one who doesn’t know.
It’s the one who doesn’t know that they don’t know.
AI creates these people by the thousands.
Not out of malice.
Not out of failure.
But out of design.
Because the tools are built to be agreeable, helpful, and confident. The model doesn’t check for correctness unless prompted. It doesn’t challenge assumptions unless asked. It doesn’t force users to think. It completes patterns and encourages momentum.
Momentum feels like progress.
Progress feels like competence.
Competence feels like mastery.
Even when it’s all built on quicksand.
We’re creating hyper-confident beginners with hollow skill trees.
This is the new Dunning-Kruger curve.
And AI steepens it dramatically.
The Smack in the Face This Era Needs
So let me put this plainly.
You cannot vibe your way to mastery.
You cannot outsource your judgment.
You cannot skip the grind and expect the wisdom.
You cannot rely on AI to tell you when you’re wrong.
And most importantly:
AI is not a mentor. It’s a mirror.
If you bring depth, discipline, and curiosity, it reflects that back amplified.
If you bring shallowness and shortcuts, it reflects those, too.
No amount of AI polish covers the absence of thinking.
The Closing Irony
The only readers who will agree with this article are the ones who already know what I’m talking about.
Veterans will nod along.
Intermediate devs will smirk knowingly.
Leads and architects will say, “Finally, someone said it.”
But beginners? Many won’t believe it. They can’t. They lack the frameworks to evaluate the claim.
It’s not arrogance. It’s not age. It’s not superiority.
It’s a cognitive limitation that every one of us has passed through on the way to competence.
The irony writes itself:
The people most affected by AI-driven overconfidence are the least capable of recognizing that they’re affected by it.
And unless someone says so bluntly, this cycle will continue unchecked.
So here it is, without fluff, without apology:
AI is making beginners bolder, not better.
Only reflection, discipline, and experience fix that.
And AI cannot give you any of those things.
And as always …
StayFrosty!

