Am I cooked as a junior dev?
The question “Are junior devs cooked?” haunts my feed every day. Usually, the fear is that the entry-level job is disappearing. But after a few months of heavy AI usage, I’ve realized the real danger isn’t that the jobs are going away; it’s that my ability to think might be. I’m becoming a “prompt manager” before I’ve even learned how to be a “problem solver.” In a world where code is becoming a commodity, being a manager of someone else’s logic is a dangerous place to be.
The moment I hit send on a prompt... First, relief. Answers appear, code appears, explanations appear. Momentum. I feel incredibly "productive". But then, that quiet feeling I try to ignore starts crawling up my neck. It's the gnawing sense that I’ve skipped the struggle, the very part of the process where the learning actually lives. Nothing breaks immediately, so it feels safe. But deep down, I know this progress is fugazi.
the trap
That’s where the trap is. AI makes it easier to stay shallow without noticing. Every time something feels abstract or uncomfortable, there’s an exit. “I’ll understand it later.” “I’ll come back to it.” “I just need to move forward.” Later rarely comes. You end up remembering words, patterns, shapes of answers, but not the reasoning underneath. You’re not even outsourcing memory anymore, you’re outsourcing thinking.
the more I rely on AI, the less I ask the right questions
My mental muscle is atrophying. I’m losing the friction I need to actually grow. I’m just sliding down a polished slope toward being a dev who can’t survive 10 minutes without a chatbox holding my hand.
false feedback
I noticed this most clearly in a personal project. At some point, I got into the habit of asking what felt like an innocent question. When working with Claude Code, I would often ask the model: “Review this code, list any issues you find, and write them to a DEFAUTS.md file.”
Sounds smart, right? Especially as a junior. You assume there must be things you’re doing wrong. You expect feedback. The problem is that the model is relentless: it will always find issues, even when they’re not real problems in the context of the project.
It hallucinates architectural flaws, urging me to over-engineer simple logic into a nightmare of abstract patterns that looks like it belongs in a FAANG codebase. And because the output looks so structured, so confident, I fell for it. Every. Single. Time. I spent hours fixing 'defects' that didn't matter, losing my own common sense to a machine that had no idea what I was actually building.
the missing "no"
This happens because AI lacks the one thing a junior needs most: the ability to say "no" to a bad premise. It will help you dig a hole deeper and deeper, even if you shouldn't be digging at all. When you talk to a human, the gaps are exposed. Assumptions get challenged. “It depends” shows up. The conversation pushes back. You’re forced to refine the question, not just consume an answer.
From a learning perspective, that friction is not noise. It’s feedback. It’s part of the loop that reveals misunderstanding early. Removing it doesn’t stop learning, but it delays the moment where errors and false assumptions become visible.
implicit decisions
At some point, I realized this wasn’t really about code quality. It was about who was actually in the driver's seat. I would ask for help on a small piece of code (nothing architectural). The result would come back clean and functional. But something felt off.
what I had in mind
Python
def send_email(user, template, context):
    content = render_template(template, context)
    smtp.send(user.email, content)
what the AI suggested
Python
class EmailService:
    def __init__(self, renderer, transport):
        self.renderer = renderer
        self.transport = transport

    def send(self, user, template, context):
        content = self.renderer.render(template, context)
        self.transport.send(user.email, content)

class TemplateRenderer:
    def render(self, template, context):
        return render_template(template, context)

class EmailTransport:
    def send(self, address, content):
        smtp.send(address, content)
Nothing here is obviously wrong. In fact, this kind of structure is often considered better design.
And that was precisely the problem.
The pattern itself wasn’t necessarily bad. It might even have been a good choice. The issue was that I didn’t fully grasp what it implied. A decision had been made about abstraction and future flexibility without me ever consciously making it. The code worked, but I had lost the ability to justify it.
I also realized how easy it is to slip into cargo-cult thinking. As a junior, you’re hit with a lot of principles at once: Clean Code, SOLID, DRY. They start to feel like commandments: thou shalt abstract, thou shalt decouple. You recognize the shapes of “good” code, even if you don’t fully understand why they apply.
Cargo-culting isn't new. People were copying patterns from Stack Overflow long before LLMs. What AI changes is the speed and confidence of the process. When an AI applies a principle, it doesn't hesitate. It just produces something that looks correct. Reasoning gets replaced by pattern matching. You’re not building a system; you're just decorating your codebase with patterns you haven't earned the right to use yet.
the illusion
For a while, I blamed the tool. But that’s too convenient. The deeper issue is language. AI outputs sound right, even when they’re wrong. They’re fluent and confident. And when you don’t fully understand the system you’re working in, you don’t actually know what a good answer should feel like. You don’t know which assumptions are safe and which ones are dangerous.
With enough exposure, I started noticing something unsettling: I was not being fooled by the model itself. I was being fooled by explanations that felt right, by answers that sounded confident enough to bypass my own doubt. That realization was uncomfortable, because it forced me to admit that the weak point isn’t the tool. It's me. I'm the one who stopped being critical. I'm the one who traded my edge for a bit of comfort.
brand new perspective
And once you see that, something shifts. I stopped asking whether AI is “good” or “bad” because it’s not going anywhere. I stopped asking how fast I can build something, and I started asking whether I actually understand what I'm wiring together.
the real question is what AI does to my relationship with understanding.
Using it saves time, yes. It removes boilerplate, yes. But it also raises the bar on what it means to actually understand something. When plausible answers are instant, the only real defense is having a solid mental model before asking the question.
thinking again
Being good with AI, then, is just being good. It means knowing when to trust it, when to slow down, and when to step back and rebuild your own understanding. The tool amplifies whatever is already there. It rewards clear thinking and punishes vague mental models. This clicked for me back in September at a Devs for AI meetup. One of the speakers dropped a truth nuke that reframed my entire anxiety:
"Our job has never been about coding; it’s about understanding what the client actually wants. If a client explains a requirement to an AI the same way they explain it to a human, it’s going to be a disaster."
That’s the gap. AI removes the boilerplate, but it pushes the real responsibility upward. It forces us to move from being "translators of syntax" to "architects of intent."
To survive this, I realized I had to intentionally reintroduce the friction that AI tries so hard to remove. I started looking for places where my thinking would be challenged by something other than fluent answers. I went to more meetups. I tried to be around cracked senior engineers, not to get solutions, but to hear how they framed problems. What they questioned. What they ignored. What they pushed back on.
I also went back to slower forms of learning. Books like The Pragmatic Programmer don’t give you instant answers. They give you principles, trade-offs, and mental models that only make sense once you’ve been wrong a few times.
But I also realized I could turn the tool against its own nature. Since the default state of an LLM is to be "agreeable," I decided to break that loop, for example by modifying my ChatGPT custom instructions.
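They boil down to something like this (not the exact wording, but the spirit):
“Don’t agree with me by default. Challenge my assumptions, point out the flaws in my reasoning, and review my ideas the way a demanding senior engineer would. If my premise is bad, say so before answering.”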

By explicitly telling the AI to critique my thinking and act like a high-level senior, I’m forcing friction back into the chat window.
Now, I know what you’re thinking: “Didn’t you just say that asking for feedback leads to the 'false feedback' trap?” Yes. But the difference is in how I consume the output. I don't treat the AI’s critique as a to-do list anymore. I treat it as a sparring session.
When it tells me my architecture is "suboptimal," I don't just change it anymore. I ask myself: Is the AI right, or is it just hallucinating complexity? By forcing the machine to say "no" or challenge my shortcuts, I’m no longer just consuming an output; I’m being forced to defend my technical decisions. Even if the AI is wrong, the act of proving it wrong makes my own mental model stronger. It's no longer my assistant; it's my sparring partner.
Finally, I’ve shifted to something I stumbled upon in a blog post: QDD (Question Driven Development). Instead of asking for code first, I start by asking the AI questions about the problem space and the constraints. I force myself to articulate the logic before a single character of syntax is generated. By the time I ask for the code, I’ve already done the heavy lifting of thinking. The AI becomes a validator of my logic, not the architect of it.
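For the email example above, that might mean asking things like: do templates vary per user? What should happen when sending fails? Do I actually need the transport to be swappable, or is one function enough for now? Only once I can answer those questions myself do I ask for the implementation.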
understanding the big picture is the only real way to stand out.
So no, I don’t think being a junior today means being cooked. But it does mean the bar moved. Avoiding the struggle used to be an option. It doesn’t feel like one anymore. And maybe that discomfort isn’t a sign that the job is dying, but that it’s finally asking more of us.