Something’s been bugging me about how new devs are working, and I need to talk about it. We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning. Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares. The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.
Have you actually found that to be the case in anything complex though? I find it just forgets parts of the problem in order to generate something, and gets stuck in an infuriating loop of fucking up.
It took us around 2 hours to run our coding questions through ChatGPT and see what it gives. It gave complete shit for most of them; only one or two questions did we have to replace.
If a company cannot invest even a day to go through their hiring process and AI-proof it, then they have a shitty hiring process. And with a shitty hiring process, you get shitty devs.
And then you get people like OP, blaming the generation when, if anything, it’s them and their company to blame… for falling behind. Got to keep up, folks. Our field moves fast.
I find ChatGPT to sometimes be excellent at giving me a direction, if not outright solving the problem, when I paste errors I’m too lazy to search for myself. I say sometimes because other times it is just dead wrong.
All code I ask ChatGPT to write is usually along the lines of “I have these values that I need to verify, write code that verifies that nothing is empty and saves an error message for each one that is,” and then I work with the code it gives me from there. I never take it at face value.
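For context, what it hands back for a prompt like that looks roughly like this. This is a minimal sketch I’m reconstructing from memory, not the actual output; the dict shape and field names are my own assumptions:

```python
# Rough sketch of the kind of validation code that prompt produces.
# The input shape and field names here are made up for illustration.
def validate_not_empty(values: dict) -> list[str]:
    """Return an error message for every value that is missing or empty."""
    errors = []
    for name, value in values.items():
        if value is None or (isinstance(value, str) and not value.strip()):
            errors.append(f"'{name}' must not be empty")
    return errors


# Example: validate_not_empty({"username": "alice", "email": ""})
# -> ["'email' must not be empty"]
```

The point is that I treat something like that as a first draft to rework, not as code to merge as-is.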
Have you actually found that to be the case in anything complex though?
I think that using LLMs to create complex code is the wrong use of the tool. They are better at providing structure to work from rather than writing the code itself (unless it is something simple, like the example above), in my opinion.
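To make “structure to work from” concrete: I mean something like a skeleton where the LLM names the pieces and I fill in the actual logic myself. This is a hypothetical example, not code from any real project:

```python
# Hypothetical skeleton of the sort an LLM is decent at sketching out:
# the shape of the solution, with every real decision left to me.
class ReportImporter:
    """Loads raw report rows, validates them, and stores the good ones."""

    def load(self, path: str) -> list[dict]:
        """Read raw rows from the file at `path`."""
        raise NotImplementedError  # TODO: parsing I write and test myself

    def validate(self, rows: list[dict]) -> list[str]:
        """Return an error message for each row that breaks a business rule."""
        raise NotImplementedError  # TODO: the actual domain rules

    def store(self, rows: list[dict]) -> None:
        """Persist the validated rows."""
        raise NotImplementedError  # TODO: database code, written by hand
```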
My rule of thumb: use ChatGPT for questions whose answer I already know.
Otherwise it hallucinates and tries hard to convince me of a wrong answer.
I think that using LLMs to create complex code is the wrong use of the tool. They are better at providing structure to work from rather than writing the code itself (unless it is something simple, like the example above), in my opinion.
I agree with you on that.