New Junior Developers Can’t Actually Code.
There are at least four popular ways to think about progress:
The dumbest, most clueless jerks think it’s replacing something known with something known and better. Progress enthusiasts who don’t know a single thing about the areas they’re enthusiastic about usually land here.
The careful and kinda intellectually limited people think it’s replacing something known with something unknown. They can sour the mood, but are generally safe for those around them.
The idealistic idiots think it’s replacing something unknown with something known; these are the “order bringers” and revolutionaries. Everybody knows how revolutionaries do things; anyone who doesn’t can look at Musk and DOGE.
The only sane kind think it’s replacing something unknown with something unknown. That is, when you replace one thing with another, you break more than just what you could see and had listed for replacement. Because nature doesn’t fscking care what you want to see.
I honestly don’t know how anyone’s been able to code anything predominantly using AI that’s production worthy.
Maybe it’s the way I’m using AI, and to be honest I’ve only used ChatGPT so far, but if I ask it to generate a bit of code, then ask it to build on it and do the next thing, by about the third or fourth iteration it’s forgotten half of what we talked about and has missed out bits of code.
On a number of occasions it’s given me a solution, and when I question it about its accuracy and why a bit of it probably won’t work, I just get “oh yes, let me adjust that for you”.
Maybe I’m doing AI wrong, I don’t know, but quite frankly I’ll stick with Stack Overflow, thanks.
Frankly, I’ve only used those to generate pictures and the occasional hello world in a few languages, which didn’t work and didn’t seem to make sense. That was long enough ago.
Also, I have ASD, so it’s hard enough for me to make consistent, clear sense of something small. Machine-generated junk to give me ideas is the last thing I need; my thought process is different.
It’s only useful for stuff that’s been done a million times before in my experience. As soon as you do anything outside of that, it just starts hallucinating.
It’s basically like how junior devs used to go to Stack Overflow, grab whatever code looked like it would work, and just plop it into the codebase.
I remember talking to someone about where LLMs are and aren’t useful. I pointed out that LLMs would be absolutely worthless for me as my work mostly consists of interacting with company-internal APIs, which the LLM obviously hasn’t been trained on.
The other person insisted that that is exactly what LLMs are great at. They wouldn’t explain how exactly the LLM was supposed to know how my company’s internal software, which is a trade secret, is structured.
But hey, I figured I’d give it a go. So I fired up a local Llama 3.1 instance and asked it how to set up a local copy of ASDIS, one such internal system (name and details changed to protect the innocent). And Llama did give me instructions… on how to write the American States Data Information System, a Python frontend for a single MySQL table containing basic information about the member states of the USA.
Oddly enough, that’s not what my company’s ASDIS is. It’s almost as if the LLM had no idea what I was talking about. Words fail to express my surprise at this turn of events.
On the flipside, I’m discouraging people from entering CS. The passionate devs will ignore me anyway, and those that’ll listen won’t stand a chance against the hordes of professional BS “devs” that’ll master AI and talk much prettier than them.
Don’t get into CS unless you’re passionate about the craft. If you’re passionate, you’ll succeed pretty much regardless of the field.
I’ve said it before, but this is a 20-year-old problem.
After Y2K, all those shops that over-porked on devs began shedding the most pricey ones; worse in ‘at will’ states.
Who were those devs? Mentors. They shipped less code, closed fewer tickets, cost more, but their value wasn’t in tickets and code: it was investing in the next generation. And they had to go because #numbersGoUp
And they left. And the first gen of devs with no mentorship joined and started their careers. No idea about edge cases, missing middles or memory management. No lint, no warnings, build and ship and fix the bugs as they come.
And then another generation. And these were the true ‘lost boys’ of dev. C is dumb, C++ is dumb, Perl is dumb, it’s all old, supply chain exploits don’t exist, I made it go so I’m done, fuck support, look at my numbers. It’s all low attention span, baling wire and trophies because #numbersGoUp.
And let’s be fair: they’re good at this game, the new way of working where it’s a fast finish, a head-pat, and someone else’s problem. That’s what the companies want, and that’s what they built.
They say now that relying on AI makes one never really exercise critical thought and problem-solving, and I see it when I’m forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won’t learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember. But we’re seeing people do that with actual work; with Go and Rust code, and yeah, no concept of why we want to check for completeness, let alone a concept of how.
One day these new devs will proudly install a patch in the RTOS flashed into your heart monitor and that annoying beep will go away. Sleep tight.
I have seen this too much. My current gripe isn’t fresh devs, as long as they are teachable and care.
My main pain over the last several years has been the bulk of ‘give-no-shit’ perms/contractors who don’t want to think or try when they can avoid it.
They run a web of lies until it is no longer sustainable (or, for contractors, the project is done), and then again it’s someone else’s problem.
There are plenty of 10- or 20-year-plus devs who don’t know what they are doing and don’t care whose problem it will be, as long as it isn’t theirs.
I’m sick of writing coding 101 standards for 1k+ a day ‘experts’. More sick of PR feedback where it’s a battle to get things done in a maintainable manner from said ‘experts’.
and I see it when I’m forced to write fucking YAML for fucking Ansible. I let the GPTs do that for me, without worrying that I won’t learn to code YAML for Ansible. Coding YAML for Ansible is NEVER going to be on my list of things I want to remember.
Feels like this is the attitude towards programming in general nowadays.
To be fair, YAML sucks. It’s a config language that someone thought should cover everything, and it ends up excelling at nothing.
Just use TOML, JSON, or old-school INI. YAML will just give you an aneurysm. Use the best tool for the job, which is often not the prettiest one.
Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
Antoine de Saint-Exupéry
Kids these days with their fancy stuff, you don’t need all that to write good software. YAML is the quintessential “jack of all trades, master of none” nonsense. It’s a config file, just make it easy to parse and document how to edit it. That’s it.
No one wants mentors. The way to move up in IT is to switch jobs every 24 months. So when you’re paying mentors huge salaries to turn juniors from velocity drags into velocity boosters, you do it knowing those juniors are going to leave and take all that investment with them to a higher paycheck.
I’m not saying this is right, but that’s the reality from the paycheck side of things, and I think there needs to be radical change on both sides. Like a trade union or something. The union takes responsibility for certifying skills and suitability, companies can be more confident in their hires, juniors have mentors to learn from, mentors ensure juniors have the aptitude and intellectual curiosity necessary to do the job well, and I guess pay becomes more skill/experience based so developers don’t have to hop jobs to get paid what they’re worth.
Yeah, those job hoppers are the worst. You can always tell right away what kind of person they are. I’ve had to work with a “senior” dev who had 15 years of experience, and to be honest he sucked at his job. He couldn’t do simple tasks, didn’t think before he started writing code, and often got stuck asking other people for help. But he got paid big bucks, because all he did his entire career was work somewhere for 2-3 years and then job hop and trade up. By the time the company figured out the dude was useless, he went on to the next company.
Such a shitty attitude, which is a shame because he was a good dude otherwise. I got along with him on a personal level. And honestly good on him for making the most he can, fuck the company. But I personally couldn’t do that, I take pride in my work.
Honestly good on that dude. Yeah it sucks for the bottom line of the company but as you said fuck the company. They’re always exploitative and would drop you in a hot minute if they found someone cheaper even if you were good at your job.
Dude found a way to survive in this system and I don’t fault people like that. I do wish I could be one but the interview process stresses me out too much and I couldn’t do it every other year.
I let the GPTs do that for me, without worrying that I won’t learn to code YAML for Ansible.
And this is the perfect use case. There’s a good chance someone has done exactly what you want, and AI can regurgitate that for you.
That’s not true of any interesting software project though.
FAIL some code reviews on corner cases. Fail some reviews on ISO27002 and supply chain and role sep. Fail some deployments when they’re using dev tools in prod. And use them all as teachable moments.
Fortunately, I work at an org that does this. It turns out that if our product breaks in prod, our customers could lose millions, which means they could go to a competitor. We build software to satisfy regulators, regulators that have the power to shut down everything if the t’s aren’t crossed just so.
Maybe that’s the problem, maybe the stakes are low enough that quality isn’t important anymore. Idk, what I do know is that I go hard on reviews.
Recently my friend was trying to get me to apply for a junior dev position. “I don’t have the right skills,” I said. “The biggest project I ever coded was a calculator for my Java final, in college, a decade and a half ago.”
It did not occur to me that showing up without the skills and using an LLM to half-ass it was an option!
std::unordered_map. I tell them about cppreference. The little shit tells me “Sorry unc, ChatGPT is objectively more efficient”. I almost blew a fucking gasket, mainly cuz I’m not that god damn old. I don’t care how much you try to convince me that LLMs are efficient, there is no shot they are more efficient than opening a static page with all the info you would ever need. Not even considering energy efficiency. Utility aside, the damage we have dealt to developing minds is irreversible. We have convinced them that thought is optional. This is gonna bite us in the ass. Hard.
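For what it’s worth, a minimal sketch of the kind of thing that one static cppreference page already covers end to end (an illustrative example, not whatever the original question was): insertion, lookup with find(), and the footgun where operator[] silently inserts a default value.

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // All of this is documented on the single cppreference page for std::unordered_map.
    std::unordered_map<std::string, int> ages;
    ages["alice"] = 30;        // operator[] inserts if the key is missing
    ages.insert({"bob", 25});  // insert() does not overwrite an existing key

    // find() is the safe lookup: no accidental insertion of a default value.
    if (auto it = ages.find("alice"); it != ages.end()) {
        std::cout << it->first << " is " << it->second << '\n';
    }

    std::cout << ages.count("carol") << '\n'; // 0, and "carol" was not inserted
}
```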
Literacy rates are on a severe decline in the US, AI is only going to make that worse.
Over half of Americans between 16 and 74 read below a 6th grade level (that’s below the expected reading level of an 11 year old!)
This is only a guess, but it could be related to increased use of technology. Many things we interact with are simplified, and if you come across a word you don’t know your phone can give you simple synonyms or if you can’t spell autocorrect will catch it.
It’s the same problem people are talking about with LLMs, just through a different lens.
Of course, there are different opinions, but here’s my take (as a Swede, but not an expert in politics/history):
The issues didn’t start during the last decade. In the ’90s, it was politically decided that schools wouldn’t be nearly as centrally managed by the state as they had been; instead, municipalities would handle most school-related politics and administration locally. It was also decided that parents would be allowed to choose more freely where to send their kids. This weakened public schools. Moreover, legislation was introduced (in the ’00s I think, but I’m not sure) that allows for-profit private schools, which historically, AFAIK, had been prohibited.
Parents usually don’t have to pay anything extra to send their kids to private schools, and for each private-school pupil, more tax money flows into the private schools instead of the public ones. The private schools are of course incentivized to attract children from families that are well off, since they tend to perform better (boosting the school’s score and thus reputation), have parents that can e.g. drive them from a longer distance, and just generally have fewer issues and so cost and complain less. For instance, it’s been reported that some private schools refuse (openly or through loopholes) e.g. special needs pupils, since the tax money paid to the school for them isn’t worth the cost (and “bad PR”, no doubt) of actually giving them a proper education.
Sweden has also had a high rate of immigration over the last decades. Immigrant parents understandably tend not to be as savvy about the school system and have less time and fewer resources for getting their kids to “nicer” schools further away. Immigrant kids also tend to require more attention, both because they need to learn Swedish and because psychological problems, e.g. PTSD, are more common among many immigrant groups. Also, I haven’t seen any studies on this, but IMO the private schools’ advertisements (on billboards etc.) tend to be very geared towards “white” kids/parents with no immigrant background.
In 2007 a tax benefit for “homework help” among other things was introduced, halving the price parents have to pay for private tutors at home. This again benefits families that are well off and lets private companies in education siphon tax money.
All this means a cycle of segregation seen in so many countries. Public schools are burdened with students that require more resources, while private schools do everything they can to snatch up low-maintenance pupils. This makes private schools seem to perform better and gives public schools bad reputations. Racism and class discrimination also play into all this, of course.
It also doesn’t help that teachers’ salaries and social standing have decreased, partly due to the same general pattern.
Really? My kids are hitting the rules hard. In 1st grade, they’re learning pronunciation rules I never learned (that’s phonics, right?). My 2nd grader is reading the 4th Harry Potter book, and my 5th grader finished the whole series in 3rd grade and is reading at a 7th or 8th grade level.
I did teach them to read before kindergarten (just used a book for 2-3 months of 10 min lessons), but that’s it, everything else is school and personal interest. They can both type reasonably well because they use the Minecraft console and chat. They’re great at puzzles, and my 5th grader beat me at chess (I tried a wonky opening, and he punished me), which they learned at school (extracurricular, but run by a teacher).
We love our charter school, though I don’t think it’s that different from the public school.
I don’t think phonics are the most critical part of why the kids can’t read.
It’s been shown that people who read primarily books and documents read thoroughly, line by line and with understanding, while those who primarily read from screens (such as social media) skip and skim to find certain keywords. This makes reading books (such as documentation) hard for those used to screens from a young age, and some believe it may be one of the driving forces behind the collapse in reading amongst young people.
If you’re used to the skip & skim style of reading, you will often miss details, which makes finding a solution in a manual infinitely frustrating.
Look, ultimately the problem is the same as it has always been: juniors doing junior shit. There’s just more of it going on. If you’re hiring one, you put a senior on them ready to extinguish fires. A good review process is a must.
Now that I think about it, there was this one time the same young’un I was talking about tried to commit this insane subroutine that was basically resizing a vector in the most roundabout way imaginable. Probably would have worked, but you can also just use the resize method, y’know? In retrospect, that was probably some Copilot bullshit, but because we have a review process in place, it was never an issue.
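To illustrate (a reconstruction from memory, not the actual subroutine from that review), roughly what the roundabout version looked like next to the one-liner it could have been:

```cpp
#include <iostream>
#include <vector>

int main() {
    // Roughly the roundabout approach: allocate a new vector of the target
    // size, copy the old elements across, then swap it in.
    std::vector<int> a{1, 2, 3};
    std::vector<int> grown(10, 0);
    for (std::size_t i = 0; i < a.size(); ++i) {
        grown[i] = a[i];
    }
    a = std::move(grown);

    // The one-liner it could have been: resize() grows (value-initializing
    // the new elements) or shrinks in place.
    std::vector<int> b{1, 2, 3};
    b.resize(10);

    std::cout << a.size() << ' ' << b.size() << '\n'; // 10 10
}
```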
When I had to get up to speed on a new language, it was very helpful. It’s also great for writing low-to-medium-complexity scripts in Python, PowerShell, and Bash, and for making Ansible tasks. That said, I’ve been programming for ~30 years and could have done those things myself if I needed to, but it would have taken some time (a lot of it spent looking up documentation and writing boilerplate code). It’s also nice for writing C# unit tests.
However, the times I’ve been stuck on my main languages, it’s been utterly useless.
ChatGPT is extremely useful if you already know what you’re doing. It’s garbage if you’re relying on it to write code for you. There are nearly always bugs and edge cases and hallucinations and version mismatches.
It’s also probably useful for looking like you kinda know what you’re doing as a junior in a new project. I’ve seen some shit in code reviews that was clearly AI slop. Usually from exactly the developers you expect.
I used it a few days ago to translate a math formula into code.
Here is the formula: wikimedia.org/…/126b6117904ad47459ad0caa791f296e6…
It’s not the most complicated thing. I could have done it. But it would take me some time. I just input the formula directly, the desired language and the result was well done and worked flawlessly.
It saved me some time.
Agreed. I wanted to test a new config on my router yesterday, which is configured using scripts. So I thought it would be a good idea to have ChatGPT figure it out for me, instead of spending 3 hours reading documentation and trying tutorials. It was a test scenario, so I thought it might do well.
It did not do well at all. The scripts were mostly correct but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax would be totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I’m on 7). It also makes mistakes, and when I point one out it says “Oh, you are totally right, I made a mistake,” then goes on to explain what it did wrong and outputs new code. However, more often than not the new code contained the exact same mistake. This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.
In the end I gave up on ChatGPT, searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that and it almost worked right away. A couple of minutes of tweaking and testing and I got it working.
I’m afraid for a future where forums and such don’t exist and sources like Reddit get fucked and nuked. In an AI driven world the incentive for creating new original content is way lower. So when AI doesn’t know the answer, you are just hooped and have to re-invent the wheel yourself. In the long run this will destroy productivity and not give the gains people are hoping for at the moment.
It’s like useful information grows as fruit from trees in a digital forest we call the Internet. However, the fruit spoils over time (becomes less relevant) and requires fertile soil (educated people being online) that can be eroded away (not investing in education or infrastructure) or paved over (intellectual property law). LLMs are like processed food created in factories that lack key characteristics of more nutritious fresh ingredients you can find at a farmer’s market. Sure, you can feed more people (provide faster answers to questions) by growing a monocrop (training your LLM on a handful of generous people who publish under Creative Commons licenses like CC BY-SA on Stack Overflow), but you also risk a plague destroying your industry like how the Panama disease fungus destroyed nearly all Gros Michel banana farming (companies firing those generous software developers who “waste time” by volunteering to communities like Stack Overflow and replacing them with LLMs).
There’s some solar punk ethical fusion of LLMs and sustainable cultivation of high quality information, but we’re definitely not there yet.
This is probably because of a lack of training data, where it is referencing only one example and that example just had a mistake in it.
The one example could be flawless, but the output of an LLM is influenced by all of its input. 99.999% of that input is irrelevant to your situation, so of course it’s going to degrade the output.
What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge; you don’t need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.
Forced to use copilot? Wtf?
I would quit, immediately.
I would quit, immediately.
Pay my bills. Thanks.
I’ve been dusting off the CV, for multiple other reasons.
how surprising! /s
but seriously, it’s almost never one (1) thing that goes wrong when some idiotic mandate gets handed down from management.
a manager that mandates use of copilot (or any tool unfit for any given job), that’s a manager that’s going to mandate a bunch of other nonsensical shit that gets in the way of work. every time.