I guess the simple comparison of an LLM to a junior or senior developer isn't a good one. On small projects (up to ~10k lines of code) it performs like a superhuman pro, at least if you treat it fairly.
My latest experience:
I'm building this fun little Tetris-style puzzle game (see screenshot). The shapes you need to place are randomly generated and can be arbitrarily complex. To make it more strategic, I wanted to add a small scaled preview of the next shape. The game has 11k lines of code, JavaScript + Phaser 3.
So I gave GPT-5 the task of implementing it.
And it did it. Flawlessly. On the first attempt. On a codebase it saw for the first time (that's the bit we tend to ignore - we carry the full history of the codebase we built; it only has the prompt and the code).
It was cool to watch it first grep around to figure out where the HUD area is, where the app creates and renders shapes, and so on.
Here's the prompt if you are interested:
"I need a more complicated feature. I want to show a tiny scaled preview of the next shape. The best place is in the middle of the white area on top of the screen - the area where text like "Coins: xxx" on the left side and "Shapes: yyy" on the right side are printed. The shape must be scaled so that it fits into that region. Please implement this feature."
#gpt5 #coding #llm #vibe