Programmers are no longer needed!
LLMs often fail at the simplest tasks. Just this week I had one fail multiple times on a problem whose solution turned out to be incredibly simple, and yet it couldn't figure it out. LLMs also seem to "think" any problem can be solved with more code, which makes the project much harder to maintain.
LLMs won't replace programmers anytime soon, but I can see sketchy companies taking on programming projects and scamming their customers by selling them work generated by LLMs. I've heard multiple accounts of this already happening, and similar things happened with no-code solutions before.
Your anecdote isn't helpful without seeing the inputs, prompts, and outputs. What you're describing sounds like not using the right model, or not providing good context or tools to a reasoning model that can intelligently populate context for you.
My own anecdotes:
In two years we have gone from copy/pasting 50-100 line patches out of ChatGPT to having agent-enabled IDEs help me greenfield full-stack projects.
Our product delivery has accelerated while meeting the same quality standards, verified against the internal best practices we've codified as deterministic checks in CI pipelines.
The power comes from planning correctly. We're in the realm of context engineering now: learning to leverage the right models with the right tools in the right workflow.
Most novice users have the misconception that you can tell it to "bake a cake" and get the cake you had in your mind. The reality is that baking a cake can be broken down into a recipe with steps that can be validated. You, as the human-in-the-loop, can guide it to bake your vision, or design your agent in such a way that it can infer more information about the cake you desire.
If you're already good at the SDLC, you are rewarded. Some programmers aren't good at project management and will find this transition difficult.
You won't lose your job to AI, but you will lose your job to the human using AI correctly. This isn't speculation either; we're already seeing workforce reductions offset by senior developers leveraging AI.
Cursor and Claude Code are currently top tier.
GitHub Copilot is catching up, and at a $20/mo price point it is one of the best ways to get started. Microsoft is slow-rolling some feature delivery, because they can just steal the ideas from other projects that do it first.
Claude Code is better than just using Claude in Cursor or Copilot. Claude Code has next-level magic that dispels some of the "AI bad" myths being propagated here.
Cursor hosts models with their own secret sauce that improves their behavior. They hard-forked VSCode to build a more deeply integrated experience.
Avoid Antigravity (Google) and Kiro (Amazon). They don't offer enough value over the others right now.
If you already have an OpenAI account, Codex is worth trying; it's like Claude Code, but not as good.
JetBrains… not worth it for me.
I get it. I was a huge skeptic 2 years ago, and I think that’s part of the reason my company asked me to join our emerging AI team as an Individual Contributor. I didn’t understand why I’d want a shitty junior dev doing a bad job… but the tools, the methodology, the gains… they all started to get better.
I’m now leading that team, and we’re not only doing accelerated development, we’re building products with AI that have received positive feedback from our internal customers, with a launch of our first external AI product going live in Q1.
What are your plans when these AI companies collapse, or start charging the actual costs of these services?
Because right now, you’re paying just a tiny fraction of what it costs to run these services. And these AI companies are burning billions to try to find a way to make this all profitable.
What are your plans when the Internet stops existing or is made illegal (same result)? Or when…
They are not going away. LLMs are already ubiquitous, there is not only one company.
Ok, so you’re completely delusional.
The current business model is unsustainable. For LLMs to be profitable, they will have to become many times more expensive.
What are you even trying to say? You have no idea what these products are, but you think they are going to fail?
Our company does market research and test pilots with customers; we aren't just devs operating in a bubble pushing AI. We are listening and responding to customer needs.
I don’t know what your products are. I’m speaking specifically about LLMs and LLMs only.
Seriously research the cost of LLM services and how companies like Anthropic and OpenAI are burning VC cash at an insane clip.
That’s a straw man.
You don't know how often we use LLM calls in our workflow automation, what models we're using, what our margins are, or what counts as a high cost for my organization.
That aside, business processes solve for problems like this, and the business does a cost-benefit analysis.
We monitor costs via LiteLLM, Langfuse and have budgets on our providers.
Our architecture is similar to the Open Source LLMOps Stack (oss-llmops-stack.com).
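For the curious, a cost-monitoring setup along those lines might look roughly like the LiteLLM proxy config below. This is a hedged sketch: `success_callback: ["langfuse"]` is LiteLLM's mechanism for shipping traces and costs to Langfuse, but the budget key names and placement vary by LiteLLM version, and the alias and cap values are made up.

```yaml
model_list:
  - model_name: prod-default          # stable alias our apps would call (hypothetical name)
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

litellm_settings:
  success_callback: ["langfuse"]      # send usage/cost traces to Langfuse

general_settings:
  max_budget: 500                     # illustrative spend cap in USD
  budget_duration: 30d                # reset window; check your LiteLLM version's docs
```

The point of the pattern is that budgets and observability live in the proxy layer, so no application code has to change to enforce them.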
Also, your last note is hilarious to me. “I don’t want all the free stuff because the company might charge me more for it in the future.”
Our design is decoupled, we do comparisons across models, and the costs are currently laughable anyway. The most expensive process is data loading, but good data lifecycles help with containing costs.
Inference is cheap and LiteLLM supports caching.
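Setting LiteLLM's built-in cache aside, the core idea of response caching is simple enough to sketch in plain Python. Everything here is illustrative: `cache_responses` and `fake_llm_call` are made-up names, and the stub stands in for a paid API request.

```python
import functools
import hashlib

def cache_responses(llm_call):
    """Memoize an LLM call by a hash of its prompt, so repeated
    identical requests cost nothing after the first one."""
    store = {}

    @functools.wraps(llm_call)
    def wrapper(prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in store:
            store[key] = llm_call(prompt)  # only pay on a cache miss
        return store[key]
    return wrapper

calls = 0

@cache_responses
def fake_llm_call(prompt: str) -> str:
    """Stub standing in for a billed inference request."""
    global calls
    calls += 1
    return f"response to: {prompt}"

fake_llm_call("summarize this ticket")
fake_llm_call("summarize this ticket")  # served from cache
print(calls)  # 1
```

Real caches add TTLs and shared backends (e.g. Redis), but the cost argument is the same: identical prompts are only billed once.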
These tools are mostly deterministic applications following the same methodology we've used for years in the industry. The development cycle has been accelerated. We are decoupled from specific LLM providers by using LiteLLM, prompt management, and abstractions in our application.
Losing a hosted LLM provider means we point LiteLLM at another provider without changing the contracts with our applications.
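That decoupling is essentially model aliasing: the applications always request a stable alias, and the proxy config decides what actually backs it. A hedged sketch in LiteLLM's proxy-config style (the alias `prod-default` is invented, and the model identifiers are placeholders; check current model names before using):

```yaml
model_list:
  # Apps always request "prod-default"; swapping the backing provider
  # is a config change here, not an application change.
  - model_name: prod-default
    litellm_params:
      model: anthropic/claude-3-5-sonnet-latest
      api_key: os.environ/ANTHROPIC_API_KEY
  # Alternative backing for the same alias, e.g. as a fallback:
  # - model_name: prod-default
  #   litellm_params:
  #     model: openai/gpt-4o
```

Because the proxy speaks an OpenAI-compatible API, the application's client code and request shape stay identical regardless of which provider serves the alias.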
We use a layered architecture following best practices, and we have guardrails, observability, and evaluations of the AI processes. We have pilot programs and internal SMEs doing thorough testing before launch. It's modeled after the internal programs we've had success with.
We are doing this very responsibly, in a way our customers are asking for. We are not some junior devs vibe coding garbage.
Accelerated delivery. We use it for intelligent, verifiable code generation. It's the same work the senior dev was going to complete anyway, but now they cut out a lot of the mundane, time-intensive parts.
We still have design discussions that drive the backlog the developer works from with their AI.
I seriously doubt your quality is maintained when an LLM writes most of your code, unless a human audits every line and understands what and why it is doing it.
If you break the tasks small enough that you can do this each step, it is no longer writing a full application, it’s writing small snippets, and you’re code-pairing with it.
We have human code review, and our backlog was well curated before AI. Strongly defined acceptance criteria, good application architecture, and unit tests with 100% coverage are just a few ways we keep things on the rails.
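As a toy illustration of how acceptance criteria can rail-guard generated code: the criteria are codified as deterministic checks first, and any implementation, human- or AI-authored, has to pass them in CI. Everything below, including `normalize_email`, is hypothetical.

```python
def normalize_email(raw: str) -> str:
    """Hypothetical AI-generated implementation under review."""
    return raw.strip().lower()

# Acceptance criteria written as deterministic assertions the CI
# pipeline runs on every change, regardless of who (or what) wrote it.
assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
assert normalize_email("bob@test.dev") == "bob@test.dev"
print("acceptance criteria pass")
```

The generated code is only as trustworthy as the checks around it, which is why the criteria are authored and reviewed by humans before any generation happens.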
I don't see what the idea of pair coding has to do with this. I never claimed I'm one-shotting agents.
Great? Business is making money. We’re compliant on security, and we have no trouble maintaining what we’ll be maintaining less of in the future.
Some more real-world examples:
Aider has written 7% of its own code (outdated, now 70%) | aider aider.chat/2024/05/24/self-assembly.html
LibreChat is largely contributed to by Claude Code, it’s the current best open source ChatGPT client, and they’ve just been acquired by ClickHouse.
Such suffering from the quality!
Your product is an LLM tool written with LLM tools. That is hilarious.
If the goal is to see how much middleware you can sell idiots, you’re doing great!