In the world of AI and LLMs, every mistake, error, or bug you run into is already a »classic«.

»You’re running into a classic ›columns depend on async data‹ vs ›columns must be stable for grid state‹ conflict.«

Okay, I feel less alone now, but this still doesn't help me solve the problem, haha.

#ai #llm #codingassistant #slop

Question for #LLM haters:

So I don’t really want to hate on LLMs or generative AI in this post, but I am curious about alternatives to LLMs for code generation. What I particularly have in mind is how LLMs are pretty good at calling up code examples and fitting them into your code base.

Has anyone tried just creating a very large database of millions of code examples or snippets, tagged with keywords describing what the code does, and letting people search by tag and automatically paste the result into their file? If StackOverflow has been mined for LLM training data, can't we just take that same StackOverflow data, parse out the code snippets, and generate such a database?
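For illustration, such a tag-indexed snippet store is almost trivial to sketch. This is a hypothetical schema and toy data, not a description of any existing tool:

```python
# Minimal sketch of a tag-indexed snippet database (hypothetical schema).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE snippets (id INTEGER PRIMARY KEY, code TEXT);
    CREATE TABLE tags (snippet_id INTEGER, tag TEXT);
    CREATE INDEX idx_tag ON tags(tag);
""")

def add_snippet(code, tags):
    cur = conn.execute("INSERT INTO snippets (code) VALUES (?)", (code,))
    sid = cur.lastrowid
    conn.executemany("INSERT INTO tags VALUES (?, ?)",
                     [(sid, t) for t in tags])

def search(*tags):
    # Return snippets matching ALL given tags.
    placeholders = ",".join("?" * len(tags))
    rows = conn.execute(f"""
        SELECT s.code FROM snippets s
        JOIN tags t ON t.snippet_id = s.id
        WHERE t.tag IN ({placeholders})
        GROUP BY s.id HAVING COUNT(DISTINCT t.tag) = ?
    """, (*tags, len(tags)))
    return [r[0] for r in rows]

add_snippet("with open(p) as f:\n    data = f.read()", ["python", "file-io"])
add_snippet("json.loads(s)", ["python", "json"])

print(search("python", "file-io"))  # only the file-reading snippet
```

The hard part isn't the database, of course; it's the "fitting into your code base" step (renaming variables, matching surrounding types), which is exactly where pure tag lookup stops and some smarter symbolic transformation in the editor plug-in would have to take over.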

If LLM coding tools like #Cursor, #Claude, #Copilot, Cody, etc. are basically doing that with statistical algorithms, couldn't we do the same thing with ordinary symbolic computation and a clever little editor plug-in with good UI/UX design for the copy-pasting?

Does anyone know if such tools/databases already exist?

I’ll tag @screwlisp and @kentpitman on this one because I am especially curious about what you think about it.

#tech #software #AskFedi #CodingAssistant #LLM #AI #GenerativeAI

@[email protected]

Once you realize that a #LLM is just a lossy compression of its textual sources (improperly named "training data"), you see how using a #CodingAssistant provided as a service is just a way to share your own source code with your competitors.

Just like your competitors, you will see its contextualized output and say "how smart!", while paying to work as a data annotator for #Anthropic, #OpenAI, #Google and so on, providing new code and feedback they will compress into their matrices.

At the end of the day, companies using these #AI tools totally deserve the outcome, as they will lose any edge in the market, with newcomers producing cheap clones of their products.

On the other hand, the #BigTech companies stealing their code don't deserve it, and will leverage their position to squeeze the market.

Sometimes I wonder: maybe we should find ways to decompress (and thus exfiltrate) proprietary software into the commons, as long as greedy managers keep pushing these tools on programmers. After all, they are stealing from the #FreeSoftware commons, violating #copyleft and misattributing permissively licensed works alike.
However, it would legitimize such companies, and this would harm the #commons and society much more than a few niche software clones could ever benefit them.

@[email protected]
@[email protected]

It's a little more subtle than this.

#CodingAssistant tools like #Claude, #Copilot and so forth are not really becoming "smarter" version after version.

The trick is that these tools steal the code you send them, compress it into the next version of the LLM, and give it back to your competitors.

They are basically broadcasters of your corporate value.

Now, to be fair, this isn't much of an issue for #Google, #Microsoft or #Meta, because they don't really have competitors on actual software. They compete on marketing.

But if you are anybody else, by using these services you are helping your competitors, who will replicate your innovations in their products by leveraging your code.

I just realized that #vibecoding is just the software-development equivalent of cooking at #McDonalds.

The output of #LLM is #junkcode.

While #programming, #hackers retain full control of the means of production, firmly anchored on their necks.
Over the last forty years, this turned programming into a wealthy career that attracted greedy people, because the only way corporations had to obliterate awareness of such unprecedented political leverage was to pay developers relatively high salaries while they were building the infrastructure of their own oppression.

With #vibecoding, the means of production go back into capital's hands: novices produce nice-looking software without acquiring any valuable skill, and senior developers leverage their (hard-won) experience to "drive the tool like a younger intern", alienating¹ themselves and losing their skills while providing further "training data"² to the capital owners.

Note how I'm not strictly talking about employers: if you work for a company that pushes a #CodingAssistant from a third party (usually a #BigTech from the #USA), your company is doomed too, as it is giving away its most valuable assets (your skills and the business experience encoded in its source code).

Yet the point is that junk code is to society what junk food is to public health: a burden that mostly affects the poor, not the rich.

Indeed, the rich can pay for fine restaurants and healthy food, while the poorest are forced to eat the cheapest slop they can afford, further enriching the companies that sell it and pay low wages to their employees.

In the same way, the users of vibe-coded software will be those who can't afford high-quality software. And vibecoders will be those who can't afford to learn how to code (which requires time and energy, and thus money).

So while #vibecoding is marketed as "the democratization of programming", such #propaganda hides the opposite process: if vibecoding keeps spreading, programming will become a service for rent, under the full control of a handful of companies able to inject any vulnerability or backdoor into junk code that nobody could actually read.

Paradoxically, those who now resist the fear of missing out and preserve their skills might command even higher wages in the future, while those who follow the mob will find themselves among the replaceable members of the reserve army of labour, together with McDonald's chefs, forced to ~eat~ depend on junk code.

What about hackers?

Hackers will keep programming on their own. Not so much for the fun of encoding an insight into a sequence of symbols a compiler can crunch (a process that can be just as frustrating as it is rewarding in terms of knowledge), but because they understand both the technicalities of these tools (no #AI really understands its own output, so it can only work insofar as the requirements replicate many existing programs, without any quality or security assurance) and the politics of the corporations that build and operate them.

In the long run, the social contract behind #FreeSoftware will evolve to avoid both contamination from junk code and contribution to the training dataset.

All else being equal, we could have new #junkfree stacks, designed to be both human-friendly and hostile to corporations: simpler operating systems, programming languages and protocols.

Unless, obviously, #BigTech somehow manages to outlaw programmable computing devices it cannot control, probably in the name of users' security or child protection.

_____
¹ #Cybernetic alienation is the process of reducing human (awareness of) #autonomy. #Anthropic gaslights the issue by framing it as a matter of personal empowerment that can be addressed at the design level, but given how people are trained to treat other people as tools and to treat interactive software as people, you can see the issue is systemic to #LLM usage (at least as long as these tools are programmed to pass the #Turing test and fool humans about their nature).

² Talking about "training data" is alienating by itself, as it projects a human experience onto an unrelated mechanical process. Instead of "training data" we should talk about "source data", as the models are nothing more than executables expressed as numeric matrices, designed to be run by specific custom-built architectures that are improperly called "inference engines" (or even worse, #NeuralNetworks), while they are just statistically programmable vector-mapping machines.
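To make the footnote concrete, here is the arithmetic core such an "inference engine" actually executes, stripped down to a toy scale. The weights below are arbitrary numbers chosen purely for illustration:

```python
# Illustration only: a "model" reduced to its arithmetic core: fixed numeric
# matrices applied to an input vector. No learning happens here and nothing is
# "understood"; it is just a statistically programmed vector mapping.
import numpy as np

W1 = np.array([[0.5, -0.2], [0.1, 0.8]])  # "compressed" parameters
W2 = np.array([[1.0, 0.3]])

def forward(x):
    h = np.maximum(0, W1 @ x)  # one nonlinearity between two matrix maps
    return W2 @ h              # the whole "inference" is this pipeline

print(forward(np.array([1.0, 2.0])))
```

A real LLM differs from this only in scale (billions of parameters, many stacked layers), not in kind: it remains a fixed numeric mapping from input vectors to output vectors.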

@regehr
🙏🏻 So true, so well said, so best practice: helpers should “never, ever touch the keyboard when they're helping a student with an assignment. not even once! because as soon as someone else is driving, it becomes real easy for the student to stop thinking and just let things happen.”

AND, as a corollary, or by extension:

“kind of like what happens when we use a coding assistant.”

#CodingAssistant #AI #teaching #students

For much of the software engineering workforce (the junior and mid-level engineers at banks, healthcare companies, and government agencies), there's much less wiggle room. They are sandwiched between the unreliability of AI output and management's increased expectation to ship faster, resulting in a rapidly widening empathy gap between developers and product owners.

Only 16% of a developer’s time goes to writing code. The rest? Security and code reviews, monitoring, deployments, requirements clarification—operational work that keeps the lights on but doesn’t ship features.

Coding assistants are solving the wrong problem

#dev #ai #codingassistant

A user shares how to set up local AI coding, similar to Claude Code, using GLM-4.7 Flash & llama.cpp. Surprisingly effective: the workflow feels very much like Claude even though it runs locally.

#AI #LocalLLM #CodingAssistant #OpenCode
#TríTuệNhânTạo #LậpTrình #MôHìnhNgônNgữ

https://www.reddit.com/r/LocalLLaMA/comments/1qqpon2/opencode_llamacpp_glm47_flash_claude_code_at_home/

🛠️ ClaudeDesk - a session manager for the Claude Code CLI, just open-sourced!

✨ Highlights:
- Real-time tool activity timeline (file edits, command runs...)
- Full conversation history
- Git worktree isolation for safe testing
- Guided commit/push/PR workflow

🔧 Stack: Express + TypeScript + React
📥 Install: `npx claudedesk`
📜 MIT license - contributions and PRs welcome!

#AI #CodingAssistant #OpenSource #DeveloperTools #LapTrinh #TroLyAI #NguonMo

https://github

A company wants to deploy an internal AI coding assistant for ~300 developers, with the LLM required to run on-premise for security reasons. They are considering Qwen-3-Coder-32B or a larger quantized model. This calls for serious GPU infrastructure (e.g. 8×A100/RTX-4090), sufficient VRAM, a high-speed interconnect, fast SSDs, a stable internal network, plus backups and data security. #AI #CodingAssistant #OnPrem #LLM #MachineLearning #CôngNghệ #LậpTrình #AIinVietnam #DevOps #KỹThuật
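For what it's worth, a back-of-the-envelope VRAM estimate for serving a model of that size might look like this. The quantization width and overhead multiplier are rough assumptions, not vendor figures:

```python
# Rough VRAM estimate for serving a quantized model (illustrative only).
params_billion = 32    # e.g. a 32B-parameter coder model
bytes_per_param = 0.5  # assumption: ~4-bit quantization
overhead = 1.3         # assumption: KV cache + activations, rough multiplier

vram_gb = params_billion * bytes_per_param * overhead
print(f"~{vram_gb:.1f} GB of VRAM")  # ballpark for model weights + serving overhead
```

Under these assumptions the weights alone fit on a single 24 GB card, so the 8-GPU figure in the post is really about serving concurrency for ~300 developers (batching, KV cache per session), not about fitting the model at all.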

https://www.reddit.com/r/LocalLLaMA/comments/1qkl400/ai_coding_assistant_infras