fedi there is so much full force sloperating happening in open source accounting software...

no, NO, look at me fedi. I know it's boring, but you need to look at the accounting software. It's bad, ok, it's really fucking bad out there, and I don't think your financial situation should depend on whatever logic an LLM spat into a bunch of accounting software without review

beancount developer fantasizing about running "5-8 agents" on the beancount codebase and babysitting them instead of writing code
https://groups.google.com/g/beancount/c/cz8Xwnb7BLE/m/LSA3rTfMAgAJ

ledger accepting LLM code and talking about vibe-coded ports to Rust (as an experiment, at least, but lol)
https://github.com/ledger/ledger/discussions/2474

rustledger (seems almost entirely vibe-coded)
https://github.com/rustledger/rustledger

paper by the American Accounting Association on "Applying Large Language Models in Accounting"
https://publications.aaahq.org/jeta/article-abstract/21/2/133/12800/Applying-Large-Language-Models-in-Accounting-A
Some words about LLMs and Agents

PS no denying there is (will always be) slop. But if you look again at rustledger, you might be impressed. Beyond the usual type checks and extensive test suites it also includes TLA+ formal verification of some functionality.
@simonmic I did see the verification; it's interesting, but my critique of AI is less about the actual code at any given snapshot in time and more about the way it's produced, the lack of oversight developers give it, and the fact that this leads to many bugs and errors accumulating over time.

A lot of the reason I'd probably avoid these projects is that I'm not sure they won't create regressions tomorrow, or next week, from some massive set of AI changes with poor review. Who's to say those changes won't just break the TLA+ tests? I have much less confidence in the developers' ability to push stable and reliable changes, and that's a pretty big red flag when their programs are supposed to handle my financial future...
@froge @simonmic Besides, who's to say the TLA+ tests do what the LLM "says" they do? Or that they test anything useful?