I mean, the only other option was bcachefs, which might have been funny if this LLM-generated blogpost were written by the OpenClaw instance the developer has decided is sentient:
https://www.reddit.com/r/bcachefs/comments/1rblll1/the_blog_...
But no. It was btrfs.
As a side note, it's somewhat impressive that an LLM agent was able to produce, ad hoc, a suite of custom tools that were apparently used successfully to recover some data from a corrupted btrfs array.
>Research and fact-checking assistance from Claude (Anthropic).
What paper? This is slop.
No, BitNet not requiring multiplication will not put a foundation model in your pocket. Ternary models would have been nice for power had they scaled, but since a ternary model requires roughly 3x the parameters of a similarly capable full-precision model, the memory bandwidth does not scale down nearly as well as the per-weight bit width suggests.
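For intuition, a minimal sketch (toy shapes, NumPy for clarity, not anyone's actual kernel) of why ternary weights eliminate multiplies without eliminating memory traffic: every weight still has to be fetched, and at ~1.6 bits per weight with ~3x the weight count you cut bytes moved by only roughly 3x versus fp16, not the ~10x the per-weight compression alone would imply.

    import numpy as np

    def ternary_matvec(W, x):
        # Matrix-vector product with weights in {-1, 0, +1}: every
        # "multiplication" degenerates into an add, a subtract, or a
        # skip, so no multiplier hardware is needed.
        y = np.zeros(W.shape[0], dtype=x.dtype)
        for i in range(W.shape[0]):
            y[i] = x[W[i] == 1].sum() - x[W[i] == -1].sum()
        return y

    rng = np.random.default_rng(0)
    W = rng.integers(-1, 2, size=(4, 8)).astype(np.int8)  # ternary weights
    x = rng.standard_normal(8).astype(np.float32)
    assert np.allclose(ternary_matvec(W, x), W.astype(np.float32) @ x)

The loop body is all selects and adds, which is the whole BitNet pitch; the fetch of W on every pass is the part that does not go away.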
The real catch is that a classic LLM is not useful in the scenarios the author proposes. The hypothetical livestock vet is far better served by her books and a phone call to a university ag extension to confer with colleagues than by an LLM that is disconnected from the internet and will hallucinate nonsense.