Today's threads (a thread)
Inside: The web is bearable with RSS; and more!
Archived at: https://pluralistic.net/2026/03/07/reader-mode/
1/
@pluralistic I wrote and have been using my own feed reader, Temboz, for over 24 years now. The two essential features of any good reader in my book are:
1) filtering. Don't want to hear about the inane Kardashians or the insane Trump any more? Double-click on their occurrence, click "thumbs-down", adjust the filtering setting and boom, mental health restored.
2) giving you control over the order articles are shown, as opposed to the Meta or Google algorithm that does not have your best interests at heart.
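The two features above can be sketched in a few lines. This is purely illustrative (the names `Article`, `blocked_terms` and the whole shape of the API are mine, not Temboz's): drop items matching the user's "thumbs-down" terms, then order by publication date instead of an engagement score.

```python
# Hypothetical sketch of reader-side filtering and ordering:
# hide articles mentioning blocked terms, newest first.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Article:
    title: str
    body: str
    published: datetime

def visible(articles, blocked_terms):
    """Return articles mentioning none of the blocked terms, newest first."""
    def blocked(a):
        text = (a.title + " " + a.body).lower()
        return any(term.lower() in text for term in blocked_terms)
    kept = [a for a in articles if not blocked(a)]
    # The user, not a platform algorithm, decides the sort key.
    return sorted(kept, key=lambda a: a.published, reverse=True)
```

The point is that both the predicate and the sort key are under the reader's control, which is exactly what platform feeds take away.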
@viq I am actually in the middle of rewriting it in Rust, and switched my own usage yesterday, so if I were you I'd hold off for a couple of weeks until it stabilizes:
@viq OK, use rTemboz, follow the Docker instructions at https://github.com/fazalmajid/rTemboz?tab=readme-ov-file#running
(you don't need to build from source if you are OK using my docker images on Docker Hub).
I am dogfooding rTemboz at the moment, so I am intensely aware of the regressions and highly motivated to fix them, as this is the single app I spend the most time in...
@viq Either you build your tag hierarchies by hand, which is a drag, or you have an LLM classifier do this according to a taxonomy you choose, or you implement some form of vector search with embeddings. I don't want rTemboz to require a GPU to run. The data model already distinguishes human-entered tags from AI ones, but the infrastructure to actually do it isn't in place yet. And some people have deep objections to LLMs.
Assuming you don't, what would be your preference for integrating AI assistance: self-hosted, API, CPU only, etc?
@fazalmajid
Personally I'd rather not get LLMs involved in this. Thus my idea of "saved search" (if searching for articles is possible), and/or rules for applying tags to articles or otherwise support for "virtual folders" with "some" mechanism for making articles appear in them.
I wonder how well Bayesian analysis would work for such?
Oh, also, does (r)Temboz support fetching full articles if feed has only snippets?
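The Bayesian idea floated above is quite workable for tagging: a tiny multinomial naive-Bayes classifier trained on articles the user has already tagged, then used to suggest tags for new ones. A minimal sketch, not part of (r)Temboz:

```python
# Toy multinomial naive Bayes for suggesting a tag, with add-one smoothing.
import math
from collections import Counter, defaultdict

class NaiveBayesTagger:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # tag -> word -> count
        self.tag_counts = Counter()              # tag -> number of articles
        self.vocab = set()

    def train(self, text, tag):
        words = text.lower().split()
        self.word_counts[tag].update(words)
        self.tag_counts[tag] += 1
        self.vocab.update(words)

    def suggest(self, text):
        words = text.lower().split()
        total = sum(self.tag_counts.values())
        def score(tag):
            # log P(tag) + sum of log P(word | tag)
            s = math.log(self.tag_counts[tag] / total)
            denom = sum(self.word_counts[tag].values()) + len(self.vocab)
            for w in words:
                s += math.log((self.word_counts[tag][w] + 1) / denom)
            return s
        return max(self.tag_counts, key=score)
```

In practice this works surprisingly well for the spam-filter-style "more like this / less like this" use case, though it needs a real tokenizer rather than `split()`.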
@viq yes, it has full-text search using SQLite's fts5, but not semantic search. It does not fetch full article text; I've been on the receiving end of a DMCA take-down just for publishing a feed of the articles I personally find interesting.
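For readers unfamiliar with fts5, here is roughly what that full-text search looks like. The schema and query below are mine for illustration, not Temboz's actual schema:

```python
# Minimal SQLite FTS5 demo: a virtual table indexes title and body,
# and MATCH runs a full-text query over all indexed columns.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE articles USING fts5(title, body)")
con.executemany("INSERT INTO articles VALUES (?, ?)", [
    ("RSS survival guide", "how to read the web with a feed reader"),
    ("Cooking notes", "sourdough starter maintenance"),
])
# Space-separated terms are ANDed together by default in FTS5.
rows = con.execute(
    "SELECT title FROM articles WHERE articles MATCH ?", ("feed reader",)
).fetchall()
```

FTS5 also supports phrase queries, prefix matching, and BM25 ranking via `ORDER BY rank`, which is plenty for a feed reader without going anywhere near embeddings.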
You don't need an LLM to classify; an embedding model like SBERT, GTE or Microsoft E5 combined with vector search would suffice, unless you are philosophically opposed to any kind of neural net tech.
Have you tried it yet? Any first impressions you'd like to share?
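The embedding-plus-vector-search approach mentioned above amounts to: give each tag a centroid vector, embed the new article, and assign the tag whose centroid is nearest by cosine similarity. In real use the vectors would come from a model like SBERT/GTE/E5; the toy vectors below are stand-ins so the sketch is self-contained:

```python
# Nearest-centroid tag assignment by cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def nearest_tag(article_vec, tag_centroids):
    """tag_centroids: dict mapping tag name -> embedding vector."""
    return max(tag_centroids,
               key=lambda t: cosine(article_vec, tag_centroids[t]))
```

Small embedding models of this kind run acceptably on CPU, which matters given the stated goal of not requiring a GPU.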
@fazalmajid
Huh. Damn, that's trigger-happy :(
LLMs to my knowledge require models to be of any use, and currently AFAIK all of those are of dubious origin. The current situation, with LLMs being shoved into everything under a marketing term, and the wider-lens view of why, is what soured that particular technology for me. AFAIK neural nets are just the underlying tech, shared with other uses, and do not require ingesting all of everything without compensation to do useful things.
@viq That's right, the approach is inspired by Google News and its "river of news" rather than like an email reader. There was an article recently about how that design creates "phantom obligation" and a form of FOMO:
https://www.terrygodier.com/phantom-obligation
Articles marked "Up" are kept indefinitely; filtered or "Down" articles have their contents purged after 2 weeks to free up disk space. But otherwise, yes, the point of flagging an article is to bump up the feed SNR as a measure of quality, and to keep flagged articles for future reference in lieu of a bookmarking service like the late del.icio.us.
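That retention policy boils down to a periodic purge query. A sketch with a hypothetical schema (not Temboz's actual one): keep "Up" articles forever, blank the stored content of "Down" articles once they are older than two weeks.

```python
# Purge content of down-rated articles older than 14 days; "up" survives.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE articles (id INTEGER PRIMARY KEY, rating TEXT, "
    "content TEXT, fetched TEXT)"
)
con.executemany("INSERT INTO articles VALUES (?, ?, ?, ?)", [
    (1, "up",   "keep me",  "2020-01-01"),
    (2, "down", "old junk", "2020-01-01"),
    (3, "down", "fresh",    "2999-01-01"),  # still inside the window
])
con.execute(
    "UPDATE articles SET content = NULL "
    "WHERE rating = 'down' AND fetched < datetime('now', '-14 days')"
)
purged = con.execute(
    "SELECT id FROM articles WHERE content IS NULL"
).fetchall()
```

Only the row metadata is kept for purged articles, so the "Up"/"Down" signal survives even after the body is reclaimed.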