I'm looking at a demo of this paper right now, which is kind of interesting - https://arxiv.org/pdf/2005.11401.pdf - but... it relies, the same way most AI models do, on a tectonic amount of human curation effort that's gone on behind the scenes to make it work.
I mean, it's nice I guess, and there are some nice features in a low-K-threshold, high-quality-training-data situation, but it sure looks like this will all fall apart if you point it at large, unvetted, or adversarial data sets.
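For context, the paper's core pattern is retrieve-then-generate: score your document set against the query, keep the top K, and hand those to the generator as grounding. Here is a deliberately toy sketch of that retrieval step; the real paper uses a dense-vector retriever (DPR), not word overlap, and the `retrieve_top_k` helper and the example documents are my own illustration, not anything from the paper.

```python
import re

def retrieve_top_k(query, documents, k=2):
    """Toy retriever: score each document by word overlap with the query
    and return the k best matches. A stand-in for the dense retriever
    the RAG paper actually uses."""
    tokenize = lambda text: set(re.findall(r"\w+", text.lower()))
    q_words = tokenize(query)
    scored = sorted(
        ((len(q_words & tokenize(doc)), doc) for doc in documents),
        key=lambda t: t[0],
        reverse=True,
    )
    # Drop documents with zero overlap; they would only add noise.
    return [doc for score, doc in scored[:k] if score > 0]

docs = [
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
    "Canberra is the capital of Australia.",
]
context = retrieve_top_k("What is the capital of France?", docs, k=2)
```

The curation problem is visible even here: the quality of `context` is entirely a function of what's in `docs`, which is exactly the "falls apart on unvetted or adversarial data" worry.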
@mhoye
Confirmed in Spanish as well: "no existe" at least is a complete answer. I would be interested in knowing whether it sources info from the language asked first and then translates. Or does it translate the question, look for the answer in English, then translate the response back?
@mhoye also... it seems like most AI people have given up on...
1. Letting the AI ask questions to test its understanding (toddler)
2. Accepting corrections as input (elementary school)
3. Being able to research & cite sources (high school)
4. Being able to say "here's what I don't know" (college)
@dalias @bsmedberg @mhoye The dreamers can rarely get the budget, and the implementors are rarely interested in working for free.
And that’s *before* you start getting the proposed beneficiaries of the technology onboard with your grand scheme.
(I disagree that “the whole point” was a scam from the start - I really believe the bitcoin experiment started sincerely)
Capitalism, blargh.
@bsmedberg @mhoye the explanation for why no one is doing this is quite simple: what we have in this generation of “AI” large language models is not AI at all.
It cannot learn. It cannot know. It cannot understand. It cannot cite sources because it does not know what a source is. It would not gain value from those kinds of questions.
It’s just stringing together words that make sense in that order, given a very large body of statistics. That’s it. It is not anything resembling intelligence.
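A toy bigram model makes the "word order from statistics" point concrete. This is emphatically not how transformer LLMs work internally (they are vastly more sophisticated), but it is the simplest instance of the same idea: the "model" is nothing but counts of which word tends to follow which, and generation is just sampling from those counts. The corpus and `generate` helper here are purely illustrative.

```python
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# The entire "model": for each word, counts of which words followed it.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(start, length, rng):
    """Emit up to `length` more words, each sampled in proportion to how
    often it followed the previous word in the corpus."""
    word = start
    out = [word]
    for _ in range(length):
        nxt = follows[word]
        if not nxt:
            break
        words, counts = zip(*nxt.items())
        word = rng.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

rng = random.Random(0)
sentence = generate("the", 5, rng)
```

The output is locally fluent and globally meaningless, which is the rhetorical point being made upthread: no knowing, no understanding, just conditional frequencies.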
@mhoye @bsmedberg right, yeah. What we don’t really know yet, and will be interesting to find out, is whether the very premise of the current round of “AI” LLMs is fundamentally incompatible with that kind of development, or whether they could actually be a path to more generalized intelligence and human-like characteristics.
It’ll still be more and more useful the more “extensions” we can add to the language, and maybe we’ll get close. Just hard to say right now.

Vulgarity and Cloud Orthodoxy in Linked Data Infrastructures - A critical history of the semantic web and linked data, grappling with the next generation of surveillance capitalism. As grand corporate knowledge graphs devour the planet and sell it back to us as glassy-eyed LLM personal assistants, will we remain stuck in the ideology of the cloud, or can we have better dreams?
@jonny thank you for writing and sharing that article. That was genuinely refreshing and disturbing at the same time.
It reads like a manifesto for concerted action. 💪
@mhoye
Amazing.
And I bet this works for other basic-ass stuff too. Anything where the conspiracy content outweighs the debunking content can make the conspiracy stuff seem more plausible, simply because that's the only content written in response to a dumb-ass question very few people ask.
@Eggfreckles @mhoye All Australians are paid actors based at a film set in London. We knock up the CGI for Uluru and the harbour bridge in MS Paint.
They are. But don't let that fool you into thinking drop bears aren't real.
@mhoye priceless.
Literally. Any price is too high.