Let’s Buy California from Trump – Denmark’s Next Big Adventure
https://denmarkification.com
https://infosec.exchange/@bontchev/113984675112429256
With everything he has done to sabotage high-speed rail in California, I never would have thought Elon Musk would become the cause of a rise in train and bicycle use
> what Elon Musk is doing is too much, it has killed my pleasure in driving. The group needs a new boss! In the meantime, I'm taking the bike and the train more often.
https://www.24heures.ch/tesla-reactions-en-suisse-face-aux-exces-delon-musk-861834847174
At work we’re not allowed to use suppliers that tolerate bribery. So does that now include Microsoft? And Google? Any company from the US?
Good news: GIMP 3.0 RC3 Released
Seems to be another #LLM #TLDR day :) I like this essay "The Bullshit Machines", here are some nice quotes:
---
BULLSHIT involves language or other forms of communication intended to appear authoritative or persuasive without regard to its actual truth or logical consistency.
ANTHROPOGLOSSIC systems are computer programs or algorithms designed to mimic the way that humans use language.
We are easy prey for an anthropoglossic machine: surely a machine that writes like us must also think like us.
---
https://thebullshitmachines.com/
Now I need to read the second half of the lessons :)
A nice TLDR of a 3.5h deep dive into LLMs. I'm also proud because for the last 2.5 years I have been catching up on how these things work, and most of the TLDR was already known to me :)
I still have a lot of questions, though...
https://anfalmushtaq.com/articles/deep-dive-into-llms-like-chatgpt-tldr
We have Mastodon at #epfl now! If you have an @epfl.ch mail address (it doesn't work for alumni yet, but make some noise and you might be added :), you can join here:
Funny thing happened yesterday when giving my email address @gasser.blue to an insurer:
me: "[email protected]"
she: "@gasser.bluewin.ch"
bluewin.ch is a common provider here
me: "No, @gasser.blue only"
she: "OK, @gasser.blue.ch"
me: "No, just @gasser.blue"
she: "???"
me: "Yes, it's not common, but actually an allowed name"
she: "Ah, that's funny, I didn't know. All good then!"
I thought it was interesting: very smart to fix the common mistakes people make, but not geeky enough (yet) to know all the TLDs by heart :)
https://huggingface.co/blog/open-deep-research
Seems to be LLM day today.
Hugging Face is working on agent models and wants them to be able to interact with your browser. Probably similar to what Anthropic is doing with their framework, which I never got around to installing...
https://arxiv.org/abs/2405.14831
I just read the abstract, but this looks fantastic for LLMs. Giving them a long-term memory!
Did you ever think while chatting with one of the LLMs: "no need to explain why this answer is wrong, it will not remember it anyway"?
Well, think again. It might now be useful to start writing "please" and to be nice to these things :)
In order to thrive in hostile and ever-changing natural environments, mammalian brains evolved to store large amounts of knowledge about the world and continually integrate new information while avoiding catastrophic forgetting. Despite the impressive accomplishments, large language models (LLMs), even with retrieval-augmented generation (RAG), still struggle to efficiently and effectively integrate a large amount of new experiences after pre-training. In this work, we introduce HippoRAG, a novel retrieval framework inspired by the hippocampal indexing theory of human long-term memory to enable deeper and more efficient knowledge integration over new experiences. HippoRAG synergistically orchestrates LLMs, knowledge graphs, and the Personalized PageRank algorithm to mimic the different roles of neocortex and hippocampus in human memory. We compare HippoRAG with existing RAG methods on multi-hop question answering and show that our method outperforms the state-of-the-art methods remarkably, by up to 20%. Single-step retrieval with HippoRAG achieves comparable or better performance than iterative retrieval like IRCoT while being 10-30 times cheaper and 6-13 times faster, and integrating HippoRAG into IRCoT brings further substantial gains. Finally, we show that our method can tackle new types of scenarios that are out of reach of existing methods. Code and data are available at https://github.com/OSU-NLP-Group/HippoRAG.
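The abstract says HippoRAG uses the Personalized PageRank algorithm over a knowledge graph, biasing the random walk toward nodes matching the query. Here is a minimal toy sketch of Personalized PageRank itself (not the paper's implementation); the node names, damping factor, and iteration count are my own illustrative assumptions.

```python
# Toy sketch of Personalized PageRank via power iteration.
# The "personalization" is a restart distribution concentrated on
# seed nodes (e.g. entities extracted from the query).

def personalized_pagerank(graph, seeds, damping=0.85, iters=50):
    """graph: {node: [out-neighbors]}; seeds: set of query-matched nodes."""
    nodes = list(graph)
    # Restart mass goes only to the seed nodes, not uniformly to all nodes.
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(restart)
    for _ in range(iters):
        # Each step: teleport back to seeds with prob (1 - damping),
        # otherwise follow an outgoing edge of the current node.
        new = {n: (1 - damping) * restart[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue
            share = damping * rank[n] / len(out)
            for m in out:
                new[m] += share
        rank = new
    return rank

# Hypothetical mini knowledge graph (names are made up for illustration).
g = {
    "hippocampus": ["memory", "index"],
    "memory": ["index"],
    "index": ["memory"],
    "neocortex": ["memory"],
}
scores = personalized_pagerank(g, seeds={"hippocampus"})
# Nodes reachable from the seed ("memory", "index") end up with high
# scores; "neocortex", unreachable from the seed, gets none.
```

The point of the personalization vector is exactly the "query-specific memory lookup" flavor the abstract describes: instead of ranking globally important nodes, the walk keeps restarting at the nodes the current question mentions.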