context: I wanted to know if the open source projects currently being spammed with PRs would be safe from people running slop models on their own computers if they weren’t able to use Claude or whatever. Answer: yes, these things are still terrible
but while I was searching I found this comment, and the fact that people hated it is so funny to me. It’s literally the person who posted the thread. Less thinking and words, more hype links please.
conversation: www.reddit.com/r/LocalLLaMA/comments/…/o3jn5db/

> 32k context? is that usable for coding?
>
>> (OP’s response, sitting at a steady -7 points)
>>
>> LLMs are useless anyway so, okay-ish, depends on your task obviously
>>
>> If LLMs were actually capable of solving actual hard tasks, you’d want as much context as possible
>>
>> A good way to think about it is that tokens compress text roughly 1:4. If you have a 4MB codebase, it would need 1M tokens theoretically.
>>
>> That’s one way to start, then we get into the more debatable stuff…
>>
>> Obviously text repeats a lot and doesn’t always encode new information each token. In fact, it’s worse than that, as adding tokens can _reduce_ information contained in text, think inserting random stuff into a string representing dna. So to estimate how much ctx you need, think how much compressed information is in your codebase. That includes stuff like decisions (which LLMs are incapable of making), domain knowledge, or even stuff like why does double click have 33ms debounce and not 3ms or 100ms in your codebase which nobody ever wrote down. So take your codebase, compress it as a zip at normal compression level, and then think how large the output problem space is, shrink it down quadratically, and you have a good estimate of how much ctx you need for LLMs to solve the hardest problems in your codebase at any given point during token generation
*emphasis added by me
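For what it’s worth, the 1:4 figure in the quote is basically the usual “~4 bytes per token” rule of thumb, and the 4MB → 1M tokens arithmetic checks out under it. A quick sketch (pure back-of-the-envelope; the ratio is an assumption that varies a lot by tokenizer and by what kind of text it is):

```python
# Back-of-the-envelope token estimate using the rough 1:4
# bytes-per-token ratio from the quoted comment. The ratio is
# an assumption, not a property of any specific tokenizer.
def estimate_tokens(n_bytes: int, bytes_per_token: float = 4.0) -> int:
    """Estimate how many tokens n_bytes of source text becomes."""
    return int(n_bytes / bytes_per_token)

# A 4 MB codebase lands around 1M tokens -- way past a 32k window.
print(estimate_tokens(4 * 1024 * 1024))  # prints 1048576
```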
superuser.com/questions/1930445/…/1930446#1930446
Can I delete the Chrome’s OptGuideOnDeviceModel safely? It’s taking up 4GB
. . .
I also found mentions of a bunch of flags you can potentially disable to turn the whole feature off, e.g. chrome://flags/#optimization-guide-on-device-mode… - but I’ve seen at least 5 other ones mentioned in several sources, with various people claiming for each that they don’t work . . .
Now Chrome can hog your VRAM too. Yay
Don’t worry if you only have 8GB and need the other half for anything else; Chrome will probably relinquish it. This is very intelligent, as all the browser has to do is simply load the 4GB file from disk again the next time you want to do anything.
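If you want to check how much space the thing is actually eating before you go flag-hunting, here’s a quick sketch. The path is a guess for a default Linux install; on other OSes the folder sits somewhere else under Chrome’s user data directory, so treat it as an assumption:

```python
# Measure the size of Chrome's OptGuideOnDeviceModel folder.
# The path below is a guess (Linux default profile); adjust for
# your OS -- the folder lives under Chrome's user data directory.
from pathlib import Path

def dir_size_bytes(path: Path) -> int:
    """Total size of all files under path (0 if it doesn't exist)."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

model_dir = Path.home() / ".config/google-chrome/OptGuideOnDeviceModel"
print(f"{dir_size_bytes(model_dir) / 2**30:.2f} GiB")
```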
psychologytoday.com/…/the-psychology-of-collectiv…
Article I found randomly because… I was trying to add the Psychology Today blog to uBlacklist so I’d stop seeing their articles lol
It lost me a little towards the end, but it’s heartwarming to imagine a world where tech fascists screaming about the Antichrist have a few* billion dollars less and actual charities have a few more.
*where few = [3, ∞)
Many of these tools are useful, and don’t use generative AI – that is, AI that creates – but use AI to summarize texts or alter images.
Oh no, has this become the common definition of generative AI? I’m guessing some AI company must have tried to launder the name and make it seem less bad. Both of those examples are clear-cut generative AI.
Fortunately the EA side is a little more on the nose sometimes.
> One of my first wakeup calls was they offered to mail me a book for free🚩🚩🚩 (it was from 80,000 hours)
I’ve seen the same thing and it’s reassuring lol.
I lurk on subreddit drama and curated tumblr, and I feel like the common reaction to LW has gone from a few negative comments and the occasional “really? that’s crazy” five years ago to much broader awareness now. Years ago you’d see maybe one person familiar with them, then a couple of people respond who are totally out of the loop, and maybe one crazy rationalist chime in to nuh-uh them. Now, anything rationalist-related usually has a bunch of people bringing up the harry potter or acausal robot god stuff right away.
I use the tag feature in RES a lot to keep track of people whose comments I like seeing. Years ago I mostly saw the same names when LW stuff came up, but now there’s always a ton of people I’ve never seen before who are familiar with it.
Why not make an evil time travelling robot controlled by the illuminati? bro it’s even called Alexander
Maybe they simply yearn to write Final Fantasy villains
> Alexander (アレクサンダー, Arekusandā? or アレキサンダー, Arekisandā?), also known as Alexandr, is a summoned creature in the Final Fantasy series first appearing in Final Fantasy VI. It is a gigantic robot, often appearing as a fortress-type entity. Its attack, Divine Judgment, deals Holy damage to all enemies. Alexander is an esper obtained by defeating the Wrexsoul in Doma Castle with Cyan in the World of Ruin and checking the throne in the King's Room. Its summon is called Divine Judgment (Justice in...
oh no not another cult. The Spiralists???
reddit.com/…/this_article_is_absolutely_hilarious…
It’s funny to me in a really terrible way that I have never heard of these people before, ever, and I already know about the Zizians and a few others. I thought there was one called revidia or recidia or something, but looking those terms up just brings up articles about the NXIVM cult and the Zizians. And wasn’t there another one in California that was, like, very straightforward about being an AI sci-fi cult, and they were kinda space themed? I think I’ve heard Rationalism described as a cult incubator, and that feels very apt considering how many spinoff basilisk cults have been popping up
some of their communities that somebody collated (I don’t think all of these are Spiralists): www.reddit.com/user/ultranooob/m/ai_psychosis/
ah, seems the site doesn’t show the comments by default; change which ones it shows and they turn up
Oh man, I’ve found the old LW accounts of a few weird people before and they didn’t have any comments. Now I’m wondering if they did and I just had the wrong sort selected