@cancel Wait, did something happen? My news intake is erratic...
(Update: I have now been informed about the LLM-driven demand for RAM, and Crucial in particular closing their consumer division.)
@Infrapink @woozle @cancel the "AI" bubble has inflated RAM prices, and Micron is exiting the consumer RAM market to chase those AI dollars instead.
https://futurism.com/artificial-intelligence/ai-data-centers-ram-expensive
@tragivictoria @koorogi @Infrapink @cancel
Hmm, is this maybe why MSFT went "plonk" 2 days ago? 🧐
@tragivictoria @koorogi @Infrapink @cancel
In compensation, there will probably be a lot of really cheap high-end hardware hitting ebay when the bubble finally pops... (...assuming it doesn't take civilization down with it 🧐).
[plea for AI bubble to pop before destroying civilization, please sir may we still have some civilization left when you're all done, sir... 🧎♀️🙏]
@woozle @tragivictoria @koorogi @cancel
Unlikely. I'd love it if that happened, but the way companies work, they're more likely to just chuck their old computers in a dump.
@Infrapink @tragivictoria @koorogi @cancel
Hopefully there will be a thriving dumpster-dive initiative...
@woozle @Infrapink @tragivictoria @koorogi @cancel
Unfortunately they shred the entire server.
This has been going on for years. All the big data center operators get their machines at a special low price in exchange for shredding them after three years. It's a huge contributor to the poorly-documented "embodied energy" use of the industry.
@publius @Infrapink @tragivictoria @koorogi @cancel
Okay, add that to the list of things that need to be illegal after the revolution.
@publius @woozle @tragivictoria @koorogi @cancel
~Capitalism is SO efficient!~
@Infrapink @publius @tragivictoria @koorogi @cancel
Efficient at extracting wealth from the rest of us (and burying it in a pit), yup. 
@woozle @Infrapink @tragivictoria @koorogi @cancel
It's worth observing that even the radical market economists only claim that the market economy is the most efficient possible (by a completely circular definition of efficiency). Much as they might like to, they cannot claim this about capitalism, which inherently and inevitably involves what they call "market non-idealities", that is, violations of the basic assumptions of free enterprise.
@publius @Infrapink @tragivictoria @koorogi @cancel
Yah -- I've had "discussions" with a few capitalists, especially back in the day on G+, and... it's an awful lot like a religion.
@woozle @cancel One of the big makers of RAM, Micron, announced that they are only going to sell to AI data centers from now on. Their consumer products, like their "Crucial" brand, are all going away forever.
It'd be nice to open a new RAM factory in response but the capital outlay for that is something like a billion dollars.
@madengineering @cancel I can't imagine that the LLM-driven demand will last more than another year; the whole thing is a financial shell-game -- companies giving each other free access, and the recipients counting it as "investment" which they can use to further overvalue their stocks...
(I feel like some metaphorical heads will eventually metaphorically roll over this; aren't those sorts of numbers supposed to be vetted by certified accounting firms?)
Yeah, it's definitely going to cause issues for any non-billionaires out there. 
Thankfully Linux people don't seem to have that attitude. Even the big bells-and-whistles desktop environments will run comfortably on 4GB.
@argv_minus_one@mastodon.sdf.org I wouldn't be so sure. Have you ever seen malloc() return NULL in Linux?
Its memory manager writes cheques that it can't cash. This is why you see so many oom_kill problems.
I have a laptop next to me with 4GB and it runs KDE Plasma without incident, so yes, I'm quite sure.
@xconde overallocation is a design decision, not a bug/problem. it's safer to kill apps with heuristics instead of trusting developers to handle malloc failures properly. a fatal malloc failure kills whatever process gets caught in the crossfire, not necessarily the heaviest or problematic one(s). malloc failures being fatal is a common pattern across languages that don't make the developer explicitly malloc().
virtually no developer (and that includes Vel) wants to think about malloc failures outside of embedded/critical systems.
@xconde @argv_minus_one you’re making a different argument than the one you’re responding to
argv is saying common linux desktop environments use a modest amount of RAM, and you’re saying, at a technical level, the allocator doesn’t reject requests
but that’s unrelated, you can have a malloc() which is careful to not run out of real memory and still have user applications on top of it which are super disrespectful memory hogs which use every page they can get
@usul while Vel wasn't alive 20 years ago, it's quite certain that low-end devices were not driving 1920x1080@60 24-bit displays. Triple-buffering wasn't used, since there were no fancy smooth animations, etc.
using more RAM is often for the purposes of speeding things up and ultimately reducing the load on the CPU/GPU/disk.
Macs were triple-buffered by default in 2001 (Mac OS X); Windows was triple-buffered by default in 2006 (Vista).
don't worry it's not that big of a deal.
@cancel
Something is different though. On our laptop almost half the RAM is used by the iGPU driver's RAM spillover. This was not the case 5+ years ago.
That (and websites getting fatter) is the main reason it regularly OOMs.
Got any info/hypothesis on that?
@risc @usul @argv_minus_one
@curiousicae sounds like either a bug or strange settings... vel's iGPU only gets any significant amount of RAM when it's actively using something heavy
@cancel
Most of the used memory is “GTT” (horrible Linux kernel jargon for “RAM dynamically allocated by the GPU” - took me years to understand that radeontop line).
I think the static assignment is actually for the iGPU, so that it can function without the graphics driver being present.
Seems like it's feeling modest right now, though (usually GTT is easily twice that) – I really doubt AMD suddenly fixed anything for a "legacy" GCN 1.0 card in their driver:
@risc @usul @argv_minus_one
That is a good example of using more RAM to make things faster and smoother. It's just that it became an example earlier than @risc thought (because RAM was already fairly plentiful in 2001).
Like, you know how gamers these days say to run video games in windowed mode instead of full screen to make them run smoother? That's because the window compositor is triple-buffering the game. In full-screen mode, this doesn't happen (unless the game itself does triple buffering).
As someone who is very much enjoying his new 3840×2160 screen, it's because I can see *everything* on this huge damn screen. I love it so much.
Now if apps could go ahead and stop wasting space on unnecessary padding, that'd be great. I'm using a desktop with a mouse, not a phone!
Although I do have a touchscreen laptop, so… 🤷♂️