once in a lifetime adrenalin rush
I’d say it’s not the LLM at fault. The LLM is essentially an innocent. It’s the same as a four-year-old being told that if they clap hard enough they’ll make thunder. It’s not the kid’s fault that they’re being fed bad information.
The parents (companies) should be more responsible about what they tell their kids (LLMs).
I’d say it’s more that parents (companies) should be more responsible about what they tell their kids (customers).
Because right now the companies have a new toy (AI) that they keep telling their customers can make thunder from clapping. But in reality, while the claps sometimes make thunder, they’re just as likely to make farts. Occasionally some incredibly noxious ones, too.
The toy might one day make earth-rumbling thunder reliably, but right now it can’t get close and saying otherwise is what’s irresponsible.
Sorry, I didn’t know we might be hurting the LLM’s feelings.
Seriously, why be an apologist for the software? There’s no effective difference between blaming the technology and blaming the companies who are using it uncritically. I could just as easily be an apologist for the company: not their fault they’re using software they were told would produce accurate information out of nonsense on the Internet.
Neither the tech nor the corps deploying it are blameless here. I’m well aware that an algorithm only does exactly what it’s told to do, but the people who made it are also lying to us about it.
Sorry, I didn’t know we might be hurting the LLM’s feelings.
You’re not going to. CS folks like to anthropomorphise computers and programs; that doesn’t mean we think they have feelings.
And we’re not the only profession doing that, though it might be more obvious in our case. A civil engineer, when a bridge collapses, is also prone to ask “is the cable at fault, or the anchor?” without ascribing feelings to anything. What it is, though, is ascribing a sort of animist agency, which comes naturally to many people when wrapping their head around complex systems full of different things, well, doing things.
The LLM is, indeed, not at fault. The LLM is a braindead cable anchor that some idiot, probably a suit, put in a place where it’s bound to fail.
This is gonna get worse before it gets better.
Editing grub.cfg from an emergency console, or running update-grub from a chroot, is a close second.
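If anyone’s forgotten the ritual, it went roughly like this (a sketch assuming a Debian-flavoured install rescued from a live USB; the device names are illustrative):

    # Mount the installed system, bind the virtual filesystems,
    # then regenerate GRUB's config from inside a chroot.
    sudo mount /dev/sda2 /mnt              # root partition (illustrative)
    sudo mount /dev/sda1 /mnt/boot/efi     # EFI partition, if present
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt update-grub           # rewrites /boot/grub/grub.cfg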
Adding the right Modeline to xorg.conf seemed more like magic when it worked. 🧙🏼
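For the youngsters: before committing anything to xorg.conf, you could conjure and test a mode by hand, something like this (the timings come from cvt; the output name DP-1 is illustrative):

    # Compute CVT timings for the mode you want...
    cvt 1920 1080 60
    # ...which prints something like:
    # Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
    # Try it live before pasting the Modeline into xorg.conf's Monitor section:
    xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
    xrandr --addmode DP-1 "1920x1080_60.00"
    xrandr --output DP-1 --mode "1920x1080_60.00"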
If you legitimately got this search result, please fucking reach out to your local suicide hotline and make them aware. Google needs to be absolutely sued into the fucking ground for mistakes like these, which:
Are Google trying to make a teensy bit more money, and
Absolutely will push at least a few people over the line into committing suicide.
We must hold companies responsible for bullshit their AI produces.
This is from the account that spread the image originally: x.com/ai_for_success/status/1793987884032385097
Alternate Bluesky link with screencaps (must be logged in): bsky.app/profile/…/3ktarh3vgde2b
Apology Post: About 7-8 hours ago, I shared my views on how Google AI overview might be disabled in the next 15 days, citing some humorous results shared by users on X. I still believe this could happen due to the AI's inaccurate responses. Unfortunately, my first post of 🧵
I gotchu on the cheese
My comment relied on another user’s modified prompt to avoid Google’s incredibly hasty fix.
Be depressed
Want to commit suicide
Google it
Get this result
Remember comment
Sue
Get thousands of dollars
Depression cured (maybe)
Everyone’s brain is different. For some, SSRIs might work; for others, SNRIs. While there are claims of cocaine and prostitutes being helpful for some, that’s not really scientifically proven, and there are significant health and imprisonment risks. There is, however, strong evidence for certain psychedelics.
TL;DR - Drugs might be helpful for some.
I pulled the image from a meme channel, so I don’t know if it’s real or not, but at the same time, the one below does look like a legit response.