| GitHub | https://github.com/cainaru |
| Drupal.org | https://www.drupal.org/u/cainaru |
| BlueSky | https://bsky.app/profile/tinyredflowers.bsky.social |
"Friends don't let friends use LLM for activities where humans are valued. "
By @nod_
https://tresbien.tech/blog/algorithmic-bias-against-drupal-community-values/
I don't usually fail at making my life easier, but hey, it's a whole new world lately. To try my hand at LLMs during my trial of AI-assisted coding, I wanted to see if I could customize one for a specific task: assigning user credit on Drupal Core issues. Depending on the complexity, activity, and number of contributors involved, it can take me anywhere between 30 seconds and 30 minutes to assign credit when I commit an issue to Drupal Core. Maybe I could automate some of it?
How does this work?
How can it be consistent to wipe out 50-80% of the so-called "white-collar" workforce, per these companies' own estimates, and still be profitable?
Who's buying their B2C product exactly if there are virtually no consumers? Who's buying their B2B product if their corporate clients have virtually no consumers?
I feel like all these analyses are missing a very important variable in all this, and I think they're doing it on purpose.
But when you look at the numbers OpenAI and Anthropic are projecting for their supposed IPOs (end of this year??), they're forecasting massive profits by 2030, according to this article in the WSJ https://www.wsj.com/tech/ai/openai-anthropic-ipo-finances-04b3cfb9?st=AUu1XC&reflink=desktopwebshare_permalink
SO...
Ok so given that every single piece of tech news is #AI-related... how does this work exactly? 🧵
"Dario Amodei, the head of Anthropic, has warned that A.I. could eliminate 50 percent of entry-level white-collar jobs within years. The tech investor Vinod Khosla predicted last year that A.I. would replace 80 percent of jobs by 2030. Elon Musk has said the technology will render work “optional.”
I've been thinking about moving this instance from Vultr to Infomaniak (https://www.infomaniak.com/en/about), as it would be cheaper and apparently better for privacy, but then I read https://www.tomsguide.com/computing/vpns/infomaniak-breaks-rank-and-comes-out-in-support-of-controversial-swiss-encryption-law so I'm now conflicted.
/c @e0ipso
I'd like to hear your opinions. Also, if anyone is thinking about supporting this instance, you can see how to do it at https://opencollective.com/drupal-mastodon
"I used AI. It worked. I hated it." by @mttaggart https://taggart-tech.com/reckoning/
This is a really good blogpost, and I'm sure it'll make some people unhappy to read, whether they're pro or anti genAI. What's good about @mttaggart's blogpost is that he talks honestly about how using Claude Code did actually solve the problem he set out to tackle. It needed various guardrails, but they were possible to set up, and the project worked. But the post is also completely clear and honest about how miserable it was:
- It removed the joy from the process
- If you aim to do the right thing and carefully evaluate the output, your job eventually becomes "tapping the Y key"
- Ramifications for how people learn things
- Plenty of other ethical analysis
- And the nagging wonder whether to use it next time, despite it being miserable.
I think this is important, because it *is* true that these tools are getting to the point where they can accomplish a lot of tasks, but the caveat space is very large (cotd)