| Home | https://herland.net/johan |
| GitHub | https://github.com/jherland |
| https://twitter.com/jherland |
LOL true Mastodon verification:
YOU MUST VERIFY YOURSELF ON MASTODON BY:
- Posting a picture of a cat in your lap
- Random photos of flowers
- Wax poetic about your favorite episode of Star Trek TNG (or your hate of the series)
- Random, undecipherable technical blabbering about ham radio electronics
- Mention something about your favorite Linux command line
- Say hello to your many LGBTQ followers/friends here, just because you're glad they're here
- Toot a picture of some mushroom you ran into while walking in the forest
- Something something astronomy
- Random gadget/device/bicycle post
- Post a random picture of a tree or window
- Photo of your sewing/mending project!
- Hand drawn art post
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
The ethical case against using LLMs for work is straightforward and unambiguous
The productivity case against using LLMs for work is complex and requires an understanding of volatility, variability, biases, security issues, lock-in, and more
But it turns out that if you don’t have any time for ethics, you also don’t have any time for understanding complex systems, so neither case matters to them
duckduckgo has a yes or no ai poll going now and lmao
Tony Hoare: Null was my billion dollar mistake.
AI Industry: Hold my beer...
This is scary, downright terrifying even.
Larry Ellison Oracle keynote