we're close to a year since chatgpt launched and most of what's been built atop it so far has made the world worse and given bad people more power.
"but i use it for X" i don't fucking care. keep it to yourself.
LLM image generators are already synonymous with low effort, artist-disrespecting trash, an aesthetic that dated itself almost instantaneously. "this image came out of a computer's ass."
"i can write superficially adequate, subtly defective code so much faster though!" how is that even remotely a good thing, in aggregate, in the long term? dril drunk driving tweet dot png
all of this shit is just the rotten, world-destroying political economy of big tech manifested in an unusually concrete and virulent form. if there is any real utility in LLMs it will only be found once we have destroyed this current incarnation and exiled those responsible.
@jplebreton I've noticed that so many of the developers I work with who talked my ear off about Copilot have since... quietly stopped using Copilot.
Largest study of its kind shows AI assistants misrepresent news content 45% of the time – regardless of language or territory

An intensive international study was coordinated by the European Broadcasting Union (EBU) and led by the BBC

entering year 4. it doesn't. fucking. work. https://med-mastodon.com/@jeneralist/115475265142314917
Jennifer Hamilton, MD PhD (@[email protected])

My work #EMR now has integrated #AI that summarizes a patient's chart whether I want it to or not. This week it told me the wrong reason for admission, the wrong hospital course, and the wrong medications as compared against the human-written discharge summary. To review it and find the error took 3 minutes; to document the error and report it took another 10. Anchoring bias exists. What we read stays with us, truth or lie, influencing decisions. And I can't turn it off. #LawsuitBait