I spent a long time experimenting with AI before finally writing about it in depth. It can be pretty useful — but is it worth it?
When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains.
If you liked this essay, consider signing up for a pay-what-you-want subscription.
My writing is never paywalled, but your support is what helps me keep doing this.
The AI stuff isn’t convincing to “normals” either. IronMouse absolutely rinsing Twitter’s “AI” features here, nail on the head. “Reads like when I was at school padding out a bullshit essay with loads of words and not saying anything”. “AI artists?” *laughing* “get that shit out of my art tags. I don’t want to see it.” AI is going to implode so hard in the next 12 months, when the over-valuations of speculative futures adjust to a reality where no one is liking it. https://www.twitch.tv/videos/2120071913?t=2242s
@molly0xfff If only people in Silicon Valley weren’t primarily motivated by becoming billionaires.
This isn’t even an indictment of capitalist success, just…lop off a few zeros there.
You can still get rich quietly making moderately useful things.
@molly0xfff AI is following the same tech hype train I have seen my entire life in tech.
And like every other hype in tech, it's about 80% greed trying to make money selling a 'perfect future', and 20% true believers who should really, really REALLY know better, yet have somehow become converts to a technical concept or tool whose flaws they are professionally trained and experienced to spot but are blind to.
My MS is in machine learning. I know how ChatGPT works under the hood (just as I knew how blockchain worked on a technical basis), and I cannot understand some of the people who ALSO know how it works treating LLMs as if they think or understand.
I can see how it can fool people who don't have a glimpse under the hood, but even they should start to notice they're talking to a literal Chinese Room example.
Words in, most statistically likely words out in the most likely pattern, zero thought or comprehension.
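The "words in, most statistically likely words out" loop the commenter describes can be sketched as a toy next-token sampler. This is a deliberately tiny bigram model with invented counts, not how a real LLM is built (which learns billions of parameters), but the generation loop has the same shape: no comprehension, just weighted sampling of what tends to come next.

```python
import random

# Toy bigram "model": for each word, counts of which word followed it
# in some imagined training text. These counts are made up for
# illustration only.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 4},
}

def next_word(word):
    """Pick the next word, weighted by how often it followed `word`."""
    candidates = bigram_counts.get(word)
    if not candidates:
        return None  # nothing ever followed this word
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights)[0]

def generate(start, max_len=5):
    """Chain predictions: words in, statistically likely words out."""
    out = [start]
    while len(out) < max_len:
        nxt = next_word(out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"
```

Scaling this idea up with a neural network and an enormous corpus gets you fluent text, but the mechanism is still prediction over patterns, which is the Chinese Room point above.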
@molly0xfff I appreciate the nuanced view, and think you nailed it.
And the costs are indeed staggeringly high!
@molly0xfff Interesting read! It aligns really well with my own opinion about AI.
Sadly, the companies are rushing to automate the kinds of creative work that people enjoy doing, like writing, coding and drawing, instead of automating the dangerous and boring work that no one is excited about.
@molly0xfff
Interesting.
Incidentally, I find that care in terminology is a good way to take apart overhyped tech. I'd avoid conflating "blockchain" and "consensus through proof-of-work". The issues of speed and cost stem from the latter.
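The commenter's distinction can be made concrete: a blockchain is just a hash-linked data structure, while proof-of-work is a separate, deliberately expensive step of grinding nonces until a hash falls below a target. A minimal sketch using only Python's standard library (the block fields here are invented for illustration):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents. The 'chain' part is simply that each
    block commits to the previous block's hash -- cheap by itself."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine(block, difficulty=3):
    """Proof-of-work: try nonces until the hash starts with
    `difficulty` zeros. This brute-force search, not the linked-list
    structure, is where the speed and energy cost comes from."""
    target = "0" * difficulty
    nonce = 0
    while True:
        block["nonce"] = nonce
        h = block_hash(block)
        if h.startswith(target):
            return h
        nonce += 1

genesis = {"prev": "0" * 64, "data": "hello"}
print(mine(genesis, difficulty=3))  # hash starting with "000"
```

Each extra zero of difficulty multiplies the expected work by 16, which is why proof-of-work chains are slow and costly while other consensus schemes over the same blockchain structure are not.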
@molly0xfff At least blockchain doesn't first require the theft of tons of people's works to be functional
Both of them still suck massively though.
The misuse of the term destroys discourse about it. ML (machine learning) applied to patterns of transactional behaviour hasn't any real overlap with ML using large language models.
There's no intelligence involved in processing the models. There's considerably more precision in financial transactions than in loose English language models. That doesn't necessarily mean they're more accurate, but the provenance of the data and its mapping is much simpler.
Being able to show the working out is a necessary requirement: where did the data come from, and how were the results produced?
Provenance, 'reasoning' and training sets aren't available for LLMs. Making them available wouldn't help users directly, but it would enable third-party verification.
It doesn't look as if it's moved on very far in the past 7 years (which is the last time I thought seriously about it), but it is tractable.
It lacks proper incentive and motivation. If using the results required generating sufficient markers to enable explanation before the product was released, then real work would be done.
Right now it's dismissed as too expensive, something to do later. So it will never be done.
I think it's a lot simpler for financial transactions, but they probably think their risk valuation and the heuristics they apply for fraud are secret sauce.
But the regular kind of disclosure would be an audit: controlled, not public.
If they were treated as disclosable by someone bringing a well-formed suit, that's still manageable, and not disclosing would have far worse consequences.
It would help to have a multi-state regulatory authority.
@molly0xfff Your entire essay is excellent but especially salient point: "do we even want to be doing these things? If all you want out of a meeting is the AI-generated summary, maybe that meeting could've been an email. If you're using AI to write your emails, and your recipient is using AI to read them, could you maybe cut out the whole thing entirely?"
A thousand times yes!
@molly0xfff i think the way AI has been most “useful” is for churning out low quality garbage (slop) that you can fool people into clicking on to generate ad revenue. it’s not a good thing, but it’s something that people have managed to monetize. not sure how long it will stay profitable though.
it has also been fairly successful at deepfakes for scams as well. so i guess being most useful for scammers is another thing that it has in common with blockchain