Adopt the Juicero or be left behind. A vital shift is underway in juicing. The Juicero is no longer optional. It's tomorrow's future, today. 40% of jobs are impacted by the Juicero. The Juicero isn't the future, it's a present necessity. Nobody hand-juices anymore. To hand-juice is like an impairment. Everyone must now focus on the delegation and the verification of a juice. We become less juice producers and more juice enablers. Adopt the Juicero or be left behind. We are burning every forest and poisoning every river to produce more Juiceros. You will become obsolete if you don't get on the Juicero bandwagon. Students must not be taught how to hand-juice. 80% of jobs will be lost to the Juicero. Students must be taught to exclusively focus on how to collaborate with the Juicero. Education must focus on orchestrating agentic Juicero systems. The Juicero is inevitable. Adopt the Juicero or be left behind. Adapt or risk becoming obsolete. As the Juicero rapidly advances toward automating up to 90% of juicing, the skills that will matter most include juice design, Juicero fluency, juice delegation, and juice quality assurance. 110% of jobs have been replaced by the Juicero.
(Context: The Juicero was a very expensive machine that came with a DRM lock and could almost perform a simple task some of the time. It's a useful key to understanding Silicon Valley madness.)

@kaye the juicero was expensive and drm locked but my understanding is that at its core, it in fact *was* a very good juicer.

Unlike LLMs.

@azonenberg @kaye

I mean, if not juicing anything makes it a good juicer. (It just squeezed DRM'd Capri Sun-like pouches.)

@Maverynthia @kaye the point is that there is a useful, though expensive, machine inside all the enshittification of a juicero.

LLMs don't even have that. There's no way to jailbreak one to make it useful.

@azonenberg @Maverynthia @kaye I think I see the confusion:
The Juicero, at least in its early models, was badly over-engineered, so it actually had $400 worth of _parts_ inside.
Meaning DIY people rushed to buy them whenever people were selling theirs for cheap.

And that was spun as "but it was a good design/juicer/machine" by people trying to save face.
But a machine = parts × design, and the design was trash.

@Asimech @Maverynthia @kaye Ah interesting.

My understanding was that it was more of "What if you gave an engineering team an unlimited budget to build the best juicer imaginable, then slapped DRM on top of the result". Seems that wasn't entirely accurate.

@Asimech @Maverynthia @kaye Anyway, it still did juice at least a little bit, so still better than an LLM.
@azonenberg @Asimech @Maverynthia @kaye
LLMs have a use: they are proficient at both predictive text and translation.

@duckwhistle Based on everything I've heard, LLMs are worse at both than the traditional algorithms we've had.

With predictive text the quality drop is hidden by the fact that platform decay had hit most of them before LLMs came about. And the big names like Google were never the best ones to begin with.

With translations, LLMs are just hiding the rough edges, which makes them sound better but really just makes it harder to tell when the translation can't be trusted.

@Asimech
Don't get me wrong, I don't think Google Translate is reliable enough for use in a professional context, but it was great in situations where a human translator was never going to be an option.
And the traditional algorithms for both of those things were LLMs; that's where the technology originated.
It's only with the "advances" of the late 2010s, when they became something incredibly inefficient and the same models started being used for all applications, that the quality degraded.

@duckwhistle I was talking about private contexts.
LLMs are worse than traditional algorithms for translation in private use.
Because LLMs are fundamentally unreliable, and you would need to check the work to know it's at all accurate.

And that last part of yours is BS.
Do you seriously think the Nokia 5110 ran an LLM for its T9 input?
And e.g. Google Translate started as an SMT, moved to NMT (and got worse). Neither of which is an LLM (which are worse still).
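(For reference, the "traditional algorithm" behind T9 is tiny and deterministic: map each letter to its keypad digit, then look the digit sequence up in a word-frequency dictionary. A minimal Python sketch; the mini-dictionary and its frequency counts are made up, standing in for the much larger lists real phones shipped:)

```python
# T9-style predictive text: deterministic digit-sequence lookup,
# ranked by word frequency. No neural network involved.

T9_KEYS = {
    '2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
    '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz',
}
LETTER_TO_DIGIT = {ch: d for d, letters in T9_KEYS.items() for ch in letters}

# Hypothetical word-frequency list; a real phone shipped a far larger one.
DICTIONARY = {"home": 50, "good": 40, "gone": 30, "hood": 10, "hone": 5}

def to_digits(word):
    """Translate a word into the keypad digits that would type it."""
    return ''.join(LETTER_TO_DIGIT[ch] for ch in word)

def t9_candidates(digits):
    """Return dictionary words matching the keypresses, most frequent first."""
    matches = [w for w in DICTIONARY if to_digits(w) == digits]
    return sorted(matches, key=lambda w: -DICTIONARY[w])

print(t9_candidates("4663"))  # home, good, gone, hood, hone all map to 4663
```

The whole thing is a dictionary lookup plus a sort, which is why it ran fine on a late-90s phone.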

@Asimech
They may not meet the modern size requirement to be considered an LLM, but the term predates what you are limiting it to.
Even were that not the case, I'd argue LLMs are just fancy NMTs trained on significantly more data, rather than models specific to translation.
The difference between the first GPT and an SMT is no greater than the difference between the first GPT and GPT-5.
Fundamentally they are increasingly complex examples of the same software development paradigm. 🤷
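(The shared paradigm being argued about here, a statistical model of language that predicts the most likely continuation from corpus counts, can be sketched at its most minimal as a bigram counter. Toy corpus and counts are illustrative only:)

```python
from collections import Counter, defaultdict

# Toy corpus; real systems train on vastly more text, but the core idea --
# count co-occurrences, predict the most likely continuation -- is the same.
corpus = "the juicer squeezed the pouch and the juicer broke".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Most frequent word seen after `word` in the corpus, or None."""
    following = bigrams[word]
    return following.most_common(1)[0][0] if following else None

print(predict_next("the"))  # 'juicer' follows 'the' twice, 'pouch' once
```

Whether scaling this idea up by many orders of magnitude still counts as "the same paradigm" is exactly the disagreement in the thread.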