I heard about this through @[email protected] 's newsletter and radio show but had to try it myself. The terrible results are still up.

This example is funny and absurd, but imagine people seeking advice on serious issues--medical, legal, financial--being shown the equivalent of "applum". We don't have a formal concept of malpractice in tech, but I'd characterize showing people results like this as exactly that. It almost surely runs afoul of standard technology ethics guidelines, which typically put something like "do no harm" near the top of the list.

I worked on large language models at a startup in the 2016-2019 timeframe ( https://bucci.onl/notes/Legit.ai ). That's one of the reasons I comment so frequently and so negatively on this technology here: I've worked with it, at least the generation from that era (much has changed since, though not at the fundamental level). We experimented a bit with natural language generation, and I concluded at the time that it was nowhere near ready for prime time even in the restricted domain we were operating in. Despite Google's vast computational resources, gigantic troves of data, and the intervening 5+ years of breakthroughs, its generative AI here is still not ready for prime time as far as I can see. Never say never, but I don't think it ever will be unless the application domain is well-scoped, and general web search is not.

The really sad part? There is non-LLM technology--template- and grammar-based generation over curated data--that can do this sort of thing pretty well when scoped carefully. Certainly less embarrassingly than what Google is demonstrating here. There are folks within Google, or at least there used to be, who are well aware of this.
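
To make "scoped carefully" concrete, here's a toy sketch of what I mean by template-based generation. Everything in it--the facts table, the field names, the sentence frames--is invented for illustration:

```python
# A toy sketch of scoped, template-based generation. All output comes
# from human-vetted sentence frames filled with human-vetted facts;
# the data and field names are invented for illustration.
CURATED_FACTS = {
    "tomato": {"scientific_name": "Solanum lycopersicum", "family": "Solanaceae"},
    "potato": {"scientific_name": "Solanum tuberosum", "family": "Solanaceae"},
}

TEMPLATES = {
    "scientific_name": "The scientific name of the {subject} is {scientific_name}.",
    "family": "The {subject} belongs to the family {family}.",
}

def answer(subject: str, field: str) -> str:
    facts = CURATED_FACTS.get(subject)
    if facts is None or field not in TEMPLATES or field not in facts:
        # The key property of a well-scoped system: outside its domain
        # it declines rather than generating something plausible-sounding.
        return f"Sorry, I don't have that information about '{subject}'."
    return TEMPLATES[field].format(subject=subject, **facts)

print(answer("tomato", "scientific_name"))
# -> The scientific name of the tomato is Solanum lycopersicum.
print(answer("tomato", "genus"))
# -> Sorry, I don't have that information about 'tomato'.
```

Nothing clever, and it will never draft your email for you--but it also can't tell you that solanum is another word for tomato.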

#Google #AI #GenerativeAI #GenAI #AIOverview #tech #PinkSlime #AIGoop #InformationOilSpill

This is truly remarkable. How are these projects being deployed given the exceptionally low quality of the output? I can't begin to imagine the depth of dysfunction at Google that would lead to something like this going live on their home page. Dangerous, embarrassing, and frankly sad stuff.

Note: I put it in the alt text of the image, but just to make sure it's clear: Solanum is definitely not the scientific name for a tomato. It's a genus, a large grouping of flowering plants that includes tomatoes but also eggplants and potatoes (the tomato's actual scientific name is Solanum lycopersicum). This result is a category error, like saying "machine" is another word for "car". Errors like these are pretty simple to avoid in more conventional, grammar-based natural language generation systems. If Google cared to, they could filter outputs using lexical information and a bit of shallow parsing and disambiguation, and avoid a decent fraction of weird results like these. Or, if they want to use only data-driven techniques, which they seem hellbent on pursuing, they could at least cross-check claims like this against Wikipedia.
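
To be concrete about the lexical filtering I mean, here's a minimal sketch using WordNet through NLTK (it assumes the wordnet corpus has been downloaded; the function names are mine). It flags the exact failure mode above: a claimed synonym that is actually a broader category. I demo it with the "vehicle"/"car" pair rather than "machine"/"car" because WordNet happens to link the former directly in its is-a hierarchy:

```python
# A minimal sketch of a lexical sanity check using WordNet via NLTK.
# Function names are illustrative; requires: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def is_synonym(term: str, target: str) -> bool:
    """True if the two words share at least one WordNet synset."""
    return bool(set(wn.synsets(term)) & set(wn.synsets(target)))

def is_broader_category(term: str, target: str) -> bool:
    """True if some sense of `term` is an ancestor of some sense of
    `target` in WordNet's hypernym (is-a) hierarchy."""
    term_senses = set(wn.synsets(term))
    for sense in wn.synsets(target):
        ancestors = set(sense.closure(lambda s: s.hypernyms()))
        if term_senses & ancestors:
            return True
    return False

# A generated claim like "X is another word for Y" should be suppressed
# when X is really a category containing Y, not a synonym of it.
claim, subject = "vehicle", "car"
if is_broader_category(claim, subject) and not is_synonym(claim, subject):
    print(f'"{claim}" is a broader category than "{subject}", '
          f'not another word for it -- suppress this output.')
```

One caveat: WordNet encodes genus membership (tomato-in-Solanum) as a member-of relation rather than is-a, so a production filter would also consult holonyms or a structured resource like Wikidata. The principle is the same.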

#Google #AI #GenerativeAI #GenAI #AIOverview #tech #PinkSlime #AIGoop #InformationOilSpill