I've seen folks arguing that good and accurate info can come out of "AI" too, so we can't dismiss it as garbage.
This misses the point entirely.
Even if "AI" "says" something 100% accurate, the provenance is still garbage. It's like a broken clock. It's like waiting for the nazi to say something non-offensive and saying "wow they're at least right about some things".
So how do we move forward? We can't entirely put this shit back in the shitter. The models are large, but still small enough that bad actors could keep copies and continue using them even if we somehow banned them.
But there's a lot we can do...
@dalias this is a decent point about LLMs and AI, but it's likely to be solved within the year by the research labs, then rolled into the FOSS/commercial AI tools over another 6 months or so
There's already been decent work on figuring out where LLMs got their info from; the next step is understanding why a model used those sources, then training it to discern which sources to value
@alsothings @dalias there's already really good work on arXiv on identifying which documents an LLM's output comes from, other work on exposing token probabilities to the model explicitly, and other work on producing the output through a system of agent LLMs
If you put all this together, you have an AI that can explain itself and explain other things, down to the sources & the other possibilities it considered
I’ll be surprised if someone doesn’t have a working demo of this by fall, & an OSS project by next spring
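Aside: as a concrete illustration of the token-probability piece mentioned above, here is a minimal sketch. It assumes the Hugging Face transformers library with gpt2 as a stand-in model (both are illustrative choices, not anything from the papers being referenced); it just surfaces the probability the model assigned to each token in a given text, which is the kind of raw signal an "explain itself" system would build on.

# Minimal sketch: surface per-token probabilities from a causal LM's own logits.
# gpt2 is a placeholder model; any Hugging Face causal LM would work the same way.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The capital of France is Paris"
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

# Probability the model assigned to each actual next token in the sequence.
probs = torch.softmax(logits[0, :-1], dim=-1)
for pos, token_id in enumerate(ids[0, 1:].tolist()):
    print(f"{tok.decode([token_id])!r}: p={probs[pos, token_id].item():.3f}")

This only shows how token-level confidence can be read out of a model; turning that into source attribution or agent-style self-explanation is exactly the open research the thread is speculating about.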