I've seen folks arguing that good and accurate info can come out of "AI" too, so we can't dismiss it as garbage.
This misses the point entirely.
Even if "AI" "says" something 100% accurate, the provenance is still garbage. It's like a broken clock being right twice a day, or like waiting for the Nazi to say something non-offensive and concluding "wow, they're at least right about some things".
So how do we move forward? We can't entirely put this shit back in the shitter. The models are large, but still small enough for bad actors to keep and keep using even if we somehow banned them.
But there's a lot we can do...
@dalias this is a decent point about LLMs and AI, but it’s going to be solved within the year by the research labs, then probably another 6 months before it’s rolled into the FOSS/commercial AI tools
There’s already been decent work on figuring out where LLMs got their info from; the next step is understanding why they used those sources, then training them to discern which sources to value
Who's got the time for all that, though? And what about the fact that the well of information future AIs draw from is forever polluted by the previous generations?
More importantly, why wasn't the lack of sourcing seen as an issue before the fact, rather than afterward? Every authoritative source in history had footnotes, references, etc. In the digital realm, even Wikipedia has references. So why did the big brains developing AI not take provenance into account?
@darrelplant @dalias because researchers didn’t know LLMs would be able to chat. This was an emergent capability. They weren’t trying to build a chatbot, they were trying to build special-purpose sentiment-analysis/grammar/translation tools, and chatting took everyone by surprise. LLMs were essentially an accident
Now that they know LLMs can do zero-shot and one-shot learning, they’re working very hard on the provenance/explainability/alignment questions
Pretty sure they knew they would be able to chat before they released products with names like "ChatGPT".
I've been watching attempts at chatbots develop since the late 70s. If the people building tools to write text based on language data had no inkling that their tools could fake holding a conversation, then they are very, very stupid people.
@darrelplant @dalias OpenAI releasing chatgpt was hugely controversial (and still is) among the people who actually discovered LLMs. OpenAI didn’t invent the underlying research; they just commercialized it. But once something is published research, anyone can use it
Looking at how industries & governments reacted, I don’t think anything would have stopped someone from commercializing LLMs before they were ready. The best we can do now is harass/regulate new entrepreneurs into not repeating that
I don't know, after all the science-fiction I read and watched, I'm really kind of surprised at how bad they are. It's 2023! Where's my jetpack?
@darrelplant @Techronic9876 I mean, if you know how they work, it's not surprising.
It's also why sci-fi authors never envisioned "AI" as LLMs - because they're such a ridiculously dumb, obviously "fake" way to do AI, with no intelligence whatsoever.
The programming and data storage details of the positronic brains of my youth were never really specified. I mean, we were still working with punch cards. It was assumed it was going to be something more sophisticated than punch cards and glossed over. "Big, dumb database search" wasn't a thing.