AI is a lot like the fossil fuel industry: seizing and burning something (in this case, the internet, and more broadly, written-down human knowledge) that was built up over a long time, far faster than it could ever be replenished.
I think a lot of people who are skeptical of the criticisms here really don't understand how it's *burning* anything, because they haven't thought about how value is derived from provenance.
Suppose you have a $100M batch of medicine, but an inside saboteur has put poison in a few bottles and you can't determine which ones. How much is your inventory worth? $0, or less, since you have to dispose of it too. This is because the value depended on provenance: having a reasonable basis to believe the product is what it appears to be.
For a lower-stakes example, why is the organic produce in the grocery store more expensive? If you took the labels off and mixed it all together with the rest, it wouldn't be. The value at sale time is derived from a meticulous record-keeping process that makes faking provenance comparable in cost to just doing it right.
Provenance of human-written knowledge comes from a lot of places. Just because something was written by a human doesn't make it accurate or non-garbage. But the labor cost of producing misinformation that's hard to distinguish from meaningful writing in the same domain, together with a lot of systems we have in place, makes evaluating provenance a tractable problem.
The ability to produce unlimited amounts of plausible-looking garbage at essentially no cost, and to crowdsource that kind of vandalism to millions of randos by disguising it as something fun, destroys that tractability. It's a DDoS attack on written knowledge.

I've seen folks arguing that good and accurate info can come out of "AI" too, so we can't dismiss it as garbage.

This misses the point entirely.

Even if "AI" "says" something 100% accurate, the provenance is still garbage. It's like a broken clock. It's like waiting for the nazi to say something non-offensive and saying "wow they're at least right about some things".

And quite coincidentally, all this comes right at the moment Elon burned down the greatest tool we had for realtime vetting of information across a vast range of knowledge domains through a shared domain-knowledge trust graph. We're still a long way from rebuilding that here.

So how do we move forward? We can't entirely put this shit back in the shitter. The models are large, but still small enough for bad actors to keep and continue using even if we somehow banned them.

But there's a lot we can do...

We can recognize the people pushing this shit as charlatans, enemies of humanity, rather than the geniuses they want us to see them as. We can stop falling for their scam of the day. We can organize to tear down their power, devalue their wealth, deliver them consequences.
We can preserve and strengthen our standards for provenance of knowledge. In particular, open and community-based projects that deal with knowledge can clearly and unequivocally ban model-generated bullshit and the users who try to sneak it in.

@dalias this is a decent point about LLMs and AI, but it's going to be solved within the year by the research labs, then probably rolled into the FOSS/commercial AI tools within another 6 months

There's already been decent work on figuring out where LLMs got their info from; the next step is understanding why a model used those sources, then training it to discern which sources to value

@Techronic9876 @dalias

Who's got the time for all that, though? And what about the fact that the well of information future AIs draw from is forever polluted by the previous generations?

More importantly, why wasn't the lack of sourcing seen as an issue before the fact, rather than afterward? Every authoritative source in history had footnotes, references, etc. In the digital realm, even Wikipedia has references. So why did the big brains developing AI not take provenance into account?

@darrelplant @dalias because researchers didn't know LLMs would be able to chat. This was an emergent capability. They weren't trying to build a chatbot, they were trying to build special-purpose sentiment-analysis/grammar/translation tools, and chatting took everyone by surprise. LLMs were essentially an accident

Now that they know LLMs can do zero-shot and one-shot learning, they’re working very hard on the provenance/explainability/alignment questions

@Techronic9876 @dalias

Pretty sure they knew they would be able to chat before they released products with names like "ChatGPT".

I've been watching attempts at chatbots develop since the late 70s. If the people building tools to write text based on language data had no inkling that their tools could fake holding a conversation, then they are very, very stupid people.

@darrelplant @dalias OpenAI releasing ChatGPT was hugely controversial (and still is) among the people who actually discovered LLMs. OpenAI didn't invent the underlying research, they just commercialized it. But once something is published research, anyone can use it

Looking at how industries & governments reacted, I don't think anything would have stopped someone from commercializing LLMs before they were ready. The best we can do now is harass/regulate new entrepreneurs into not repeating that

@Techronic9876 @darrelplant LLMs were not "discovered". All this was known over 50 years ago. They just lacked access to the volume of text and GPUs to implement it, which capitalist asshats "solved" by scraping everyone's stuff without license or consent.
@Techronic9876 @darrelplant That LLMs do what they do is not surprising at all to anyone with a basic understanding of probability and statistics, and really shouldn't have been a century ago, either.

@dalias @Techronic9876

I don't know, after all the science-fiction I read and watched, I'm really kind of surprised at how bad they are. It's 2023! Where's my jetpack?

@darrelplant @Techronic9876 I mean, if you know how they work, it's not surprising.

It's also why sci-fi authors never envisioned "AI" as LLMs - because they're such a ridiculously dumb, obviously "fake" way to do AI, with no intelligence whatsoever.

@dalias

The programming and data storage details of the positronic brains of my youth were never really specified. I mean, we were still working with punch cards. It was assumed the brains would be something more sophisticated than punch cards, and the details were glossed over. "Big, dumb database search" wasn't a thing.

@darrelplant You could clearly see from how it was depicted that it was parsing language and applying logical rules, not behaving like a glorified Markov bot.
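
For reference, a "Markov bot" is about this much code. A toy sketch (mine, not from any particular project) that babbles by sampling whichever word tended to follow the last couple of words:

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each `order`-word window to the words observed to follow it."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def babble(table, order=2, length=20):
    """Emit text by repeatedly sampling a plausible next word. No facts involved."""
    out = list(random.choice(list(table)))
    for _ in range(length):
        choices = table.get(tuple(out[-order:]))
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the robot parsed the language and the robot applied the rules"
print(babble(train(corpus)))
```

Scale the lookup table up to billions of parameters and you get fluency, not understanding.
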
@dalias
Considering that LLMs can't even stick to the rules expressed in a 19-word, one-sentence instruction, I'm not sure I'd really want to trust one "governed" by Asimov's Three Laws.

@Techronic9876 @dalias

The provenance issue isn't even related to chatbots specifically, though. A traditional search engine basically provides its reference data by way of links to those references. It doesn't verify their validity, but they are provided, just like a freshman college student writing a paper.

Even if an LLM were incapable of chat, and its function was just to summarize a topic, or to generate articles to put human writers out of work, IT NEEDS TO PROVIDE REFERENCES.

@Techronic9876 @dalias solved in a year? That's an excellent joke! Maaaybe some good detection techniques would be viable/scalable in that sort of time range, but this is a social problem, not just a technical one. Look at the whole culture around fact-checking and misinformation labelling and suppression. It wasn't doing that well with human-scale misinformation and bullshit, never mind the scale possible with the current generation of LLMs, and that BS had no trouble finding an audience, so...

@alsothings @dalias there's already really good work on arXiv on identifying which documents an LLM's output comes from, other work on letting LLMs know token probabilities explicitly, and further work on making the output come from a system of LLM agents

If you put all this together you have an AI that can explain itself and explain other things, down to the sources & other possibilities

I’ll be surprised if someone doesn’t have a working demo of this by fall, & an OSS project by next spring
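
For the token-probability part specifically, that much is concrete today. A minimal sketch (my own illustration with GPT-2 via Hugging Face transformers, not any particular paper's method) reading per-token probabilities straight off the logits:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits  # shape: (batch, seq_len, vocab_size)

# Distribution over the *next* token given the prompt so far.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, i in zip(top.values, top.indices):
    print(f"{tok.decode(int(i))!r}: {p:.3f}")
```

The hard part is wiring that uncertainty into what the model actually tells you, which is what the agent-systems work is aiming at.
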

@Techronic9876 @dalias yeah, I don't doubt your timing on the narrow technical issues you're talking about. The thing that compelled me to reply in the first place was the bit of (probably unintentional) rhetorical fun where you declared that the problematic deployment and use of modern LLMs would be solved in about 18 months, rather than the research just _providing_ a means to opt in to better systems. This is, as I said, hilariously far away from 'solving' the systemic problems enabled by these tools
@Techronic9876 @dalias In contrast, I think the systemic problems around information and knowledge sharing/discovery that we are all now faced with are _generational_ in timescale, but I would just love to be wrong

@Techronic9876 @alsothings "An AI that can explain itself and explain other things, down to the sources & other possibilities"?

No, you do not. You have a probability model that tells you, for a particular word soup, which sources and explanations are most likely, within that model, to have some correlations with the word soup that a reader can plausibly interpret as agreeing with it.

@dalias @alsothings it's not word soup; otherwise this would have worked two decades ago with n-gram models

It's a multi-thousand-dimensional vector space where the model regresses each dimension onto some concept, then maps each token to the intersection of all the concepts it represents

This means the model can infer new concepts by interpolating between points, or extrapolating to new points in the space
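
A toy illustration of the "points in concept space" idea, with hand-made 3-d vectors standing in for thousands of learned dimensions (real embeddings are learned, not hand-labeled):

```python
import numpy as np

# Hypothetical concept vectors: dimensions loosely meaning (royalty, male, concrete-noun).
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.1]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.1, 0.1]),
}

def nearest(v, vocab):
    """Return the vocab word whose vector has the highest cosine similarity to v."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return max(vocab, key=lambda w: cos(v, vocab[w]))

# The classic analogy-by-arithmetic demo: king - man + woman lands nearest "queen".
print(nearest(emb["king"] - emb["man"] + emb["woman"], emb))
```

That's the sense in which new points in the space can pick out concepts nobody wrote down explicitly.
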

@Techronic9876 @alsothings No it does not mean that. Thinking it does is magical thinking by someone whose salary depends on not understanding what they're doing.
@Techronic9876 @dalias what do you mean exactly when you say 'worked'? LLMs are, at heart, a big refinement of the various sequential models that came before (n-gram-based things, Markov models, etc.). But you're just describing the precision of the word soup, not some way in which that sort of modelling strategy would qualitatively shift epistemic modes from stochastic mimicry (at potentially very high levels) to operational knowledge?

@alsothings @dalias a single LLM would not, but that's where the previously mentioned systems currently being researched will come in

Next time you're in public and eavesdropping on people's casual conversations, tell me it doesn't sound like two ChatGPT agents 99% of the time; people under-appreciate what a really good, precise, interactive word soup can actually do

@Techronic9876 @alsothings That's literally the cargo-cult fallacy.

@dalias @alsothings it’s a functionalist perspective

But the power of a belief system is in its predictive ability. You predict LLMs will stagnate and continue to have poor explainability and reasoning

I predict in about a year and a half, consumer AI will have mostly solved the explainability problem and continue to get better beyond that

I hope whoever is wrong then will update their belief system to reflect what actually happened

@Techronic9876 @alsothings It's *not* a functionalist perspective unless the function you have in mind is convincing people to believe something. Which is generally a malicious function.

In your example of overhearing a conversation, the difference is that the conversation corresponds to a vast network of consistent facts, unknown to you but able to become known later, and the GPT garbage doesn't. That these appear similar to you is a problem, not an achievement to celebrate.

@Techronic9876 @alsothings If you think the explainability problem is something that can be solved, you don't understand the problem.

@dalias @alsothings the technical aspect can be solved; humans will always fight to use any tool for good or evil

Good people need to keep using and building good AI

@Techronic9876 @alsothings You are so deep in your dreams of making a career out of this, and so out of touch with mathematical reality.

@dalias @alsothings I think I have a very clear grasp of the mathematical reality, having done a hundred hours of coursework on the topic, thousands of hours of reading, and several days of conference presentations

Saying it’s just a “word soup” demonstrates being out of touch with the mathematical reality

@Techronic9876 @dalias
This post aged like stinky cheese.
@paninid @Techronic9876 @dalias to the infinite surprise of nobody who wasn’t a fucking hype booster
@aud @dalias @paninid @Techronic9876 github copilot's landing page stated on day 1 that attribution was "coming soon!" it made me so upset; that was a flat lie
@hipsterelectron @dalias @paninid @Techronic9876 lmao seriously. I genuinely doubt they thought that was true, too.
@hipsterelectron @aud @dalias @paninid @Techronic9876 Yup. The value proposition all along was plausible deniability of copyright infringement. As soon as you provide ready access to attribution, you destroy that plausible deniability as well as credibility of the anthropomorphizing "AI vernacular" being used for lobbying and legal defense.
@jedbrown @hipsterelectron @dalias @paninid @Techronic9876 That plausible deniability extends to basically all "AI" uses: you bake in the bias, then get to say "computer says no", as they say.
@aud @dalias @paninid @hipsterelectron @Techronic9876 Indeed, and that's why "AI is replacing human jobs" is misleading. More like "AI is doing things that would break existing law if a human were to do the same things". This is obviously valuable because crime pays.
@jedbrown @aud @paninid @hipsterelectron @Techronic9876 Wow, this is such a perfect explanation, and it ALSO explains why the exact same people are behind it as were behind cryptocurrency.
@dalias @jedbrown @paninid @hipsterelectron @Techronic9876 "you couldn't charge an NVIDIA GPU with a crime... not outside of Texas, anyway!"
@aud @paninid @hipsterelectron @jedbrown @Techronic9876 Please please please charge the nvidia gpus with a crime and apply civil forfeiture to them.
@dalias @paninid @hipsterelectron @jedbrown @Techronic9876
computer says no,
think that's not a crime?
now the GPU's doing science
on the company's dime
@dalias @paninid @hipsterelectron @jedbrown @Techronic9876 oh my god: having to donate compute time to the public good would be a lovely punishment for these tech fuckers.
@dalias @paninid @hipsterelectron @jedbrown @Techronic9876 "Hope you enjoyed your crypto scam! You now have to donate 10% of your available computing resources to a set of public universities and/or create programs providing free compute to schools and other educational resources"
@dalias @paninid @hipsterelectron @jedbrown @Techronic9876 I wish there was a government position where I got to dole out ironic yet appropriate punishments for corporations because I would dedicate myself to that job 110%.
@aud @dalias @paninid @jedbrown @Techronic9876 a part of my brain tells me this means corps get to dictate priorities for that computing, which makes it less of a public good than building independent public infrastructure. appropriate fines, like in the billions, commensurate with actual harm, could be appropriated for something like this, but i would prefer the FTC to apply the banhammer
@hipsterelectron @aud @paninid @jedbrown @Techronic9876 Oh I just wanted the gpus seized and destroyed (or even "destroyed" and given to pigs' gamer kids or whatever 🤪) not just made to donate % of compute time.
@dalias @hipsterelectron @paninid @jedbrown @Techronic9876 (I 100% agree on both points... actually, I suspect asset seizure would be something they could directly exploit: do crimes on (or claim the crimes were committed on) out-of-date hardware and bam, you've reaped the benefits of crime and don't have to worry about offloading old hardware.)
@dalias @hipsterelectron @paninid @jedbrown @Techronic9876 I guess the reality is there's nothing a corporation can't exploit or abuse if they have the financial means to do so, so they need to not have the means. Tax and fine them into oblivion.