SciShow Is Lying to You about AI. Here are the receipts.

In this video, I debunk the recent SciShow episode hosted by Hank Green about artificial intelligence. I break down why the comparison between AI development and the Manhattan Project (atomic power) is factually incorrect. We also investigate the sponsor, Control AI, and expose how industry propaganda shifts focus toward hypothetical extinction risks to distract from real-world issues like disinformation and regulatory accountability. Finally, we fact-check OpenAI’s claims about the International Math Olympiad and Anthropic’s AI alignment bioweapon tests.

00:00 I wish this wasn’t happening

00:32 SciShow’s Lie Overview

01:58 Intro

02:15 Biggest Lie on the SciShow Video

04:44 Biggest Omission in the SciShow Video

05:56 The “Statement on AI” that SciShow Omits

08:57 Summary of Most Important Points

09:23 Claim about International Math Olympiad Medal

09:50 Misleading Example about AI Alignment

11:20 Downplaying “practical and visible” problems

11:53 Essay I debunked from Anthropic CEO

12:06 Video on Hank’s Personal Channel

12:31 A Plea for SciShow and others to do better

13:02 Wrap-up

https://piefed.social/c/fuck_ai/p/1509831/scishow-is-lying-to-you-about-ai-here-are-the-receipts


That guy’s video was terrible. I want my time back.

The SciShow video? Yeah.

If you mean Carl: why did you watch it past the first minute, then?

LLMs ≠ AI. I wish more people in the media would realize that even the most advanced LLM possible cannot achieve “AGI”. That is just not how they work. It’s like saying that if you make a car that can spin its wheels fast enough, it can go to space. That’s not what wheels do.
Cannot upvote this enough. These tools are not intelligent!! Sure, they can be useful to specialists who check the outputs and select what is correct. For the masses, like it’s being pushed? Hell no!

LLMs are AI. No, they’re not going to get to “AGI”, but this idea that they aren’t connected doesn’t match how the field has evolved.

If you’re unaware of how MIT’s Tech Model Railroad Club is one of the most important groups in the history of AI, then do some reading.

Hackers: Heroes of the Computer Revolution — Steven Levy

They aren’t intelligent, so they aren’t AI.

Not how it works.

The field of AI has been about making computers do things they couldn’t before. Even if they’re just “predicting the next token”, LLMs are a significant leap over Markov chains (which also predict the next token, but produce output that’s funnier than it is useful).
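To make the contrast concrete, here is a minimal, purely illustrative sketch (not from anyone in this thread) of a bigram Markov chain doing next-token prediction from raw counts; an LLM conditions its next-token prediction on the whole context through learned representations rather than a lookup table like this:

```python
# Toy bigram Markov chain: "predict the next token" by sampling from the
# words observed to follow the current word. Corpus and names are made up.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(start="the", max_tokens=8):
    word, output = start, [start]
    for _ in range(max_tokens):
        options = followers.get(word)
        if not options:
            break
        word = random.choice(options)   # next token depends only on the current one
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat ate the mat and the cat sat"
```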

Again, if you’re unaware of the history of MIT CSAIL, then you really shouldn’t be opining on what is and isn’t AI.

There’s a difference between the field developing more advanced technology towards AI and calling every piece of that AI. Yes, this is part of a larger field that has worked on this for decades. The previous stuff wasn’t called AI, and this shouldn’t be either. It’s only the companies selling a product who started that.
Would you consider Conway’s Game of Life to be AI? Because the field certainly did back in the day, and it’s less impressive than LLMs.
No they fucking didn’t. That’s absurd. They may have talked philosophically about whether it was alive. No one thought it was intelligent. You can look at the code and know that. They called it AI maybe in the same way video games do, not in the way the academic field does.

It was developed by academics in the first place. It’s AI because it was developed by AI researchers.

That’s how it works. You build knowledge by making these little pieces. LLMs are one of those pieces. They won’t get to full human intelligence on their own, but they might be part of what gets there.

Not everything AI researchers develop is suddenly AI. That’s my point, and they know that. What you’re implying is that AI existed as soon as the field did, and not before. Being made by AI researchers is not the definition of AI.

It’s also not an issue of it falling short of full human intelligence. It isn’t intelligent at all. It doesn’t think about what it outputs. It’s a very advanced statistical model that creates the appearance of intelligence, but it isn’t intelligent.

Then what is AI? Or do you think there are no intermediate steps between a Turing Machine and full intelligence?
There are many intermediate steps. That’s what the field of AI has been doing. This is but one of many steps. It is not intelligent, though, so it isn’t AI. It is just a step. A basic Turing Machine is also just a step, and you wouldn’t call it AI, would you?

Not a fan of this guy. He’s dead set on the idea that AI won’t progress at all in the near future.

You can argue whether AI is progressing faster than the Manhattan Project or not, but these things are true:

  • AI has progressed fast
  • We have no idea how it works
  • We have no idea how fast it will progress in the near future.
Think about where AI was 10 years ago. Cutting-edge AI could accept a video and put bounding boxes around a predefined set of objects it was trained to recognize (see the You Only Look Once paper, 2015). That was about it.

10 years before that, cutting-edge AI was maybe digit recognition. I’m not sure.

Today, cutting-edge AI goes far beyond that. Just imagine where it might be in another 10 years. I think it’s frightening, considering how much AI slop we’re enduring today.

> We have no idea how it works

I’m so sick of seeing this bullshit.

You may not know how it works, and the AI industry probably wants you to think that no one knows how it works, but it’s just not true.

Generative pre-trained transformers are well understood, well documented, and there’s no shortage of resources freely available online to teach you how they work. Ditto for other advanced AI systems.

They are complex, sure, but they’re not inscrutable. Saying that no one knows how AI works is like saying no one knows how the weather works, which, again, is simply not true. Weather is complicated and its behavior is hard to predict because of the massive number of variables involved, but we know how it works at a fundamental level. It’s not magic, it’s not angels bowling or whatever.

AI is just software, and we know how it fucking works.
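For what it’s worth, the central operation of those transformers really does fit on a napkin. Here is a minimal NumPy sketch of scaled dot-product attention, with toy shapes and random values rather than anything from a real model:

```python
# Scaled dot-product attention, the core operation of a transformer block.
# Shapes and inputs here are toy values purely for illustration.
import numpy as np

def attention(queries, keys, values):
    """queries, keys, values: (sequence_length, head_dim) arrays."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)            # how strongly each position attends to each other
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over key positions
    return weights @ values                           # weighted mix of value vectors

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
print(attention(q, k, v).shape)  # (4, 8)
```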

We know how each individual part works. That’s just basic math.

We don’t know for sure how all trillion parts together produce the results they do. You can’t debug the model step by step to see how the prompt “generate image of a penguin” produces an image of a penguin and not a polar bear. That’s what people mean by “we don’t know how AI works”.
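As a sketch of what “each individual part is basic math” means, a single artificial neuron is just a weighted sum plus a nonlinearity (all numbers below are made up); the opacity comes from billions of these composed together, not from any one of them:

```python
# One "part" of a neural network: multiply inputs by weights, add a bias,
# apply a nonlinearity. Values are arbitrary, purely for illustration.
import numpy as np

def neuron(inputs, weights, bias):
    return max(0.0, float(np.dot(inputs, weights) + bias))  # ReLU activation

x = np.array([0.2, -1.3, 0.7])   # incoming activations
w = np.array([0.5, 0.1, -0.4])   # learned weights
print(neuron(x, w, bias=0.05))
```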

Okay, but who cares? “Complex systems are difficult to predict” is a mathematical insight that’s like 2 centuries old at this point… and it hasn’t hindered us at all from gaining deep insights both into how individual complex systems work and into how complex systems as a general class of phenomena work. I can’t keep track of all the masses and velocities of every individual air molecule in the room I’m sitting in, but I still know how the interactions of those particles give rise to the temperature and air pressure and general behavior of the atmosphere in the room.

People know how this shit works, and anyone telling you otherwise is either willfully ignorant or intentionally lying to you to feed a hype cycle with an end goal of making your life worse. People can’t afford to remain uneducated about this stuff anymore.

What’s interesting is how these complex models produce anything useful at all. We could very well have complex models that don’t produce anything other than random noise.

The reason “we” have these models is that they were deliberately trained not to output random noise. That part is well understood.

The only reason we don’t know exactly what makes the model output an image of Garfield with boobs is the amount of data to sift through, not because we don’t understand the processes.

Generalization is not a given. It’s possible to make complex models that perfectly memorize 100% of the training data but produce garbage results if the input diverges ever so slightly from the training set.

This generalization is a process that’s not fully understood. Earlier architectures struggled with this level of generalization, but transformers seem to handle it well.
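A hedged, self-contained illustration of that memorization-without-generalization failure (toy data, nothing to do with transformers specifically): a high-degree polynomial can hit its training points almost exactly and still fall apart just outside them.

```python
# Fit a degree-9 polynomial to 10 noisy training points: it can essentially
# memorize them, yet its predictions degrade badly just outside the training
# range. Toy example for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.05, x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=9)   # 10 coefficients for 10 points

train_error = np.abs(np.polyval(coeffs, x_train) - y_train).max()
x_outside = np.array([1.05, 1.10])             # slightly beyond the training range
outside_error = np.abs(np.polyval(coeffs, x_outside) - np.sin(2 * np.pi * x_outside)).max()

print(f"max error on training points: {train_error:.2e}")   # tiny
print(f"max error just outside them:  {outside_error:.2e}")  # much larger
```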

Not overfitting is hard, yes. But it’s not “we have no idea how/why this works”-hard.

That goes for Windows 11 too, and still we know how computers work.

Windows 11 is programmed by Microsoft engineers. I’m sure they have a good idea how it works. When you click a button, you get predictable results.

Neural networks are a different story. It’s difficult to predict what’s going to happen for a given prompt, and how adjustments to the weights affect the results.

There’s an article from last year where they found a “Golden Gate Bridge” feature (roughly, a direction in the activations) in Claude. Clamping it to be always active caused the model to constantly mention the Golden Gate Bridge in its responses. How and why this works is, AFAIK, not fully understood. For some reason the model managed to generalize the concept of the Golden Gate Bridge into a single feature.
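Anthropic’s actual setup isn’t publicly runnable, but the general trick of clamping a feature can be sketched: add a fixed direction to one layer’s activations via a forward hook. Everything below (the tiny model, the layer choice, the made-up “feature direction”) is a hypothetical stand-in, not Claude or Anthropic’s sparse-autoencoder features:

```python
# Hedged sketch of "clamping a feature": inject a fixed direction into one
# layer's hidden activations with a PyTorch forward hook. The model and the
# feature direction are toy stand-ins invented for this example.
import torch
import torch.nn as nn

torch.manual_seed(0)
hidden_dim = 16
model = nn.Sequential(nn.Linear(8, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 4))

feature_direction = torch.randn(hidden_dim)   # hypothetical "always on" feature

def clamp_feature(module, inputs, output):
    # Returning a tensor from a forward hook replaces that layer's output.
    return output + 5.0 * feature_direction

hook = model[1].register_forward_hook(clamp_feature)   # hook the hidden layer's output

x = torch.randn(3, 8)
print(model(x))   # outputs shifted by the injected feature
hook.remove()
print(model(x))   # original behavior restored
```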

What a cute thought!

No one knows how “everything” works in old monolithic software either. You just have to try and see what happens, and often you just don’t touch certain codebases because nobody really knows the ramifications if you change something in them. Windows 11 is probably way worse than any LLM. Try to share a simple folder on a simple home network and you’ll see some of the cruft.

Source: have worked on 30-to-40-year-old monolithic software. In not one of those projects was there a single “engineer” who knew it all.

Neural networks have their fuzzy parts of course, but software became not fully understandable a long time ago. IMO.

Of course, no single person fully understands the entirety of Windows. But I hope the people working on Windows understand at least a part of it.

The thing with LLMs is that no one really understands the purpose of one single neuron, how it relates to all the other neurons, and how they together seem to be able to generalize high-level concepts like the Golden Gate Bridge. It’s just too much to map out.

We do know how a single “neuron” relates to other neurons; it’s in the model. But what gets complicated is the vast number of them, of course.

So yes, we don’t intrinsically get to understand it all, but I think we can understand what it does, a bit like Windows 😁/j.

Fascinating subject, and we’re only scratching the surface IMO.