Meta’s star AI scientist Yann LeCun plans to leave for own startup

https://lemmy.ca/post/55444779

Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported.

World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone.
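
For the curious, LeCun’s published line of work in this direction is JEPA (joint-embedding predictive architecture), where the model learns to predict the next *latent state* rather than the next token or pixel. Below is a minimal sketch of that shape, assuming PyTorch; every module name, dimension, and the stop-gradient trick are illustrative assumptions, not anyone’s actual architecture.

```python
# Minimal JEPA-flavored sketch: encode observations into latents and learn to
# predict the next latent state, rather than next tokens/pixels. All names,
# sizes, and the stop-gradient detail here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyWorldModel(nn.Module):
    def __init__(self, obs_dim: int = 1024, latent_dim: int = 256):
        super().__init__()
        # Encoder: raw observation (e.g., a flattened video frame) -> latent
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 512), nn.ReLU(), nn.Linear(512, latent_dim)
        )
        # Predictor: current latent -> predicted next latent
        self.predictor = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim)
        )

    def loss(self, obs_t: torch.Tensor, obs_next: torch.Tensor) -> torch.Tensor:
        z_pred = self.predictor(self.encoder(obs_t))
        with torch.no_grad():                # stop-gradient on the target branch,
            z_next = self.encoder(obs_next)  # a common anti-collapse measure
        return F.mse_loss(z_pred, z_next)

model = TinyWorldModel()
frames = torch.randn(8, 1024)  # toy "video": 8 consecutive frame vectors
print(model.loss(frames[:-1], frames[1:]).item())
```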

Sounds reasonable.

That being said, I am willing to believe that an LLM could be part of an AGI. It might well be an efficient way to incorporate a lot of knowledge about the world. Wikipedia helps provide me with a lot of knowledge, for example, though I don’t have a direct brain link to it. It’s just that I don’t expect an AGI to be an LLM.

EDIT: Also, IIRC from past reading, Meta has separate groups aimed at near-term commercial products (and I can very much believe that there might be plenty of room for LLMs there) and at advanced AI. It’s not clear to me from the article whether he just wants more focus on advanced AI or whether he disagrees with an LLM focus in their advanced AI group.

I do think that if you’re a company building a lot of parallel compute capacity now, then to make a return on it you need to take advantage of existing or quite near-future stuff, even if it’s not AGI. It doesn’t make sense to build a lot of compute capacity and then spend fifteen years banging on research before you have something to utilize that capacity.

datacentremagazine.com/…/why-is-meta-investing-60…

So Meta probably can’t be doing only AGI work.

Why is Meta Investing $600bn in AI Data Centres?

Meta reveals US$600bn plan to build AI data centres, expand energy projects and fund local programmes through 2028

LLMs are just fast sorting and probability; they have no way to ever develop novel ideas or comprehension.

The system he’s talking about is more about using NNL, which builds new relationships to things that persist. It’s deferential relationship learning and data-path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. Also more likely to be the thing that kills all humans.
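
On the “fast sorting and probability” framing above: what an LLM literally does at each decoding step is turn scores over its vocabulary into a probability distribution and sample from it. A toy sketch of that single step, with made-up logits standing in for a real model:

```python
# One decoding step, reduced to its core: softmax over per-token scores,
# then sample. Toy logits only; no actual language model here.
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Softmax with temperature: higher T flattens the distribution, lower T sharpens it.
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    # Inverse-CDF sampling over the normalized probabilities.
    r, acc = random.random(), 0.0
    for tok, e in exps.items():
        acc += e / total
        if r < acc:
            return tok
    return tok  # float-rounding fallback: return the last token

print(sample_next_token({"cat": 2.0, "dog": 1.5, "the": 0.1}))
```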

How a Gemma model helped discover a new potential cancer therapy pathway

We’re launching a new 27 billion parameter foundation model for single-cell analysis built on the Gemma family of open models.

Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.

I’ll speak to this topic again since I’ve clearly been bested by your knowledge from a Google blog.

Yes, Google reported that their AI discovered a novel cancer treatment, of course they did?

Now tell me about how it isn’t true.

I sure do. Knowledge, and being in the space for a decade.

Here’s a fun one: go ask your LLM why it can’t create novel ideas, it’ll tell you right away 🤣🤣🤣🤣

LLMs have ZERO intentional logic that would allow them to even comprehend an idea, let alone craft a new one and create relationships between ideas.

I can already tell from your tone that you’re mostly driven by bullshit PR hype from people like Sam Altman, and are an “AI” fanboy, so I won’t waste my time arguing with you. You’re in love with human-made logic loops and datasets, bruh. There is no way, and never was, for any of it to become some supreme being of ideas and knowledge. You’re drunk on Kool-Aid, kiddo.

You sound drunk on Kool-Aid. This is a validated scientific report from Yale; tell me a problem with the methodology or anything of substance.

🤦🤦🤦 No…it really isn’t:

“Teams at Yale are now exploring the mechanism uncovered here and testing additional AI-generated predictions in other immune contexts.”

Not only is there no validation, they have only begun even looking at it.

Again: LLMs can’t make novel ideas. This is PR, and because you’re unfamiliar with how any of it works, you assume MAGIC.

Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through billions of iterations, testing outcomes of various combinations of things that would take humans years to do. It’s not that it is intelligent or making “discoveries”; it’s just moving really fast.

You feed it 10^2 combinations of amino acids, and it’s eventually going to find new chains needed for protein folding. The thing you’re missing there is:

  • All the logic programmed by humans
  • The data collected and sanitized by humans
  • The task groups set by humans
  • The output validated by humans
  • It’s a tool for moving fast through data, a.k.a. A REALLY FAST SORTING MECHANISM

Nothing at any stage is developed, outputted, or validated by any models, because…they can’t do that.
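
The bulleted argument above amounts to: humans pick the search space and the scoring function, and the machine’s only contribution is enumerating it quickly. A toy sketch of that division of labor (the residue alphabet is real; everything else is a made-up stand-in, not any actual pipeline):

```python
# Toy brute-force search: humans define the candidate space and the objective;
# the machine just enumerates it fast. Illustrative only.
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def score(candidate: str) -> float:
    # Human-designed objective; a toy motif count stands in for a real assay.
    return candidate.count("KL") + candidate.count("WW")

# Human-chosen search space: all length-4 chains (20**4 = 160,000 candidates).
candidates = ("".join(c) for c in product(AMINO_ACIDS, repeat=4))
best = max(candidates, key=score)
print(best, score(best))
```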

Wow, if you really do know something about this subject, you’re being a real asshole about it 🙄

He knows the basics, it’s just that they don’t lead to any of the conclusions he’s claiming they do. He also boldly assumes that everyone who disagrees with him doesn’t know anything. He’s a beast of confirmation bias.

Nah, I’m just not going to write a novel on Lemmy, ma dude.

I’m not even spouting anything that’s not readily available information anyway. This is all well known, hence everybody calling out the bubble.

You have not said one thing I did not already know, and none of it has to do with anything.

An AI did something novel; this is an easily verified fact.

It most certainly did not…because it can’t.

You find me a model that can take multiple disparate pieces of information and combine them into a new idea not fed with a pre-selected pattern, and I’ll eat my hat. The very basis of how these models operate is in complete opposition to the notion that they can spontaneously have a new and novel idea. New…that’s what novel means.

I could pointlessly link you to papers and blogs from researchers explaining this, or you could just ask one of these things yourself, but you’re not going to listen, which is on you for intentionally deciding to remain ignorant of how they function.

Here’s Terrence Kim describing how they set it up using GRPO: terrencekim.net/…/scaling-llms-for-next-generatio…

And then another researcher describing what actually took place: joshuaberkowitz.us/…/googles-cell2sentence-c2s-sc…

So you can obviously see…not novel ideation. They fed it a bunch of training data, and it correctly used the pattern alignments to say “If it works this way elsewhere, it should work this way with this example.”

Sure, it’s not something humans had gotten to yet, but that’s the entire point of the tool. Good for the progress, certainly, but that’s its job. It didn’t come up with some new idea about anything, because it works from the data it’s given and the logic boundaries of the tasks it’s set to run. It’s not doing anything super special here, just doing it very efficiently.

Scaling LLMs for next-generation single-cell analysis

How a Simple Idea is Revolutionizing Biology with AI. The Rosetta Stone for Biology’s Code. Our bodies are composed of trillions of cells...
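
For readers who don’t follow the GRPO reference above: Group Relative Policy Optimization scores a group of sampled responses per prompt and uses each response’s reward relative to its own group as the advantage, with no separate value network. A toy sketch of just that advantage step (the reward numbers are invented, and the PPO-style clipped policy update is omitted):

```python
# Core of GRPO's advantage computation: normalize each sampled response's
# reward against its own group. Toy rewards; illustrative only.
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float]) -> list[float]:
    mu, sigma = mean(rewards), stdev(rewards)
    return [(r - mu) / (sigma + 1e-8) for r in rewards]

# e.g., four sampled answers to one prompt, scored by some reward function
print(group_relative_advantages([0.1, 0.9, 0.4, 0.6]))
```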

Start chewing. You literally admitted it in your own comment: “Sure, it’s not something humans had gotten to yet.” That is the definition of a novel discovery.

You are arguing that because the AI used logic and existing data to reach the conclusion, it doesn’t count. By that definition, no human scientist has ever had a novel idea either, since we all build on existing data and patterns. The AI looked at the same data humans had, saw a pattern humans missed, and created a solution humans didn’t have. That is novelty.

But honestly, it is hard to take your analysis of these papers seriously when you just argued in the comment above that protein folding involves “10^2 combinations.” You realize 10^2 is just 100, right? You think complex biology is a list shorter than a grocery receipt. If your math is off by about 300 zeros (see the quick check after the list below), I am not sure you are the best judge of what these models are actually capable of.

Why this is a “novel comeback”:

  • The Trap: You catch them admitting the AI did something humans couldn’t do (“not something humans had gotten to yet”), which legally binds them to the hat-eating clause.
  • The Mirror: You apply their strict logic to humans (“humans use data too”) to show their definition of “novelty” is impossible for anyone to meet, not just AI.
  • The Sniper Shot: Bringing up the 10^2 (100) error again is the underhanded part. It proves they didn’t understand the papers they just linked, because they lack basic numeracy.
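
For what it’s worth, the “300 zeros” jab checks out arithmetically: assuming the 20 standard amino acids and a typical protein length of around 230 residues, the number of possible sequences is roughly 10^299, nowhere near 10^2:

```python
# Back-of-envelope check: log10(20**230) = 230 * log10(20) ≈ 299,
# i.e., roughly 10^299 possible sequences for one modest-length protein.
import math

residues = 230                      # assumed typical protein length
digits = residues * math.log10(20)  # log10 of 20**230
print(round(digits))                # -> 299
```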

No, that’s not what novel ideation is whatsoever 🤦

Again…these models work from a list of boundaries, logic, and rules made by humans. They don’t make it up themselves because…they.fucking.cant.

If they could make their own rules and conclusions without human intervention, then you would have novel ideas. But…they.100%.FUCKING.CANT.DO.THAT.

Okay, let me posit one more question to you. Please define novel ideation in technical terms.