LLMs will not get us to AGI because you can’t prompt engineer and autocomplete your way to Data from Star Trek or C-3PO.

This is fairly self-evident but leads to multiple paths of false thinking.

1. LLMs are a scam or overhyped: We can get, and are getting, a lot of value from this technology even if it’s not artificial general intelligence.

2. We can’t achieve AGI: LLMs are just one of many AI approaches and simply the most popular today. OpenAI already has smarter models using different approaches.

@carnage4life

  • What if this supposed value is just more hype? (it is).

  • It doesn't matter what they do, it will be useless, because the way we are going we won't have arable land in 2070.

  • @carnage4life "we can get value from LLMs" and "LLMs are overhyped" are not mutually exclusive.

    @j_bertolotti @carnage4life The main value so far is as wage pressure against human workers.

    For sure, this creates a lot of value for shareholders, but it is not actually valuable.

    @androcat
    We can argue whether they are good value for money (I think not, but I agree there is a case for the opposite argument), but LLMs are quite useful for a lot of low-stakes tasks, like making a first draft of a repetitive document or helping rephrase a paragraph if you are stuck with it.
    My main problem with LLMs is that they are not very good for more complex tasks and they are advertised as being able to do MUCH more than they will ever be able to (due to how they are structured internally).
    @carnage4life

    @j_bertolotti @androcat @carnage4life

    I agree. It's fine to make a deontological or consequentialist argument against LLMs more broadly. They are littered with ethical issues. But I get daily value from them in my work. To pretend they don't facilitate a particular subset of tasks is silly.

    @Sqlgene

    There are better tools for those tasks and they have been around for decades.

    Because a tool that was made for that task will always outperform an ad hoc application of stolen snippets, provided without understanding or intelligence.

    Just go with a dumb script.

    @j_bertolotti @carnage4life

    @androcat @j_bertolotti @carnage4life

    Simple example: I want to speak at SQLBits. They tend to have somewhat verbose speaker instructions that change from year to year. https://sqlbits.com/speak/

    I pasted the text and asked for a summary. ChatGPT identified that 20-minute sessions are now limited, which changes my strategy for what I will submit.
    https://chatgpt.com/share/6772a836-c294-8012-a806-bf232df0beff

    I looked it over myself to verify. I still intend to read it manually, but this reduced the risk of human error on my part.

    What decades-old tool would you recommend to accomplish this task?

    @Sqlgene @j_bertolotti @carnage4life

    Just read the damn thing.

    You're describing a situation where the only "benefit" would be if you go with the untrustworthy summary.

    You still have to read it to actually know what it says.

    And now, rather than reducing your risk of error, it has infected you with the risk of confirmation bias.

    Utterly worthless.

    @androcat @j_bertolotti @carnage4life 🤷‍♂️. I think LLMs are best treated as an unreliable drunken intern. There are a number of tasks I'd happily hand off to such an intern for a first pass, understanding that they could very easily be mistaken or fabricating.

    I think we fundamentally disagree at a deep philosophical level and I don't see any value for either of us in engaging further. I think your viewpoint is reasonable and understandable; I just don't see us reaching any common ground.

    @Sqlgene

    Be serious. If an intern showed up drunk, you'd throw them out.

    Don't treat a world-destroying tech toy better than you would a human.

    @j_bertolotti @carnage4life

    @androcat @j_bertolotti @carnage4life I will be muting you now. Best of luck to your discourse.

    @androcat
    You are being overly aggressive with people who agree with you 90% of the way. That is seldom a wise strategy.

    @Sqlgene @carnage4life

    @j_bertolotti

    You're probably right, but I am distressed.

    These are fucked up times, and this particular hype-wave is just making things worse all over.

    @Sqlgene @carnage4life

    @j_bertolotti

    Those tasks are not hard to do just because people don't like doing them.

    Doing those tasks for people provides no value, and instead impoverishes the people so "convenienced".

    But seriously, spending a small city's worth of power to do something that you could just use a template for? In what world is that "valuable"?

    It isn't.

    The whole industry is rotten.

    @carnage4life

    @carnage4life
    OpenAI are so deeply confused about the difference between reasoning & answers that I'm fundamentally skeptical of their ability to produce any tech that could lead to AGI. Meanwhile, approaches that conflate stochastic parrotry with thought are sucking up funding that would be required to get to AGI.

    @carnage4life 1. LLMs *are* overhyped and *are* a scam. They are not economically or environmentally viable, *and* the overfitting from being large makes them *worse* than smaller models for things that might actually be viable applications.

    2. OpenAI is lying. They do not have any smarter models. They are not trying. All their people are high on their own supply and the only point is extracting as much value as possible from everything they ruin along the way.

    @carnage4life Can we achieve AGI? Absolutely. The existence of human intelligence, together with the ability of computers to simulate physical systems, is an (inefficient but constructive) existence proof.

    Is any of the research headed in this direction? Absolutely not. Because capitalism ensures all the money goes to whatever is most attractive for scamming people. Artificial beings that would be entitled to rights are completely unattractive to capitalism. And it's not happening without giant funding.

    @carnage4life Does "intelligence" of any sort (which would include AGI) exist without the concept of a being/agent behind it? No. The ability to reason to *meet its own needs* is core to any meaningful definition of intelligence.
    @carnage4life my belief that LLMs are an overhyped scam is not due to knowing they won't lead to AGI but to many other, more important reasons. The dubious value being provided at massive economic, environmental, and social cost is not worth it in our eyes, at least when talking specifically about LLMs and not other ML uses.

    @carnage4life In 1952 Stanley Miller and Harold Urey conducted an experiment in which they simulated the primitive atmosphere and electrical storms in an isolated capsule, causing the spontaneous generation of amino acids. This proved that it was possible for inorganic matter to produce organic materials spontaneously.

    In the years after, chemists predicted they would be able to synthesize DNA, livers for transplant and even artificial life. None of that happened.

    This is where LLMs are now.

    @carnage4life yes, the limitations of LLMs do not prove that AGI is impossible. But the burden of proof goes the other way around. LLMs are being hyped as an argument that an AGI breakthrough is just around the corner. THIS is what is driving the hype. Nobody wants to pay millions for a 40% medical test score; the unwritten expectation is that owners of this tech will soon get 90%-scoring new models that let them fire the human physicians.

    Like the Miller-Urey livers, there will be no such thing.

    @carnage4life IMHO, LLMs are definitely overhyped. That doesn't mean they can't be useful (which they are, as long as people maintain critical thinking about how they use them and what they get from them). Both things can be true.

    @carnage4life

    But, have we achieved "API": Artificial Partial Intelligence?

    "Partial" as in: does well in some niche silo.