OpenAI, Google, Anthropic admit they can’t scale up their chatbots any further
It’s absurd that some of the larger LLMs now use hundreds of billions of parameters (e.g. llama3.1 with 405B).
This doesn’t really seem like a smart use of resources if you need several of the largest GPUs available just to run one conversation.
It is conceptually the same thing. A series of interconnected neurons with a firing threshold and weighted connections.
The simplification comes with how the information is transmitted.
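At its simplest, each artificial “neuron” is just a weighted sum pushed through a threshold. A toy sketch, with the weights and inputs made up purely for illustration:

```python
# Minimal artificial "neuron": weighted inputs, a bias, and a firing threshold.
# The weights and inputs below are invented for illustration only.
def neuron(inputs, weights, bias, threshold=0.0):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > threshold else 0  # "fires" or stays silent

# With these hand-picked weights it behaves like a crude AND gate.
print(neuron([1, 1], [0.6, 0.6], bias=-1.0))  # 1 (fires)
print(neuron([1, 0], [0.6, 0.6], bias=-1.0))  # 0 (doesn't fire)
```

Real networks swap the hard threshold for a smooth activation function so gradients can flow, but the weighted-connections idea is the same.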
Many functions in the human body rely on quantum mechanical effects to function correctly. So to simulate it properly, each connection really needs to be its own supercomputer.
But it has been shown to encode information in a similar way. The learning is the part where it’s not even close.
It is conceptually the same thing. […] The learning is the part where it’s not even close.
Well… isn’t the “learning part” precisely the point? I don’t think anybody is excited about brains as “just” a computational device, rather the primary function of a brain is … learning.
No, we are nowhere close to learning as the human brain does. We don’t even really understand how it does it at all.
The point is to encode solutions to problems that we can’t solve with standard programming techniques. Like vision, speech recognition and generation.
These problems are easy for humans and very difficult for computers. The same way maths is super easy for computers compared to humans.
By applying techniques our neurones use, computer vision and speech have come on in leaps and bounds.
We are decades from getting anything close to a computer brain.
No, we are nowhere close to learning as the human brain does. We don’t even really understand how it does it at all.
Sorry then if I sound like a broken record, but again, doesn’t that mean that the analogy itself is flawed? If the goal remains the same but there is close to no explanatory power, then even if we do get pragmatically useful results (i.e. it “works” in some useful cases), it’s basically “just” inspiration, which is nice but is basically branding more than anything else.
It’s a lot. Like a lot a lot. GPUs have about 150 billion transistors, but each of those transistors only makes one connection, in what is essentially a 2D layout printed on silicon.
Each neuron makes thousands of connections, and there are on the order of 100 billion neurons in a blobby lump of fat and neurons that takes up 3D space. Then factor in that everything actually functions through patterns of multiple neurons firing, and you get an absurdly high ceiling for how powerful human brains can be.
At this point, I’m not sure there are enough GPUs in the world to mimic what a human brain can do.
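To put rough numbers on that (ballpark figures only; the per-neuron synapse count here is a conservative guess, and estimates run higher):

```python
# Back-of-the-envelope comparison; all figures are rough ballpark numbers.
gpu_transistors = 150e9          # transistors on a large modern GPU
neurons = 86e9                   # commonly cited human brain estimate
synapses_per_neuron = 1_000      # conservative; often quoted in the thousands

brain_connections = neurons * synapses_per_neuron
print(f"{brain_connections:.1e} synapses vs {gpu_transistors:.1e} transistors")
# ~8.6e13 synapses vs 1.5e11 transistors: several hundred times more
# connections, and each synapse does more than a single transistor does.
```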
That’s also just the electrical portion of our mind. There are whole levels of chemical, and chemical potentials at work. Neurones will fire differently depending on the chemical soup around them. Most of our moods are chemically based. E.g. adrenaline and testosterone making us more aggressive.
Our mind also extends out of our heads. Organ transplant recipients have noted personality changes, food preferences being the most prevalent.
The neurons only deal with ‘fast’ thinking. ‘Slow’ thinking is far more complex and distributed.
Larger models train faster (need less data), for reasons not fully understood. These large models can then be used as teachers to train smaller models more efficiently. I’ve used Qwen 14B (14 billion parameters, quantized to 6-bit integers), and it’s not too much worse than these very large models.
Lately, I’ve been thinking of LLMs as lossy text/idea compression with content-addressable memory. And 10.5GB is pretty good compression for all the “knowledge” they seem to retain.
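That 10.5GB figure is just the parameter count times the bit width, give or take some overhead:

```python
# Rough size of a quantized model, ignoring embeddings and other overhead.
params = 14e9        # Qwen 14B
bits_per_param = 6   # 6-bit integer quantization
size_gb = params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB
print(f"{size_gb:.1f} GB")  # 10.5 GB
```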
I don’t think Qwen was trained with distillation, was it?
It would be awesome if it was.
Also you should try Supernova Medius, which is Qwen 14B with some “distillation” from some other models.
Hmm. I just assumed 14B was distilled from 72B, because that’s what I thought llama was doing, and that would just make sense. On further research it’s not clear if llama did the traditional teacher method or just trained the smaller models on synthetic data generated from a large model. I suppose training smaller models on a larger amount of data generated by larger models is similar though. It does seem like Qwen was also trained on synthetic data, because it sometimes thinks it’s Claude, lol.
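For what it’s worth, the difference matters: classic distillation trains the student to match the teacher’s full output distribution, while the synthetic-data route just treats the teacher’s sampled text as ordinary training data. A minimal sketch of the former, using toy tensors instead of real models:

```python
import torch
import torch.nn.functional as F

# Classic knowledge distillation: the student learns to match the teacher's
# softened output distribution, not just hard labels or sampled text.
# The random tensors below stand in for real model logits.
temperature = 2.0
teacher_logits = torch.randn(8, 32000)                      # teacher's outputs
student_logits = torch.randn(8, 32000, requires_grad=True)  # student's outputs

soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
log_student = F.log_softmax(student_logits / temperature, dim=-1)

# KL divergence between the two distributions, scaled by T^2 as is standard.
loss = F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature**2
loss.backward()  # gradients flow only into the student
```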
Thanks for the tip on Medius. Just tried it out, and it does seem better than Qwen 14B.
Can’t be, I haven’t fucked one yet, and everyone knows Cylonism is an STD.
Unless I’m an Eskimo brother and don’t know it…
Though, I don’t think that means they won’t get any better. It just means they don’t scale by feeding in more training data.
Agreed. There’s plenty of improvement to be had, but the gravy train of “more CPU or more data == better results” sounds like it’s ending.
It’s a known problem - though of course, because these companies are trying to push AI into everything and oversell it to build hype and please investors, they usually try to avoid recognizing its limitations.
Frankly I think that now they should focus on making these models smaller and more efficient instead of just throwing more compute at the wall, and actually train them to completion so they’ll generalize properly and be more useful.
OpenAI, Google, Anthropic admit they can’t scale up their chatbots any further
Lol, no they didn’t. The quotes this article is using are talking about LLMs, not chatbots. This is yet another stupid article from someone who doesn’t understand the technology. There is a lot of legitimate criticism of the way this technology is being implemented, but FFS get the basics right at least.
I think you’re agreeing, just in a rude and condescending way.
There are a lot of ways left to improve, but they’re not as simple as just throwing more data and CPU at the problem anymore.
So is your autism diagnosed or undiagnosed?
I ask this as an autistic person, because the only charitable way to read what’s happening here is that you’re clearly struggling with statements that aren’t intended to be read completely literally.
The only other way to read it is that you’re arguing in bad faith, but I’ll assume that’s not the case.
Also an autistic person here.
How are people supposed to tell this is an opinion?
And please don’t say “by reading the article”; maybe some (like me) do so, but it’s well known that most people stop at the title.
Grammatically speaking, it remains a direct statement. “They admit” == “appear to hint” == pure opinion (title: “AI can’t be scaled further”).
While I am not disagreeing with the premise per se, I have to perceive this as anti-AI propaganda at best, an attempt at misinformation at worst.
On a different note, do you believe things can only be an issue if neurotypical people struggle with them? There is no good argument for not communicating more clearly in the context of sharing opinions with the world.
David and Amy are - openly - skeptics in the subject matters they write about. But it’s important to understand that being a skeptic is not inherently the same thing as being unfairly biased against something.
They cite their sources. They back up what they have to say. But they refuse to be charitable about how they approach their subjects, because it is their position that those subjects have not acted in a way that is deserving of charity.
This is a problem with a lot of mainstream journalism. A grocery store CEO will say “It’s not our fault, we have to raise prices,” and mainstream news outlets will repeat this statement uncritically, with no interrogation, because they are so desperate to avoid any appearance of bias. Donald Trump will say “Immigrants are eating dogs” and news outlets will simply repeat this claim as something he said, without adding “This claim is obviously insane and only an idiot would have made it.” Sometimes being overly fair to your subject is being unfair to objective truth.
Of course OpenAI et al are never going to openly admit that they can’t substantially improve their models any further. They are professional bullshitters; they didn’t suddenly come down with a case of honesty now. But their recent statements, when read with both a critical eye and an understanding of the limitations of the technology, amount to a tacit admission that all the significant gains have already been made with this particular approach. That’s the claim being made in this headline.
I see a lot of links here and there to this domain but I haven’t really read anything from there. I’m literally just scrolling through these comments to see if anyone has a comment like yours.
My impression was that it’s just a blog but you calling it “a reddit post” is also interesting. What’s with this site? It looks like a decent amount of people think these takes are interesting. I have to deal with a lot of management people who love AI buzzwords, so a whole blog just ripping into it really speaks to me.
I believe that the current LLM paradigm is a technological dead end. We might see a few additional applications popping up in the near future, but they’ll be only a tiny fraction of what was promised.
My bet is that they’ll get superseded by models with hard-coded logic. Just enough to be able to correctly output “if X and Y are true/false, then Z is false”, without fine-tuning or other band-aid solutions.
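Something like a symbolic consistency check sitting on top of the model’s output. A toy sketch of the idea; the rule and the “claims” here are invented for illustration, not any real system:

```python
# Toy "hard-coded logic" guard layered over a model's asserted claims.
# Rule: if X and Y are both true, then Z must be false.
def consistent(claims: dict) -> bool:
    if claims["X"] and claims["Y"] and claims["Z"]:
        return False  # X and Y being true forces Z to be false
    return True

model_claims = {"X": True, "Y": True, "Z": True}  # pretend the LLM said this
if not consistent(model_claims):
    print("Rejecting output: 'if X and Y, then not Z' was violated.")
```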
We’ve seen this pattern play out in video games a bunch of times.
Revolutionary new way to do things. It’s cool, but not… you know… fun.
So we give up on it as a dead end and go back to the old ways for a while.
Then somebody figures out how to put bumpers (usually hard-coded) on the revolutionary new way, such that it stays fun.
Now the revolutionary new way is the new gold standard and default approach.
For other industries, replace “fun” above with the correct goal for that industry. “Profitable” is one that the AI hucksters are being careful not to say… but “honest”, “correct” and “safe” also come to mind.
We are right before the bit where we all decide it was a bad idea.
Which comes before we figure out that hard-coding the bumpers can get us where we wanted to go, after a lot of work by really smart, well-paid humans.
I’ve seen industries skip the “all decide it was a bad idea” phase and go straight to the “hard work by humans to make this fulfill the available promise” phase, but we don’t actually look on track for that today.
A lot of current investors are convinced that their clever talking puppet is going to do the hard work of engineering the next generation of talking puppet.
I have some faith that we can reach that milestone. I’m familiar enough with the current generation of talking puppet to confidently declare that this won’t be the time it happens.
My incentive in sharing all this is that I like over half of you reading this, and so I figure I can give some of you a shot at not falling for this particular “investment phase”, which is essentially, in practical terms, a con.
Sure, except for the thousands of products working pretty well with the current generation. And it’s not like it’s over now that we’ve hit the limit of “just throw more data at the thing”.
Now there aren’t gonna be as many breakthroughs that make it better every few months; instead there are gonna be a thousand small improvements that make it more capable slowly and steadily. AI is here to stay.