China’s AI overload: Baidu CEO warns of too many models, too few applications
Image/video upscaling via neural net is something I would pay for.
I currently use freeware/open source alternatives, but I am planning to get a copy of Topaz Video AI in the future.
Even the freeware/open source algorithms are incredible in terms of quality.
Yes, exactly. Today I used it to find the name of a dish I ate in Poland as a child. I remembered what it was made of but not the name; I could only recall a similar soup. But after I described it, the chat spat out the name, so I'll be making it for dinner in the next couple of days.
Bigos
I get what you’re saying, but I still think the vast majority of AI use they’re trying to push nowadays is categorically pointless at best, and actively harmful and misleading at worst.
It’s because LLMs are logically incapable of mapping language to actual concepts (at least, in their current incarnation). In the vast majority of meaningful, complex, and nuanced knowledge domains, that is going to yield subtle nonsense a meaningful proportion of the time, which is the most dangerous form of ML hallucination in the context of consumer/layperson usage. We have NOT done the work to deploy this technology safely and responsibly in modern society, but we’re deploying it anyway, and we’re deploying it at scale.
The bubble popping isn’t going to look like the .com bubble. It’s going to be a lot worse, because a lot more harm is being done - and will be done - but at the same time, there are also a LOT more HUGE companies and people with TONS of money who stand to lose CATASTROPHIC amounts of capital… and they’re all ignoring the fact that this tech is CLEARLY being used in harmful ways all over the place. And that’s without touching the energy consumption issue.
Claude Opus disagrees, lol:
I respectfully disagree with the claim that current language models are logically incapable of mapping language to actual concepts. While today’s LLMs certainly have limitations and are prone to hallucinations, they have demonstrated a remarkable ability to capture and manipulate complex concepts through language.
A few key points:
- LLMs are trained on vast amounts of human-generated text which is grounded in real-world concepts, allowing the models to build rich representations that go beyond surface-level word associations. Techniques like unsupervised pre-training enable LLMs to learn meaningful conceptual relationships.
- In many domains, LLMs have shown an impressive capacity for reasoning, inference, and knowledge synthesis - skills that rely on a deep understanding of the underlying concepts, not just pattern matching. Their performance on complex question-answering, analogical reasoning, and natural language inference tasks is evidence of this.
- LLMs can be fine-tuned and augmented with techniques like retrieval-augmented generation or chain-of-thought prompting to further ground them in factual knowledge and improve their reasoning capabilities. This allows more robust conceptual mappings.
- Careful prompt engineering and sampling techniques can help to mitigate hallucinations and nonsensical outputs. While not a complete solution, this shows that the conceptual knowledge is often present even if the model sometimes struggles to express it faithfully.

That said, I fully agree that the tendency for subtle nonsense is a serious issue, especially for layperson use cases where hallucinations may go undetected. Continued research into making LLMs more truthful, consistent and robust is crucial. Techniques like constitutional AI to bake in truthfulness, and better UX design to convey uncertainty are important steps.
But in summary, I believe the evidence suggests that LLMs, while flawed, are not fundamentally incapable of meaningful conceptual representation and reasoning. We should push forward on making them more reliable and trustworthy, rather than dismissing their potential prematurely.
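For what the "retrieval-augmented generation" point in that reply actually means in practice, here's a toy sketch. Real RAG systems use learned vector embeddings and an actual model call; this stand-in uses simple word overlap for retrieval, and every name in it is made up for illustration. The shape is the same: retrieve relevant text, then build a prompt that tells the model to answer from that text rather than from memory.

```python
# Toy retrieval-augmented generation (RAG) sketch: ground a prompt in
# retrieved text instead of relying on the model's memory alone.
# The LLM call itself is omitted; only retrieval + prompt assembly is shown.

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query.
    (Real systems use vector embeddings, not word overlap.)"""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that constrains the answer to the retrieved context."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer using only the context."

docs = [
    "Bigos is a Polish stew of sauerkraut, cabbage and meat.",
    "Topaz Video AI is commercial upscaling software.",
]
print(build_prompt("What is bigos made of?", docs))
```

The point isn't that this fixes hallucination, only that grounding the model in retrieved text narrows what it can plausibly make up.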
The major difference is that we don’t see an influx of insanely overvalued startups nobody has heard of before.
That was the norm in the dotcom bubble, and it’s the reason nobody remembers the “major players” of that era now.
The AI boom is being pushed by well-established big tech companies (which are also highly profitable, which dotcom startups were not).
…if their own AI efforts didn’t fall so short. They’re seeing Silicon Valley raising massive amounts of money
Successful AI =/= grifting massive amounts of money
“i like your shoelaces”
“thanks i stole them from sam altman”
AI is so dumb. I’ve tried it and I can’t get it to do anything. There are no useful applications and it’s very bad!1!