"Bitcoin's not going to die; it's not like the dotcom bubble. The blockchain is a real new technology with endless applications. This is nothing like the hype over having webpages ..."
During the dotcom bubble you had all these people who invested in anything with the right buzzword, "dot com." They didn't really understand the tech, and it was easy to fool them. But this is totally different.
@futurebird @gotofritz @pikesley @wakame The hype around _generative_ AI seems to hide all other uses of AI. For instance, summarizing a web search seems pretty useful to me; it's a great starting point. This doesn't seem different from the dotcom and other bubbles: some things will fade, others will stay for sure.
Also, we tend to forget that most lines of code were already garbage even before AI. High-profile, successful projects eclipsed that.
@march38 @futurebird @gotofritz @pikesley
The problem with summarizing using LLMs: it doesn't work reliably. The same goes for every other task LLMs are used for.
Proper summarizing tools would be cool though. Sounds like an open research area to me.
As for code: LLMs have introduced whole new flavors of bugs. Even really horrible human-written code has some kind of rationale behind it.
LLM code is more like joining a programming-related chat group and then copy/pasting the last day of messages into an IDE.
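To make the "new flavors of bugs" point concrete, here is a hypothetical illustration (not from the thread): a classic Python pitfall that reads as perfectly clean code and easily survives a casual review, next to the fixed version.

```python
def append_log_buggy(msg, log=[]):
    # BUG: the default list is created once at function definition time,
    # so every call without an explicit `log` shares the same list.
    log.append(msg)
    return log

def append_log_fixed(msg, log=None):
    # Fixed: create a fresh list per call when none is supplied.
    if log is None:
        log = []
    log.append(msg)
    return log

# The buggy version silently accumulates state across calls:
# append_log_buggy("a") -> ["a"], then append_log_buggy("b") -> ["a", "b"]
# The fixed version starts fresh each time:
# append_log_fixed("a") -> ["a"], then append_log_fixed("b") -> ["b"]
```

The buggy version looks idiomatic at a glance, which is exactly what makes this class of bug hard to catch in review.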
There is also the very interesting perspective that LLMs learn the best ways to hide bugs, because they are trained on code that has been written, reviewed, maintained, etc.
So when reenacting what they learned, they have a tendency to introduce exactly the kinds of bugs that slip past review into new code.
(Or at least they will, as soon as LLMs can generate high-quality code.)