@gotofritz @pikesley @wakame

"bitcoin's not going to die, it's not like the dotcom bubble. The blockchain is a real new technology with endless applications, this is nothing like the hype over having webpages ..."

@gotofritz @pikesley @wakame

During the dotcom bubble you had all these people who just invested in anything with the right buzzword, "dot com." They didn't really understand the tech, and it was easy to fool them. But this is totally different.

@futurebird @gotofritz @pikesley @wakame The hype around _generative_ AI seems to hide all other AI uses. For instance, summarizing a web search seems pretty useful to me; it's a great starting point. It does not seem different from the dotcom and other bubbles: some things will fade, others will stay for sure.

Also, we tend to ignore that most lines of code were already garbage even before AI. High profile and successful projects eclipsed that.

@march38 @futurebird @gotofritz @pikesley

Problem with summarizing using LLMs: It doesn't work reliably. As with all other tasks where LLMs are used.

Proper summarizing tools would be cool though. Sounds like an open research area to me.

As for code: LLMs have introduced whole new flavors of bugs. Even really horrible human-written code has some kind of a rationale behind it.
LLM code is like joining a programming-related chat group and copy/pasting the last day of messages into an IDE.

There is also this very interesting perspective that LLMs learn the best ways to hide bugs, because they are trained on code that has been written, reviewed, maintained etc. The bugs that survive that process are exactly the ones reviewers failed to spot.
So when reenacting the learned patterns, there is a tendency to introduce the best-hidden bugs into new code.
(Or at least, it will be, as soon as LLMs can generate high-quality code.)

@march38 @futurebird @gotofritz @pikesley @wakame it's important to realize that whether you use an LLM to summarize or do something else, it's still generative AI. That's what all LLMs are. It's not doing anything fundamentally different from a chatbot: you're just giving it some text and a prompt that asks it to summarize, and getting its response. Hence all of the unreliability inherent to LLMs.
@tarix29 Sorry, I should have picked a non-generative AI example, like image recognition. There is still a critical usage difference though: unlike scary and/or bogus use cases, web search summaries rarely go into production, because they're just an unreliable starting point. Exactly like non-summarized web searches before, but faster now.
@march38 @futurebird @gotofritz @pikesley @wakame My first use case was as a spellcheck/syntax checker. It is very good for that.
My second use case was as a translator. Also very good for that. Even local open source models can translate stuff well enough.
There are several other good use cases, e.g. asking it for regular expressions, or asking it to explain what a piece of code does.
These all run as open source local models on my 7-year-old laptop.
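(To illustrate why the regex use case works well: the output is cheap to check by hand. A hypothetical sketch — the pattern below is the kind of thing a local model might suggest for simple YYYY-MM-DD dates, not a quote from any model.)

```python
import re

# A regex of the kind a model might suggest for matching YYYY-MM-DD dates.
# The point: unlike prose summaries, this output takes a minute to verify.
iso_date = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

assert iso_date.match("2024-02-29")      # well-formed date string
assert not iso_date.match("2024-13-01")  # month out of range
assert not iso_date.match("24-01-01")    # year must be four digits
```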

This won’t go away any time soon. I don’t really care about Altman and his friends.

Not good use cases: replacing people, working unsupervised.
When asked to generate something from scratch it will often need cleanup afterwards, so you may end up spending more time. But if you write something yourself, or have part of the code there and use it to write extensions, then it can get 80% of the work done.