Maybe I'm in the wrong, but part of me feels like - if it's not interesting enough for the person to write it themselves, it's probably not interesting enough to be read.

Also the AI doesn't understand intentions.

Blog posts with spelling and grammar errors are way better than lukewarm AI voice.

@pie Context, always.

Right now, I mostly agree with you. In the future AI will absolutely be able to understand intent. It can be trained on it or instructed on it.

And how much writing needs to be interesting? Does a weather report need to be interesting and have soul? Does an article describing how to fix your car need it?

Context, context, context.

@kneath Absolutely fair. And totally: context.

But that's why weather reports are usually a few lines per time unit: the key facts, rather than a blog-post-sized piece each.

As you said, right now it doesn't understand intent. (Think the old "Should I walk to the carwash" question.) So in today's age, I really do feel it.

@kneath When it's going to understand intent is an interesting question. (Scaling is starting to plateau, and the data is drying up.) It's anyone's guess. We'll see.

Also - miss you!

@pie I've gotta say, I think the carwash example is an overblown gotcha (as if other programming languages have never had gotchas 🙄). These things understand a LOT of intent if you give it to them. It's far more noticeable with local models where you define the context window.

@kneath I feel like other languages having gotchas and LLMs **sounding** like they understand intent aren't the same thing, though.

LLMs can be pretty impressive. But understanding what you're asking it, and giving you tailored responses, is its main job.

But anyone who's used one has also spent 6 hours going in circles, because it's nowhere near any form of AGI.

I understand the dislike for the specific question, but it does show the fundamental issue. I'm sure there are other examples out there.

@kneath
Like your example: it looks good, and I'm sure there are no errors. But a lot of an LLM's errors aren't obvious, and it can miss things.

I'm not trying to say they're not useful. I've used them on and off for years. I just think people naturally over-assign intelligence because of the wow-factor.

Again, I'm open to being wrong on this one.

@pie Perhaps it's the past ten years of me being conned, lied to, and disappointed by humans, combined with the sharp difference between the post-December models and the pre-December ones.

You always need people, you always need introspection, you always need deep thinking. But on the surface, LLMs are far more ethical and correct than my average human experience. Ever try hiring a plumber?

That doesn't make all humans bad. That doesn't make all LLMs right.

@kneath I mean, the ethics are up for debate, considering the data it harvested and who gets the money. (The eco side is up for debate too, obviously; I've seen arguments on both sides of that.)

But I totally understand where you're coming from in general. Genuinely.