LLMs are modeling outputs, not replicating a process; as a result, the output looks the same, but it isn't made of the same stuff.

It is a plastic banana.

There is nothing inherently wrong with a plastic banana, but as soon as you claim you can use it to solve world hunger, people are going to be upset, and it doesn't matter how right it looks.

@Vrimj I like this analogy! It shows that, just like with a real banana, there is a process behind it, but a fundamentally different one that should come with different expectations and uses.
@Vrimj lots of people in the technical-professional class do in fact just copy and paste something from Google without knowing what it's actually doing, for some subset of their job. They are effectively doing what ChatGPT does.

@zippy1981

I might have agreed before I read the ChatGPT-generated cases from the NY case. But having now read those and, to be frank, legal documents produced by copy-and-paste by people with no domain knowledge, I can say: no, it isn't the same thing at all, at least in this case.

Because the person understands what they are trying to do. Prompts don't replicate that.

@zippy1981

Understanding, thinking, and having a narrative of events matter; they have a lot to do with what even someone blind to the issues chooses, and frankly I didn't fully understand that until I saw what happened when they didn't.

@Vrimj I guess I need to read the story about the lawyer who used ChatGPT, because the little I know about it drew me to the conclusion it caused you to abandon
@Vrimj @zippy1981
A trained, accredited & licensed-to-"practice" lawyer prompted ChatGPT for specific, existing case details. ChatGPT strung together a bunch of words in grammatically correct sentences as a response.
This is not meaningful, as it is not grounded in the "existing" past-tense reality. This is the main issue with using LLMs: grammatically correct ≠ meaning or understanding.
The lawyer is now in legal-practice trouble for the ethical violation of submitting dross.

@Vrimj Excellent, I needed that image to brighten my mood.

Let's talk to a plastic banana.

@Vrimj damn I really like "modeling outputs vs replicating a process" as a way to articulate the distinction
@Vrimj In that case, how about chain of thought prompting? It's true that LMs only model an input/output distribution, but they can mimic and reproduce as many convincing artifacts along the way as necessary.

@Vrimj

It is "word complete while typing" with entire stolen books as the next letter.

@Vrimj this is why chain of thought prompting completely surprised researchers working on LLMs. It seemed like using chain of thought replicated a process, and it possibly does for very simple processes.

Figuring out how to do bigger processes is the big question now
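For readers unfamiliar with the technique being discussed: chain-of-thought prompting changes only the prompt text, not the model. A minimal zero-shot sketch, assuming nothing about any particular API (the function names here are purely illustrative, and the model call itself is out of scope):

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# The only difference from a plain prompt is a trailing cue that
# elicits intermediate "reasoning" text before the final answer.

def plain_prompt(question: str) -> str:
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    # The cue asks the model to emit a step-by-step trace; whether that
    # trace reflects a real process is exactly what this thread debates.
    return f"Q: {question}\nA: Let's think step by step."

print(chain_of_thought_prompt("What is 17 * 24?"))
# prints: Q: What is 17 * 24?
#         A: Let's think step by step.
```

Whether the emitted steps constitute "replicating a process" or just more modeled output is the open question raised above.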

@Techronic9876

Having a conversation with my kindergartner about legal issues is sometimes really helpful too

I don't think it means that kindergartners should be a routine part of law practice or are a monetizable tool

@Vrimj @Techronic9876 and rubber duck debugging (what you're doing with your k-gartener) is work done by you, not the k-gartener

This also applies to "step by step" reasoning with spicy autocomplete