LLMs are modeling outputs, not replicating a process; as a result, the output looks the same, but it isn't made of the same stuff.

It is a plastic banana.

There is nothing inherently wrong with a plastic banana, but as soon as you claim you can use it to solve world hunger, people are going to be upset, and it doesn't matter how right it looks.

@Vrimj lots of people in the technical-professional class do in fact just copy and paste something from Google without knowing what it's actually doing, for some subset of their job. They are effectively doing what ChatGPT does.

@zippy1981

I might have agreed before I read the ChatGPT-generated cases from the NY case, but now, having read that and, to be frank, legal documents produced by copy-and-paste by people with no domain knowledge, I can say: no, it isn't the same thing at all, at least in this case.

The difference is that the person understands what they are trying to do. Prompts don't replicate that.

@Vrimj @zippy1981
A lawyer, trained, accredited & licensed to practice, prompted ChatGPT for details of specific, existing cases. ChatGPT strung together a bunch of words in grammatically correct structures & sentences as a response.
This is not meaningful, because it is not grounded in the "existing" past-tense reality. That is the main issue with using LLMs: grammatically correct ≠ meaning or understanding.
The lawyer is now in professional trouble for the ethical violation of submitting dross.