LLMs model outputs; they don't replicate a process. As a result, the output looks the same, but it isn't made of the same stuff.

It is a plastic banana.

There is nothing inherently wrong with a plastic banana, but as soon as you claim you can use it to solve world hunger, people are going to be upset, and it doesn't matter how right it looks.

@Vrimj this is why chain-of-thought prompting completely surprised researchers working on LLMs. It seemed like chain of thought replicated a process, and it possibly does for very simple processes.

Figuring out how to do bigger processes is the big question now
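For anyone who hasn't seen it, the technique in question is tiny: chain-of-thought prompting just appends a cue that elicits intermediate "reasoning" text before the answer. A minimal sketch (no model is called here; the function names and the example question are mine, and the "Let's think step by step" cue is the zero-shot variant from the literature):

```python
# Sketch of the difference between a direct prompt and a
# chain-of-thought prompt. Only the trailing cue changes; the model
# then continues the text with step-by-step-looking output.

def direct_prompt(question: str) -> str:
    # Plain Q/A format: the model jumps straight to an answer.
    return f"Q: {question}\nA:"

def chain_of_thought_prompt(question: str) -> str:
    # The added cue nudges the model to emit intermediate steps
    # before its final answer.
    return f"Q: {question}\nA: Let's think step by step."

print(chain_of_thought_prompt("If I have 3 apples and eat 1, how many remain?"))
```

Whether the steps the model then prints correspond to any process it actually ran is exactly the plastic-banana question above.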

@Techronic9876

Having a conversation with my kindergartner about legal issues is sometimes really helpful too

I don't think that means kindergartners should be a routine part of law practice, or that they're a monetizable tool

@Vrimj @Techronic9876 and rubber-duck debugging (what you're doing with your kindergartner) is work done by you, not the kindergartner

This also applies to "step by step" reasoning with spicy autocomplete