@devopscats ChatGPT 3.5 still seems to struggle with this really badly; it acknowledged its mistakes when I pointed them out and then proceeded to hallucinate about cabbages and wolves that weren't in the original prompt.
Claude does pretty well. My first prompt yielded something overly complicated, but when I pointed that out Claude gave the solution and apologized for overcomplicating it. I asked it to write a clearer prompt for itself, put that into a new session, and it got it in one shot.
@m8ta @ngaylinn @brembs More commentary while it's fresh in my head:
ML is at a stage similar to microcomputers in the late '70s: the technology to make it possible has arrived, but we're still figuring out what it's good at, how best to use it, and what the best interface for it is. As a side effect it's largely inaccessible to people who aren't closely associated with STEM, but it would likely make the most meaningful impact on industries outside STEM. 1/2
@m8ta @ngaylinn @brembs I agree, though my reasoning is different. Statistical models can serve up to two purposes: to explain, and to predict.
Deep learning models such as ChatGPT are extremely good at prediction (e.g. what is the next word in this ___) but extremely bad at explaining. Suppose we built a superintelligent model: it would give shockingly human-like responses, but it wouldn't be able to explain why. It's sort of like how I can't explain how I form words using my vocal cords.
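To make the explain-vs-predict distinction concrete, here's a toy, hypothetical sketch (a bigram counter, nothing like ChatGPT's actual transformer): it can predict a plausible next word from raw co-occurrence counts, but it has nothing to say about *why* beyond those counts.

```python
# Toy illustration of "predict the next word" -- a bigram frequency model.
# This is NOT how ChatGPT works; it's just the smallest possible predictor.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent follower: pure prediction, no explanation.
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat"
```

It will happily tell you "cat" comes after "the", but ask it to explain the choice and all it can point to is a table of counts.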
@ngaylinn @brembs No, no, you misunderstood *my* post. I never claimed it was a good blueprint for AGI, but your window of attention is far too short for that kind of nuance. That's all irrelevant anyway, because you clearly don't understand GPT or deep learning either. But I digress.
You COMPLETELY missed my broader point, which is that at a fundamental, philosophical level we have no way to distinguish human sapience from any other kind of sapience. That which behaves sufficiently human, is. Goodbye.
@ngaylinn @brembs The problem with AI discourse and speculation about the emergence of superhuman intelligence is that it will never be incontrovertible: it's impossible to prove that an AGI actually is AGI, because it's impossible to distinguish a living, thinking machine that draws on its experiences to form ideas from a machine that just knows all the right answers to our questions.
Vis-à-vis solipsism, we either have to give machines the benefit of the doubt or deny them any rights.
@ngaylinn @brembs Don't kid yourself: your brain is just a blob of electric jello. Reductionism is foolish; there's still a lot we don't understand about how people make decisions, and if you participate in everyday activities like showering or eating breakfast, then you're far more predictable than you think.
It's not absurd to claim the first AGI will be one of GPT-4's cousins, but I agree that it's arrogant to claim we've got the blueprint for AGI all mapped out.