I've been playing *a lot* lately with ChatGPT, using only the GPT-4 model. It's really good as a way to bounce ideas around or get suggested improvements, but it's far from perfect. When asked technical questions, it sometimes spits out answers that are blatantly wrong. One example: it claimed the output of XOR(0,0) is 1 (it's 0). I've also gotten some really interesting code containing subtle bugs that, if you're not 100% on top of the code, would make you spend a lot of time debugging. Overall: it's a great tool if you know the domain and are willing to analyze the output in detail. It's also a great rubber duck if you're into rubber duck debugging: https://en.wikipedia.org/wiki/Rubber_duck_debugging
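For the record, the XOR truth table is easy to verify yourself; a quick sanity check in Python (my own illustration, not from the model's output):

```python
# XOR(a, b) is 1 only when exactly one of the inputs is 1.
# Python's ^ operator is bitwise XOR on integers.
for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {a ^ b}")
# XOR(0, 0) = 0 -- contrary to what the model claimed.
```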
