I've been playing *a lot* lately with ChatGPT, only using the GPT-4 model. It's really good for bouncing ideas around or getting suggested improvements, but it's far from perfect. When asked technical things, it sometimes spits out answers that are blatantly wrong. One example: it claimed the output of XOR(0,0) was 1 (it's 0). I also got some really interesting code with subtle bugs that, if you're not 100% on top of the code, would make you spend a lot of time trying to debug. Overall: it's a great tool if you know the domain and are willing to analyze the output in detail. It's also a great rubber duck if you're into rubber duck debugging https://en.wikipedia.org/wiki/Rubber_duck_debugging
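For reference, here's a quick sketch of the XOR truth table in Python (using the bitwise `^` operator), showing that XOR(0,0) is indeed 0:

```python
# XOR outputs 1 only when the two inputs differ
for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a},{b}) = {a ^ b}")
# XOR(0,0) = 0 -- not 1, as ChatGPT claimed
```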

Another thing that was super useful: throwing it a snippet of code and asking for suggestions, or asking it to verify that it's bug-free (adding context, of course). Super useful at that as well.
If GitHub adds auto-PR reviews using ChatGPT (*not* auto-approval), that would be a super interesting tool to have.