I wonder if ChatGPT will be used to help users submit bug reports ready for investigation.

And...

ChatGPT would have far more patience than a busy developer, whose job is to fix bugs, not decipher a user's (justifiably) imprecise understanding of how software works.

But a chatbot could help, patiently asking the user questions.

Then, of course, over time the user would learn how to do it themselves. And, get this: they would also learn how software works.

There might be some great programmers out there who don't know they are. 😄

It'll be a great teacher, too.

@davew The problem is that ChatGPT doesn't understand what it has written. It cannot tell if the software it generates is "correct". So it is very hard for it to "teach" when it cannot evaluate and correct mistakes (its own, or others').
@cshotton @davew True of a "pure" LLM model, but it's easy to imagine a system where code generation was fine-tuned to be called by a test metafunction and then snippets went through this external validation. Basically, today the LLMs are writing their programs on legal pads and handing them over to us to run. They do well! But the route to improvement may not only be "get better at coding on legal pads"
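The "test metafunction" idea above could look something like this minimal sketch: generate a candidate snippet, run it with its tests in a separate interpreter, and retry on failure. Everything here is hypothetical; `generate_candidate` is a stand-in for an LLM call, wired to return a deliberately buggy first attempt so the retry loop has something to do.

```python
# Sketch of generate-then-externally-validate. All names are illustrative;
# generate_candidate stands in for a real LLM code-generation call.
import subprocess
import sys
import textwrap
from typing import Optional

def generate_candidate(attempt: int) -> str:
    # Stand-in for the LLM: the first attempt is wrong on purpose,
    # so the validation loop actually exercises a retry.
    if attempt == 0:
        return "def add(a, b):\n    return a - b\n"
    return "def add(a, b):\n    return a + b\n"

# The "test metafunction": checks the snippet from the outside.
TEST_HARNESS = textwrap.dedent("""
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
""")

def validate(snippet: str) -> bool:
    # Run snippet + tests in a fresh interpreter so a bad candidate
    # can't corrupt the caller's state. Nonzero exit means failure.
    proc = subprocess.run([sys.executable, "-c", snippet + TEST_HARNESS])
    return proc.returncode == 0

def generate_validated(max_attempts: int = 3) -> Optional[str]:
    # Only hand back code that has passed external validation.
    for attempt in range(max_attempts):
        snippet = generate_candidate(attempt)
        if validate(snippet):
            return snippet
    return None
```

The point isn't the toy `add` function: it's that the pass/fail signal comes from running the code, not from the model's own opinion of its output.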
@lobrien @davew Actually, we're working on an alternative right now that is at least one step closer to "trustable" AI-generated code. We have a set of well-defined, modular code blocks that can only be assembled in "correct" ways, so at least it will prevent certain types of generative errors because they will be disallowed by the "syntax". I think this is how high-order domain specific non-expert coding will be done. "Here are your Legos. Go make an eCommerce site."
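One way to picture the "Legos that only snap together correctly" idea is blocks that declare what they consume and produce, with an assembler that refuses any chain whose types don't line up. This is a hypothetical sketch, not the system described above; `Block` and `assemble` are invented names.

```python
# Hedged sketch of constrained composition: mismatched blocks are
# disallowed by the "syntax" (here, a type check) before anything runs.
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass(frozen=True)
class Block:
    name: str
    consumes: type
    produces: type
    run: Callable[[Any], Any]

def assemble(blocks: List[Block]) -> Callable[[Any], Any]:
    """Chain blocks into a pipeline, rejecting incompatible pairings."""
    for left, right in zip(blocks, blocks[1:]):
        if not issubclass(left.produces, right.consumes):
            raise TypeError(
                f"{left.name} produces {left.produces.__name__}, "
                f"but {right.name} consumes {right.consumes.__name__}"
            )
    def pipeline(value: Any) -> Any:
        for block in blocks:
            value = block.run(value)
        return value
    return pipeline

# Toy blocks for an imaginary eCommerce flow.
parse_cart = Block("parse_cart", str, list, lambda s: s.split(","))
total = Block("total", list, float, lambda items: len(items) * 9.99)
```

Assembling `[parse_cart, total]` works, while `[total, parse_cart]` is rejected at assembly time: a whole class of generative errors never makes it to execution.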
@cshotton @davew It sounds like you’re talking about a much higher level of composition than functions and typical type-systems. *And* you’re manipulating them with an LLM front end? That’s just fascinating. I think we’re in for an explosion of creativity in programming tools.