Does anyone know of safe spaces for new programmers to ask questions about their AI-generated code?

Some communities explicitly ban this. Do any NOT ban this?

And do any communities have guidelines for how to do this politely/considerately?

These are genuine questions that I'd like to answer for some friends.

@treyhunner I don't have an answer, but I'm curious more about the question:

What kind of code is being generated? And what kinds of questions are being asked?

Are new programmers trying to use it like a 'code review' space, a 'spot the bug' type challenge? Or is it more the common "I have this code, it doesn't work, why not?" type thing, where the end goal is just working code and they quickly move on once it works?

@KeithTheEE the use case I'm thinking of is folks trying to use an LLM to do something a bit outside their current abilities, where they're generating code, modifying it, and repeating.

@treyhunner In that use case, if they provide their code before the AI, the code after the AI, what the error is, and what they think it means or are confused by, then the Python Discord (and I think the learnpython subreddit) can help.

Those added "if"s go a long way toward demonstrating a willingness to learn, which helps a lot, but they put a burden on the learner.

BUT a lot of users don't like working to answer questions about broken AI code, so the experience can be pretty bad if the post is just, "ChatGPT said this, it's broke."

@treyhunner I asked about it among pydis staff, and the consensus was that the aversion kicks in when helpees treat the help forum as if the helpers are ChatGPT.

But when there's a demonstration of an attempt to understand, not just an expectation of code being handed to them (which is the requirement regardless of whether it's LLM code or 'traditional' code, weird as that is to say), then it doesn't matter how the code came to be. The help session is usually focused on teaching.

(2/n)

@treyhunner
Being upfront about where code came from (Stack Overflow, an LLM, a friend) and which code the user wrote themselves helps the helper understand what the user has, and has not, looked closely at.

So some suggested tips:

- If the code worked before you added generated code and it's not working now, show both copies.
- Someone suggested showing the prompt that produced the code.
- Show the full error.

(3/n)

@treyhunner

- Explain what you think the error means and what you think broke (this shows what the helpee is looking at and thinking about, which helps direct 'how to debug' practice).
- Talk about what you've tried.

Honestly, with the exception of the before/after LLM code and the prompt itself, these are the same recommendations as for asking good questions and working with others on any forum.

(4/n)

@treyhunner

My own thoughts:

Debugging LLM code will give new devs 'code review' skills that I didn't get in school. It'll take a bit of time for people to realize that's the skill they need to build when working with LLM code, but I think it's one that will be wildly helpful.

Some users have started treating help forums as if they're ChatGPT, not a collection of people. The behavior isn't new, but LLMs embolden it, because an LLM doesn't say no when you're demanding.

(5/n)

@treyhunner

Earnest learners will still want to learn and be good to work with. But some helpers on forums are abrasive toward LLMs because their first brush with LLM-code questions was a demanding one, and they accidentally color the interaction before it begins by assuming the next helpee will treat them the same way. That strong association will fade, but for now it's unfortunate.

(6/n)

@treyhunner

LLM answers are usually not welcome for a lot of reasons (errors, dismissive interactions, the feeling that helpees are being brushed off with boilerplate responses), but LLM questions aren't all that bad, as long as the user is willing to work Socratically toward an answer.

And at that point it's less about LLM code and more about just working with someone to help them learn.

(7/n)

@treyhunner

TLDR: AI-generated code questions are welcome in the Python Discord as long as the user is trying to understand what's going on and learn.

- Showing the before and after helps a lot
- Some folks like to know the prompt
- Showing the error and stack trace helps
- Saying what you think should be happening, and why you think it isn't, helps others see where your mental model is
- Remembering that a forum is made of people and isn't an LLM matters a ton

(8/9)

@treyhunner

- Some people just don't want to answer anything involving LLMs. We try to discourage meanness, but that's a long-term change.
- Sometimes deleting all the AI did and starting fresh is the answer.
- Over time, these users will probably become very skilled at code review if they approach their questions from that perspective.

I hope this helps. It got lengthy, but I asked some folks, it prompted a lot of good discussion, and this thread is what spun out of it.

(9/9)