@bagder Out of respect for not weighing in on a project whose development I have not participated in before, here's what I would have replied on the issue:
I think there are generally three issues raised with contributions (notice I did not say code) that involve the use of an LLM:
Quality: LLMs can produce a large volume of output, and with a nonzero error rate, they will eventually introduce an error that might not have been introduced otherwise.
Violating the law or license: LLMs were trained on large datasets which were almost certainly obtained illegally, and which may also contain illegal material that the LLM might reproduce. Here, we define illegal material as anything that, if added to curl in sufficient quantity, would make having or using the curl codebase in a way consistent with its license illegal.
Enforceability of the license: LLM-produced material is new, and it's unclear what the global consensus will end up being regarding its copyrightability, or whether licenses covering LLM-produced material can even be enforced.
Although point 1 is often the most talked about, it is the simplest to deal with. LLMs can introduce errors, but they can also spot them. The code review process is meant to find these errors, and if the process proves deficient over time, it can be amended. A balance for quality will eventually be found.
Point 2 is also alarming, but mostly follows point 1 - the code review process should include a component of trying to identify reuse of prior art or plagiarism. Good faith efforts go a long way for open source projects, especially ones as well-run as curl. However, if the project does not require attestation from the submitter of the PR that they did not use illegal content or content they do not have the rights to contribute, then the project takes that risk on itself. I think asking contributors to attest that their submission is original and that they can legally contribute it to the project is a simple way to reduce the risk of legal issues.
Point 3 is the big question mark. Some jurisdictions have created a legal environment where it's not clear LLM output can be copyrighted at all. That has implications for whether LLM output can even be licensed, and for whether one can enforce any kind of software license for a project that includes LLM output. This landscape might change drastically and/or rapidly, which makes including LLM output a risk - one that I don't see a sufficient mitigation for, except to bar submissions that used LLMs and ask contributors to attest that LLMs did not produce the output or meaningfully contribute to it. I would say using an LLM for reference material or syntax help might be safe, but even in that case, there's no certainty that such usage would be safe.