Well, the best approach would probably be to say "no for now". But following recent court rulings (I think it was in the US), the answer could change to a "yes", as they appear to treat any code generated with AI as basically public domain, since copyright only covers human-generated work.
The latest court ruling I saw a few days ago held that when AI-generated code was used in a codebase with a specific license, the AI-generated parts were public domain and that license didn't apply to them.
But you know, this area is still quite young, and both courts and politicians take time to arrive at "a stable and predictable ruling", so...
@BenAveling @davidgerard @ariadne
That wasn't part of the court ruling. Everything is still heavily in flux. So the only thing that is clear right now is that nothing is clear...
@ariadne @agowa338 without legal precedent everything is just guesswork. Until we get a case the size of SCO v. IBM, nothing is for sure, and the LLM bubble blowers are going to do whatever they can to prevent such a case from happening any time soon.
Their strategy is: once it's in everything, it has to be declared legal.
I hate this timeline.
Meta made it "fair use" to mine the entire web for data to train your AI on. But that was probably not the case you were thinking of...
@ariadne I hope you don't mind me paraphrasing this as "should Alpine take on the legal responsibility of upstreaming things that nobody has taken any legal responsibility for until now?". I know that's not exactly what you said, but it is how I have experienced this process, and a default "no" is the only logical answer atm.
Given, though, that we now have repos that hide AI contributions as well, it's a clear indicator that even outside the Alpine scope this is not a code-contribution question but a broader liability one, and it's proper to discuss what happens if a maintainer, even involuntarily, ends up upstreaming something "legally/security/community toxic".
Most (tech-friendly) legal people I've spoken with end up at "just avoid clear trademark infringement and set up an integration framework so you can prove to a court you at least tried if shit hits the fan", which I personally take as a hard no for prod readiness.
Just to clarify, I am not against AI used as a productivity tool in general (the same way I am not for or against using an IDE to write code), but I am definitely not OK with setting up horizontal (or in fact any) rules upon castles made of sand... especially where it is not fit to do so and will introduce risks to the principles of a project.
Maybe after we, as a society, have gone through some lawsuits and established a better foundation on this subject.