My desire to use Mastodon is basically at zero now because post-"Claude", hanging out here means inevitably I'm going to have a conversation with someone who tolerates, or even uses, "generative AI". And what's the point of being in a community where that's a risk? Interestingly* there is absolutely no chance of this on Bluesky, because there are artists there

* And oddly, given how "AI"-brained the *admins* there are

No longer interested in talking about open source unless the conversation starts with how to build an alternate open source community without "AI code assistant" users contributing to it
@mcc Sadly the Copilot monster came to #FreeBSD on GitHub last week.
@bms48 @mcc Wait, how did copilot "come" to FreeBSD ?
@trashheap @mcc The LLM-driven entity is submitting AI slop bug reports and "reviews".

@bms48 @mcc Ewww...

Last I heard, FreeBSD was working on a draft policy that was leaning towards actually banning LLM code. (This was discussed at BSDCan last year). That hasn't changed, has it?

@trashheap @bms48 last I heard netbsd had done this but don't cite me
@mcc @bms48 YEAH NetBSD has, and I'm really hoping FreeBSD does the right thing and follows their lead.
@mcc @trashheap @bms48 last summer FreeBSD leaned strongly towards no (see https://reviews.freebsd.org/D50650 ), but the more recent "accepted" change is weaker (see https://reviews.freebsd.org/D54817 ). I don't like the weasel words in the newer proposal, and the proposed "developer's guide" doc (which you have to be logged in to see) is overlong and makes it too easy to miss the actual point.
@fedward @bms48 @mcc WELL fuck me.
@trashheap @bms48 @mcc yeah, I think "I considered it and decided I didn't care" is going to happen a lot if that policy goes into place. But the proposed change currently depends on a proposed doc that hasn't been approved, so maybe other people will agree with me that it's a shitty proposed policy and just decide not to accept it.
@fedward @mcc @trashheap I had much more to say about this on reviews.FreeBSD.org, particularly about the disconnect between UK CDPA law regarding GenAI contributions and the US precedent of not accruing copyright to GenAI work at all. I think the CDPA has the more forward-looking position.
@fedward @mcc @trashheap You're not going to like this, but there are FreeBSD folk who are looking to leverage Generative AI, but in a very limited and targeted way, e.g. automated code reviews. We would aim to do it in a controlled way, away from Microsoft's monopolistic interest, as Douglas #Rushkoff predicted would happen. It isn't a strict go/no-go area. The UK CDPA does offer copyright protection to GenAI content, but the key test is originality (system prompts must be very specific).
Is AI the next Dumbwaiter?

"The demand for AI's unbridled growth is reactionary — a way of doubling down on the same old colonizing way of doing things. It doesn't have to be."

Rushkoff
@bms48 @fedward @trashheap My goal is to have no "genAI"-emitted code on my computer. If FreeBSD uses the lie machine to review code but not to write it, then I'd consider using FreeBSD, but would possibly not send PRs back upstream (the environmental impact of LLMs can be heavy and I'd prefer not to take any action that could result in an LLM query) and the idea of an open source project I can't contribute back up to is weird.
@bms48 @fedward @trashheap This said tho there's the bigger problem of long term trust. Once "AI" is inside an organization it spreads. If you've accepted this is a real technology which is okay to use and the question is just where and how much is appropriate, then "how much" will increase until it's on the table everywhere for everything. The people in the org who already want to use more of it will win more and more arguments over time because hey, you agreed to it *last* time, why not now.
@bms48 @fedward @trashheap I've already switched from Windows to Linux to get away from "AI" features and now have to switch away from Linux for the same reason. Each switch is disruptive. If FreeBSD is just going to be like Linux and just be two years *slower* in adopting "AI" than the other thing, it's not clear why I bother with FreeBSD. It's not like I'm interested in your OS on the merits. If I cared about product merits and not the ethical/political dimension I'd probs still be using Mac.
@mcc @fedward @trashheap I hate to keep invoking Robert Heinlein here, but he said it best: TANSTAAFL: There ain't no such thing as a free lunch. And Atkinson’s Casket, Sept. 1833, has ‘The price of liberty is eternal vigilance.’
@bms48 @fedward @trashheap Okay, well possibly the price of liberty is not using FreeBSD. And maybe if I go with NetBSD instead of FreeBSD I don't waste two years learning something I'm going to throw away.
@mcc @fedward @trashheap Not to disrespect NetBSD in any way, but it's kind of niche unless you're working for certain storage companies. Are you proposing to become some kind of tech Amish? No offence intended, but... I'm not vegan for similar reasons. If an LLM used in a limited way can help us ship features more quickly, then that has to be balanced against other concerns, like how it was trained. Don't forget I have skin in the game against OpenAI and Microsoft, who trained their AI in violation of copyright.
@mcc @fedward @trashheap If GenAI and the fact that the genie is now out of the bottle is your beef, your options are going to be increasingly limited. Hence my comment re becoming "tech amish". The position is far more nuanced than go/no-go. And Rushkoff predicts the new trend quite well. If you want bigtech GenAI takedown, you could do worse than read Ed Zitron on this. I know I have. Cover to cover.
@bms48 @mcc @trashheap I think that's a mischaracterization of what's being said here. I believe there are two main points of opposition (if I may, and I hope *I'm* not mischaracterizing in this summary):

1. No amount of risk associated with generative AI/LLM contributions is acceptable (where risk includes potential copyright or license issues as well as the risk of introduced bugs or vulnerabilities);
2. Merely participating in the project shouldn't incur more LLM usage.

To the first point, the legal framework in the UK is only material for users in the UK; US courts haven't yet established enough precedent on whether machine generated code is even copyrightable. The EU also seems to be leaning towards "uncopyrightable," but also hasn't fully established rules. The more those contributions are accepted now, before a legal framework is fully established, the greater the risk not just assumed by developers but pushed out towards users who may not have the same appetite for that risk. And that doesn't even account for the bugs. Or the new bugs the LLMs produce when they're told to fix the old ones.

To the second point, the current stance of a nonzero number of developers is that the barn door, having been forced open, shouldn't be allowed to close again, and they like to call people luddites if they are worried about the conditions under which new horses are to be introduced. I don't know why the people who don't care about moral or legal issues with LLM usage are the only people whose opinion seems to matter.
@mcc @fedward @trashheap We won't be turning over our critical faculties to a stochastic parrot. I could say more, but it would probably just sound like "trust me bro" and I would rather be able to give a stronger guarantee than that. I can only really speak for myself and my own projects. FWIW I will be doing copyright assignment to a company I direct for my open source work from now on, and making use of my professional indemnity insurance.
@mcc @fedward @trashheap At the moment it's mostly one FreeBSD developer who is actually taking point on using code review. I do not plan to use GenAI in upcoming kernel work, however, I will be using it in a very limited and controlled way on a related open source project that I plan to reboot. We've noted how these things hallucinate when given real code inspection or generation tasks so the LLM agent's scope has to be carefully controlled to hit the inflexion point of risk/reward.
@mcc @fedward @trashheap Local models are preferred for the environmental reasons you have stated. New post-quantization techniques such as BitNet (ironically an MSFT backed effort) help to address this.
@fedward @mcc @trashheap It is something of an arms race. I have already punted on reviewing a code contribution made as a GitHub PR which was essentially a semi-vibe-coded TCP socket stats scraper. It didn't add much to the kernel itself, or the TCP stack, so we asked the submitter to submit it as a 3rd party kernel module in ports. Limited use of GenAI is exactly what Jensen Huang doesn't want, but it's the only economically viable application IMHO because of LLM shortcomings and limits.
@fedward @mcc @trashheap When I raised the issue of authorship and individual/SME IP rights being diluted or undermined on the internal list, one FreeBSD developer responded by calling me a "luddite", which only showed he hadn't seriously considered the intellectual property issues at all, or how to use GenAI for good without succumbing to greedy corporate interests. It is looking like local on-prem models may be the way forward; the stranglehold of OpenAI and Anthropic on the tech must be broken.
@fedward @mcc @trashheap Remember, in the UK things are very different from the USA. We have "fair dealing", not "fair use", and Microsoft/OpenAI's unauthorised training of Copilot and GPT on open source code released under ANY kind of license (BSD at one end, GPL at the other) is technically in breach of the Copyright, Designs & Patents Act 1988. IANAL, but this is reflected in the 6th March House of Lords report smacking Starmer etc. down for giving in to FAANG interests. #GrandTheftAutoComplete