70 Followers
180 Following
223 Posts
Programmer, I really like Haxe and Rust but get stuck doing a lot of stuff in C++. I work on the @grig audio libs in Haxe.


Alt of @thomasjwebb
pronouns: they/them
codeberg: https://codeberg.org/thomasjwebb
github: https://github.com/thomasjwebb/
gitlab: https://gitlab.com/thomasjwebb
portfolio: https://thomasjwebb.com/
LinkedIn: https://www.linkedin.com/in/thomasjwebb/
personal: @[email protected]
consulting company: @[email protected]
Give Django your time and money, not your tokens

The Django community wants to collaborate with you, not a facade of you.

Better Simple
I'm also researching ways of taking advantage of Claude without using it to generate anything that gets checked in. If it's just doing research for me and code reviews, that might be better anyway. If this proves fruitful, I could end up going more restrictive. As long as the above post is pinned, I will update it with any changes.

Due to security, ethical, technical and legal concerns, I’m going to spell out the AI policy for my OSS projects, which would cover everything under my username, my company (Osaka Red LLC) and the grig project. See my Codeberg.

In short, any use of LLM agentic coding is banned on all of my projects, with a few exceptions listed at the end of this post. Of these no-AI projects, a couple have some LLM-generated code in them, but it is minimal and also listed at the end of this post.

I don’t like genAI and I hate the tech industry’s cloud-based business models. My specific views aren’t necessarily the same as many of the anti-AI views found on the fedi but I won’t get into that here. On the other hand, I wanted to take advantage of what may be a very temporary situation where LLM agentic coding services are relatively affordable (pre-enshittification era) to boost OSS projects, particularly some that I have to get to a certain point soon.

I now think I was wrong to focus on the latter point and am pulling back. The legal integrity of OSS is important, so we need to be careful about this even if it puts us at a disadvantage for the time being (conversely, it may turn out to be an advantage). The issue is that the Supreme Court in the US (and courts all over the world!) haven't settled how copyright applies to genAI output. It could be a disaster if OSS uses it extensively and then a court anywhere in the world finds that this constitutes unauthorized relicensing.

So the basic AI policy on my projects is as follows:

  • Always respect the per-project AI policy. This means all projects, except the few that allow it (listed at the end), ban all use of genAI. At the time of this writing this hasn't been reflected in the guidelines in the repos, but I'll work on that.

  • Always respect the per-issue AI policy. Even if a project generally allows AI it might disallow it for certain issues for technical or pedagogical reasons.

  • If AI is allowed, then we mostly go by fedify's AI policies, with the following modifications:

    a) genAI may not be used for any audio or visual media, only for text.
    b) Be extra careful that all comments are factually correct and not misleading.
    c) LLMs are strongly discouraged for human-readable text, especially in discussions. Prefer posting in your own language instead of using machine translation if you're not confident in English.
    d) AI may be used for translation of user-facing text (in UIs) and in documentation, but only between languages you fully understand and can verify the accuracy of.
    e) No use of OpenAI or xAI products. Prefer local LLMs if possible.

  • Bots will be blocked, as well as anyone who opens issues related to AI policy (you may discuss that under this post). Please don't let the AI industry suck all the oxygen out of the room. There's much, so much more to development than tools.

  • Exceptions: projects that do allow the use of AI (I have backup plans in case I have to start over and write it all by hand):

    • Sabratha and Oea, parts of the Tripoli project.
    • Gallae - this doesn’t do much yet…
    • nchant - repo doesn't exist yet, but it's the newer version of my audio DSL (fka hxal) that will work with grig.audio. It will have a more detailed per-project policy.

    Projects that fall under the AI ban but already have some LLM-generated code:

    The problem with getting trapped by dependence on a *heavily subsidized, obviously pre-enshittification* cloud service for your workflow is that not only can it increase friction when reverting to your old workflow, it could also make it more difficult to switch to something better that is yet to come. Even from the most pro-AI perspective, the current approach seems unlikely to be the best we can do. Our old instincts about avoiding vendor lock-in are more relevant than ever.
    See you at the GDC Festival of Gaming in San Francisco, CA, USA this coming week! #GDC2026 #GameDev #Xsolla

    I am convinced we are on the verge of the first "AI agent worm". This looks like the closest hint of it, though it isn't quite the real thing yet: an attack on a PR agent that tricked it into installing openclaw with full access on 4,000 machines https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another

    But, the agents installed weren't given instructions to *do* anything yet.

    Soon they will be. And when they are, the havoc will be massive. Unlike traditional worms, where you're looking for a typically byte-for-byte identical payload embedded in the system, an agent worm can do different, nondeterministic things on every install and carry out a coordinated global action.

    I suspect we're months away from seeing the first agent worm, *if* that. There may already be some happening right now in FOSS projects, undetected.

    A GitHub Issue Title Compromised 4,000 Developer Machines

    A prompt injection in a GitHub issue triggered a chain reaction that ended with 4,000 developers getting OpenClaw installed without consent. The attack composes well-understood vulnerabilities into something new: one AI tool bootstrapping another.

    The rule I’ve always followed: if I want to make my own software, I won’t even look at code that’s under a more restrictive license. There are many cases where it would have helped me to look at proprietary or copyleft code, but I refrained. Because whatever the law says, the intent of the original authors matters. When you release something copycenter, that’s an assurance to users that what they’re using or putting into their projects is safe to use. If it’s even heavily inspired by anything not copycenter or public domain, then a fundamental contract with them is broken.

    In some ways, the LLM is just a red herring in the whole chardet situation. If a long-term contributor to the GPLed project wrote something new by hand and contributed it to the same repo, it would be clear that they’re at least violating the spirit of the original project. Whether or not an agent is involved, it has to be a cleanroom reimplementation, which means someone else has to do it. There is a philosophical question of whether an LLM even can do cleanroom implementations, and we’ll find out whether that’s possible from a legal perspective as soon as MIT-licensed clones of proprietary software lead to lawsuits.

    But the open source community has never been just code. The people behind the software matter. Pissing off copyleft people to appease corporate overlords, and signalling that there are people in the OSS community who will stab you in the back, will definitely harm the community and thereby the general public. In a time when tech companies are just getting more and more evil, OSS is more important than ever, and if unethical or antisocial behavior gets accepted, then everyone’s worse off.

    I’ve always been more on the copycenter side of the divide, but the rule is we can’t be jerks to each other. That’s why I’ve always been careful about even the spirit of the copyleft licenses. Sometimes maintainers of copyleft projects will let you relicense part of it if you fucking ask. Being an asshole just encourages people to close ranks and see each other as the enemy. (Also think about how petty it is to attempt a relicense into copycenter when the original was already LGPL, admittedly kind of an odd license to choose for Python, but probably compatible with non-copyleft projects.)

    I just hibernated my LinkedIn account. One of the better places to follow me for work/dev stuff is this account, on the fediverse as @[email protected] and on bsky at https://bsky.app/profile/tjw.haxe.social. Due to bugs on the Akkoma side, the account’s main handle isn’t tjw.haxe.social yet. Catch me at the #GDC next week!
    I'm going to be at as many #GameAudioGDC events as I can this year. Hit me up if you're going to be at the GDC!
    I just uploaded last month's livestream to YouTube, about making #Bitwig extensions (and eventually, extensions for other DAWs) in #haxe using my grig.controller lib and how to help improve it.

    https://youtu.be/BBxYIquf7OU
    Making Bitwig Plugins in Haxe With grig.controller
