8.1K Followers
894 Following
10.7K Posts

Displaced Philly boy. Threat hunter. Educator. #infosec, #programming, #rust, #python, #haskell, and #javascript. #opensource advocate. General in the AI Resistance. Runs @thetaggartinstitute. Made https://wtfbins.wtf. Not your bro. All opinions my own. Dad. #fedi22 #searchable

Pronouns: He/him.

The Taggart Institute: https://taggartinstitute.org
Blog: https://taggart-tech.com
Codeberg: https://codeberg.org/mttaggart
YouTube: https://youtube.com/taggarttech
GitHub: https://github.com/mttaggart
Keyoxide: aspe:keyoxide.org:G4ADJFWICZZZXGR4STZQVMBJNM

@mttaggart Not a lawyer but in my career I’ve often focused on these questions.

No copyright = no OSI license.

In the US, the Perlmutter case decided works generated by a simple prompt (“an image of a landscape”) are not copyrightable.

However! Many things are copyrightable when combined with enough human selection and creativity. A database of streets or a selective anthology of Shakespeare can be copyrighted. An arrangement of AI-generated, non-copyrightable works might itself be copyrightable.

@dogfox @glyph @matt To the first:

There are lines we cannot cross to be sure, but we must be vigilant to prevent those lines from excluding all but exact matches to our own beliefs. This is the challenge, and one we are not meeting.

Once again, I am making no case for a lack of boundaries. I am making the case that the boundaries currently in play are counterproductive.

I cannot and will not give you a maxim for establishing them. Looking for empiricism there is where you get into weird inversions of moral obligation.

As for obsession versus the thing itself, I see a distinction without a difference. To maintain "purity" as a virtue is to seek it, and without clarity that it is unattainable, you end up with some version of obsession. I would prefer a heuristic of growth and estimation of intent. Not perfect metrics, and deeply subjective. It's something best done in human relations, and not conducive to a few hundred characters of pith.

@glyph @matt So, this is probably the most misunderstood part of the piece, and that's on me. I am concerned about ideological purity in this context. Purity as a concept, whether ideological or otherwise (e.g., racial), is what I was calling dangerous. And racism, among many other things, is a derangement that weaponizes purity. This is an instrument capitalists used heavily throughout the latter 19th and early 20th centuries to disrupt labor movements and prevent workers of different races from finding common cause. That's not to say racism wasn't present elsewhere, or sourced from within all socioeconomic echelons. Even so, the weaponization and exacerbation is relevant. Purity is a way to pit people against each other.

Ideological purity, less dangerous than racism, still prevents finding common cause. Building movements requires working with those who do not agree with you on everything. There are lines we cannot cross to be sure, but we must be vigilant to prevent those lines from excluding all but exact matches to our own beliefs. This is the challenge, and one we are not meeting.

Am I a fascist for having used Claude Code and paying $20 to test it as others have? Some will say I am, or adjacent, because I have used a fascist tool. I find this deeply unhelpful to anyone. And that's my point. If you demonize anyone who touches this technology, your opposition movement is doomed to failure.

What do we want to accomplish? Stopping or stemming the spread of the disease, or building a commune of the untouched?

@glyph We gonna make the metals kiss and the fuel turn lively
Some of you can cite chapter-and-verse XKCD but Achewood is my scripture.

@mttaggart @fhekland Ah yeah. I also have been meaning to write a blog post about the uncertain legal status of LLM-based output. I really am worried it's much more uncertain than people are acting...

I think one thing that *is* positive is that I'm glad that the "hey look you can just clean room vibecode a replacement to any open source software" is being applied to leaked software from Anthropic. Now I hope someone does it with a leaked copy of Windows!

@cwebber @fhekland Ah hey, thanks!

And yeah, this question was really to get at the "We don't know," related to your point a while ago about the danger of attempting to license generated code. Basically I wanted more citations on that claim, and it sure seems like the best case scenario is "We don't know," and the worst case scenario is "Almost certainly not licensable." Either way, definitely not safe for us in open source.

@mttaggart @fhekland Well, that's right, you can't *license* it, but the public domain is compatible with nearly every FOSS license

The problem is, *not every place has the public domain* and *we don't know that AI generated output will be considered in the public domain everywhere*

This was the motivation that led to CC0, a public domain declaration with a permissive fallback license

We simply *don't know* yet, sufficiently, what the legal status of AI-generated output is. If it were "public domain worldwide", you'd effectively be mixing its output with yours and contributing that, and it wouldn't likely be that big of a deal. For instance, it might just weaken some of the eligibility for coverage under copyleft, but not copyleft compatibility... same with proprietary licenses.

But we *don't know yet*!

@ai6yr Yeah I've had Copilot give me my own Rust code for Windows exploits.

@bob_zim @fhekland @cwebber