From what I can glean, most of you who use GenAI coding assistants prefer Claude, even those of you who work professionally in free and open source software.

How does your organisation look upon this...

https://github.com/anthropics/claude-code/blob/main/LICENSE.md

... in combination with the fact that https://claude.ai/install.sh installs an opaque binary from a Google Storage bucket and executes it?

To clarify what I mean: the vendor has chosen not to put its software under a copyleft license, which means they have not undertaken a legal obligation to give you source code that matches your binary.

They're also not giving you a means to verify the binary that's in that Google Storage bucket, except using a checksum that is *also* in that bucket. So it would seem like you have no way to tell that the software in the binary has anything to do with the code in the repo.
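To make that concrete, here's a sketch of why a checksum stored alongside the artifact it covers only catches accidental corruption, not tampering: anyone who can replace the binary in the bucket can regenerate the adjacent checksum file just as easily. (File names here are illustrative, not Anthropic's actual artifact names.)

```shell
workdir=$(mktemp -d)
cd "$workdir"

# An attacker swaps in a modified binary...
printf 'tampered binary' > claude-binary

# ...and simply regenerates the checksum file sitting next to it:
sha256sum claude-binary > claude-binary.sha256

# The client's "verification" against that same bucket still passes:
sha256sum -c claude-binary.sha256   # prints "claude-binary: OK"
```

A checksum only becomes a meaningful integrity check when it arrives over a channel the attacker doesn't control, e.g. a detached signature verified against a key you obtained out of band.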

What am I missing/misunderstanding?
@xahteiwi that's how proprietary software works?

@zhenech Yes, I know. It's also how proprietary software sucks. 🙂

Now, I fully appreciate that lots of people/orgs out there couldn't care less about that suckage, or don't even consider it suckage. That's why I asked specifically how a company that builds or otherwise relies on FOSS assesses Claude's potential (or real) impact.

@zhenech Also, https://opencode.ai/ does exist, is at least MIT licensed (no copyleft, but better than "All Rights Reserved"), and does give you somewhat better verification options including a local install, so I'm curious how people/orgs assess those two relative to each other.

cc @larsmb


@xahteiwi @zhenech Limited testing suggests that CC just still performs better, perhaps because it's what the model is trained/fine-tuned for.

Unfortunately that's quite hard to measure.

At the end of the day, we're all stuck with plenty of proprietary software and services and people pick their battles.