I think something worth highlighting is that both communities are concerned with empowerment and disempowerment. I tend to think these tools *appear* empowering but are actually disempowering, in their current configuration.
I don't believe LLMs are fundamentally disempowering; they could be part of an empowering future. But the present *industrial deployment* of AI tech within our *socio-economic environment* is net-disempowering. And I worry that there is a big rush to adopt with so little settled about the legal implications on the one side, and with *well-known* problems in AI-generated code on the other.
Not all AI coding usage is necessarily doomed to be a problem: using local models to "lint" code or to discover vulnerabilities and bugs is probably quite good, in the way that having fuzzers is good. But there is so much pressure to adopt beyond the space of what's good, and to dismiss real concerns, that I worry it is going to take a long time to undo the damage.