If Codeberg is trying to "compete" against GitHub and GitLab, why does it refuse to take a look at AI assistants? Apart from infringing on authors' rights and questionable output quality, we think that the current hype wave led by major companies will leave a climate disaster in its wake: https://disconnect.blog/generative-ai-is-a-climate-disaster/

Other _sustainable_ (and cheaper!) ways for increasing efficiency in software development exist: In-project communication, powerful automation pipelines and reducing boilerplate.

@Codeberg
> Other _sustainable_ (and cheaper!) ways for increasing efficiency in software development exist

This is the big thing for me even if I ignore everything else that's an issue. I've tried to use AI code assistants multiple times and I can't shake the feeling that getting good with them would be a lot of work for not nearly as much payoff, when I know there are infrastructure things I could work on that would make me WAY more pawductive and that wouldn't have those downsides >w<

@Codeberg Even with a local assistant, there are better things I can do with that processing power.

And even in the instances where I've tried one and found it helpful, I can't shake the feeling that the stuff it's helping with is kind of bullshit. If I'm going to throw so much compute at something, it could at least do something more significant than speed up boilerplate.

@Codeberg One illustrative moment for me was OpenAI talking about setting up dedicated VMs for their models to run code in, so they can give you the result.

And my immediate thought was, wait a sec, *I* don't have dedicated VMs to run code in. Why not? Wouldn't that be more useful for me? Why does the LLM have better resources than me?