Given GitHub's hostile push for AI, I desperately want to move Bottles's code to GNOME GitLab and keep the Codeberg mirror up.

I'm legit so fucking tired of it. It makes it hard for me to develop Bottles without Copilot spam demotivating me. Their hostile push has gotten to be like Discord, where everything is Nitro this, Nitro that, except here it's COPILOT THIS, COPILOT THAT. I'm stuck playing the opposite of Where's Wally: as in, "try not to find any mention of Copilot".

It's been bothering me so much that it has become more and more difficult to contribute to projects hosted on GitHub. I also get uncomfortable when I contribute to software mirrored to GitHub, which includes GNOME apps.

@TheEvilSkeleton Copilot is disabled at the org level in Bottles, both for the "Coding agent" and for general "Access". What you see in the issues sidebar is a kind of advertisement suggesting you try it, but it doesn't kick in unless you request it.

I would like to know more about the phrase «Copilot spam demotivating me»; could you give an example of what Copilot does?
I'm honestly interested in understanding more.

@pietrodc0 @TheEvilSkeleton
You can now ask Copilot to open issues for you on repositories. It doesn't check for duplicates, and it tends to write more rather than less about things it doesn't understand.

Better yet, the issue will look like it was opened by the user, so before you read it, you can't know whether it was written by Copilot or not.

This is just a huge waste of time for developers, and it's another step in Microsoft's enshittification of GitHub.

@monster @TheEvilSkeleton you mean in the IDE integrations? I don't see this feature on GitHub 🤔

@pietrodc0 @monster @TheEvilSkeleton it's a recent change: Copilot is now integrated into the issue creator, so people who used to create low-effort, irrelevant, or duplicate issues with AI (and had to copy-paste before) can now do it directly from GitHub, and the interface actively invites them to do so

The original announcement of the feature:

https://github.blog/changelog/2025-05-19-creating-issues-with-copilot-on-github-com-is-in-public-preview/

@odnankenobi @monster @TheEvilSkeleton looks like I haven't received that update yet, going to check if I can opt in to the beta features

@odnankenobi @monster @TheEvilSkeleton

ah, first of all you have to go into "immersive mode", you need an active Copilot license (so not everyone, especially spammers, has one), and then ask it to create the issue for you.

In my personal opinion it's not such a heavily pushed feature that it encourages everyone to open bad issues with no fact-checking...

@odnankenobi @monster @TheEvilSkeleton

moreover, it's literally ChatGPT; it's going to do what it's asked to do. So if it's not asked to check for duplicates, it's not going to do it.
I'm not saying it will perform a proper, thorough check if requested, and I need to run some tests before I can give full feedback, but I don't see the huge problem people are describing.

@pietrodc0 @odnankenobi @monster @TheEvilSkeleton why try to rationalize what is undeniably an invasive advertisement for a feature of negative value?

@mirkobrombin @lhp @odnankenobi @TheEvilSkeleton I don't know about you, but issues filed by users commonly already lack information; how do you think this will go with an LLM that just gets a short instruction and doesn't check for duplicates?

@monster Sounds like a personal impression rather than something based on actual data or testing. Feels more like "bot = bad" bias than a grounded argument. Don't get me wrong, I honestly don't understand the point. Can you make it clear, please? Or point me to a blog post, something like that.

@mirkobrombin I'm a lot less biased about AI than many others; it's not as simple as "bot = bad".

My point is: at this time, I do not think the feature is at a stage where it should have been rolled out, and the current implementation leads users to submit time-wasting issues.

I haven't thoroughly tested it, but the quick test I did showed the bot opening an issue for something that had already been requested 5+ times, padded with a lot of "filler" text.