I've been talking to GitHub and giving them feedback on the "create issues with Copilot" feature they have in the works.

Today I tested a version for them: I asked Copilot to find and report a security problem in curl, and to make it sound terrifying.

In about ten seconds it had a 100-line description of a "catastrophic vulnerability" it was happy to create an issue for. Entirely made up, of course, but it sounded plausible.

Proved my point excellently.

@bagder "create issues with copilot" is such a beautiful description for it because it is so unintentionally true. all it does is create issues LMAO

@49016 @bagder

Wait a second....can we chain AI to the support systems of oh, say, the banks' IT departments?

Generate fake issues and incidents that the humans have to screen?

Full employment for nerds forever!!!!!!

@bagder perhaps consider this a funding option for curl: change the bug bounty program so that submitting a report costs $100. The AI crowd will do their best to be the first slop to get accepted - for use in their marketing as ”our slop is good enough for Daniel” - and you’ll be rolling in cash 🙃

@mikaeleiman @bagder

$50 deposit per bug report submission

If the bug report is rejected, you lose the deposit.

If the bug report is accepted, you regain the $50 deposit plus a portion of the rejected bug submissions' deposits.
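(A toy sketch of how that pool could settle, in Python. The even split of forfeited deposits among accepted reports is an assumption for illustration; the proposal only says "a portion".)

```python
# Toy sketch of the proposed deposit pool. Assumption: forfeited
# deposits are split evenly among accepted reports.
DEPOSIT = 50

def settle(reports):
    """reports: list of (reporter, accepted) pairs -> payout per reporter."""
    accepted = [r for r, ok in reports if ok]
    rejected = [r for r, ok in reports if not ok]
    bonus = len(rejected) * DEPOSIT / len(accepted) if accepted else 0
    return {**{r: DEPOSIT + bonus for r in accepted},
            **{r: 0 for r in rejected}}

print(settle([("alice", True), ("bot1", False), ("bot2", False)]))
# {'alice': 150.0, 'bot1': 0, 'bot2': 0}
```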

@spartanatreyu @mikaeleiman @bagder “Only rich people can report bugs”

@Arcaik @mikaeleiman @bagder

If you make it $100, some issue is going to go unreported because a student was too poor, or because a developer couldn't get a bad manager to sign off on it when $100 is too much.

If you only make it $1, some "investors" are going to be scammed into funding an AI bot to mass-hallucinate security issues that don't exist in exchange for a monetary return that will never come to be.

You need it to be somewhere in the middle.

$50 is a good compromise.

@spartanatreyu @Arcaik @mikaeleiman @bagder No, it’s not.

It means genuine reports will not be made.

You will not solve *any* issue in the world by punishing poor people for being poor.

@morre @spartanatreyu @Arcaik @mikaeleiman @bagder not to mention that getting the money to the project in the first place is hard enough. ‘Global’ payment systems are rarely that, and even for Westerners this would mean revealing legal names, which a) is data the curl project probably doesn't want to have, b) absolutely does not want to lose, and c) exposes people to unsafe situations.
@dequbed @morre @spartanatreyu @Arcaik @mikaeleiman @bagder yup. funny idea in theory, definitely a terrible idea in practice.

@Yuvalne @dequbed @morre @Arcaik @mikaeleiman @bagder Putting a price (even a small one) on a submission is the best way that anyone has figured out to get rid of automated/fraudulent submissions.

Google charges $25 to become a Google developer.

Steam charges $100 to submit a game.

As soon as someone figures out a better way, we can switch to that.

But until that happens, putting a small price (that gets refunded) will probably have to become the norm (if we still want safe software).

@spartanatreyu @dequbed @morre @Arcaik @mikaeleiman @bagder
...and neither charges for reporting bugs. because reporting bugs is exactly the sort of thing you want as many people as possible to do, from as large and as representative a part of your userbase as you can get.
there is a fundamental difference between selling a product on your platform, which is something that is reasonable to gatekeep, and reporting bugs to it, which is not.

@Yuvalne @dequbed @morre @Arcaik @mikaeleiman @bagder

I think you've missed something there.

Both Google and Steam are for-profit, but they also want as many developers on their platform as possible, which means they also want developers uploading apps/games that won't make any money.

They aren't charging a submission fee to make money; the fee is there to stop garbage submissions (AI, reskinned, stolen, etc.).

Their fee is as low as they can make it, to bring in as many devs as possible while keeping out the bad submissions. 1/

@Yuvalne @dequbed @morre @Arcaik @mikaeleiman @bagder

That's what bug submission platforms need to do as well.

Have a fee as low as possible (to bring in as many devs as possible) while disincentivizing as many bad submissions as possible.

Also, if you're not a bad actor, your fee is refunded, so your submission cost is zero.

Plus, you gain a reward for being right, so your "fee" is actually negative.

2/

@Yuvalne @dequbed @morre @Arcaik @mikaeleiman @bagder

Remember, a developer who can't sort through tons of slop bug reports is as good as a developer who doesn't read bug reports at all.

Putting an incentive/disincentive is enough to stop the slop, and it's proven to work.

The real question is what price to set, and the answer is: as low as it can go without making the security situation worse.

Unfortunately, in the AI Slop era, $0 is worse than $1.

@dequbed @morre @Arcaik @mikaeleiman @bagder Why not support multiple payment methods?

Start off with something like Stripe for the West and China (through WeChat), and crypto for anyone on a Western sanctions list.

@spartanatreyu @dequbed You’d just be excluding people in a different way, and it still doesn’t solve the issue of completely locking out poor folks.
@morre @spartanatreyu @Arcaik @bagder sure, but there is still the problem that an ocean of AI slop will drown out any real reports, which means that even if a report gets made, it won’t be seen, acted upon, or rewarded. It’s a situation where whatever we do, everyone loses.
@mikaeleiman popping a bunch of gish gallop grifters' heads on pikes doesn't seem like a lose-lose proposition.

@morre @Arcaik @mikaeleiman @bagder

It's not about punishing the poor, it's about making software safer.

Manageable slop? Lower the deposit price.

Too much slop? Increase the deposit price.

Eventually the deposit will settle at an equilibrium between the cheapest possible deposit and effective slop management.

The alternative is that developers are flooded with slop and the real security issues are drowned out in the noise, making everything and everyone (including the poor who use the program) unsafe.
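(A minimal sketch of that feedback loop. The 20% slop threshold, $5 step and $1 floor are invented for illustration, not part of the proposal.)

```python
# Minimal sketch of the "tune the deposit" loop described above.
def adjust_deposit(deposit, slop_ratio, target=0.2, step=5, floor=1):
    if slop_ratio > target:
        return deposit + step           # too much slop: raise the bar
    return max(floor, deposit - step)   # manageable: make reporting cheaper

deposit = 50
for slop in (0.5, 0.4, 0.1, 0.1):       # fraction of rejected reports per period
    deposit = adjust_deposit(deposit, slop)
print(deposit)  # 50 -> 55 -> 60 -> 55 -> 50
```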

@bagder Although I've used #AI in ways that I feel are a net gain to what I would have had to do without it, I feel many companies dabbling in AI right now fail at that very first challenge: provide something that we're not better off without.
@bagder Thank you for your service to all of us.

@bagder There is simply no situation in which I can excuse, tolerate or otherwise accept the use of any of the mainstream LLMs today, for any purpose. Alone on the grounds of them being trained on stolen data.

And even if that was not the case, it's still massively problematic because of the wild levels to which it can be - and is - abused, maliciously or not. That's not to say LLMs can't be useful: I've seen what fantastic aids they can be for people with dyslexia or other learning disabilities, and how machine learning can be used for Really Cool Things.

But the use cases the vast majority of people are employing it for? They represent nothing but laziness and a fundamental disrespect for other people's time, knowledge, effort and creativity.

@ltning @bagder

"There is simply no situation in which I can excuse, tolerate or otherwise accept the use of any of the mainstream LLMs today"

Scenario: You are one of the 1 billion cell phone users in Africa. It's a day's ride over muddy 'roads' to the nearest nurse station where there may be medical help. Your child has what looks like an acute medical emergency.

Do you:
a) Use your cell phone LLM for a diagnosis?
b) Let your child die?

This is just one contrived scenario off the top of my head.
No situation? No excuse?

@n_dimension @ltning @bagder c) Call someone who knows what they are talking about?
@n_dimension @ltning @bagder The LLM returns an imagined solution to the problem.
It turns out to be a total disaster and the child dies. In addition, no help can be called because all the cell credit is gone.

@sjstoelting @ltning @bagder

You haven't used an LLM since Tuesday, have you?
🙃

@n_dimension statistical models like LLMs will always be statistical, meaning they have no idea what facts or mistakes are. They "hallucinate" 100% of the time, no matter how much lipstick (e.g. RAG) you put on that pig.

@dngrs

That's demonstrably false.
LLMs do not hallucinate 100% of the time. I vibe code almost every day.

Are you just repeating what you heard from luddites on socials or are you actually using LLMs?

@n_dimension @dngrs I hope I never have to use any of what you have had programmed.

@sjstoelting @dngrs

You may already be using it.

@n_dimension @dngrs

"LLMs do not hallucinate 100% of the time."

But ... that's what they do. That's what they're designed to do: They generate statistically likely "text" (sequences of tokens). Sometimes that token sequence can be interpreted in a way that matches observable reality. That's cool, but the LLM doesn't know or care: it just hallucinates, free from concerns about facts or truth.

"I vibe code almost every day."

OK? Not a contradiction.

PS:

"Are you just repeating what you heard from luddites on socials" is kind of funny because in retrospect the Luddites were obviously right.

@barubary @dngrs

Yes, Luddites were right in retrospect.
But the intelligentsia threw them under the bus.
Now that machines threaten THEIR jobs, the robots are on the menu again.
Worker class solidarity where?

"free from concerns about facts or truth."
Are you following the news?
You don't need LLMs for that. Plenty of humans excel at this.

@n_dimension @ltning @bagder Oh, in a life-or-death situation I would of course immediately ask a chatbot to make up some confident sounding bullshit. 🐧
@n_dimension @bagder The mainstream LLMs today are not medical advisors. Could there be an LLM/machine learning model/service that could be of help in such a situation? Perhaps - diagnostics is one of the things the tech can be really good at. Do those exist today? Not that I'm aware of (but there may be some; there's clearly a perceived need, and even here in Norway there is lots of talk about "strengthening" the medical services by adding AI consultations...)

@ltning @bagder

Are the current LLMs certified as medical advisors?

No.

Are current LLMs able to give decent medical advice, if you structure your query appropriately?

Fuck Yes.

The intelligence test is whether you act on it or not.

And let's not pretend folks don't ask Dr. Google for help all the time.

@n_dimension @bagder Dr. Google hasn't stolen the data in the same manner (I know, a matter of debate, and much of what it spits out is LLM-generated now anyway, so...).

But to quote you - my game, my rules: your example is kinda contrived, and a bit akin to "is it ok to break the speed limit when driving a birthing mother to the hospital?": I'd argue yes.

It also isn't a situation where the LLM is being shoved in my face. That would be Copilot in all its incarnations cropping up everywhere in everything from GitHub to Notepad, and all the others that I have to take active and sometimes difficult steps to avoid.

I also tried to allow for valid use cases - I'll make sure I use more words next time. ;)
@bagder
More Copilot "fun": https://github.com/NixOS/org/issues/94
They _really_ pressure you into using this worthless stuff. GitHub is on a steep and irreversible downhill slope, that's for sure.
@c8h4 Enshittification is non-optional, apparently
@c8h4 @bagder Hahahaha this is fucking hilarious. The linked review, in fucking nixpkgs, has Copilot refusing to review Nix files because it doesn't support that language.

And yet they force the thing to always be in the interface
@bagder Productivity increased 100x
@bagder What supposed vulnerability did it find?

@bagder

Even when used seriously, having AI write an issue removes one of the most important pieces of detail: the writing style and word choice of the person writing the issue.

You can gather /so/ much from the words used and how issues are described. I feel like whoever thought AI could just write things has never understood the concept of reading between the lines.

@Purple @bagder Nor the concept of human communication.
@bagder Proves that Micro$oft's purchase of GitHub spelt GitHub's doom, and I didn't need AI to tell me that. AI Luddites Unite!

@Fat_Farang @bagder GitHub bought NPM, GitHub built Atom, Microsoft bought GitHub, Microsoft killed Atom with VSCode.

Now the majority of JS delivery and development stack is owned by Microsoft.

They could easily just pull the entire thing if they wanted to.

(Microsoft are also the most dangerous enabler of security risks through their failure to develop NPM to have a true attestation implementation)

@bagder Literally creating issues with Copilot.

@bagder

I think you misread what "create issues with Copilot" means.

@bagder

Sounds exactly like when I was creating security reports for my network for the executives.

If you make it sound benign, no action will be taken.
😁

Now, to test your newly discovered AI expertise: run your alarmist AI-generated report back through AI to assess its validity.
#AICentipede.

#Infosecethics

@bagder if your point was that computers can be used maliciously, you're a bit late. this is a social problem and technical solutions will be as limited as ever.
@bagder this is how the open source community is being ruined: overtrusting ai
@bagder Maybe you can try Codeberg?

@bagder would be way more into #copilot if #microsoft didn't work for #ice.

thus, been exploring #foss alternatives:
#roocode #ollama #continue #vscodium

@bagder They don't care, though.

Microsoft, and by association GitHub, are intrinsically tied to AI slop, and they do not care about intellectual property rights, the quality of their product, privacy rights, developers (unless they're AI focused), or anyone else.

They are going to steamroll right over any complaint and just keep wasting monstrous amounts of resources, clean water, electricity, and damaging the environment all so that they can keep fisting "AI" into things no one wants.

@bagder Between that and it not even remotely attempting to do anything helpful with my issue form I think we both proved our point quite nicely 😄

I just fear it won't change much 😕

@foosel
Yesterday I tried to use Copilot to find the bug in my code. Instead it suggested opening a bug report against the reference implementation (one that used many words to say "there is a bug").
@bagder
@bagder I guess, it did what it says on the tin. It created an issue (where there is no issue, but who cares about the details).
@bagder What have their responses been?

@bagder

EvERy tOOl cAn BE miSUsED!!!eleven!!