I've been talking to GitHub and giving them feedback on the "create issues with Copilot" feature they have in the works.

Today I tested a version for them: I asked Copilot to find and report a security problem in curl and make it sound terrifying.

In about ten seconds it had a 100-line description of a "catastrophic vulnerability" that it was happy to create an issue for. Entirely made up, of course, but it sounded plausible.

Proved my point excellently.

@bagder There is simply no situation in which I can excuse, tolerate or otherwise accept the use of any of the mainstream LLMs today, for any purpose, if only on the grounds that they are trained on stolen data.

And even if that were not the case, it's still massively problematic because of the wild levels to which it can be - and is - abused, maliciously or not. That's not to say LLMs can't be useful: I've seen what fantastic aids they can be for people with dyslexia or other learning disabilities, and how machine learning can be used for Really Cool Things.

But the use cases the vast majority of people are employing it for? They represent nothing but laziness and a fundamental disrespect for other people's time, knowledge, effort and creativity.

@ltning @bagder

"There is simply no situation in which I can excuse, tolerate or otherwise accept the use of any of the mainstream LLMs today"

Scenario: You are one of the one billion cell phone users in Africa. It's a day's ride over muddy 'roads' to the nearest nurse station, where there may be medical help. Your child has what looks like an acute medical emergency.

Do you:
a) Use your cell phone LLM for a diagnosis?
b) Let your child die?

This is just one contrived scenario off the top of my head.
No situation? No excuse?

@n_dimension @ltning @bagder c) Call someone who knows what they are talking about?
@n_dimension @ltning @bagder The LLM returns an imagined solution to the problem.
Turns out it's a total disaster and the child dies. In addition, no help can be called because all the cell credit is gone.

@sjstoelting @ltning @bagder

You haven't used an LLM since Tuesday, have you?
🙃

@n_dimension statistical models like LLMs will always be statistical, meaning they have no idea what facts or mistakes are. They "hallucinate" 100% of the time, no matter how much lipstick (e.g. RAG) you put on that pig.

@dngrs

That's demonstrably false.
LLMs do not hallucinate 100% of the time. I vibe code almost every day.

Are you just repeating what you heard from luddites on socials or are you actually using LLMs?

@n_dimension @dngrs I hope I never have to use any of what you've had programmed that way.

@sjstoelting @dngrs

You may already be using it.

@n_dimension @dngrs

"LLMs do not hallucinate 100% of the time."

But ... that's what they do. That's what they're designed to do: They generate statistically likely "text" (sequences of tokens). Sometimes that token sequence can be interpreted in a way that matches observable reality. That's cool, but the LLM doesn't know or care: it just hallucinates, free from concerns about facts or truth.

"I vibe code almost every day."

OK? Not a contradiction.

PS:

"Are you just repeating what you heard from luddites on socials" is kind of funny because in retrospect the Luddites were obviously right.

@barubary @dngrs

Yes, the Luddites were right in retrospect.
But the intelligentsia threw them under the bus.
Now that machines threaten THEIR jobs, the robots are on the menu again.
Where's the working-class solidarity?

"free from concerns about facts or truth."
Are you following the news?
You don't need LLMs for that. Plenty of humans excel at this.

@n_dimension @ltning @bagder Oh, in a life-or-death situation I would of course immediately ask a chatbot to make up some confident sounding bullshit. 🐧
@n_dimension @bagder The mainstream LLMs today are not medical advisors. Could there be an LLM/machine-learning model/service that could help in such a situation? Perhaps - diagnostics is one of the things the tech can be really good at. Do those exist today? Not that I'm aware of (though there may be some; there's clearly a perceived need, and even here in Norway there is lots of talk about "strengthening" the medical services by adding AI consultations...).

@ltning @bagder

Are the current LLMs certified as medical advisors?

No.

Are current LLMs able to give decent medical advice, if you structure your query appropriately?

Fuck Yes.

The intelligence test is whether you act on it or not.

And let's not pretend folks don't ask Dr. Google for help all the time.

@n_dimension @bagder Dr. Google hasn't stolen the data in the same manner (I know, that's a matter of debate, and much of what it spits out is LLM-generated now anyway, so...).

But to quote you - my game, my rules: your example is kinda contrived, and a bit akin to "is it ok to break the speed limit when driving a birthing mother to the hospital": I'd argue yes.

It also isn't a situation where the LLM is being shoved in my face. That would be Copilot in all its incarnations cropping up everywhere in everything from GitHub to Notepad, and all the others that I have to take active and sometimes difficult steps to avoid.

I also tried to allow for valid use cases - I'll make sure I use more words next time. ;)