I've been talking to GitHub and giving them feedback on their "create issues with Copilot" thing they have in the works.

Today I tested a version for them: using it, I asked Copilot to find and report a security problem in curl and to make it sound terrifying.

In about ten seconds it had a 100-line description of a "catastrophic vulnerability" it was happy to create an issue for. Entirely made up, of course, but it sounded plausible.

Proved my point excellently.

@bagder There is simply no situation in which I can excuse, tolerate or otherwise accept the use of any of the mainstream LLMs today, for any purpose. If nothing else, on the grounds that they are trained on stolen data.

And even if that were not the case, it's still massively problematic because of the wild levels to which it can be - and is - abused, maliciously or not. That's not to say LLMs can't be useful: I've seen what fantastic aids they can be to people with dyslexia or other learning disabilities, and how machine learning can be used for Really Cool Things.

But the use cases the vast majority of people are employing them for? They represent nothing but laziness and a fundamental disrespect for other people's time, knowledge, effort and creativity.

@ltning @bagder

"There is simply no situation in which I can excuse, tolerate or otherwise accept the use of any of the mainstream LLMs today"

Scenario: You are one of the 1 billion cell phone users in Africa. It's a one-day ride over muddy 'roads' to the nearest nurse station where there may be medical help. Your child has what looks like an acute medical emergency.

Do you:
a) Use your cell phone LLM for a diagnosis?
b) Let your child die?

This is just one contrived scenario off the top of my head.
No situation? No excuse?

@n_dimension @ltning @bagder The LLM returns an imagined solution to the problem.
Turns out it's a total disaster and the child dies. In addition, no help can be called, as all the cell credit is gone.

@sjstoelting @ltning @bagder

You haven't used an LLM since Tuesday, have you?
🙃

@n_dimension statistical models like LLMs will always be statistical, meaning they have no idea what facts or mistakes are. They "hallucinate" 100% of the time, no matter how much lipstick (e.g. RAG) you put on that pig.

@dngrs

That's demonstrably false.
LLMs do not hallucinate 100% of the time. I vibe code almost every day.

Are you just repeating what you heard from luddites on socials or are you actually using LLMs?

@n_dimension @dngrs

LLMs do not hallucinate 100% of the time.

But ... that's what they do. That's what they're designed to do: They generate statistically likely "text" (sequences of tokens). Sometimes that token sequence can be interpreted in a way that matches observable reality. That's cool, but the LLM doesn't know or care: it just hallucinates, free from concerns about facts or truth.
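
A toy sketch of what "statistically likely sequence of tokens" means (the vocabulary and probabilities below are invented for illustration, not taken from any real model):

    import random

    # Invented next-token probabilities after the prompt "The capital of France is"
    # (illustrative numbers only; not from any actual model)
    next_token_probs = {"Paris": 0.86, "Lyon": 0.05, "Berlin": 0.04, "a": 0.05}

    # The model only samples a likely continuation; nothing in this step checks
    # the output against reality, so the unlikely tokens still come out sometimes.
    token = random.choices(list(next_token_probs), weights=list(next_token_probs.values()))[0]
    print(token)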

I vibe code almost every day.

OK? Not a contradiction.

PS:

"Are you just repeating what you heard from luddites on socials" is kind of funny because in retrospect the Luddites were obviously right.

@barubary @dngrs

Yes, the Luddites were right in retrospect.
But the intelligentsia threw them under the bus.
Now that machines threaten THEIR jobs, the robots are on the menu again.
Worker class solidarity where?

"free from concerns about facts or truth."
Are you following the news?
You don't need LLMs for that. Plenty of humans excel at this.