‘We didn’t vote for ChatGPT’: Swedish Prime Minister under fire for using AI
“You have to be very careful,” Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information.
I mean, sending queries to a search engine and sending them to an LLM are about the same in terms of exposing one's queries.
If the guy were complaining about information from an LLM not being cited or something, then I think I could see where he was coming from more.
If that's your problem with this story, then you're just saying they should run a local instance of ChatGPT on their own computers
I mean, I don’t really love that he uses AI to begin with, but getting an opinion on top of what you get from your advisors that you hired could be good in theory. Except that most AIs are programmed to be equally sycophantic and they’re just plain wrong most of the time.
To me the most irresponsible part is sending your thoughts as the leader of a country straight to a US-run company. You just know those prompts are being digested by the NSA directly, and possibly by other intelligence agencies.
This is the unintentional uncanny valley for me in AI.
I (was forced to) use ChatGPT for work. It can talk about anything, sounds very confident, and always seems to come up with something to help you solve your problems.
You talk with it about some niche content and suddenly have an ardent fan of said niche responding. It seemingly knows every little bit of info about that niche and surprises you with funny but apt quotes from your favorite show in the middle of conversations about something else.
This is just from a tiny bit of interaction, while at work.
I can imagine people being completely overwhelmed by having their thoughts confirmed and supported by something that seems so intelligent and responsive, and that remembers all your conversations. It literally remembers every word.
For many people it may be the first time in their lives that they've experienced a positive response to their thoughts. Not only that, they've also found someone eager to talk with them about it.
One thing I struggle with about AI is that the answers it gives always seem plausible, but any time I quiz it on things I understand well, it constantly gets things slightly wrong. Which tells me it's getting everything slightly wrong; I just don't know enough to notice.
I see the same issue with TV. Anyone who works in a complicated field has felt the sting of watching a TV show fail to accurately represent it, while most people watching just assume that's how the job works.
Something I found today: ask it for the lyrics of your favorite song/artist. It will make something up based on the combination of the two and maybe a little of what it was trained on… Even for really popular songs (I tried a niche one by Angelspit first, then tried “Sweet Caroline” for something more well known)
The new gpt-oss model replies with (paraphrased) “I can’t do that because it is copyrighted material”, which I have a sneaking suspicion is intentional, so there’s an excuse for not showing a very wrong answer to people who might start to doubt its ““intelligence””
… Like they give a flying fuck about copyright.
Because that’s what it is really trained for: to produce correct grammar and plausible sentences. It’s an unbelievable leap from preceding approaches to computer-generated text: in a matter of a few years, you went from little more than gibberish to output so incredibly realistic that it can be mistaken for intelligent conversation, easily passing the Turing Test (I actually had to go to Wikipedia to check and, indeed, this was verified this year; note that this applies specifically to recent models)
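To make “trained to produce plausible sentences” concrete, here’s a toy sketch of the generation loop (plain NumPy; the scoring function is a made-up stand-in for the actual network). Each step scores every token in the vocabulary, turns the scores into probabilities, and samples. Nothing in the loop models truth, only plausibility:

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]
rng = np.random.default_rng(0)

def fake_logits(context):
    # Stand-in for the real network: random scores here. A trained
    # model would compute these from the context, but either way the
    # output is just "which token is most plausible next".
    return rng.normal(size=len(vocab))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

context = ["the", "cat"]
for _ in range(4):
    probs = softmax(fake_logits(context))       # plausibility scores
    context.append(rng.choice(vocab, p=probs))  # sample, never verify
print(" ".join(context))
```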
So you have something that is sufficiently realistic that it can appear to be a human conversation partner. Human beings aren’t (yet) well-equipped to deal with something which appears to be human but whose behaviour diverges from typical human behaviour so radically (most relevantly, it won’t readily admit to not knowing something).

We evaluated 4 systems (ELIZA, GPT-4o, LLaMa-3.1-405B, and GPT-4.5) in two randomised, controlled, and pre-registered Turing tests on independent populations. Participants had 5 minute conversations simultaneously with another human participant and one of these systems before judging which conversational partner they thought was human. When prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73% of the time: significantly more often than interrogators selected the real human participant. LLaMa-3.1, with the same prompt, was judged to be the human 56% of the time -- not significantly more or less often than the humans they were being compared to -- while baseline models (ELIZA and GPT-4o) achieved win rates significantly below chance (23% and 21% respectively). The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test. The results have implications for debates about what kind of intelligence is exhibited by Large Language Models (LLMs), and the social and economic impacts these systems are likely to have.
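For a sense of what “significantly more often than chance” means there: in the three-party setup the interrogator picks one of two partners as the human, so chance is 50%. A quick back-of-the-envelope check (the trial count below is a hypothetical round number purely for illustration; the paper reports its own counts):

```python
from scipy.stats import binomtest

# Is a 73% "judged human" rate significantly above the 50% chance
# level? The number of judgments is HYPOTHETICAL, for illustration.
n = 100                  # hypothetical number of judgments
k = round(0.73 * n)      # times GPT-4.5 was picked as the human
result = binomtest(k, n, p=0.5, alternative="greater")
print(f"{k}/{n} judged human, one-sided p = {result.pvalue:.1e}")
# With these made-up numbers p is around 2e-6, i.e. far above chance.
```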
I would genuinely have more trust in a politician consulting the I Ching than ChatGPT - at least the Book of Changes requires some thinking and interpretation, fucking hell...
Sigh, all the tech-bros who convinced the general public that AI is in any way intelligent, or that it can think, reason, or understand concepts like truth/lies or right/wrong, should go on trial for crimes against humanity...
It doesn’t understand things the way humans do, but saying it doesn’t know anything at all isn’t quite accurate either. This thing was trained on the entire internet and your grandma’s diary. You don’t absorb that much data without some kind of learning taking place.
It’s not a knowledge machine, but it does have a sort of “world model” that’s emerged from its training data. It “knows” what happens when you throw a stone through a window or put your hand in boiling water. That kind of knowledge isn’t what it was explicitly designed for - it’s a byproduct of being trained on data that contains a lot of correct information.
It’s not as knowledgeable as the AI companies want you to believe - but it’s also not as dumb as the haters want you to believe either.
EDIT - y'all are so silly, you're just gonna downvote this incontrovertible proof of it doing exactly what you said it couldn't do, and instead just pretend it doesn't exist
Since it's so easy to do, I just went ahead and did it for you. Here are my prompts and the AI's replies. It will write any opinion you want it to argue for, and give cogent reasoning and arguments for that opinion.
Prompt: Write an approximately 200 word opinion piece saying that removing car lanes to create bike lanes is bad. Support this with clear arguments explaining your position
Why Removing Car Lanes for Bike Lanes Can Be Counterproductive
Reallocating car lanes to create bike lanes may seem progressive, but it can have unintended consequences that undermine its goals. In densely populated urban areas, vehicular traffic remains essential for commerce, emergency services, and commuting—especially for individuals who cannot rely on biking due to age, disability, or long travel distances. When car lanes are removed, congestion intensifies, leading to increased emissions as drivers idle longer and search for alternative routes.
Public transportation often depends on road infrastructure too. Buses trapped in traffic delays become less reliable, disproportionately affecting lower-income riders who depend on them. Additionally, essential services like ambulances and delivery trucks face new bottlenecks, risking delays that can have serious implications.
The assumption that cyclists will replace drivers ignores demographic and logistical realities. Bike lanes benefit a specific subset of urban users—but they shouldn’t come at the cost of broader mobility. Instead of eliminating car lanes, cities should consider expanding bike networks where space allows or investing in multimodal infrastructure that improves mobility for all users.
Urban planning should strive for balance, not substitution. Redesigning streets must enhance accessibility across the board, not create new friction between commuters who are simply trying to get where they need to go.
Prompt: Write an approximately 200 word opinion piece saying that removing car lanes to create bike lanes is good. Support this with clear arguments explaining your position
Opinion: Why Converting Car Lanes to Bike Lanes Benefits Everyone
Urban spaces must evolve to meet the needs of modern communities—and converting car lanes into bike lanes is one of the smartest, most equitable moves cities can make. First, bike lanes promote healthier lifestyles. By making cycling safer and more accessible, people are more likely to trade short car trips for active commuting, which supports physical health and lowers emissions.
Second, cities thrive on efficiency. Cars take up tremendous space and often carry just one passenger. Bikes, by contrast, use a fraction of that space and cause significantly less wear on infrastructure. Fewer car lanes mean reduced traffic volume over time as behavior adjusts, alleviating congestion—not worsening it.
Third, these changes make streets safer. Studies consistently show that well-designed bike infrastructure leads to fewer traffic collisions and fatalities. Cyclists separated from speeding vehicles feel safer, and drivers operate with greater awareness.
Finally, this shift reflects a cultural transformation: from car dominance to sustainable mobility. Prioritizing bike infrastructure signals that our cities value cleaner air, quieter streets, and more livable neighborhoods.
Reshaping roads isn’t just about bikes—it’s about building cities for people, not just machines. Let’s pedal forward together.
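If you don’t want to take the pasted output at face value, the whole demo is a few lines against the API. A minimal sketch using the OpenAI Python SDK (the model name is my assumption, not necessarily what was used above):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TEMPLATE = (
    "Write an approximately 200 word opinion piece saying that removing "
    "car lanes to create bike lanes is {stance}. Support this with clear "
    "arguments explaining your position"
)

for stance in ("bad", "good"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-tuned model will do
        messages=[{"role": "user", "content": TEMPLATE.format(stance=stance)}],
    )
    print(f"--- {stance} ---")
    print(response.choices[0].message.content)
```

The point stands either way: the same model argues both sides with equal confidence, depending only on the prompt.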
I’m going to say this as nicely as possible, you’ve been taken in. You may think it is good at this because it appears to be and it relays information with confidence. Sometimes it even gets things entirely correct. But it is not good at these things and will often get things wrong. Important details are inaccurate or even entirely made up. It does not vet sources, and it could never do so because it does not understand anything it is relaying to you. It is a tool designed to give you a more or less visual approximation of what you requested, but it can never truly understand what it’s talking about so it is entirely unreliable.
You’re being downvoted because you bought the snake oil and are now trying to promote it to other people.
It sounds like you’ve never used an LLM, mate.
You don’t need to get philosophical about the definition of understanding to realize they give you arguments as valid as anyone else’s.
This is like saying that you protect against malware by using an antivirus rather than by being careful about what you download and run. I hope you can see the analogy.
There’s no such thing as “not falling for emotions” for humans.
Your own opinions are the result of a much bigger amount of much more relevant data in any case.
An AI model is a set of coefficients averaging a dataset by a “one size fits all” measure. Those coefficients are found through an expensive process using criteria (again, “one size fits all”) set by the company making it. From those coefficients the machine generates (really, looks up) the most probable text. It’s like a music box. A beautiful toy.
So you have different motivations and abstract ideas in different situations, and you also have something like a shared codebook with other people making decisions: your instincts and associations. Reading what they say or seeing what they do, you build a mirror model in your head; it might be worse than the real thing, but it’s something very hard for text analysis to approach.
That model doesn’t: it has the same average line for all situations, and it also can’t determine (at the level described) that it doesn’t know something. To determine that you don’t know something, you need an abstract model, not a language model.
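For anyone who wants the “music box” intuition made concrete, here’s the smallest possible version: a bigram table that literally looks up the most frequent next word from its training text. A real LLM is incomparably more sophisticated, but the property described above survives: it always emits its most probable continuation and has no mechanism for “I don’t know”. (A toy sketch; the corpus is made up.)

```python
from collections import Counter, defaultdict

# A "music box" in miniature: count which word follows which,
# then always play back the most probable continuation.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    counts[current_word][next_word] += 1

def most_probable_next(word):
    if word not in counts:
        # Even here it can't say "I don't know": it falls back to the
        # globally most common word, just to produce *something*.
        return Counter(corpus).most_common(1)[0][0]
    return counts[word].most_common(1)[0][0]

word = "the"
output = [word]
for _ in range(6):
    word = most_probable_next(word)
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat sat"
```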
I don’t know what their current state is; all I’ve read and kinda understood seemed to be about optimizing computation for language models and structuring their application to imitate a syllogism system.
I think that, with current approaches, building a system that translates language into a proper abstract model, and abstract entities back into language, would be very computationally expensive (tokenization isn’t even close to that; you’d need some topology with areas that can be easily merged or split, rather than token points with distances).
There’s a certain irony in people reacting in an extremely predictable way - spewing hate and criticism the moment someone mentions AI - while seemingly not realizing that they’re reflexively responding to a prompt without any real thought, just like an LLM.
A tool isn’t bad just because it doesn’t do what you thought it would do. You just take that into account and adjust how you use it. A hammer isn’t a scam just because it can’t drive screws.