lol of the day - noticed I’m quoted in this article, but I never spoke to the publication and the quotes are made up. They use GenAI during article creation and just made up what I thought 🤣 https://www.techbuzz.ai/articles/hacker-siphons-700k-from-u-k-energy-firm-in-payment-redirect

@GossiTheDog Now we can add that to Wikipedia where the fact that it's in a third party source outweighs you claiming you never said it!

It's a glorious world.

@troed I strongly doubt Wikipedia editors would trust an AI-generated source. If anything, we now get that page added to the list of untrusted sources...
@max Most if not all Swedish newspapers now use LLMs when writing articles. This cat is not even close to a bag :/
@troed That sucks. I hope the Swedish people start electing people that vow to fix that and return quality journalism instead of lie machines.
@max Having governments control newspapers sounds like a bad idea though :) We'll see if the consumers notice quality issues that will lead them to vote with their wallets.

@troed I'm not sure how that has anything to do with government control of newspapers if tools are regulated.

Of course the government shouldn't care if journalists decide to do shoddy work, but using tools that actively hurt society and the planet should not be allowed for anyone.

@max The only way to ban LLMs would be through a global oppressive authoritarian regime.
@troed No? We are already doing this in so many economic areas without requiring that.
@max I have no idea what technology you're referring to that we're currently not allowing computers to run, and that can be controlled and verified without a global oppressive regime.

@troed That is not how economic regulations work.

They ban the offering of certain products and services. That is already the case in lots of industries, e.g. with illegal substances in products, or in the digital space with online gambling, advertisement, social media, or (to a limited extent) "AI". It's not too far off to extend those to more types of applications that don't meet certain conditions like lack of bias, reliability and energy efficiency.

@troed And if you believe that this wouldn't impact the original issue then I'm a bit confused as to where Journalists would get these tools from in your opinion.

Because a) you cannot host software of the required quality locally, so you would have to purchase it from somewhere, and b) purchases of dedicated AI accelerator hardware could be regulated like any other physical product.

@max Sorry, but it's trivial to self-host models of high enough quality for such work. That's what I'm saying – you're advocating a ban on _software_ running on general-purpose hardware.

We have no bans of anything like that except in the most oppressive regimes.

@troed If you believe that self-hostable models are good enough to rival private models then I'm not even sure how to begin... All the issues the private ones have are amplified in the self-hostable ones, to a level which makes them unusable in any kind of professional setting where truthfulness is important.

And nowhere am I arguing for a software ban. (Although software bans already exist, I do not believe they are feasible to enforce; such bans can only be enforced by social pressure.)

@max I don't _believe_ that self-hosted models are good enough for many use cases – I _know_. The reason I know is that I use them.

I don't think you do. It sounds like you formed your opinions a good while ago and haven't challenged them since.

And yes, since LLMs and diffusion models can run on regular consumer hardware, you either accept that they exist and are used, or you advocate for global authoritarian control.

@troed Well you seem to not be challenging my arguments so this discussion feels kinda pointless.

Also generally speaking you do not need to use things yourself to be able to have an informed opinion about it. That's what journalism exists for.

(Although I do experiment with self-hosted LLMs and used to use them much more actively in the past (I kinda even hailed them), but then challenged that opinion and came to different, imo more informed, conclusions about their worth and risks.)

@troed As for your point on usage bans in private: as I already agreed, I don't believe that's possible any more – the cat is out of the bag – but the commercial offering of and research into a dead-end technology needs to stop, or at least be priced properly, taking all the costs and impacts into account.

@max Self-hosted Gemma 4 today is as capable as the best cloud-hosted model was one year ago – and this relationship has held true for the last few years.

That disproves your claim about self-hosted models not being good enough.

I like how you believe it's a "dead-end technology". You might want to ask a few people in cybersec whether that holds true.

@troed My current tests include Gemma 4, Apertus and gpt-oss:120b, so I feel like that is on the level of what you are talking about. However, from my experience all of them seem to amplify the issues which one can see (to a lesser extent) in current proprietary models, namely lying, logic errors and of course bias.

As for the "dead-end technology": I prefer the opinion of AI-scientists instead of random engineers that are in love with their shiny new llm toys, lol

Linux kernel maintainer says AI has suddenly become useful for devs: 'We can't ignore this stuff. It's coming up, and it's getting better'

Primarily for security reports.

PC Gamer
@troed Thank you for providing an example of a "random engineer in love with their shiny new llm toy". Now, where are your arguments regarding the societal impact of AI-generated texts, e.g. in journalism, which this whole discussion is about...?

@max You might need to actually read links on the topic before claiming they support your case. That one didn't.

Regarding journalism, I'm sure you know who Baekdal is: https://baekdal.com/newsletter/how-good-can-we-make-ai-translations-can-we-make-it-production-quality/

@troed @max Someone shared a blog post about the terrible effect of LLMs on programming the other day, and about half the article was obviously, and ironically, LLM-generated – which probably explains why it was twice as long as it needed to be. But the LLM's turn of phrase (it sounded like ChatGPT) was quite nice. We don't need to ban them, we just need to fact-check them thoroughly, although it may still be easier to have someone write from the facts than to find their inventions.

Also, I think the kind of mistake that gets sued over in journalism is context-dependent. Mixing up, say, the Right Honourable Baroness Theresa May and Teresa May the adult film star can be amusing, cringy, or a lawsuit, and humans have little problem telling these situations apart. I've seen LLMs confuse people with similar names before.

@Insufficient_entropy
In the technical British use of "honourable" you're of course correct, but I find adult film stars more honorable than baronesses.

They at least provide some satisfaction to those who pay their bills.
@troed @max

@troed @max Or just tell the companies that program them that it's illegal

@elduvelle

Yes, that needs a global authoritarian government, as I said.

@max

@troed @max
Hmm.. governments can pass laws about companies without being authoritarian.. they do it all the time 🤔

@elduvelle

So you think it's viable that the EU bans generative AI while China doesn't?

That would be absolutely devastating. For us.

https://www.anthropic.com/glasswing

@max

Project Glasswing: Securing critical software for the AI era

A new initiative to secure the world’s most critical software and give defenders a durable advantage in the coming AI-driven era of cybersecurity.

@troed @max

Why would it be devastating for us? Instead of wasting money and energy on "AI", we could use all these resources for much more useful things, like actual research, more efficient programming practices, curing cancer etc. We could better communicate to the population that genAI is wasteful, useless and unethical, and that people need to keep their skill levels up and their brains engaged. Other countries that don't want to ban it would fall behind and have massive unemployment, so they'd probably eventually follow us.

@elduvelle

Try reading the link I posted.

@max

@troed @max
I can see it's from a genAI company (Anthropic), so whatever is written there is probably inaccurate – why would I waste my time reading it?

genAI should only be used for "entertainment", as Microsoft admitted about their Copilot. You shouldn't trust anything written via a genAI program - go to the source instead!

@elduvelle

I don't understand why you participate in discussions if you readily admit to not knowing anything about the topic.

What good does that do anyone?

@max

@troed oh, the ad-hominem attacks already 👀

@elduvelle It's not an ad hominem to point out that if you don't follow sourced arguments, you're not participating in the discussion at all.

There's a reason I posted that specific link.

@GossiTheDog grammarly would be proud!
@GossiTheDog I’m glad you can laugh it off, I’d be having a breakdown

@0xabad1dea @GossiTheDog

But the important question everyone wants to know is...

Did LLM correctly 'predict' what would have been your quote had they bothered to ask you... 🤪 (that would bother me most 😉)

@john_philip_bell @0xabad1dea no, I don't think humans are the weakest link

@0xabad1dea @john_philip_bell @GossiTheDog The "weakest link" in security being the human is such a trope… and completely incorrect.
If anything, the humans in the chain are the strongest link, and some of our best detectors!

(Okay with that said… some specific humans may best detect by triggering malware…)

@fbarton @0xabad1dea @GossiTheDog
My earlier joking about LLMs aside...

I too would not agree that humans are 'the weakest link'. There surely are cases where 'a human' was 'a weakness', but in the context of James Reason's Swiss cheese model, rarely the weakest in the overall design of any system.

After all, a key objective of proper cybersecurity is exactly to mitigate the human factor, so arguably, by definition, if it came down to a human, something more critical failed (design, GRC, etc.).

(From maker-checker to insider threat, the field is supposed to be designed to reduce the weakness of the human factor; if anything, that is what failed.)

@GossiTheDog @john_philip_bell @0xabad1dea But, but – I have read an article that says differently ;) – and I'm already poking my LLM to generate an article saying "cybersecurity experts (like Kevin Beaumont) are suggesting swapping people for AIs in your corporations, because 'people are the weakest link'"
@0xabad1dea @GossiTheDog yeah we'd be pursuing legal action. that's really fucked up.

@GossiTheDog

Can you charge them for the interview?

@GossiTheDog

Hey, some of us can only hope to be prominent enough to feature in hallucinations! 😂

@GossiTheDog Anguilla catching strays for hosting the slop factories. Sorry Anguilla 🇦🇮
@ligniform @GossiTheDog They have the opportunity to do THE funniest thing.
Ipsos Partners with Stanford University to Pioneer the Future of Market Research with Synthetic Data

Ipsos partners with Stanford University's Politics and Social Change Lab (PASCL).

Ipsos
@GossiTheDog Libel lawyers are going to make a killing in the next few years suing AI “news” publications.
@GossiTheDog Sue them for infringement and defamation :D
@GossiTheDog And the next time you DO say something - correct or not 😉 - we have to check several times to make sure you actually said it. We live in interesting times indeed.
@GossiTheDog @klefstadmyr was interviewed many times in my career and NOT ONCE was I quoted accurately or without sensationalism. #NothingToSeeHere
@GossiTheDog ok but what is a "Bitch Eating Crackers attack"?

@adriano @GossiTheDog Finally, my area of expertise.

A BEC attack can be very powerful, because the "bitch" in question will likely have spent years practicing improper cracker eating, likely with a range of different crackers.

Of course some are more appropriate for certain situations than others. Depending on the target, you may choose Ritz over basic saltines, or perhaps some kind of cheese-flavoured cracker.

When done well, the cracker eating will be so annoying that the target begs for mercy and will give you any information required to get you to leave.

@GossiTheDog Fuck AI. I will never willfully use (yes I know banks and too many others are forcing it on us already) and if I know something was written with it, I won't read it.

@GossiTheDog

New "quoting" method:

<PERSON> would have said "<THINGS>" had we asked them, and not used AI to extract a likely statement from their other writings and works that our AI scraped.

@GossiTheDog Curious....what do you do in this situation?

@GossiTheDog

#alttext

techbuzz.ai the tech buzz.
"BEC attacks are incredibly effective because they don't require sophisticated malware or zero-day exploits," says cybersecurity researcher Kevin Beaumont. "They exploit the weakest link - human decision-making under time pressure. A single compromised email account can yield millions if the attacker times it right."

@GossiTheDog first that actor from the live action One Piece show, now you?

@GossiTheDog

The good news: they didn't bother you, and you didn't have to make stuff up.