@GossiTheDog Now we can add that to Wikipedia where the fact that it's in a third party source outweighs you claiming you never said it!
It's a glorious world.
@troed I'm not sure how that has anything to do with government control of newspapers if tools are regulated.
Of course the government shouldn't care if journalists decide to do shoddy work, but using tools that actively hurt society and the planet should not be allowed for anyone.
@troed That is not how economic regulations work.
They ban the offering of certain products and services. That is already the case in lots of industries, e.g. with illegal substances in products, or in the digital space with online gambling, advertisement, social media, or (to a limited extent) "AI". It's not too far off to extend those to more types of applications that don't meet certain conditions such as lack of bias, reliability and energy efficiency.
@troed And if you believe that this wouldn't impact the original issue, then I'm a bit confused as to where journalists would get these tools from in your opinion.
Because a) you cannot host software of the required quality locally, so you would have to purchase it from somewhere, and b) purchases of dedicated AI accelerator hardware could be regulated like any other physical product.
@max Sorry, but it's trivial to self-host models of high enough quality for such work. That's what I'm saying - you're advocating a ban on _software_ running on general purpose hardware.
We have no bans of anything like that except in the most oppressive regimes.
@troed If you believe that self-hostable models are good enough to rival private models, then I'm not even sure how to begin... all the issues the private ones have are amplified in the self-hostable ones, to a level which makes them unusable in any kind of professional setting where truthfulness is important.
And nowhere am I arguing for a software ban. (Software bans already exist, but I do not believe they are feasible to enforce; such bans can only be upheld by social pressure.)
@max I don't just believe that self-hosted models are good enough for many use cases; I know they are. The reason I know is that I use them.
I don't think you do. It sounds like you formed your opinions a good while ago and haven't challenged them since.
And yes, since LLMs and diffusion models can run on regular consumer hardware, you either accept that they exist and are used, or you advocate for global authoritarian control.
@troed Well you seem to not be challenging my arguments so this discussion feels kinda pointless.
Also, generally speaking, you do not need to use things yourself to have an informed opinion about them. That's what journalism exists for.
(Although I do experiment with self-hosted LLMs and used to use them far more actively in the past (I kinda even hailed them), but then challenged that opinion and came to different, IMO more informed, conclusions about their worth and risks.)
@max Self-hosted Gemma 4 today is as capable as the best cloud hosted model was one year ago - and this relationship has held true for the last few years.
That disproves your claim about self-hosted models not being good enough.
I like how you believe it's a "dead-end technology". You might want to ask a few people in cybersec whether that holds true.
@troed My current tests include Gemma 4, Apertus and gpt-oss:120b, so I feel like that is on the level of what you are talking about. However, from my experience all of them seem to amplify the issues which one can see, to a lesser extent, in current proprietary models: namely lying, logic errors and of course bias.
As for the "dead-end technology": I prefer the opinion of AI scientists over that of random engineers who are in love with their shiny new LLM toys, lol
@max You might need to actually read links on the topic before claiming they support your case. That one didn't.
Regarding journalism, I'm sure you know who Baekdal is: https://baekdal.com/newsletter/how-good-can-we-make-ai-translations-can-we-make-it-production-quality/
@troed @max Someone shared a blog post about the terrible effect of LLMs on programming the other day, and about half the article was obviously, and ironically, LLM-generated, which probably explains why it was twice as long as it needed to be. But the LLM's turn of phrase (it sounded like ChatGPT) was quite nice. We don't need to ban them; we just need to fact-check them thoroughly, although it may still be easier to have someone write from the facts than to find their inventions.
Also, I think the kind of mistake that gets sued over in journalism is context-dependent. Mixing up, say, the Right Honourable Baroness Theresa May and Teresa May the adult film star can be amusing, cringey, or a lawsuit, and humans have little problem telling these situations apart. I've seen LLMs confuse people with similar names before.
@Insufficient_entropy
In the technical British use of "honourable" you're of course correct, but I find adult film stars more honorable than baronesses.
They at least provide some satisfaction to those who pay their bills.
@troed @max
So you think it's viable that the EU bans generative AI while China doesn't?
That would be absolutely devastating. For us.
Why would it be devastating for us? Instead of wasting money and energy on "AI", we could use all these resources for much more useful things, like actual research, more efficient programming practices, curing cancer, etc. We could better communicate to the population that genAI is wasteful, useless and unethical, and that people need to keep their skill levels up and their brains engaged. Other countries that don't want to ban it would fall behind and have massive unemployment, so they'd probably eventually follow us.
@troed @max
I can see that's from a genAI company (Anthropic), so whatever is written there is probably inaccurate - why would I waste my time reading it?
genAI should only be used for "entertainment", as Microsoft admitted about their Copilot. You shouldn't trust anything written via a genAI program - go to the source instead!
I don't understand why you participate in discussions if you readily admit to not knowing anything about the topic.
What good does that do anyone?
@elduvelle It's not an ad-hominem to point out that if you don't follow sourced arguments you're not participating in the discussion at all.
There's a reason I posted that specific link.
But the important question everyone wants to know is...
Did the LLM correctly 'predict' what your quote would have been, had they bothered to ask you... 🤪 (that would bother me the most 😉)
@0xabad1dea @john_philip_bell @GossiTheDog The "weakest link" in security being the human is such a trope… and completely incorrect.
If anything the humans in the chain are the strongest link, and some of our best detectors!
(Okay with that said… some specific humans may best detect by triggering malware…)
@fbarton @0xabad1dea @GossiTheDog
My earlier joking about LLMs aside...
I too would not agree that humans are 'the weakest link'. There surely are cases where 'a human' was 'a weakness', but in the context of Dr. Reason's Swiss cheese model, rarely the weakest one in the overall design of any system.
After all, a key objective of proper cybersecurity is precisely to mitigate the human factor, so arguably, by definition, if it came down to a human, something more critical failed first. (Design, GRC, etc.)
(From maker-checker to insider threat: the field is supposed to be designed to reduce the weakness of the human factor; if anything, that is what failed.)
Can you charge them for the interview?
Hey, some of us can only hope to be prominent enough to feature in hallucinations! 😂
@adriano @GossiTheDog Finally, my area of expertise.
A BEC attack can be very powerful, because the "bitch" in question will likely have spent years practicing improper cracker eating, likely with a range of different crackers.
Of course some are more appropriate for certain situations than others. Depending on the target, you may choose Ritz over basic saltines, or perhaps some kind of cheese-flavoured cracker.
When done well, the cracker eating will be so annoying that the target begs for mercy and will give you any information required to get you to leave.
@GossiTheDog
New "quoting" method:
techbuzz.ai the tech buzz.
"BEC attacks are incredibly effective because they don't require sophisticated malware or zero-day exploits," says cybersecurity researcher Kevin Beaumont. "They exploit the weakest link - human decision-making under time pressure. A single compromised email account can yield millions if the attacker times it right."
The good news: they didn't bother you, and you didn't have to make stuff up.