@GossiTheDog Now we can add that to Wikipedia where the fact that it's in a third party source outweighs you claiming you never said it!
It's a glorious world.
@troed I'm not sure how that has anything to do with government control of newspapers if it's the tools that are regulated.
Of course the government shouldn't care if journalists decide to do shoddy work, but using tools that actively hurt society and the planet should not be allowed for anyone.
@troed That is not how economic regulations work.
They ban the offering of certain products and services. That is already the case in lots of industries, e.g. with illegal substances in products, or in the digital space with online gambling, advertising, social media, or (to a limited extent) "AI". It's not too far off to extend those bans to more types of applications that don't meet certain conditions like unbiasedness, reliability and energy efficiency.
@troed And if you believe that this wouldn't impact the original issue, then I'm a bit confused as to where journalists would get these tools from, in your opinion.
Because a) you cannot host software of the required quality locally, so you would have to purchase it from somewhere, and b) purchases of dedicated AI accelerator hardware could be regulated like any other physical product.
@max Sorry, but it's trivial to self-host models of high enough quality for such work. That's what I'm saying - you're advocating a ban on _software_ running on general purpose hardware.
We have no bans of anything like that except in the most oppressive regimes.
@troed If you believe that self-hostable models are good enough to rival private models, then I'm not even sure how to begin... all the issues the private ones have are amplified in the self-hostable ones, to a level that makes them unusable in any kind of professional setting where truthfulness is important.
And nowhere am I arguing for a software ban. (Although software bans already exist, I do not believe they are feasible to enforce; such bans can only be enforced by social pressure.)
@max I don't believe that self-hosted models are good enough for many use cases, I know. The reason I know is because I use them.
I don't think you do. It sounds like you formed your opinions a good while ago and haven't challenged them since.
And yes, since LLMs and diffusion models can run on regular consumer hardware, you either accept that they exist and are used, or you advocate for global authoritarian control.
@troed Well you seem to not be challenging my arguments so this discussion feels kinda pointless.
Also, generally speaking, you do not need to use things yourself to have an informed opinion about them. That's what journalism exists for.
(Although I do experiment with self-hosted LLMs, and used to use them a lot more actively in the past (kinda even hailed them), but then challenged that opinion and came to different, imo more informed, conclusions about their worth and risks.)
@max Self-hosted Gemma 4 today is as capable as the best cloud hosted model was one year ago - and this relationship has held true for the last few years.
That disproves your claim about self-hosted models not being good enough.
I like how you believe it's a "dead-end technology". You might want to ask a few people in cybersec whether that holds true.
@troed My current tests include Gemma 4, Apertus and gpt-oss:120b, so I feel like that is on the level of what you are talking about. However, in my experience all of them amplify the issues that current proprietary models show to a lesser extent, namely lying, logic errors and of course bias.
As for the "dead-end technology": I prefer the opinion of AI scientists over that of random engineers who are in love with their shiny new LLM toys, lol
@max You might need to actually read links on the topic before claiming they support your case. That one didn't.
Regarding journalism, I'm sure you know who Baekdal is: https://baekdal.com/newsletter/how-good-can-we-make-ai-translations-can-we-make-it-production-quality/