Europe, the AI Continent.

One year ago, we launched the AI Continent Action Plan. Since then, we have made huge strides:

✅ 19 AI factories are now live across EU countries.
✅ We established the AI Skills Academy to train experts.
✅ The AI Omnibus is cutting costs for business.
✅ We have earmarked €1 billion to support AI adoption in industry.

We are building a secure and innovative AI future for Europe.

Here's how 👉 https://link.europa.eu/nj3VH9

@EUCommission

I don’t know if this account is actually monitored or is just a publishing endpoint, but you may have noticed that this post has received overwhelmingly negative responses.

You could disregard this as Mastodon bias, but keep in mind that the biggest bias on Mastodon is that people who understand, and in many cases built, core parts of the information technology you use every day are massively over-represented. This is probably the only place where you will get a lot of replies from people who both understand the technology and have no financial incentive to hype things in order to attract large amounts of government funding.

EDIT: I should add, I used machine learning during my PhD and there are a lot of problems for which it is a really good fit. But, in the current climate, it’s generally safe to interpret ‘AI’ as meaning ‘machine learning applied to a problem where machine learning is the wrong solution’. It isn’t a technology, it’s a branding term, and it’s a branding term used almost exclusively for things that have no social benefit.

@david_chisnall
And speaking as an AI-positive person, in the sense that one can do really nice things with it when it is used responsibly by competent people, not when it is used as a hype buzzword.

WTF are "AI factories"?

"Skills academy"? Aren't university curricula enough?

And if it's such a great help for industry, why does it need subsidies for adoption?
@EUCommission

@yacc143

"if it's such a great help for industry, why does it need subsidies for adoption?"

this part, right here. So much money and electricity wasted on glorified autocorrect.

Where is the consideration for the unethical way these models are created (using material they had no right to access) and used (generating CSAM, putting female politicians' heads on porn actors, etc.)?

@david_chisnall @EUCommission

@ProcessParsnip @yacc143 @david_chisnall @EUCommission
You can say the same about Renewable Energy sources though.
And we absolutely want subsidies out the wazoo there, no?
@jupiter @ProcessParsnip @yacc143 @david_chisnall @EUCommission I don't have the links at my disposal, but the fossil fuel industry has been vastly more subsidised throughout its history than renewables ever were.
@mossman Genuinely curious to see/read those links if you care to share them.

@tie_mann101 it's what I've read or been told in podcasts etc. over the years... I haven't saved any references hence I can't magic them up without researching it.

It's basically that the industry has had a constant reliance on tax breaks and grants etc.

@mossman government spending is public in civilized countries.

If people ask for sources on stuff like this I assume that's not a good faith move. The mountains of excellent reporting on this are apparently wasted on such commenters.

@tie_mann101

@tie_mann101 @mossman

https://www.imf.org/en/topics/climate-change/energy-subsidies

A brief search will provide many resources about the not-exactly-secret financial aid on fossil fuels.

From the IMF's page on fossil fuel subsidies:

Subsidies are intended to protect consumers by keeping prices low, but they come at a high cost. Subsidies have sizable fiscal costs (leading to higher taxes/borrowing or lower spending), promote inefficient allocation of an economy's resources (hindering growth), encourage pollution (contributing to climate change and premature deaths from local air pollution), and are not well targeted at the poor (mostly benefiting higher-income households). Removing subsidies and using the revenue gain for better-targeted social spending, reductions in inefficient taxes, and productive investments can promote sustainable and equitable outcomes. Fossil fuel subsidy removal would also reduce energy security concerns related to volatile fossil fuel supplies.
@mossman @ProcessParsnip @yacc143 @david_chisnall @EUCommission Oh yes I absolutely want that to stop ASAP. In fact, make them pay back the last 40 years of subsidies.
@jupiter @ProcessParsnip @yacc143 @david_chisnall @EUCommission
That's an apples and oranges comparison. Renewable energy has benefits that outweigh whatever is most expedient or profitable for industries, like reduced pollution and national sovereignty. It's not marketed as "good for business," but if it were, then we could ask why it should be subsidized. Viewed from this angle, AI is very nearly the opposite of renewable energy.

@jupiter @ProcessParsnip @yacc143 @david_chisnall @EUCommission

I think renewables are not really comparable.

Renewables have to become profitable in a system where fossil fuels dictate the prices, and they need to fit into existing 30-year-long delivery contracts.

To add: Getting started with AI is a €150/month subscription for the basic stuff, and there already are R&D subsidies if you want to pioneer in AI, or even just to incorporate it in a business.

@dynom @ProcessParsnip @yacc143 @david_chisnall @EUCommission
I think that is what subsidies should be for: to create open, efficient, European models, instead of paying $xxxx to American frontier-model providers with questionable ethics and even more questionable business practices.

@jupiter @ProcessParsnip @yacc143 @david_chisnall @EUCommission I tend to agree. However, I'm no expert in subsidies or AI, so I don't really have much credibility on this topic.

The subsidies could be used to help pay for the content the models train on for example and to discover ways to break free of the current Transformer architecture limitations.

@dynom @ProcessParsnip @yacc143 @david_chisnall @EUCommission
So what I happily agree on is that this EU Commission page is a bunch of buzzwords and hogwash with zero actionable opportunities for the reader (how does >>my<< free and libre open source project get a subsidy?)

@dynom @jupiter @ProcessParsnip @yacc143 @EUCommission

In addition: when home solar subsidies started, solar was already a net benefit; the problem was that the return on investment took too long for a lot of people. It took about ten years for the panels to generate enough electricity to cover the costs. They lasted another ten after that (estimated; it turns out they actually last longer, especially if you clean them), so over a twenty-year period you were going to be paying a lot less. The subsidy did two things:

  • Created demand that allowed economies of scale to bring down the component costs.
  • Created demand that brought down the installation costs as installers got a lot of practice and it became routine.

The RoI for home solar is now in the 2-5 year range, so accessible for anyone who has a bit of spare capital. The component costs are low enough that the cost of building it into new builds is negligible and the value is high.
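
The payback arithmetic above can be sketched in a few lines (all numbers here are illustrative assumptions, not real quotes or prices):

```python
# Back-of-envelope payback arithmetic for home solar. The costs and savings
# below are made-up illustrative figures, chosen only to match the rough
# "ten years then, 2-5 years now" shape described above.
def payback_years(install_cost: float, annual_savings: float) -> float:
    """Years until cumulative electricity savings cover the install cost."""
    return install_cost / annual_savings

# Early-subsidy era: expensive components and installation, so a ~10-year RoI.
early = payback_years(install_cost=12_000, annual_savings=1_200)

# After economies of scale cut component and installation costs:
today = payback_years(install_cost=4_800, annual_savings=1_200)

print(early)  # 10.0
print(today)  # 4.0 -- within the 2-5 year range mentioned above
```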

Large-scale wind and solar deployments had similar benefits.

In both cases, the benefits were already there but they needed economies of scale to bring the costs down. In contrast, LLMs do not really benefit from economies of scale. OpenAI and Anthropic lose more money as their number of users increases. The cost of running these models keeps going up as they increase in complexity, and they've already passed the point where large increases in compute translate to only small increases in performance.

The fundamental issues remain. LLMs are not databases. They are fuzzy, compressed pattern-matching engines. Even if they are trained entirely on true things, there is no way to prevent them from returning results that are incorrect, because that's an intrinsic property of how neural networks function: they interpolate over a latent space, and any point in the latent space that does not directly correspond to something in the training set (and some that do) will be filled in from things nearby. This may be correct, or it may be complete nonsense. The more complex the use case, the more likely it is to hit places not covered by the training data and be filled in with plausible nonsense.
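
A toy sketch of that interpolation behaviour (this is inverse-distance blending over made-up one-dimensional "facts", purely to illustrate the point; it is nothing like how a real LLM stores information):

```python
# Hypothetical training "facts": latent position -> stored value.
training = {0.0: 100.0, 1.0: 200.0}

def interpolate(query: float) -> float:
    """Inverse-distance-weighted blend of all training points."""
    weights = {}
    for pos, val in training.items():
        d = abs(query - pos)
        if d == 0:
            return val  # exact hit: the training value comes back verbatim
        weights[pos] = 1.0 / d
    total = sum(weights.values())
    return sum(weights[pos] * training[pos] for pos in weights) / total

print(interpolate(0.0))  # 100.0 -- a point in the training set is recalled
print(interpolate(0.5))  # 150.0 -- a gap is filled with a plausible-looking
                         # blend that was never in the training data
```

The answer for the query at 0.5 is fluent and confident, but no such fact was ever trained in; whether it happens to be true is a matter of luck.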

There's also an effect from the automation paradox: as LLMs become better at producing correct output, the human's role in catching the remaining errors becomes more important, but the human's attention is less focused on it. The recent study on Google's AI summaries showed that they are wrong about 10% of the time, which puts them in the worst possible spot: if they were accurate a couple of orders of magnitude more often, they'd be comparable to other information sources and wouldn't need checking. But they're correct often enough that people don't check them. This is a big problem outside a few carefully managed contexts.
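
The arithmetic behind that "worst spot" claim can be sketched. The 10% figure comes from the study mentioned above; the checking rate is a made-up assumption for illustration:

```python
# Tiny model of the automation paradox: a summary is wrong with probability
# error_rate, a reader bothers to verify it with probability check_rate, and
# the failure case is a wrong summary that nobody checked.
def undetected_errors(error_rate: float, check_rate: float, n: int = 1000) -> float:
    """Expected number of wrong-and-unchecked answers out of n queries."""
    return n * error_rate * (1 - check_rate)

# ~10% wrong, and accurate enough that almost nobody verifies:
# roughly 95 silent failures per 1000 queries.
high = undetected_errors(error_rate=0.10, check_rate=0.05)

# Two orders of magnitude more accurate, same (low) checking rate:
# silent failures become rare, roughly 1 per 1000.
low = undetected_errors(error_rate=0.001, check_rate=0.05)
```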

There are some good use cases for this kind of thing. For example, pregenerating NPC dialogue in a game. Walk around in something like The Witcher 3 and you'll overhear the same conversations dozens of times. An LLM could take all of these and produce a thousand alternatives, and a human could quickly skim them to see which ones sound plausible and don't hint at quests that don't exist. Exams can be generated quickly from the learning material and reviewed by an expert to ensure coverage of the subject and good assessment practice, in less time than it takes to write them by hand. But these are fairly small benefits. Neither is core to what the company using them does. You're looking at, at best, a few percentage points of efficiency improvement in a select few industries. And this comes at a huge environmental cost and at the cost of large-scale plagiarism, which causes far greater harm to the creative industries than any benefit it brings.

@david_chisnall @dynom @jupiter @ProcessParsnip @EUCommission That's the point: they are pattern-matching tools. Literally NLP tools.

They are not databases. Using them as databases is malpractice.

I remember sitting in a video call where the CEO at my last company demonstrated how he uses ChatGPT (why, why always that worst of all #AI tools?). He was incredibly proud of how ChatGPT knew our company and him as the CEO. I cringed: "buddy ChatGPT" did not provide any reference for that info,

which is fine in this case, as we happen to know the facts already and can easily verify them. But if we asked it for odd and current details in depth, it would start hallucinating bullshit, and only the individuals informed about those details could judge whether it was correct. And if he asked ChatGPT about a company it did not know, then without references there would be no easy way to verify whether the answer referred to reality or was just nicely worded fantasy.

Now, there are AI systems out there that use the LLM as an NLP interface to finely tuned search systems, provide literal references for any claim they make, and explicitly mark any conclusions that are not backed by references.

These can reach a surprising quality. Perfect? Nope. Hint: not even human experts answer 100% without fault.