Europe, the AI Continent.

One year ago, we launched the AI Continent Action Plan. Since then, we have made huge strides:

✅ 19 AI factories are now live across EU countries.
✅ We established the AI Skills Academy to train experts.
✅ The AI Omnibus is cutting costs for business.
✅ We have earmarked €1 billion to support AI adoption in industry.

We are building a secure and innovative AI future for Europe.

Here's how 👉 https://link.europa.eu/nj3VH9

@EUCommission

I don’t know if this account is actually monitored or is just a publishing endpoint, but you may have noticed that this post has received overwhelmingly negative responses.

You could disregard this as Mastodon bias, but keep in mind that the biggest bias on Mastodon is that people who understand, and built, core parts of the information technology you use every day are massively over-represented. This is probably the only place you will get a lot of replies from people who both understand the technology and have no financial incentive to hype it in order to attract large amounts of government funding.

EDIT: I should add, I used machine learning during my PhD and there are a lot of problems for which it is a really good fit. But, in the current climate, it’s generally safe to interpret ‘AI’ as meaning ‘machine learning applied to a problem where machine learning is the wrong solution’. It isn’t a technology, it’s a branding term, and it’s a branding term used almost exclusively for things that have no social benefit.

@david_chisnall
And speaking as an AI-positive person, in the sense that one can do really nice things with it when used responsibly by competent people, not when used as a hype buzzword.

WTF are "AI factories"?

"Skills academy"? Aren't university curricula enough?

And if it's such a great help for industry, why does it need subsidies for adoption?
@EUCommission

@yacc143

"if it's such a great help for industry, why does it need subsidies for adoption?"

this part, right here. So much money and electricity wasted on glorified autocorrect.

Where is the consideration for the unethical way these models are created (using material they had no right to access) and used (such as CSAM, putting female politicians' heads on porn actors' bodies, etc.)?

@david_chisnall @EUCommission

@ProcessParsnip @yacc143 @david_chisnall @EUCommission
You can say the same about Renewable Energy sources though.
And we absolutely want subsidies out the wazoo there, no?
@jupiter @ProcessParsnip @yacc143 @david_chisnall @EUCommission I don't have the links at my disposal, but the fossil fuel industry has been vastly more subsidised throughout its history than renewables ever were.
@mossman Genuinely curious to see/read those links if you care to share them.

@tie_mann101 it's what I've read or been told in podcasts etc. over the years... I haven't saved any references hence I can't magic them up without researching it.

It's basically that the industry has had a constant reliance on tax breaks and grants etc.

@mossman government spending is public in civilized countries.

If people ask for sources on stuff like this I assume that's not a good faith move. The mountains of excellent reporting on this are apparently wasted on such commenters.

@tie_mann101

@tie_mann101 @mossman

https://www.imf.org/en/topics/climate-change/energy-subsidies

A brief search will provide many resources about the not-exactly-secret financial aid on fossil fuels.

Fossil Fuel Subsidies

Subsidies are intended to protect consumers by keeping prices low, but they come at a high cost. Subsidies have sizable fiscal costs (leading to higher taxes/borrowing or lower spending), promote inefficient allocation of an economy’s resources (hindering growth), encourage pollution (contributing to climate change and premature deaths from local air pollution), and are not well targeted at the poor (mostly benefiting higher income households). Removing subsidies and using the revenue gain for better targeted social spending, reductions in inefficient taxes, and productive investments can promote sustainable and equitable outcomes. Fossil fuel subsidy removal would also reduce energy security concerns related to volatile fossil fuel supplies.

IMF
@mossman @ProcessParsnip @yacc143 @david_chisnall @EUCommission Oh yes I absolutely want that to stop ASAP. In fact, make them pay back the last 40 years of subsidies.
@jupiter @ProcessParsnip @yacc143 @david_chisnall @EUCommission
That's an apples and oranges comparison. Renewable energy has benefits that outweigh whatever is most expedient or profitable for industries, like reduced pollution and national sovereignty. It's not marketed as "good for business," but if it were, then we could ask why it should be subsidized. Viewed from this angle, AI is very nearly the opposite of renewable energy.

@jupiter @ProcessParsnip @yacc143 @david_chisnall @EUCommission

I think renewables are not really comparable.

Renewables have to get profitable in a system where fossil fuels dictate the prices, and need to fit into existing 30-year delivery contracts.

To add: getting started with AI is a €150/month subscription for the basic stuff, and there already are R&D subsidies if you want to pioneer in AI, or even just incorporate it into a business.

@dynom @ProcessParsnip @yacc143 @david_chisnall @EUCommission
I think that's what subsidies should be for: creating open, efficient, European models, instead of paying $xxxx to American frontier-model providers with questionable ethics and even more questionable business practices.

@jupiter @ProcessParsnip @yacc143 @david_chisnall @EUCommission I tend to agree. However, I'm no expert in subsidies or AI, so I don't really have much credibility on this topic.

The subsidies could be used to help pay for the content the models train on for example and to discover ways to break free of the current Transformer architecture limitations.

@dynom @ProcessParsnip @yacc143 @david_chisnall @EUCommission
So what I happily agree on is that this EU Commission page is a bunch of buzzwords and hogwash with zero actionable opportunities for the reader (how does >>my<< free and libre open source project get a subsidy?)

@dynom @jupiter @ProcessParsnip @yacc143 @EUCommission

In addition: when home solar subsidies started, solar was already a net benefit; the problem was that the return on investment was too long for a lot of people. It took about ten years for the panels to generate enough electricity to cover the costs. They lasted another ten after that (estimated; it turns out they actually last longer, especially if you clean them), so over a twenty-year period you were going to pay a lot less. The subsidy did two things:

  • Created demand that allowed economies of scale to bring down the component costs.
  • Created demand that brought down the installation costs as installers got a lot of practice and it became routine.

The RoI for home solar is now in the 2-5 year range, so accessible for anyone who has a bit of spare capital. The component costs are low enough that the cost of building it into new builds is negligible and the value is high.
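To make that arithmetic concrete, here is a toy payback calculation; all cost and savings figures are purely illustrative, not real market data:

```python
# Back-of-envelope payback periods for home solar, matching the rough
# timescales described above. All figures are made up for illustration.

def payback_years(install_cost: float, annual_savings: float) -> float:
    """Years until cumulative electricity savings cover the up-front cost."""
    return install_cost / annual_savings

# Early-subsidy era: high installation cost, roughly ten-year break-even.
early = payback_years(12_000, 1_200)
# After economies of scale cut component and installation costs:
today = payback_years(4_800, 1_200)

print(early)  # 10.0
print(today)  # 4.0 -- within the 2-5 year range
```

The subsidy didn't change the physics; it changed the denominator and numerator until the break-even point fell within most households' planning horizon.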

Large-scale wind and solar deployments had similar benefits.

In both cases, the benefits were already there but they needed economies of scale to bring the costs down. In contrast, LLMs do not really benefit from economies of scale. OpenAI and Anthropic lose more money as their number of users increases. The cost of running these models keeps going up as they increase in complexity, and they've already passed the point where large increases in compute translate to only small increases in performance.

The fundamental issues remain present. LLMs are not databases. They are fuzzy compressed pattern-matching engines. Even if they are trained entirely on true things, there is no way to prevent them from returning results that are incorrect because that's an intrinsic property of how neural networks function: they interpolate over a latent space and any point in the latent space that does not directly correspond to something in the training set (and some that do) will be filled in with things nearby. This may be correct, or it may be complete nonsense. The more complex the use case, the more likely it will hit places not covered by the training data and be filled in with plausible nonsense.
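That interpolation behaviour can be shown with a toy stand-in: a kernel smoother, not a real neural network, but it exhibits the same property of answering every query with a blend of nearby training points, whether or not the blend corresponds to anything true:

```python
# Toy analogue of interpolating over a latent space: a kernel smoother
# answers *every* query with a distance-weighted blend of training
# points. This is not an LLM; it just isolates the interpolation
# behaviour described above.
import math

TRAIN = [(0.0, 0.0), (1.0, 10.0), (2.0, 0.0)]  # (input, true value) pairs

def predict(x: float, bandwidth: float = 0.3) -> float:
    """Weighted average of training labels; weights decay with distance."""
    weights = [math.exp(-((x - xi) ** 2) / (2 * bandwidth ** 2))
               for xi, _ in TRAIN]
    return sum(w * yi for w, (_, yi) in zip(weights, TRAIN)) / sum(weights)

print(predict(1.0))  # near a training point: close to the true value 10
print(predict(0.5))  # between points: ~5.0, a value that appears nowhere
                     # in the data, confidently "filled in" regardless
```

The model never says "I don't know": any point in the input space gets a fluent answer, and only points close to the training data get a correct one.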

There's also an effect from the automation paradox: as LLMs get better at producing correct output, the human's role in catching the errors becomes more important, but the human's attention is less focused on it. The recent study on Google's AI summaries showed that they are wrong about 10% of the time, which is squarely in the worst spot: if they were wrong a couple of orders of magnitude less often, they'd be comparable to other information sources and wouldn't need checking. But they're correct often enough that people don't check them. This is a big problem outside a few managed contexts.

There are some good use cases for this kind of thing. For example, pregenerating NPC dialogue in a game. Walk around in something like The Witcher 3 and you'll overhear the same conversations dozens of times. An LLM could take all of these and produce a thousand alternatives, and a human could quickly skim them to see which ones sound plausible and don't hint at quests that don't exist. Exams can be generated quickly from the learning material and reviewed by an expert for coverage of the subject and good assessment practice, in less time than it takes to write them by hand. But these are fairly small benefits. Neither is core to what the company using them does. You're looking at, at best, a few percentage points of efficiency improvement in a select few industries. And this comes with a huge environmental cost and at the cost of large-scale plagiarism, which does far greater harm to the creative industries than any benefit it brings.

@david_chisnall @dynom @jupiter @ProcessParsnip @EUCommission That's the point: they are pattern-matching tools. Literally NLP tools.

They are not databases. Using them as databases is malpractice.

I remember sitting in a video call where the CEO at my last company demonstrated how he uses ChatGPT (why, why always that worst of all #AI tools?). He was incredibly proud of how ChatGPT knew our company and him as the CEO. I cringed: "buddy ChatGPT" did not provide any reference for that info,

which is fine here, as we happen to know it already and can easily verify it. But if we asked it for odd, current details in depth, it would start hallucinating bullshit, and only the people informed about the details could estimate whether it's correct or not. And if he asked ChatGPT about a company it did not know, then without references there would be no easy way to tell whether the answer refers to reality or is just nicely worded fantasy.

Now, there are AI systems out there that use the LLM as an NLP interface to finely tuned search systems, that provide literal references for any claim they make, and that explicitly mark any conclusions not found in the references.

These can reach surprising quality. Perfect? Nope. Hint: not even human experts answer 100% without fault.
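The pattern being described, answers that must carry literal references, can be sketched in a few lines; here a naive keyword matcher stands in for the real retrieval system, and all document contents and ids are made up:

```python
# Sketch of "LLM as NLP front-end to a search system": every claim in
# the answer is tagged with the id of the passage it came from, and a
# query with no supporting passage is refused rather than guessed.
# The corpus and the keyword matching are crude stand-ins.
import string

DOCS = {
    "doc1": "The company was founded in 2009 in Vienna",
    "doc2": "Revenue in 2023 was 12 million euros",
}

def tokens(text: str) -> set:
    """Lowercase, strip punctuation, split into a word set."""
    return set(text.lower().translate(
        str.maketrans("", "", string.punctuation)).split())

def answer_with_references(query: str) -> str:
    hits = {doc_id: text for doc_id, text in DOCS.items()
            if tokens(query) & tokens(text)}
    if not hits:
        return "No supporting source found; refusing to answer."
    # A real system would let the model paraphrase the hits; here we
    # return the passages verbatim, each with its reference attached.
    return " ".join(f"{text} [{doc_id}]" for doc_id, text in hits.items())

print(answer_with_references("When was the company founded?"))
print(answer_with_references("favourite colour"))
```

The key design choice is the refusal branch: the system is only allowed to say things it can point at, which is exactly what the "buddy ChatGPT" demo above lacked.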

@ProcessParsnip @yacc143 @david_chisnall @EUCommission

AI is getting subsidized because the fossil fuel industry habitually expects subsidies.

That's who funds these energy-wasting AI initiatives: the fossil fuel industry.

The most corrupt industry on the planet wants to keep its grift, desperately.

$3 billion **per day** is drained from the economies of democracies and sent to enrich thugs like Trump's donors, Putin & #PrinceBonesaw

https://www.theguardian.com/environment/2022/jul/21/revealed-oil-sectors-staggering-profits-last-50-years

Revealed: oil sector’s ‘staggering’ $3bn-a-day profits for last 50 years

Vast sums provide power to ‘buy every politician’ and delay action on climate crisis, says expert

The Guardian

@david_chisnall @EUCommission

I was going to suggest it might be more productive to send a copy of this letter to your local MEP, but then I noticed you're in the UK. Maybe have a word with some overseas colleagues about lobbying their local reps?

@faduda @david_chisnall @EUCommission

All MEPs' emails are accessible on the European Parliament's website, along with telephone numbers and social media.

Open the EP website, select MEPs from the top menu, and then "full list".

Here is the first one: https://www.europarl.europa.eu/meps/en/256810/MIKA_AALTOLA/home


@david_chisnall

> in the current climate, it’s generally safe to interpret ‘AI’ as meaning ‘machine learning applied to a problem where machine learning is the wrong solution’

This is a superb observation, thank you. It articulates something I’ve felt for a while.

@cloudthethings @david_chisnall When you have the world's most expensive hammer..
@sol_hsa @david_chisnall hammers are at least consistent in their input vs output.

@david_chisnall
Looking at the @EUCommission replies reveals that they do in fact respond individually to some comments on their posts; the account is not just a "publishing place".

We are probably many people who disagree with various Commission policies, but we should at least give them credit for getting this one right 😻

@david_chisnall @EUCommission
I’m resolutely against the crazy rush to enshittify everything with AI.
A quick CV: I’m 71; I was on the team that built the BBC micro, I helped to build the early internet with email used by UK government and schools, and I’ve worked on smart homes and smart cities. Never have I been more scared of where technology is heading than I am now with the growth of AI
@KimSJ @david_chisnall @EUCommission
Seconding that. My CV includes writing the first screen-based word processor for Olivetti, an email service for Sperry Univac, using the ARPANET, and designing/building networks before there was TCP/IP. This rush to Artificial Insanity is dangerous and just another way to market expensive toys.
@AlisonW @KimSJ @david_chisnall
I doubt the people behind @EUCommission have ever heard about Sperrymicro Univetti Arpatcp. Today, before lunch, they've burned more processing power on their LLM mind-replacements than all Univacs combined ever had (and that on a Saturday).
@AlisonW @KimSJ @david_chisnall @EUCommission Roger that. I was reading Michael #Padlipsky a few weeks ago about the #ARPANET and the penny dropped for me for many things. Bonus: "The elements of networking style" is untouched by LLM training because the #InternetArchive #LCP #DRM ed the #PDF
https://archive.org/details/elementsofnetwor00padl
The elements of networking style and other essays and animadversions on the art of intercomputer networking : Padlipsky, M. A. (Michael A.) : Free Download, Borrow, and Streaming : Internet Archive

@KimSJ
Off-topic: thank you for your service. I learned to program on the BBC Model B. (To be fair, I learned to be frustrated by computers on the ZX81, but it was the Model B that made me love them.)
@flipper @KimSJ I second that. I have had tons of fun learning to program on both the Model B and its little brother, the Electron 😊
@KimSJ @david_chisnall @EUCommission Gary #Marcus appears to have gone soft on #Anthropic today. Methinks his shock redemption is explained by #regexp
@david_chisnall @EUCommission Artificial Intelligence is a theoretical principle: an entire intelligence system that functions as though human (i.e., functional AI). Machine learning is knowledge gathering through progressive analytical means by a machine that may or may not be human-like. This bullshit is called "sloppy plagiarism". It's neither learning nor intelligent. It's regurgitation. There is not a single reputable computer science program that refers to this as A.I.
@david_chisnall @EUCommission The EU is tasked with the difficult challenge of balancing democratic values with maintaining economic parity with undemocratic superpowers. Initiatives like these are usually aimed at ensuring that the EU doesn't fall behind. What are you proposing? No AI infrastructure with data sovereignty for the EU while other superpowers use AI to optimize every facet of digital infrastructure? What is the incentive for the EU to risk sitting out a technological leap?
@davidsonsr @david_chisnall @EUCommission
“…optimize every facet of digital infrastructure…”
Like what for example?

@fuji @david_chisnall @EUCommission

Organizations have been implementing AI for years by identifying which human tasks can be safely done by AI at no risk to the company. More or less every modern organization of moderately large size that relies heavily on digital infrastructure does this now, either directly or indirectly through the tools and services it uses. And if they don't, their suppliers and vendors do.

@davidsonsr @fuji @EUCommission

This is true only if you conflate 'AI' with 'automation'. Companies trying to sell 'AI' like it when you do this, but if 'AI' includes anything that a computer does then it's a meaningless term.

@david_chisnall @fuji @EUCommission

I'm talking about LLMs/reasoning models enabling software to make decisions based on natural language instead of programmatic instructions. I'd say that this is what's commonly understood to be the meaning of the term "AI" in business contexts. Isn't that what we're talking about?

@davidsonsr @fuji @EUCommission

So what are these use cases? Replacing customer support with a chatbot that makes up policies, can't answer questions, and drives away customers? Meeting summary systems that invert the conclusion of the meeting? Note taking for doctors that fabricates conditions and cancels essential prescriptions?

Machine-learning systems work really nicely in situations where either the result can be checked instantly and cheaply, or where the cost of a wrong answer is vastly lower than the benefit of a correct answer. Very few natural-language processing tasks have this property.
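That "cheap to check" property can be made concrete. Verifying a candidate divisor of a number is instant and exact even though finding one can be hard, so an unreliable generator is usable as long as every candidate passes a deterministic check; the generator below just emits arbitrary made-up guesses, standing in for a fuzzy ML model:

```python
# The asymmetry that makes unreliable generators usable: checking a
# candidate is instant and exact even when producing one is hard.
# Integer factoring is a classic example of this property.

def is_valid(candidate: int, n: int) -> bool:
    """Cheap, exact check: is candidate a nontrivial divisor of n?"""
    return 1 < candidate < n and n % candidate == 0

def fuzzy_generator(n: int):
    """Stand-in for an unreliable model: mostly-wrong guesses."""
    yield from (7, 10, 13, 91)

n = 91
accepted = [c for c in fuzzy_generator(n) if is_valid(c, n)]
print(accepted)  # only verified answers survive: [7, 13]
```

The point of the post above is that most natural-language tasks have no such `is_valid`: there is no cheap, exact test for whether a generated summary, policy answer, or medical note is correct.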

LLMs have had hundreds of billions of dollars spent on them, and are not yet profitable. No company can offer them to customers at a price that customers are willing to pay and which covers the costs. And, even with that level of subsidy, it has made zero measurable impact on the GDP of the USA.

If a technology has failed to deliver anything of value to the economy after sinking hundreds of billions, the rational thing to do is not to say 'we must also throw money down this hole'. It is to say 'other countries, please keep wasting your economic potential! We will invest in things that actually deliver!' (Or, at least, in things that haven't yet been shown not to deliver.)

@david_chisnall @fuji @EUCommission

We're discussing a diffuse economic impact, so you're not going to see many concentrated labor displacements or sweeping gains. Companies are reporting being able to conduct AI improvements at scale with fine-grained tasks, but the ratio at which it displaces, pressures or complements labor differs depending on the context. What's important to acknowledge is that this ratio is changing as companies are continuously optimizing for AI implementation.

@david_chisnall @fuji @EUCommission

There are some quantifiable indicators: direct labor displacement in professions like freelancing, language and content work; what seem to be declines in junior and entry-level hiring at AI-exposed companies; and industry surveys and labor data indicating that AI is significantly commoditizing skills in some professions. We're likely to see more of this as the service-as-software model emerges.

@david_chisnall @fuji @EUCommission

For anecdotal real-world examples of how AI is being used, there's invoice and document processing (ingestion and scanning), document drafting (legal, corporate and technical writing), content reviewing (code, contracts or otherwise), monitoring (credit underwriting, fraud detection), technical inspection (defect detection, report processing) and so on. Companies see marginal to significant improvements related to many of these AI implementations.

@david_chisnall @fuji @EUCommission

There's no reliable data on what share of AI processing is dedicated to work rather than waste, but some reviews seem to indicate the number might sit around 50%. So the question is: if unethical superpowers continuously self-optimize work-related AI processing to the point where they see a significant economic or even military impact, what does that mean for an EU that decided to opt out of having even a regulated, basic AI infrastructure?

@david_chisnall @davidsonsr @fuji @EUCommission

The only time something is this aggressively useless, but gets massive investments anyway, is when it's a weapon.

@violetmadder @david_chisnall @fuji @EUCommission

AI has introduced a shift in how humans can interact with computers. All IT infrastructure was built with the restriction that computers could only be interfaced with through predefined rules, whereas AI can now allow us to give computers instructions through natural language. It's true that emerging technologies tend to see significant investments and at times economic bubbles, but that doesn't negate the effectiveness of AI as a technology.

@davidsonsr @violetmadder @fuji @EUCommission

Natural language interfaces are not new. They've been around in various forms for decades. Some ML techniques allow higher accuracy but they come with the same limitations as any attempt at this technology. First, the set of things that can be done is still defined by programming. The difference with LLM-based approaches is that, rather than failing when they are asked to do something that they can't do, they do something else. This is much worse, because it means that the systems are not reliable.

Natural language interfaces pop up periodically but typically go away because natural language is ambiguous. Computer languages are intentionally not like natural language because their requirement is to unambiguously convert programmer intent into a sequence of instructions for the computer. As soon as you introduce natural language, you introduce a requirement for interpretation and that both removes agency from the user (now they aren't the one providing this - 'agentic AI' systems are ones that aim to remove agency from the user) and introduces a large space of failure modes that the user cannot reason about.
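The failure-mode contrast can be sketched in a few lines: a strict parser rejects out-of-scope input visibly, while a "nearest intent" matcher silently does something. The commands here are invented for illustration:

```python
# A rule-based interface fails loudly on input it can't handle; a
# fuzzy nearest-match interface silently picks *some* action instead.
# The command set is made up for this sketch.
import difflib

COMMANDS = ["list_files", "rename_file", "delete_file"]

def strict_parse(text: str) -> str:
    if text not in COMMANDS:
        raise ValueError(f"unknown command: {text!r}")  # visible failure
    return text

def fuzzy_parse(text: str) -> str:
    # With cutoff=0.0 there is always a "closest" command, so every
    # request maps to an action, even ones the system cannot do.
    return difflib.get_close_matches(text, COMMANDS, n=1, cutoff=0.0)[0]

print(fuzzy_parse("please summarise my files"))  # silently does *something*
try:
    strict_parse("please summarise my files")
except ValueError as e:
    print(e)  # the strict version at least tells you it failed
```

The fuzzy version never surfaces the gap between what the user asked and what the system can do, which is the unreliability being described above.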

@david_chisnall @violetmadder @fuji @EUCommission

The user doesn't need to be the one providing context. Software can instruct an AI to reason about a piece of unknown information and then provide context for the program to consume. The AI becomes a cog that eliminates unknowns and converts them into instructions the program can understand.

As far as I know this has never been possible before, and I don't know of any proto-solutions that could do this through natural language instructions.

@davidsonsr @violetmadder @fuji @EUCommission

Okay, it's clear that you really, really don't understand how LLMs (or other machine-learning algorithms) work. At all.

@davidsonsr @david_chisnall @fuji @EUCommission

Machines are not capable of "reasoning". Unknowns aren't "eliminated" but filled in with arbitrary BS, context defined by the people who wrote the thing (in the service of technofascist oligarchs out to destroy the usefulness of the internet).

The technology (machine learning) CAN be very good and very useful-- but not when it is implemented like this.

This is a bullshit generator.

I'm just going to keep on saying it: It's a weapon.

@violetmadder @david_chisnall @fuji @EUCommission

A programmer can instruct an AI to handle an unknown piece of information according to an instruction given in natural language, and the AI has the capacity to follow that instruction with a high level of accuracy. Introducing additional AI for reviewing can increase the accuracy further, often to the point where the error margin is negligible. This is being done across companies today, with tasks that were previously done by humans.