Europe, the AI Continent.

One year ago, we launched the AI Continent Action Plan. Since then, we have made huge strides:

✅ 19 AI factories are now live across EU countries.
✅ We established the AI Skills Academy to train experts.
✅ The AI Omnibus is cutting costs for business.
✅ We have earmarked €1 billion to support AI adoption in industry.

We are building a secure and innovative AI future for Europe.

Here's how 👉 https://link.europa.eu/nj3VH9

@EUCommission

I don’t know if this account is actually monitored, or just a publishing place, but you may have noticed that this post has received almost overwhelmingly negative responses.

You could disregard this as Mastodon bias, but keep in mind that the biggest bias on Mastodon is that people who understand and built core parts of the information technology that you use every day are massively overrepresented. This is probably the only place you will get a lot of replies from people who both understand technology and do not have a financial incentive to hype things to get large amounts of government funding.

EDIT: I should add, I used machine learning during my PhD and there are a lot of problems for which it is a really good fit. But, in the current climate, it’s generally safe to interpret ‘AI’ as meaning ‘machine learning applied to a problem where machine learning is the wrong solution’. It isn’t a technology, it’s a branding term, and it’s a branding term used almost exclusively for things that have no social benefit.

@david_chisnall @EUCommission The EU is tasked with the difficult challenge of balancing democratic values with maintaining economic parity with undemocratic superpowers. Initiatives like these are usually aimed at ensuring that the EU doesn't fall behind. What are you proposing? No AI infrastructure with data sovereignty for the EU while other superpowers use AI to optimize every facet of digital infrastructure? What is the incentive for the EU to risk sitting out a technological leap?

@davidsonsr @EUCommission

The EU has to prioritise investment. It needs to pick things that are likely to give a good return, both financially and in building the kind of society that EU members wish to belong to. To date, AI has not materially contributed to either. There has been no measured impact on economic productivity from AI adoption, in any industry. The systems are built on top of large-scale plagiarism that undermines the creative industries.

If the USA and China wish to sabotage their economies by throwing vast amounts of money at things that deliver negligible benefits (and often the reverse), then the EU should encourage them to do so, while investing in things that actually deliver a return.

@david_chisnall @EUCommission

AI being hard to isolate in aggregate statistics isn't the same as it having no measured impact. While AI has displaced some labor, the clearest evidence of productivity gains appears in field studies and task-level performance measurements, which there's an abundance of.

I'd rather see a hopefully more ethical, more productive and more energy efficient EU AI infrastructure with EU data sovereignty than the EU relying on other superpowers' AI implementations.

@davidsonsr @david_chisnall @EUCommission

the clearest evidence of productivity gains appears in field studies and task-level performance measurements, which there's an abundance of.

Where?

@barubary @david_chisnall @EUCommission

There are examples like TikTok, Meta and other social platforms using AI for content moderation, Duolingo using AI to significantly increase their content output and HubSpot using AI to enhance customer CRM data. There are also papers like "Generative AI and labour productivity: a field experiment on coding" and "Generative AI at Work" which indicate productivity gains for junior workers. There are many instances of applied AI working as intended.

@davidsonsr @david_chisnall @EUCommission "Using AI for content moderation" doesn't mean anything to me.

To "increase content output" and "enhance CRM data" sounds like a deluge of slop, not increased performance. (As a personal anecdote, I was considering using Duolingo myself when I heard they were adding LLM slop to their app, so I lost all interest. I want to learn languages, not consume "content output".)

I'm not qualified to judge the experimental setup of "Generative AI and labour productivity: a field experiment on coding", but some things stood out to me:

  • They looked at ~1200 programmers from one company (Ant Group) over a period of 6 weeks.
  • 335 of them had access to a specific (internal) LLM.
  • The junior programmers with LLM access produced 50% more verbose code, the senior programmers didn't.

That's it. The only thing they measured was the number of lines of code produced, not quality or correctness or anything. And this was only the short-term effects (less than two months); there's nothing there about the mid- or long-term consequences of mandating LLM use to a company's whole workforce.

"Generative AI at Work" is about US customer support (from a call center in the Philippines). The paper is creepy ("AI drives convergence in communication patterns: low-skill agents begin
communicating more like high-skill agents", "customers are less likely to question the competence of agents"). Results are mixed: "AI assistance increases worker productivity, resulting in a 14% increase in the number of chats that an agent successfully resolves per hour", but only for less-skilled and inexperienced agents: "we find evidence that AI assistance may decrease the quality of conversations by the most skilled agents". The metrics used are questionable: Issue resolutions per hour and "net promoter score" (as a proxy for customer satisfaction) are used to determine both productivity and agent "skill".

(Why are these papers all written by economists?)

@barubary @davidsonsr @EUCommission

You'll find this in pretty much all papers that show an improvement in productivity from 'AI'.

Most of them use an invalid metric: self-reported feelings of productivity (a thing that has previously been shown to have a weak inverse correlation with actual productivity), lines of code (known since the '60s to be a terrible metric), or tickets resolved (who marks them as resolved? I can get 100% on this metric by just claiming everything is resolved, but if the outcome is that the customer gives up and goes to a competitor, that isn't actually a win).

Content moderation is similar. Using 'AI' is not there to improve efficiency, it's there to shift blame. TikTok and Meta moved to having an automated system moderate content so that they could claim compliance with rules about harm, without actually bothering to do the work. It does not increase the quality of the moderation decisions. Note specifically for the @EUCommission: this is a technology that is being used to attempt to bypass regulations that you have passed for the benefit of your citizens. Is that really what you want to be funding?

@david_chisnall @barubary @EUCommission

Developers aren't being evaluated by or paid for KLOCs anymore, so it's not invalid to view an increase in code throughput as an indicator of increased productivity during experimental evaluations, especially in delivery-focused teams. In the same vein, the paper regarding support agents showing an increased usage of unmodified AI response suggestions in combination with increased delivery velocity is also a valid indicator.

@davidsonsr @david_chisnall @barubary @EUCommission Before I retired I was a software developer for a little more than 3 decades. You’re right, LOC was an awful metric for programmer productivity. And there were many other crackpot schemes for enhancing or evaluating programmers or even redefining the role of computer folk.

But replacing 1 failed methodology with another isn’t the way forward. From the era of vacuum tubes onward we have failed to understand the potential of digital technology.