Europe, the AI Continent.

One year ago, we launched the AI Continent Action Plan. Since then, we have made huge strides:

✅ 19 AI factories are now live across EU countries.
✅ We established the AI Skills Academy to train experts.
✅ The AI Omnibus is cutting costs for business.
✅ We have earmarked €1 billion to support AI adoption in industry.

We are building a secure and innovative AI future for Europe.

Here's how 👉 https://link.europa.eu/nj3VH9

@EUCommission

I don’t know if this account is actually monitored, or just a publishing place, but you may have noticed that this post has received almost overwhelmingly negative responses.

You could disregard this as Mastodon bias, but keep in mind that the biggest bias on Mastodon is that people who understand, and built, core parts of the information technology you use every day are massively overrepresented. This is probably the only place where you will get a lot of replies from people who both understand the technology and have no financial incentive to hype it in pursuit of large amounts of government funding.

EDIT: I should add, I used machine learning during my PhD and there are a lot of problems for which it is a really good fit. But, in the current climate, it’s generally safe to interpret ‘AI’ as meaning ‘machine learning applied to a problem where machine learning is the wrong solution’. It isn’t a technology, it’s a branding term, and it’s a branding term used almost exclusively for things that have no social benefit.

@david_chisnall @EUCommission The EU is tasked with the difficult challenge of balancing democratic values with maintaining economic parity with undemocratic superpowers. Initiatives like these are usually aimed at ensuring that the EU doesn't fall behind. What are you proposing? No AI infrastructure with data sovereignty for the EU while other superpowers use AI to optimize every facet of digital infrastructure? What is the incentive for the EU to risk sitting out a technological leap?
@davidsonsr @david_chisnall @EUCommission
“…optimize every facet of digital infrastructure…”
Like what for example?

@fuji @david_chisnall @EUCommission

Organizations have been implementing AI for years by identifying which human tasks can be safely done by AI at no risk to the company. More or less every modern organization of a moderately large size that relies heavily on digital infrastructure does this now, either directly or indirectly through the tools and services that they use. And if they don't, then their suppliers and vendors do.

@davidsonsr @fuji @EUCommission

This is true only if you conflate 'AI' with 'automation'. Companies trying to sell 'AI' like it when you do this, but if 'AI' includes anything that a computer does then it's a meaningless term.

@david_chisnall @fuji @EUCommission

I'm talking about LLMs/reasoning models enabling software to make decisions based on natural language instead of programmatic instructions. I'd say that this is what's commonly understood to be the meaning of the term "AI" in business contexts. Isn't that what we're talking about?

@davidsonsr @fuji @EUCommission

So what are these use cases? Replacing customer support with a chatbot that makes up policies, can't answer questions, and drives away customers? Meeting summary systems that invert the conclusion of the meeting? Note taking for doctors that fabricates conditions and cancels essential prescriptions?

Machine-learning systems work really nicely in situations where either the result can be checked instantly and cheaply, or where the cost of a wrong answer is vastly lower than the benefit of a correct answer. Very few natural-language processing tasks have this property.
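That property can be sketched as a generate-and-verify loop. The "guesser" below is a deliberately unreliable stand-in for a model, and factoring is just a toy task chosen because checking an answer costs one multiplication while finding it does not:

```python
# Sketch of the "cheap verification" property: wrong guesses cost almost
# nothing because checking a candidate is a single multiplication.

def cheap_check(n, guess):
    """Verify a proposed factor pair instantly."""
    a, b = guess
    return a * b == n and 1 < a < n

def untrusted_guesser(n):
    # Stand-in for an unreliable generator (an ML model in practice):
    # propose many candidates; most are wrong, and that's fine.
    for a in range(2, n):
        yield (a, n // a)

def factor(n):
    for guess in untrusted_guesser(n):
        if cheap_check(n, guess):
            return guess
    return None  # no cheap-to-verify answer found

print(factor(91))  # (7, 13)
```

When no such cheap checker exists, as in most natural-language tasks, every wrong answer has to be caught by a human, and the economics invert.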

LLMs have had hundreds of billions of dollars spent on them, and are not yet profitable. No company can offer them to customers at a price that customers are willing to pay and which covers the costs. And, even with that level of subsidy, they have made no measurable impact on the GDP of the USA.

If a technology has failed to deliver anything of value to the economy after hundreds of billions have been sunk into it, the rational response is not 'we must also throw money down this hole'. It is 'other countries, please keep wasting your economic potential! We will invest in things that actually deliver!' (Or, at least, in things that have not yet been shown not to deliver.)

@david_chisnall @fuji @EUCommission

We're discussing a diffuse economic impact, so you're not going to see many concentrated labor displacements or sweeping gains. Companies report being able to apply AI at scale to fine-grained tasks, but the ratio at which it displaces, pressures, or complements labor differs depending on the context. What's important to acknowledge is that this ratio is changing as companies continuously optimize for AI implementation.

@david_chisnall @fuji @EUCommission

There are some quantifiable indicators: direct labor displacement in professions like freelancing and language and content work; what appear to be declines in junior and entry-level hiring at AI-exposed companies; and industry surveys and labor data indicating that AI is significantly commoditizing skills in some professions. We're likely to see more of this as the Service-as-a-software model emerges.

@david_chisnall @fuji @EUCommission

For anecdotal real-world examples of how AI is being used, there's invoice and document processing (ingestion and scanning), document drafting (legal, corporate and technical writing), content reviewing (code, contracts or otherwise), monitoring (credit underwriting, fraud detection), technical inspection (defect detection, report processing) and so on. Companies see marginal to significant improvements from many of these AI implementations.

@david_chisnall @fuji @EUCommission

There's no reliable data on how much AI processing is dedicated to work rather than waste, but some reviews suggest the number might sit around 50%. So the question is: if unethical superpowers continuously self-optimize work-related AI processing to the point where they see a significant economic or even military impact, what will that mean for an EU that decided to opt out of having even a regulated, basic AI infrastructure?

@david_chisnall @davidsonsr @fuji @EUCommission

The only time something is this aggressively useless, but gets massive investments anyway, is when it's a weapon.

@violetmadder @david_chisnall @fuji @EUCommission

AI has introduced a shift in how humans can interact with computers. All IT infrastructure was built under the restriction that computers could only be interfaced with through predefined rules, whereas AI now allows us to give computers instructions in natural language. It's true that emerging technologies tend to attract significant investment and, at times, economic bubbles, but that doesn't negate the effectiveness of AI as a technology.

@davidsonsr @violetmadder @fuji @EUCommission

Natural language interfaces are not new. They've been around in various forms for decades. Some ML techniques allow higher accuracy, but they come with the same limitations as every previous attempt at this technology. First, the set of things that can be done is still defined by programming. The difference with LLM-based approaches is that, rather than failing when asked to do something they can't do, they do something else. This is much worse, because it means that the systems are not reliable.

Natural language interfaces pop up periodically but typically go away, because natural language is ambiguous. Computer languages are intentionally not like natural language, because their requirement is to unambiguously convert programmer intent into a sequence of instructions for the computer. As soon as you introduce natural language, you introduce a requirement for interpretation, and that both removes agency from the user (now they aren't the one providing it - 'agentic AI' systems are ones that aim to remove agency from the user) and introduces a large space of failure modes that the user cannot reason about.
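A minimal sketch of that ambiguity: one plausible English instruction, two defensible implementations that disagree.

```python
# "Remove duplicates from the list" does not say whether order matters.
items = [3, 1, 3, 2, 1]

keep_first = list(dict.fromkeys(items))  # [3, 1, 2] - keep first occurrences
sorted_set = sorted(set(items))          # [1, 2, 3] - any order, here sorted

# A programming language forces the caller to pick one reading up front;
# a natural-language interface silently picks for them, and the user has
# no way to tell which interpretation was chosen.
print(keep_first, sorted_set)
```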

@david_chisnall @violetmadder @fuji @EUCommission

The user doesn't need to be the one providing context. Software can instruct an AI to reason about a piece of unknown information and then provide context for the program to consume. The AI becomes a cog that eliminates unknowns and converts them into instructions that the program can understand.

As far as I know this has never been possible before, and I don't know of any proto-solutions that could do this through natural language instructions.

@davidsonsr @violetmadder @fuji @EUCommission

Okay, it's clear that you really, really don't understand how LLMs (or other machine-learning algorithms) work. At all.

@davidsonsr @david_chisnall @fuji @EUCommission

Machines are not capable of "reasoning". Unknowns aren't "eliminated" but filled in with arbitrary BS, context defined by the people who wrote the thing (in the service of technofascist oligarchs out to destroy the usefulness of the internet).

The technology (machine learning) CAN be very good and very useful-- but not when it is implemented like this.

This is a bullshit generator.

I'm just going to keep on saying it: It's a weapon.

@violetmadder @david_chisnall @fuji @EUCommission

A programmer can instruct an AI to handle an unknown piece of information according to an instruction given in natural language, and the AI has the capacity to follow that instruction with a high level of accuracy. Introducing additional AI for reviewing can increase the accuracy further, often to the point where the error margin is negligible. This is being done across companies today, with tasks that were previously done by humans.