First to propose, first to deliver: the AI Act enters into force today.

It sets comprehensive rules to address the risks of AI:

🔴 Unacceptable risk AI, such as social scoring, is banned
🟠 High-risk AI, like in medical devices, must meet strict requirements
🟡 Limited-risk AI, like chatbots, must inform users they are interacting with AI
🟢 Minimal risk AI, such as spam filters, can operate with no added obligations

Europeans can now safely seize the opportunities of AI: https://europa.eu/!t4QTc8

@EUCommission wow, this actually looks pretty neat. Realistic about the capabilities and drawbacks of AI, with clear guidelines and pretty nice fines for companies that don't comply.

If I'm reading it correctly, there's also a fine for AI giving false information, which could finally stop search engines from telling people to put glue on pizza and eat rocks
@EUCommission the idea is very good. Now it has to be proven whether the execution is as well.
Thank you for your interest, @feyter! We agree that implementation is key. We are fully committed to it with our AI Office, which will be the primary implementation body at EU level. National authorities in each Member State will also play critical roles. You can learn more here:
https://europa.eu/!t4QTc8

@EUCommission

😜😁😂 Like sticking your finger in the hole to plug that first leak in the dam.

@EUCommission are there any protections for the creators that so-called 'AI' has stolen from for its databases? Or to limit destruction of the environment? These are the two biggest issues and yet not mentioned in your summary.
@Rhube @EUCommission reading the article and regulation, this regulation only deals with where and how you can use AI systems, so it doesn't address either of these points

@EUCommission

What about face recognition in public places?

What about using AI to select CVs in enterprises?

What about using AI to restrict rights like in Galicia, Spain, where they plan to use AI to detect “people who don’t want to work” and pull them out of unemployment lists?

I see your case list as a shortsighted list.

@guetto @EUCommission
from clicking the link:
1. "categorising people or real time remote biometric identification for law enforcement purposes in publicly accessible spaces" is considered Unacceptable Risk
2. Selecting CVs could fall under using biometrics, which is Unacceptable Risk; otherwise it would probably fall under High Risk
3. Restricting citizens' rights is explicitly listed as a criterion for Unacceptable Risk, so that's where that would fall
@guetto @EUCommission clicking on the first link of the article gives you a larger writeup including a more detailed list that specifically mentions AI based CV-sorting being considered High-Risk
@susul @guetto @EUCommission

Point 1 was already addressed before, and iirc they can still do it under special circumstances. Just not in real time though: they need a court order and it can only be performed on recordings, or something like that. It also doesn't rule out the possibility of using legal loopholes to use it in other situations.

@guetto @EUCommission the linked page is short, and it includes examples of high risk and unacceptable risks that cover all the scenarios that you mention:

- face recognition in public spaces would fall as biometric identification in public spaces which is classified as unacceptable
- AI to select CVs is explicitly mentioned as a high risk: "AI systems used for recruitment"
- AI to restrict rights is described as an unacceptable risk: "systems that allow ‘social scoring' by governments"

@EUCommission This will hopefully stop cops from using it then just blaming that it is the technology and not their politics that is wrong
@EUCommission Bayesian spam filters have been around for 20 years now and are not remotely AI... I hope the definition has been carefully worded!
@BibbleCo @EUCommission "AI" is a moving target and has been applied to wildly different technologies since it was coined in the 1950s. Expert systems (essentially just a lot of nested case statements) were once considered AI, as were simple ML techniques such as Bayesian filters. The use of the term as interchangeable with deep learning is less than a decade old; the further narrowing to mean "large language models" barely two years old.

As others have said, "artificial intelligence" is a marketing term, not a technical term.
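For readers unfamiliar with the decades-old technique being discussed: a Bayesian spam filter just compares word frequencies in known spam versus known ham and scores new messages by log-odds. A minimal sketch (the training corpora, tokenization, and smoothing constant here are all illustrative, not any real filter's implementation):

```python
import math
from collections import Counter

def train(messages):
    """Count word occurrences across a list of messages."""
    counts = Counter()
    for msg in messages:
        counts.update(msg.lower().split())
    return counts

def spam_score(message, spam_counts, ham_counts, alpha=1.0):
    """Log-odds that the message is spam, with Laplace smoothing."""
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())
    vocab = len(set(spam_counts) | set(ham_counts))
    score = 0.0
    for word in message.lower().split():
        p_spam = (spam_counts[word] + alpha) / (spam_total + alpha * vocab)
        p_ham = (ham_counts[word] + alpha) / (ham_total + alpha * vocab)
        score += math.log(p_spam / p_ham)
    return score  # > 0 leans spam, < 0 leans ham

spam = train(["win free money now", "free prize click now"])
ham = train(["meeting moved to noon", "lunch at noon tomorrow"])
print(spam_score("free money", spam, ham))    # positive: leans spam
print(spam_score("noon meeting", spam, ham))  # negative: leans ham
```

No neural networks, no language models: just counting and Bayes' rule, which is the commenters' point about how elastic the "AI" label is.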

@cholling @EUCommission I concur. I'm old enough to remember Eliza (and the expert systems hype, Lisp Machines, etc etc.)

That's why my teeth start to itch when decision makers, legislators etc start acting as though LLMs == AGI.

Edit - oh yeah, Markov chain text generation too

@EUCommission Hm, how come spam filter AI is minimal risk? AI is usually trained with some bias, meaning the filter will have introduced bias beyond just comparing ham and non-ham files. This feels quite dangerous imho. And that's ignoring the whole environmental issue AI has already introduced
@EUCommission Unacceptable risk AI systems are banned with "narrow" exceptions. But exceptions are never narrow for criminal states where literally anything can be considered "terrorism".
@EUCommission And how is the EU addressing the missing intelligence, and the wastefulness?
@EUCommission @drewmccormack if the act is what is described in this toot it looks quite good.

@EUCommission

USA: can we have this too?

also USA:    NO  

USA:  

With the AI Act adopted, the techno-solutionist gold-rush can continue

Yesterday, at the Council of the European Union, Member States adopted the AI Act to regulate Artificial Intelligence systems. This step marks the final adoption of this legislation under discussion since 2021, and initially presented as an instrument to protect rights and freedoms in the face of th

La Quadrature du Net
@EUCommission I'm a little bit worried that "spam filters" are being seen as "minimal risk", despite being a technology that may limit access to information for huge numbers of people. And a bunch of secretive manipulations of filtered/unfiltered content could be baked into the model.

@EUCommission

Great initiative, the lawlessness of the AI Wild West needs to be addressed.

@EUCommission
Very good! Do I get it right that the German SCHUFA is outlawed now?
#SCHUFA #AIact #socialscoring
@EUCommission From what I’m seeing this seems good, putting reasonable limitations on use without an outright ban that would push the technology underground and out of public view
@EUCommission How is spam filtering *not* social scoring ... obviously nobody familiar with even 20-year-old Bayesian training, much less modern infrastructure (as implemented by the likes of Gmail), had any input into this mess.
@EUCommission A lot of rules are needed for AI.
@EUCommission Finally, credit scores are illegal. Can't wait for them to sue Schufa et al.

@EUCommission What about AI on the European border used by you or companies you fund to FUCKING KILL PEOPLE BASED ON RACISM? ISN'T THAT ALSO AN "UNACCEPTABLE RISK AI"?????

Here's the source.

“safely seize the opportunities of AI” but only for people within your borders. For the other people you use the exact same shit you said is unacceptable. Don’t get me wrong, this isn’t about the AI Act. It sounds great, but to be honest, I don’t give a shit.

It is about your racist hypocrisy you display over and over on your borders. Brick by brick, wall by wall, make the fortress Europe fall.