AI Chatbot from Frontex to Advise on Self-Deportation

The European border agency Frontex is currently developing an app with an integrated #Chatbot. It is meant to advise refugees on how to return "voluntarily" to their country of origin. According to the agency, this use poses no high risk under the #AIAct…

#AbolishFrontex

https://netzpolitik.org/2026/migrationskontrolle-ein-ki-chatbot-von-frontex-soll-zu-selbst-abschiebung-beraten/

#Migration #KI #Deportation #Abschiebung #Refugees #FortressEurope #Europe #AI


Report Suggests Piracy Accounts for Nearly a Third of the Italian Book Market, with Mounting Concern Over AI

For the first time, this year’s survey addressed a new element of the piracy threat—the “difficult-to-quantify losses caused by the use of AI-generated summaries and condensations of books.”
The post Report Suggests Piracy Accounts for Nearly a Third of the Italian Book Market, with Mounting Concern Over AI appeared first on Publishing Perspectives.
https://publishingperspectives.com/2026/02/report-suggests-piracy-accounts-for-nearly-a-third-of-the-italian-book-market-with-mounting-concern-over-ai/

#AI #AIAct #AssociazioneItalianaEditori #BookPiracy #Europe

AI oversight from a single source: the Bundesnetzagentur becomes the central authority for AI applications. Handelsblatt describes the plan as a step toward clearer responsibilities. Less patchwork, more coordination. #Handelsblatt
#KIRegulierung #AIAct #Bundesnetzagentur https://www.handelsblatt.com/politik/deutschland/kuenstliche-intelligenz-regierung-gibt-bundesnetzagentur-aufsicht-fuer-ki-anwendungen/100199432.html
Artificial Intelligence: Government Gives Bundesnetzagentur Oversight of AI Applications

In the field of artificial intelligence, the Bundesnetzagentur is to become the central point of contact for companies. Which questions still need to be clarified.

Handelsblatt

The #DigitalOmnibus proposes to remove Article 49(2) from the #AIAct. This would allow AI providers to dodge registration of high-risk systems by self-declaring they’re “not high risk”.
This would weaken enforcement, undermine legal certainty, and erode fundamental rights, all for a saving of €100 per company.

Name a worse trade.

We urge EU lawmakers to reject this rollback & uphold the integrity of the AI Act.

💌 w/60 CSOs https://edri.org/our-work/ai-omnibus-r

@asanpin Done. Mixed feelings. Concentrating solely on FIMI does not sound wise. In the USA, DIMI (Domestic Information Manipulation and Interference) is at present the most urgent problem; it has already halted research necessary for informed citizens in a democracy. And IMI in general is dangerous, no matter its source.

In Finland, we don't usually speak about "democracies", because the actual distribution of political power is more important in itself, and comes in degrees. Some NATO countries, e.g., are not democratic, and some are even oligarchic, or at least act like oligarchies. There is no "West" or "the democratic world" except in the jargon of some not-so-democratic rulers and marketers. That jargon manifests itself perhaps too much in the WFW report. Some parts of it were educational, some clearly outdated, superficial, and/or unnecessarily fear-mongering. The Alan Turing report is on quite another level with its detailed scenarios for influence in Ch. 5, https://www.turing.ac.uk/sites/default/files/2020-10/epistemic-security-report_final.pdf

With respect to the AI danger, there have been proposals for an IPCC-like body to steer and regulate at least something, https://www.nature.com/articles/d41586-023-01606-9, but guess which former "democracy" just dissociated itself from even the original IPCC? And referring to, e.g., the dual-use problem, guess who frustrated at least part of the EU AI Act, https://corporateeurope.org/en/2023/02/lobbying-ghost-machine

My advice for almost everybody would be to stop using tools which use you, and maybe instead to start reading real texts, preferably more than 10 pages long. Important things don't come easily, and the hustle we constantly feel isn't ours.

I go back to my primary reading, thanks for the opportunity to think aloud, https://plato.stanford.edu/entries/attention/

#cognitiveWarfare #information #ai #democracy #usa #ipcc #aiAct #propaganda #disinformation

🇪🇺#AIAct implementation update from 🇩🇪: According to the ‚KI-Marktüberwachungs- und Innovationsförderungsgesetz (KI-MIG)‘, the ‚Bundesnetzagentur‘ is the central coordination & competence centre and the market surveillance & notifying authority.

What does this tell us? Many countries assign this task to their #DataProtection supervisory authorities. The German approach is an indication that fundamental rights are not the priority of 🇩🇪 AI regulation.

https://www.heise.de/en/news/AI-Act-Federal-Government-Sets-AI-Law-in-Motion-11173642.html

AI Act: Federal Government Sets AI Law in Motion

The cabinet sets the implementation of the European AI Regulation in motion. The Bundesnetzagentur thus becomes the central supervisory authority for AI.

heise online

An appeal to EU legislators: protect rights and reject the call to delete the transparency safeguard in the #AIAct

«We, the undersigned organisations and individuals, urge you in the strongest possible terms to reject the deletion of the Article 49(2) transparency safeguard for high-risk AI systems proposed in the AI Omnibus.»

https://www.accessnow.org/press-release/a-call-to-eu-legislators-protect-transparency-safeguard-in-ai-act/

@aitech

Access Now - A call to EU legislators: protect rights and reject the call to delete transparency safeguard in AI Act

We, the undersigned organisations and individuals, urge you in the strongest possible terms to reject the deletion of the Article 49(2) transparency safeguard for high-risk AI systems that is proposed in the AI Omnibus. This transparency safeguard ensures that providers of AI systems cannot circumvent the core obligations of the AI Act.

Access Now

"The Article 49(2) transparency safeguard has an essential function and removing it, as proposed in the Commission’s AI Omnibus, will create a gaping loophole and undermine the core functioning of the AI Act.

Under Article 6(3), providers of AI systems which match the list of high-risk use cases in Annex III may decide that their system does not in fact pose a significant risk and unilaterally exempt themselves from all obligations for high-risk AI systems.

To stop the abuse of this derogation mechanism, providers who do exempt themselves are required by the Article 49(2) transparency safeguard to register their derogation in a publicly viewable database. Removing this transparency safeguard would have three key negative consequences:

- Market surveillance authorities will have no overview of how many companies exempt themselves from the high-risk requirements, and we have no way of tracking discrepancies across member states (e.g. that in Country A there were 3000 exemptions but only 6 in Country B), leading to potential lack of harmonisation across the Single Market.

- Providers are given a completely opaque and unaccountable way to opt out of the obligations for high-risk AI systems, creating a perverse incentive to sidestep the requirements of the AI Act. Importantly, this perverse incentive will work to the detriment of responsible providers who truly wish to develop responsible, trustworthy systems in the high-risk categories, allowing them to be undercut in the market.

- The public, including civil society organisations, will have no way of knowing which providers have exempted themselves from obligations, despite the fact that their systems fall under the high-risk categories in Annex III. This removes a key element of transparency, undermines public trust, and deprives those affected by AI systems of necessary information to challenge an exemption."

https://www.accessnow.org/press-release/a-call-to-eu-legislators-protect-transparency-safeguard-in-ai-act/

#EU #AIAct #AIOmnibus #AIGovernance #BigTech #AI #AISafety
