Kris Shrishak

170 Followers
72 Following
237 Posts
Enforce Senior Fellow at ICCL. Tech-policy, algorithmic decision making, privacy, cryptography, Internet security, human rights
@krisshrishak.bsky.social
Website: https://krisshrishak.de/

Enforce at the Irish Council for Civil Liberties is hiring an expert in #AI.

Full-time role. Remote in the EU.

Apply early. Deadline 5pm Dublin time on 15 April 2026.

#AIPolicy #EUAIAct #DigitalRights #TechPolicy

https://www.iccl.ie/digital-data/enforce-is-hiring-ai-expert/

Response from the President of the European Commission.

It would have been good if the President had provided links or citations to the evidence supporting the claims.

https://www.iccl.ie/wp-content/uploads/2026/03/20260325_Response-from-President-EC-AI-Hype.pdf

Canada rejected her permanent residence application. Her job duties were made up — by Immigration’s AI reviewer

The Immigration Department's decision, which openly said it used generative AI, cites job duties that bear no relation to the applicant's actual tasks.

Toronto Star

Maria Helen Murphy and I have written a short report on protecting the ‘Privacy’ in Privacy-Enhancing Technologies (PETs)

https://www.iccl.ie/news/new-iccl-report-on-privacy-enhancing-technologies/

I think the only reason people keep being impressed by LLMs is their scope, which is broader than most people's knowledge. For most people, what seems impressive is that LLMs expose *their own* out-of-distribution issues and limitations. A domain specialist, by contrast, will quickly see LLMs hit out-of-distribution problems within their domain.

https://www.theregister.com/2026/02/13/anthropic_c_compiler

OK, so Anthropic's AI built a C compiler. That don't impress me much

Opinion: Fanboys think it's the greatest thing since sliced bread. Devs aren't nearly as won over

The Register

Last month the Council of Europe published a report written by Soizic Pénicaud and me for equality bodies and other national human rights structures in Europe. It provides policy guidelines on AI and algorithm-driven discrimination.

https://edoc.coe.int/en/artificial-intelligence/12396-european-policy-guidelines-on-ai-and-algorithm-driven-discrimination-for-equality-bodies-and-other-national-human-rights-structures.html

European policy guidelines on AI and algorithm-driven discrimination for equality bodies and other national human rights structures

As artificial intelligence and automated decision-making systems become embedded in public administration and key private sectors, these guidelines empower equality bodies to identify, prevent and address discrimination, ensuring AI deployment complies with fundamental rights across Europe.

Public administrations across Europe are using artificial intelligence (AI) and/or automated decision-making (ADM) systems in a wide range of policy areas, including migration, welfare, justice, education, employment, tax, law enforcement or healthcare. Such systems are also deployed in critical areas of the private sector, such as banking and insurance. Although AI and ADM systems present significant risks of discrimination, challenges remain in identifying and mitigating these risks. Thus, equality bodies and other national human rights structures have a key role in promoting fundamental rights-compliant deployment of AI/ADM systems by public sector organisations.

The guidelines aim to equip equality bodies and other national human rights structures, especially in the European Union, to tackle discrimination in AI/ADM systems. They update them on their responsibilities regarding the changing regulatory environment – including the European Union’s AI Act and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law – offer recommendations and examples for applying new regulations and serve as a resource to assist and advise national stakeholders, such as policy makers and regulators, on human rights, equality and non-discrimination.

Council of Europe Publishing

5. "All public bodies and semi-state entities using AI in public services must publish annual evidence-based reports detailing benefits, disadvantages, and any inequalities identified. These reports should be made publicly accessible to ensure transparency and accountability."

6. "Additional funding and resources for the nine agencies responsible for protecting human rights under the EU AI Act."

https://www.iccl.ie/news/ireland-unprepared-for-ai-act-implementation/

2. Involvement of people affected by AI: "establishing a Citizens’ Assembly on Artificial Intelligence Digitalisation and Technology to facilitate inclusive public dialogue and democratic input on AI policy and ethics."

3. "Developing a national AI risk register within the national AI office to identify and monitor systemic risks across sectors."

4. "Introducing mandatory algorithmic impact assessments for high-risk AI systems in public services."

https://www.iccl.ie/press-release/iccl-to-address-oireachtas-joint-committee-on-ai/