The Joint Research Centre (JRC) is looking to recruit two Scientific Project Officers on Trustworthy Algorithmic Systems to join the HUMAINT (Human Behaviour and Machine Intelligence) project within the Algorithmic Transparency Unit. Join us at #humaint and #ECAT!
https://ai-watch.ec.europa.eu/news/open-vacancies-humaint-project-2023-07-04_en
Open vacancies at HUMAINT project

The JRC is looking to reinforce the HUMAINT team with researchers specialised in Trustworthy Algorithmic Systems. Applications are open until 3 September 2023.

AI Watch

RT @emiliagogu: The #HUMAINT team will host a scientific trainee in #Ispra Italy as part of the @EU_ScienceHub program (field number 8 - #ArtificialIntelligence). Duration: 5 months from Oct 1st. Apply here before May 24th! #hiring #trustworthyAI https://t.co/SJXpK5j5ix https://t.co/tRfBSsJ9bw

πŸ¦πŸ”—: https://n.respublicae.eu/EU_ScienceHub/status/1656299771395538944

JRC ESRA: External Staff Recruitment Application

Open positions at the JRC for Trainees, Graduate Trainees, Grantholders, CAST and Auxiliary contract staff.

Liability regimes in the age of AI

A recent JRC study uses representative use cases to show the difficulties in proving causation in liability proceedings that arise from the specific characteristics of AI.

AI Watch
AI Watch: Artificial Intelligence Standardisation Landscape Update

In April 2021 the European Commission presented the AI Act, its proposed legislative framework for Artificial Intelligence, which sets the necessary regulatory conditions for the adoption of trustworthy AI practices in the European Union. Once the final legal text comes into force, standards will play a fundamental role in supporting providers of the AI systems concerned, bringing the necessary level of technical detail to the essential requirements prescribed in the legal text. Indeed, harmonised standards provide operators with a presumption of conformity with legal requirements. AI has been an active area of work for many standards development organisations in recent years. In this report, we analyse a set of specifications produced by the IEEE Standards Association covering aspects of trustworthy AI. Several of the documents analysed have been found to provide highly relevant technical content from the point of view of the AI Act. Furthermore, some of them cover important standardisation gaps identified in previous analyses. This work is intended to provide independent input to European and international standardisers currently planning AI standardisation activities in support of the regulatory needs. This report identifies concrete elements in IEEE standards and certification criteria that could fulfil standardisation needs emerging from the European AI Regulation proposal, and provides recommendations for their potential adoption and development in this direction.

JRC Publications Repository

Interesting job opportunity for the #DSA nerds:
RT @emiliagogu: We are hiring! Max 6 years Scientific Project Officer Position (postdoc level) @EU_Commission on Trustworthy Recommender Systems at #humaint Deadline May 23rd, in sunny #sevilla, with a #scienceforpolicy focus #AIAct #DSA Ping me if interested https://ec.europa.eu/jrc/communities/en/community/humaint/news/we-are-hiring-scientific-project-officer-position-trustworthy-reco…

πŸ¦πŸ”—: https://nitter.eu/Senficon/status/1521116745309818880


We are hiring! Scientific Project Officer Position on Trustworthy Recommender Systems - JRC Science Hub Communities - European Commission
