205 Followers
194 Following
187 Posts
Full professor in Explainable AI, Director of Research Dept. Adv. Comp. Sciences, University of Maastricht
Still trying to figure out how to scratch your #recsys itch after this year's @Recsys? Have a look at the #ACMTORS call for papers for the Special Issue on #RecommenderSystems for Good. Will you be submitting a paper as an xmas gift to the RecSys community? // @Nava et al.
https://dl.acm.org/pb-assets/static_journal_pages/tors/pdf/TORS_SI-Recommender-Systems-for-Good-1721933817383.pdf
Upcoming in October: the I/O Magazine of the ICT Research Platform Nederland, where we highlight the importance of structural funding for higher education. This issue also announces the new special interest group in human-computer interaction, led by Alessandro Bozzon and Pablo César among others, to which I have contributed.
📢📢 ACM TORS Special Issue #recsys for Good, submission deadline extended to 24 December 2024! More info: https://dl.acm.org/pb-assets/static_journal_pages/tors/pdf/TORS_SI-Recommender-Systems-for-Good-1715281856967.pdf
with: @Nava, Marko Tkalcic, Noemi Mauro, Antonela Tommasel

There were 15 papers published in the @Recsys Reproducibility track between 2020, when the track was established, and 2023.

In 2024, 14 papers were accepted to the #recsys2024 reproducibility track.

The field of psychological safety can often focus too narrowly on Western, English-speaking, white-collar, neurotypical contexts.

There’s a risk that discussions about psychological safety can neglect neurodiversity, assuming that neurotypical behaviours are the "right" behaviours.

We need to resist presuming that psychological safety looks the same to everyone, when in reality it can be wildly different.

https://psychsafety.co.uk/psychological-safety-and-neurodiversity/

Psychological Safety and Neurodiversity – Psychological Safety

While we are at it: I was also looking for good resources to cite to explain mode collapse (overfitting the model during alignment constrains it from generalizing). This seems to be another nice ACL paper: https://aclanthology.org/2024.scalellm-1.5/
More similar references are welcome if you have them! (A quick back-of-the-envelope diversity probe follows the reference below.)
Detecting Mode Collapse in Language Models via Narration

Sil Hamilton. Proceedings of the First edition of the Workshop on the Scaling Behavior of Large Language Models (SCALE-LLM 2024). 2024.

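Since I keep having to explain it, here is a minimal sketch of the underlying intuition (not Hamilton's method, which uses narration tasks in a more principled way): sample the model repeatedly on the same open-ended prompt and score how diverse the outputs are, e.g. with a distinct-2 ratio. A collapsed model keeps emitting near-identical text, so the ratio stays close to zero. The `generate` callable is a hypothetical stand-in for whatever model API you use.

```python
from typing import Callable, List

def distinct_n(texts: List[str], n: int = 2) -> float:
    """Unique n-grams divided by total n-grams across all samples.
    Near 0: the model repeats itself (one symptom of mode collapse).
    Near 1: samples are diverse."""
    ngrams = []
    for t in texts:
        tokens = t.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

def mode_collapse_probe(generate: Callable[[str], str],
                        prompt: str = "Tell me a short story.",
                        samples: int = 20) -> float:
    """Sample one open-ended prompt repeatedly and score output diversity.
    `generate` is a placeholder for your actual model call."""
    outputs = [generate(prompt) for _ in range(samples)]
    return distinct_n(outputs, n=2)

# Sanity check with a stub that always answers the same thing:
# mode_collapse_probe(lambda p: "once upon a time the end")  ->  0.05
```

This is only the eyeball version, of course; the paper above is the citable one.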
A few months ago I was asking: how well can LLMs handle sentiment analysis tasks? TL;DR: performance degrades as the number of categories increases, and LLMs do best in few-shot learning/limited-data scenarios. Proceed with caution. Really nice to see this paper accepted at NAACL so we can start to cite it. ;) https://aclanthology.org/2024.findings-naacl.246/
Sentiment Analysis in the Era of Large Language Models: A Reality Check

Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Pan, Lidong Bing. Findings of the Association for Computational Linguistics: NAACL 2024. 2024.

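The few-shot setting they highlight is also easy to reproduce if you want to kick the tyres yourself. A minimal sketch, assuming a generic `call_llm(prompt) -> str` client (hypothetical, swap in your own); the examples are invented, and I keep the label set to three coarse classes since performance degrades as categories grow:

```python
# Few-shot sentiment classification via prompt packing.
# `call_llm` is a hypothetical stand-in for a real model client.
FEW_SHOT_EXAMPLES = [
    ("The battery lasts forever, love it.", "positive"),
    ("Arrived broken and support never replied.", "negative"),
    ("It does what it says, nothing more.", "neutral"),
]

def build_prompt(text: str) -> str:
    lines = ["Classify the sentiment as positive, negative, or neutral.", ""]
    for example, label in FEW_SHOT_EXAMPLES:
        lines += [f"Text: {example}", f"Sentiment: {label}", ""]
    lines += [f"Text: {text}", "Sentiment:"]
    return "\n".join(lines)

def classify(text: str, call_llm) -> str:
    """Return the model's label; fall back to 'neutral' on anything unexpected."""
    answer = call_llm(build_prompt(text)).strip().lower()
    return answer if answer in {"positive", "negative", "neutral"} else "neutral"
```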
I've read it now and it's a *really* nice piece for anyone interested in the philosophy of science, AI ethics, or even evaluation in the age of foundation models.
Added to the reading stack. Risks of using AI for hypothesis generation: https://pubmed.ncbi.nlm.nih.gov/38448693/
Artificial intelligence and illusions of understanding in scientific research - PubMed

Scientists are enthusiastically imagining ways in which artificial intelligence (AI) tools might improve research. Why are AI tools so attractive and what are the risks of implementing them across the research pipeline? Here we develop a taxonomy of scientists' visions for AI, observing that their a …

Ancillary activities - Prof. dr. C.J. (Kees) van Deemter - Utrecht University
