Pepa Atanasova

79 Followers
94 Following
5 Posts

I'm absolutely thrilled to have been awarded a prestigious ERC Starting Grant on 'Explainable and Robust Automatic Fact Checking (ExplainYourself)'!
Official press release: https://erc.europa.eu/news/erc-2021-starting-grants-results
More about the project & how to join the team: http://www.copenlu.com/talk/2022_11_erc/

This wouldn't have been possible without the great work of my PhD students & postdocs in CopeNLU (especially @pepa @dustin) which this project builds on.

#ERCStG #NLProc #NLP

ERC awards €619m in its first research grants under Horizon Europe

The statistics of this call and the lists of researchers selected for funding have been amended.

@jasmijn @isa thank you, Jasmijn!
Now that I've passed my PhD defense, I am happy to officially share that I've started a new position as a postdoc at CopeNLU, DIKU 🎉. Thrilled to continue working with @isa on #xai for #ML and #NLP, also with some applications in #FinNLP.
@isa @igel The thesis is now also available online https://arxiv.org/abs/2211.04946
Accountable and Explainable Methods for Complex Reasoning over Text

A major concern with Machine Learning (ML) models is their opacity. They are deployed in an increasing number of applications where they often operate as black boxes that do not provide explanations for their predictions. Among others, the potential harms associated with a lack of understanding of models' rationales include privacy violations, adversarial manipulations, and unfair discrimination. As a result, the accountability and transparency of ML models have been posed as critical desiderata by works in policy and law, philosophy, and computer science. In computer science, the decision-making process of ML models has been studied by developing accountability and transparency methods. Accountability methods, such as adversarial attacks and diagnostic datasets, expose vulnerabilities of ML models that could lead to malicious manipulations or systematic faults in their predictions. Transparency methods explain the rationales behind models' predictions, gaining the trust of relevant stakeholders and potentially uncovering mistakes and unfairness in models' decisions. To this end, transparency methods have to meet accountability requirements as well, e.g., being robust and faithful to the underlying rationales of a model. This thesis presents my research, which expands our collective knowledge in the areas of accountability and transparency of ML models developed for complex reasoning tasks over text.

Massive congrats to @pepa for successfully defending her PhD thesis “Accountable and Explainable Methods for Complex Reasoning over Text”! 🎉🍾🎊
Proud to have been your supervisor.
Thanks to @igel Ivan Titov and Kalina Bontcheva for serving on the committee.