Next in line at @semantics was Daniel, who presented “Enhancing Answers Verbalization using Large Language Models” by Daniel Vollmers, Parth Sharma, Hamada Zahera and Axel Ngonga. 👨‍💻 Sounds interesting? 🤩 ➡️ Take a look at the paper here: https://papers.dice-research.org/2024/SEMANTICS_Answers_Verbalization/public.pdf #DICEreadme

Today, “Benchmarking Low-Resource Machine Translation Systems” was presented by Ana & Nikit at the #LoResMT workshop @aclmeeting 🤩👏
➡️ Take a look at the paper by Ana Morim da Silva, Nikit Srivastava, Tatiana Moteu Ngoli, Michael Röder, Diego Moussallem and Axel Ngonga here: https://aclanthology.org/2024.loresmt-1

#DICEontour #DICEreadme

Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024) - ACL Anthology

Congratulations to Lukas Blübaum and Stefan Heindorf on the acceptance of their paper "Causal Question Answering with Reinforcement Learning" at The Web Conference #TheWebConf 👏🥳

➡️Take a look at the paper here: https://arxiv.org/abs/2311.02760

#ReinforcementLearning #DICEreadme


Causal questions inquire about causal relationships between different events or phenomena. They are important for a variety of use cases, including virtual assistants and search engines. However, many current approaches to causal question answering cannot provide explanations or evidence for their answers. Hence, in this paper, we aim to answer causal questions with a causality graph, a large-scale dataset of causal relations between noun phrases along with the relations' provenance data. Inspired by recent successful applications of reinforcement learning to knowledge graph tasks, such as link prediction and fact-checking, we explore the application of reinforcement learning on a causality graph for causal question answering. We introduce an Actor-Critic-based agent which learns to search through the graph to answer causal questions. We bootstrap the agent with a supervised learning procedure to deal with large action spaces and sparse rewards. Our evaluation shows that the agent successfully prunes the search space to answer binary causal questions by visiting fewer than 30 nodes per question, compared to over 3,000 nodes for a naive breadth-first search. Our ablation study indicates that our supervised learning strategy provides a strong foundation upon which our reinforcement learning agent improves. The paths returned by our agent explain the mechanisms by which a cause produces an effect. Moreover, for each edge on a path, our causality graph provides its original source, allowing for easy verification of paths.
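To make the abstract's search-space-pruning idea concrete, here is a minimal toy sketch (not the authors' implementation): a tiny invented causality graph with provenance tags, a naive BFS baseline, and a greedy walk guided by a stand-in edge-scoring table that plays the role of the trained actor. All node names, provenance IDs, and scores below are made up for illustration.

```python
from collections import deque

# Toy causality graph (cause -> list of (effect, provenance)).
# Everything here is invented for illustration, not the paper's data.
GRAPH = {
    "smoking":              [("tar deposits", "src-12"), ("stress relief", "src-7"),
                             ("social habit", "src-3")],
    "tar deposits":         [("lung damage", "src-31")],
    "stress relief":        [("lower blood pressure", "src-9")],
    "social habit":         [("peer influence", "src-5")],
    "lung damage":          [("cancer", "src-44")],
    "lower blood pressure": [],
    "peer influence":       [],
    "cancer":               [],
}

def bfs_answer(graph, cause, effect):
    """Naive breadth-first baseline: explores neighbours level by level.
    Returns (answer, nodes_visited, path)."""
    queue, visited = deque([(cause, [cause])]), {cause}
    while queue:
        node, path = queue.popleft()
        if node == effect:
            return True, len(visited), path
        for nxt, _prov in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return False, len(visited), []

def agent_answer(graph, cause, effect, policy, max_steps=10):
    """Greedy walk guided by a pretend-learned actor: score each unvisited
    outgoing edge and follow the best one, pruning the rest of the graph."""
    node, path, visited = cause, [cause], {cause}
    for _ in range(max_steps):
        if node == effect:
            return True, len(visited), path
        candidates = [(policy.get((node, nxt), 0.0), nxt)
                      for nxt, _prov in graph[node] if nxt not in visited]
        if not candidates:
            return False, len(visited), []
        _, node = max(candidates)
        visited.add(node)
        path.append(node)
    return False, len(visited), []

# Stand-in for the trained actor: edge scores such as a supervised
# bootstrapping phase plus RL fine-tuning might produce (invented values).
POLICY = {
    ("smoking", "tar deposits"): 0.9,
    ("smoking", "stress relief"): 0.2,
    ("smoking", "social habit"): 0.1,
    ("tar deposits", "lung damage"): 0.8,
    ("lung damage", "cancer"): 0.95,
}

found_bfs, n_bfs, _ = bfs_answer(GRAPH, "smoking", "cancer")
found_agent, n_agent, path = agent_answer(GRAPH, "smoking", "cancer", POLICY)
print(found_bfs, n_bfs)      # BFS explores the whole reachable neighbourhood
print(found_agent, n_agent)  # the guided walk visits far fewer nodes
print(" -> ".join(path))     # the returned path doubles as an explanation
```

Even on this eight-node toy graph the guided walk touches only the nodes on the answer path, while BFS expands every reachable node; the provenance tags on each edge mirror how the paper's causality graph lets readers verify a returned path.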
