👥 Justus-Jonas Erker (UKP Lab/Technische Universität Darmstadt), Nils Reimers (Cohere), Iryna Gurevych (UKP Lab/Technische Universität Darmstadt)

See you at #EACL2026 in Rabat 🕌!

#UKPLab #NLP #NLProc #InformationRetrieval #DenseRetrieval #MultiHop #FactChecking #QuestionAnswering #RAG

Call for participation: *SciVQA* Shared Task (https://sdproc.org/2025/scivqa.html)

@NFDI4DS members Ekaterina Borisova and Georg Rehm are organizing the Scientific Visual Question Answering (SciVQA) shared task, to be held on July 31 or August 1, 2025 in Vienna, Austria, as part of the SDP 2025 Workshop.

Deadline for system submissions: May 16, 2025

#chart
#diagram
#multimodalQA
#visualattributes
#questionanswering
#arXiv
#SciVQA
#SDP2025
#ACL2025
#Vienna
#codabench
#huggingface
#NFDI4DS

5th Workshop on Scholarly Document Processing


Our Institute is hiring. If you are interested in working at the intersection of conversational question-answering and geographic knowledge graphs, we would love to hear from you!
#knowledgegraph #questionanswering

Full job ad: https://www.verw.tu-dresden.de/StellAus/stelle.asp?id=11872&lang=de&style=verw

Job posting ID 11872

PDF-Based Question Answering with Amazon Bedrock and Haystack

Amazon Bedrock is a fully managed service that provides high-performing foundation models from leading AI startups and Amazon through a single API. You can choose from various foundation models to…
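The single-API pattern the post describes can be sketched in a few lines. This is a minimal illustration, not the Haystack integration itself: the `anthropic.claude-v2` model ID and the request payload shape are assumptions, and `chunk_text`/`build_prompt` are hypothetical helpers standing in for a real PDF-extraction and retrieval pipeline.

```python
import json

def chunk_text(text, max_chars=2000):
    """Naive fixed-size chunking of text extracted from a PDF."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def build_prompt(chunks, question):
    """Assemble a retrieval-style prompt from document chunks and a question."""
    context = "\n\n".join(chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def ask_bedrock(prompt, model_id="anthropic.claude-v2"):
    """Send the prompt to Amazon Bedrock (requires AWS credentials)."""
    import boto3
    client = boto3.client("bedrock-runtime")
    body = json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    })
    resp = client.invoke_model(modelId=model_id, body=body)
    return json.loads(resp["body"].read())["completion"]
```

In practice Haystack wraps the model call and chunking in pipeline components; the sketch just makes the data flow visible.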


#TechNews: #Qwen Releases New #VisionLanguage #LLM Qwen2-VL 🖥️👁️

After a year of development, #Qwen has released Qwen2-VL, its latest #AI system for interpreting visual and textual information. 🚀

Key Features of Qwen2-VL:

1. 🖼️ Image Understanding:

Qwen2-VL delivers strong results on #VisualUnderstanding benchmarks including #MathVista, #DocVQA, #RealWorldQA, and #MTVQA.

2. 🎬 Video Analysis:

Qwen2-VL can analyze videos over 20 minutes in length. This is achieved through online streaming capabilities, allowing for video-based #QuestionAnswering, #Dialog, and #ContentCreation. #VideoAnalysis

3. 🤖 Device Integration:

The #AI can be integrated with #mobile phones, #robots, and other devices. It uses reasoning and decision-making abilities to interpret visual environments and text instructions for device control. #AIAssistants 📱

4. 🌍 Multilingual Capabilities:

Qwen2-VL understands text in images across multiple languages. In addition to English and Chinese, it supports most European languages as well as Japanese, Korean, Arabic, and Vietnamese, among others. #MultilingualAI

This release represents an advancement in #ArtificialIntelligence, combining visual perception and language understanding. 🧠 Potential applications include #education, #healthcare, #robotics, and #contentmoderation.

https://github.com/QwenLM/Qwen2-VL

GitHub - QwenLM/Qwen2-VL: Qwen2-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud.

MBZUAI is looking to recruit postdoctoral researchers (https://mbzuai.ac.ae/) to work on #Arabic #NLP. The ideal candidate should be particularly interested in #Dialectal Arabic, Arabic #LLMs, and #QuestionAnswering, but candidates with experience in Arabic NLP in general will also be considered.

To apply, please write directly to Preslav Nakov:
[email protected]
MBZUAI - Mohamed bin Zayed University of Artificial Intelligence

Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) is a graduate research university dedicated to advancing AI as a global force for good.

Meet GraphRAG: Microsoft’s New Graph-Based AI Method for Superior Data Insights

Microsoft introduces GraphRAG, a graph-based AI method for retrieval-augmented generation, now available on GitHub. This tool enhances data retrieval and question answering for private or unseen datasets, producing more systematic and complete responses.
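The core idea behind graph-based RAG can be shown with a toy sketch: link entities that co-occur in documents, then expand a query's entities through the graph before retrieving. This is only an illustration of the idea, not Microsoft's GraphRAG implementation (which also builds community summaries over the graph).

```python
from collections import defaultdict
from itertools import combinations

def build_entity_graph(docs):
    """docs: {doc_id: set of entities}. Connect entities that co-occur in a doc."""
    graph = defaultdict(set)
    for entities in docs.values():
        for a, b in combinations(sorted(entities), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def retrieve_context(graph, docs, query_entities):
    """Expand query entities by one hop, then return docs mentioning any of them."""
    expanded = set(query_entities)
    for e in query_entities:
        expanded |= graph.get(e, set())
    return [doc_id for doc_id, ents in docs.items() if ents & expanded]
```

The one-hop expansion is what lets a graph-based retriever surface documents that never mention the query's entities directly but are connected to them through shared neighbors.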


🎉 We developed a prompting method for improved (and more human-like) LLM reasoning and applied it to hybrid question answering, surpassing the GPT-4 baseline. 🚀

Thanks to my co-authors Dhananjay, Preetam and @SaharVahdati ! We'll present the work at #ACL2024 in Bangkok this year where I hope I'll be able to meet a few of you.

Blog post: https://linkedin.com/pulse/beyond-boundaries-human-like-approach-question-over-sources-lehmann-dhtne

Paper: https://www.amazon.science/publications/beyond-boundaries-a-human-like-approach-for-question-answering-over-structured-and-unstructured-information-sources

#AI #LLMs #QuestionAnswering #ConversationalAI

Beyond Boundaries: A Human-like Approach for Question Answering over Structured and Unstructured Information Sources

Together with my co-authors, we are excited to share our work on an easy-to-apply method to improve LLM reasoning and how we applied it to question answering across heterogeneous sources. 🚀 Language models relying solely on their internal parameters lack recent knowledge as well as
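The idea of answering over structured and unstructured sources can be sketched as a simple router: try a knowledge-base lookup first, fall back to text retrieval otherwise. This is a hypothetical illustration of the general setup, not the prompting method from the paper; the keyword filter stands in for a real retriever.

```python
def answer_structured(kb, entity, relation):
    """Look up an (entity, relation) fact in a toy knowledge base."""
    return kb.get((entity, relation))

def answer_unstructured(passages, keywords):
    """Return passages mentioning all keywords (a stand-in for dense retrieval)."""
    return [p for p in passages if all(k.lower() in p.lower() for k in keywords)]

def hybrid_answer(kb, passages, entity, relation, keywords):
    """Prefer the structured source; fall back to text evidence otherwise."""
    fact = answer_structured(kb, entity, relation)
    if fact is not None:
        return fact
    hits = answer_unstructured(passages, keywords)
    return hits[0] if hits else None
```

A human answers the same way: check the reliable table first, and only read through free text when the table has no entry.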

Very biased, but also very excited about Khyathi Chandu's presentation of our new proposed shared task at #INLG2023: "LowReCorp: The Low-Resource NLG Corpus Building Challenge"

Join the #SharedTask during the coming year if you want to use our UI or task design to collect #NLG data for #LowResourceLanguages!

#DialogueSummarization #QuestionAnswering #ResponseGeneration

Addendum 1

Instruction tuning: https://en.wikipedia.org/wiki/Large_language_model#Instruction_tuning
https://mastodon.social/@persagen/110945422507756632
* self-instruct approaches
* enable the LLM to bootstrap correct responses
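The self-instruct loop in the bullets above can be sketched as: seed the pool with a few tasks, sample new instructions from the model, and keep only candidates that are not near-duplicates of what is already in the pool. A toy version with a similarity filter (the `generate` callable is a stand-in for a real LLM):

```python
import difflib

def is_novel(candidate, pool, threshold=0.7):
    """Reject candidates too similar to anything already in the pool
    (a crude stand-in for the ROUGE-based filter used in self-instruct)."""
    return all(
        difflib.SequenceMatcher(None, candidate, t).ratio() < threshold
        for t in pool
    )

def self_instruct(seed_tasks, generate, rounds=1):
    """Grow an instruction pool by sampling the model and keeping novel outputs."""
    pool = list(seed_tasks)
    for _ in range(rounds):
        for cand in generate(pool):
            if is_novel(cand, pool):
                pool.append(cand)
    return pool
```

The grown pool is then used as fine-tuning data, which is what lets the model bootstrap its own instruction-following ability.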

FactLLaMA: Optimizing Instruction-Following Language Models with External Knowledge for Automated Fact-Checking
https://arxiv.org/abs/2309.00240
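The FactLLaMA recipe — retrieve external evidence, then verify the claim with an instruction-tuned model — can be illustrated with a toy sketch. The word-overlap retriever and the prompt wording below are assumptions for illustration, not the paper's pipeline (which uses a search API and a fine-tuned LLaMA).

```python
def retrieve_evidence(claim, corpus, top_k=2):
    """Rank corpus sentences by word overlap with the claim
    (a stand-in for an external search API)."""
    claim_words = set(claim.lower().split())
    scored = sorted(
        corpus,
        key=lambda s: len(claim_words & set(s.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_fact_check_prompt(claim, evidence):
    """Instruction-style prompt pairing the claim with retrieved evidence."""
    ev = "\n".join(f"- {e}" for e in evidence)
    return (
        "Instruction: Verify the claim against the evidence. "
        f"Answer SUPPORTED or REFUTED.\nClaim: {claim}\nEvidence:\n{ev}\nAnswer:"
    )
```

Feeding the prompt to an instruction-tuned model turns fact-checking into ordinary instruction following, with the retrieved evidence supplying the knowledge the model's parameters lack.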

LLaMA: https://en.wikipedia.org/wiki/LLaMA
* family of large language models (LLMs) released in February 2023 by Meta AI

#LLM #LLaMA #FactLLaMA #AugmentedLLM #SelfSupervisedLLM #LargeLanguageModels #QuestionAnswering #NLP #GPT
