Assistant Professor at Linköping University
#SeMatS2025 Program is now available online! Join us for the 2nd International Workshop on Semantic Materials Science, where we are honoured to have Prof. Roger French and Prof. Amila Akagic giving keynote talks.
https://sites.google.com/view/semats2025/program
@lysander07 @iswc_conf
semats2025 - Program

The workshop will take place on November 2nd, 13:30 - 17:00, Nara, Japan. All times are in Japan Standard Time (JST). Each paper has 12 minutes for presentation and 3 minutes for Q&A. 13:30 - 13:40 Opening and Introduction. Session 1: 13:40 - 14:30 Keynote: Semantic Data Management and …

Deadline extension for #SeMatS2025
2nd Int. Workshop on Semantic Materials Science - Harnessing the Power of Semantic Web Technologies in Materials Science, co-located at #ISWC2025
New Deadline: Aug 9, 2025, AoE

Please spread the news and submit! :)

https://sites.google.com/view/semats2025/call-for-papers?authuser=0

#semanticweb #ontologies #bfo #materialsscience #mse #knowledgegraphs #AI @fiz_karlsruhe @fizise @KIT_Karlsruhe @HL @joerg #NFDIMatWerk #PMD #materialdigital #stahldigital

Can LLMs generate novel research ideas? A new study is making waves.
Researchers from Stanford designed an experiment to find out. They recruited over 100 NLP experts to write research ideas and review both human and LLM-generated ideas in a blinded setup.
The results? LLM ideas were judged as significantly more novel (p < 0.05) than human expert ideas, while being rated slightly lower on feasibility.

http://www.arxiv.org/abs/2409.04109

#llms #research #ai #generativeai

Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers

Recent advancements in large language models (LLMs) have sparked optimism about their potential to accelerate scientific discovery, with a growing number of works proposing research agents that autonomously generate and validate new ideas. Despite this, no evaluations have shown that LLM systems can take the very first step of producing novel, expert-level ideas, let alone perform the entire research process. We address this by establishing an experimental design that evaluates research idea generation while controlling for confounders and performs the first head-to-head comparison between expert NLP researchers and an LLM ideation agent. By recruiting over 100 NLP researchers to write novel ideas and blind reviews of both LLM and human ideas, we obtain the first statistically significant conclusion on current LLM capabilities for research ideation: we find LLM-generated ideas are judged as more novel (p < 0.05) than human expert ideas while being judged slightly weaker on feasibility. Studying our agent baselines closely, we identify open problems in building and evaluating research agents, including failures of LLM self-evaluation and their lack of diversity in generation. Finally, we acknowledge that human judgements of novelty can be difficult, even by experts, and propose an end-to-end study design which recruits researchers to execute these ideas into full projects, enabling us to study whether these novelty and feasibility judgements result in meaningful differences in research outcome.


#researchers with a #mastodon account and a field of work known to #wikidata https://w.wiki/649f

Most represented fields of work:
1 #SemanticWeb
2 #ComputerScience
3 #MachineLearning
Good illustration of the bias in the data ^^

Edit: See also https://mastodon.social/@nemobis@mamot.fr/109333232434584227
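For the curious, a query like the one behind the short link above can be sketched against the Wikidata Query Service. This is a minimal sketch, assuming the query counts people by field of work (P101) among items with a Mastodon address (P4033); the property IDs are real Wikidata properties, but the exact shape of the original query is an assumption.

```python
# Count researchers per field of work (P101) among Wikidata items
# that have a Mastodon address (P4033), most represented fields first.
SPARQL = """
SELECT ?field ?fieldLabel (COUNT(DISTINCT ?person) AS ?people) WHERE {
  ?person wdt:P4033 ?mastodonAddress .   # has a Mastodon address
  ?person wdt:P101 ?field .              # field of work
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
GROUP BY ?field ?fieldLabel
ORDER BY DESC(?people)
"""

def run_query(query: str, endpoint: str = "https://query.wikidata.org/sparql"):
    """Send a SPARQL query to the Wikidata Query Service, return JSON bindings."""
    import json, urllib.parse, urllib.request
    url = endpoint + "?" + urllib.parse.urlencode(
        {"query": query, "format": "json"})
    req = urllib.request.Request(
        url, headers={"User-Agent": "field-of-work-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]
```

Results will naturally mirror who bothers to add their Mastodon handle to Wikidata, which is the sampling bias the post points out.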