Graphs are everywhere, but LLMs are trained on text. In "Talk like a Graph" (ICLR 2024), Google introduces methods for encoding graphs as text for LLMs, launches the GraphQA benchmark, and demonstrates how task phrasing & graph structure impact reasoning. The right encoding can boost performance by up to 60%! #ICLR2024 #AI #Graphs
https://blog.research.google/2024/03/talk-like-graph-encoding-graphs-for.html
Talk like a graph: Encoding graphs for large language models
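
For a concrete sense of what "encoding graphs as text" means, here is a minimal Python sketch of two common text encodings of a graph. Function names and exact phrasings are illustrative, not taken from the paper:

```python
def edge_list_encoding(edges):
    """Describe the graph as a flat list of edges."""
    lines = ["In this graph:"]
    for u, v in edges:
        lines.append(f"Node {u} is connected to node {v}.")
    return "\n".join(lines)

def incident_encoding(edges):
    """Describe the graph node by node, listing each node's neighbours."""
    neighbours = {}
    for u, v in edges:
        neighbours.setdefault(u, set()).add(v)
        neighbours.setdefault(v, set()).add(u)
    lines = ["In this graph:"]
    for node in sorted(neighbours):
        nbrs = ", ".join(str(n) for n in sorted(neighbours[node]))
        lines.append(f"Node {node} is connected to nodes {nbrs}.")
    return "\n".join(lines)

# The encoded string becomes part of the LLM prompt:
edges = [(0, 1), (1, 2), (2, 0)]
prompt = incident_encoding(edges) + "\nQ: How many edges are in this graph?"
```

The paper's finding is that choices at exactly this level, which phrasing and which encoding, measurably change how well the LLM reasons about the graph.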

At UKP Lab we research how AI hallucinations can be mitigated by merging LLMs. A paper on this topic by us, the Department of Computer Science at @TU Darmstadt, hessian.AI, the RIKEN Center for Advanced Intelligence Project and The University of Edinburgh was presented by Nico Daheim and Thomas Möllenhoff this month at #ICLR2024.

Learn more in this 🧵:
https://sigmoid.social/@UKPLab/112393529894161398

UKP Lab (@UKPLab@sigmoid.social)

Model Merging has shown great success but key questions remain unresolved ✅ Why does it work? ❌ When can it fail? We shed light on these questions by connecting inaccuracies of weighted averaging to mismatches in the gradients. 🧵(1/9) #ICLR2024 #NLProc 📰 https://arxiv.org/abs/2310.12808
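
For background, the baseline the thread refers to is plain weighted averaging of model parameters. A minimal sketch of that baseline (our illustration in PyTorch-style state dicts, not the authors' code):

```python
def weighted_average(state_dicts, weights):
    """Merge models by a weighted average of their parameters.

    All models must share one architecture; `weights` should sum to 1.
    Operates on state dicts mapping parameter name -> tensor.
    """
    assert abs(sum(weights) - 1.0) < 1e-6
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }

# Usage: merged = weighted_average(
#     [model_a.state_dict(), model_b.state_dict()], [0.5, 0.5])
```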


Explore this #ICLR2024 paper showing how large language models speed up reinforcement learning for long-horizon robotics tasks.

The approach handles 25+ robotic manipulation tasks spanning up to 10 stages across four benchmarks, with success rates above 85%.

Check out the paper & code: http://mihdalal.github.io/planseqlearn

Plan-Seq-Learn: Language Model Guided RL for Solving Long Horizon Robotics Tasks
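
At a high level, Plan-Seq-Learn combines an LLM planner, a classical motion planner, and a learned RL policy. A pseudocode sketch of that loop follows; all classes and method names are illustrative placeholders, not the released code:

```python
def plan_seq_learn_episode(llm, motion_planner, rl_policy, env, task_prompt):
    # 1. The LLM decomposes the task into a sequence of subgoals,
    #    e.g. ["grasp the handle", "pull the drawer open", ...].
    subgoals = llm.plan(task_prompt)
    obs = env.reset()
    for subgoal in subgoals:
        # 2. A classical motion planner moves the robot into the
        #    region of interest for this subgoal.
        obs = motion_planner.move_to(env, subgoal, obs)
        # 3. A learned RL policy handles the local, contact-rich
        #    interaction that motion planning alone cannot solve.
        done = False
        while not done:
            action = rl_policy.act(obs, subgoal)
            obs, reward, done, info = env.step(action)
            rl_policy.update(obs, reward)
```

The division of labour is the point: language handles sequencing, motion planning handles free-space movement, and RL is reserved for the short contact-rich segments where it is actually needed.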

We are live at the #AfricaNLP workshop at #ICLR2024 @ICLR, ongoing in Vienna, Austria.

Join our team TODAY as they present our 9 accepted papers at the intersection of large language models, geo-semantics, LLMs for social good, and low-resource languages, African languages in particular, at the 5th AfricaNLP workshop.

Our Research Lead, machine learning engineer Anthony Soronnadi, is currently in Vienna and available to discuss partnerships and collaborations. He will deliver his presentation in person, while the rest of the team presents remotely.

Location: Schubert 4

Time: 8:00 am - 4:00 pm (WAT)

And consider following the authors Nico Daheim, Thomas Möllenhoff, Edoardo M. Ponti, Iryna Gurevych and Mohammad Emtiyaz Khan @emtiyaz (UKP Lab, Department of Computer Science, @TU Darmstadt, hessian.AI, RIKEN Center for Advanced Intelligence Project, University of Edinburgh). (9/9)

See you in Vienna! 🇦🇹 #ICLR2024

Model Merging by Uncertainty-Based Gradient Matching

Models trained on different datasets can be merged by a weighted-averaging of their parameters, but why does it work and when can it fail? Here, we connect the inaccuracy of weighted-averaging to mismatches in the gradients and propose a new uncertainty-based scheme to improve the performance by reducing the mismatch. The connection also reveals implicit assumptions in other schemes such as averaging, task arithmetic, and Fisher-weighted averaging. Our new method gives consistent improvements for large language models and vision transformers, both in terms of performance and robustness to hyperparameters. Code available here.
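
To make the idea concrete, here is a minimal sketch of precision-weighted merging, the family of schemes (including Fisher-weighted averaging) that the abstract refers to. The code is illustrative and assumes precomputed diagonal precision estimates; it is not the paper's implementation:

```python
def precision_weighted_merge(state_dicts, precisions, eps=1e-8):
    """Merge parameters weighted by per-parameter precision estimates.

    `precisions[i][name]` has the same shape as the parameter, e.g. a
    diagonal Fisher estimate for model i; parameters the model is more
    certain about get more weight in the merge.
    """
    merged = {}
    for name in state_dicts[0]:
        total = sum(p[name] for p in precisions) + eps
        merged[name] = sum(
            p[name] * sd[name] for p, sd in zip(precisions, state_dicts)
        ) / total
    return merged
```

With all precisions equal this reduces to plain averaging; with diagonal Fisher estimates it recovers Fisher-weighted averaging. The paper's contribution is to explain when such schemes are inaccurate, by tying the error to a gradient mismatch, and to propose an uncertainty-based scheme that reduces it.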


The same principle applies to removing data from models: for example, reducing toxicity in LLMs by unlearning toxic training data, or deleting private training data, all without retraining.

(7/🧵) #ICLR2024 #NLProc
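
One way to picture "removing data by merging" is task-arithmetic-style negation: subtract the parameter shift induced by fine-tuning on the unwanted data. A hedged sketch, where the function name and `alpha` are illustrative and the paper's uncertainty-based weighting refines this simple form:

```python
def negate_task(base_sd, finetuned_sd, alpha=1.0):
    """Approximately unlearn a behaviour by subtracting the parameter
    shift that fine-tuning on the unwanted data produced.

    `finetuned_sd` is the base model further trained on the data to be
    removed (e.g. toxic text); `alpha` scales the negation strength.
    """
    return {
        name: base_sd[name] - alpha * (finetuned_sd[name] - base_sd[name])
        for name in base_sd
    }
```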

It also outperforms many existing schemes, such as Fisher Averaging. 🥇

(6/🧵) #ICLR2024 #NLProc

This new merging scheme improves both performance and robustness to scaling!

(5/🧵) #ICLR2024 #NLProc