Last paper for review!

Please volunteer to review this CoNLL interactive neural machine translation paper! Contact me ASAP if you are willing and able.

Or at least boost this post!
Thanks!

#nlp #NLProc #conll #conll2024

I am Area Chairing for CoNLL this year.

We are still looking for emergency reviewers for

- a paper on text representation for a low-resource language

- a paper on interactive machine translation

If you can review any of these papers in the next 2 days, pls contact me ASAP — thanks!

Pls boost!
#nlp #nlproc #conll #conll2024

As a nice surprise, this paper ended up getting an “honourable mention” at #CONLL at #EMNLP2023. Great work by first authors @briemadu and Pelin!

https://arxiv.org/abs/2310.18229

Revising with a Backward Glance: Regressions and Skips during Reading as Cognitive Signals for Revision Policies in Incremental Processing

In NLP, incremental processors produce output in instalments, based on incoming prefixes of the linguistic input. Some tokens trigger revisions, causing edits to the output hypothesis, but little is known about why models revise when they revise. A policy that detects the time steps where revisions should happen can improve efficiency. Still, retrieving a suitable signal to train a revision policy is an open problem, since it is not naturally available in datasets. In this work, we investigate the appropriateness of regressions and skips in human reading eye-tracking data as signals to inform revision policies in incremental sequence labelling. Using generalised mixed-effects models, we find that the probability of regressions and skips by humans can potentially serve as useful predictors for revisions in BiLSTMs and Transformer models, with consistent results for various languages.
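
Purely as an illustration (not the paper's code): in incremental sequence labelling, a "revision" is an edit to a label the model has already emitted for an earlier token. Here is a minimal sketch of how one might count such revisions, using a hypothetical toy tagger standing in for a real BiLSTM or Transformer:

```python
# Illustrative sketch only: count revisions in incremental sequence labelling.
# A model re-labels the growing input prefix at each time step; a "revision"
# occurs whenever a label already emitted for an earlier token changes.

from typing import Callable, List

def count_revisions(tokens: List[str],
                    tag_prefix: Callable[[List[str]], List[str]]) -> List[int]:
    """Return, per time step, how many earlier labels were edited."""
    revisions = []
    prev_labels: List[str] = []
    for t in range(1, len(tokens) + 1):
        labels = tag_prefix(tokens[:t])  # re-tag the current prefix
        # Compare against the labels emitted at the previous step.
        edits = sum(1 for old, new in zip(prev_labels, labels) if old != new)
        revisions.append(edits)
        prev_labels = labels
    return revisions

# Hypothetical toy tagger: tags everything NOUN, but once "runs" becomes
# visible it retroactively flips the preceding token's label.
def toy_tagger(prefix: List[str]) -> List[str]:
    labels = ["NOUN"] * len(prefix)
    for i, tok in enumerate(prefix):
        if tok == "runs" and i > 0:
            labels[i - 1] = "SUBJ"
            labels[i] = "VERB"
    return labels

print(count_revisions(["the", "dog", "runs", "fast"], toy_tagger))
# -> [0, 0, 1, 0]: step 3 revises the label previously emitted for "dog"
```

A revision policy in the paper's sense would try to predict the time steps where these edits are worth making, which is where the human regression/skip probabilities come in as candidate training signals.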

I am going to share some live summaries from #EMNLP #NEURIPS #CONLL conferences

Hope I am not too spammy (it will pass soon)

And tell me how I can make it more useful for people who aren't attending at all
#machinelearning #LLM #LLMs #NLP #NLProc #ML #CV #data

Welcome the new babies!
19 pretrained models on the loose track
24 on the strict track
118 on the strict-small track
https://dynabench.org/babylm

We are proud of the >30 pretraining teams that submitted papers to BabyLM!

FOMO?
Get updated on CoNLL or
participate next year
https://babylm.github.io

#NLP #nlproc #babyLM #CoNLL #machinelearning #llm #llms #pretraining

I will be at @[email protected] & @[email protected]. Say hi!

I will be tweeting under #EMNLP2022livetweet or #EMNLP2022 or #CoNLL or #CoNLLlivetweet2022

If it spams you, mute it (or wait a week 😉)
https://help.twitter.com/en/using-twitter/advanced-twitter-mute-options

At #conll & #EMNLP, talk to me about:
ColD Fusion & https://ibm.github.io/model-recycling/
BabyLM shared task
https://www.label-sleuth.org/
Enhancing decoders with syntax

And work I guided (talk to the authors too):
Estimating #neuralMT quality with the source only
Controlling structure at the neuron level
Details:

Model Recycling: the best model per architecture, comparing finetuned models from HF as base models for further finetuning.