Maartje ter Hoeve

205 Followers
57 Following
24 Posts
Machine Learning Researcher at Apple MLR • PhD from the University of Amsterdam • MSc AI, BA Linguistics • Interned at Apple MLR; FAIR; MSR • she/her
Website: https://maartjeth.github.io
🍏 The Apple MLR team in Paris has an open AI resident position (July 2024 to July 2025). See the link below for more info and the application procedure. Please reach out to [email protected] if you have questions! https://jobs.apple.com/en-us/details/200514644/aiml-resident-machine-learning-research?team=MLAI

I’ll also be there to answer questions about the data collection for the #NLProc task @IgluContest at #NeurIPS2022

🕐 1pm, #interNLP workshop, room 397

Come and say hi! 😃

I will present our work ✨Towards Interactive Language Modeling✨ at #NeurIPS2022 in the #interNLP workshop today:

LLMs perform really well, but what if we train them a bit more like how humans learn language?

w/ @n0mad_0, @_dieuwke_ & Emmanuel Dupoux

https://internlp.github.io/documents/2022/papers/2.pdf

Let's meet! I will be at NeurIPS from Tuesday morning this week.

We still have open positions in our group @ Apple in Paris, targeting interns (enrolled in a PhD program, with interests in optimization / flows / OT) as well as full-time roles.

More generally please do not hesitate to reach out if you are at the conference, and interested in anything we do @ Apple (ott-jax!). I am looking forward to a lot of discussions 😀

On my way to New Orleans for #NeurIPS! NeurIPS was the first conference I ever went to as a PhD student, so it seems like a fitting way to finish as well 😃

Excited to meet everyone again! Let me know if you’re there and want to grab a coffee! ☕️🍵

How to predict #singlecell perturbation responses? What is optimal transport and why can it solve such puzzles? How to integrate this into #machinelearning pipelines to predict treatment outcomes of unseen patients? Join us tomorrow in-person or on Zoom http://broad.io/miatalks! More information here: http://broad.io/mia
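One of the teaser questions above, "what is optimal transport", can be illustrated with a tiny self-contained sketch: entropy-regularized OT solved with Sinkhorn iterations in plain NumPy. This is a hypothetical toy example, not the talk's actual single-cell pipeline; the cell-state values and parameters below are made up for illustration.

```python
import numpy as np

def sinkhorn(a, b, cost, eps=1.0, n_iters=1000):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    a, b : 1-D marginal histograms (each sums to 1)
    cost : pairwise cost matrix of shape (len(a), len(b))
    eps  : entropic regularization strength (larger = blurrier plan)
    """
    K = np.exp(-cost / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)                   # rescale so columns match b
        u = a / (K @ v)                     # rescale so rows match a
    return u[:, None] * K * v[None, :]      # transport plan P

# Toy example: "control" vs "perturbed" cell states on a 1-D expression axis.
x = np.array([0.0, 1.0, 2.0])               # control states
y = np.array([0.5, 1.5, 2.5])               # perturbed states
cost = (x[:, None] - y[None, :]) ** 2       # squared-distance cost
a = np.full(3, 1 / 3)                       # uniform mass on control cells
b = np.full(3, 1 / 3)                       # uniform mass on perturbed cells

P = sinkhorn(a, b, cost)
print(P.round(3))  # coupling: rows sum to a, columns sum to b
```

The resulting plan P says how much mass moves from each control state to each perturbed state; in perturbation-response settings this kind of coupling is what lets one match unpaired control and treated cell populations. Production work would use a dedicated library (e.g. ott-jax, mentioned elsewhere in this feed) rather than this bare-bones loop.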
Our paper "Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification was accepted to #emnlp2022 and now updated on arXiv. #NLProc #xai https://arxiv.org/abs/2111.07367
"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification

Feature attribution, a.k.a. input salience, methods which assign an importance score to a feature are abundant but may produce surprisingly different results for the same model on the same input. While differences are expected if disparate definitions of importance are assumed, most methods claim to provide faithful attributions and point at the features most relevant for a model's prediction. Existing work on faithfulness evaluation is not conclusive and does not provide a clear answer as to how different methods are to be compared. Focusing on text classification and the model debugging scenario, our main contribution is a protocol for faithfulness evaluation that makes use of partially synthetic data to obtain ground truth for feature importance ranking. Following the protocol, we do an in-depth analysis of four standard salience method classes on a range of datasets and shortcuts for BERT and LSTM models and demonstrate that some of the most popular method configurations provide poor results even for the simplest shortcuts. We recommend following the protocol for each new task and model combination to find the best method for identifying shortcuts.

#introduction

Hello all!

I am an ML researcher, working most of the time at Apple, and privileged to teach and supervise PhD students at ENSAE / IP Paris.

I won't toot very often, other than to advertise work from collaborators and myself, as well as positions @ Apple. Lovely to be here!