Nicolas Le Roux


The FATE group at Microsoft Research NYC is looking for interns and postdocs!

Relevant research themes include:

- Computational, statistical, & sociotechnical approaches to fairness assessment

- Human-centered AI transparency

- Institutional, organizational, & economic challenges of AI development, deployment, and use

- AI law and policy

- Responsible AI in practice

Interns (apply soon, we're reviewing applications now!): https://jobs.careers.microsoft.com/global/en/job/1661365/Research-Intern---FATE%2C-NYC-(Fairness%2C-Accountability%2C-Transparency%2C-and-Ethics-in-AI)

Postdocs: https://jobs.careers.microsoft.com/global/en/job/1667778/Post-Doc-Researcher–-FATE-–-Microsoft-Research


Microsoft Research Montreal is recruiting interns, postdocs and permanent researchers.

I am also looking for PhD students and postdocs at Mila.

I will be at NeurIPS and more than happy to discuss these openings with you if you're going, especially if you are not sure whether your expertise would be a good fit.

Alternatively, please reach out to me at [email protected].

I am especially interested in hearing from those who are not in my immediate professional circle.

I am also looking for postdocs in topics related to the above research directions, so if you plan on graduating soon and have worked on these topics, send me a message.

I'll be at ICML next week to present two pieces of work ("Target-based Surrogates for Stochastic Optimization" and "Decision-Aware Actor-Critic with Function Approximation and Theoretical Guarantees"). I will also moderate a discussion on the Societal Impacts of AI (https://icml.cc/virtual/2023/panel/28435).

Please reach out if you want to chat, either about these works or about our recent MSR work on Deep Language Networks (https://arxiv.org/abs/2306.12509 and https://medium.com/@friederike.niedtner/deep-language-networks-stacking-llms-in-trainable-layers-e7f719bcabde).


Layoffs:
- don't save money
- don't improve company performance
- don't increase stock prices
- destroy trust
- have huge impacts on health, well-being, and income of employees

So why do layoffs happen? It's a network effect: execs lay people off because other companies are doing it.

Stanford Biz School article: https://news.stanford.edu/2022/12/05/explains-recent-tech-layoffs-worried/

Harvard Biz Review:
https://hbr.org/2022/12/what-companies-still-get-wrong-about-layoffs


As layoffs in the tech sector mount, Stanford Graduate School of Business Professor Jeffrey Pfeffer is worried. Research – by him, and others – has shown that the stress layoffs create takes a devastating toll on behavioral and physical health and increases mortality and morbidity substantially. Layoffs literally kill people, he said.


We (MSR Montreal) are still looking for either a postdoc or principal researcher working on extending our understanding of deep learning.

We will soon be finalizing the selection of candidates to interview so apply soon if you are interested.

Reach out if you have any questions.

Post Doc Researcher - Machine Learning - Microsoft Research in Montreal, Québec, Canada | Research, Applied, & Data Sciences at Microsoft

RT @[email protected]

I have 6 fantastic students and post-docs who are on the academic job market this year. Here is a short thread summarizing their work along with one representative paper:

🐦🔗: https://twitter.com/percyliang/status/1613277082938904576

Newsletter #2 from the Ethics/Society folks @huggingface is out! This one was led by @yjernite, who's put together an *awesome* focus on what bias is, what to do about it, and the stuff we've developed *specifically in the bias space*.
https://huggingface.co/blog/ethics-soc-2
Ethics and Society Newsletter #2


RT @[email protected]

Me & @[email protected] wrote about how, with the new large language models, everything old is new again - we're still talking about the harms observed since 2016 with products like BERT-enabled search & the chatbot Tay. And facing the same pushback to critique.

https://www.wired.com/story/large-language-models-critique/

🐦🔗: https://twitter.com/rajiinio/status/1601243064991125505

ChatGPT, Galactica, and the Progress Trap

When large language models fall short, the consequences can be serious. Why is it so hard to acknowledge that?
