Eugene Vinitsky

534 Followers
255 Following
38 Posts
Autonomous vehicles, multi-agent reinforcement learning, and transportation. Research Scientist at Apple, assistant prof at @nyutandon in Fall 2023. Go bears.
Website: eugenevinitsky.github.io
Google Scholar: https://scholar.google.com/citations?user=6dr5fLEAAAAJ&hl=en&oi=ao

RT @eugenevinitsky

“So much of research effectiveness is just good emotional regulation:
- ignoring the deluge of papers and continuing to read despite analysis paralysis
- finding motivation to work even though the amount of work is technically infinite”

This really resonates with me. Also: finding a balance between standing up for your work and analysis, and having the emotional maturity to listen to criticism, see the truth in it, and improve next time.

Just a quick reminder of the many interesting talks / interviews on the "Talk RL" podcast: https://www.talkrl.com/
TalkRL: The Reinforcement Learning Podcast

TalkRL podcast is All Reinforcement Learning, All the Time. In-depth interviews with brilliant people at the forefront of RL research and practice. Guests from places like MILA, OpenAI, MIT, DeepMind, Berkeley, Amii, Oxford, Google Research, Brown, Waymo, Caltech, and Vector Institute. Hosted by Robin Ranjit Singh Chauhan.


#introduction

I am a scientist at Meta AI in NYC studying machine learning and optimization, recently involving reinforcement learning, control, optimal transport, and geometry. On social media, I enjoy finding and boosting interesting content from the original authors on these topics.

I made this small animation for my recent project on optimal transport, which connects continuous structures in the world. The source code to reproduce this and other examples is online at https://github.com/facebookresearch/w2ot

GitHub - facebookresearch/w2ot: Euclidean Wasserstein-2 optimal transportation


https://twitter.com/michaeld1729/status/1604855967879172096
Very interesting result, showing that one can do very well on the very popular SMAC benchmark without looking at the observations!

This should be a reminder that progress in RL and MARL is hard to measure well, and even when progress is being made on a well-defined metric, it is often not for the reasons we think!

#MARL #RL #reinforcementlearning

Michael Dennis on Twitter

“Great experiment! Goes to show that evaluating RL, especially MARL, can be very difficult to do well.”

Hello all! I'm interested in physics-based models of human movement, deep reinforcement learning, animation, and robotics. I'm a professor in the Dept. of Computer Science at UBC, in Vancouver, Canada. A list of recent projects can be found here:
https://www.cs.ubc.ca/~van/papers/index.html
I'm keen to stay connected to the research communities related to all the above areas!
#introductions
#RL #robotics
MvdP Projects & Publications

AI art without human input
I like to post every book I read on the birdsite in an annual Book Thread. I'm going to start the same thing here; if you do not want to see every trashy novel I read on an airplane and lots of reviews of dinosaur fact books, you can mute this thread (as in the attached screenshot).
I felt way more welcome once I downloaded the iPhone app. I'm fairly new to tooting; I'm a researcher at HuggingFace focusing on reinforcement learning! #introductions #welcome
Who am I missing? Let's reconstruct this social graph.

Starting an ongoing list of MARL folks here; will edit it as folks suggest others:

@michael_dennis
@jparkerholder
@sharky6000
@jnf
@kandouss
@karltuyls @julien @LukasSchaefer @oliehoek @backpropper @dhadfieldmenell