Back home after #iccv2023! Some highlights:
* Really enjoyed the #DataComp workshop, especially the keynote by Olga Russakovsky
* Posters at the #WiCV and #lxai workshops looked great too. Can we please have more time for workshops?
* Many datasets!
* Ran into more people than expected, even though I'd never been to ICCV or a similar conference before
* Outside of the conference, was really great to meet up with @GaelVaroquaux and @cazencott again 😊

If you have the chance to be at #iccv2023 in Paris, do not miss our latest work:

Urbano Miguel Nunes, Laurent U Perrinet, Sio-Hoi Ieng (2023). Time-to-Contact Map by Joint Estimation of Up-to-Scale Inverse Depth and Global Motion using a Single Event Camera. International Conference on Computer Vision 2023 (ICCV 2023).
👉 https://laurentperrinet.github.io/publication/nunes-23-iccv/

TL;DR: Using a #biomimetic event-driven algorithm, we estimate a map that is critical for obstacle avoidance when controlling vehicles such as drones. #spikes


Event cameras asynchronously report brightness changes with a temporal resolution on the order of microseconds, which makes them inherently suitable for problems that involve rapid motion perception, such as ventral landing and fast obstacle avoidance. These problems are typically addressed by estimating a single global time-to-contact (TTC) measure, which explicitly assumes that the surface/obstacle is planar and fronto-parallel. We relax this assumption by proposing an incremental event-based method that estimates the TTC by jointly estimating the (up-to-scale) inverse depth and global motion using a single event camera. The proposed method is reliable and fast while asynchronously maintaining a TTC map (TTCM), which provides per-pixel TTC estimates. As a side product, the proposed method can also estimate per-event optical flow. We achieve state-of-the-art performance on TTC estimation in terms of accuracy and runtime per event, while achieving competitive performance on optical flow estimation.
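Aside for the curious: here is a minimal numpy sketch (mine, not the authors' code) of why "up-to-scale" estimates can still yield an absolute TTC map. For approach along the optical axis, tau = Z / v_z, and depth and velocity share the same unknown scale factor, which cancels. All variable names below are illustrative, and the pure-translation setup is a simplifying assumption.

```python
# Illustrative sketch: TTC is invariant to the unknown global scale s.
# With inverse depth rho = 1/Z, tau = Z / v_z = 1 / (rho * v_z); if the
# method recovers rho/s and s*v_z instead, the product rho*v_z is unchanged.
import numpy as np

rng = np.random.default_rng(0)
rho_metric = rng.uniform(0.1, 1.0, size=(4, 4))  # true inverse depth map, 1/m
v_z_metric = 2.0                                  # true approach speed, m/s

s = 3.7                                           # unknown global scale
rho_up_to_scale = rho_metric / s                  # what can be recovered
v_z_up_to_scale = s * v_z_metric

ttc_true = 1.0 / (rho_metric * v_z_metric)        # per-pixel TTC, seconds
ttc_est = 1.0 / (rho_up_to_scale * v_z_up_to_scale)

assert np.allclose(ttc_true, ttc_est)             # the scale s cancels
```

This cancellation is what makes TTC (in actual seconds) recoverable from a single camera that cannot observe metric depth.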


Did you know that it's now possible to track everything, everywhere, all at once with the power of AI?

You should definitely take a look at the groundbreaking research on OmniMotion by @QianqianWang5 (a @GoogleAI researcher) and colleagues. Their work was so remarkable that it earned them the Best Student Paper Award at #ICCV2023.

OmniMotion introduces a point-tracking model capable of efficiently tracking dense points over extended time periods, even maintaining tracks through occlusions.

Read more here: https://arxiv.org/pdf/2306.05422.pdf

Besides people promoting their papers, half the posts about #iccv2023 on X seem to be people taking selfies with Yann LeCun, and Yann defending himself: "yes, there were at least as many women there too, they are just not tall enough to be in the picture" 😂

(for context: this is the lead AI researcher at Meta, about 20% of the conference attendees are women, and I have not seen many of them join this activity)

We should have conferences in France more often! #iccv2023
#iccv2023 is anyone aware of any parties this week?
At #ICCV2023 this week and looking for a postdoc in the near future? Shoot me a message and we'll find a time to chat.

My interest in computer vision started about 8 years ago, and here we are 🙂

🐱 PURRlab from IT University of Copenhagen today at #ICCV2023! @DrVeronikaCH

Presenting at the #DataComp Workshop how to augment chest X-ray datasets with non-expert annotations 🩻

Find more details about our work: https://purrlab.github.io/publications


Olga Russakovsky's keynote underway at the DataComp workshop #iccv2023 #computerVision
On my way to #iccv2023, reach out if you want to chat ☕!