Alright I defended my thesis! I'll be working at the Allen Institute for Neural Dynamics next. I will improve Anipose to make it even easier to set up 3D tracking for animals. I will also collect a large dataset of muscle dynamics & 3D kinematics on mice for building musculoskeletal models.

I'm excited to hang out in this space again. Will experiment with posting progress and thoughts on new papers here more often.

#neuroscience

@lili congratulations! Allen is lucky to have you.
@lili Congrats! This sounds like a very worthwhile project.
@lili wow, congrats! Completing the thesis is such an accomplishment. I hope you’ll take the time to rest and relax. Congrats for your new job as well! As others have said, they are lucky to have you :)
@lili congrats! Your project at the Allen sounds very interesting too!

@lili

Yeah! Congrats L!

@lili congrats and good luck! I am shamelessly self-serving when I say that I hope you have great success with the #3Dpose estimation work. Especially if it can generalize to #monkey!

@karihoffman

2 different labs have used Anipose to estimate 3D pose in marmosets!

https://doi.org/10.1016/j.cub.2023.05.032

https://doi.org/10.1242/jeb.243998

You may want to check out JARVIS as well: https://jarvis-mocap.github.io/jarvis-docs/#why-jarvis

It does work! I would like to make the process much simpler so labs can get to 3D pose data in a few weeks instead of a few months.

#neuroscience #monkey

@lili

that would be amazing!

We (mainly grad student Ken and team) tried dlc-to-anipose and we ended up using Jarvis. For that, even, we needed some hacks and workarounds. There’s definitely room for improvement, and demand, for 3D pose estimation! I’ll keep my eyes peeled to see what you come up with next. And if you ever need some extra (macaque) multicam test data, we have plenty!

@karihoffman
Oh that's cool you set it up already!

If you don't mind sharing, I am very curious about your issues with Anipose or JARVIS. I'm starting to plan the next iteration of Anipose and it would help to know what to prioritize.

@lili yes, absolutely! We were early #deeplabcut adopters (still fans) but switched from DLC-anipose pairing to Jarvis for ease/accuracy/generalization of 3D. The former wasn’t handling different views well.

For Jarvis, we got hung up on calibration, which has to use their specific cards, and getting any sort of result is very sensitive to the calibration data. Plus you need the right card per version. Relatedly, if you have a fixed environment with rigid structures, it would be nice if you could supply a DIY calibration matrix of your own at the front end. To enhance the ability to data mine and to future-proof the recordings, it would ideally allow earlier or different means of calibrating.

Managing the amount of overlap between camera views and handling missing data are also sticking points. We have multiple cameras with views of a given location, yet many or most cameras will miss that location. Requiring the same resolution across cameras was another stumbling block. We ended up training two models, one per active corner of our environment. Seems a bit awkward, but it worked!

Preprint coming soon! Happy to set up a Zoom for specifics, or in general to be "on call" for input or feedback.

The implementation of accurate 3D pose has been one of the top 2 rate limiting steps in our research.
You: “I make it better and faster”
Me: “YAAAAAAAAY!!!!!”

@karihoffman Ah, that makes sense! Jarvis does have a nicer end-to-end 3D model and better annotation support. I'll have to work on improving this in Anipose.

It's interesting that you had issues with the calibration. Anipose supports somewhat generic calibration boards, but I have still struggled to debug calibration setups. It's a very frustrating step! A DIY calibration matrix makes sense, or perhaps even a DIY calibration object. Ideally, we would get rid of this step altogether...
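(For anyone following along: the calibration board in Anipose is described in its `config.toml`. A minimal sketch of the `[calibration]` section is below; the field names are as I remember them from the Anipose docs, and the values are illustrative, not a recommendation for any particular rig, so double-check against the current documentation.)

```toml
# [calibration] section of an Anipose config.toml (illustrative values)
[calibration]
board_type = "charuco"            # ChArUco board; checkerboards are also supported
board_size = [11, 8]              # number of squares along x and y
board_marker_bits = 4             # bits per ArUco marker
board_marker_dict_number = 50     # number of markers in the ArUco dictionary
board_marker_length = 18.75       # marker side length, in mm
board_square_side_length = 25     # checker square side length, in mm
```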

Does Jarvis require the same resolution across all cameras? That does seem a bit awkward...
All of the setups I've been doing in flies have been with cropped camera views. Perhaps it's not a big deal if you just fill the rest of the view with black?
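(The "fill with black" idea amounts to padding each cropped frame out to a common resolution. A minimal NumPy sketch, with a hypothetical helper name that isn't part of Anipose or JARVIS; padding on the bottom/right keeps the pixel coordinates of tracked points unchanged.)

```python
import numpy as np

def pad_to_resolution(frame, target_h, target_w):
    """Pad a cropped camera frame with black pixels to a common resolution.

    Illustrative helper only (not from Anipose/JARVIS). Padding is added
    to the bottom and right so existing pixel coordinates stay valid.
    """
    h, w = frame.shape[:2]
    if h > target_h or w > target_w:
        raise ValueError("frame is larger than the target resolution")
    # Pad rows and columns; leave any trailing color channel untouched.
    pad = [(0, target_h - h), (0, target_w - w)] + [(0, 0)] * (frame.ndim - 2)
    return np.pad(frame, pad, mode="constant", constant_values=0)

# A 480x640 grayscale crop padded to a 1080x1920 canvas:
crop = np.full((480, 640), 255, dtype=np.uint8)
full = pad_to_resolution(crop, 1080, 1920)
print(full.shape)  # (1080, 1920)
```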

Thank you so much for writing this! I look forward to your preprint!