@lili yes, absolutely! We were early #deeplabcut adopters (still fans) but switched from DLC-anipose pairing to Jarvis for ease/accuracy/generalization of 3D. The former wasn’t handling different views well.
For Jarvis, we get hung up on calibration, which has to use their specific cards, and the results are very sensitive to the calibration data. Plus you need the right card per version. Relatedly, if you have a fixed environment with rigid structures, it would be nice to be able to supply a DIY calibration matrix of your own at the front end. To enhance the ability to data-mine/future-proof recordings, it would ideally allow earlier or different means of calibrating.
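To illustrate the kind of DIY calibration we mean: if you can measure a handful of 3D landmarks on the rigid structures in a fixed environment, a camera's projection matrix can be recovered directly from landmark-to-pixel correspondences with a Direct Linear Transform. This is a minimal pure-NumPy sketch of that general approach, not Jarvis's API; the function names are ours:

```python
import numpy as np

def calibrate_dlt(pts3d, pts2d):
    """Solve for a 3x4 camera projection matrix from >= 6 known 3D
    landmarks (e.g. corners of rigid structures in the arena) and
    their 2D pixel detections, via the Direct Linear Transform.
    Each correspondence contributes two linear equations in the 12
    entries of P; the solution is the null vector of the stack."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)  # defined only up to scale

def project(P, pts3d):
    """Project 3D points through P, returning pixel coordinates."""
    Xh = np.hstack([np.asarray(pts3d, float), np.ones((len(pts3d), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]
```

With noiseless correspondences the recovered matrix reprojects the landmarks essentially exactly; with real detections you would refine it by minimizing reprojection error, but even the raw DLT gives a usable front-end matrix.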
Managing the amount of overlap between camera views and handling missing data are also sticking points. We have multiple cameras with views of a given location, yet many/most cameras will miss that location. Requiring the same resolution across cameras was another stumbling block. We ended up training two models, one per active corner of our environment. Seems a bit awkward, but it worked!
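The missing-data problem is usually handled by triangulating each keypoint from whichever subset of cameras actually detected it. A minimal sketch of that idea (standard multi-view DLT triangulation in pure NumPy, assuming NaN marks a camera that missed the keypoint; not any specific tool's implementation):

```python
import numpy as np

def triangulate(Ps, pts2d):
    """Triangulate one 3D point from any subset of cameras.
    Ps: list of 3x4 projection matrices; pts2d: (N, 2) array where
    a row of NaNs marks a camera that missed the keypoint."""
    A = []
    for P, uv in zip(Ps, pts2d):
        if np.any(np.isnan(uv)):
            continue  # skip cameras without a detection
        u, v = uv
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    if len(A) < 4:
        return np.full(3, np.nan)  # need at least 2 views
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

As long as any two overlapping cameras see the point, you get a 3D estimate; frames where fewer than two cameras detect it come back as NaN and can be interpolated or dropped downstream.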
Preprint coming soon; happy to set up a Zoom for specifics, or in general to be “on call” for input or feedback.
The implementation of accurate 3D pose estimation has been one of the top two rate-limiting steps in our research.
You: “I make it better and faster”
Me: “YAAAAAAAAY!!!!!”