Context: #FRCRobot, #AprilTags, Camera mounted on the #robot
Problem: Can we determine the translation and rotation of the camera relative to the robot chassis programmatically, in a fully or at least mostly automated way?
Idea: The #OpenCV library has a powerful calibration routine called "Hand-Eye Calibration", which computes the relationship between a "hand" and an "eye" in a setup where the hand holds the eye rigidly and moves relative to a "base". The camera (the eye) observes a target that is fixed relative to the base. Can we use it on our robot, with the "eye" rolling around a field that has a fixed AprilTag layout?
Let's set the terminology. The "eye" is the camera, which is mounted on the robot, so the robot is the "hand" holding the "eye". The robot moves freely across the field, and we can track that movement with the robot's odometry, so the field is the "base". The "target", an AprilTag, is fixed to the field/base.
This hand-eye calibration function requires two synchronized sets of measurements as inputs: target-in-eye transformations and hand-in-base transformations (in OpenCV's naming for `cv2.calibrateHandEye`, `R_target2cam`/`t_target2cam` and `R_gripper2base`/`t_gripper2base`). Each transformation is a rotation matrix plus a translation vector, and each pair must be captured at the same instant. In our case the first set is the AprilTag pose in the camera frame, which we can obtain from #PhotonVision (its camera-to-target transform expresses exactly that). The second set is the robot pose on the field, which we can get from the drivetrain odometry. The calibration result is then `cam2gripper` - the camera pose in the robot frame, i.e. the robot-to-camera transformation we are after.
Should work, right?