I'm still looking for a solution, and I have a fuzzy idea. Please, someone, tell me that I'm wasting my time and there's already a well-established solution for this problem.

My idea is that since the robot has only these 3 degrees of freedom, we should exploit that and add constraints. In other words, since the robot can only move on a flat surface, the solution doesn't have to be as comprehensive as full 3-dimensional #HandEyeCalibration. Even though the camera looks at a 3D space, the measurements of a target that's fixed in place all lie on a 2-dimensional plane in that space. From those measurements we can fit the equation of that plane in camera coordinates, which gives us the roll and pitch of the camera mount.
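A minimal numpy sketch of that plane-fit idea. All the data here is synthetic (a made-up tilted plane standing in for AprilTag measurements), and the roll/pitch sign convention is my assumption, not anything standard:

```python
import numpy as np

# Synthetic stand-in for target measurements in camera coordinates (meters).
# In practice these would come from AprilTag pose estimates as the robot drives.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(50, 2))
# Simulate a tilted camera: points lie on the plane z = 2 + 0.1*x - 0.2*y
pts = np.column_stack([xy[:, 0], xy[:, 1], 2.0 + 0.1 * xy[:, 0] - 0.2 * xy[:, 1]])

# Fit the plane by SVD: the right singular vector with the smallest
# singular value of the centered point cloud is the plane normal.
centroid = pts.mean(axis=0)
_, _, vt = np.linalg.svd(pts - centroid)
normal = vt[-1]
if normal[2] < 0:              # orient the normal toward the camera
    normal = -normal

# Tilt of the plane relative to the camera's optical axis.
# (My sign/axis convention; adapt to your robot's frames.)
pitch = np.degrees(np.arctan2(normal[0], normal[2]))
roll = np.degrees(np.arctan2(normal[1], normal[2]))
print(pitch, roll)             # ≈ -5.71, 11.31 for this synthetic plane
```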

However, we don't know the yaw yet. We can't be sure the camera is looking exactly forward, or at what angle it sits relative to the robot's kinematics. To find the yaw we would have to match the set of points where the camera saw the target against the points where the robot thought it was at that precise moment. Both sets of points are 2-dimensional, so matching them should be similar to, or exactly, finding a #homography between the two planes...

Would that homography matrix also hint at the offset between the robot drivetrain's kinematic center and the camera mount?
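My hunch on my own question: if the correspondences are matched by timestamp, the planar case collapses from a full homography to a 2D rigid transform, and its translation part would be exactly that offset. A toy sketch with made-up numbers, using a 2D Kabsch/Procrustes fit (my assumed way of solving it, not an established recipe for this):

```python
import numpy as np

# Synthetic correspondences: where odometry says the target was (robot frame)
# vs. where the camera measured it, both projected onto the floor plane.
rng = np.random.default_rng(1)
robot_pts = rng.uniform(-2.0, 2.0, size=(30, 2))

yaw_true = np.radians(12.0)            # the unknown camera yaw to recover
offset_true = np.array([0.25, -0.10])  # camera offset from the kinematic center
c, s = np.cos(yaw_true), np.sin(yaw_true)
R_true = np.array([[c, -s], [s, c]])
cam_pts = robot_pts @ R_true.T + offset_true

# 2D Kabsch/Procrustes: least-squares rigid transform robot_pts -> cam_pts.
mu_r, mu_c = robot_pts.mean(axis=0), cam_pts.mean(axis=0)
H = (robot_pts - mu_r).T @ (cam_pts - mu_c)
U, _, Vt = np.linalg.svd(H)
d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
R = Vt.T @ np.diag([1.0, d]) @ U.T
t = mu_c - R @ mu_r

yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(yaw, t)   # recovers the 12° yaw and the (0.25, -0.10) offset
```

With noise-free synthetic points the recovery is exact; the real question is how badly odometry noise degrades it.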

🤔

#FIRSTrobotics #FRC #CameraCalibration #SightAlignment

OK, we tried. The kids on our robotics team tried to implement this Hand-Eye Calibration on the 2025 season robot, to calibrate the robot-to-camera transformation from the swerve drive odometry and the AprilTag localization. It didn't "just work on the first try." We checked and rechecked that the measurements are in the correct order and well timed, but the calibration results are all over the place. The rotation is somewhat close to what's expected, but the translation puts the camera a couple of meters outside the robot's perimeter. And the estimate doesn't feel stable; it varies a lot from one series of measurements to another. So I'm thinking maybe measurement accuracy affects the result more than we expected, and maybe the calibration algorithm requires more precision than we can get from the robot sensors. We're going to keep trying, but...

If anybody has experience with Hand-Eye Calibration in real-world applications, or knows the theory behind the algorithm and can suggest what to focus on to get a usable result, please do.

#HandEyeCalibration #OpenCV #ComputerVision #AprilTags #PhotonVision