After a few fruitless attempts it becomes apparent that the problem probably lies in the limited degrees of freedom of the robot's movements in 3D space. The #handeye calibration solves equations in 6 dimensions: 3 translational + 3 rotational. Our robot, however, cannot jump up and down, so there's no variability in the Z axis translation. It also cannot tilt: it simply rolls on a flat floor and can only rotate left and right, so there are no rotations of the robot, and therefore of the camera, around the X and Y axes. Out of the 6 possible degrees of freedom in 3D space, we can only exercise 3 of them:
- translation in the X direction,
- translation in the Y direction, and
- rotation around the Z axis.
This really sounds like we are very short on measurement variability. We gotta get more creative about how to enrich our measurements, or maybe introduce the constraints into the algorithm somehow. Think. Think...
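To make the degeneracy concrete, here is a small numerical sketch (plain NumPy, made-up numbers, not our actual calibration code). It simulates planar robot motions A and the camera motions B they induce through a hypothetical hand-eye transform X, and shows that the AX = XB equations cannot pin down the camera's height: shifting X along Z leaves every equation satisfied, because a pure Z offset commutes with every flat-floor motion.

```python
import numpy as np

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def se3(R, t):
    # Build a 4x4 homogeneous transform from rotation R and translation t.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

rng = np.random.default_rng(0)

# Hypothetical "true" hand-eye transform X (camera pose in the robot frame),
# including a tilt, to show the argument doesn't depend on X being planar.
X = se3(rotx(0.4) @ rotz(0.7), np.array([0.1, -0.05, 0.3]))

# Planar robot motions: rotation about Z only, translation in the XY plane.
As = [se3(rotz(rng.uniform(-1.0, 1.0)),
          np.array([*rng.uniform(-0.5, 0.5, size=2), 0.0]))
      for _ in range(20)]

# Simulated camera motions consistent with the hand-eye equation A X = X B.
Bs = [np.linalg.inv(X) @ A @ X for A in As]

def residual(Xc):
    # Worst-case violation of A X = X B over all motion pairs.
    return max(np.abs(A @ Xc - Xc @ B).max() for A, B in zip(As, Bs))

print(residual(X))           # essentially zero: X fits the data

# ...but so does X shifted a full meter along Z, because a Z-only offset
# commutes with every planar motion A. The Z translation is unobservable.
X_shifted = se3(np.eye(3), np.array([0.0, 0.0, 1.0])) @ X
print(residual(X_shifted))   # also essentially zero
```

So even with perfect, noise-free measurements, planar-only motion leaves (at least) the camera height as a free parameter; no amount of extra flat-floor driving fixes that.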