OK, we tried. The kids on our robotics team tried to implement this Hand-Eye Calibration on the 2025 season robot, to calibrate the robot-to-camera transformation from the swerve drive odometry and the AprilTag localization. It didn't work on the first try. We checked and rechecked that the measurements were collected in the correct order with good timing, but the calibration results are all over the place. The rotation comes out somewhat close to what's expected, but the translation lands a couple of meters outside the robot perimeter. The estimate also doesn't feel very stable: it varies a lot from one series of measurements to another. So I'm thinking maybe measurement accuracy affects the result more than we expected, and maybe the calibration algorithm requires more precision than we can get from the robot's sensors. We're going to keep trying, but...

If anybody has experience with Hand-Eye Calibration in real-world applications, or knows the theory behind the algorithm and can suggest what to focus on to get a usable result, please do.

#HandEyeCalibration #OpenCV #ComputerVision #AprilTags #PhotonVision

Context: #FRCRobot, #AprilTags, Camera mounted on the #robot

Problem: Can we find the translation and rotation of the camera relative to the robot chassis programmatically, fully or at least partially automated?

Idea: the #OpenCV library has a powerful calibration routine called "Hand-Eye Calibration" which can calculate the relationship between the "hand" and the "eye" in a setup where the hand holds the eye rigidly and moves relative to a "base". The camera (the eye) can see a target that is also fixed relative to the base. Can we use it on our robot, with the eye rolling around a field with a fixed AprilTag configuration?

Let's set the terminology. The "eye" is the camera. The camera is mounted on the robot, so the robot is the "hand" holding the "eye". The robot can move across the field freely, and we can track this movement using the robot's odometry, so the field is the "base". And the "target", the AprilTag, is mounted on the field/base.
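Since the odometry reports a planar pose while the calibration works in 3-D, the (x, y, heading) pose has to be lifted into a full rotation matrix and translation vector first. A minimal NumPy sketch (the function name is ours, not from any library):

```python
import numpy as np

def pose2d_to_Rt(x, y, theta):
    """Lift a planar odometry pose (x, y in meters, heading theta in
    radians) to the 3x3 rotation matrix and 3x1 translation vector
    that a hand-eye solver expects."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])   # yaw about the vertical z-axis
    t = np.array([[x], [y], [0.0]])   # robot stays on the floor plane
    return R, t
```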

This hand-eye calibration function requires two sets of measurements as input: eye-to-target transformations and base-to-hand transformations. Each transformation is a translation vector plus a rotation matrix. In our case, "eye-to-target" is camera-to-target, which we can obtain from #PhotonVision, and "base-to-hand" is the robot pose on the field, which we can get from the drivetrain odometry. The calibration result is then hand-to-eye: the robot-to-camera transformation.

Should work, right?

#omgRobots

M5Stack LidarBot Indoor navigation using April Tags (Work in progress)

https://makertube.net/w/uaHYbPWUKx487S2B9dRJVv

Ohio State students win best prototype in NASA Human Lander Challenge

What began as an idea sketched out by Ohio State students has now become an award-winning prototype that could one day help astronauts refuel their way to the moon. The students’ work was named best prototype on June 26, in NASA’s Human Lander Challenge, which was held near the Marshall Space Flight Center in Huntsville, […]

The Lantern

Another challenge of our Hackathon was the attempt to track #FingerMovement automatically and synchronise it with the sonic data.

We wanted to test the current state of the art using the #Ultraleap Leap Motion Controller 2 for finger tracking.

And since neither the accordion nor the guitar stays still while being played, we also planned to track the movement of the instruments with #ARTags (aka #AprilTags).

#RomaniChords

🧵9/20

Combined with the phone camera and AprilTags (https://docs.wpilib.org/en/stable/docs/software/vision-processing/apriltag/apriltag-intro.html), with 3D positioning, you could build really cool "waves", pictures, etc... I picture an arena with 20,000 people each representing a pixel...
#idea #apriltags
What Are AprilTags?

A demonstration of AprilTag fiducial targets attached to generic robots. AprilTags are a system of visual tags developed by researchers at the University of Michigan to provide low overhead, high a...

FIRST Robotics Competition Documentation

Hey all! For any #frc folks, are there any good resources for vision? I'd like to use the apriltags to align and score, but I don't know much and was hoping to learn more.
To anyone, if you know of resources, papers, or anything else relating to vision, and how that ends up making stuff happen in the code, I'd appreciate anything you could share.

#programming #robot #robotics #code #apriltags #help #firstrobotics

M5Stack LidarBot Indoor navigation using April Tags (Work in progress)

https://diode.zone/videos/watch/59d4952a-6b38-40dd-be3c-4010dfe2728f
