The team has come up with an idea to manage power consumption in a centralized manner. Each mechanism, before doing anything, would estimate how much power it's going to need in the current iteration and reserve that amount at a central location. The kids are now working out an allocation algorithm that processes these reservation requests and distributes the available power fairly according to each mechanism's importance.
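One way such an allocator could work is a weighted water-filling scheme: if total demand fits the budget, everyone gets what they asked for; otherwise the budget is split in proportion to importance, with fully satisfied mechanisms returning their surplus to the pool. This is just a sketch of one plausible scheme, not the team's actual algorithm, and all the names are illustrative:

```python
def allocate(requests, budget):
    """requests: dict name -> (requested_amount, importance). Returns name -> granted amount."""
    total = sum(amount for amount, _ in requests.values())
    if total <= budget:
        # Enough power for everyone: grant all requests in full.
        return {name: amount for name, (amount, _) in requests.items()}
    grants = {name: 0.0 for name in requests}
    remaining = dict(requests)
    while remaining and budget > 1e-9:
        # Split the remaining budget in proportion to importance.
        weight_sum = sum(imp for _, imp in remaining.values())
        share = {name: budget * imp / weight_sum
                 for name, (_, imp) in remaining.items()}
        # Mechanisms whose share covers their request are fully satisfied.
        satisfied = {name for name, (amt, _) in remaining.items()
                     if share[name] >= amt}
        if not satisfied:
            # No one can be fully satisfied: hand out the proportional
            # shares as-is and stop.
            for name in remaining:
                grants[name] += share[name]
            break
        # Grant satisfied requests in full and return the surplus
        # to the pool for the next round.
        for name in satisfied:
            amt, _ = remaining.pop(name)
            budget -= amt
            grants[name] = amt
    return grants
```

For example, with a 60 W budget and two equally important mechanisms asking for 10 W and 100 W, the first gets its full 10 W and the second gets the remaining 50 W.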
Sunset over the Orange River. In 2024, we camped here on the riverbank in a small border village called Onseepkans. The next morning we crossed over into Namibia and headed up to the Fish River Canyon.
Not sure about our overnight plans this year, but we are definitely heading up to the FRC again.
18 hours until the #FIRST #KickOff 2026.
I'm so pumped!
#frc #firstroboticscompetition #robotics #robots #firstinspires #firstrobotics
I'm still looking for a solution. I have a fuzzy idea. Please, someone, tell me that I'm wasting my time and there's already a well-established solution to this problem.
My idea is that since we have only these 3 degrees of freedom, we should exploit that and add constraints. In other words, because the robot can only move on a flat surface, the solution doesn't have to be as general as full 3-dimensional #HandEyeCalibration. Even though the camera looks at a 3D space, the measurements of a target that's fixed in place sweep out a 2-dimensional plane in that space. From those measurements we can find the equation of that plane in camera coordinates, which gives us the roll and pitch of the camera mount.
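The plane-fitting step can be sketched with a least-squares fit via SVD. This assumes `pts` holds the fixed target's 3D positions in camera coordinates (OpenCV-style axes: +X right, +Y down, +Z forward) collected while driving around; the plane's normal is then the floor normal expressed in the camera frame, and the mount's roll and pitch can be read off it. Function names and sign conventions here are my own assumptions:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through an Nx3 point cloud; returns (centroid, unit normal)."""
    centroid = pts.mean(axis=0)
    # The plane normal is the direction of least variance of the point cloud.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    if normal[1] > 0:
        # Resolve the sign ambiguity: make the normal point "up" (-Y in
        # OpenCV camera coordinates).
        normal = -normal
    return centroid, normal

def roll_pitch_from_normal(n):
    """Angles (radians) tilting a level camera's -Y axis onto the plane normal n.

    Signs depend on your chosen angle conventions; treat these as a sketch.
    """
    roll = np.arctan2(n[0], -n[1])                   # tilt about the camera's Z axis
    pitch = np.arctan2(n[2], np.hypot(n[0], n[1]))   # tilt about the camera's X axis
    return roll, pitch
```

A level mount yields a normal of (0, -1, 0) and zero roll and pitch; any tilt shows up directly in the recovered angles.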
However, we don't know the yaw yet. We can't tell whether the camera is looking exactly forward, or at what angle, relative to the robot's kinematics. To find the yaw we would have to match the set of points where the camera saw the target against the points where the robot thought it was at those precise moments. Both sets of points are 2-dimensional, so matching them should amount to something like, or exactly, finding a #homography between the two planes...
Would that homography matrix also hint at the offset of the camera mount from the robot drivetrain's kinematic center?
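If I'm right that both point sets live in ground-plane coordinates with the same scale, then a full 8-DOF homography may be overkill: aligning them is a 2D rigid registration (Kabsch/Procrustes), whose rotation would be the mount's yaw and whose translation would encode the offset from the kinematic center. A sketch under that assumption, with time-synchronized point pairs and illustrative names:

```python
import numpy as np

def rigid_align_2d(cam_pts, odom_pts):
    """Find R (2x2) and t (2,) minimizing ||R @ cam_i + t - odom_i||^2.

    cam_pts, odom_pts: Nx2 arrays of corresponding, time-synchronized points.
    Returns (R, t, yaw), yaw in radians.
    """
    # Center both clouds, then solve for the optimal rotation via SVD
    # of the cross-covariance matrix (the Kabsch algorithm in 2D).
    ca, cb = cam_pts.mean(axis=0), odom_pts.mean(axis=0)
    H = (cam_pts - ca).T @ (odom_pts - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cb - R @ ca
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return R, t, yaw
```

Whether `t` is exactly the camera-to-kinematic-center offset depends on which frames the two point sets are expressed in, so I'd verify that geometry on paper first; with noisy data one would also want an outlier-robust wrapper (e.g. RANSAC) around this.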
🤔