The team has come up with an idea to manage power consumption in a centralized manner. Before doing anything, each mechanism would estimate how much power it's going to need in the current iteration and reserve that amount at a central location. The kids are now working out an allocation algorithm that would process these reservation requests and distribute the available power fairly according to each mechanism's importance.
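The kids' algorithm isn't written yet, but one way such a scheme is often sketched is weighted max-min fair allocation ("water-filling"): give each pending mechanism its importance-weighted share of the remaining budget, capped at what it asked for, and redistribute leftovers. The mechanism names, weights, and wattages below are made up for illustration.

```python
def allocate(requests, weights, budget):
    """Weighted max-min fair ("water-filling") allocation sketch.

    requests: dict of mechanism -> watts requested this iteration
    weights:  dict of mechanism -> importance weight
    budget:   total watts available to hand out
    """
    alloc = {name: 0.0 for name in requests}
    pending = set(requests)
    remaining = budget
    while pending and remaining > 1e-9:
        total_w = sum(weights[n] for n in pending)
        satisfied, spent = set(), 0.0
        for n in pending:
            # each pending mechanism gets its weighted share of what's
            # left, but never more than it actually asked for
            share = remaining * weights[n] / total_w
            grant = min(share, requests[n] - alloc[n])
            alloc[n] += grant
            spent += grant
            if alloc[n] >= requests[n] - 1e-9:
                satisfied.add(n)
        pending -= satisfied
        remaining -= spent
        if not satisfied:
            break  # nobody maxed out, so the whole budget was consumed
    return alloc

# e.g. drivetrain twice as important as arm and intake, 80 W to share:
allocate({"drive": 60.0, "arm": 30.0, "intake": 30.0},
         {"drive": 2.0, "arm": 1.0, "intake": 1.0},
         budget=80.0)
# -> drive gets 40 W, arm 20 W, intake 20 W
```

The nice property of this shape of algorithm is that when the budget is plentiful everyone simply gets what they asked for, and only under contention does importance start to matter.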

18 hours until the #FIRST #KickOff 2026.
I'm so pumped!
#frc #firstroboticscompetition #robotics #robots #firstinspires #firstrobotics
This amazing #firstRobotics team of exceptionally bright kids is confident they can succeed in the upcoming 2026 competition season. But they need some help to participate in two tournaments in #Minnesota.
I vouch for this incredible #STEM program called #FIRST that teaches kids not only how to build robots but also how to innovate, solve problems, manage a budget, communicate, and many other skills useful in real life.
This team is run by high school students, their parents, and volunteer mentors, with no financial support other than donations. Please consider #giving some help to these awesome kids this #thanksgiving season. Or just BOOST this post; that will help too. THANK YOU!
https://www.givemn.org/story/hjbcig
The recipient is a 501(c)(3) nonprofit. Donations are tax-deductible. The tax ID is in the profile on this #giveMN website.
One of the students on the FIRST robotics team I mentor asked, "What do you think is the most important skill or trait in becoming a successful computer programmer?" I wrote a paragraph about persistence, problem solving, and attention to detail, commenting that I'd choose problem solving if I could only pick one. I also immediately thought
Context: #FRCRobot, #AprilTags, Camera mounted on the #robot
Problem: Can we find the translation and rotation of the camera relative to the robot chassis programmatically, in a fully or at least partially automated way?
Idea: the #OpenCV library has a powerful calibration routine called "Hand-Eye Calibration" (cv2.calibrateHandEye), which can calculate the relationship between a "hand" and an "eye" in a setup where the hand holds the eye rigidly and moves relative to a "base". The camera (the eye) can see a target that is fixed relative to the base. Can we use it on our robot, with the "eye" rolling around a field that has a fixed AprilTag layout?
Let's set the terminology. The "eye" is the camera. The camera is mounted on the robot, so the robot is the "hand" holding the "eye". The robot can move across the field freely, and we can track this movement using the robot's odometry, so the field is the "base". And the "target", the AprilTag, is mounted on the field, i.e. on the base.
This hand-eye calibration function requires two sets of paired measurements as inputs: target-to-eye transformations and hand-to-base transformations, each a rotation matrix plus a translation vector. In our case the camera-to-target transform comes from #PhotonVision (OpenCV wants its inverse, target-to-camera), and the hand-to-base transform is the robot pose on the field, which we can get from drivetrain odometry. The calibration result is then the eye-to-hand, i.e. camera-to-robot, transformation; inverting it gives robot-to-camera.
Should work, right?