Applying Forward/Inverse Kinematics to Score with PPE By Elias and Alex
Task: Explain how we use kinematics to manipulate PixeLeviosa (our Outtake)
This year’s challenge led Iron Reign through a complex process of developing an efficient means of scoring, which involved the interaction between the intake and outtake. Currently, we are in the process of implementing Forward and Inverse Kinematics to find and alter the orientation of our outtake, known as PixeLeviosa.
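As a rough illustration of the math involved, here is a minimal forward/inverse kinematics sketch for a two-joint chain (the link lengths and joint names are placeholders, not PixeLeviosa's real geometry):

    // Placeholder sketch: forward kinematics for a shoulder + wrist chain.
    public class OuttakeKinematics {
        static final double SHOULDER_LEN = 0.30; // meters, placeholder
        static final double WRIST_LEN    = 0.12; // meters, placeholder

        // Forward kinematics: joint angles (radians) -> end-effector x, z, and pitch
        public static double[] forward(double shoulder, double wrist) {
            double x = SHOULDER_LEN * Math.cos(shoulder)
                     + WRIST_LEN * Math.cos(shoulder + wrist);
            double z = SHOULDER_LEN * Math.sin(shoulder)
                     + WRIST_LEN * Math.sin(shoulder + wrist);
            double pitch = shoulder + wrist; // orientation of the outtake
            return new double[]{x, z, pitch};
        }

        // Inverse problem for orientation only: given the shoulder angle and a desired
        // outtake pitch (e.g. parallel to the backdrop), find the wrist angle.
        public static double wristForPitch(double shoulder, double targetPitch) {
            return targetPitch - shoulder;
        }
    }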
Task: Explain how we map the field and use it in our code
A huge focus this season has been navigation; each scoring sequence spans the whole field, from the wing or stacks to the backdrop, and it’s crucial to make this happen as fast as possible to ensure high-scoring games. To do this, the robot needs deep knowledge of all of the components that make...
Task: Determine the robot’s position on the field with little to no error
With the rigging and the backdrops being introduced as huge obstacles to pathing this year, it’s absolutely crucial that the robot knows exactly where it is on the field. Because we have a Mecanum drivetrain, the included motor encoders slip too much for the fine-tuned pathing that we’d like to do. This year, we use...
UnderArm Inverse Kinematics By Jai, Vance, Alex, Aarav, and Krish
Task: Implement Inverse Kinematics on the UnderArm
Inverse kinematics is the process of finding the joint variables needed for a kinematic chain to reach a specified endpoint. In the context of TauBot2, inverse kinematics, or IK, is used in the crane and the UnderArm. In this blog post, we'll focus on the implementation of IK in the UnderArm.
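For a two-joint planar arm like the UnderArm, the textbook law-of-cosines solution looks like this (a generic sketch; the link lengths and sign conventions would need to match the actual arm):

    // Generic two-link planar inverse kinematics via the law of cosines.
    // L1 and L2 are placeholders, not the UnderArm's real dimensions.
    public class TwoLinkIK {
        /** Returns {shoulder, elbow} in radians for a target point (x, y). */
        public static double[] solve(double x, double y, double L1, double L2) {
            double d2 = x * x + y * y;
            double cosElbow = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2);
            cosElbow = Math.max(-1, Math.min(1, cosElbow));   // clamp for safety
            double elbow = Math.acos(cosElbow);               // "elbow-down" solution
            double shoulder = Math.atan2(y, x)
                    - Math.atan2(L2 * Math.sin(elbow), L1 + L2 * Math.cos(elbow));
            return new double[]{shoulder, elbow};
        }
    }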
This year, using FTC 6547's tutorial on AprilTags, we developed a system to detect which parking location we should park in. The AprilTag system allows the onboard camera to detect the sleeve at a distance, without it being directly in front of the camera, unlike other systems. It also requires much less processing than QR codes, which allows it to detect the sleeve...
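Once a tag is detected, the rest is just mapping its ID to a parking zone; a sketch with hypothetical tag IDs (the detection pipeline itself is omitted here):

    // Sketch of mapping a detected signal-sleeve tag ID to a parking zone.
    // The IDs 1/2/3 are hypothetical; the real sleeve may use different tags.
    public class SleeveParking {
        public enum ParkingZone { LEFT, MIDDLE, RIGHT }

        public static ParkingZone pickZone(int tagId) {
            switch (tagId) {
                case 1:  return ParkingZone.LEFT;
                case 2:  return ParkingZone.MIDDLE;
                case 3:  return ParkingZone.RIGHT;
                default: return ParkingZone.MIDDLE; // fall back if no tag was seen
            }
        }
    }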
It has been a very long time since we have reconsidered our auto paths. Between my last post and now, we have made numerous changes to both the hardware and the articulations. As a result, we should rethink the paths we used and optimize them for scoring. After testing multiple paths and observing other teams, I identified 3 auto paths we will try to perfect for championships.
It's competition time again, and that means updating our code. We have made quite a few changes to our robot in the past few weeks, so we needed to update our code to reflect those changes.
Unfortunately, because the robot build was completed very late, we did not have much time to code. That meant...
In post E-116, I showed all the big wheel articulations. As we shifted our robot to Icarus, we decided to change to a new set of articulations as they would work better to maintain the center of gravity of our robot. Once again, we made 5 major deployment modes. Each articulation is necessary to maintain the robot's center of gravity as its mode of operation shifts.
With the birth of Icarus came a new job for the programmers: supporting both BigWheel and Icarus. We needed the code to work both ways because new logic could be developed on BigWheel while the builders completed Icarus.
This was done by simply creating an Enum for the robot type and feeding it into PoseBigWheel initialization. This value was fed into all the subsystems so...
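In sketch form, the pattern looks something like this (the constructor details are simplified from our real code):

    // Sketch of the enum-and-constructor pattern; fields and checks are simplified.
    public enum RobotType { BIGWHEEL, ICARUS }

    public class PoseBigWheel {
        private final RobotType robotType;

        public PoseBigWheel(RobotType robotType) {
            this.robotType = robotType;
            // subsystems can branch on robotType where the two robots differ,
            // e.g. motor directions, gear ratios, or geometry constants
        }

        public boolean isIcarus() { return robotType == RobotType.ICARUS; }
    }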
At this point in the season, we have time to clean up our code before further development. This is important to do now so that the code remains understandable as we make many changes for worlds.
There aren't any new features that were added during these commits. In total, there were 12 files changed, 149 additions, and 253 deletions.
Task: Tweaking ftc_app to allow us to drive robots without a Driver Station phone
As you already know, Iron Reign has a mechanized cart called Cartbot that we bring to competitions. We used the FTC control system to build it, so we could gain experience. However, this has one issue: we can only use one pair of Robot Controller and Driver Station phones at a competition, because of WiFi interference problems.
While in motion, our robot shifts multiple major subsystems (the elbow and Superman), which makes it difficult to keep the robot from tipping. Therefore, through driver practice, we determined the 5 major deployment modes that would make it easier for the driver to transition from mode to mode. Each articulation is necessary to maintain the robot's center of gravity as its mode of...
Task: Design a state machine class to make autonomous easier
In the past, our autonomous routines were tedious and difficult to change. Adding one step to the beginning of an autonomous would require changing the indexes of every single step afterwards, which could take a long time depending on the size of the routine. In addition, simple typos could go undetected and cause lots of problems. Finally, there was so much repetitive code, making our...
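The replacement we had in mind looks roughly like this: a list of stages run in order, each one a lambda that reports when it's done, so inserting a step no longer shifts any indexes. A minimal sketch (not our final class):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.BooleanSupplier;

    // Sequential state machine sketch: each stage returns true when finished.
    public class StateMachine {
        private final List<BooleanSupplier> stages = new ArrayList<>();
        private int index = 0;

        public StateMachine addStage(BooleanSupplier stage) {
            stages.add(stage);
            return this; // allows chaining: .addStage(...).addStage(...)
        }

        /** Call once per loop; returns true when every stage has completed. */
        public boolean execute() {
            if (index >= stages.size()) return true;
            if (stages.get(index).getAsBoolean()) index++;
            return index >= stages.size();
        }
    }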
The picture above is a representation of our work today. After making sure all the manual drive controls were working, Karina found the positions she preferred for intake, deposit, and latch. Taking these encoder values from telemetry, we created new methods for the robot to run to those positions. As a result, the robot was very functional. We could latch onto the lander in 10 seconds (a much faster endgame than...
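The pattern behind those preset methods is the standard run-to-position mode on the motor controller; a sketch with placeholder tick counts (not Karina's actual measured values):

    import com.qualcomm.robotcore.hardware.DcMotor;

    // Sketch of preset-position methods built from measured encoder values.
    public class ElevatorPresets {
        public static final int INTAKE_TICKS  = 0;    // placeholder
        public static final int DEPOSIT_TICKS = 2200; // placeholder
        public static final int LATCH_TICKS   = 3400; // placeholder

        public static void runToPreset(DcMotor elevator, int targetTicks) {
            elevator.setTargetPosition(targetTicks);
            elevator.setMode(DcMotor.RunMode.RUN_TO_POSITION);
            elevator.setPower(0.8); // controller drives to and holds the target
        }
    }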
While at the Wylie qualifier, we had to make many changes because our robot broke the night before.
The first change was adding the belt code. Previously, we had relied on gravity and the polycarb locks we had on the slides, but we quickly realized that the slides needed to articulate in order to preserve Superman. As a result, we added the belts into our...
Task: Detail last-minute code changes to autonomous
It is almost time for competition, and with that comes a super duper autonomous. For the past couple of weeks and today, we focused on making our depot side work consistently. Because our robot wasn't fully built, we couldn't do auto-delatching. Today, we integrated our vision pipelines into the auto and tested all the paths with vision. They seemed to work at home base, but the field we...
One of our priorities this season was our autonomous, as a perfect autonomous could score us a considerable amount of points. A large portion of these points come from sampling, so that was one of our main focuses within autonomous. Throughout the season, we developed a few different approaches to sampling.
Early on in the season, we began experimenting with using a Convolutional Neural Network to detect...
Today, we implemented our first autonomous path. Since we still didn't have complete vision software, we made these paths manually so we could integrate vision later without issues. Here are videos of all of the paths. For the sake of debugging, the bot stops after turning towards the crater, but in reality it will drive and park in the far crater. These paths will help us score highly...
Code Post-Mortem after Conrad Qualifier By Arjun and Abhi
Task: Analyze code failure at Conrad Qualifier
Iron Reign has been working hard on our robot, but despite that, we did not perform well, largely owing to our autonomous.
Our autonomous plan was fairly simple: perform sampling, deploy the team marker, then drive to the crater to park. We planned to use the built-in TensorFlow object detection for our sampling, and thus assumed that our autonomous would be fairly easy.
FTC released new code to support TensorFlow and automatically detect minerals with the model they trained. Unfortunately, all of our CNN work was undercut by this update. The silver lining is that we have done enough research into how CNNs work that it will allow us to understand the mind of the FTC app better. In addition, we may retrain this model if we feel it doesn't work well. But now, it...
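Using the SDK's detector boils down to polling its recognitions each loop and looking for the gold mineral label; a sketch along the lines of the SDK's sample (assuming the detector has already been created and activated):

    import java.util.List;
    import org.firstinspires.ftc.robotcore.external.tfod.Recognition;
    import org.firstinspires.ftc.robotcore.external.tfod.TFObjectDetector;

    // Sketch of polling the SDK's TensorFlow detector for the gold mineral.
    public class SamplingHelper {
        public static final String LABEL_GOLD = "Gold Mineral";

        /** Returns the horizontal pixel position of the gold mineral, or -1 if not seen. */
        public static float findGold(TFObjectDetector tfod) {
            List<Recognition> recognitions = tfod.getUpdatedRecognitions();
            if (recognitions == null) return -1;
            for (Recognition r : recognitions) {
                if (LABEL_GOLD.equals(r.getLabel())) {
                    return (r.getLeft() + r.getRight()) / 2; // center x in pixels
                }
            }
            return -1;
        }
    }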
Historically, Iron Reign has used a class called "Pose" to control all the hardware mapping of our robot instead of putting it directly into our opmodes. This has created cleaner code and smoother integration with our crazy functions. However, we have used the same Pose for the past two years, since both robots had an almost identical drive base. Since there wasn't a viable differential drive Pose in...
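To illustrate the pattern (only the hardware-mapping side, and with hypothetical device names rather than our real configuration):

    import com.qualcomm.robotcore.hardware.DcMotor;
    import com.qualcomm.robotcore.hardware.HardwareMap;

    // Sketch of the "Pose owns the hardware" pattern; device names are hypothetical.
    public class Pose {
        private DcMotor driveLeft, driveRight;

        public void init(HardwareMap hwMap) {
            driveLeft  = hwMap.get(DcMotor.class, "driveLeft");
            driveRight = hwMap.get(DcMotor.class, "driveRight");
            driveLeft.setDirection(DcMotor.Direction.REVERSE);
            // opmodes then call high-level Pose methods instead of touching motors directly
        }

        public void driveTank(double left, double right) {
            driveLeft.setPower(left);
            driveRight.setPower(right);
        }
    }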
Task: Upgrade our code to the latest version of the FTC SDK
FTC recently released version 4.0 of their SDK, with initial support for external cameras, better PIDF motor control, improved wireless connectivity, new sensors, and other general improvements. Our code was based on last year's SDK version 3.7, so we needed to merge the new SDK with our repository.
The merge was slightly difficult, as there were some issues with...
Task: Design a program to record and replay a driver run
One of the difficulties in writing an autonomous program is the long development cycle. We have to unplug the robot controller, plug it into a computer, make a few changes to the code, recompile and download the code, and then retest our program. All this must be done over and over again, until the...
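The rough idea is to sample the driver's inputs every loop, write them to a file, and play the same file back later. A minimal sketch (the file format and class names here are hypothetical):

    import java.io.*;
    import java.util.*;

    // Sketch of record/replay: log driver inputs once per loop with a timestamp,
    // then a "replay" opmode feeds the same values back into the drive code.
    public class DriveRecorder {
        private final List<String> log = new ArrayList<>();

        public void record(long timeMs, double forward, double strafe, double turn) {
            log.add(timeMs + "," + forward + "," + strafe + "," + turn);
        }

        public void save(File file) throws IOException {
            try (PrintWriter out = new PrintWriter(new FileWriter(file))) {
                for (String line : log) out.println(line);
            }
        }

        public static List<double[]> load(File file) throws IOException {
            List<double[]> samples = new ArrayList<>();
            try (BufferedReader in = new BufferedReader(new FileReader(file))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] p = line.split(",");
                    samples.add(new double[]{Double.parseDouble(p[0]), Double.parseDouble(p[1]),
                            Double.parseDouble(p[2]), Double.parseDouble(p[3])});
                }
            }
            return samples; // replayed in order by timestamp
        }
    }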
During Relic Recovery season, we had many problems with our autonomous due to slippage in the mecanum wheels and our need to align to the balancing stone, both of which created high error in our encoder feedback. To address this recurring issue, we searched for an alternative way to identify our position on the field. Upon researching online and discussing with other teams, we discovered an...
Autonomous Updates, Multiglyph Part 2 By Abhi, Karina, and Tycho
Task: Develop multiglyph for far Stone
We had a functional autonomous for the balancing stone close to the audience. However, chances are that our alliance partner would want that same stone, since they could get more glyphs during autonomous. This meant that we needed a multiglyph autonomous for the far balancing stone. We went on an adventure to make this happen.
At super regionals, we saw all the good teams had multi-glyph autonomi. In fact, Viperbots Hydra, the winning alliance captain, had a 3-glyph autonomous. I believed Iron Reign could get some of this 100-point autonomous action, so I sat down to create a 2-glyph autonomous. We now have 3 autonomi, one of which is multiglyph.
As explained in a previous post, we were having many issues with our git commits and with fixing the errors in them. After resolving a lot of merge conflicts, we had to fix all the commits without knowing exactly what had changed in the code. Part of the reason this was so hard was our lack of good naming conventions. Though we always try to make a title and good...
At this point, we are training the next generation of drivers on our team, and since we have so many buttons with so many different functions, it can often become difficult for the new drivers to determine which button does what, so Karina and I created a map of the controller. By doing this, we not only assist others in determining what button they need to press to do...
Task: Implement a drive system depending on field perspective
We are always looking for ways to make the robot easier to drive. One way to do that is to modify our code such that no matter where the front of the robot is pointing, moving the joystick in a certain direction will move the entire robot in that direction. This allows our drivers to think only about the field and align with the cryptobox more easily. I
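The core of field-oriented driving is rotating the driver's joystick vector by the negative of the robot's heading before mixing it into motor powers; a minimal sketch:

    // Field-oriented drive sketch: convert a field-frame joystick vector into the
    // robot frame using the current heading (e.g. from the IMU).
    public class FieldCentric {
        /** heading in radians, counterclockwise positive; returns {forward, strafe} in robot frame. */
        public static double[] toRobotFrame(double fieldForward, double fieldStrafe, double heading) {
            double cos = Math.cos(-heading);
            double sin = Math.sin(-heading);
            double forward = fieldForward * cos - fieldStrafe * sin;
            double strafe  = fieldForward * sin + fieldStrafe * cos;
            return new double[]{forward, strafe};
        }
    }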
So, we can't include all the code changes we made today, but all of them involved cleaning up our code: removing extra functions we didn't use, refactoring, adding comments, and making it more readable for the tournament. We had almost 80k deletions and 80k additions. This marks a turning point in the readability of our code so that less experienced team members can read it...
These commits improve quality of life for our drivers, let our robot function more smoothly both in autonomous and during TeleOp, allow us to score the jewels, and let us test servos.
We attempted to create an autonomous for our first scrimmage. It aimed to make the robot drive forward into the safe zone. However, we forgot to align the robot, and it failed at the scrimmage.
Instead of talking about the code like usual, the code's main functions are well documented so that anyone can understand them without prior knowledge of...
We’ve been using machine vision for a couple of years now and have a plan to use it in Relic Recovery for a number of things. I mostly haven’t gotten to it because college application deadlines have a higher priority for me this year. But since we already have experience with color blob tracking in OpenCV and Vuforia tracking, I hope this won’t be too difficult. We have 5 different things we want to try:
REV Robot Reveal By Tycho, Austin, Charlotte, Omar, Evan, and Janavi
Argos V2 - a REV Robot Reveal
This video was pulled from Argos visits to: the NSTA STEM Expo in Kissimmee, FL; the path of eclipse totality in Tennessee; and, in North Texas, the Dallas Makerspace, the Southwest Center Mall, Southside on Lamar, and the Frontiers of Flight Museum. We hope you find it interesting:
Task: Test and improve the PID system and balance code
We're currently testing code to give Argos a balancing system so that we can demo it. This is also a test for the PID in the new REV robotics expansion hubs, which we plan on switching to for this season if reliable. Example code is below.
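The actual snippet isn't reproduced in this excerpt, but the balancing loop is essentially a pitch-hold PID; a stand-in sketch with placeholder gains (not our tuned values), assuming a pitch angle read from the IMU:

    // Stand-in sketch of a pitch-balancing PID loop; gains are placeholders.
    public class BalancePid {
        private double kP = 0.03, kI = 0.0, kD = 0.002; // placeholder gains
        private double integral, lastError;

        /** pitch and targetPitch in degrees, dt in seconds; returns a motor power. */
        public double update(double targetPitch, double pitch, double dt) {
            double error = targetPitch - pitch;
            integral += error * dt;
            double derivative = (error - lastError) / dt;
            lastError = error;
            double power = kP * error + kI * integral + kD * derivative;
            return Math.max(-1, Math.min(1, power)); // clamp to motor power range
        }
    }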
Last year, we had some experience with OpenCV to press the beacons, and this year we decided to do the same. We use OpenCV to find the color we are looking for on the beacon in conjunction with Vuforia. First, it detects the search pattern in the view with Vuforia, then isolates that area and finds the side of the beacon with the correct color. Our code is...
We use Vuforia and OpenCV vision to autonomously drive our robot to the beacon and then click the button corresponding to our team's colour. We started this by getting the robot to recognize the image below the beacon and keep it within its line of vision. Vuforia uses the phone's camera to inspect its surroundings and to locate target images. When images are located, Vuforia...
Inspire Award By Tycho, Jayesh, Lin, Omar, Max, Darshan, Evan, Ethan, Janavi, and Charlotte
1st Place at North Texas Regional Championship
Iron Reign members left to right are Ethan Helfman (Build, Communications), Janavi Chada (Programming, Communications), Tycho Virani (Programming Lead, Main Driver), Jayesh Sharma (Business Lead, Build, Communications), Darshan Patel (Build), Lin Rogers (Communications Lead, Logistics, Business) and Charlotte Leakey (Programming, Logistics), with Evan Daane (from BTW, Build, Photography) in repose. Not shown: Max Virani (Design Lead, Programming), Omar...
A year and a half ago, while the new Android-based platform was still in pre-launch, we were the first team to share a machine vision testbed on the FTC Forums. That color-blob tracker was implemented with OpenCV on Android, but with a different low-level control system and robotics framework. Then we integrated OpenCV into our implementation of ftc_app, which was in turn based on the great...
This shows a test of our encoder issues. It might have been a month ago that we noticed a strange behavior in our autonomous code when the robot was moving forward at low speed. It would curve to the right when we were telling it to go straight. We probably would have noticed the problem...
Today, I combined the autonomous and teleop so that we can demo both more easily. As well, during testing, we can now switch between them seamlessly, which makes testing faster. The most important part of this code is that we can configure the autonomous before we launch: telling the robot how many balls we have, how many to shoot, what side the robot is on, and...
Autonomous is one of the things that we tend to be weak on every year, and this year, we really want to get to super-regionals. So, to start off this year's autonomous, we first mapped out a potential path for the robot on the field. We then followed up with programming, using our previous methods like driveForward and driveCrab. So now, we have a basic autonomous program in which we can...
Today, I wrote the whole code for controlling our mecanum wheels. It is entirely from scratch, and it works perfectly right off the bat. This code allows us to strafe, move backwards and forwards, and rotate, all in one method.
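For reference, the standard mecanum mixing looks roughly like this (motor signs depend on wiring, so this is a sketch rather than our exact method):

    // Sketch of a single mecanum mixing method: forward, strafe, and rotate are each -1..1.
    public class MecanumDrive {
        /** Returns powers in the order {frontLeft, frontRight, backLeft, backRight}. */
        public static double[] mix(double forward, double strafe, double rotate) {
            double fl = forward + strafe + rotate;
            double fr = forward - strafe - rotate;
            double bl = forward - strafe + rotate;
            double br = forward + strafe - rotate;
            // normalize so no wheel power exceeds 1.0
            double max = Math.max(1.0, Math.max(Math.abs(fl),
                    Math.max(Math.abs(fr), Math.max(Math.abs(bl), Math.abs(br)))));
            return new double[]{fl / max, fr / max, bl / max, br / max};
        }
    }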
Reflections
We still have a lot of coding to do, as we're currently working on a particle-launching system. As well, we need to consider autonomous soon.
Programming our New Robot By Tycho, Lin, Ethan, and Jayesh
Task: Program our new mecanum wheel driving platform
Now that our new robot has been built with a mecanum wheel platform, we can start writing our drive code and figure out how to make our robot perform three basic motions: forwards and backwards, side-to-side, and rotation. We decided that, in order to get the best understanding of our robot,...
Task: Create a list of motors/servos and what ports they're connected to
Very often, when we disconnect a motor or servo (sometimes by accident), we forget which port it was plugged into. This even happened to us today when we unplugged the servo that lifts the trough. Because of this hassle, we decided to write out a list of all the motors and servos on our robot and what...
A Presentation for the Ages By Ethan, Jayesh, Max, Tycho, Lin, Omar, Evan, Alisa
Task: Work on our presentation to the judges
Our main weakness in previous years had been our presentation. This year, we plan to change that. When our team was solely FLL, we practiced our presentations beforehand, so we're applying that experience this year. We've done 2-3 presentations so far, and it seems to really help us. As well, we're making a PowerPoint presentation to assist us, giving us information we might forget and providing a visual...
Functions of our controller By Alisa, Ethan, Trace
Task: Listing out the functions of our game controller
In order to find a button for our Argos-mode drive, we made a rough draft of our game controller listing the functions of each button. For example, the 'X' button is for the churro climb, the 'A' button stops our beater, etc. We had about 5 unused buttons, so in the end we decided on using the top left button for Argos-mode drive. Now that...
Task: Implement color blob detection on the robot for detecting beacons
Even though this was very long overdue, we now have a way to turn towards a certain color, similar to what Argos does. What we're basically doing is calculating the angle that the robot is off-center of the blob it's tracking, and letting it correct for the error and straighten up with our PID code that's already in place. Right now, it doesn't actually move backwards and forwards, it only turns on its axis,...
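Conceptually, the correction is just converting the blob's pixel offset into a bearing error and handing that to the existing turn PID; a sketch with placeholder camera constants:

    // Sketch: convert a blob's pixel offset into a heading correction for the turn PID.
    public class BlobAim {
        static final double CAMERA_FOV_DEG = 60.0; // placeholder horizontal field of view
        static final int IMAGE_WIDTH_PX = 640;     // placeholder frame width

        /** Degrees the robot is off-center from the blob (positive = blob to the right). */
        public static double bearingError(double blobCenterX) {
            double offsetPx = blobCenterX - IMAGE_WIDTH_PX / 2.0;
            return offsetPx / IMAGE_WIDTH_PX * CAMERA_FOV_DEG;
        }
        // the result is added to the current heading and handed to the turn PID as a setpoint
    }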
Scrimmage at Greenhill By Darshan, Alisa, Omar, Lin, Max, Tycho, Evan
Task: Practice with other teams and see
This past Friday, the team tried to get the robot working in a small scrimmage with seven or so other teams. At the scrimmage, we managed to get our cow-catcher working, even though we nearly burned out our servo. During the process of finding the right positioning and testing it while driving, we managed to tear up quite a few rubber inserts on our treads. Even though the scrim was scheduled to...
Today we continued working on the code for the first thirty seconds of the match, known as autonomous. In this period, we hope to reach the other side of the field towards the Res-Q beacon, dump our climbers, and then try to go as high as we can up the mountain. We coded this using our IronDem and Pose classes to calculate the range and angle of our current...
Task: To try out the robot in a competition setting
Last weekend, Darshan and I tried to drive the robot in a small scrimmage with eight other teams. Even though we hit a few bumps during the competition, we were able to see how the robot drove and handled. We also saw what we could do easily and what was hard for us. Among the bumps was an incident involving a tread falling off the track because we hit...
Meeting other Teams at the Scrimmage By Lin, Omar, Darshan, Jayesh, Tycho, Max, Evan
Task: Get a feel for where we are and the progress of other teams
Whether or not we were allowed to compete, we felt that it would be worthwhile to go to the scrimmage, if only to see how we compared to other teams. Climbing the mountain is the hardest mechanical design challenge so far, and we were able to see other teams' solutions.
Reflections
Many teams had treads like ours, with different arrangements of idlers, while...
Starting up our Autonomous Code By Jayesh, Tycho, Max
Task: Get started on autonomous coding
We spent some of the time at the scrimmage today starting up our autonomous code in Android Studio. The main focus of our coding was climbing the ramp, which is the robot's main function while the rest of the hardware isn't in place. The code centers around PID and Pose: Pose helps track our position on the field, and PID helps us hold our heading.
Task: Create a rough draft of the code for the Pose class
Starting just a few weeks into the competition year, we thought up ideas for a system of classes to allow the robot to navigate to a location on the field on its own, including the Pose class, which we would use to keep tabs on where the robot was on the field at any particular time. With this, it would be possible to...
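In rough form, the Pose idea looks something like this (a stripped-down sketch; the real class would track more state and use better odometry math):

    // Stripped-down sketch of a Pose class: field position and heading, updated each loop.
    public class Pose {
        private double x, y;       // field position (e.g. meters)
        private double heading;    // radians, counterclockwise positive

        public Pose(double x, double y, double heading) {
            this.x = x; this.y = y; this.heading = heading;
        }

        /** Dead-reckoning update from distance traveled and the current gyro heading. */
        public void update(double distance, double newHeading) {
            heading = newHeading;
            x += distance * Math.cos(heading);
            y += distance * Math.sin(heading);
        }

        public double getX() { return x; }
        public double getY() { return y; }
        public double getHeading() { return heading; }
    }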
Coding for Autonomous Pilot Program By Tycho and Dylan
Task: Program Pilot Code
Our task was to begin programming the first version of the robot's Pilot class, using the angle, position, and controller classes we created. The PID controller class used part of the controller code given to us from the repository, but it was recalibrated to be in terms of time and independent of everything else. We need to integrate this into our Pilot class, which we will use to tie all the programs...
Before-Scrimmage Basic Tests By Lin, Tycho, Omar, Max, Darshan
Task: Make sure the robot can run
There's a scrimmage next Saturday that we may or may not be going to, based on Dallas ISD technicalities with our team. Our goals for today were to get a basic autonomous going to dump climbers, and maybe some mountain climbing tests. However, the controller apps needed to be updated, and we got a lot of errors when connecting.
Reflections
If the controller gives an error mentioning "USB UART" apparently the only fix is to...
Labor Day Meeting By Lin, Jayesh, Darshan, Alisa, Omar, Max, Tycho
Task: Learn about OpModes
Today, we reviewed opmodes in the FTC API and how to register new opmodes. We also learned about the differences between regular, linear and synchronous (from SwerveRobotics) opmodes.
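For reference, a minimal linear opmode looks something like this (in today's SDK the @TeleOp annotation handles registration; at the time, opmodes were registered through a register class instead):

    import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp;

    // Minimal LinearOpMode shown for illustration only.
    @TeleOp(name = "MinimalOpMode")
    public class MinimalOpMode extends LinearOpMode {
        @Override
        public void runOpMode() {
            waitForStart();                 // block until the driver presses play
            while (opModeIsActive()) {
                telemetry.addData("status", "running");
                telemetry.update();
            }
        }
    }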
Task: Review Blogs/Journal
Our Blog == our Journal. What we also did today was quickly review how to create blog entries so we can do that more frequently this year. We also tried to catch up on our lengthy backlog of journal articles. We were very busy...
Task: Determine necessities of this year's field navigation code
Today was our very first meet of the year, and as such, not too much got done other than planning and experimentation. For the greater part of the meeting, I worked on concepts for an overhauled field navigation system (Mostly for autonomous, but possibly for tele-op as well). This would include the robot knowing where it is on the field when it starts, the field's and its own dimensions, the locations...
Overview of new hardware and software By Max, Tycho, Lin, Alisa, Ethan, Trace
Task: Getting a first glimpse at the new motors and controllers with Imperial robotics
We continued our meeting at the Dallas Makerspace after the GitHub tutorial. We unpackaged the new motors and motor controllers for the first time and took out the phones to scan the hardware configuration. We added Anderson Power poles to some of the motors. Lin taught Alisa and Ethan how to crimp wires and attach the power poles.
Reflections
Scanning the hardware configuration was very troublesome...
PID is useful in many ways, such as setting an arm to a certain position, going straight based on a gyro, or following a line. There are three parts to a PID control program: Proportional, Integral, and Derivative.
Proportional: How much error is there? The proportional part of PID is useful because it tells you how far away you are from your target value, so you know how fast you should go to correct it.