Articles by tag: control

    Swerve Drive Experiment

    Swerve Drive Experiment By Abhi

    Task: Consider a Swerve Drive base

    Last season, we saw many robots that used a swerve drive rather than a mecanum drive for omnidirectional movement. To expand Iron Reign's repertoire of drive bases, I wanted to investigate this chassis further. Swerve was considered as an alternative to mecanum because its traction wheels on pivoting modules offer more speed while keeping the maneuverability needed for quick scoring. Before we could consider making a prototype, we investigated several existing examples.

    Among the examples considered was the PRINT swerve for FTC by team 9773. After reading their detailed assembly instructions, I moved away from their design for several reasons. First, the drive train was very expensive; we did not have a large budget despite help from our sponsors, and if this drive train proved non-functional or the chassis didn't make sense for Rover Ruckus, we would have almost no money left for an alternative. Also, the parts used by 9773 involved X-rail rather than extrusion rail from REV, which would cause problems later since we would need to redesign the REVolution system for X-rail.

    Another example, from team 9048, appeared more feasible because it used REV rail and many 3D-printed parts. Since they didn't publish a parts list, we had to build a rough cost estimate from the REV and AndyMark websites. Upon further analysis, we realized that the cost, though cheaper than 9773's chassis, would still consume a considerable chunk of our budget.

    At this point it was evident that most swerve drives in use are very expensive. Wary of making this investment, I worked with our sister team 3734 to create a budget swerve from materials around the house. A basic sketch is shown below.

    Next Steps

    Scavenge for parts in the house and Robodojo to make swerve modules.

    Swerve Drive Prototype

    Swerve Drive Prototype By Abhi and Christian

    Task: Build a Swerve Drive base

    Over the past week, I worked with Christian and another member of Imperial to prototype a drive train. Due to limited resources, we decided to use Tetrix parts since we had an abundance of those. We designed the swerve so that a servo would turn each module and the motors would be attached directly to the wheels.

    Immediately we noticed it was very feeble. The servos worked very hard to turn the heavy modules, the motors had trouble staying aligned, and programming the chassis was also a challenge. After further experimenting, the base broke outright. This was a moment of realization: not only was swerve expensive and complicated, we would also need to be able to replace a module quickly at competition, which demanded more resources and an immaculate design. With all these considerations, I ultimately decided that swerve wasn't worth using as a drive chassis at this time.

    Next Steps

    Consider and prototype other chassis designs until Rover Ruckus begins.

    Position Tracking

    Position Tracking By Abhi

    Task: Design a way to track the robot's location

    During Relic Recovery season, we had many problems with our autonomous due to slippage in the mecanum wheels and our need to align to the balancing stone, both of which created high error in our encoder feedback. To address this recurring issue, we searched for an alternative way to identify our position on the field. Upon researching online and discussing with other teams, we discovered an alternative tracker sensor with unpowered omni wheels. This tracker may be used during Rover Ruckus or beyond depending on what our chassis will be.

    We designed the tracker by building a small right-angle REV rail assembly. On this, we attached 2 omni wheels at 90 degrees to one another and added axle encoders. The omni wheels were not driven because we simply wanted them to glide along the floor and read the encoder values of the movements. This method of tracking is commonly referred to as "dead wheel tracking". Since the omnis always touch the ground, any robot movement registers on their encoders, and the readings are unaffected by defense or drive-wheel slippage.

    To test the concept, we attached the apparatus to ARGOS. After upgrading the ARGOS code to use the IMU and the omni wheels, we added some basic trigonometry to track position accurately. The omni setup was relatively accurate and may be used for future projects and robots.
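
    The math is essentially a rotation of the encoder deltas into field coordinates. Below is a minimal sketch of that idea, assuming two unpowered omni encoders mounted at 90 degrees and an IMU heading; the class name and tick constant are illustrative rather than our actual ARGOS code.

    // Minimal dead-wheel tracking sketch: two unpowered omni encoders mounted at 90
    // degrees plus an IMU heading. Class name and tick constant are illustrative.
    public class DeadWheelTracker {
        private static final double TICKS_PER_CM = 13.7; // assumed encoder resolution

        private double x, y;                  // field position in cm
        private int lastForward, lastStrafe;  // previous encoder readings

        /** Call every loop with fresh encoder counts and the IMU heading in radians. */
        public void update(int forwardTicks, int strafeTicks, double headingRad) {
            double dForward = (forwardTicks - lastForward) / TICKS_PER_CM;
            double dStrafe  = (strafeTicks  - lastStrafe)  / TICKS_PER_CM;
            lastForward = forwardTicks;
            lastStrafe  = strafeTicks;

            // Rotate the robot-relative deltas into field coordinates using the IMU heading.
            x += dForward * Math.cos(headingRad) - dStrafe * Math.sin(headingRad);
            y += dForward * Math.sin(headingRad) + dStrafe * Math.cos(headingRad);
        }

        public double getX() { return x; }
        public double getY() { return y; }
    }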

    Next Steps

    Now that we have a prototype to track position without using too many resources, we need to test it on an actual FTC chassis. Depending on whether or not there is terrain in Rover Ruckus, the use of this system will change. Until then, we can still experiment with this and develop a useful multipurpose sensor.

    Replay Autonomous

    Replay Autonomous By Arjun

    Task: Design a program to record and replay a driver run

    One of the difficulties in writing an autonomous program is the long development cycle. We have to unplug the robot controller, plug it into a computer, make a few changes to the code, recompile and download the code, and then retest our program. All this must be done over and over again, until the autonomous is perfected. Each autonomous takes ~4 hours to write and tune. Over the entire season, we spend over 40 hours working on autonomous programs.

    One possible solution for this is to record a driver running through the autonomous, and then replay it. I used this solution on my previous robotics team. Since we had no access to a field, we had to write our entire autonomous at a competition. After some brainstorming, we decided to write a program to record our driver as he ran through our autonomous routine and then execute it during a match. It worked very well, and got us a few extra points each match.

    Using this program, writing an autonomous program is reduced to a matter of minutes. We just need to run through our autonomous routine a few times until we're happy with it, and then take the data from the console and paste it into our program. Then we recompile the program and run it.

    There are two parts to our replay program. One part (a Tele-op Opmode) records the driver's motions and outputs it into the Android console. The next part (an Autonomous Opmode) reads in that data, and turns it into a working autonomous program.
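
    As a rough illustration of the replay half (not our exact opmodes), the sketch below plays back a hard-coded table of drive powers at a fixed interval; the recording tele-op would print one such row per loop for us to paste in. The motor names, loop interval, and sample values are hypothetical.

    import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode;
    import com.qualcomm.robotcore.hardware.DcMotor;

    // Sketch of the replay half: play back a recorded table of drive powers at a fixed
    // interval. The recording tele-op would print one {left, right} row per loop for us
    // to paste into POWERS. Motor names, interval, and sample values are hypothetical.
    public class ReplayAutonomous extends LinearOpMode {
        private static final double[][] POWERS = {
                {0.50, 0.50},
                {0.50, 0.45},
                {0.00, 0.00},
        };
        private static final long LOOP_MS = 50; // assumed recording interval

        @Override
        public void runOpMode() throws InterruptedException {
            DcMotor left = hardwareMap.get(DcMotor.class, "leftDrive");   // hypothetical config names
            DcMotor right = hardwareMap.get(DcMotor.class, "rightDrive");
            waitForStart();
            for (double[] frame : POWERS) {
                if (!opModeIsActive()) break;
                left.setPower(frame[0]);
                right.setPower(frame[1]);
                sleep(LOOP_MS); // replay at the same rate the data was recorded
            }
            left.setPower(0);
            right.setPower(0);
        }
    }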

    Next Steps

    Our current replay program requires one recompilation. While it is very quick, one possible next step is to save the autonomous data straight into the phone's internal memory, so that we do not have to recompile the program. This could further reduce the time required to create an autonomous.

    One more next step could be a way to easily edit the autonomous. The output data is just a big list of numbers, and it is very difficult to edit it. If we need to tune the autonomous due to wear and tear on the robot, it is difficult to do so without rerecording. If we can figure out a mechanism for editing the generated autonomous, we can further reduce the time we spend creating autonomous programs.

    Rover Ruckus Brainstorming & Initial Thoughts

    Rover Ruckus Brainstorming & Initial Thoughts By Ethan, Charlotte, Kenna, Evan, Abhi, Arjun, Karina, and Justin

    Task: Come up with ideas for the 2018-19 season

    So, today was the first meeting in the Rover Ruckus season! On top of that, we had our first round of new recruits (20!). So, it was an extremely hectic session, but we came up with a lot of new ideas.

    Building

    • A One-way Intake System

    • This suggestion uses a plastic flap to "trap" game elements inside it, similar to the lid of a soda cup. You can put marbles through the straw-hole, but you can't easily get them back out.
    • Crater Bracing
    • In the past, we've had center-of-balance issues with our robot. To counteract this, we plan to attach shaped braces to our robot such that it can hold on to the walls and not tip over.
    • Extendable Arm + Silicone Grip

    • This one is simple - a linear slide arm attached to a motor so that it can pick up game elements and rotate. We fear, however, that many teams will adopt this strategy, so we probably won't do it. One unique part of our design would be the silicone grips, so that the "claws" can firmly grasp the silver and gold.
    • Binder-ring Hanger

    • When we did Res-Q, we dropped our robot more times than we'd like to admit. To prevent that, we're designing an interlocking mechanism that the robot can use to hang. It'll have an indent and a corresponding recess that resists lateral force by nature of the indent, but can be opened easily.
    • Passive Intake
    • Inspired by a few FRC Stronghold intake systems, we designed a passive intake. Attached to a weak spring, it would have the ability to move over game elements before falling back down to capture them. The benefit of this design is that we wouldn't have to use an extra motor for intake, but we risk controlling more than two elements at the same time.
    • Mecanum
    • Mecanum is our Ol' Faithful. We've used it for the past three years, so we're loath to abandon it this year. It's still a good option, but strafing isn't as important this season, and we may need to emphasize speed instead. Plus, we're not exactly sure how to get over the crater walls with mecanum.
    • Tape Measure
    • In Res-Q, we used a tape-measure system to pull our robot up, and we believe that we could do the same again this year. One issue is that our tape measure system is ridiculously heavy (~5 lbs) and with the new weight limits, this may not be ideal.
    • Mining
    • We're currently thinking of a "mining mechanism" that can score two minerals at a time extremely quickly in exchange for not being able to climb. It'll involve a conveyor belt and a set of linear slides such that the objects in the crater can automatically be transferred to either the low-scoring zone or the higher one.

    Journal

    This year, we may switch to weekly summaries instead of meeting logs so that our journal is more reasonable for judges to read. In particular, we were inspired by team Nonstandard Deviation, which has an amazing engineering journal that we recommend readers check out.

    Programming

    Luckily, this year seems to have a more-easily programmed autonomous. We're working on some autonomous diagrams that we'll release in the next couple weeks. Aside from that, we have such a developed code base that we don't really need to update it any further.

    Next Steps

    We're going to prototype these ideas in the coming weeks and develop our thoughts more thoroughly.

    Vision Discussion

    Vision Discussion By Arjun and Abhi

    Task: Consider potential vision approaches for sampling

    Part of this year’s game requires us to be able to detect the location of minerals on the field. The main use for this is in sampling. During autonomous, we need to move only the gold mineral, without touching the silver minerals, in order to earn points for sampling. There are a few ways we could detect the location of the gold mineral.

    First, we could use OpenCV to run transformations on the image that the camera sees. We would have to design an OpenCV pipeline which identifies yellow blobs, filters out those that aren’t minerals, and finds the centers of the blobs which are minerals. This is most likely the approach that many teams will use. The benefit of this approach is that it is easy enough to write. However, it may not work in lighting conditions that weren't tested while designing the pipeline.
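
    As an illustration of the kind of pipeline we mean, the sketch below thresholds an image in HSV for yellow, finds contours, and returns the center of the largest blob. The HSV bounds are placeholders that would need tuning, and this is not our final GRIP-generated pipeline.

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    import java.util.ArrayList;
    import java.util.List;

    // Sketch of a yellow-blob pipeline: threshold in HSV, find contours, return the center
    // of the largest blob. HSV bounds are rough placeholders that would need tuning.
    public class GoldBlobFinder {
        public static Point findGoldCenter(Mat rgbFrame) {
            Mat hsv = new Mat();
            Imgproc.cvtColor(rgbFrame, hsv, Imgproc.COLOR_RGB2HSV);

            // Keep only yellow-ish pixels.
            Mat mask = new Mat();
            Core.inRange(hsv, new Scalar(15, 100, 100), new Scalar(35, 255, 255), mask);

            // Find blobs and keep the largest one.
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
            MatOfPoint largest = null;
            double largestArea = 0;
            for (MatOfPoint c : contours) {
                double area = Imgproc.contourArea(c);
                if (area > largestArea) { largestArea = area; largest = c; }
            }
            if (largest == null) return null; // no yellow blob found

            Rect box = Imgproc.boundingRect(largest);
            return new Point(box.x + box.width / 2.0, box.y + box.height / 2.0);
        }
    }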

    Another approach is to use Convolutional Neural Networks (CNNs) to identify the location of the gold mineral. Convolutional Neural Networks are a class of machine learning algorithms that “learn” to find patterns in images by looking at large numbers of samples. In order to develop a CNN to identify minerals, we must take lots of photos of the sampling setup in different arrangements (and lighting conditions), and then manually label them. Then, the algorithm will “learn” how to differentiate gold minerals from other objects on the field. A CNN should be able to work in many different lighting conditions; however, it is also more difficult to write.

    Next Steps

    As of now, Iron Reign is going to attempt both methods of classification and compare their performance.

    CNN Training

    CNN Training By Arjun and Abhi

    Task: Capture training data for a Convolutional Neural Network

    In order to train a Convolutional Neural Network, we need a whole bunch of training images. So we went out to the field and took 125 photos of the sampling setup from different positions and angles. Our next step is to label the gold minerals in all of these photos, so that we can train a Convolutional Neural Network to locate the gold minerals by learning from the patterns in the training data.

    Next Steps

    Next, we will go through the photos and label the gold minerals. In addition, we must create a program to make processing them easier.

    Autonomous Path Planning

    Autonomous Path Planning By Abhi

    Task: Map Autonomous paths

    With the high point potential available in this year's autonomous, it is essential to plan autonomous paths now. This year's auto is more complicated due to potential collisions with alliance partners, in addition to an unknown period of time spent delatching from the lander. To address both of these concerns, I developed four autonomous paths for us to investigate for use during competition.

    When making auto paths, there are some things to consider. First, the field is identical for both the red and blue alliances, meaning we don't need to rewrite the code for the other side of the field. Second, we have to account for our alliance partner's autonomous, if they have one, and adapt our path so we don't crash into them. Third, we have to avoid the other alliance's robots to avoid penalties; there are no explicit boundaries for auto this year, but if we interrupt the opponent's auto we get heavily penalized. With these in mind, let's look at the paths.

    This path plan is the simplest of all the autonomi. It assumes that our alliance partner has an autonomous and that our robot only handles half the functions: detaching from the lander, sampling the proper mineral, deploying the team marker, and parking in the crater. I chose the opposite crater instead of the near one because it is a shorter distance and there is less chance of interfering with our alliance partner. The risk is that this plan may interfere with the opponent's autonomous, but if we drive strategically, hugging the wall, we shouldn't have issues.

    This path is also a "simple" one but is noticeably more involved. The issue is that the team marker depot is not on the same side as the lander, forcing us to drive all the way down and back to park in the crater. I could also change this one to go to the opposite crater, but that may interfere with our alliance partner's autonomous.

    This is one of the autonomi that assumes our alliance partners don't have an autonomous; it is built for multi-functionality. The time restriction makes this autonomous unlikely, but it is still nice to plan a path for it.

    This is also one of the autonomi that assumes our alliance partners don't have an autonomous. It is the simpler of the two but still has the same restrictions.

    Next Steps

    Although it's great to think these paths will work out as drawn, we might need to change them a lot. With potential collisions with alliance partners and opponents, we might want a drop-down menu of sorts on the driver station that lets us assemble different pieces so we can pick and choose the auto plan. Maybe we could even draw out the path during init. All of this is only at the speculation stage right now.

    CNN Training Program

    CNN Training Program By Arjun and Abhi

    Task: Designing a program to label training data for our Convolutional Neural Network

    In order to use the captured training data, we need to label it by identifying the location of the gold mineral in it. We also need to normalize it by resizing the training images to a constant size (320x240 pixels). While we could do this by hand, it would be a pain: we would have to resize each individual picture, identify the coordinates of the center of the gold mineral, and then create a file to store the resized image and coordinates.

    Instead of doing this, we decided to write a program to do this for us. That way, we could just click on the gold mineral on the screen, and the program would do the resizing and coordinate-finding for us. Thus, the process of labeling the images will be much easier.
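
    A minimal sketch of the resize-and-record step is below; it assumes a hypothetical "filename,x,y" label format and skips the click UI that the real tool provides.

    import javax.imageio.ImageIO;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;

    // Sketch of the resize-and-record step only; the real tool adds a click UI on top.
    // The "filename,x,y" label format is an assumption for illustration.
    public class LabelWriter {
        private static final int WIDTH = 320, HEIGHT = 240;

        public static void label(File imageFile, int clickX, int clickY, FileWriter labels) throws IOException {
            BufferedImage original = ImageIO.read(imageFile);

            // Resize the training image to a constant 320x240.
            BufferedImage resized = new BufferedImage(WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = resized.createGraphics();
            g.drawImage(original, 0, 0, WIDTH, HEIGHT, null);
            g.dispose();
            ImageIO.write(resized, "png", new File(imageFile.getParent(), "resized_" + imageFile.getName()));

            // Scale the clicked pixel into the resized coordinate space and record it.
            int x = clickX * WIDTH / original.getWidth();
            int y = clickY * HEIGHT / original.getHeight();
            labels.write(imageFile.getName() + "," + x + "," + y + "\n");
        }
    }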

    Throughout the weekend, I worked on this program. The end result is shown above.

    Next Steps

    Now that the program has been developed, we need to actually use it to label the training images we have. Then, we can train the Convolutional Neural Network.

    Labelling Minerals - CNN

    Labelling Minerals - CNN By Arjun and Abhi

    Task: Label training images to train a Neural Network

    Now that we have software to make labeling the training data easier, we have to actually use it to label the training images. Abhi and I split up our training data into two halves, and we each labeled one half. Then, when we had completed the labeling, we recombined the images. The images we labeled are publicly available at https://github.com/arjvik/RoverRuckusTrainingData.

    Next Steps

    We need to actually write a Convolutional Neural Network using the training data we collected.

    Upgrading to FTC SDK version 4.0

    Upgrading to FTC SDK version 4.0 By Arjun

    Task: Upgrade our code to the latest version of the FTC SDK

    FTC recently released version 4.0 of their SDK, with initial support for external cameras, better PIDF motor control, improved wireless connectivity, new sensors, and other general improvements. Our code was based on last year's SDK version 3.7, so we needed to merge the new SDK with our repository.

    The merge was slightly difficult, as there were some issues with the Gradle build system. However, after a little fiddling with the configuration, as well as fixing some errors in the internal code we changed, we were able to successfully merge the new SDK.

    After the merge, we tested that our code still worked on Kraken, last year's competition robot. It ran with no problems.

    Developing a CNN

    Developing a CNN By Arjun and Abhi

    Task: Begin developing a Convolutional Neural Network using TensorFlow and Python

    Now that we have gathered and labeled our training data, we began writing our Convolutional Neural Network. Since Abhi had used Python and TensorFlow to write a neural network in the past during his visit to MIT over the summer, we decided to do the same now.

    After running our model, however, we noticed that it was not very accurate. Though we knew this was due to a bad choice of layer structure or hyperparameters, we were not able to determine the exact cause. (Hyperparameters are settings, such as the learning rate and layer sizes, that must be chosen before training; if they are off, the network will not learn well.) We fiddled with many of the hyperparameters and layer-structure options, but were unable to fix the inaccuracy.

    # Imports assumed - the original snippet omits them; n_rows and n_cols hold the
    # training image dimensions.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

    # Convolutional layers feed a final 2-unit linear layer that predicts the (x, y)
    # center of the gold mineral in a grayscale input image.
    model = Sequential()
    model.add(Conv2D(64, activation="relu", input_shape=(n_rows, n_cols, 1), kernel_size=(3,3)))
    model.add(Conv2D(32, activation="relu", kernel_size=(3,3)))
    model.add(MaxPooling2D(pool_size=(8, 8), padding="same"))
    model.add(Conv2D(8, activation="tanh", kernel_size=(3,3)))
    model.add(MaxPooling2D(pool_size=(8, 8), padding="same"))
    model.add(Conv2D(4, activation="relu", kernel_size=(3,3)))
    model.add(Conv2D(4, activation="tanh", kernel_size=(1,1)))
    model.add(Flatten())
    model.add(Dense(2, activation="linear"))
    model.summary()

    Next Steps

    We have not fully given up, though. We plan to keep attempting to improve the accuracy of our neural network model.

    Rewriting CNN

    Rewriting CNN By Arjun and Abhi

    Task: Begin rewriting the Convolutional Neural Network using Java and DL4J

    While we had been using Python and TensorFlow to train our convolutional neural network, we decided to attempt rewriting it in Java, since our robot code is entirely in Java and the network must ultimately run there.

    We also decided to try using DL4J, a competing library to TensorFlow, to write our neural network, to determine whether it was easier to write a neural network using DL4J or TensorFlow. We found that both were similarly easy to use, and while each had a different style, code written with either was equally easy to read and maintain.

    		//Download dataset
    		DataDownloader downloader = new DataDownloader();
    		File rootDir = downloader.downloadFilesFromGit("https://github.com/arjvik/RoverRuckusTrainingData.git", "data/RoverRuckusTrainingData", "TrainingData");
    		
    		//Read in dataset
    		DataSetIterator iterator = new CustomDataSetIterator(rootDir, 1);
    		
    		//Normalization
    		DataNormalization scaler = new ImagePreProcessingScaler(0, 1);
    		scaler.fit(iterator);
    		iterator.setPreProcessor(scaler);
    		
    		//Read in test dataset
    		DataSetIterator testIterator = new CustomDataSetIterator(new File(rootDir, "Test"), 1);
    			
    		//Test Normalization
    		DataNormalization testScaler = new ImagePreProcessingScaler(0, 1);
    		testScaler.fit(testIterator);
    		testIterator.setPreProcessor(testScaler);
    		
    		//Layer Configuration
    		MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
    				.seed(SEED)
    				.l2(0.005)
    				.weightInit(WeightInit.XAVIER)
    				.list()
    				.layer(0, new ConvolutionLayer.Builder()
    						.nIn(1)
    						.kernelSize(3, 3)
    						.stride(1, 1)
    						.activation(Activation.RELU)
    						.build())
    				.layer(1, new ConvolutionLayer.Builder()
    						.nIn(1)
    						.kernelSize(3, 3)
    						.stride(1, 1)
    						.activation(Activation.RELU)
    						.build())
    				/* ...more layer code... */
    				.build();
    

    Next Steps

    We still need to fix the inaccuracy in the predictions made by our neural network.

    Pose BigWheel

    Pose BigWheel By Abhi

    Task: New Pose for Big Wheel robot

    Historically, Iron Reign has used a class called "Pose" to hold all of the robot's hardware mapping instead of putting it directly into our opmodes. This has given us cleaner code and smoother integration with our crazier functions. However, we used the same Pose for the past two years since both robots had nearly identical drive bases. Since there wasn't a viable differential drive Pose, I made a new one, taking inspiration from the mecanum version. This Pose will be used for robot setup in our code from this point onward.

    We start by initializing everything, including PID constants and all our motors and sensors. I will skip over this for this post since it looks much the same in most teams' code.

    In the init, I made the hardware mapping for the motors we have on BigWheel right now. Other functions will come in later.

    Here is where a lot of the work happens. This is what allows our robot to move accurately using IMU and encoder values.
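
    As a rough sketch of the kind of logic involved (not our actual Pose methods), a proportional turn toward an IMU heading might look like the following; the gain, clamp, and tolerance values are illustrative.

    // Illustrative proportional turn toward a target IMU heading. The gain, clamp, and
    // tolerance are made-up values that would need tuning on the real robot.
    public class TurnHelper {
        private static final double KP = 0.02;          // proportional gain
        private static final double MAX_POWER = 0.6;    // clamp so large errors don't saturate
        private static final double TOLERANCE_DEG = 2;  // close enough to stop

        /** Returns a turn power proportional to the heading error, or 0 when within tolerance. */
        public static double turnPower(double currentHeadingDeg, double targetHeadingDeg) {
            // Wrap the error into [-180, 180) so the robot always takes the short way around.
            double error = ((targetHeadingDeg - currentHeadingDeg + 540) % 360) - 180;
            if (Math.abs(error) < TOLERANCE_DEG) return 0;
            return Math.max(-MAX_POWER, Math.min(MAX_POWER, KP * error));
        }
    }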

    There are many other methods beyond these, but they mostly involve technical trigonometry. I won't bore you with the details; our code is open source, so you can find what you need on our GitHub!

    RIP CNN

    RIP CNN By Abhi

    Task: Farewell Iron Reign's CNN

    FTC released new code supporting TensorFlow, which automatically detects minerals with a model they trained. Unfortunately, all of our CNN work was undercut by this update. The silver lining is that our research into how CNNs work will let us better understand what the FTC app is doing, and we may retrain the model ourselves if it doesn't perform well. But for now, it is time to bid farewell to our CNN.

    Next Steps

    From this point, we will further analyze the CNN to determine its ability to detect the minerals. At the same time, we will also look into OpenCV detection.

    Code Post-Mortem after Conrad Qualifier

    Code Post-Mortem after Conrad Qualifier By Arjun and Abhi

    Task: Analyze code failure at Conrad Qualifier

    Iron Reign has been working hard on our robot, but despite that, we did not perform well, largely because of our autonomous.

    Our autonomous plan was fairly simple: perform sampling, deploy the team marker, then drive to the crater to park. We planned to use the built-in TensorFlow object detection for our sampling, and thus assumed that our autonomous would be fairly easy.

    On Thursday, I worked on writing a class to help us detect the location of the gold mineral using the built-in TensorFlow object detection. While testing this class, I noticed that it produced an error rather than outputting the location of the gold mineral. This error was not diagnosed until the morning of the competition.

    On Friday, Abhi worked on writing code for the driving part of the autonomous. He wrote three different autonomous routines, one for each position of the gold mineral. His code did not yet select which routine to use, leaving it open for us to connect it to the TensorFlow class to determine which position the gold mineral was in.

    On Saturday, the morning of the competition, we debugged the TensorFlow class written earlier and determined the cause of the error: we had misused the TensorFlow object detection API. After we corrected that, our code no longer threw an error. Then we realized that TensorFlow only worked at certain camera positions and angles, so we had to adjust the position of our robot on the field so that detection would work.

    Our code failure was mostly due to the fact that we only started working on our autonomous two days before the competition. Next time, we plan to make our autonomous an integral part of our robot, and focus on it much earlier.

    Next Steps:

    We will spend more time focusing on code and autonomous, to ensure that we enter our next competition with a fully working autonomous.

    DPRG Vision Presentation

    DPRG Vision Presentation By Arjun and Abhi

    Task: Present to the Dallas Personal Robotics Group about computer vision

    We presented to the DPRG about our computer vision, touching on subjects including OpenCV, Vuforia, TensorFlow, and training our own Convolutional Neural Network. Everyone we presented to was very interested in our work, and they asked us many questions. We also received quite a few suggestions on ways we could improve the performance of our vision solutions. The presentation can be seen below.

    Next Steps

    We plan to research what they suggested, such as retraining our neural networks and reusing our old training images.

    Refactoring Vision Code

    Refactoring Vision Code By Arjun

    Task: Refactor Vision Code

    Iron Reign has been working on multiple vision pipelines, including TensorFlow, OpenCV, and a home-grown Convolutional Neural Network. Until now, all our code assumed that we only used TensorFlow, and we wanted to be able to switch out vision implementations quickly. As such, we decided to abstract away the actual vision pipeline used, which allows us to be able to choose between vision implementations at runtime.

    We did this by creating a java interface, VisionProvider, seen below. We then made our TensorFlowIntegration class (our code for detecting mineral positions using TensorFlow) implement VisionProvider.

    Next, we changed our opmode to use the new VisionProvider interface. We added code to allow us to switch vision implementations using the left button on the dpad.
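
    A rough sketch of how that runtime switching might look is below; VisionProvider and TensorflowIntegration are the classes shown later in this post, while the opmode structure, provider array, and edge-detection flag are simplified for illustration.

    import com.qualcomm.robotcore.eventloop.opmode.OpMode;

    // Rough sketch of switching vision backends at runtime with dpad-left, edge-detected so
    // one press advances exactly one backend. The opmode name and provider list are illustrative.
    public class VisionSwitchingOpMode extends OpMode {
        private VisionProvider[] providers = { new TensorflowIntegration() /*, other backends */ };
        private int current = 0;
        private boolean dpadLeftWasPressed = false;

        @Override
        public void init() {
            providers[current].initializeVision(hardwareMap, telemetry);
        }

        @Override
        public void loop() {
            if (gamepad1.dpad_left && !dpadLeftWasPressed) {
                providers[current].shutdownVision();                         // stop the old backend
                current = (current + 1) % providers.length;
                providers[current].initializeVision(hardwareMap, telemetry); // start the new one
            }
            dpadLeftWasPressed = gamepad1.dpad_left;
            telemetry.addData("Vision backend", providers[current].getClass().getSimpleName());
        }
    }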

    Our code for VisionProvider is shown below.

    public interface VisionProvider {
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry);
        public void shutdownVision();
        public GoldPos detect();
    }
    

    These methods are implemented in the integration classes.
    Our new code for TensorflowIntegration is shown below:

    public class TensorflowIntegration implements VisionProvider {
        private static final String TFOD_MODEL_ASSET = "RoverRuckus.tflite";
        private static final String LABEL_GOLD_MINERAL = "Gold Mineral";
        private static final String LABEL_SILVER_MINERAL = "Silver Mineral";
    
        private List<Recognition> cacheRecognitions = null;
      
        /**
         * {@link #vuforia} is the variable we will use to store our instance of the Vuforia
         * localization engine.
         */
        private VuforiaLocalizer vuforia;
        /**
         * {@link #tfod} is the variable we will use to store our instance of the Tensor Flow Object
         * Detection engine.
         */
        public TFObjectDetector tfod;
    
        /**
         * Initialize the Vuforia localization engine.
         */
        public void initVuforia() {
            /*
             * Configure Vuforia by creating a Parameter object, and passing it to the Vuforia engine.
             */
            VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
            parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
            parameters.cameraDirection = CameraDirection.FRONT;
            //  Instantiate the Vuforia engine
            vuforia = ClassFactory.getInstance().createVuforia(parameters);
        }
    
        /**
         * Initialize the Tensor Flow Object Detection engine.
         */
        private void initTfod(HardwareMap hardwareMap) {
            int tfodMonitorViewId = hardwareMap.appContext.getResources().getIdentifier(
                    "tfodMonitorViewId", "id", hardwareMap.appContext.getPackageName());
            TFObjectDetector.Parameters tfodParameters = new TFObjectDetector.Parameters(tfodMonitorViewId);
            tfod = ClassFactory.getInstance().createTFObjectDetector(tfodParameters, vuforia);
            tfod.loadModelFromAsset(TFOD_MODEL_ASSET, LABEL_GOLD_MINERAL, LABEL_SILVER_MINERAL);
        }
    
        @Override
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            initVuforia();
    
            if (ClassFactory.getInstance().canCreateTFObjectDetector()) {
                initTfod(hardwareMap);
            } else {
                telemetry.addData("Sorry!", "This device is not compatible with TFOD");
            }
    
            if (tfod != null) {
                tfod.activate();
            }
        }
    
        @Override
        public void shutdownVision() {
            if (tfod != null) {
                tfod.shutdown();
            }
        }
    
        @Override
        public GoldPos detect() {
            List<Recognition> updatedRecognitions = tfod.getUpdatedRecognitions();
            if (updatedRecognitions != null) {
                cacheRecognitions = updatedRecognitions;
            }
            if (cacheRecognitions.size() == 3) {
                int goldMineralX = -1;
                int silverMineral1X = -1;
                int silverMineral2X = -1;
                for (Recognition recognition : cacheRecognitions) {
                    if (recognition.getLabel().equals(LABEL_GOLD_MINERAL)) {
                        goldMineralX = (int) recognition.getLeft();
                    } else if (silverMineral1X == -1) {
                        silverMineral1X = (int) recognition.getLeft();
                    } else {
                        silverMineral2X = (int) recognition.getLeft();
                    }
                }
                if (goldMineralX != -1 && silverMineral1X != -1 && silverMineral2X != -1)
                    if (goldMineralX < silverMineral1X && goldMineralX < silverMineral2X) {
                        return GoldPos.LEFT;
                    } else if (goldMineralX > silverMineral1X && goldMineralX > silverMineral2X) {
                        return GoldPos.RIGHT;
                    } else {
                        return GoldPos.MIDDLE;
                    }
            }
            return GoldPos.NONE_FOUND;
    
        }
    
    }
    

    Next Steps

    We need to implement detection using OpenCV, and make our class conform to VisionProvider, so that we can easily swap it out for TensorflowIntegration.

    We also need to do the same using our Convolutional Neural Network.

    Finally, it might be beneficial to have a dummy implementation that always “detects” the gold as being in the middle, so that if we know that all our vision implementations are failing, we can use this dummy one to prevent our autonomous from failing.
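
    A sketch of what such a fallback might look like (illustrative, not code we have written yet):

    import com.qualcomm.robotcore.hardware.HardwareMap;
    import org.firstinspires.ftc.robotcore.external.Telemetry;

    // Sketch of the fallback idea: a provider that always reports the middle position, so
    // autonomous can keep running even if every real vision backend fails.
    public class DummyMiddleVisionProvider implements VisionProvider {
        @Override
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            telemetry.addData("Vision", "Dummy provider active - always reports MIDDLE");
        }

        @Override
        public void shutdownVision() { /* nothing to release */ }

        @Override
        public GoldPos detect() {
            return GoldPos.MIDDLE;
        }
    }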

    OpenCV Support

    OpenCV Support By Arjun

    Task: Add OpenCV support to vision pipeline

    We recently refactored our vision code to allow us to easily swap out vision implementations. We had already implemented TensorFlow, but we hadn't implemented code for using OpenCV instead of TensorFlow. Using the GRIP pipeline we designed earlier, we wrote a class called OpenCVIntegration, which implements VisionProvider. This new class allows us to use OpenCV instead of TensorFlow for our vision implementation.
    Our code for OpenCVIntegration is shown below.

    public class OpenCVIntegration implements VisionProvider {
    
        private VuforiaLocalizer vuforia;
        private Queue<VuforiaLocalizer.CloseableFrame> q;
        private int state = -3;
        private Mat mat;
        private List<MatOfPoint> contours;
        private Point lowest;
    
        private void initVuforia() {
            VuforiaLocalizer.Parameters parameters = new VuforiaLocalizer.Parameters();
            parameters.vuforiaLicenseKey = RC.VUFORIA_LICENSE_KEY;
            parameters.cameraDirection = VuforiaLocalizer.CameraDirection.FRONT;
            vuforia = ClassFactory.getInstance().createVuforia(parameters);
        }
    
        public void initializeVision(HardwareMap hardwareMap, Telemetry telemetry) {
            initVuforia();
            q = vuforia.getFrameQueue();
            state = -2;
    
        }
    
        public void shutdownVision() {}
    
        public GoldPos detect() {
            if (state == -2) {
                if (q.isEmpty())
                    return GoldPos.HOLD_STATE;
                VuforiaLocalizer.CloseableFrame frame = q.poll();
                Image img = VisionUtils.getImageFromFrame(frame, PIXEL_FORMAT.RGB565);
                Bitmap bm = Bitmap.createBitmap(img.getWidth(), img.getHeight(), Bitmap.Config.RGB_565);
                bm.copyPixelsFromBuffer(img.getPixels());
                mat = VisionUtils.bitmapToMat(bm, CvType.CV_8UC3);
            } else if (state == -1) {
                RoverRuckusGripPipeline pipeline = new RoverRuckusGripPipeline();
                pipeline.process(mat);
                contours = pipeline.filterContoursOutput();
            } else if (state == 0) {
                if (contours.size() == 0)
                    return GoldPos.NONE_FOUND;
                lowest = centroidish(contours.get(0));
            } else if (state < contours.size()) {
                Point centroid = centroidish(contours.get(state));
                if (lowest.y > centroid.y)
                    lowest = centroid;
            } else if (state == contours.size()) {
                if (lowest.x < 320d / 3)
                    return GoldPos.LEFT;
                else if (lowest.x < 640d / 3)
                    return GoldPos.MIDDLE;
                else
                    return GoldPos.RIGHT;
            } else {
                return GoldPos.ERROR2;
            }
            state++;
            return GoldPos.HOLD_STATE;
        }
    
        private static Point centroidish(MatOfPoint matOfPoint) {
            Rect br = Imgproc.boundingRect(matOfPoint);
            return new Point(br.x + br.width/2,br.y + br.height/2);
        }
    }
    

    Debug OpenCV Errors

    Debug OpenCV Errors By Arjun

    Task: Use black magic to fix errors in our code

    We implemented OpenCV support in our code, but we hadn’t tested it until now. Upon testing, we realized it didn't work.

    The first problem we found was that Vuforia wasn’t reading in our frames. The queue which holds Vuforia frames was always empty. After making lots of small changes, we realized that this was due to not initializing our Vuforia correctly. After fixing this, we got a new error.

    The error message changed, meaning we had fixed one problem, but another was hiding behind it. The new error was that our code was unable to access the native OpenCV libraries; namely, it could not link to libopencv_java320.so. Unfortunately, we could not debug this any further.

    Next Steps

    We need to continue debugging this problem and find the root cause of it.

    Auto Paths

    Auto Paths By Abhi

    Task: Map and code auto for depot side start

    Today, we implemented our first autonomous paths. Since we still didn't have complete vision software, we wrote these manually so we could later integrate vision without issues. Here are videos of all of the paths. For the sake of debugging, the bot stops after turning towards the crater, but in reality it will drive on and park in the far crater. These paths will help us score highly during autonomous.

    Center

    Left

    Right

    Next Steps

    We will get vision integrated into the paths.

    Issues with Driving

    Issues with Driving By Karina

    Task: Get ready for Regionals

    Regionals is coming up, and there are some driving issues that need to be addressed. Going back to November, one notable issue we had at the Conrad qualifier was the lack of friction between Bigwheel's wheels and the field tiles. There was not enough weight resting on the wheels, which made it hard to move suddenly.

    Since then, many changes have been made to Bigwheel's lift. For starters, we switched out the REV extrusion linear slide for the MGN12H linear slide. We have also added more components to intake and carry minerals. These changes fixed the previous issue as long as we keep the lift below roughly 70 degrees while moving, but the added weight at the end of the slide makes rotating around Bigwheel's elbow joint problematic. As you can see below, Bigwheel's chassis is not heavy enough to stay grounded when deploying the arm (and so I had to stand on the back end of Bigwheel like a fool).

    Another issue I encountered during driver practice was trying to deposit minerals in the lander. By "having issues" I mean I couldn't. Superman broke as soon as I tried going into the up position, and this mechanism was intended to raise Bigwheel enough so that it would reach the lander. Even setting aside Superman's condition, the mineral container was still loose and not attached to its servo. Consequently, I could not rotate the lift past vertical without dropping the minerals I had collected.

    Next Steps

    To run a full practice match, Superman and the container will need to be fixed, as well as the weight issue. Meanwhile, I will practice getting minerals out of the crater.

    Vision Summary

    Vision Summary By Arjun and Abhi

    Task: Reflect on our vision development

    One of our priorities this season was our autonomous, as a perfect autonomous could score us a considerable amount of points. A large portion of these points come from sampling, so that was one of our main focuses within autonomous. Throughout the season, we developed a few different approaches to sampling.

    Early on in the season, we began experimenting with using a Convolutional Neural Network to detect the location of the gold mineral. A Convolutional Neural Network, or CNN, is a machine learning algorithm that uses multiple layers which "vote" on what the output should be based on the output of previous layers. We developed a tool to label training images for use in training a CNN, publicly available at https://github.com/arjvik/MineralLabler. We then began training a CNN with the training data we labeled. However, our CNN was unable to reach a high accuracy level, despite us spending lots of time tuning it. A large part of this came down to our lack of training data. We haven't given up on it, though, and we hope to improve this approach in the coming weeks.

    We then turned to other alternatives. At this time, the built-in TensorFlow Object Detection code was released in the FTC SDK. We tried out TensorFlow, but we were unable to use it reliably. Our testing revealed that the detection provided by TensorFlow was not always able to detect the location of the gold mineral. We attempted to modify some of the parameters, however, since only the trained model was provided to us by FIRST, we were unable to increase its accuracy. We are currently looking to see if we can detect the sampling order even if we only detect some of the sampling minerals. We still have code to use TensorFlow on our robot, but it is only one of a few different vision backends available for selection during runtime.

    Another alternative vision framework we tried was OpenCV. OpenCV is a collection of vision processing algorithms which can be combined to form powerful pipelines. OpenCV pipelines perform sequential transformations on their input image, until it ends up in a desired form, such as a set of contours or boundaries of all minerals detected in the image. We developed an OpenCV pipeline to find the center of the gold mineral given an image of the sampling order. To create our pipeline, we used a tool called GRIP, which allows us to visualize and tune our pipeline. However, since we have found that bad lighting conditions greatly influence the quality of detection, we hope to add LED lights to the top of our phone mount so we can get consistent lighting on the field, hopefully further increasing our performance in dark field conditions.

    Since we wanted to be able to switch easily between these vision backends, we decided to write a modular framework which allows us to swap out vision implementations with ease. As such, we are now able to choose which vision backend we would like to use during the match, with just a single button press. Because of this, we can also work in parallel on all of the vision backends.

    Another abstraction we made was the ability to switch between different viewpoints, or cameras. This allows us to decide at runtime which viewpoint we wish to use, either the front/back camera of the phone, or external webcam. Of course, while there is no good reason to change this during competition (hopefully by then the placement of the phone and webcam on the robot will be finalized), it is extremely useful during the development of the robot, because we don't have everything about our robot finalized.

      Summary of what we have done:
    • Designed a convolutional neural network to perform sampling.
    • Tested out the provided TensorFlow model for sampling.
    • Developed an OpenCV pipeline to perform sampling.
    • Created a framework to switch between different Vision Providers at runtime.
    • Created a framework to switch between different camera viewpoints at runtime.

    Next Steps

    We would like to continue improving on and testing our vision software so that we can reliably sample during our autonomous.

    Minor Code Change

    Minor Code Change By Karina

    Task: Save Bigwheel from self destruction

    The other day, when running through Bigwheel's controls, we came across an error in the code. The elbow motors did not have min and max limits on their range of motion, causing the gears to grind at the extremes. Needless to say, Iron Reign has gone through a few gears already. Adding stops in the code was simple enough.
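
    A minimal sketch of the kind of stop we added is below; the encoder limits and the ElbowLimits wrapper are placeholders for illustration, not our actual values or class names.

    import com.qualcomm.robotcore.hardware.DcMotor;

    // Sketch of a software stop: clamp every requested elbow target into a safe encoder
    // range before commanding the motor. Limit values and the class name are placeholders.
    public class ElbowLimits {
        private static final int ELBOW_MIN_TICKS = 0;
        private static final int ELBOW_MAX_TICKS = 2200;

        private final DcMotor elbowMotor;

        public ElbowLimits(DcMotor elbowMotor) {
            this.elbowMotor = elbowMotor;
        }

        public void setTarget(int requestedTicks) {
            // Never command a position past the mechanical limits, so the gears can't grind.
            int clamped = Math.max(ELBOW_MIN_TICKS, Math.min(ELBOW_MAX_TICKS, requestedTicks));
            elbowMotor.setTargetPosition(clamped);
        }
    }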

    Testing the code revealed immediate success: we went through the full range of motion and no further grinding occurred.

    Next Steps

    Going forward, we will continue to debug code through drive practice.

    Code Updates

    Code Updates By Abhi and Arjun

    Task: Detail last-minute code changes to autonomous

    It is almost time for competition, and with that comes a super duper autonomous. For the past couple of weeks and today, we focused on making our depot-side auto work consistently. Because our robot wasn't fully built, we couldn't do auto-delatching. Today, we integrated our vision pipelines into the auto and tested all the paths with vision. They seemed to work at home base, but our field isn't built to exact specifications.

    Next Steps

    At Wylie, we will have to tune the auto paths to adjust for our field's discrepancies.

    Competition Day Code

    Competition Day Code By Abhi and Arjun

    Task: Update our code

    While at the Wylie qualifier, we had to make many changes because our robot broke the night before.

    The first change was adding the belt code. Previously, we had relied on gravity and the polycarb locks on the slides, but we quickly realized that the slides needed to articulate in order to preserve Superman. As a result, we added the belts to our collector class and drove them using their encoders.

    Next, we added manual overrides for all functions of our robot. Simply due to lack of time, we didn't add any presets and we focused on making the robot functional enough for competition. During competition, Karina was able to latch during endgame with purely the manual overrides.

    Finally, we tuned the auto paths. We ended up using an OpenCV pipeline and were able to accurately detect the gold mineral at all times. However, our practice field wasn't set up to exact specifications, so we spent the majority of the day at the Wylie practice field tuning the depot-side auto (by the end of the day it worked almost perfectly every time).

    Next Steps

    We were lucky to have qualified early in the season, so we had room for mistakes such as this. However, it will be hard to sustain this, so we must implement build freezes in the future.

    Code Updates

    Code Updates By Abhi

    Task: DISD STEM EXPO

    The picture above is a representation of our work today. After making sure all the manual drive controls were working, Karina found the positions she preferred for intake, deposit, and latch. Taking these encoder values from telemetry, we created new methods for the robot to run to those positions. As a result, the robot was very functional. We could latch onto the lander in 10 seconds (a much faster endgame than we had ever managed).
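
    As a sketch of the preset idea (with placeholder tick values rather than Karina's actual presets), encoder targets read off telemetry become named positions the robot can run to:

    import com.qualcomm.robotcore.hardware.DcMotor;

    // Sketch of the preset idea: encoder targets read off telemetry during practice become
    // named positions the robot can run to. Tick values are placeholders, not our presets.
    public class ArmPresets {
        public static final int INTAKE_TICKS  = 150;
        public static final int DEPOSIT_TICKS = 1800;
        public static final int LATCH_TICKS   = 2450;

        private final DcMotor elbow;

        public ArmPresets(DcMotor elbow) {
            this.elbow = elbow;
        }

        public void runTo(int targetTicks) {
            elbow.setTargetPosition(targetTicks);           // target must be set before RUN_TO_POSITION
            elbow.setMode(DcMotor.RunMode.RUN_TO_POSITION); // built-in controller holds the preset
            elbow.setPower(0.6);                            // assumed power, tuned in practice
        }
    }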

    Next Steps

    The code is still a little messy so we will have to do further testing before any competition.

    Autonomous Non-Blocking State Machines

    Autonomous Non-Blocking State Machines By Arjun

    Task: Design a state machine class to make autonomous easier

    In the past our autonomous routines were tedious and difficult to change. Adding one step to the beginning of an autonomous would require changing the indexes of every single step after it, which could take a long time depending on the size of the routine. In addition, simple typos could go undetected and cause lots of problems. Finally, there was so much repetitive code that our routines ran over 400 lines long.

    In order to remedy this, we decided to create a state machine class that takes care of the repetitive parts of our autonomous code. We created a StateMachine class, which allows us to build autonomous routines as sequences of "states", or individual steps. This new state machine system makes autonomous routines much easier to code and tune, as well as removing the possibility for small bugs. We also were able to shorten our code by converting it to the new system, reducing each routine from over 400 lines to approximately 30 lines.

    Internally, StateMachine uses instances of the functional interface State (or some of its subclasses, SingleState for states that only need to be run once, TimedState, for states that are run on a timer, or MineralState, for states that do different things depending on the sampling order). Using a functional interface lets us use lambdas, which further reduce the length of our code. When it is executed, the state machine takes the current state and runs it. If the state is finished, the current state index (stored in a class called Stage) is incremented, and a state switch action is run, which stops all motors.
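
    A simplified sketch of the idea (not our actual StateMachine class) is below: each state is a step that reports when it is finished, and the machine advances to the next step once it does.

    // Simplified sketch of the idea (not our actual classes): each State is one step that
    // reports whether it has finished; the machine runs the current step every loop and
    // advances when it returns true.
    @FunctionalInterface
    interface State {
        boolean execute(); // return true once this step is complete
    }

    class SimpleStateMachine {
        private final State[] states;
        private int index = 0;

        SimpleStateMachine(State... states) { this.states = states; }

        /** Call once per opmode loop; returns true when every step has finished. */
        boolean execute() {
            if (index >= states.length) return true;
            if (states[index].execute()) index++; // advance to the next step when it finishes
            return index >= states.length;
        }
    }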

    Here is an autonomous routine which has been converted to the new system:

    private StateMachine auto_depotSample = getStateMachine(autoStage)
                .addNestedStateMachine(auto_setup) //common states to all autonomous
                .addMineralState(mineralStateProvider, //turn to mineral, depending on mineral
                        () -> robot.rotateIMU(39, TURN_TIME), //turn left
                        () -> true, //don't turn if mineral is in the middle
                        () -> robot.rotateIMU(321, TURN_TIME)) //turn right
                .addMineralState(mineralStateProvider, //move to mineral
                        () -> robot.driveForward(true, .604, DRIVE_POWER), //move more on the sides
                        () -> robot.driveForward(true, .47, DRIVE_POWER), //move less in the middle
                        () -> robot.driveForward(true, .604, DRIVE_POWER))
                .addMineralState(mineralStateProvider, //turn to depot
                        () -> robot.rotateIMU(345, TURN_TIME),
                        () -> true,
                        () -> robot.rotateIMU(15, TURN_TIME))
                .addMineralState(mineralStateProvider, //move to depot
                        () -> robot.driveForward(true, .880, DRIVE_POWER),
                        () -> robot.driveForward(true, .762, DRIVE_POWER),
                        () -> robot.driveForward(true, .890, DRIVE_POWER))
                .addTimedState(4, //turn on intake for 4 seconds
                        () -> robot.collector.eject(),
                        () -> robot.collector.stopIntake())
                .build();
    

    Control Mapping

    Control Mapping By Bhanaviya, Abhi, Ben, and Karina

    Task: Map and test controls

    With regionals being a week away, the robot needs to be in drive testing phase. So, we started out by mapping out controls as depicted above.

    Upon testing the controls, we realized that when the robot went into Superman mode, it collapsed due to the lopsided weight of the base, since the presets were not as accurate as they could be. The robot had trouble finding the right position when attempting to deposit and intake minerals.

    After we found a preset for the intake mechanism, we had to test it out to ensure that the arm extended far enough to sample. Our second task was ensuring that the robot could go into superman while still moving forward. To do this, we had to find the position which allowed the smaller wheel at the base of the robot to move forward while the robot was in motion.

    Next Steps

    We plan to revisit the robot's balancing issue in the next meet and find the accurate presets to fix the problem.

    Big Wheel Articulations

    Big Wheel Articulations By Abhi

    Task: Summary of all Big Wheel movements

    Our robot shifts multiple major subsystems (the elbow and Superman) as it moves, which makes it difficult to keep the robot from tipping. Therefore, through driver practice, we determined the 5 major deployment modes that would make it easier for the driver to transition from mode to mode. Each articulation is necessary to maintain the robot's center of gravity as its mode of operation shifts.

    The position seen above is called "safe drive". During normal match play, our drivers can go to this position to navigate the field quickly and with the arm out of the way.

    When the driver control period starts, we normally navigate to the crater then enter the intake position shown above. From this position, we can safely pick up minerals from the crater.

    From the intake position, the robot goes to safe drive to fix the weight balance then goes to the deposit position shown above. The arm can still extend upwards above the lander and our automatic sorter can place the minerals appropriately.

    During the end game, we enter a latchable position where our hook can easily slide into the latch. After hooking on, our robot can lift itself slightly off the ground.

    At the beginning of the match, we can completely close the arm and superman to fit in the sizing cube and latch onto the lander.

    As you can see, there are a lot of articulations that need to work together during the course of the match. By putting this info in a state machine, we can easily toggle between articulations. Refer to our code snippets for more details.
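    As a hedged sketch of what that looks like (the enum values mirror the modes above, but the names and handler are illustrative, not our actual classes):

    // Illustrative sketch only -- names do not match our actual code.
    enum Articulation { SAFE_DRIVE, INTAKE, DEPOSIT, LATCHABLE, FOLDED }

    Articulation target = Articulation.SAFE_DRIVE;

    // Called every loop: pick a target from driver input, then move the elbow and
    // Superman toward that mode's presets so the center of gravity stays over the wheels.
    void updateArticulation(boolean intakeButton, boolean depositButton, boolean latchButton) {
        if (intakeButton)  target = Articulation.INTAKE;
        if (depositButton) target = Articulation.DEPOSIT;
        if (latchButton)   target = Articulation.LATCHABLE;

        switch (target) { // each case would set the elbow/Superman presets for that mode
            case INTAKE:    /* elbow down, Superman forward */ break;
            case DEPOSIT:   /* elbow up, Superman back */      break;
            case LATCHABLE: /* raise hook to latch height */   break;
            default:        /* safe drive presets */           break;
        }
    }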

    Next Steps

    At this point, we have 4 cycles in 1 minute 30 seconds. By adding some upgrades to the articulations using our new distance sensors, we hope to speed this up even more.

    Cart Hack

    Cart Hack By Arjun

    Task: Tweaking ftc_app to allow us to drive robots without a Driver Station phone

    As you already know, Iron Reign has a mechanized cart called Cartbot that we bring to competitions. We used the FTC control system to build it, so we could gain experience. However, this has one issue: we can only use one pair of Robot Controller and Driver Station phones at a competition, because of WiFi interference problems.

    To avoid this pitfall, we decided to tweak the ftc_app our team uses to allow us to plug a controller directly into the Robot Controller. This cuts out the need for a Driver Station phone, which means that we can drive Cartbot around without worrying about breaking any rules.

    Another use for this tweak could be for testing, since with this new system we don't need a Driver Station when we are testing our tele-op.

    As of now this modification lives in a separate branch of our code, since we don't know how it may affect our match code. We hope to merge this later once we confirm it doesn't cause any issues.

    Road to Worlds Document

    Road to Worlds Document By Ethan, Charlotte, Evan, Karina, Janavi, Jose, Ben, Justin, Arjun, Abhi, and Bhanaviya

    Task: Consider what we need to do in the coming months

    ROAD TO WORLDS - What we need to do

     

    OVERALL:

    • New social media manager (Janavi/Ben) and photographer (Ethan, Paul, and Charlotte)

     

    ENGINEERING JOURNAL: - Charlotte, Ethan, & all freshmen

     

    • Big one - freshmen get to start doing a lot more

     

    • Engineering section revamp
      • Decide on major subsystems to focus on
        • Make summary pages and guides for judges to find relevant articles
      • Code section
        • Finalize state diagram
          • Label diagram to refer to the following print out of different parts of the code
        • Create plan to print out classes
        • Monthly summaries
      • Meeting Logs
        • Include meeting planning sessions at the beginning of every log
          • Start doing planning sessions!
        • Create monthly summaries
      • Biweekly Doodle Polls
        • record of supposed attendance rather than word of mouth
      • Design and format revamping
        • Start doing actual descriptions for blog commits
        • More bullet points to be more technical
        • Award highlights [Ethan][Done]
        • Page numbers [Ethan][Done]
        • Awards on indexPrintable [Ethan][Done]
      • Irrelevant/distracting content
        • Packing list
        • Need a miscellaneous section
          • content
      • Details and dimensions
        • Could you build robot with our journal?
        • CAD models
        • More technical language, it is readable but not technical currently
    • Outreach
      • More about the impact and personal connections
      • What went wrong
      • Make content more concise and make it convey our message better



    ENGINEERING TEAM:

     

    • Making a new robot - All build team (Karina & Jose over spring break)

     

      • Need to organize motors (used, etc)
      • Test harness for motors (summer project)
    • Re-do wiring -Janavi and Abhi
    • Elbow joint needs to be redone (is at a slight angle) - Justin/Ben
      • 3D print as a prototype
        • Cut out of aluminum
      • Needs to be higher up and pushed forward
      • More serviceable
        • Can’t plug in servos
    • Sorter -Evan, Karina, and Justin
      • Sorter redesign
    • Intake -Evan, Karina, Abhi, Jose
      • Take video of performance to gauge how issues are happening and how we can fix
      • Subteam to tackle intake issues
    • Superman -Evan and Ben
      • Widen superman wheel
    • Lift
      • Transfer pulley ratio (1:1 to 3:4)
      • Larger drive pulley
        • Mount motors differently to make room
    • Chassis -Karina and a freshman
      • Protection for LED strips
      • Battery mount
      • Phone mount
      • Camera mount
      • New 20:1 motors
      • Idler sprocket to take up slack in chain (caused by small sprocket driving large one)
    • CAD Model



    CODE TEAM: -Abhi and Arjun

    • add an autorecover function to our robot for when it tips over
      • it happened twice and we couldn’t recover fast enough to climb
    • something in the update loop to maintain balance
      • we were supposed to do this for regionals but we forgot to do it and we faced the consequences
    • fix IMU corrections such that we can align to field wall instead of me eyeballing a parallel position
    • use distance sensors to do wall following and crater detection
    • auto paths need to be expanded such that we can avoid alliance partners and have enough flexibility to pick and choose what path needs to be followed
      • In both auto paths, can facilitate double sampling
    • Tuning with PID (tuning constants)
    • Autonomous optimization



    DRIVE TEAM:

    • Driving Logs
      • every time there is driving practice, a driver will fill out a log that records the overall record time, the record time for that day, the number of cycles for each run, and other helpful stats to track the progress of driving practice
    • actual driving practice lol
    • Multiple drive teams

     

    COMPETITION PREP:

    • Pit setup
      • Clean up tent and make sure we have everything to put it together
      • Activities
        • Robotics related
      • Find nuts and bolts based on the online list
    • Helping other teams
    • Posters
    • Need a handout
    • Conduct in pits - need to be focused
    • MXP or no?
    • Spring break - who is here and what can we accomplish
    • Scouting

     

    Code Refactor

    Code Refactor By Abhi and Arjun

    Task: Code cleanup and season analysis

    At this point in the season, we have time to clean up our code before further development. This is important to do now so that the code remains understandable as we make many changes for worlds.

    There aren't any new features that were added during these commits. In total, there were 12 files changed, 149 additions, and 253 deletions.

    Here is a brief graph of our commit history over the past season. As you can see, there was a spike during this code refactor.

    Here is a graph of additions and deletions over the course of the season. There was also another spike during this time as we made changes.

    Next Steps

    Hopefully this cleanup will help us on our journey to worlds.

    Localization

    Localization By Ben

    Localization

    A feature that is essential to many advanced autonomous sequences is the ability to know the robot's absolute location (x position, y position, heading). For our localization, we determine the robot's position relative to the field's coordinate frame. To track our position, we use encoders (to determine displacement) and a gyro (to determine heading).

    Our robot's translational velocity can be determined from how our encoder counts change over time. Heading velocity is simply how our angle changes over time. Thus, our actual velocity can be represented by the following equation.

    Integrating that to find our position yields

    Using this new equation, we can obtain the robot's updated x and y coordinates.
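    As a sketch of what that integration looks like in code, assuming a ticks-per-meter calibration constant and an IMU heading in radians (the names here are illustrative, not our actual Pose class):

    // Minimal odometry sketch -- illustrative names and an assumed calibration constant.
    static final double TICKS_PER_METER = 2000.0;
    double x = 0, y = 0;   // field-relative position in meters
    long lastTicks = 0;    // averaged drive encoder reading from the previous loop

    void updatePose(long ticks, double headingRadians) {
        double distance = (ticks - lastTicks) / TICKS_PER_METER; // displacement this loop
        x += distance * Math.cos(headingRadians); // project the displacement onto field axes
        y += distance * Math.sin(headingRadians);
        lastTicks = ticks;
    }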

    Balancing Robot

    Balancing Robot By Abhi and Ben

    Initial Work on Balancing Robot

    Since our robot has two wheels and a long arm, we decided to take on an interesting problem: balancing our robot on two wheels as do modern hoverboards and Segways. Though the problem had already been solved by others, we tried our own approach.

    We first tried a PID control loop approach, as we had traditionally been accustomed to that model for our autonomous. This proved to be a large challenge, as lag in loop times didn't give us the sensitivity that was necessary, but we tried to optimize this model anyway.

    Next time we will continue fine-tuning the gains, and use a graph plotting our current pitch versus the desired pitch to determine how we should tweak the gains to smoothly reach the setpoint. Another factor we need to account for is the varying loop times: we should factor the measured loop time into the PID calculations to ensure consistency. In addition, we may try to implement state space control to control this balancing instead of PID.
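    A rough sketch of what a loop-time-aware PID correction could look like (the gains and names are placeholders, not tuned values):

    // Placeholder gains -- these are not our tuned values.
    double kP = 0.03, kI = 0.0, kD = 0.002;
    double integral = 0, lastError = 0;
    long lastTime = System.nanoTime();

    double balanceCorrection(double currentPitch, double targetPitch) {
        long now = System.nanoTime();
        double dt = (now - lastTime) / 1e9; // seconds since the last loop
        lastTime = now;

        double error = targetPitch - currentPitch;
        integral += error * dt;                       // scale accumulation by loop time
        double derivative = (error - lastError) / dt; // normalize by loop time
        lastError = error;

        return kP * error + kI * integral + kD * derivative; // drive motor power correction
    }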

    Balancing Robot Updates

    Balancing Robot Updates By Abhi and Ben

    Updates on Balancing Robot

    Today we managed to get our robot to balance for 30 seconds after spending about an hour tuning the PID gains. We made significant progress, but there is a flaw in our algorithm that needs to be addressed. At the moment, we have a fixed pitch that we want the robot to balance at but due to the weight distribution of the robot, forcing it to balance at some fixed setpoint will not work well and will cause it to continually oscillate around that pitch instead of maintaining it.

    To address this issue, there are a number of solutions. As mentioned in the past post, one approach is to use state space control. Though it may present a more accurate approach, it is computationally intensive and is more difficult to implement. Another solution is to set the elbow to run to a vertical angle rather than having that value preset. For this, we would need another IMU sensor on the arm and this also adds another variable to consider in our algorithm.

    To learn more about this problem, we looked into this paper, developed at Harvard and MIT, that used Lagrangian mechanics to relate the variables, combined with state space control. Lagrangian mechanics allows you to represent the physics of the robot in terms of energy rather than Newtonian forces. The main equation, the Lagrangian, is given as follows:
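    L = T - V

    where T is the total kinetic energy of the system and V is its potential energy.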

    To actually represent the Lagrangian in terms of our problem, there is a set of differential equations which can be fed into the state space control equations. For the sake of this post, I will not list them here; refer to the paper given above for more info.

    Next Steps:

    This problem will be on hold until we finish the necessary code for our robot but we have a lot of new information we can use to solve the problem.

    Icarus Code Support

    Icarus Code Support By Abhi

    Task: Implement dual robot code

    With the birth of Icarus came a new job for the programmers: supporting both Bigwheel and Icarus. We needed the code to work both ways because new logic could be developed on bigwheel while the builders completed Icarus.

    This was done by simply creating an Enum for the robot type and feeding it into PoseBigWheel initialization. This value was fed into all the subsystems so they could be initialized properly. During init, we could now select the robot type and test with it. The change to the init loop is shown below.
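    The original snippet isn't reproduced here, but the pattern looks roughly like this (the enum and constructor call are approximations of our code, not exact copies):

    // Approximation of the pattern -- not copied verbatim from our repo.
    public enum RobotType { BigWheel, Icarus }

    RobotType currentBot = RobotType.BigWheel;

    // During init_loop the drivers can flip the selection before pressing start:
    if (gamepad1.dpad_up)   currentBot = RobotType.Icarus;
    if (gamepad1.dpad_down) currentBot = RobotType.BigWheel;

    // The selection is then passed into pose/subsystem initialization, e.g.
    // robot = new PoseBigWheel(currentBot); // each subsystem configures its motors for that chassis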

    Next Steps

    After testing, it appears that our logic is functional for now. Coders can now further develop our base without Icarus.

    Reverse Articulations

    Reverse Articulations By Abhi

    Task: Summary of Icarus Movements

    In post E-116, I showed all the big wheel articulations. As we shifted our robot to Icarus, we decided to change to a new set of articulations as they would work better to maintain the center of gravity of our robot. Once again, we made 5 major deployment modes. Each articulation is necessary to maintain the robot's center of gravity as its mode of operation shifts.

    The position seen above is called "safe drive". During normal match play, our drivers can go to this position to navigate the field quickly and with the arm out of the way. In addition, we use this articulation as we approach the lander to deposit.

    When the driver control period starts, we normally navigate to the crater then enter the intake position shown above. From this position, we can safely pick up minerals from the crater. Note that there are two articulations shown here. These show the intake position both contracted and extended during intake.

    During the end game, we enter a latchable position where our hook can easily slide into the latch. After hooking on, our robot can lift itself slightly off the ground. This is the same articulation as before.

    At the beginning of the match, we can completely close the arm and superman to fit in the sizing cube and latch onto the lander. This is the same articulation as before.

    These articulations were integrated into our control loop just as before, which allowed for smooth integration.

    Next Steps

    As the final build of Icarus is completed, we can test these articulations and their implications.

    Center of Gravity calculations

    Center of Gravity calculations By Arjun

    Task: Determine equations to find robot Center of Gravity

    Because our robot tends to tip over often, we decided to start working on a dynamic anti-tip algorithm. In order to do so, we needed to be able to find the center of gravity of the robot. We did this by modeling the robot as 5 separate components, finding the center of gravity of each, and then using that to find the overall center of gravity. This will allow us to better understand when our robot is tipping programmatically.

    The five components we modeled the robot as are the main chassis, the arm, the intake, superman, and the wheels. We then assumed that each of these components had an even weight distribution, and found their individual centers of gravity. Finally, we took the weighted average of the individual centers of gravity in the ratio of the weights of each of the components.
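    In equation form, with m_i the weight of each component and (x_i, y_i) its individual center of gravity, the overall center of gravity is the weighted average:

    x_cog = (m_1*x_1 + m_2*x_2 + ... + m_5*x_5) / (m_1 + m_2 + ... + m_5)
    y_cog = (m_1*y_1 + m_2*y_2 + ... + m_5*y_5) / (m_1 + m_2 + ... + m_5)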

    By having equations to find the center of gravity of our robot, we can continuously find it programmatically. Because of this, we can take corrective action to prevent tipping earlier than we would be able to by just looking at the IMU angle of our robot.

    Next Steps

    We now need to implement these equations in the code for our robot, so we can actually use them.

    Code updates at UIL

    Code updates at UIL By Arjun, Abhi, and Ben O

    Task: Update code to get ready for UIL

    It's competition time again, and with that means updating our code. We have made quite a few changes to our robot in the past few weeks, and so we needed to update our code to reflect those changes.

    Unfortunately, because the robot build was completed very late, we did not have much time to code. That meant that we not only needed to stay at the UIL event center until the minute it closed to use their practice field (we were literally the last team in the FTC pits), we also needed to pull a late-nighter and stay up from 11 pm to 4 am coding.

    One of our main priorities was autonomous. We decided early on to focus on our crater-side autonomous, because in our experience, most teams who only had one autonomous chose depot-side because it was easier to code.

    Unfortunately, we were quite wrong about that. We were forced to run our untested depot-side auto multiple times throughout the course of the day, and it caused us many headaches. Because of it, we missampled, got stuck in the crater, and tipped over in some of our matches where we were forced to run depot-side. Towards the end of the competition, we tried to quickly hack together a better depot-side autonomous, but we ran out of time to do so.

    Some of the changes we made to our crater-side auto were:

    • Updating to use our new reverse articulations
    • Moving vision detection during the de-latch sequence
    • Speeding up our autonomous by replacing driving with belt extensions
    • Sampling using the belt extensions instead of driving to prevent accidental missamples
    • Using PID for all turns to improve accuracy

    We also made some enhancements to teleop. We added a system to correct the elbow angle in accordance to the belt extensions so that we don't fall over during intake when drivers adjust the belts. We also performed more tuning to our articulations to make them easier to use.

    Finally, we added support for the LEDs to the code. After attaching the Blinkin LED controller late Friday night, we included LED color changes in the code. We use them to signal to drivers what mode we are in, and to indicate when our robot is ready to go.

    Control Hub First Impressions

    Control Hub First Impressions By Arjun and Abhi

    Task: Test the REV Control Hub ahead of the REV trial

    Iron Reign was recently selected to attend a REV Control Hub trial along with select other teams in the region. We wanted to do this so that we could get a good look at the control system that FTC would likely be switching to in the near future, as well as get another chance to test our robot in tournament conditions before Worlds.

    We received our Control Hub a few days ago, and today we started testing it. We noticed that while the Control Hub seems to use the same exterior as the FIRST Global control hubs, it seems to be different on the inside. For example, in the port labeled Micro USB, there was a USB C connector. We are glad that REV listened to teams and made this change, as switching to USB C means that there will be less wear and tear on the port. The other ports include a Mini USB port (we don't know what it is for), an HDMI port should we ever need to view the screen of the Control Hub, and two USB ports, presumably for webcams and other accessories. The inclusion of 2 USB ports means that a USB hub is no longer needed. One port appears to be USB 2.0, while the other appears to be USB 3.0.

    Getting started with programming it was quite easy. We tested using Android Studio, but both OnBot Java and Blocks should be able to work fine as we were able to access the programming webpage. We just plugged the battery in to the Control Hub, and then connected it to a computer via the provided USB C cable. The Control Hub immediately showed up in ADB. (Of course, if you forget to plug in the battery like we did at first, you won't be able to program it.)

    REV provided us with a separate SDK to use to program the Control Hub. Unfortunately, we are not allowed to redistribute it. We did note, however, that much of the visible internals look the same. We performed a diff between the original ftc_app's FtcRobotControllerActivity.java and the one in the new Control Hub SDK, and saw nothing notable except for mentions of permissions such as Read/Write External Storage Devices and Access Camera. These look reminiscent of standard Android permissions, and are likely there to account for the fact that you can't accept permissions on a device without a screen.

    While testing it, we didn't have time to copy over our entire codebase, so we made a quick OpMode that moved one wheel of one of our old robots. Because the provided SDK is almost identical to ftc_app, no changes were needed to the existing sample OpModes. We successfully tested our OpMode, proving that it works fine with the new system.

    Pairing the DS phone to the Control Hub was very quick with no hurdles, just requiring us to select "Control Hub" as the pairing method, and connect to the hub's Wifi network. We were told that for the purposes of this test, the WiFi password was "password". This worked, but we hope that REV changes this in the future, as this means that other malicious teams can connect to our Control Hub too.

    We also tested ADB Wireless Debugging. We connected to the Control Hub Wifi through our laptop, and then made it listen for ADB connections over the network via adb tcpip 5555. However, since the Control Hub doesn't use Wifi Direct, we were unable to connect to it via adb connect 192.168.49.1:5555. The reason for this is that the ip address 192.168.49.1 is used mainly by devices for Wifi Direct. We saw that our Control Hub used 192.168.43.1 instead (using the ip route command on Linux, or ipconfig if you are on Windows). We aren't sure if the address 192.168.43.1 is the same for all Control Hubs, or if it is different per control hub. After finding this ip address, we connected via adb connect 192.168.43.1:5555. ADB worked as expected following that command.

    Next Steps

    Overall, our testing was a success. We hope to perform further testing before we attend the REV test on Saturday. We would like to test using Webcams, OpenCV, libraries such as FtcDashboard, and more.

    We will be posting a form where you can let us know about things you would like us to test. Stay tuned for that!

    Auto Paths, Updated

    Auto Paths, Updated By Abhi

    Task: Reflect and develop auto paths

    It has been a very long time since we have reconsidered our auto paths. Between my last post and now, we have made numerous changes to both the hardware and the articulations. As a result, we should rethink the paths we used and optimize them for scoring. After testing multiple paths and observing other teams, I identified 3 auto paths we will try to perfect for championships.

    These two paths represent crater side auto. Earlier in the season, I drew one of the paths to do double sampling. However, because of the time necessary for our delatch sequence, I determined we simply don't have the time necessary to double sample. The left path above is a safe auto path that we had tested many times and used at UIL. However, it doesn't allow us to score the sampled mineral into the lander which would give us 5 extra points during auto. That's why we created a theoretical path seen on the right side that would deposit the team marker before sampling. This allows us to score the sampling mineral rather than just pushing it.

    This is the depot path I was thinking about. Though it looks very similar to the past auto path, there are some notable differences. After the robot delatches from the lander, the lift will simply extend into the depot rather than driving into it. This allows us to extend back and pick up the sampling mineral and score it. Then the robot can turn to the crater and park.

    Next Steps

    One of the crater paths is already coded. Our first priority is to get the depot auto functional for worlds. If we have time still remaining, we can try to do the second crater path.

    Fixing Mini-Mech

    Fixing Mini-Mech By Cooper

    Task: Fix Mini-Mech in time for the Skystone reveal

    In two weeks, Iron Reign is planning on building a robot in 2 days, based on the 2019-2020 Skystone Reveal Video. We've never really built a robot in that short a span of time, so we realized that preparing a suitable chassis ahead of time would make the challenge a lot easier, as it gives us time to focus on specific subsystems and code. As such, I worked on fixing up Mini-Mech, as it is a possible candidate for our robot in a weekend due to its small size and maneuverability. Mini-Mech was our 4-wheeled, mecanum-drive summer chassis project from the Rover Ruckus season, and it has consistently served as a solid prototype to test the early stages of our build and code. I started by testing the drive motors, and then tightening them down, since they were really loose.

    Then, I worked on adding a control hub to the chassis. Since Iron Reign was one of the teams who took part in REV's beta test for the control hubs last season, and because we are one of the pilot teams for the REV Control Hub in the North-Texas Region, using a control hub on our first robot of the season will help set us up for our first qualifier, during which we hope to use a control hub.

    Next Steps

    With MiniMech fixed, we now have at least one chassis design to build our Robot in 2 Days off of.

    Ri2D Code

    Ri2D Code By Jose

    Task: Code a Basic TeleOp Code for the Ri2D bot using pre-existing classes and methods

    As the Ri2D bot nears completion, it needs TeleOp code to actually make it move. Since this robot is based off of MiniMech, a previous chassis design from Rover Ruckus, the code was simply inserted into its existing class. To allow its subsystems to move and hold their position when they aren't being driven, posing methods were borrowed from the code for Icarus, our Rover Ruckus robot. Most of the `PoseBigWheel` class won't be used for this robot, but that's fine; we simply don't call the methods we don't need. The constructor for the `PoseBigWheel` class did need to be modified since different motors and servos are used, which was easy, as we just removed anything we didn't need. Again, most of the class won't be used, but as long as we don't delete any PIVs we should be fine.

    Once the code for robot posing was made to match the Ri2D bot, we needed to hook it up. To do this, an instance of this class was instantiated in the `MiniMech` class. With that, we can now use methods of the `Crane` class (the one with the robot posing) from the `MiniMech` class.

    Now it's time to use these methods from the `Crane` class. Since the elbow and slides are the same as Icarus's, we can apply these methods directly. These were simple if statements that detect a button press and set the appropriate motor moving using the posing code from the `Crane` class. Instead of using basic `setMotor` commands to get the motors going, the pre-coded methods were used; with no added complexity, and no prior knowledge of how to code robot posing, we can now keep the motors holding whatever position they are placed in.

    Finally, we have to code the servos. Since the `Crane` class comes with code for two servos, we can take advantage of it, as the Ri2D bot has only two servos. Although the code for this is a lot simpler since robot posing isn't required here, it is still nice to have the values for the open and closed positions stored in PIVs in the `Crane` class if we ever have to change them later. A simple toggle feature was used, so one button sets the servo to the open position when it's closed and vice versa.
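    A minimal sketch of that toggle logic (the positions and names are placeholders, not our actual values):

    // Placeholder positions and names -- not our actual values.
    static final double GRIP_OPEN = 0.8, GRIP_CLOSED = 0.2;
    boolean gripOpen = false;
    boolean lastButton = false;

    void updateGripper(boolean buttonPressed, com.qualcomm.robotcore.hardware.Servo gripServo) {
        if (buttonPressed && !lastButton) {          // act only on the press, not while held
            gripOpen = !gripOpen;
            gripServo.setPosition(gripOpen ? GRIP_OPEN : GRIP_CLOSED);
        }
        lastButton = buttonPressed;
    }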

    Next Steps

    We could work on some robot articulations later on, but a basic TeleOp program is good for now.

    FrankenDroid - TPM Calibration

    FrankenDroid - TPM Calibration By Jose, Cooper, and Bhanaviya

    Task: Calibrate FrankenDroid's Ticks Per Meter in preparation for programming autonomous

    Today we worked on the calibration of FrankenDroid's TPM. This is used to accurately and precisely move during autonomous by having a conversion factor between a given distance and ticks, the unit used in the code. This was done by commanding FrankenDroid to move forward 2000 ticks. Of course this wasn't a meter, but by converting the distance it did travel (measured in centimeters) to meters and dividing the commanded 2000 ticks by that distance, we got the approximate TPM. After a few rounds of refining the number, we got it exact to the centimeter. This was also done for strafing, and with that we now have an exact TPM that can be used when programming autonomous.
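    As a worked example (the measured distance here is made up for illustration):

    // Hypothetical numbers for illustration only.
    int commandedTicks = 2000;                                    // ticks we told the robot to drive
    double measuredCm = 87.0;                                     // distance it actually traveled
    double ticksPerMeter = commandedTicks / (measuredCm / 100.0); // about 2299 ticks per meter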

    Next Steps

    Now for the fun part, actually programming auto paths. These will be planned out and coded at a later date.

    Beginning Auto Stone

    Beginning Auto Stone By Cooper and Karina

    Task: Design an intake for the stones based on wheels

    Initial Design: Rolling Intake

    We've been trying to get our start on autonomous today. We are still using FrankenDroid (our Ri2D mecanum-drive test bot) because our competition bot is taking longer than we wanted. We only just started coding, so we are still learning how to use the StateMachine class that Arjun wrote last year. We wanted to make a skeleton of a navigation routine that would pick up and deposit two skystones, although we ran into 3 different notable issues.

    Problem #1 - tuning Crane presets

    We needed to create some presets for repeatable positions for our crane subsystem. Since we output all of that to telemetry constantly, it was easy to position the crane manually and see what the encoder positions were. We were mostly focusing on the elbow joint's position, since the extension won't come into play until we are stacking taller towers. The positions we need for auto are:

    • BridgeTransit - the angle the arm needs to be to fit under the low skybridge
    • StoneClear - the angle that positions the gripper to safely just pass over an upright stone
    • StoneGrab - the angle that places the intake roller on the skystone to begin the grab

    Problem #2 - learning the statemachine builder

    I've never used lambda expressions before, so that was a bit of a learning curve. But once I got the syntax down, it was pretty easy to get the series of needed moves into the statemachine builder. The sequence comes from our auto planning post. The code has embedded comments to explain what we are trying to do:
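    The snippet itself isn't reproduced here, but the shape of the routine looks roughly like this (the crane method names, preset constants, and distances are approximations, not our exact API):

    // Rough shape of the routine -- crane calls, presets, and distances are approximate.
    private StateMachine auto_skeleton = getStateMachine(autoStage)
            .addState(() -> crane.setElbowTarget(STONE_CLEAR))                // clear upright stones
            .addState(() -> robot.driveForward(true, .60, DRIVE_POWER))       // approach the quarry
            .addState(() -> crane.setElbowTarget(STONE_GRAB))                 // drop the roller onto the stone
            .addTimedState(2, () -> crane.intake(), () -> crane.stopIntake()) // run the gripper wheels
            .addState(() -> crane.setElbowTarget(BRIDGE_TRANSIT))             // tuck low to pass the skybridge
            .addState(() -> robot.rotateIMU(90, TURN_TIME))                   // turn toward the building zone
            .addState(() -> robot.driveForward(true, 1.2, DRIVE_POWER))       // cross under the bridge
            .build();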


    Problem #3 - REV IMU flipped?

    This was the hard one. We lost the last 1.5 hours of practice trying to figure this out, so we didn't get around to actually picking up any stones. We figured that our framework code couldn't have this wrong because it's based on last year's code and the team has been using imu-based navigation since before Karina started. So we thought it must be right and we just didn't know how to use it.

    The problem started with our turn navigation. We have a method called rotateIMU for in-place turns which just takes a target angle and a time limit. But the robot was turning the wrong way. We'd put in a 90 degree target value expecting it to rotate clockwise looking down at it, but it would turn counterclockwise and then oscillate wildly, which at least looked like it had found a target value. It just looked like very badly tuned PID overshoot, and since we hadn't done PID tuning for this robot yet, we decided to start with that. That was a mistake. We ended up damping the P constant down so much that it had the tiniest effect, and the oscillation never went away.

    We have another method built into our demo code called maintainHeading. Just what it sounds like, this lets us put the robot on a rotating lazy susan and it will use the IMU to maintain its current heading by rotating counter to the turntable. When we played with this it became clear the robot was anti-correcting. So we looked more closely at our IMU outputs and yes, they were "backwards." Turning to the left increased the IMU yaw when we expected turning to the right would do that.

    We have offset routines that help us with relative turns, so we suspected the problem was in there. However, we traced it all the way back to the raw outputs from imuAngles and found nothing. The REV Control Hub is acting like the IMU module is installed upside down. We also have an Expansion Hub on the robot and it behaves the same way. This actually triggered a big debate about navigation standards between our mentors, and they might write about that separately. So in the end, we went with the interpretation that the IMU is flipped. Therefore, the correction was clear: either change our bias so that clockwise is increasing for our relative turns, or flip the IMU output. We decided to flip the IMU output, and the fix was as simple as inserting the "360-" into this line of code:

    poseHeading = wrapAngle(360-imuAngles.firstAngle, offsetHeading);

    So the oscillation turned out to be at 180 degrees to the target angle. That's because the robot was anti-correcting but still able to know it wasn't near the target angle. At 180 it flips which direction it thinks it should turn for the shortest path to zero error, but the error was at its maximum, so the oscillation was violent. Once we got the heading flipped correctly, everything started working and the PID control is much better with the original constants. Now we could start making progress again.

    Though, the irony here is that we might end up mounting one of our REV hubs upside down on our competition robot. In which case we'll have to flip that imu back.

    Next Steps

    1) Articulating the Crane - We want to turn our Crane presets into proper articulations. Last year we built a complicated articulation engine that controlled that robot's many degrees of freedom. We have much simpler designs this year and don't really need a complicated articulation engine, but it has some nice benefits, like set-and-forget target positions, so the robot can be doing multiple things simultaneously from inside a step-by-step state machine. Plus, since it is so much simpler this year and we have examples, the engine should be easier to code.

    2) Optimization - Our first pass at auto takes 28 seconds, and that's with only 1.5 skystone runs and not even picking the first skystone up or placing it. And no foundation move or end run to the bridge. And no vision. We have a long way to go. But we are also doing this serially right now, and we can recover some time when we get our crane operating in parallel with navigation. We're also running at .8 speed, so we can gain a bit there.

    3) Vision - We've played with both the TensorFlow and Vuforia samples and they work fairly well. Since we get great positioning feedback from Vuforia, we'll probably start with that for auto skystone retrieval.

    4) Behaviors - We want to make picking up stones an automatic low-level behavior that works in auto and teleop. The robot should be able to automatically detect when it is over a stone and try to pick it up. We may want more than just vision to help with this, possibly distance sensors as well.

    5) Wall detection - It probably makes sense to use distance sensors to detect the distance to the wall and to stones. Our dead reckoning will need to be corrected as we get it up to maximum speed.

    Auto Path 1

    Auto Path 1 By Karina and Jose

    Task: Lay out our robot's path for autonomous

    To kick off our autonomous programming, Iron Reign created the first version of our autonomous path plan. We begin, like all robots, in the loading zone, with the robot's back to the field wall and the intake arm up. We approach the line-up of stones and deploy the arm to its intake state over the last stone while running the wheels of the gripper for a few seconds. Then, we back up directly. Using the IMU, our robot rotates 90 degrees and crosses underneath the skybridge to the building zone. About 1 foot past the end of the foundation closest to the bridges, we rotate again to the right and deposit our stone. Afterwards, we retract the intake arm, back up, and park underneath the skybridge.

    Next steps: Improving autonomous by testing

    The autonomous we have now is very simple, but this is only our first version. There are multiple steps that can be taken to increase the amount of points we score during autonomous.

    In testing, I've noticed that (depending on how successfully we initialize our robot) the stone we pick up during autonomous sometimes drags on the ground. This creates a resistive force that is not healthy for our intake arm, which is mounted on the robot by a single axle. To fix this, we can add code to slightly raise the arm before we begin moving.

    Eventually, when multiple teams on an alliance have an autonomous program, our own path will need to account for possible collisions. It will be strategic to have multiple autonomous paths, where one retrieves stones and places them on the foundation, while the other robot positions itself to push/drag the foundation to the depot.

    Also, our autonomous path is geared toward being precise, but going forward into the season, we will need to intake and place more stones if we want to be competitive. As well, we will need to use robot vision to identify the skystone, and transport that stone to the foundation, since this earns more points.

    Coding 10/19/19 (Putting meat on the skeleton)

    Coding 10/19/19 (Putting meat on the skeleton) By Cooper

    Task: work on actually filling out the auto

    As seen in the last post, the skeleton of the auto was done. Tonight my goal was to fill it out: make it do the things it needs to do at the points laid out in the skeleton. This would have been a bit more automated had we put a distance sensor on the robot, as I could just tell it to do certain actions based on how far it was from something. Without that, all I could do was hard-code the distances. This took most of the time, but was efficient since I did it in stages.

    Stage 1 - The blocks

    My first task was to pick up the first block in the quarry line. I started by going forward and estimating the amount I needed to travel, then moved on to the arm. I needed to make sure that when I went forward, the arm would clear the block just enough that I didn't shove it while moving, but would stay low enough that the block could be picked up with relative ease. So I ran a teleop version of this and recorded the arm values for hovering above the block, grabbing the block, and sitting just low enough to clear the bridge (a value I'd need later). Then I did trial and error on the distance to the block until the grabber was positioned just over it. At that point I ran into a little issue: I wanted to run the intake servos while I put the arm down, but in the StateMachine class we can only have one action happening at a time per StateMachine object. Therefore, I just set it to run the servos after the arm was put down.

    However, there was a separate issue concerning that as well. In the intake method, we assign a value to our servo PIVs to control the speed at which they run. This is how you are supposed to do that; the only problem is that, by itself, it is not compatible with our StateMachine. Since we use lambda functions, the state machine calls the method inside a .addState() repeatedly until that method returns true. For starters, we had to change the intake method to return a boolean value. But that isn't enough: left like that, the lambda would always get back false from that .addState(), and the routine would be stuck there until we stopped it. So I looked back at the old code from last year, and with the help of Mr. V, we found the .addTimedState() method. This takes in a method like a normal .addState(), but a time to complete can be assigned. With the intake method always returning false, the servos run until the end of the set time, after which the state machine ends that action and moves on to the next one.
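    In other words, the intake call ends up following a pattern something like this (the names are placeholders, not our exact method):

    // Placeholder names -- a sketch of the pattern, not our exact method.
    com.qualcomm.robotcore.hardware.Servo intakeLeft, intakeRight; // gripper wheel servos

    public boolean runIntake() {
        intakeLeft.setPosition(1.0);   // spin the wheels inward
        intakeRight.setPosition(0.0);
        return false;                  // never reports "done" on its own...
    }
    // ...so it gets wrapped in a timed state, e.g. .addTimedState(3, () -> runIntake(), () -> stopIntake())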

    Stage 2 - The deposit

    So, after the block was picked up, the robot was told to turn toward the other end of the field, where another set of estimations was used to move forward. This is where the value for just clearing the bridge came into play: to get under the bridge, we have to hold the block and arm in a certain position. After the bridge is cleared, the arm is set to move back up so that when we turn to face the build plate, we can deposit. Now, this was interesting. As hard as I tried, I could not get the deposit to work reliably, but some of the accidental effects gave me ideas for the most efficient way of placing the block. On one of the runs, the block was set down and didn't quite sit where it needed to; it was tilted back away from the robot. This led to the arm knocking it back into the correct place. I think this is a great, more catch-all way to make sure that the first block is correctly placed. I would have expanded on the idea, but I had to leave soon after.

    Next Steps

    I need to test more efficient paths for this auto, but other than that, I just need to finish this version of the auto for the scrimmage.

    Control Mapping

    Control Mapping By Bhanaviya and Cooper

    Task: Map and test controls

    With the Hedrick Middle School scrimmage being a day away, the robot needs to be in drive testing phase. So, we started out by mapping out controls as depicted above.

    Upon testing the controls, we realized that when the robot attempted to move, it was unable to do so without strafing. To fix this issue, we decided to utilize a "dead-zone" on the left joystick. The dead-zone is a range of joystick values that our code simply ignores. Although the values inside the zone have no effect, we realized that we could use this to stop the robot from strafing. We do plan to implement strafing later on our actual competition robot (TomBot), but for the duration of the scrimmage, the dead-zone in FrankenDroid's (our scrimmage robot) controls will absorb the strafe inputs so that the robot cannot strafe at any point during the scrimmage. This will give our drivers more control over the robot during matches.
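    A minimal sketch of the dead-zone idea (the threshold value is illustrative, not our tuned number):

    // Illustrative threshold -- not our tuned value.
    static final double DEADZONE = 0.2;

    // Any stick input smaller than the threshold is treated as zero; for the scrimmage the
    // threshold is set wide enough on the strafe axis that strafing is effectively suppressed.
    double applyDeadzone(double stickValue, double threshold) {
        return Math.abs(stickValue) < threshold ? 0.0 : stickValue;
    }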

    Next Steps

    We plan to drive-test at the scrimmage tomorrow to ensure that the robot can move accurately without strafing. Once we begin coding Iron Roomba, we plan to set up strafing in such a way that it does not interfere with the rest of the robot's controls. At any rate, the dead-zone has given us a possible solution to work with if the strafe issue occurs on our competition bot. Since this is the control map for our scrimmage robot, we anticipate that the controls will change once Iron Roomba is further along in the engineering process. A new post featuring Roomba's controls will be created then.

    Driving at the Hedrick Scrimmage

    Driving at the Hedrick Scrimmage By Karina and Jose

    Task: Figure out what went wrong at the scrimmage

    We didn't do too well in teleop driving at the Hedrick Scrimmage, with our max stone deposit being 2 stones. There are several things to blame.

    In usual Iron Reign fashion, we didn't start practicing driving until a day or two before. Since we were not familiar with the controls, we could not perform at maximum capacity.

    There were also more technical issues with our robot. For one, the arm was mounted with little reinforcement. The small amounts of torque produced when dragging a stone across the floor gradually pushed the arm so that it was no longer parallel to the frame of the robot, but slightly at an angle. And so, picking up the stones manually was not as straightforward a task as it should have been.

    This flaw could easily have been corrected for if FrankenDroid could strafe well, but FrankenDroid struggled with this. When the arm was extended, its weight lifted the back wheel opposite the corner the arm was mounted on off the ground. Thus, strafing to align with a stone when the arm was extended was a lengthy and tedious task.

    Next steps:

    FrankenDroid has served its purpose well: it moved at the scrimmage and gave the team a better feel for the competition environment. But it's time to let go. Moving forward, Iron Reign will focus its efforts on building our circular robot, TomBot. Ironically, we will likely have to deconstruct FrankenDroid to harvest parts.

    Coding Before Scrimmage

    Coding Before Scrimmage By Cooper, Karina, Bhanaviya, and Trey

    Task: Finish the temporary auto and work with drivers for teleop

    Tonight, the night before the scrimmage, we worked on making the depositing of the stone and the parking of the robot more reliable, or at least as reliable as possible, since we are planning to use FrankenDroid, which is somewhat in need of repair (repairs I also worked on with the help of Trey, Bhanaviya, and Karina). This came with a few changes, since while we solved the problems we had when we started the auto, there were still many that cropped up.

    Problem #1 - dragging the base

    In the auto, we need to drag the foundation into the taped-off section in the corner. This poses a problem, as dragging it can lead to major inaccuracies in estimated positioning. This, however, can be solved somewhat easily once we have a distance sensor, which we could use in conjunction with PID-based turning. In theory I could have done it with just the PID turning, and while I would have loved to test that, there was another problem--

    Problem #2 - problem with hook

    There was a problem with our hook. Every time I ran auto I tried to get the hook to work. I changed the return value, I changed the physical positioning of where it started, yet nothing worked. This was interesting, as it does work in teleop. In any case, it prevented us from actually dragging the foundation in this version of the auto. Looking back on it, there is a possibility that I needed to set it as a timed state, like the gripper, since we were using a servo to control it. While it's unlikely, it's possible.

    Problem #3 - PID Tuning?

    This was the major issue of the night, and we haven't found the root of it quite yet. During the auto, at the third turn, where the robot turns to head to the foundation, there is a ~25% chance that the PID does not check where it is turning and it just keeps going. This usually leads to it overshooting and then ramming into the wall. There is a temporary fix, however. For now, it seems that it only happens right after we upload the code to the robot, or if we run auto fresh from the robot being powered off. That is to say, if we run the auto for at least a second and then reset and re-init, it will be fine. In a way this is a good thing, though: every chance we get to fix the underlying code's problems now means we won't have to make workarounds later in the season.

    Problem #4 - putting the block on the build platform

    This was the major fixable problem in the code. During auto, we need to take a block from the quarry and put it on the foundation. The problem is when we actually go to deposit it. When we go to put it down, we need to be very accurate, which with FrankenDroid is not easy. With no distance sensors, the best we can do is to tune the exact movements. While this isn't the greatest solution, this will do for now. In the future, we will have a distance sensor so that we can know where we are exactly in relation to the base.

    Next Steps

    We need to implement the distance sensors and other sensors on the robot. Obviously we aren't going to be using FrankenDroid for too much longer. TomBot may bring new innovations like a telemetry wheel which will make auto more accurate.

    Hedrick Scrimmage - Code

    Hedrick Scrimmage - Code By Jose and Cooper

    Task: Discuss what went and what needs improvement in our code

    Taking part in the annual Hedrick Scrimmage, we got to test our Robot in 24 Hours, FrankenDroid. Specifically, since both coders on the team are new to the sub-team, we wanted to see what code capabilities we could offer. For this event we had two autonomous paths: the first simply drives underneath the skybridge for an easy 5 points; the other grabs a stone (we had no vision on FrankenDroid, so no way to detect a skystone), moves to the building zone to drop said stone, and parks under the skybridge. For being coded in just a few days, these auto paths scored well and were reasonably accurate and precise. As well as auto, we wanted to test driver enhancements. These were coded at the event but proved to be useful. They include button presses to move the arm to either fully retracted or perpendicular to the ground for strafing, and disabling strafing while in stacking or intake mode. These proved to be effective on the playing field, making the drivers' lives easier.

    Next Steps

    We need to incorporate vision into our autonomous, most likely Vuforia, to be able to detect skystones as well as speeding up the auto paths to be able to complete a 2 skystone auto.

    Transition from Expansion Hub to Control Hub

    Transition from Expansion Hub to Control Hub By Jose and Cooper

    Task: Discuss the transition from using the Expansion Hub to using the Control Hub

    Over the past month we have used the Control Hub on our Robot in 24 Hours, FrankenDroid. This was a great way to test its viability before implementing it on our competition robot. We had already used the Control Hub at the REV test event, where we were given a sample Control Hub to replace the existing Expansion Hub in our Rover Ruckus bot. That proved the Control Hub to be much better than the Expansion Hub, since there was no worry of a phone disconnect mid-match. This was no different on FrankenDroid: we had less ping, didn't have to worry about a phone mount, and most important of all, we could push code to it over WiFi. This is a useful feature since modifications to the bot's code can be done on the spot with no need for a wired connection. The only downside we see as of now is that an external webcam must be used for vision; this, of course, is because we no longer have a phone to do it. This is fine since we are used to using a camera for vision anyway, so there is no difference there.

    Next Steps

    Considering that our team is one of the NTX teams who have received permission to beta-test a control hub at qualifiers, we will now use it on our current competition bot, Iron Roomba, especially since we have proven the control hub to be fully viable on a competition bot, having used FrankenDroid at the Hedrick Scrimmage.

    Coding TomBot

    Coding TomBot By Cooper and Jose

    Task: Use existing code from the code base to program TomBot

    To code TomBot, we decided to use the codebase from FrankenDroid, as it's the one we were most comfortable with. This will change after the qualifier, as we recognize that the robot is more like last year's robot, Icarus. This will, in the long run, help us, as we will be able to minimize the amount of refactoring we have to do. But in the meantime, we made 4 major changes in the code for TomBot.

    Change #1 - Mecanum Drive to Differential

    The first was the change from a mecanum drive to a differential, arcade-style control. This was done by commenting out the lines for strafing and changing the method call to a dormant method that was a remnant of some testing done with a linear OpMode for an early version of FrankenDroid. We got rid of the power assignments for the front motors and just used the back two motors as our 2 drive motors. This gave us some trouble, which I'll cover later. After that trouble, the method was still broken: the left stick y was controlling the left motor and the right stick x was controlling the right motor, due to incorrect power assignments in the code. With that fixed, it drove as it should after switching an encoder cable.
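    The arcade mapping boils down to something like this sketch (not the exact code):

    // Sketch of arcade-style differential drive -- not the exact code.
    void arcadeDrive(double forward, double turn,
                     com.qualcomm.robotcore.hardware.DcMotor left,
                     com.qualcomm.robotcore.hardware.DcMotor right) {
        double leftPower  = forward + turn;
        double rightPower = forward - turn;
        left.setPower(Math.max(-1, Math.min(1, leftPower)));   // clip to the valid power range
        right.setPower(Math.max(-1, Math.min(1, rightPower)));
    }

    // Typically fed with forward = -gamepad1.left_stick_y and turn = gamepad1.right_stick_x.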

    Change #2 - Rolling Gripper to 3-finger gripper

    The next 'big' change was from the rolling gripper on FrankenDroid to the 3-finger design on TomBot. I use the word big lightly, as it wasn't more than commenting out the lines for one of the servos. However, it will have a major impact, which can be seen in the details of our grippers post. It is also noteworthy in terms of auto, as it will have adverse effects there due to the gripper's current instability and overall unpredictability, so in auto we will have to compensate for it.

    Change #3 - Turret

    One of the biggest changes we made to the code base was the Turntable class that I wrote. This was also, therefore, the hardest part. Since I'm still relatively new to this, I took a lot of my examples from the Crane class that Abhi wrote last year. I started by making a basic skeleton, including methods like rotateTo() and rotateRight(), and then started filling them in. For some reason, on the first go-around, I decided to throw out all the things they taught me in school and use rotateRight() and rotateLeft() as my lowest-level methods instead of rotateTo(). Another thing I failed to realize is that I didn't fully understand the Crane class: I made a redundant positionInternal variable for the encoder values that was assigned in the rotate method calls, then another variable called currentPosition was assigned to that, and then the motor's encoder target was set to that. This sounds stupid, because it was. It cost me a good day of work and was a great lesson in taking my time to understand something before I go off and do it.

    Once I had realized my misunderstanding of the Crane class, I was able to move on. I cut out all the unnecessary positionInternal code and used the other variable (currentRotation) to be changed in the rotate methods. Speaking of which, I also got some sense knocked into me and changed the rotate methods to use setRotation() as their lower-level method, making the code more professional in nature. That still was not our only problem. Next, we encountered a bizarre, glitch-like behavior when using the rotate methods: there was a sporadic, sudden movement whenever we pressed the button assigned to turning the table (the A button, as it was just a test). After looking at all the possible points of failure, we whittled it down to the fact that we had assigned it to the controller's A button. What we observed was the turntable working, just not how we thought we were telling it to. In the button map, there was a method called toggleAllowed() in front of all the boolean-valued buttons. This, unbeknownst to me, was actually a toggle method written by Tycho many years ago. The toggle makes it so the action assigned to a button only happens once per press, which is useful for things like latches and poses, where the driver could overshoot if they had to release the button at exactly the right time. In our case, however, it led to the turnLeft() method (the one assigned at the time) only happening once, which explained the sudden, sporadic movement after pressing the A button.

    Once we changed it to a trigger, it worked, almost. There was still some bug in the code that made it do some pretty funky stuff, which is hard to describe. After we whittled it down to just a small error of a negative that should have been a positive, it worked perfectly.

    Change #4 - XML file

    During the Woodrow Scrimmage, I spent most of my time dealing with null pointer exception errors and incorrect XML assignments. This was, again, due to a lack of knowledge of the code base. I tried to comment out certain motors, which led to the null pointers, and then tried to get rid of those null pointers in the XML file. After a while of this loop, I realized my mistake: the null pointers were due to method calls on uninstantiated objects. When I put all the assignments back in the init, I was finally able to get it running.

    Next Steps

    My next steps are to tune the PID values for auto, so I can use the skeleton from FrankenDroid. Then I need to remove some of the sounds from the driver phone, like the critical error one, as they can severely affect workflow and my sanity. Finally, I need to change the turret so that it uses the IMU heading instead of relying entirely on the encoder value from the turntable.

    Last Minute Code Changes

    Last Minute Code Changes By Cooper

    Task: Debug some last-minute code to be ready for our first qualifier of the season

    This article may seem a bit rushed, but that's because it is - for good reason. Tonight is the night before the qualifier and it’s roughly 2 in the morning. Tonight we got a lot done, but a lot didn’t get done. We can explain.

    We finally have a robot in a build state that we could use to test the code for the turntable properly. The only tragedy is that it wasn't refined, per se, but it's good enough for tonight. There are some random discrepancies between the controller and the actual turning of the turntable, but they seem to be largely minute.

    Next, we had issues a bit earlier tonight with the elbow. First off, the elbow was backwards: it would count ticks backwards, such that down was the positive tick direction. Looking through the code, we saw that the motor's encoder direction was flipped through a direct call to the DcMotor class. So we turned that call off and tried it, but that didn't work; we then found another and put the first back in a different position in the code, thinking they'd cancel out. Eventually, the solution was as simple as taking out those encoder direction flips, which allowed the elbow to count ticks forward. We plan to fine-tune our solution after the qualifier, but for now, it allows the elbow to work.

    Next steps

    Get some sleep and then refine and complete the code tomorrow morning at the qualifier, and hopefully write some auto

    Post-Qualifier Code Debugging

    Post-Qualifier Code Debugging By Cooper

    Task: Debug code after the Allen Qualifier

    After the qualifier, along with articulation plans, we had a long list of bugs in the code that needed to be sorted out. Most of them were a direct effect of not being able to test the code until the night before the qualifier. In particular, there were issues in the turntable and turret that needed to be debugged.

    The first one that we tackled was the turntable wind up and delay. This was one of the bigger problems, as it led to the instabilities seen at the qualifier. These included random jerking to one side, inconsistent speed, and most importantly the delay. As described by Justin, it was a 2-3 second time period in which the turntable did nothing and then started moving. This was especially important to fix for stacking, as quite obviously precision and careful movements are key to this game.

    So we started at what we thought was the source of the discrepancy: the rotateTurret() method. It was under scrutiny because it was the lowest-level call, or in other words the only code that assigns new tick targets to the motor. In the rotate methods that are called by other classes, we assign a new value to a variable called currentRotation. Once one of the rotate (right or left) methods is called, the new value is assigned to currentRotation. Then, where the update() method for the turret class is called in the loop, it calls rotateTurret(), which copies currentRotation into currentRotationInternal and then calls setTargetPosition(), giving currentRotationInternal as the motor's new tick target.
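    To make that chain concrete, here is a rough sketch; the field and method names mirror our Turret class, but the bodies are simplified and the tick step is purely illustrative.

    import com.qualcomm.robotcore.hardware.DcMotor;

    // Rough sketch of the target flow described above: simplified, with an
    // illustrative tick step rather than our real increment.
    public class TurretSketch {
        private final DcMotor turretMotor;
        private int currentRotation;          // target set by the rotate methods
        private int currentRotationInternal;  // target actually sent to the motor

        public TurretSketch(DcMotor turretMotor) {
            this.turretMotor = turretMotor;
        }

        public void rotateRight() { currentRotation += 20; } // called from the button map
        public void rotateLeft()  { currentRotation -= 20; }

        public void update() {            // called once per loop
            rotateTurret();
        }

        private void rotateTurret() {     // the only place ticks reach the motor
            currentRotationInternal = currentRotation;
            turretMotor.setTargetPosition(currentRotationInternal);
        }
    }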

    We also started going through the demo mode that was written last year. We have an idea for a really cool demo mode that will be documented once it's in progress. However, to get there we need a working IMU. We technically had an IMU that worked at the competition, though it was never properly used or calibrated. So, we decided to look into getting the IMUs running. We started by looking at the current demo code and seeing what it could do. Most of it was outdated, but we did find what we were looking for: the maintainHeading() method, which calls another method, driveIMU(). We then wrote a new maintainHeadingTurret(), which works pretty well. Granted, we need to adjust the kP and kI values for the PID, but that is quite easy.
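    A minimal sketch of that heading hold, assuming a proportional-only correction to start; the gain is untuned and the IMU/motor plumbing is left to the caller, since the real version reuses our existing PID code.

    // Minimal proportional-only sketch of a heading hold like maintainHeadingTurret();
    // kP is an untuned placeholder and the caller supplies the IMU heading.
    public class HeadingHoldSketch {
        private final double kP = 0.02;      // placeholder gain, needs tuning
        private double targetHeading = 0.0;  // degrees, captured when the hold is engaged

        public void setTargetHeading(double degrees) { targetHeading = degrees; }

        // returns a motor power in [-1, 1] that nudges the turret back toward the target
        public double maintainHeadingTurret(double currentHeadingDegrees) {
            double error = targetHeading - currentHeadingDegrees;
            while (error > 180)  error -= 360;   // always take the short way around
            while (error < -180) error += 360;
            double power = kP * error;
            return Math.max(-1.0, Math.min(1.0, power));
        }
    }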

    Next Steps

    Continue tuning PID values in both the turntable and turret.

    Future TomBot Articulations

    Future TomBot Articulations By Cooper

    Task: Plan out potential robot articulations to improve game strategy

    Getting back from the tournament, we were able to immediately start thinking about the big problems and possible improvements to the robot's articulations. Overall, we came up with several ideas, both for fixing things and for efficiency.

    1- Turntable Articulations

    At the competition, we realized how convenient having some preset articulations for the turret would be. Not that we hadn't tried to make them before the competition; we were just having issues writing them, and even if we had gotten them working, it's unlikely we would have had them tuned in time. Anyways, even though we agreed on needing these presets, we could not agree on what they should be. One argument was to make them field-centric, meaning the turret would hold a position from the point of view of the audience. This was cited as having a good number of use cases, such as repetitive positions like the left/right and forward directions of the field. Another idea was to make them robot-centric, which would allow for faster relative turns.

    So, what we've decided to do is write the code for both. The field-centric version's turns and subsequent static positions will use the IMU on the control hub mounted to the turret, while the robot-centric version will be based on the tick values of the encoder on the turret's motor. Then we will have the drivers choose which one they prefer. We believe this is effective, as it will give the drivers more consistent use of the turntable.
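    As a sketch of the two flavors (the helper names and the ticks-per-degree constant are placeholders until we measure the real ratio):

    // Sketch of the two preset flavors; TURNTABLE_TICKS_PER_DEGREE is a placeholder
    // until we measure the real ratio, and the heading error feeds the turret PID.
    public class TurntablePresetSketch {
        private static final double TURNTABLE_TICKS_PER_DEGREE = 10.0; // placeholder

        // Field-centric: hold an absolute heading using the IMU mounted on the turret,
        // so "left of the field" means the same thing no matter where the base points.
        public double fieldCentricError(double desiredFieldHeading, double imuHeadingDegrees) {
            double error = desiredFieldHeading - imuHeadingDegrees;
            while (error > 180)  error -= 360;
            while (error < -180) error += 360;
            return error;
        }

        // Robot-centric: offset the current encoder target by a relative turn,
        // e.g. a quick 90-degree turn to the robot's own left or right.
        public int robotCentricTarget(int currentTurretTicks, double relativeDegrees) {
            return currentTurretTicks + (int) (relativeDegrees * TURNTABLE_TICKS_PER_DEGREE);
        }
    }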

    2- Move to Tower Height Articulations

    This is one of the more useful ideas: extend the arm to the current height of the tower. How would we do it? Well, we have come up with a two-step plan, with different levels of difficulty. The first step is based on trig. We use the second controller to increment and decrement the current tower level. That value is then used in the extendToTowerHeight() method, which was written as follows:

    public void extendToTowerHeight(){
        hypotenuse = (int)(Math.sqrt(.25 * Math.pow((currentTowerHeight * blockHeightMeter), 2))); //in meters
        setElbowTargetPos((int)(ticksPerDegree * Math.acos(.5 / hypotenuse)), 1);
        setExtendABobTargetPos((int)(hypotenuse * (107.0/2960.0)));
    }

    As you can see, we used the current tower height times the height of a block to get the side of the triangle opposite our theta, in this case the arm angle. The .25 is an understood floor distance between the robot and the tower. This means the arm will always extend to the same floor distance every time. We think this is the most effective approach, as it means not only that the driver will have a constant to base the timing of the extension on, but also that we minimize how much we have to extend our arm. If we assumed the length of the hypotenuse instead, there would be overextension at lower levels, which would have to be accounted for.

    The next phase of the design will use a camera to keep extending the arm until it no longer sees any blocks. Not only will this allow for faster ascension and more general use cases, it will eliminate the need for a second controller (or at least for this part).

    3-Auto-grab Articulation

    Finally, the last idea we came up with is to auto-grab blocks. To do this we would use vision to detect the angle and distance of a block from the robot's back arm and extend to it, then rotate the gripper, snatch the block, and reel it back in.

    Next Steps

    Use a combination of drive testing and experimentation to refine the robot's movements and ultimately automate its actions.

    Turret IMU Code

    Turret IMU Code By Jose and Abhi

    Task: Code some driver enhancements for the turret

    With the return of the king (Abhi, an alumnus of our team), we were able to make some code changes, mainly dealing with the turret and its IMU since that is our current weak point. At first we experimented with field-centric controls but then realized that, for ease of driving the robot, turret-centric controls are necessary. After a few lines of code using the turret's IMU, we were able to make the turret maintain its heading: as the chassis turns, the turret counter-rotates to hold its position. This is useful because it lets the driver turn the chassis without having to turn the turret as well.

    Next Steps

    We must continue tuning the PID of the turret to allow for more stable and accurate articulations.

    Code Developments 12/28

    Code Developments 12/28 By Cooper

    Task: Gripper swivel, extendToTowerHeight, and retractFromTowerHeight. Oh My!

    Today was a long day, clocking in at 10 hours straight. In those ten hours, I was able to make tremendous progress. Overall, we got four main areas of work done.

    The first one, extendToTowerHeight, gets its own blog post; it encompasses fixing the 2nd controller, calculating the TPM of the arm, and calculating the TPD for the elbow.

    The second focus of the day was mounting and programming the swivel of the gripper. Aaron designed a swivel mount for the gripper the night after the qualifier, and it was mounted on the robot. He later took it off to finish the design, and today I put it back on and wired it. Once we tested that the servo actually worked, we added a method in the Crane class that swivels the gripper continuously. But since the servo is still a positional one, I was also able to implement a toggle that cycles between 90, 0, and -90 degrees. With a couple of tests we determined the correct speed at which to rotate, and the code ended up looking like this:

    public void swivelGripper(boolean right){
        if (right)
            gripperSwivel.setPosition(gripperSwivel.getPosition() - .02);
        else
            gripperSwivel.setPosition(gripperSwivel.getPosition() + .02);
    }
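    The 90/0/-90 toggle mentioned above was along these lines; this is only a sketch, with the three servo positions as assumed stand-ins for the calibrated endpoints, and gripperSwivel is the same Servo used above.

    // Sketch of the three-position toggle; the servo positions are assumed
    // stand-ins for the calibrated -90/0/+90 endpoints, not measured values.
    private int swivelIndex = 1; // start centered
    private final double[] swivelPositions = {0.0, 0.5, 1.0}; // -90, 0, +90 degrees

    public void toggleSwivel() {
        swivelIndex = (swivelIndex + 1) % swivelPositions.length;
        gripperSwivel.setPosition(swivelPositions[swivelIndex]);
    }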

    The third development was the retractFromTowerHeight() method. This is complementary to extendToTowerHeight, but significantly less complex. The goal of this method is to make retracting from the tower easier by automatically raising and retracting the arm, so that we don't hit the tower going down. It was built by taking a previously coded articulation, retract, and adding a call to setElbowTargetPos before it, so that the arm rises just enough for the gripper to clear the tower. After a couple of test runs, we got it to work perfectly.

    The final order of business was the jump from using ticks on the turntable to IMU mode. It was really out of my grasp, so I asked Mr. V for help. After a couple of hours trying to get the IMU set up for the turret, we finally got it to work, giving us the first step of the conversion. The second came with changing the way the turntable moves, as we made a new low-level setTurntableTargetPos() method, which is what everything else will call. Finally, we converted all of the old setTurnTablePos() methods to use degrees.
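    A sketch of that new low-level call and a degree-based wrapper (turntableMotor and the ticks-per-degree constant here are placeholders, since we still have to measure and tune the real ratio):

    // Sketch of the low-level turntable call everything else funnels through,
    // plus a degree-based wrapper; the constant is a placeholder, not measured.
    private static final double TURNTABLE_TICKS_PER_DEGREE = 10.0; // placeholder

    public void setTurntableTargetPos(int ticks) {
        turntableMotor.setTargetPosition(ticks); // the only place turntable ticks are set
    }

    public void setTurnTablePositionDegrees(double degrees) {
        setTurntableTargetPos((int) (degrees * TURNTABLE_TICKS_PER_DEGREE));
    }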

    Next Steps

    As of now, both extendToTowerHeight() and the gripper swivel are good. For retractFromTowerHeight(), it may be important to think about the edge cases where the arm is really high up. Also, the turntable is unusable until we tune its PID, so that will be our first priority.

    Extend to Tower Height and Retract from Tower

    Extend to Tower Height and Retract from Tower By Cooper

    Task: Develop the controller so that it can extend to tower height

    Since we have decided to move on to using two controllers, we have more room for optimizations and shortcuts/articulations. One such articulation is extendToTowerHeight. It takes a value for the current tower height and, when a button is pushed, extends to just over that height so a block can be placed. This happened in two different segments of development.

    The first leg of development was the controller portion. Since this was the first time we had used a second controller, we ran into an unexpected issue. We use a method called toggleAllowed(), which Tycho wrote many years ago, for our non-continuous inputs. It worked just fine until we passed it the second controller's inputs, after which it would not register any input. The problem was in the method itself: it works on an array of the controller's buttons to save their states, and there was a conflict with the first controller. So we created a new array of indexes for the second controller and made the method call take a gpID (gamepad ID), which tells it which of those index arrays to use. Once that was solved, we were able to successfully put incrementTowerHeight() on the y button and decrementTowerHeight() on the x button. The current tower height is then spit out in telemetry for the second driver to see.
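    Conceptually, the fix looked something like this simplified sketch; the real toggleAllowed() tracks every button, and the array size and indices here are made up.

    // Simplified sketch of per-gamepad edge detection; the real toggleAllowed()
    // tracks every button, and the array size and indices here are made up.
    private final boolean[] buttonSavedStates1 = new boolean[16]; // gamepad 1
    private final boolean[] buttonSavedStates2 = new boolean[16]; // gamepad 2

    // returns true only on the rising edge of a press for the given gamepad
    public boolean toggleAllowed(boolean button, int buttonIndex, int gpID) {
        boolean[] savedStates = (gpID == 1) ? buttonSavedStates1 : buttonSavedStates2;
        boolean allowed = button && !savedStates[buttonIndex];
        savedStates[buttonIndex] = button;
        return allowed;
    }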

    Then came the hard part: using that information. After a long discussion, we decided to go with an extendToTowerHeight() that assumes a constant distance, since a sensor for the distance to the build platform would have too many variables in which direction it should face, and keeping the distance constant means the math works out nicely. So this is how it would look:

    Now we can go over how we would find all of these values. To start, we can look at the constant distance, and to be perfectly honest it is a completely arbitrary value; we just placed the robot at a distance from the center of the field that looked correct. This isn't that bad, as A) we can change it, and B) it doesn't need to be calculated: the driver just needs to practice with this value, and that makes it correct. In the end we decided to go with ~.77 meters.

    Before we moved on, we decided to calculate the TPM (Ticks Per Meter) of the arm's extension and the TPD (Ticks Per Degree) of the elbow, since both are needed for the next calculations. For the TPM, we busted out a ruler and measured the extension at different positions in both inches (converted into meters) and ticks, then added them up and formed a ticks-per-meter ratio. In the end, we got a TPM of 806/.2921. We did the same for the TPD, just with a level, and got 19.4705882353. With a quick setExtendABobLengthMeters() and setElbowTargetAngle() method, it was time to set up the math. As can be seen in the diagram, we can think of the entire system as a right triangle. We know the length of the side opposite theta, since we can multiply the tower height by the block height, and we know the adjacent side's length, since it is constant. Therefore, we can use the Pythagorean theorem to calculate the length, in meters, of the hypotenuse.

    hypotenuse = Math.sqrt(.76790169 + Math.pow(((currentTowerHeight+1) * blockHeightMeter), 2)); //in meters; .76790169 is the constant adjacent distance (0.8763 m) squared

    From that, we can calculate theta using an arccosine of adjacent / hypotenuse. In code, it ended up looking like this:

    setElbowTargetAngle(Math.toDegrees(Math.acos(0.8763 / hypotenuse)));

    Then we set the extension to be the hypotenuse:

    setExtendABobLengthMeters(hypotenuse-.3683);
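    For reference, the unit helpers behind those calls look roughly like the following. This is a sketch: the measured ratios are the ones above, but the tick-level setters setExtendABobTargetPos() and setElbowTargetPos() are the existing ones in the Crane class, with their signatures simplified here.

    // Sketch of the unit helpers built on the measured ratios; the tick-level
    // setters are the existing ones in the Crane class, signatures simplified.
    private static final double EXTENSION_TICKS_PER_METER = 806 / .2921;   // measured TPM
    private static final double ELBOW_TICKS_PER_DEGREE    = 19.4705882353; // measured TPD

    public void setExtendABobLengthMeters(double meters) {
        setExtendABobTargetPos((int) (meters * EXTENSION_TICKS_PER_METER));
    }

    public void setElbowTargetAngle(double degrees) {
        setElbowTargetPos((int) (degrees * ELBOW_TICKS_PER_DEGREE));
    }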

    While its effectiveness in matches has yet to be seen, it should at the very least function well, as shown in our tests. This will help the drivers get into the general area of the tower, so they can worry more about the fine adjustments. For a more visual representation, here is the position in CAD:

    Next Steps:

    We need to work on 2 main things. Tuning is one, as while it is close, it’s not perfect. The second thing to work on is using a custom vision program to automatically detect the height of the tower. This would take all the stress off the drivers.

    Last Coding Session of the Decade

    Last Coding Session of the Decade By Cooper

    Task: Test vision code for skystone detection

    Today is the second-to-last day of 2019, and therefore of the decade, so I wanted to spend it at robotics. Today I worked solely on vision testing and an attempted implementation. It ended up being fruitless, but let me not get ahead of myself. To start the day, I tried looking at the example Vuforia code that was provided, after which I hooked up a camera to the control hub to try and see it in action. We learned that the telemetry spits out two lines of values: the local position in mm and the XYZ values of the block. For the first part of the day, while we were testing, we thought to use the XYZ values, but they seemed unreliable, so we switched over to the local position. Once we had that down and had mapped out where the skystone would be for each set of values, I tried to tailor the concept class to be used directly in our pipeline from last year, and then refactor all of it. But this didn't work, as it would always throw an error, and for the life of me I could not get it to work.

    Today wasn't a complete waste, however, as I learned a valuable lesson -- don't be lazy. I was lazy when I just tried to use the example code provided, and that's what ultimately led to the failure.

    Next Steps

    Take another stab at this, but actually learn the associated methods in the example code, and make my own class, so it will actually function.

    Control Mapping v2

    Control Mapping v2 By Jose

    Task: Map out the new control scheme

    As we progressively make our robot more autonomous when it comes to repeated tasks, it's time to map out these driver enhancements. Since we have so many degrees of freedom with TomBot, we will experiment with using two controllers, where one is the main controller for operating the robot and the second handles simpler tasks such as setting the tower height and toggling the foundation hook.

    Next Steps

    We need to experiment with the two-driver system as well as implement a manual override mode and a precision mode where all the controls are slowed down.

    Testing Two Drivers

    Testing Two Drivers By Justin and Aaron

    Task: Practice driving with two drivers

    Today we started testing out our new two-controller setup. The goal is to have one driver control just the base, and have the other driver control the arm and turret. With the early stage of the two-driver code, we were able to practice maneuvering around the field and placing blocks. Unfortunately, the code wasn't completely sorted, so the turret controller lacked many features that were still on the drive controller.

    An issue we noticed at first was that the drive controls were backwards, which was quickly fixed in code. After the robot was drivable, we spent most of our time practicing picking up blocks and testing out new code presets. Throughout the day we transferred functions between controllers to divide the workload of the robot into the most efficient structure. We found that whoever controls the base should also be responsible for placing the arm in the general area of where it needs to be; the turret driver can then make fine adjustments to grab and place blocks. This setup worked well and allowed us to quickly grab stones off the lineup shortly after auto.

    Next Steps

    Next we will practice becoming more fluid with our driving and look for more common driving sequences that can be simplified to a single button.

    Driver Optimization Developments 5/1

    Driver Optimization Developments 5/1 By Cooper

    Task: Improve driver optimizations

    Today we worked on driver optimizations since Justin was here. We changed the arm controls around to be more like the drivetrain and D-pad controls on controller 1, with the left stick controlling the elbow, its x-axis controlling the turret, and the right stick's y-axis controlling the extension of the arm. This was cited as more natural to the drivers than the previous setup. Then we tuned the PID values for the turret while also reducing the dampener coefficient of the turret's controller input. Here we ran into some issues with the dead zone rendering the entire axis of the given stick useless, but we fixed it shortly after. There was also a problem with our rotateCardinal() method for the turret, which we fixed by redoing our direction-picking algorithm. Finally, I worked on tuning auto just a tad, but then had to leave.

    Next Steps

    Analyze more driver practice to get more concise controls for the driver, and finish auto.

    Drive Practice 1/6

    Drive Practice 1/6 By Justin and Aaron

    Task: Practice driving with new code

    Today we worked on driving the robot with new presets. Over the weekend, our coders worked on new presets to speed up our cycling time. The first preset the drivers learned was the cardinal directions, which allows the base driver, and potentially both drivers, to quickly rotate the turntable 90 degrees. This made switching between intaking and stacking directions very fast. To further speed up our stacking time, our coders implemented a stack-to-tower-height preset, which lets the driver set a height for the gripper to raise to. It took a lot of practice to correctly distance the robot from the foundation so the preset would reach the tower. To avoid knocking over our own tower, we decided the arm driver should stop the 90-degree rotation before it fully turns, so that when the arm is extended it goes to the side of the tower; the driver can then rotate the turntable and still place the stone.

    We also worked on dividing control between the two drivers, which involved transferring functions between controllers. We debated who should have turntable control and decided the base driver should, but we would like to test giving the turret driver control. The extend-to-height controls were originally on the drive controller but were moved to the turret controller to allow for a quicker extension process. The gripper wobble greatly slows down our stacking, even after dampening it.

    Next Steps

    Our next steps are to practice driving for our next qualifier and to modify our gripper joint. A lot of our robot issues can be solved with enough drive practice. We need to start exploring other gripper joint options that allow it to maintain orientation without swaying.

    Auto Developments at the STEM Expo

    Auto Developments at the STEM Expo By Cooper

    Task: Improve autonomous and tune IMU

    During the STEM Expo, while also helping volunteer, we worked on auto. There was a series of cascading tasks planned and completed. The first was to calculate the TPM of the base. There was, however, a problem before we could do that: our robot has a slight drift when trying to drive straight, which could be solved by driving based off of the IMU. However, we had discovered a couple of days ago that it doesn't run. This made no sense until a critical detail was uncovered -- it sets active to false. With this knowledge, Abhi sleuthed out that the action was immediately being marked complete, since it was in an autonomous path. We then took a break from that and calculated the TPM in a different, far less complex way: we drove the robot a meter by hand and recorded the tick values. We averaged them and got 1304, which we ended up using. Just after that, Abhi figured out the problem with the driveIMU() method, and the robot drove a meter perfectly. The issue was rooted in one wrong less-than sign in the if statement that detects whether we have reached our destination yet.
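    As a rough sketch of where that bug lived (the 1304 ticks-per-meter value is the one we measured, while getAverageTicks() is a stand-in for however driveIMU() actually reads the drive encoders):

    // Sketch of the completion check where the flipped comparison hid;
    // getAverageTicks() stands in for the real encoder read.
    private static final double TICKS_PER_METER = 1304;

    public boolean driveIMUDistanceDone(double targetMeters, int startTicks) {
        double traveledMeters = (getAverageTicks() - startTicks) / TICKS_PER_METER;
        // with the comparison pointing the wrong way, this reported "done"
        // immediately, so the action completed before the robot ever moved
        return traveledMeters >= targetMeters;
    }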

    Next Steps:

    This is the first time we've actually tuned auto since the UME Qualifier, but now that Mahesh is trying to implement Vision, we plan to improve the sensor capabilities of our robot as well.

    Code Changes At STEM Expo

    Code Changes At STEM Expo By Mahesh, Cooper, and Abhi

    Task: Use Vuforia To Detect Skystones And Tune Ticks Per Meter

    This Saturday, we had the privilege of being a vendor at the Annual DISD STEM Expo. While this event served as a good way for us to showcase TomBot at our booth, it also gave us the much-needed chance to experiment with vision. With this year's game rewarding 24 points for locating skystones and placing them onto the foundation, vision is an essential element to success. To detect skystones, we could have gone down three distinct paths: OpenCV, Vuforia, or Tensorflow.

    We chose to use Vuforia instead of Tensorflow or OpenCV to detect skystones since the software gives the rotation and translation of the skystones relative to the robot's position, which can then be used to determine the position of the skystone: left, center, or right. Additionally, Vuforia has proven to work under different lighting conditions and environments in the past, whereas OpenCV requires rigorous tuning in order to stay flexible across a variety of field settings.

    The second major task we worked on during the STEM Expo was calibrating ticks per meter. The issue we encountered when driving both wheels forward a set number of ticks was that the robot drifted slightly to the right, either meaning that the wheels are misaligned or that one wheel is larger than the other. To fix this issue, rather than tuning PID Coefficients, we figured out a separate ticks per meter measurement for both wheels, so that one wheel would move less than the other to account for the difference in wheel diameters. After experimenting with different values and tuning appropriately based on results, we arrived at a ticks per meter number for each wheel.

    We could have used a more mathematical approach for calculating ticks per meter, which would be equal to (ticksPerRevolution * driveGearReduction) / (wheelDiameter * PI), with "wheelDiameter" being measured in meters. However, this solution would require a very precise measurement of each wheel's diameter, which our caliper is not wide enough to measure. Additionally, this solution would not account for wheel slippage, and for these reasons, we chose the latter approach.
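    For reference, that formula-based approach would have looked something like the helper below; an accurate wheelDiameterMeters measurement is exactly what we could not obtain.

    // The formula-based alternative we considered; ticks per meter is ticks per
    // wheel revolution (after gearing) divided by the wheel circumference.
    public static double ticksPerMeter(double ticksPerRevolution,
                                       double driveGearReduction,
                                       double wheelDiameterMeters) {
        return (ticksPerRevolution * driveGearReduction) / (wheelDiameterMeters * Math.PI);
    }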

    Next Steps

    Unfortunately, the Vuforia vision pipeline did not work at the STEM Expo, which may be a result of bad lighting or some other code error. Moreover, constants such as the camera's placement relative to the center of the robot have not been measured as of now, which is a task for the future. In order to make sure Vuforia is working properly, we should send the camera's feed into FTC Dashboard so we can debug more effectively and pinpoint the issue at hand.

    For last year's game, three different vision pipelines were used, Tensorflow, Vuforia, and OpenCV, and all three were compared for their effectiveness at finding the positions of the gold cube minerals. This strategy can be employed this year as well, since building a robust OpenCV pipeline would be impressive for the control award, and comparing all three options would give us a better idea of which one works most effectively for this year specifically.

    Coding the Snapdragon Gripper

    Coding the Snapdragon Gripper By Cooper

    Task: Code the new Snapdragon gripper

    Last night we installed the new Snapdragon gripper, which means we needed to rework the gripper code. We started out by getting the positions the servo would go to using a servo tester. Then we debated whether to make it an articulation, which we originally did. This articulation would set the servo to pull up the gripper front and then return to its relaxed position. After doing some testing, that method was not working.

    So, we moved on to reformatting the gripper update sequence we had for the last gripper. We still saw no success after that, so we decided to call it a night, as it was getting late. The next morning, with a clear mind, we realized that the wire connection was flipped on the perf board; after flipping it back, the gripper worked fine.

    Next Steps:

    We still need to test it with the drivers and see if there are any quirks.

    TomBot Calibration Sequence

    TomBot Calibration Sequence By Cooper

    Task: Create a calibration sequence to find a starting position for autonomous

    Today we worked on the calibration sequence. This has been a problem for a while now: since the robot has so many degrees of freedom and not a single flat edge to square off of (other than the guillotine, but that isn't necessarily orthogonal to anything), it is rather difficult to come up with a way to ensure precision on startup, and this year it's integral to the auton.

    To start, the arm needs a good way to calibrate. In theory, we have a couple of constants. We have a hard stop on the elbow, thoughtfully provided by the logarithmic spiral, and we can get the ticks from that position to a point we define as zero. In terms of extension, we have a hard stop on the full retract, which is really all that is needed. So, we start by retracting the arm and increasing the angle of the elbow until it stalls, and we set that as the zero for the extension. Then, we go down toward -elbowMax while extending the arm so that it doesn't hit the robot, and set that elbow position as the elbow's zero.

    Prior to this revision, we had tried different relative positions of the arm and the base, because we couldn't figure out the best compromise between precision and ease. This time around, we decided to have the robot and the arm face the north wall. That way, north is common between both alliances and sides, and we can just tell the robot which alliance it's on with a button push. With that in mind, the next steps of the calibration are to raise the arm and turn until we are square with the wall. Then, the robot uses driveIMUDistance to back up and tap the wall. This sequence will probably stay relatively similar for the rest of our time with this robot, as it seems to be what we've been trying to achieve for a while now. There are, however, still things that could be added.
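    A high-level sketch of the sequence, written as the kind of state machine update we use for articulations; every helper name and check here is a stand-in rather than an existing method, so treat this as an outline.

    // Outline of the calibration sequence; the stage checks and helpers are
    // stand-ins for whatever the final articulation state machine uses.
    enum CalibStage { ZERO_EXTENSION, ZERO_ELBOW, SQUARE_TO_WALL, TAP_WALL, DONE }

    private CalibStage calibStage = CalibStage.ZERO_EXTENSION;

    public boolean calibrateUpdate() { // called once per loop until it returns true
        switch (calibStage) {
            case ZERO_EXTENSION:
                // retract fully and raise the elbow into the logarithmic-spiral
                // hard stop, then zero the extension encoder at the stall
                if (extensionStalled()) { zeroExtension(); calibStage = CalibStage.ZERO_ELBOW; }
                break;
            case ZERO_ELBOW:
                // sweep down toward -elbowMax while extending so the arm clears
                // the robot, then zero the elbow encoder
                if (elbowAtLowerStop()) { zeroElbow(); calibStage = CalibStage.SQUARE_TO_WALL; }
                break;
            case SQUARE_TO_WALL:
                // raise the arm and square base and turret to the north wall
                if (squareToNorthWall()) calibStage = CalibStage.TAP_WALL;
                break;
            case TAP_WALL:
                // back up until we tap the wall for a known starting pose
                if (driveIMUDistance(-0.1, 0.3)) calibStage = CalibStage.DONE; // placeholder values
                break;
            case DONE:
                return true;
        }
        return false;
    }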

    Next Steps

    In the future, we could add a magnetic limit switch between the turret and the base, so we can automate turning the turntable to the correct position. Also, we could add distance sensors to the (relative) back, left and right, as to ensure that were in the correct position based on the distance to the wall.

    OpenCV Grip Pipeline

    OpenCV Grip Pipeline By Mahesh

    Task: Develop An OpenCV GRIP Pipeline To Detect Skystones

    With this year's game awarding 20 points to teams that can successfully locate skystones during autonomous, a fast and reliable OpenCV pipeline is necessary to succeed in the robot game. Our other two choices, Vuforia and Tensorflow, were ruled out due to high lighting requirements and slow performance, respectively.

    With many different morphological operations existing in OpenCV and no clear way to visualize them using a control hub and driver station, we used FTC Dashboard to view the camera output and change variables in real time. This allowed us to more rapidly debug issues and see the operations applied to an image, as we could in a driver controller and expansion hub setup.

    To rapidly develop different pipelines, we used GRIP, a program designed specifically for OpenCV testing. After experimenting with different threshold values and operations, we found that a 4 step pipeline like the following would work best.

    The first step is a Gaussian blur, used to remove noise from the raw camera output and smooth out the dark region of the black skystone. Next, a mask is applied to essentially crop the blurred image, allowing the pipeline to focus on only the three stones. An HSV threshold is then applied to retain colors with low values, essentially black. Afterwards, a blob of white pixels appears near the black skystone, whose center can be determined using a blob detector, or even by finding contours, filtering them appropriately, placing a bounding rectangle around them, and taking the center of that rectangle to be close to the centroid of the black skystone blob. Here is a visual representation of each stage of the OpenCV pipeline:
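    In code, the first three stages might look roughly like the sketch below; the blur kernel, crop rectangle, and HSV bounds are placeholder values from GRIP experimentation rather than our final numbers.

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    // Sketch of the blur, mask, and HSV threshold stages; all constants are
    // placeholders from GRIP experimentation, not tuned competition values.
    public class SkystoneThresholdSketch {
        public Mat threshold(Mat input) {
            // 1. Gaussian blur to remove noise and smooth the dark skystone region
            Mat blurred = new Mat();
            Imgproc.GaussianBlur(input, blurred, new Size(9, 9), 0);

            // 2. Mask (crop) down to the band of the image containing the three stones
            Rect roi = new Rect(0, input.rows() / 3, input.cols(), input.rows() / 3);
            Mat cropped = new Mat(blurred, roi);

            // 3. HSV threshold keeping only low-value (near-black) pixels
            Mat hsv = new Mat();
            Imgproc.cvtColor(cropped, hsv, Imgproc.COLOR_RGB2HSV);
            Mat binary = new Mat();
            Core.inRange(hsv, new Scalar(0, 0, 0), new Scalar(180, 255, 60), binary);
            return binary; // a white blob should remain roughly where the skystone sits
        }
    }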

    Next Steps

    The next (and only remaining) step is to integrate the GRIP pipeline with our existing FTC webcam capture system, which uses Vuforia to take frames, and decide which x-coordinates correspond to which skystone positions. Specifically, we have to take the width of the final image, divide it into three equal sections, and use the boundaries of those three sections to decide the location of each skystone.

    Control And Vision DPRG Presentation

    Control And Vision DPRG Presentation By Mahesh and Cooper

    Task: Present Control And Vision To DPRG And Gather Feedback

    This Saturday, we had the privilege of presenting our team's control and vision algorithms for this year to the Dallas Personal Robotics Group. During the event, we described the layout of our robot's control scheme as well as our OpenCV vision pipeline in order to gather suggestions for improvement. This opportunity allowed us to improve our pipeline based on feedback from more than a dozen individuals experienced in designing, building, and programming robots. We were also able to demo our robot on a playing field, showcasing the mechanics of its design as well as the semi-autonomous articulations that help improve driver performance.

    Here is the slideshow we presented to DPRG:

    For this year's game, we chose a four-step vision pipeline to detect skystones, consisting of a blur, followed by a mask, then an HSV threshold, and finally a blob detector to locate the centroid of the black skystone. Although this pipeline worked fairly well for us, differences in lighting and the environment we compete in may result in varying degrees of inaccuracy. To combat this, the DPRG suggested we use some kind of flash or LED to keep the lighting of the stones consistent across different settings. However, this may result in specular reflections showing up on the black skystone, which would interfere with our vision pipeline. Another suggestion thrown out was to detect the yellow contours in the image and crop according to the minimum and maximum x and y values of the contour, allowing us to focus on only the three stones on the field and discard colors in the background. This suggestion is particularly useful, since any tilt of the webcam, slight deviation in the calibration sequence, or skystones lying outside the boundaries of the mask would then not affect the detection.

    Next Steps

    The most significant input that DPRG gave us during the presentation was cropping based on the yellow contour present in the input image, allowing us to detect the black skystone even if it lies outside the mask. To implement this, we would have to test an HSV threshold that detects yellow contours in the image using GRIP, filter those yellow contours appropriately, and crop the input image based on the coordinates of a bounding box placed around the contour. Although this addition is not absolutely necessary, it is still a useful add-on to our pipeline and will make performance more reliable.

    The Night Before Regionals - Code

    The Night Before Regionals - Code By Cooper and Trey

    Task: Fix our autonomous path the night before regionals.

    Twas the night before regionals, and all through the house, every creature was stirring, especially the raccoons, and boy are they loud.

    Anyways, it's just me and Trey pulling an all-nighter tonight, so that he can work on build and I can work on auto. Right now the auto is in pretty decent shape, as we have the grabbing of one stone and then the pulling of the foundation, but we need to marry the two. Our plan is to use the distance sensors on the front and sides of the robot to position ourselves for the pull.

    Another thing we are working on is a problem with our bot that is compounded by a problem with our field. Our robot has one wheel that is just slightly bigger than the other, which leads to drift if the IMU is not used. But since our field has a slope to it, the robot also drifts sideways, which is not fixable with the IMU alone. So we plan to use a correction method where the distance to where we want to go and the lateral drift form a triangle, from which we should be able to get the angle we need to drive at and how far we should go to end up perfectly spaced from the side of the build platform.
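    A minimal sketch of that correction math, assuming we can estimate the forward distance to the target and the sideways drift (the distance sources and sign conventions here are assumptions, not the final implementation):

    // Sketch of the triangle-based correction; returns {distance to drive,
    // heading correction in degrees}. The inputs are assumed to come from the
    // distance sensors and position tracking, which are not shown here.
    public static double[] driftCorrection(double forwardToTargetMeters,
                                           double lateralDriftMeters) {
        double distance = Math.hypot(forwardToTargetMeters, lateralDriftMeters);
        double headingCorrection = Math.toDegrees(Math.atan2(lateralDriftMeters, forwardToTargetMeters));
        return new double[] { distance, headingCorrection };
    }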

    Our final task tonight is to simply speed up the auto. Right now we have points at which the robot has to stop so that we don’t overshoot things, but that is fixable without stopping the robot.

    Next Steps

    Mirror the auto onto the blue side and practice transitioning from auto to teleop.

    Driving at Regionals

    Driving at Regionals By Justin, Aaron, and Jose

    Task: Drive at Regionals

    Driving at regionals was unfortunately a learning opportunity for our drivers. In our first few matches, for one reason or another we couldn't get our robot moving; we faced code crashes, pulled cables, and incorrect calibration during the transition from autonomous to tele-op. These issues, combined with our weak autonomous (sorry, coders), led to a very unimpressive robot performance early on.

    When we finally got our robot working, our lack of practice and coordination really showed. The lack of coordination between drivers and coders resulted in the drivers relying on manual controls rather than preset articulations. Our articulations were also very harsh and untested, some resulting in constant gear grinding, which pushed the drivers further toward manual controls. This slowed down our robot and made us very inefficient at cycling. The presets that gave us the most issues were the transitions from stacking or intaking to moving. The intake-to-north preset, which pointed the arm north after picking up a block, practically tossed the stones we picked up out of our gripper. The stacking-to-intake preset, which raised the arm off of a tower and pointed it south, would keep raising the arm up, stripping the gears. This made us rely on our very slow manual arm and turntable controls. A failure in the capstone mechanism also caused the capstone to fall off the robot during matches. With all of these issues, we stacked at most three stones during a match, not nearly enough to make us a considerable team for alliance selection.

    Next Steps

    We need to get consistent driver practice while coordinating with the coders about the effectiveness of their presets. Many of our failures at regionals could be solved by driver practice. With our drivers comfortable with the robot, both manually and with presets, we could stack much faster, and we could then speed the robot up in code to make it as efficient as possible.

    Wylie East Regional Qualifier Code Post-Mortem

    Wylie East Regional Qualifier Code Post-Mortem By Mahesh and Cooper

    Task: Reflect On Code Changes And Choices Made During The Wylie East Regional Qualifier

    Despite putting in lots of effort to pull off a working autonomous before regionals, small and subtle issues that surfaced only during testing at the competition, as well as various other small bugs in our autonomous routine, prevented us from performing well on the field. Trying to write a full autonomous in the last week before competition was a huge mistake; if more time had been dedicated to testing, tuning, and debugging small issues with our code base, we could have backed up the theoretical aspect of our code with actual gameplay on the field. The issues experienced during the Wylie East Qualifier can be boiled down to the following:

    Improper Shutdown / Initialization of the Webcam and Vuforia

    We frequently encountered Vuforia instantiation exceptions when attempting to initialize the camera after an abrupt stop. We suspect this issue originated from the improper shutdown of the webcam, which would likely result from an abrupt stop or abort. During later runs of our autonomous and teleop with multiple, more complex vision pipelines, we saw that attempting to reinstantiate Vuforia after it had already been instantiated resulted in an exception being thrown. This issue caused us to miss certain matches, since our program either stalled or its execution was delayed by restarting the robot.

    Disconnection Of The Webcam (Inability To Access Camera From Rev Hub)

    Occasionally, when attempting to initialize our robot, we saw a warning pop up on the driver station which read "Unable to recognize webcam with serial ID ...", indicating that the webcam had either been disconnected or was for some other reason not recognized by the REV hub. On physical inspection of the robot, the webcam appeared to be connected via USB. The solution we came up with was to quickly disconnect and reconnect the webcam, after which the warning disappeared.

    This issue prevailed in other forms on the competition field, however. Sometimes, during gameplay, when the webcam was accessed, the blue lights on its rim would not light up (meaning the webcam was not active), and our program would stall on skystone detection. This happened despite getting rid of the driver station warning, and is most likely another result of improper initialization or shutdown of Vuforia after an abrupt stop or abort.

    State Machine Issues

    At the end of our autonomous, if the state machine had completed, our robot would proceed to spin slowly in a circle indefinitely. This unexpected behavior was stopped using a stopAll() function, which set all motor power values to zero, effectively causing any functions that messed with the robot's movement to be ignored at the end of our state machine's execution.

    Lack of Testing / Tuning

    By far the biggest reason we did not perform as predicted at the qualifier was the lack of testing and tuning of our autonomous routines. This would have included running our state machines multiple times to fine-tune values, minimize error, and debug any arising issues like those we experienced during the competition. The lack of tuning made the time spent on our skystone detection pipeline completely useless, as our crane did not extend to the right length to pick up the skystone, a direct result of inadequate testing. All of the above issues could have been prevented if they had surfaced during extensive testing, which we did not do but will make sure to do in the future.

    Next Steps:

    In the future, we ultimately plan to put a freeze on our codebase at least one week before competition, so that the remaining time can be spent on building, driver practice, etc. Additionally, we have agreed to extensively test any new additions to our codebase and assess their effect on other subsystems before deploying them onto our robot.

    Code Changes The Week Before Regionals

    Code Changes The Week Before Regionals By Mahesh and Cooper

    Task: Assess Code Changes During The Week Before Regionals

    Numerous code changes were made during the week before regionals, the most significant of which were attempted two days before regionals, a costly mistake come competition. Firstly, three different paths were laid out for the respective positions of the skystone (south, middle, and north), each of which involved rotating to face the block, driving to the block, extending far enough to capture the block, and driving toward the foundation afterwards.

    Next, we proceeded to add small features to our codebase, the first of which was integral windup prevention. When tuning gains for our turret PID, we experienced a build-up of steady-state error, which we counteracted by increasing our integral gain, but this resulted in adverse side effects. We used the following code to declare a range which we refer to as the integralCutIn range; only when the error of the system drops below that threshold does the integral term kick in.

    This code was put in to account for a phenomenon known as integral windup: when the theoretical correction called for by the controller surpasses the maximum correction the system can actually deliver. An accumulation of error results in more correction than can possibly be given by the real system, so to prevent this, the integral term is active only within a small range of error, letting the robot deliver a reasonable amount of correction and avoid overshoot.
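    Since the snippet itself isn't reproduced here, a simplified sketch of the cut-in idea looks like this (the cut-in range, gains, and method shape are placeholders, not our tuned values):

    // Simplified PI update with an integral cut-in; integralCutIn and the gains
    // are placeholders, and the real version lives inside our turret PID loop.
    private double integralCutIn = 10.0; // error threshold below which the I term is active
    private double integralSum = 0.0;

    public double piWithCutIn(double error, double kP, double kI, double dt) {
        if (Math.abs(error) < integralCutIn) {
            integralSum += error * dt;   // only accumulate once we're already close
        } else {
            integralSum = 0;             // outside the window, don't wind up
        }
        return kP * error + kI * integralSum;
    }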

    We continued to tune and tweak our autonomous routine on Tuesday, making minor changes and deleting unnecessary code. We also encountered errors with the turret, which were fixed Wednesday, although the biggest changes to our algorithms occurred in our skystone detection vision pipeline on Thursday.

    On Thursday, code was added to select the largest-area contour detected by our vision pipeline and avoid perceiving disturbances or noise in the image as a potential skystone. We achieved this by iterating through the found contours, calculating area using Imgproc.contourArea(MatOfPoint contour), keeping track of the maximum-area contour, and using moments to calculate the x and y coordinates of the detected blob. The screen was then divided into three areas, each corresponding to one of the three skystone positions, and the final position of the skystone was determined from the x coordinate of the blob.
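    The actual snippet isn't reproduced here, but a simplified sketch of the approach looks like this (the SkystonePosition enum and the equal-thirds cutoffs are illustrative stand-ins for our real pipeline's types and tuned boundaries):

    import org.opencv.core.MatOfPoint;
    import org.opencv.imgproc.Imgproc;
    import org.opencv.imgproc.Moments;
    import java.util.List;

    // Sketch of the largest-contour selection and three-way position decision.
    public class SkystoneLocatorSketch {
        public enum SkystonePosition { SOUTH, MIDDLE, NORTH }

        public SkystonePosition locate(List<MatOfPoint> contours, int imageWidth) {
            MatOfPoint largest = null;
            double largestArea = 0;
            for (MatOfPoint contour : contours) {
                double area = Imgproc.contourArea(contour);
                if (area > largestArea) {       // ignore small noise blobs
                    largestArea = area;
                    largest = contour;
                }
            }
            if (largest == null) return SkystonePosition.MIDDLE; // fallback guess

            // centroid from image moments: cx = m10 / m00
            Moments m = Imgproc.moments(largest);
            double cx = m.get_m10() / m.get_m00();

            // split the frame into three equal sections and map the centroid to one
            if (cx < imageWidth / 3.0)            return SkystonePosition.SOUTH;
            else if (cx < 2.0 * imageWidth / 3.0) return SkystonePosition.MIDDLE;
            else                                  return SkystonePosition.NORTH;
        }
    }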

    On the final stretch, we added localization using Acme Robotics' Road Runner motion profiling library, which will be expanded on in the future. We also tuned and tweaked PID gains and ticks per meter. Finally, we added code to read the distance sensors, which will be used in the future to detect the distance to the foundation and follow walls. In addition, we integrated the vision pipeline with the existing addMineralState(MineralState mineralState) method to be used at the start of autonomous.

    Next Steps:

    In the future, we plan to use the features we added this weekend to expand on our robot's autonomous routine and semi-autonomous articulations. These include incorporating odometry and localization to reduce the error experienced during autonomous, and even driving to specific points on the field during teleop to improve cycle times. In addition, we can now use the distance sensors to follow walls, further improving our accuracy, as well as determine the distance to the foundation, allowing for autonomous placement of skystones using the extendToTowerHeight articulation.