Return to Machine Vision
Tags: Software and Innovate
Personhours: 4
Task: Prepare to reintegrate machine vision
A year and a half ago, while the new Android-based platform was still in pre-launch, we were the first team to share a machine vision testbed on the FTC Forums. That color-blob tracker was implemented with OpenCV on Android, but with a different low-level control system and robotics framework. We then integrated OpenCV into our implementation of ftc_app, which was in turn based on the great work of rgatkinson's team supporting Swerve Robotics. Our main game repo for FIRST RES-Q was also open sourced, and we gained a lot of experience using it. But several issues prevented us from fully using the capability. Problems throughout the control system created extremely variable loop times, which really challenged our custom PID implementation. On top of that, we found that at many tournaments the beacons were not working, or the lighting was so bright that our camera was flooded by the beacons' white shells. That made it an unreliable solution.
So this year we switched to the Modern Robotics color sensor as a slightly more reliable way of detecting the beacon color up close. It also let us add color sensors to both sides of the robot, so we no longer have to turn around when on the blue alliance. And we found that our calibrated odometry gave us good-enough navigation to get into position without color tracking.
But now we need to try to re-integrate our previous machine vision code and see if we can improve on the situation. We also need to at least try out Vuforia's object tracking capabilities, even though we've set that as a lower priority: we know that specular reflections are likely to be a problem under the varying lighting conditions at different competition venues. We've already noticed this problem at a couple of venues because of the marker placement behind the planar polycarbonate of the border walls. So we don't plan to rely on Vuforia as a primary means of navigation and positioning, but we should try it out and see how robust it might be.
We still want to use machine vision to track beacons and particles. We hope to use particle tracking to create an autonomous behavior that triggers during teleop, so that a particle near the front of our collector can be approached and pulled in without operator intervention. This should help, since picking up particles on the far side of the robot is very difficult for the drivers because of blocked sight lines. We also want to use color tracking to make sure we don't pick up the opposing alliance's particles. A rough sketch of the detection step we have in mind follows below.
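The sketch below shows one way to find the largest red blob in a frame with plain OpenCV for Java: threshold in HSV, merge the two red hue bands, and take the bounding box of the biggest contour. This is only an illustration, not code from our repo; the HSV bounds are placeholder values we would have to tune per venue, and findRedParticle is a hypothetical helper name.

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

public class ParticleDetector {
    // Returns the bounding box of the largest red blob, or null if none is found.
    public static Rect findRedParticle(Mat rgbFrame) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(rgbFrame, hsv, Imgproc.COLOR_RGB2HSV);

        // Red wraps around the hue axis, so threshold two bands and OR them together.
        Mat lower = new Mat(), upper = new Mat(), mask = new Mat();
        Core.inRange(hsv, new Scalar(0, 100, 100), new Scalar(10, 255, 255), lower);
        Core.inRange(hsv, new Scalar(160, 100, 100), new Scalar(179, 255, 255), upper);
        Core.bitwise_or(lower, upper, mask);

        // Take the largest contour in the mask as the candidate particle.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        double bestArea = 0;
        Rect best = null;
        for (MatOfPoint contour : contours) {
            double area = Imgproc.contourArea(contour);
            if (area > bestArea) {
                bestArea = area;
                best = Imgproc.boundingRect(contour);
            }
        }
        return best;
    }
}
```

The bounding box's horizontal offset from the image center would then drive the turn toward the particle, with a mirrored set of blue thresholds for when we're on the blue alliance.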
Research / References
I checked out Vuforia, and it has no ability to track based on color, so we need to use OpenCV again. The complication is that when Vuforia is present, it also locks up access to the camera. Fortunately there is now a way to get a frame from Vuforia and reformat the image data for OpenCV's use; a sketch of that approach follows the reference list below.
- How to integrate OpenCV into an Android Studio Project
- How to get access to camera data from Vuforia
- Forum post on grabbing frames from Vuforia. This shows a concrete example by Corban987 of how to use the preferred vuforia.getFrameQueue method in the current API.
- Complete Video Series on Vuforia. Somehow we missed this until late in our research, but Team 3491 FIXIT are clearly the authorities on the subject of Vuforia integration in FTC. Check out their videos 4 through 7.
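Based on those references, here is a rough sketch of the frame-grabbing approach, assuming an already-initialized VuforiaLocalizer named vuforia and the OpenCV Android library on the classpath. The wrapper class and helper names are ours, not from the FTC SDK; the Vuforia calls are the ones discussed in the posts above.

```java
import java.nio.ByteBuffer;
import com.vuforia.Image;
import com.vuforia.PIXEL_FORMAT;
import com.vuforia.Vuforia;
import org.firstinspires.ftc.robotcore.external.navigation.VuforiaLocalizer;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public class VuforiaFrameGrabber {
    // Call once after creating the localizer.
    public static void enableFrameGrabbing(VuforiaLocalizer vuforia) {
        Vuforia.setFrameFormat(PIXEL_FORMAT.RGB565, true); // ask Vuforia for RGB565 images
        vuforia.setFrameQueueCapacity(1);                  // keep only the newest frame
    }

    // Blocks until a frame arrives, then returns it as a 3-channel RGB Mat (or null).
    public static Mat grabFrameAsMat(VuforiaLocalizer vuforia) throws InterruptedException {
        VuforiaLocalizer.CloseableFrame frame = vuforia.getFrameQueue().take();
        try {
            for (int i = 0; i < frame.getNumImages(); i++) {
                Image img = frame.getImage(i);
                if (img.getFormat() != PIXEL_FORMAT.RGB565) continue;

                // Copy the raw RGB565 pixels into a 2-channel Mat...
                ByteBuffer buf = img.getPixels();
                byte[] pixels = new byte[buf.remaining()];
                buf.get(pixels);
                Mat rgb565 = new Mat(img.getHeight(), img.getWidth(), CvType.CV_8UC2);
                rgb565.put(0, 0, pixels);

                // ...then convert to the RGB layout OpenCV routines expect.
                Mat rgb = new Mat();
                Imgproc.cvtColor(rgb565, rgb, Imgproc.COLOR_BGR5652RGB);
                return rgb;
            }
            return null;
        } finally {
            frame.close(); // CloseableFrames must be released back to Vuforia
        }
    }
}
```

A Mat returned this way could feed straight into the color tracking sketched earlier.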
I plan another post to document the actual steps we went through, so stay tuned. If Vuforia proves troublesome, we might revert to getting our image from a camera preview just like last year, though that would mean messing around with the Android manifest and the layouts in the main FtcRobotController folder.