Table of Contents
- Overview
- The Drift Problem
- Limelight AprilTag Localization
- DECODE Field Layout and Caveats
- Assessing Fix Quality
- Applying Corrections (Hard Reset, Exponential Filter, Kalman)
- FTC SDK AprilTag Detection (Webcam Alternative)
- VisionPortal Threading Model
- CPU Management Strategies
- Decimation, Resolution, Frame Rate
- Relocalization Opportunities
- Match Start, Autonomous, TeleOp
- Physical Landmarks (Walls, Tape Lines, Distance Sensors)
- Implementation Architecture
- RobotLocalizer Concept
- Testing and Validation
- Ground Truth Validation
- Static and Dynamic Accuracy Testing
- Edge Cases and Camera Snapshots
- Summary: Match Flow
- References
Overview
This document outlines Iron Reign’s strategy for maintaining accurate robot position knowledge throughout a match. We combine multiple localization sources to achieve reliable positioning:
| Source | Strengths | Weaknesses |
|---|---|---|
| Pinpoint Odometry | Continuous, low-latency, good short-term accuracy | Drifts over time, wheel slip, collisions |
| Limelight AprilTags | Absolute position, no drift | Requires line-of-sight, latency, intermittent |
| IMU Heading | Stable heading reference | Gyro drift over time |
| Physical Landmarks | Absolute reference points | Requires contact/proximity |
The goal is sensor fusion: use Pinpoint as our primary continuous localizer, and periodically correct its drift using AprilTag fixes and other absolute references.
The Drift Problem
Odometry systems like Pinpoint accumulate error over time:
Match Start: Position error = 0
After 30 sec: Position error = 1-2 inches (normal operation)
After 60 sec: Position error = 2-4 inches
After collision: Position error = ??? (unpredictable jump)
Sources of drift:
- Wheel slip during acceleration/deceleration
- Omni roller compliance under load
- Collisions with field elements or other robots
- IMU gyro drift affecting heading calculations
- Encoder resolution limits at high speeds
Even small heading errors compound over distance:
- 1° heading error over 6 feet = 1.3 inches lateral error
- 2° heading error over 6 feet = 2.5 inches lateral error
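Those figures follow directly from lateral error = distance * tan(heading error); a quick check in Java:
// Back-of-the-envelope check of the numbers above
double distanceInches = 72.0; // 6 feet of travel
double oneDegErrorIn = distanceInches * Math.tan(Math.toRadians(1.0)); // ~1.26 in
double twoDegErrorIn = distanceInches * Math.tan(Math.toRadians(2.0)); // ~2.51 in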
Limelight AprilTag Localization
What the Limelight Provides
The Limelight 3A with AprilTag detection provides several pose outputs:
| Method | Description | Use Case |
|---|---|---|
| getBotpose_MT2() | Robot pose in field coordinates (MegaTag2) | Primary relocalization source |
| getBotpose() | Robot pose (single-tag) | Backup if MT2 unavailable |
| getFiducialResults() | Per-tag 3D pose data | Quality assessment, multi-tag validation |
DECODE Field AprilTag Layout
From the official Limelight field map:
| Tag ID | Purpose | Position (meters) | Position (inches) | Use for Localization? |
|---|---|---|---|---|
| 20 | Blue Goal | (-1.48, -1.41, 0.75) | (-58.4, -55.6, 29.5) | Yes |
| 24 | Red Goal | (-1.48, +1.41, 0.75) | (-58.4, +55.6, 29.5) | Yes |
| 21-23 | Obelisk | (varies) | (varies) | No |
DECODE Localization Caveats
Important: DECODE has significantly fewer reliable localization tags than prior seasons.
Only Tags 20 and 24 should be used for relocalization. The three additional tags (21-23) on the Obelisk near field center should NOT be used because:
- The Obelisk serves a different game purpose (not a fixed field element)
- Its position is not guaranteed to be accurate
- Using it for MT2/botpose calculations could introduce errors
Implications of sparse tag coverage:
- Single-tag fixes more common - With only 2 tags on one wall, you'll often see just one tag at a time. Multi-tag (MT2) fixes will only occur when:
  - Robot is positioned to see both goals simultaneously
  - Both tags are within detection range
- Heading accuracy may suffer - Multi-tag detection improves heading estimates. With single-tag fixes, heading is derived from the tag's observed orientation, which is noisier.
- Coverage gaps - When facing away from the goal wall, no tags are visible. Plan autonomous routes to periodically face the goals for corrections.
- Quality filtering is critical - With fewer tags, each fix matters more. Be conservative with quality thresholds to avoid bad corrections.
Recommended approach for DECODE:
// Only use goal tags for localization
private static final Set<Integer> VALID_LOCALIZATION_TAGS = Set.of(20, 24);
public boolean isValidLocalizationTag(int tagId) {
return VALID_LOCALIZATION_TAGS.contains(tagId);
}
// In quality check:
public boolean isGoodFix(LLResult result, Pose2d currentPose) {
// ... existing checks ...
// DECODE-specific: Verify we're using goal tags only
List<LLResultTypes.FiducialResult> tags = result.getFiducialResults();
for (LLResultTypes.FiducialResult tag : tags) {
if (!isValidLocalizationTag(tag.getFiducialId())) {
// Obelisk tag detected - don't trust this fix
return false;
}
}
return true;
}
Community observation: As of early 2026, there hasn’t been significant community discussion about fix quality degradation from the sparse tag layout. Teams should benchmark their specific setups and share findings.
Assessing Fix Quality
Not all AprilTag detections are equally trustworthy. Before applying a correction:
public class VisionFixQuality {
// Thresholds for accepting a fix
public static final double MIN_TARGET_AREA = 0.5; // % of frame
public static final double MAX_TARGET_AREA = 25.0; // Too close = distortion
public static final int MIN_TAGS_FOR_MT2 = 2; // Multi-tag preferred
public static final double MAX_POSE_JUMP = 0.5; // meters - reject outliers
public static final double MAX_LATENCY_MS = 100; // Stale data threshold
public static boolean isGoodFix(LLResult result, Pose2d currentPose) {
if (result == null || !result.isValid()) return false;
// Check target area (too small = far/noisy, too large = too close)
double ta = result.getTa();
if (ta < MIN_TARGET_AREA || ta > MAX_TARGET_AREA) return false;
// Check latency
double latency = result.getTargetingLatency() + result.getCaptureLatency();
if (latency > MAX_LATENCY_MS) return false;
// Check for outliers (pose jump too large)
Pose3D botPose = result.getBotpose_MT2();
if (botPose == null) return false;
double dx = botPose.getPosition().x - currentPose.position.x;
double dy = botPose.getPosition().y - currentPose.position.y;
double jump = Math.hypot(dx, dy);
if (jump > MAX_POSE_JUMP) {
// Large jump - could be valid (after collision) or outlier
// Log for review, maybe require multiple consistent fixes
return false;
}
// Check number of tags (MT2 quality)
List<LLResultTypes.FiducialResult> tags = result.getFiducialResults();
// More tags = better fix; require MIN_TAGS_FOR_MT2 here if you want to reject
// single-tag fixes, but a single tag can still be useful
return true;
}
}
Applying Corrections
There are several strategies for applying AprilTag corrections to Pinpoint:
Strategy 1: Hard Reset (Simple)
// Directly overwrite Pinpoint pose
if (visionFixQuality.isGoodFix(result, currentPose)) {
Pose3D visionPose = result.getBotpose_MT2();
Pose2d newPose = new Pose2d(
visionPose.getPosition().x,
visionPose.getPosition().y,
currentPose.heading // Keep Pinpoint heading or use vision heading
);
driveTrain.setPose(newPose);
}
Pros: Simple, immediate correction.
Cons: Can cause discontinuities in trajectory following.
Strategy 2: Exponential Filter (Smooth)
// Blend vision and odometry poses
public static final double VISION_WEIGHT = 0.3; // 30% vision, 70% odometry
if (visionFixQuality.isGoodFix(result, currentPose)) {
Pose3D visionPose = result.getBotpose_MT2();
double newX = currentPose.position.x * (1 - VISION_WEIGHT)
+ visionPose.getPosition().x * VISION_WEIGHT;
double newY = currentPose.position.y * (1 - VISION_WEIGHT)
+ visionPose.getPosition().y * VISION_WEIGHT;
driveTrain.setPose(new Pose2d(newX, newY, currentPose.heading));
}
Pros: Smooth transitions, tolerates occasional bad fixes.
Cons: Slower convergence, may never fully correct large errors.
Strategy 3: Kalman Filter (Advanced)
A proper Kalman filter weighs corrections by their estimated uncertainty:
- Pinpoint uncertainty grows over time/distance
- Vision uncertainty depends on range, angle, number of tags
- Optimal blend changes dynamically
This is more complex but provides the best results for competitive play.
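A minimal per-axis sketch of the idea (an illustration, not our implementation): the variance constants below are placeholders to tune, and a real filter would track a full covariance matrix and handle heading wrap separately.
// Minimal per-axis Kalman-style blend (sketch; noise constants are assumptions)
public class AxisKalman {
    private double estimate;          // current position estimate along one axis (meters)
    private double variance = 0.01;   // current uncertainty (m^2)

    private static final double ODO_DRIFT_VAR_PER_SEC = 1e-4; // odometry uncertainty growth
    private static final double VISION_VAR = 4e-4;            // vision measurement noise

    // Predict step: apply the odometry delta; uncertainty grows with time
    public void predict(double odoDelta, double dtSeconds) {
        estimate += odoDelta;
        variance += ODO_DRIFT_VAR_PER_SEC * dtSeconds;
    }

    // Correct step: blend in a vision measurement, weighted by relative uncertainty
    public void correct(double visionMeasurement) {
        double gain = variance / (variance + VISION_VAR); // Kalman gain
        estimate += gain * (visionMeasurement - estimate);
        variance *= (1 - gain);
    }

    public double getEstimate() { return estimate; }
}
Call predict() every loop with the Pinpoint delta for that axis, and correct() only when a fix passes the quality checks above; the gain then automatically trusts vision more as odometry uncertainty grows.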
FTC SDK AprilTag Detection (Webcam Alternative)
For teams without a Limelight, the FTC SDK provides built-in AprilTag detection using standard webcams. This offers similar pose data but requires careful CPU management since processing happens on the Control Hub.
Key Differences from Limelight
| Aspect | Limelight | Webcam + FTC SDK |
|---|---|---|
| Processing | Dedicated co-processor | Control Hub CPU (shared) |
| Loop impact | Minimal | Can drop loop rate 20-40% |
| Latency | ~20-30ms | ~50-100ms typical |
| Setup | Plug and play | Requires VisionPortal config |
| Cost | ~$400 | ~$30-50 for webcam |
| Multi-tag | MegaTag2 built-in | Manual implementation |
Understanding VisionPortal Threading
VisionPortal is built on EasyOpenCV, which handles camera operations on a background thread. However, “background” doesn’t mean “free” on the Control Hub’s limited CPU.
What runs in the background:
- Camera frame capture (USB I/O)
- Color space conversion (YUV to RGB)
- Frame buffering
What competes with your main loop:
- AprilTag detection algorithm (the expensive part)
- Pose estimation calculations
- Any custom processing you add
The Control Hub has 4 CPU cores, but your OpMode’s main loop, the SDK’s internal operations, and vision processing all compete for these resources. When VisionPortal processes a frame, it consumes CPU cycles that could otherwise run your control loops.
Key insight: Even though camera I/O is asynchronous, the actual detection work still affects your loop rate. This is why the CPU management strategies below matter - they control when and how often that detection work happens.
Blocking operations: stopStreaming() and resumeStreaming() are blocking calls - your code waits until the camera thread responds. Don’t call these in tight loops.
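One way to honor that (a sketch; streamingPaused is a helper field introduced here for illustration) is to issue the blocking call only on an actual state change:
// Only call the blocking stop/resume methods when the desired state actually changes
private boolean streamingPaused = false;

public void setStreamingPaused(boolean pause) {
    if (pause == streamingPaused) return; // already in the requested state - no blocking call
    if (pause) {
        visionPortal.stopStreaming();
    } else {
        visionPortal.resumeStreaming();
    }
    streamingPaused = pause;
}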
Setting Up VisionPortal
import org.firstinspires.ftc.vision.VisionPortal;
import org.firstinspires.ftc.vision.apriltag.AprilTagProcessor;
import org.firstinspires.ftc.vision.apriltag.AprilTagDetection;
public class WebcamVision {
private VisionPortal visionPortal;
private AprilTagProcessor aprilTagProcessor;
public void init(HardwareMap hardwareMap) {
// Create the AprilTag processor
aprilTagProcessor = new AprilTagProcessor.Builder()
.setDrawAxes(true)
.setDrawCubeProjection(true)
.setDrawTagOutline(true)
// Use current season's tag library (has field positions built-in)
// .setTagLibrary(AprilTagGameDatabase.getCurrentGameTagLibrary())
.build();
// Create the VisionPortal
visionPortal = new VisionPortal.Builder()
.setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
.addProcessor(aprilTagProcessor)
// CPU management settings - see below
.setCameraResolution(new Size(640, 480))
.setStreamFormat(VisionPortal.StreamFormat.MJPEG)
.enableLiveView(false) // Disable for competition
.build();
}
}
CPU Management Strategies
The Control Hub has limited CPU, and vision processing competes with your main robot loop. Here’s how to manage it:
Strategy 1: Reduce Resolution
Lower resolution = faster processing, but reduced range/accuracy.
// Resolution options (smaller = faster, less accurate at distance)
.setCameraResolution(new Size(640, 480)) // Good balance
.setCameraResolution(new Size(320, 240)) // Faster, shorter range
.setCameraResolution(new Size(1280, 720)) // Slow, best accuracy
Recommendation: Start with 640x480, reduce if loop rate suffers.
Strategy 2: Reduce Frame Rate
Process fewer frames per second to free CPU for the main loop.
// Rather than trying to cap the camera frame rate directly, rate-limit how
// often your code reads and acts on detections:
public void update() {
if (shouldProcessVision()) {
// Only check detections periodically
List<AprilTagDetection> detections = aprilTagProcessor.getDetections();
processDetections(detections);
}
}
private long lastVisionCheck = 0;
private static final long VISION_INTERVAL_MS = 100; // 10 Hz max
private boolean shouldProcessVision() {
long now = System.currentTimeMillis();
if (now - lastVisionCheck > VISION_INTERVAL_MS) {
lastVisionCheck = now;
return true;
}
return false;
}
Strategy 3: Disable When Not Needed
Turn off vision processing during CPU-intensive operations.
// Disable during trajectory following
public void startTrajectory() {
visionPortal.setProcessorEnabled(aprilTagProcessor, false);
// Run trajectory...
}
// Re-enable when trajectory completes
public void onTrajectoryComplete() {
visionPortal.setProcessorEnabled(aprilTagProcessor, true);
}
// Or completely stop the portal
public void stopVision() {
visionPortal.stopStreaming();
}
public void resumeVision() {
visionPortal.resumeStreaming();
}
Strategy 4: Decimation
Decimation downsamples the image before tag detection, reducing CPU load at the cost of detection range. It’s set on the processor instance (not the builder) and can be changed on-the-fly during a match:
// Build the processor first
aprilTagProcessor = new AprilTagProcessor.Builder().build();
// Then set decimation (can be changed anytime during the match)
aprilTagProcessor.setDecimation(2); // Lower = better range, slower
From FTC SDK sample comments (Logitech C920 at 640x480):
| Decimation | 2” Tag Range | 5” Tag Range | Frame Rate |
|---|---|---|---|
| 1 | ~10 ft | - | ~10 FPS |
| 2 | ~6 ft | - | ~22 FPS |
| 3 (default) | ~4 ft | ~10 ft | ~30 FPS |
Tip: Start with the default (3). Only reduce decimation if you need longer-range detection and can tolerate slower frame rates.
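Because decimation is adjustable at runtime, one option (a sketch; when to trigger it is up to your own state machine) is to drop decimation only while you actually need a long-range detection:
// Sketch: trade range for frame rate depending on what the robot needs right now
public void updateDecimation(boolean needLongRangeFix) {
    if (needLongRangeFix) {
        aprilTagProcessor.setDecimation(1); // maximum range, lowest frame rate
    } else {
        aprilTagProcessor.setDecimation(3); // default: faster, shorter range
    }
}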
Accessing Pose Data
The FTC SDK provides similar pose information to Limelight:
public Pose2d getRobotPoseFromAprilTags() {
List<AprilTagDetection> detections = aprilTagProcessor.getDetections();
for (AprilTagDetection detection : detections) {
if (detection.metadata != null) { // Tag is in the library
// Robot pose in field coordinates (if tag position is known)
if (detection.robotPose != null) {
return new Pose2d(
detection.robotPose.getPosition().x,
detection.robotPose.getPosition().y,
detection.robotPose.getOrientation().getYaw()
);
}
// Alternative: Tag pose relative to camera
// detection.ftcPose gives you:
// .x = lateral offset (inches)
// .y = forward distance (inches)
// .z = height offset (inches)
// .yaw, .pitch, .roll = orientation
}
}
return null; // No valid detection
}
Quality Assessment for Webcam Detections
public boolean isGoodDetection(AprilTagDetection detection, Pose2d currentPose) {
if (detection == null || detection.metadata == null) return false;
// Check decision margin (confidence measure)
// Higher = more confident detection
if (detection.decisionMargin < 50) return false; // Low confidence
// Check range (too far = noisy)
double range = detection.ftcPose.range;
if (range > 72) return false; // Beyond 6 feet, accuracy degrades
// Check pose jump (outlier rejection)
if (detection.robotPose != null) {
double dx = detection.robotPose.getPosition().x - currentPose.position.x;
double dy = detection.robotPose.getPosition().y - currentPose.position.y;
if (Math.hypot(dx, dy) > 0.5) return false; // 0.5 meter jump = suspicious
}
return true;
}
Multi-Tag Pose Estimation
Unlike Limelight’s MegaTag2, the FTC SDK processes tags individually. For better accuracy, average multiple tag detections:
public Pose2d getAveragedRobotPose() {
List<AprilTagDetection> detections = aprilTagProcessor.getDetections();
double sumX = 0, sumY = 0, sumHeading = 0;
int validCount = 0;
for (AprilTagDetection detection : detections) {
if (detection.robotPose != null && isGoodDetection(detection, currentPose)) {
sumX += detection.robotPose.getPosition().x;
sumY += detection.robotPose.getPosition().y;
// NOTE: naive angle averaging breaks near the +/-180° wrap; average
// sin/cos of the yaw instead if you operate near that boundary
sumHeading += detection.robotPose.getOrientation().getYaw();
validCount++;
}
}
if (validCount == 0) return null;
return new Pose2d(
sumX / validCount,
sumY / validCount,
sumHeading / validCount
);
}
Complete Webcam Vision Subsystem Example
@Config(value = "WebcamVision")
public class WebcamVision implements Subsystem {
private VisionPortal visionPortal;
private AprilTagProcessor aprilTagProcessor;
// Configuration
public static int TARGET_FPS = 10; // Limit processing rate
public static double MIN_DECISION_MARGIN = 50;
public static double MAX_RANGE_INCHES = 72;
// State
private Pose2d robotPose = null;
private boolean hasValidPose = false;
private long lastProcessTime = 0;
public void init(HardwareMap hardwareMap) {
aprilTagProcessor = new AprilTagProcessor.Builder()
.setDrawAxes(false)
.setDrawCubeProjection(false)
.setDrawTagOutline(false) // Disable overlays for speed
.build();
visionPortal = new VisionPortal.Builder()
.setCamera(hardwareMap.get(WebcamName.class, "Webcam 1"))
.addProcessor(aprilTagProcessor)
.setCameraResolution(new Size(640, 480))
.setStreamFormat(VisionPortal.StreamFormat.MJPEG)
.enableLiveView(false)
.build();
}
@Override
public void readSensors() {
// Nothing - VisionPortal runs asynchronously
}
@Override
public void calc(Canvas fieldOverlay) {
// Rate limit vision processing
long now = System.currentTimeMillis();
if (now - lastProcessTime < (1000 / TARGET_FPS)) {
return; // Skip this cycle
}
lastProcessTime = now;
// Process detections
List<AprilTagDetection> detections = aprilTagProcessor.getDetections();
robotPose = computeAveragedPose(detections);
hasValidPose = (robotPose != null);
}
@Override
public void act() {
// Nothing - read-only subsystem
}
private Pose2d computeAveragedPose(List<AprilTagDetection> detections) {
// ... averaging logic from above
}
// Enable/disable for CPU management
public void setEnabled(boolean enabled) {
visionPortal.setProcessorEnabled(aprilTagProcessor, enabled);
}
public boolean hasValidPose() { return hasValidPose; }
public Pose2d getRobotPose() { return robotPose; }
@Override
public void stop() {
visionPortal.close();
}
}
Performance Benchmarking
Before competition, measure your actual loop rates:
// In your OpMode
private ElapsedTime loopTimer = new ElapsedTime();
private double avgLoopTime = 0;
private int loopCount = 0;
@Override
public void loop() {
double loopTime = loopTimer.milliseconds();
loopTimer.reset();
avgLoopTime = (avgLoopTime * loopCount + loopTime) / (loopCount + 1);
loopCount++;
telemetry.addData("Avg Loop", "%.1f ms (%.0f Hz)", avgLoopTime, 1000/avgLoopTime);
// Your normal loop code...
}
Target: Keep loop rate above 50 Hz (20ms per loop) for responsive control.
If loop rate drops below 30 Hz, try the following (a sketch for automating this follows the list):
- Reduce camera resolution
- Increase vision processing interval
- Disable vision during trajectories
- Consider disabling vision during teleop (use only in auto)
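These mitigations can also be triggered automatically; a sketch (webcamVision is assumed to be an instance of the subsystem example above, and avgLoopTime comes from the benchmark code):
// Sketch: shed vision load automatically when the loop slows down too much
private static final double MIN_LOOP_HZ = 30;

private void manageVisionLoad() {
    double loopHz = 1000.0 / avgLoopTime;       // from the benchmark above
    if (loopHz < MIN_LOOP_HZ && webcamVision != null) {
        webcamVision.setEnabled(false);         // pause AprilTag processing
        telemetry.addData("Vision", "disabled: loop at %.0f Hz", loopHz);
    }
}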
Relocalization Opportunities
1. Match Start
Situation: Robot is placed in one of 2 starting positions per alliance.
Action: Hard-set pose to known starting position.
// In init()
public static final Pose2d BLUE_START_1 = new Pose2d(...);
public static final Pose2d BLUE_START_2 = new Pose2d(...);
public static final Pose2d RED_START_1 = new Pose2d(...);
public static final Pose2d RED_START_2 = new Pose2d(...);
driveTrain.setPose(selectedStartingPose);
2. During Autonomous
Situation: Continuous operation, need reliable positioning for trajectories.
Strategy:
- Apply vision corrections when robot is stationary or moving slowly
- Reject corrections during high-speed trajectory following
- Use vision for endpoint verification after trajectories complete
// In autonomous loop
if (driveTrain.getMode() != Mode.RR_ACTION) { // Not mid-trajectory
if (vision.hasBotPose() && isGoodFix(...)) {
applyVisionCorrection();
}
}
3. Transition to TeleOp
Situation: 30 seconds of autonomous may have accumulated drift.
Strategy (a sketch follows the list):
- If near an AprilTag at end of auto, get a fix before teleop starts
- Consider a brief “localization pause” at transition
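A sketch of that end-of-auto fix (assumes a LinearOpMode; vision.getLastResult() is an assumed accessor for the latest LLResult):
// Sketch: at the very end of auto, hold position briefly and take one last fix
private void endOfAutoRelocalize(long pauseMs) {
    long deadline = System.currentTimeMillis() + pauseMs;
    while (opModeIsActive() && System.currentTimeMillis() < deadline) {
        if (vision.hasBotPose()
                && VisionFixQuality.isGoodFix(vision.getLastResult(), driveTrain.getPose())) {
            applyVisionCorrection(); // hard reset or filtered blend, per the strategies above
            break;
        }
        sleep(20); // give the Limelight time to deliver a fresh result
    }
}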
4. During TeleOp
Situation: Driver control with periodic need for accurate positioning (e.g., launching).
Strategy:
- Continuous background correction when tags visible
- Higher-weight corrections when preparing to launch
- Consider “aim assist” that uses vision tx for final alignment
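A sketch of the aim-assist idea (the proportional gain and deadband are assumptions to tune; tx is the Limelight's horizontal offset to the target in degrees):
// Sketch: use Limelight tx for final rotational alignment before launching
public double aimAssistTurnPower(LLResult result) {
    final double kP = 0.02;           // assumed proportional gain, tune on the robot
    final double DEADBAND_DEG = 1.0;  // close enough to launch
    if (result == null || !result.isValid()) return 0;
    double txDeg = result.getTx();    // horizontal offset to target, degrees
    if (Math.abs(txDeg) < DEADBAND_DEG) return 0;
    return -kP * txDeg;               // turn toward the target
}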
5. Known Physical Landmarks
Wall Contact
If the robot has touch sensors or distance sensors facing a known wall:
// Robot is against the back wall (X = -1.8288 meters = -72 inches)
if (backTouchSensor.isPressed()) {
Pose2d current = driveTrain.getPose();
driveTrain.setPose(new Pose2d(-1.8288, current.position.y, current.heading));
}
Tape Lines
Color sensors detecting field tape lines provide single-axis corrections:
// Detected tape line at known Y position
if (colorSensor.detectsTape() && nearExpectedTapeLine()) {
Pose2d current = driveTrain.getPose();
driveTrain.setPose(new Pose2d(current.position.x, TAPE_LINE_Y, current.heading));
}
Dual Distance Sensors for Heading
Two distance sensors pointing at a wall can verify/correct heading:
// Sensors separated by SENSOR_SPACING meters
double leftDist = leftDistanceSensor.getDistance();
double rightDist = rightDistanceSensor.getDistance();
double headingError = Math.atan2(rightDist - leftDist, SENSOR_SPACING);
// Compare to Pinpoint heading, apply correction if significant
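A possible continuation of that snippet (a sketch: WALL_HEADING is the known field-frame heading of a robot squared up to this wall, the sign of the correction depends on sensor placement, and the 2° threshold is just a starting point):
// If the wall-derived heading disagrees noticeably with Pinpoint, trust the wall
double wallDerivedHeading = WALL_HEADING + headingError; // field-frame heading implied by the sensors
Pose2d current = driveTrain.getPose();
double disagreement = Math.abs(wallDerivedHeading - current.heading); // normalize the angle in a real implementation
if (disagreement > Math.toRadians(2)) {
    driveTrain.setPose(new Pose2d(current.position.x, current.position.y, wallDerivedHeading));
}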
Implementation Architecture
Recommended Class Structure
Robot.java
├── driveTrain (TankDrivePinpoint)
│ └── pinpoint (GoBildaPinpointDriver)
│ └── getPose() / setPose()
├── vision (Vision)
│ └── getBotPose(), getDistanceToGoal(), hasBotPose()
└── localizer (RobotLocalizer) // NEW - coordinates corrections
└── update()
└── applyVisionCorrection()
└── applyLandmarkCorrection()
RobotLocalizer Concept
public class RobotLocalizer {
private final TankDrivePinpoint driveTrain;
private final Vision vision;
// Correction state
private long lastCorrectionTime = 0;
private static final long MIN_CORRECTION_INTERVAL_MS = 500;
// Drift estimation
private double estimatedDriftMeters = 0;
private long lastUpdateTime = System.currentTimeMillis();
private static final double DRIFT_RATE = 0.001; // meters of assumed drift per second
public void update() {
// Estimate drift accumulation since the last update
long now = System.currentTimeMillis();
estimatedDriftMeters += DRIFT_RATE * (now - lastUpdateTime) / 1000.0;
lastUpdateTime = now;
// Check for vision correction opportunity
if (vision.hasBotPose() && shouldApplyCorrection()) {
applyVisionCorrection();
}
}
private boolean shouldApplyCorrection() {
// Don't correct too frequently
if (System.currentTimeMillis() - lastCorrectionTime < MIN_CORRECTION_INTERVAL_MS) {
return false;
}
// More aggressive correction when drift is high
// More conservative when drift is low
return estimatedDriftMeters > 0.05; // 5cm threshold
}
private void applyVisionCorrection() {
// Apply correction (using chosen strategy)
// Reset drift estimate
estimatedDriftMeters = 0;
lastCorrectionTime = System.currentTimeMillis();
}
}
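How the localizer might be wired into the robot's update cycle (a sketch; the exact subsystem update methods depend on your base classes):
// In Robot.java (sketch)
public void update(Canvas fieldOverlay) {
    driveTrain.update(fieldOverlay); // Pinpoint odometry keeps the continuous estimate
    vision.update();                 // latest Limelight/AprilTag result, if any
    localizer.update();              // applies a correction only when it is warranted
}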
Testing and Validation
Why Ground Truth Validation Matters
AprilTag localization looks precise on paper, but real-world accuracy depends on many factors:
- Camera calibration quality
- Tag placement accuracy on the field
- Lighting conditions
- Motion blur during detection
- Viewing angle and distance
Never assume the vision system is correct. Always validate against physical measurements before trusting it for competition.
Establishing Ground Truth
Option 1: Tape Measure Grid
- Create a grid of known positions on your practice field using tape
- Mark positions at 1-foot intervals across the field
- Use a plumb bob or laser level to ensure robot placement is precise
- Record the actual (X, Y) position and heading at each test point
Option 2: Field Features
- Use field tile corners (12” grid) as reference points
- Field walls provide known X or Y coordinates
- Tape line intersections give precise positions
Option 3: Total Station / Laser Measurement
For highest accuracy, use surveying equipment to establish reference positions.
Static Accuracy Testing
Procedure:
// Test OpMode for static accuracy validation
@TeleOp(name = "Vision Ground Truth Test")
public class VisionGroundTruthTest extends OpMode {
// Known test positions (measure these precisely on your field)
private static final Pose2d[] TEST_POSITIONS = {
new Pose2d(-1.0, -1.0, Math.toRadians(0)), // Position A
new Pose2d(-1.0, 0.0, Math.toRadians(45)), // Position B
new Pose2d(-0.5, -0.5, Math.toRadians(90)), // Position C
// Add more positions covering your operating range
};
private int currentTest = 0;
private boolean prevDpadUp = false; // for rising-edge detection on the D-pad
@Override
public void loop() {
Pose2d groundTruth = TEST_POSITIONS[currentTest];
Pose2d visionPose = vision.hasBotPose() ?
new Pose2d(vision.getRobotX(), vision.getRobotY(), 0) : null;
telemetry.addData("Test Position", currentTest);
telemetry.addData("Ground Truth", "(%.3f, %.3f)",
groundTruth.position.x, groundTruth.position.y);
if (visionPose != null) {
double errorX = visionPose.position.x - groundTruth.position.x;
double errorY = visionPose.position.y - groundTruth.position.y;
double errorDist = Math.hypot(errorX, errorY);
telemetry.addData("Vision Pose", "(%.3f, %.3f)",
visionPose.position.x, visionPose.position.y);
telemetry.addData("Error", "(%.3f, %.3f) = %.3f m (%.1f in)",
errorX, errorY, errorDist, errorDist * 39.37);
} else {
telemetry.addData("Vision", "NO FIX");
}
// D-pad to cycle through test positions (rising edge, so one press = one step)
if (gamepad1.dpad_up && !prevDpadUp) currentTest = (currentTest + 1) % TEST_POSITIONS.length;
prevDpadUp = gamepad1.dpad_up;
}
}
What to measure:
- Place robot precisely at each test position
- Record vision pose vs ground truth
- Calculate error magnitude and direction
- Note number of tags visible, distance to tags, viewing angle
Expected results:
- Static accuracy should be within 2-3 inches at reasonable range
- Error typically increases with distance from tags
- Oblique viewing angles may show systematic bias
Dynamic Accuracy Testing (Motion Effects)
AprilTag solutions have latency. When the robot is moving, the pose you receive describes where the robot was, not where it is now.
Test 1: Translation During Detection
// Drive at known speed, log vision fixes
public void testTranslationLatency() {
// Drive forward at constant 0.5 m/s
double driveSpeed = 0.5; // m/s
// When vision fix arrives, calculate where robot actually is
// vs where vision says it was
if (vision.hasBotPose()) {
double latency = vision.getLatency(); // seconds
double expectedOffset = driveSpeed * latency;
telemetry.addData("Latency", "%.0f ms", latency * 1000);
telemetry.addData("Position Lag", "%.1f inches", expectedOffset * 39.37);
}
}
Test 2: Rotation During Detection
Rotation is particularly problematic - the robot's heading changes during the camera exposure:
- Spin robot at known angular velocity (e.g., 90°/sec)
- Log vision heading vs Pinpoint heading
- Expect heading errors roughly proportional to angular_velocity * exposure_time (for example, 90°/sec with a 20 ms exposure smears heading by about 1.8°)
Special Case: Turret-Mounted Cameras
If your camera is mounted on a turret that pans to keep the AprilTag centered in frame, rotation artifacts are significantly reduced since the camera-to-tag relationship stays stable even while the chassis rotates.
However, this introduces a new challenge: the vision solution gives you the camera’s pose, not the chassis pose. You must apply the turret angle offset:
// Vision gives camera pose in field coordinates
Pose2d cameraPose = vision.getBotPose();
double turretAngle = turret.getCurrentAngle(); // radians, relative to chassis forward
// Transform camera pose to chassis pose
// Camera is offset from chassis center by turret position
double chassisHeading = cameraPose.heading - turretAngle;
double offsetX = TURRET_RADIUS * Math.cos(turretAngle);
double offsetY = TURRET_RADIUS * Math.sin(turretAngle);
Pose2d chassisPose = new Pose2d(
cameraPose.position.x - offsetX * Math.cos(chassisHeading) + offsetY * Math.sin(chassisHeading),
cameraPose.position.y - offsetX * Math.sin(chassisHeading) - offsetY * Math.cos(chassisHeading),
chassisHeading
);
Test 3: Combined Motion
Real autonomous involves simultaneous translation and rotation. Test realistic trajectories.
Mitigation strategies:
- Only apply corrections when robot velocity is below threshold
- Use latency compensation: corrected_pose = vision_pose + velocity * latency (see the sketch below)
- Weight corrections by the inverse of robot speed
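A minimal latency-compensation sketch (getFieldVelocityX()/getFieldVelocityY() are assumed helpers returning field-frame velocity in meters per second):
// Sketch: project a stale vision pose forward by the measured latency
Pose2d visionPose = vision.getBotPose();   // where the robot WAS when the frame was captured
double latencySec = vision.getLatency();   // capture + pipeline latency, in seconds
double vx = driveTrain.getFieldVelocityX();
double vy = driveTrain.getFieldVelocityY();
Pose2d compensated = new Pose2d(
        visionPose.position.x + vx * latencySec,
        visionPose.position.y + vy * latencySec,
        visionPose.heading);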
Edge Case Testing
Distance limits:
| Test | Expected Result |
|---|---|
| Tag at 1 foot | May fail (too close, tag doesn't fit in frame) |
| Tag at 3 feet | Good accuracy |
| Tag at 6 feet | Acceptable accuracy |
| Tag at 10+ feet | Degraded accuracy, may lose detection |
Angle limits:
| Viewing Angle | Expected Result |
|---|---|
| 0° (straight on) | Best accuracy |
| 30° oblique | Good accuracy |
| 45° oblique | Acceptable, watch for bias |
| 60°+ oblique | Poor accuracy, may reject |
Lighting conditions:
- Test in competition-like lighting (often harsh overhead)
- Test with backlight (windows behind tags)
- Test in dim conditions
Obstruction:
- Partial tag occlusion (other robots, game pieces)
- Dirty/scratched camera lens
Capturing Camera Snapshots for Investigation
When something goes wrong, a camera snapshot helps diagnose the issue.
Limelight: The Limelight web interface (http://limelight.local:5801) allows snapshot capture. You can also trigger snapshots programmatically:
// Limelight snapshot (check current API)
limelight.captureSnapshot("debug_shot");
VisionPortal / Webcam:
// Save frames for later analysis
public class DebugVisionProcessor implements VisionProcessor {
private Mat lastFrame;
private boolean captureRequested = false;
// Note: the VisionProcessor interface also requires init() and onDrawFrame();
// empty implementations are fine for this debug use and are omitted here
@Override
public Object processFrame(Mat frame, long captureTimeNanos) {
if (captureRequested) {
// Save to file
String filename = "/sdcard/FIRST/vision_debug_" +
System.currentTimeMillis() + ".png";
Imgcodecs.imwrite(filename, frame);
captureRequested = false;
}
if (lastFrame != null) lastFrame.release(); // avoid leaking native Mat memory
lastFrame = frame.clone();
return null;
}
public void requestCapture() {
captureRequested = true;
}
}
What to capture:
- Frame when a bad fix was accepted
- Frame when a good fix was rejected
- Frame at detection limits (max range, extreme angle)
- Frame showing lighting issues
Post-capture analysis:
- Pull images from the robot via ADB: adb pull /sdcard/FIRST/
- Examine tag visibility, blur, and exposure
- Check for reflections, occlusions, or lighting artifacts
- Verify tag is fully in frame with margin
Building a Validation Dataset
For systematic improvement, build a dataset:
public class ValidationLogger {
private final PrintWriter logFile;
public ValidationLogger(String path) throws IOException {
// Append so multiple test sessions accumulate into one CSV file
logFile = new PrintWriter(new FileWriter(path, true), true);
}
public void logFix(Pose2d groundTruth, Pose2d visionPose,
double latency, int numTags, double ta,
boolean wasAccepted) {
// CSV format for later analysis
logFile.printf("%.3f,%.3f,%.3f,%.3f,%.3f,%.3f,%.0f,%d,%.2f,%b%n",
groundTruth.position.x, groundTruth.position.y, groundTruth.heading,
visionPose.position.x, visionPose.position.y, visionPose.heading,
latency * 1000, numTags, ta, wasAccepted);
}
}
Analyze the dataset to:
- Find systematic biases (vision always reads 2” left, etc.)
- Identify conditions that produce bad fixes
- Tune quality thresholds based on real data
- Track improvements over time
Validation Checklist
Before competition, verify:
- Static accuracy < 3 inches at typical operating distances
- Heading accuracy < 3° when stationary
- No fixes accepted when robot is moving fast
- Quality filters reject obviously bad fixes
- System recovers gracefully when no tags visible
- Latency is consistent and within expectations
- Edge cases (distance limits, angles) behave as expected
- Camera snapshot capture works for debugging
Summary: Match Flow
INIT
├── Select starting position (dashboard toggle)
├── Hard-set pose to known starting position
└── Initialize all sensors
AUTONOMOUS
├── Trust Pinpoint for trajectory following
├── Apply vision corrections between trajectories
├── Log all corrections for post-match analysis
└── Final position check before teleop
TELEOP
├── Continuous background corrections when tags visible
├── Higher-confidence corrections before precision operations
├── Manual "recalibrate" button for driver if needed
└── Use distance-to-goal for launcher speed
END GAME
└── Final push - may disable corrections for maximum responsiveness
References
Limelight
FTC SDK AprilTag
Field and Hardware
- FTC Field Coordinate System
- DECODE Game Manual (Field specs start on page 59)
- DECODE Field Resources
- GoBilda Pinpoint Documentation
