"Achieved stable high-speed navigation through multi-sensor fusion and time-optimal planning."
Autonomous 1/10th Scale Racecar: Sensor Fusion & Trajectory Optimization
Course: UCSD MAE 148 – Introduction to Autonomous Vehicles | Role: Team Member (Team 5)
Project Overview
This project involved designing and deploying a fully autonomous 1/10th scale racecar capable of high-speed navigation. The course began as a guided implementation of the UCSD Robocar Framework, where we built a standard autonomous vehicle using a Jetson Nano and ROS 2.
It evolved into an open-ended research capstone in which my team (Team 5) pushed the platform beyond basic lane following. We engineered a robust Sensor Fusion pipeline and implemented Trajectory Optimization to achieve faster, more reliable autonomous laps than the baseline camera-based PID lane follower.
Key Technical Contributions
1. Multi-Sensor Fusion (GPS + IMU + Vision)
To address the instability of purely reactive, vision-based control at high speed, we developed a sensor fusion node built around an Extended Kalman Filter (EKF); a minimal sketch of its predict/update structure follows the list below.
- The Challenge: The GPS provided global positioning but at a slow update rate (10Hz), while the onboard IMU provided fast but drift-prone data (100Hz).
- The Solution: We fused the 10Hz GPS pose from a Point One Navigation module with high-frequency IMU data from a Luxonis OAK-D Lite camera.
- The Result: This produced a stable, high-frequency (30-50Hz) pose estimate, keeping the vehicle localized even during high-speed maneuvers where raw GPS data would lag.
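To make the fusion concrete, here is a minimal sketch of the predict/update pattern the node follows, using a linear constant-velocity state model (under which the EKF equations reduce to the standard Kalman filter). The state layout, class name, and noise values are illustrative placeholders, not our actual node code.

```python
import numpy as np

class PoseEKF:
    """Minimal 2D fusion sketch: predict at IMU rate, correct with slow GPS.

    State x = [px, py, vx, vy]; constant-velocity motion driven by IMU
    acceleration. All noise values below are illustrative placeholders.
    """

    def __init__(self):
        self.x = np.zeros(4)                        # [px, py, vx, vy]
        self.P = np.eye(4)                          # state covariance
        self.Q = np.diag([0.01, 0.01, 0.1, 0.1])    # process noise
        self.R = np.diag([0.5, 0.5])                # GPS measurement noise

    def predict(self, accel_xy, dt):
        """IMU step (~100Hz): integrate acceleration into the state."""
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt
        B = np.array([[0.5 * dt**2, 0.0],
                      [0.0, 0.5 * dt**2],
                      [dt, 0.0],
                      [0.0, dt]])
        self.x = F @ self.x + B @ np.asarray(accel_xy)
        self.P = F @ self.P @ F.T + self.Q

    def update_gps(self, pos_xy):
        """GPS step (~10Hz): absolute fix corrects accumulated IMU drift."""
        H = np.zeros((2, 4))
        H[0, 0] = H[1, 1] = 1.0
        y = np.asarray(pos_xy) - H @ self.x         # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P
```

Because the predict step runs on every IMU sample while the GPS update only fires when a fix arrives, the published pose stays smooth and high-rate between the slow absolute corrections.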
2. Time-Optimal Trajectory Planning
Instead of simply following the center of the lane, we implemented the TUM Minimum-Time Optimizer to mathematically generate the fastest possible racing line around the track.
- Pipeline: We recorded track waypoints using the DonkeyCar framework, converted them into a format compatible with the TUM optimizer, and generated a new path that optimized for curvature and velocity.
- Velocity Profiling: The optimizer generated a specific velocity profile, allowing the car to accelerate on straights and brake appropriately before entering corners rather than running at a static speed (see the sketch below).
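The TUM optimizer handles this rigorously, but the core idea behind a velocity profile can be sketched in a few lines: cap speed by the lateral-acceleration limit in corners, then sweep forward and backward along the path to respect the acceleration and braking limits. The function name and limit values below are illustrative, not our tuned parameters.

```python
import numpy as np

def velocity_profile(curvature, ds, v_max=8.0, a_lat=6.0, a_lon=3.0):
    """Curvature-limited velocity profile (all limits illustrative).

    curvature: per-waypoint path curvature kappa [1/m]
    ds:        arc-length spacing between waypoints [m]
    """
    # 1) Corner cap: v <= sqrt(a_lat / |kappa|)
    v = np.minimum(v_max, np.sqrt(a_lat / np.maximum(np.abs(curvature), 1e-6)))
    # 2) Forward pass: limit acceleration out of slow sections
    for i in range(1, len(v)):
        v[i] = min(v[i], np.sqrt(v[i - 1] ** 2 + 2 * a_lon * ds))
    # 3) Backward pass: start braking early enough before each corner
    for i in range(len(v) - 2, -1, -1):
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2 * a_lon * ds))
    return v
```

The backward pass is what produces the "brake before the corner" behavior: a tight corner's low speed cap propagates upstream, so the car sheds speed in time instead of arriving too fast.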
3. Embedded System & Perception
- Hardware Architecture: The car is built around a Jetson Nano, which commands a VESC (Vedder Electronic Speed Controller) for precise motor control.
- ROS 2 Architecture: We used a modular node-based system in ROS 2 (Foxy), separating the sensing, perception, and control layers; a minimal example of a control-layer node follows this list.
- Computer Vision: We integrated a Luxonis OAK-D Lite camera for visual perception. While our final racing logic relied primarily on GPS fusion, we explored deploying Roboflow models to the OAK-D's on-board VPU for real-time object detection, offloading compute from the main CPU.
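As an illustration of the layered architecture, the minimal rclpy node below consumes the fused pose and publishes velocity commands. The topic names and the placeholder control logic are assumptions for this sketch, not the framework's actual interfaces.

```python
import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist

class ControlNode(Node):
    """Control-layer sketch: consume the fused pose, publish drive commands."""

    def __init__(self):
        super().__init__('control_node')
        # Topic names are placeholders for this sketch.
        self.create_subscription(Odometry, '/ekf/odom', self.on_pose, 10)
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_pose(self, msg: Odometry) -> None:
        cmd = Twist()
        # Placeholder logic: a real node would look up the target point on
        # the optimized racing line and compute steering from the pose error.
        cmd.linear.x = 1.0   # forward speed [m/s]
        self.cmd_pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(ControlNode())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

Keeping each layer in its own node like this lets us swap the fusion, planning, or control piece independently and inspect the topics between them in RViz or Foxglove.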
Tech Stack
- Hardware: NVIDIA Jetson Nano, VESC 6, Luxonis OAK-D Lite, Point One Navigation GPS.
- Software: ROS 2 (Foxy), Docker, Python, C++.
- Algorithms: Extended Kalman Filter (EKF), PID Control, TUM Trajectory Optimization.
- Tools: Git/GitLab, RViz (Visualization), Foxglove (Data Analysis).