The F1TENTH platform is an open-source, small-scale autonomous racing car designed for research and education in autonomous systems. Built on a 1/10-scale RC chassis, it integrates sensors such as LiDAR, camera, and IMU, all running on a ROS2-based control stack. Its modular architecture allows experiments in perception, planning, and control, while remaining low-cost and portable for classroom and laboratory use. The platform is supported by an active international community, offering simulation environments, datasets, and open course materials that make it ideal for hands-on learning and benchmarking in robotics and self-driving research.

In academia, it serves as a standardized benchmark for teaching autonomous driving algorithms, allowing students to bridge theory and practice through competitions, lab assignments, and project-based learning. Universities use F1TENTH to demonstrate safety validation, sensor fusion, and real-time decision-making concepts within a manageable and reproducible framework, making it an ideal entry point for higher education in robotics and autonomous vehicle research.

The interface requirements for the F1TENTH use case define how the simulation and control systems communicate within the ROS2 environment (targeting ROS2 Humble, with porting to Jazzy possible but nontrivial). The platform's sensor outputs include LiDAR scans, RGB or RGB-D camera feeds, and IMU data, which can be noisy and unreliable due to the small size of the vehicle. Additional outputs, such as pose and velocity estimates, may be derived using external localization methods. Control inputs consist of speed commands (either normalized or expressed in meters per second) and steering angle values, typically represented as PWM servo signals that can be mapped to angles in degrees. These interfaces ensure smooth integration between the physical and simulated components, supporting real-time testing, algorithm verification, and reproducibility in educational and research contexts.
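The command conversions described above can be sketched as two small helper functions. This is an illustrative sketch, not part of the F1TENTH stack: the maximum speed of 5 m/s and the PWM calibration (1000–2000 µs mapped linearly to ±30°) are assumed values that would in practice come from servo and chassis calibration.

```python
# Sketch of drive-command conversions for a F1TENTH-style interface.
# The speed limit and PWM-to-angle mapping are illustrative assumptions,
# not fixed platform constants.

def normalized_to_mps(cmd: float, v_max: float = 5.0) -> float:
    """Map a normalized speed command in [-1, 1] to meters per second."""
    return max(-1.0, min(1.0, cmd)) * v_max

def pwm_to_steering_deg(pwm_us: float,
                        pwm_min: float = 1000.0,
                        pwm_max: float = 2000.0,
                        angle_range_deg: float = 60.0) -> float:
    """Map a servo PWM pulse width (microseconds) to a steering angle.

    The center pulse (1500 us under the assumed range) corresponds to
    0 degrees; the endpoints map to +/- angle_range_deg / 2.
    """
    center = 0.5 * (pwm_min + pwm_max)
    span = pwm_max - pwm_min
    return (pwm_us - center) / span * angle_range_deg
```

In a ROS2 node these conversions would typically sit between the incoming command topic and the low-level VESC/servo driver.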
The F1TENTH use case defines simulation requirements that ensure accessible and reproducible environments for validating autonomous driving algorithms within academic and research settings.
- The simulation environment must support lightweight, scalable, and educationally accessible setups suitable for university use.
- It should use kinematic single-track or Ackermann steering models to balance computational efficiency with sufficient accuracy for low-speed vehicles.
- The framework must enable reproducible validation of perception, planning, and control algorithms in both 2D and 3D environments.
- ROS2 compatibility is required, with optional integration to simplified Autoware vehicle models for advanced testing.
- Simulations should run smoothly on standard laptops without the need for high-end GPUs, ensuring broad accessibility for students.
- The overall design must support iterative experimentation, open-source deployment, and hands-on learning in line with SafeAV's educational objectives.
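The kinematic single-track (bicycle) model mentioned in the requirements can be sketched in a few lines. This is a minimal illustration with forward Euler integration; the 0.33 m wheelbase is an assumed value typical of 1/10-scale chassis, not a platform specification.

```python
import math
from dataclasses import dataclass

# Minimal kinematic single-track (bicycle) model, as referenced in the
# simulation requirements. Wheelbase and step size are illustrative
# assumptions for a 1/10-scale vehicle.

@dataclass
class State:
    x: float = 0.0      # position (m)
    y: float = 0.0      # position (m)
    yaw: float = 0.0    # heading (rad)

def step(s: State, v: float, steer: float,
         dt: float = 0.01, wheelbase: float = 0.33) -> State:
    """Advance the kinematic bicycle model by one Euler step.

    v: longitudinal speed (m/s); steer: front steering angle (rad).
    """
    return State(
        x=s.x + v * math.cos(s.yaw) * dt,
        y=s.y + v * math.sin(s.yaw) * dt,
        yaw=s.yaw + v / wheelbase * math.tan(steer) * dt,
    )
```

Because the model is purely kinematic, it runs far faster than real time on a standard laptop, which is what makes it suitable for the lightweight simulation setups the requirements call for.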
The F1TENTH use case focuses on providing an open, reproducible, and educational platform for validating perception, planning, and control algorithms in small-scale autonomous vehicles. Its simulation and interface requirements emphasize lightweight, ROS2-compatible environments that run efficiently on standard hardware, enabling hands-on learning, iterative testing, and scalable experimentation in autonomous systems education.