en:safeav:ctrl:testing [2026/03/26 13:43] (current) – airi

Ground systems benefit from the most accessible and diverse physical testing environments. **Proving grounds and AV test tracks**—such as Mcity and American Center for Mobility—replicate urban, suburban, and highway conditions with controllable variables (traffic signals, pedestrian dummies, weather systems). OEMs also use large private facilities (e.g., General Motors Milford Proving Ground) for durability, ADAS, and edge-case testing. These environments enable **repeatable scenario testing**, fault injection, and safe validation of perception and decision-making systems. Increasingly, they are instrumented with high-precision localization, V2X infrastructure, and synchronized data capture to support validation at scale.

===== Airborne Systems (Aviation & UAVs) =====

<figure Ref.figure6.12b>
{{:en:safeav:ctrl:figure6.12b.jpg?600|}}
<caption>Airborne Systems (Aviation & UAVs)</caption>
</figure>

Airborne testing combines **ground-based facilities and open-air test ranges**. Wind tunnels (e.g., NASA Ames Research Center Wind Tunnel) provide controlled aerodynamic testing across regimes, while **iron-bird rigs** and avionics labs enable hardware/software integration before flight. Actual flight testing occurs at restricted ranges such as Edwards Air Force Base or FAA-designated UAV corridors, where telemetry, radar tracking, and chase aircraft ensure safety. Compared to ground systems, **repeatability is lower**, and environmental factors (weather, airspace constraints) play a larger role, but the combination of lab and flight testing provides a structured certification pathway.

<figure Ref.figure6.12c>
{{:en:safeav:ctrl:figure6.12c.jpg?600|}}
<caption>Marine Systems (Surface & Underwater)</caption>
</figure>

Marine testing relies on a mix of **controlled hydrodynamic facilities and open-water trials**. Towing tanks and wave basins—such as those at Naval Surface Warfare Center—allow precise study of hull performance, propulsion, and wave interaction. For autonomy, sheltered environments (harbors, test lakes) are used for early-stage validation, followed by coastal and deep-sea trials. Facilities often include instrumented buoys, GPS-denied navigation testing zones, and long-duration endurance setups. Compared to ground and air, marine systems emphasize **disturbance realism (waves, currents)** and **long-horizon reliability**, with less focus on dense, repeatable interaction scenarios.

<figure Ref.figure6.12d>
{{:en:safeav:ctrl:figure6.12d.jpg?600|}}
<caption>Space Systems (Launch, Orbital, Deep Space)</caption>
</figure>

Space systems have the most specialized and constrained physical testing infrastructure. Because full end-to-end testing in the operational environment is impossible, engineers rely on **high-fidelity ground facilities** that replicate aspects of space conditions. These include thermal vacuum chambers (e.g., NASA Johnson Space Center Chamber A), vibration and acoustic test facilities for launch loads, and propulsion test stands (e.g., Stennis Space Center). RF anechoic chambers validate communication and sensing systems. While these facilities achieve extreme fidelity for specific physics, **system-level validation is fragmented**, requiring heavy reliance on simulation and incremental subsystem testing. The cost and irreversibility of failure drive a test philosophy centered on qualification, redundancy, and conservative margins.

===== Cross-Domain Insight =====

Across all four domains, physical testing evolves from **highly repeatable, scenario-rich environments (ground)** to **physics-constrained, partial-reality validation (space)**. Airborne and marine systems sit in between, blending controlled facilities with real-world trials. A consistent trend is the integration of **instrumented test environments with digital twins**, enabling bidirectional feedback between physical experiments and simulation models—an increasingly critical capability for validating autonomous and safety-critical systems.

===== Summary =====

This chapter develops a comprehensive view of how **control, decision-making, and motion planning** form the core of autonomous system behavior, and how these elements vary across domains and implementation paradigms. It begins by contrasting **classical control methods**—such as PID, LQR, and state estimation—with **AI-based approaches** like reinforcement learning and neural network controllers. Classical methods offer strong guarantees in stability, transparency, and certifiability, making them well-suited for safety-critical low-level control. In contrast, AI-based methods provide adaptability and the ability to handle complex, nonlinear dynamics but introduce challenges in explainability, verification, and robustness. The chapter emphasizes that **hybrid architectures**—where AI handles high-level decisions and classical control ensures safe execution—are emerging as the most practical and safety-aligned approach.
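The classical side of this contrast can be made concrete with a minimal discrete PID loop. The sketch below drives a toy first-order plant toward a setpoint; the class name, gains, and plant model are illustrative assumptions, not taken from the chapter.

```python
# Minimal discrete PID controller (illustrative sketch; the gains and the
# first-order toy plant below are assumptions chosen for demonstration).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        # No derivative term on the first sample (no previous error yet).
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order plant (x' = u - x) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
state = 0.0
for _ in range(500):
    control = pid.update(1.0, state)
    state += 0.1 * (control - state)
final_state = state
```

Even this toy loop shows why classical control suits certification: its behavior is fully determined by three interpretable gains that can be analyzed for stability.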

The chapter then explores the **decision and planning hierarchy**, distinguishing between behavioral algorithms (“what to do”) and motion planning (“how to do it”). Behavioral methods such as finite state machines, behavior trees, and utility-based reasoning govern high-level actions like lane changes or yielding, while motion planners generate feasible trajectories using techniques like A*, RRT*, and model predictive control. A key insight is the tight coupling between these layers and the control system: perception feeds behavior, behavior drives planning, and planning feeds control in a continuous loop. Safety emerges not from any single layer, but from their coordinated operation under uncertainty, including prediction of other agents, adherence to constraints, and real-time replanning.
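As a concrete instance of the motion-planning layer, here is a compact A* search over a 4-connected occupancy grid. The grid encoding, function name, and unit step costs are assumptions for illustration; planners such as RRT* and MPC operate on richer, continuous state spaces.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = obstacle) using a
    Manhattan-distance heuristic; returns the path as a list of (row, col)."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), start)]  # priority queue of (f = g + h, cell)
    came_from = {start: None}
    g_cost = {start: 0}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            # Reconstruct the path by walking parent links back to the start.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (current[0] + dr, current[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                tentative = g_cost[current] + 1  # unit cost per move
                if tentative < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = tentative
                    came_from[nxt] = current
                    heapq.heappush(open_set, (tentative + h(nxt), nxt))
    return None  # goal unreachable

# Example: route around a wall occupying the first two cells of the middle row.
demo_grid = [[0, 0, 0],
             [1, 1, 0],
             [0, 0, 0]]
route = astar(demo_grid, (0, 0), (2, 0))  # detours through the free column
```

The admissible Manhattan heuristic keeps the search optimal on this grid; the same relax-and-push pattern generalizes to weighted graphs.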

Finally, the chapter focuses on **validation and assurance**, highlighting the central role of digital twins, scenario-based testing, and formal methods. A modern V&V framework combines **multi-fidelity simulation (low- and high-fidelity)**, **design-of-experiments scenario generation**, and **formal specification of safety properties** (e.g., using Scenic and temporal logic). These methods enable systematic exploration of edge cases, measurement of safety metrics (e.g., time-to-collision, trajectory error), and structured comparison between simulation and real-world testing. Physical testing—from AV tracks to space qualification facilities—complements simulation, while continuous feedback from deployed systems updates the digital twin. The overarching theme is that **credible safety assurance requires a tightly integrated loop between simulation, formalism, and real-world validation**, with explicit measurement of the sim-to-real gap.
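The time-to-collision metric cited above can be sketched in its simplest one-dimensional form. The function below is an illustrative assumption; deployed metrics also account for 2-D geometry, acceleration, and perception uncertainty.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """One-dimensional TTC: seconds until the gap closes at the current
    closing speed, or infinity if the agents are not converging."""
    if closing_speed_mps <= 0.0:
        return float("inf")  # opening or constant gap: no predicted collision
    return gap_m / closing_speed_mps

# Example: ego vehicle 30 m behind a lead vehicle, closing at 10 m/s.
ttc = time_to_collision(30.0, 10.0)  # 3.0 s
```

In scenario-based testing, thresholds on such metrics turn each simulated or track run into a pass/fail measurement that can be compared across fidelity levels.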

===== Assessments =====

^ # ^ Project Title ^ Description ^ Learning Objectives ^