====== Validation of Control & Planning ======

| - | {{: | ||
| - | <todo @raivo.sell></todo> | + | |
===== Principles and Scope =====

Planning and control are where intent becomes motion. A planning stack selects a feasible, safety-aware trajectory under evolving constraints; a control stack then executes that trajectory, translating it into steering, throttle, and braking commands.

Planning/control validation must therefore be closed-loop: the planner’s decisions are only as good as the controller’s ability to execute them, so the two stacks are exercised together rather than in isolation.

A final principle is lifecycle realism. A digital twin is not just a CAD model; it is a live feedback loop receiving data from the physical system and its environment, and it evolves with that system across its operational lifetime.

===== Scenario-Based Validation with Digital Twins =====

The V&V workflow begins with a formal scenario description: the road layout, the actors and their behaviors, the environmental conditions, and explicit pass/fail criteria, captured in a machine-readable form.

To maintain broad coverage without sacrificing realism, validation can use the two-layer approach shown in Figure 1. A low-fidelity (LF) layer (e.g., SUMO) sweeps wide parameter grids quickly to reveal where planning/control behavior becomes critical; a high-fidelity (HF) layer (e.g., AWSIM) then replays those critical cases with full sensor and vehicle-dynamics models.

<figure Low and High Fidelity>
{{ : }}
<caption>Fidelity of AV simulation: a) Low-Fidelity SUMO simulator((Pablo Alvarez Lopez, Michael Behrisch, Laura Bieker-Walz, Jakob Erdmann, Yun-Pang Flötteröd, Robert Hilbrich, Leonhard Lücken, Johannes Rummel, Peter Wagner, and Evamarie Wießner. Microscopic traffic simulation using SUMO. In The 21st IEEE International Conference on Intelligent Transportation Systems. IEEE, 2018.)); b) High-Fidelity AWSIM simulator((Autoware Foundation. TIER IV AWSIM. https://github.com/tier4/AWSIM, 2022.))</caption>
</figure>

Formal methods strengthen this flow. In the simulation-to-track pipeline, scenarios and safety properties are specified formally (e.g., via Scenic and Metric Temporal Logic), falsification synthesizes challenging test cases, and a mapping executes those cases on a closed track((Fremont, D. J., et al. Formal scenario-based testing of autonomous vehicles: From simulation to the real world. In IEEE Intelligent Transportation Systems Conference (ITSC), 2020.)).

Finally, environment twins are built from aerial photogrammetry and point-cloud processing (with RTK-supported georeferencing), so that simulated roads, landmarks, and test routes align with their real-world counterparts.

===== Methods and Metrics for Planning & Control =====

Mission-level planning validation starts from a start–goal pair and asks whether the vehicle reaches the destination via a safe, policy-compliant trajectory. Your platform publishes three families of evidence: (i) trajectory-following error relative to the global path; (ii) safety outcomes such as collisions or violations of separation; and (iii) mission success (goal reached without violations). This couples path selection quality to execution fidelity.

At the local planning level, your case study focuses on the planner inside the autonomous software. The planner synthesizes a global and a local path, then evaluates them against predictions of the surrounding actors to select a safe local trajectory for maneuvers such as passing and lane changes. By parameterizing scenarios with variables such as the initial separation to the lead vehicle and the lead vehicle’s speed, you create a grid of concrete cases that stress the evaluator’s thresholds. The outcomes are categorized by meaningful labels—Success among them—so that the boundary between safe and unsafe behavior becomes visible across the grid.

<figure Trajectory Validation>
{{ : }}
<caption>Trajectory validation.</caption>
</figure>

Control validation links perception-induced delays to braking and steering outcomes. Your framework computes Time-to-Collision (TTC, the current gap divided by the closing speed) along with the simulator and AV-stack response times to detected obstacles. Sufficient response time allows a safe return to nominal headway; excessive delay predicts collision, sharp braking, or planner oscillations. By logging ground truth, perception outputs, CAN bus commands, and the resulting dynamics, the analysis separates sensing delays from controller latency, revealing where mitigation belongs (planner margins vs. control gains).

A necessary dependency is localization health. Your tests inject controlled GPS/IMU degradations and dropouts through simulator APIs, then compare expected vs. actual pose per frame to quantify drift. Because planning and control are sensitive to absolute and relative pose, this produces actionable thresholds for safe operation (e.g., maximum tolerated RMS deviation before reducing speed or restricting maneuvers).

Finally, your program extends to low-level control via HIL-style twins. A Simulink-based network of virtual ECUs and data buses sits between Autoware’s navigation outputs and simulator actuation. This lets you simulate bus traffic, counters, and checksums; disable subsystems (e.g., the steering module) to provoke graceful degradation; and confirm that fault handling and fallback behavior engage as intended.
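The counters-and-checksums idea can be sketched with a minimal frame check. The additive checksum and 4-bit rolling counter below are illustrative only, not an automotive end-to-end protection profile:

```python
def checksum(payload: bytes, counter: int) -> int:
    # Simple additive checksum over payload plus rolling counter
    # (illustrative; production buses use standardized E2E profiles).
    return (sum(payload) + counter) & 0xFF

def validate_frame(payload: bytes, counter: int, chk: int,
                   last_counter: int) -> bool:
    """Reject frames with a bad checksum or a stale/repeated counter,
    as a virtual ECU on the simulated bus would."""
    fresh = counter == (last_counter + 1) & 0x0F   # 4-bit rolling counter
    return fresh and chk == checksum(payload, counter)

msg = b"\x10\x20"
ok = validate_frame(msg, counter=3, chk=checksum(msg, 3), last_counter=2)
```

Dropping or corrupting such frames on the virtual bus is one way to provoke the graceful-degradation paths mentioned above.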