====== Simulation & Formal Methods ======
  
===== Why Simulation Needs Formalism =====
  
Simulation is indispensable in autonomous-vehicle validation because it lets us probe safety-critical behavior without exposing the public to risk, but simulation alone is only as persuasive as its predictive value. A simulator that cannot anticipate how the real system behaves—because of poor modeling, missing variability, or unmeasured assumptions—does not provide credible evidence for a safety case. This is why we pair simulation with formal methods: a discipline for specifying scenarios and safety properties with mathematical precision, generating test cases systematically, and measuring how closely simulated outcomes match track or road trials. In our program, the digital twin of the vehicle and its operating environment acts as the concrete “world model,” while formal specifications direct the exploration of that world to the places where safety margins are most likely to fail.
  
  
===== From Scenarios to Properties: Making Requirements Executable =====
  
Formal methods begin by making requirements executable. We express test intent as a distribution over concrete scenes using the SCENIC language, which provides geometric and probabilistic constructs to describe traffic, occlusions, placements, and behaviors. A SCENIC program defines a scenario whose parameters are sampled to generate test cases; each case yields a simulation trace against which temporal properties—our safety requirements—are monitored. This tight loop, implemented with the VERIFAI toolkit, supports falsification (actively searching for violations), guided sampling, and clustering of outcomes for test selection.
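The shape of this sample–simulate–monitor loop can be sketched in plain Python. Everything below is an invented stand-in, not the actual toolkit APIs: a toy lead-vehicle scenario plays the role of a SCENIC program, a constant-deceleration model plays the role of the simulator, and plain random sampling stands in for VERIFAI's guided samplers. The safety property monitored over each trace is "always(gap > 2 m)".

```python
import random

def sample_scenario(rng):
    """Sample concrete scenario parameters, as a SCENIC program would."""
    return {
        "lead_speed": rng.uniform(0.0, 5.0),    # m/s, slow or stopped lead
        "initial_gap": rng.uniform(5.0, 30.0),  # m, initial separation
    }

def simulate(params, ego_speed=10.0, decel=4.0, dt=0.1, horizon=10.0):
    """Toy closed-loop run: ego brakes at `decel` once the gap drops below 15 m."""
    gap, v_ego, v_lead = params["initial_gap"], ego_speed, params["lead_speed"]
    trace, t = [], 0.0
    while t < horizon:
        if gap < 15.0:                          # simple braking trigger
            v_ego = max(0.0, v_ego - decel * dt)
        gap += (v_lead - v_ego) * dt
        trace.append(gap)
        t += dt
    return trace

def monitor(trace, margin=2.0):
    """Check always(gap > margin); return (safe?, worst-case robustness)."""
    rho = min(g - margin for g in trace)
    return rho > 0.0, rho

def falsify(n_samples=200, seed=0):
    """Random falsification: collect safe and error tables of outcomes."""
    rng = random.Random(seed)
    safe_table, error_table = [], []
    for _ in range(n_samples):
        params = sample_scenario(rng)
        ok, rho = monitor(simulate(params))
        (safe_table if ok else error_table).append((params, rho))
    return safe_table, error_table
```

The robustness value `rho` (how far the worst trace point sits from the safety margin) is what guided samplers exploit: cases with small positive robustness mark the boundary where falsification should concentrate its search.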
Our project also leverages scenario distribution over maps: using OpenDRIVE networks of the TalTech campus, SCENIC instantiates the same behavioral narrative—say, overtaking a slow or stopped vehicle—at diverse locations, ensuring that lane geometry, curbside clutter, and occlusions vary meaningfully while the safety property remains constant. The result is a family of tests that stress the same planning and perception obligations under different geometric and environmental embeddings.
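A minimal sketch of that idea, with invented segment names and a simplified anchor format standing in for real OpenDRIVE road references: one overtake narrative, embedded at several map locations whose geometry differs while the monitored property stays fixed.

```python
# Hypothetical map anchors: each is a road segment with its own geometry.
ANCHORS = [
    {"segment": "campus_loop_a", "lane_width": 3.5, "curvature": 0.00},
    {"segment": "campus_loop_b", "lane_width": 3.0, "curvature": 0.02},
    {"segment": "parking_access", "lane_width": 2.8, "curvature": 0.05},
]

def instantiate_overtake(anchor, stopped_vehicle_offset=20.0):
    """Embed the same overtake narrative at one map location."""
    return {
        "segment": anchor["segment"],
        "ego_start_s": 0.0,                    # arc-length start pose
        "obstacle_s": stopped_vehicle_offset,  # stopped vehicle ahead of ego
        "lane_width": anchor["lane_width"],    # varies per location
        "curvature": anchor["curvature"],      # varies per location
        "property": "always(lateral_clearance > 0.5)",  # constant obligation
    }

cases = [instantiate_overtake(a) for a in ANCHORS]
```

Every generated case carries the same safety property but a different geometric embedding, which is exactly the variation the paragraph above describes.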
  
===== Selection, Execution, and Measuring the Sim-to-Real Gap =====
  
A formal pipeline is only convincing if simulated insights transfer to the track. After falsification, we select representative safe/unsafe cases through visualization or clustering of the safe/error tables and implement them on a closed course with controllable agents. Notably, the same SCENIC parameters (starting pose, start time, velocities) drive hardware actors on the track as drove agents in simulation, subject to physical limitations of the test equipment. This parity enables apples-to-apples comparisons between simulated and real traces.
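One way to sketch the selection step: cluster error-table cases by the robustness value of the monitored property (a tiny stdlib-only 1-D k-means) and take the case nearest each cluster centre as the representative to reproduce on the track. The table format `(case_id, robustness)` is an assumption for illustration, not the toolkit's actual schema.

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means over robustness values; returns cluster centroids."""
    step = max(1, len(values) // k)
    centroids = sorted(values)[::step][:k]      # spread initial centroids
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for v in values:
            i = min(range(len(centroids)), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else old
                     for c, old in zip(clusters, centroids)]
    return centroids

def representatives(error_table, k=2):
    """Pick one representative case per cluster: the row nearest each centroid."""
    centroids = kmeans_1d([rho for _, rho in error_table], k=k)
    return [min(error_table, key=lambda row: abs(row[1] - c))
            for c in centroids]
```

Picking the row nearest each centroid keeps the track campaign small while still covering qualitatively different failure severities.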
This formal sim-to-track pipeline does more than label outcomes; it helps diagnose causes. By replaying logged runs through the autonomy stack’s visualization tools, we can attribute unsafe behavior to perception misses, unstable planning decisions, or mispredictions, and then target those subsystems in subsequent formal campaigns. In one case set, the dominant failure mode was oscillatory planning around a pedestrian, discovered and characterized through this exact loop of scenario specification, falsification, track execution, and trace analysis.
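One diagnosis step mentioned above can be sketched concretely: flagging oscillatory planning from a replayed trace. The trace format (a list of steering commands at a fixed timestep) and the 1 Hz flip-rate threshold are assumptions for illustration, not the stack's actual log schema.

```python
def sign_changes(signal, deadband=0.01):
    """Count sign flips in a command trace, ignoring near-zero samples."""
    signs = [1 if v > deadband else -1 if v < -deadband else 0 for v in signal]
    signs = [s for s in signs if s != 0]        # drop deadband samples
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def is_oscillatory(steering_trace, dt=0.1, max_flip_rate_hz=1.0):
    """Flag a run whose steering direction flips faster than the threshold."""
    duration = len(steering_trace) * dt
    return duration > 0 and sign_changes(steering_trace) / duration > max_flip_rate_hz
```

Running such a check over both the simulated and the track trace of the same SCENIC case gives a like-for-like signal for whether the oscillation is an artifact of the simulator or a genuine planner property.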
  
===== Multi-Fidelity Workflows and Continuous Assurance =====
  
Exhaustive testing is infeasible, so we combine multiple fidelity levels to balance breadth with realism. Low-fidelity (LF) platforms sweep large scenario grids quickly to map where safety margins begin to tighten; high-fidelity (HF) platforms (e.g., LGSVL/Unity integrated with Autoware) replay the most informative LF cases with photorealistic sensors and closed-loop control. Logging is harmonized so that KPIs and traces are comparable across levels, and optimization or tuning derived from LF sweeps is verified under HF realism before any track time is spent. In extensive experiments, thousands of LF runs revealed broad patterns, but only HF replays uncovered subtle interactions that flipped outcomes—evidence that fidelity matters exactly where the safety case will later be challenged.
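The LF-to-HF promotion logic can be sketched as follows, under loudly stated assumptions: the "low-fidelity model" here is an invented analytic stopping-distance margin, and "promotion" simply means keeping the tightest-margin grid cells within an HF replay budget; the real pipeline's models and budgets differ.

```python
def lf_margin(speed, gap):
    """Cheap analytic margin: available gap minus stopping distance (toy model,
    assumed 4 m/s^2 braking)."""
    stopping = speed * speed / (2.0 * 4.0)
    return gap - stopping

def lf_sweep(speeds, gaps):
    """Sweep the full parameter grid at low fidelity; log (params, margin) rows."""
    return [((s, g), lf_margin(s, g)) for s in speeds for g in gaps]

def promote_to_hf(rows, budget=5):
    """Promote only the cases nearest the safety boundary, within the HF budget."""
    return sorted(rows, key=lambda r: abs(r[1]))[:budget]

speeds = [5.0, 7.5, 10.0, 12.5]   # m/s
gaps = [5.0, 10.0, 15.0, 20.0]    # m
hf_cases = promote_to_hf(lf_sweep(speeds, gaps))
```

Sorting by absolute margin concentrates the expensive HF replays on the boundary region where LF and HF outcomes are most likely to disagree, which is exactly where the paragraph above reports that fidelity flipped results.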