====== Assessment ======
| + | |||
| + | ^ # ^ Project Title ^ Description ^ Learning Objectives ^ | ||
| + | | 1 | Multi-Sensor Perception Benchmarking | Build a perception pipeline using at least two sensor modalities (e.g., camera + LiDAR or radar). Evaluate object detection performance under varying conditions (lighting, weather, occlusion) using real or simulated datasets. | Understand strengths/ | ||
| + | | 2 | ODD-Driven Scenario Generation & Validation Study | Define an Operational Design Domain (ODD) for an autonomous system (e.g., urban driving, coastal navigation). Generate a set of test scenarios (including edge cases) and validate system performance using simulation tools. | Define and scope an ODD. Develop scenario-based testing strategies. Understand coverage and edge-case generation. Link scenarios to safety outcomes. | | ||
| + | | 3 | Sensor Failure and Degradation Analysis | Simulate sensor failures (e.g., camera blackout, GNSS loss, radar noise) and analyze system-level impact on perception, localization, | ||
| + | | 4 | AI vs Conventional Algorithm Validation Study | Compare a traditional perception algorithm (e.g., rule-based or classical ML) with a deep learning model on the same dataset. Analyze differences in performance, | ||
| + | | 5 | End-to-End V&V Framework Design (Digital Twin) | Design a validation framework for perception, mapping, and localization using simulation (digital twin). Include KPIs, test conditions (e.g., ISO 26262, SOTIF), simulations, | ||
| + | |||
| + | |||