en:safeav:softsys [2026/03/31 11:16] (current) – airi
In cyber-physical systems (CPS), the role of open-source software has been more gradual but increasingly significant, particularly as systems have become more complex, networked, and software-defined. Platforms such as FreeRTOS, Zephyr, and middleware frameworks like ROS have enabled broader access to embedded and robotic system development, fostering innovation in domains such as autonomous vehicles, industrial automation, and drones. Open-source approaches in CPS provide advantages in transparency, flexibility, and community-driven validation, which are particularly valuable for research and prototyping.

However, their adoption in safety-critical domains—such as avionics, automotive safety systems, and space missions—has required careful integration with certification processes, long-term support models, and rigorous verification and validation practices. Increasingly, hybrid models are emerging in which open-source components form the foundation of development platforms, while certified, domain-specific layers ensure compliance with safety and reliability requirements, reflecting a convergence between the open innovation model of IT and the stringent assurance needs of cyber-physical systems.

====== Software and Safety Standards ======

As software moved from advisory and convenience roles into closed-loop control, fault management, and autonomy, safety standards had to shift from focusing mainly on hardware reliability to addressing software behavior, development process, traceability, and verification evidence. The big historical move was this: hardware could often be analyzed in terms of random failures and wear-out mechanisms, but software introduced a different kind of risk—systematic faults from requirements errors, design flaws, implementation mistakes, and unexpected interactions. That forced each domain to build standards that emphasized lifecycle rigor, requirements traceability, verification independence, configuration control, and structured safety arguments rather than just component robustness. IEC 61508 became the broad functional-safety reference point for programmable electronic systems and explicitly includes software requirements in Part 3, while later domain-specific standards adapted that logic to their own operating environments.

In ground systems, especially automotive, the early era of software safety was relatively informal: OEMs and suppliers used internal engineering discipline, testing, and FMEA-style thinking, but there was no unified framework tailored to vehicle software. As vehicles became software-intensive—first in engine control, then braking, steering, airbags, networking, and ADAS—the industry needed a standard that treated software as part of a full safety lifecycle. That came through ISO 26262, first published in 2011 as an adaptation of IEC 61508 for road vehicles. ISO 26262 introduced Automotive Safety Integrity Levels (ASILs), hazard analysis and risk assessment, lifecycle processes, and safety measures for both hardware and software, embedding software assurance into vehicle development rather than leaving it as a late-stage test problem. In practical terms, the standard pushed the automotive industry toward stronger requirements engineering, bidirectional traceability, safer software architecture, verification planning, and formal integration of software into system-level safety cases.
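
The ASIL classification at the heart of ISO 26262 hazard analysis can be sketched in a few lines. The function below is an illustrative reconstruction of the well-known severity/exposure/controllability lookup (the additive shortcut reproduces the standard's classification table); it is a teaching sketch, not a certified implementation.

```python
# Illustrative sketch of ISO 26262 hazard classification (HARA).
# ASIL is determined from Severity (S1-S3), probability of Exposure
# (E1-E4), and Controllability (C1-C3); the additive shortcut below
# reproduces the standard's lookup table.

def determine_asil(severity: int, exposure: int, controllability: int) -> str:
    """Return QM or ASIL A-D for a hazardous event."""
    if not (1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3):
        raise ValueError("S must be 1-3, E must be 1-4, C must be 1-3")
    total = severity + exposure + controllability  # ranges from 3 to 10
    if total <= 6:
        return "QM"  # quality management only, no ASIL required
    return "ASIL " + "ABCD"[total - 7]

# Worst case: most severe, most frequent, least controllable hazard.
print(determine_asil(3, 4, 3))  # ASIL D
print(determine_asil(1, 1, 1))  # QM
```

The higher the resulting ASIL, the more demanding the required development and verification measures become, which is exactly how the standard scales rigor to risk.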

In airborne systems, software safety standards emerged earlier and with greater rigor because software entered flight-critical functions sooner. Aviation could not treat software as just another engineering layer once digital flight control, navigation, and avionics displays became mission- and safety-critical. That is why DO-178, originally published in 1982, became so influential: it defined design assurance for airborne software and tied development rigor to the criticality of the function. Over time this matured through DO-178B and then DO-178C in 2011, which remains the core software assurance framework recognized by the FAA through AC 20-115D. The airborne sector's key historical move was to make software safety depend not on testing alone, but on documented objectives, lifecycle evidence, configuration control, structural coverage, tool qualification where needed, and verification commensurate with software level. In other words, aviation moved earliest and most clearly toward the idea that safe software is demonstrated through a disciplined assurance process, not just by showing that a program "seems to work."
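
The structural-coverage idea behind the highest assurance level can be made concrete. The sketch below illustrates MC/DC (modified condition/decision coverage), which DO-178C requires for Level A software: for each condition in a decision, there must be a pair of tests that differ only in that condition and flip the decision's outcome. The three-condition `decision` function is a made-up example, not from any real system.

```python
from itertools import product

# Illustrative sketch of MC/DC, the structural coverage DO-178C
# requires for Level A software. For each condition in a decision,
# MC/DC demands a pair of test vectors that differ ONLY in that
# condition and change the decision outcome, showing the condition
# independently affects the result.

def decision(a: bool, b: bool, c: bool) -> bool:
    """Hypothetical example decision with three conditions."""
    return a and (b or c)

def mcdc_pairs(fn, n_conditions: int) -> dict:
    """For each condition index, find one independence pair by search."""
    pairs = {}
    for i in range(n_conditions):
        for v in product([False, True], repeat=n_conditions):
            w = list(v)
            w[i] = not w[i]          # toggle only condition i
            if fn(*v) != fn(*w):     # outcome flips: independence shown
                pairs[i] = (v, tuple(w))
                break
    return pairs

pairs = mcdc_pairs(decision, 3)
print(len(pairs))  # 3 -- every condition has an independence pair
```

Lower software levels relax this: decision coverage suffices for Level B and statement coverage for Level C, which is how the standard scales verification effort to criticality.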

In marine systems, the evolution was slower and more fragmented. Marine governance historically focused more on mechanical integrity, redundancy, seaworthiness, and prescriptive equipment rules than on software-specific lifecycle assurance. As ships adopted integrated bridge systems, dynamic positioning, digital navigation, and autonomous functions, classification societies such as DNV, ABS, and Lloyd's Register increasingly had to account for software quality, cyber resilience, and failure behavior in control systems. But unlike aviation and automotive, the marine sector did not converge as early on a single universally dominant software-safety standard. Instead, it has generally relied on a patchwork of class rules, IEC-derived functional-safety thinking, equipment standards, and system-specific assurance practices. So the historical movement in marine has been from equipment approval and redundancy rules toward a more software-aware model, but one that still remains less unified and less process-centered than in aerospace or automotive. That difference reflects the sector's lower production volumes, varied vessel types, long lifecycles, and less centralized certification structure. Overall, marine governance has remained more prescriptive and performance-based than process-assurance-based.

In space systems, software safety evolved under extreme mission-assurance constraints rather than through a single commercial certification pathway. Space programs recognized early that software errors could be catastrophic because repair is difficult or impossible, communication delays are long, and missions are expensive. For a long time, safety was handled through agency-specific reliability doctrine, redundancy, conservative design, and system engineering discipline rather than a single software certification standard like DO-178. NASA's own software-safety framework became more explicit with NASA-STD-8719.13, first issued in 1997 and updated since; NASA describes it as specifying the activities necessary to ensure safety is designed into software acquired or developed by the agency. The space sector's historical movement, then, has been from mission-specific reliability practice toward more formalized software-safety activities, documentation, and risk-scaled rigor. Compared with airborne systems, the emphasis is often less on certifying a product line for repeated operation and more on ensuring that mission-specific software hazards are identified, mitigated, and managed as part of a broader system safety case.

====== Software Supply Chain and Manufacturing ======

Software entered complex engineered products long before anyone talked about "software-defined" anything. In the earliest generations of electronic products, software was small, tightly coupled to a specific hardware function, and often treated almost like firmware: a fixed control layer burned into ROM or maintained by a small engineering team. Productization in that era was primarily a hardware discipline. Once the design was frozen and qualified, the software was expected to stay stable for years, sometimes for the entire product life. Maintainability existed, but mostly in the form of patching defects, issuing service updates, and preserving compatibility with replacement hardware. The supply chain focus was similarly physical: semiconductors, boards, connectors, and mechanical parts dominated risk and planning. Software dependencies were limited enough that organizations could often understand the full stack internally. That began to change as products became networked, feature-rich, and digitally updatable.

From the 1980s through the 2000s, software became a much larger share of product value, especially in embedded systems, telecommunications, aerospace, and automotive electronics. This changed productization from a one-time release activity into an ongoing lifecycle problem. A product now had to be launched, updated, serviced, secured, and sometimes reconfigured in the field. Maintainability became more than clean code or modular design; it came to mean version control across hardware variants, traceability from requirements to deployed binaries, long-term support for aging platforms, and the ability to diagnose failures across interacting subsystems. At the same time, the software supply chain became more complex. Instead of mostly internal code, products increasingly depended on third-party operating systems, middleware, protocol stacks, compilers, libraries, vendor SDKs, and eventually open-source components. NIST now describes the software supply chain as the collection of activities involved in producing and delivering software, noting that its integrity depends on the security and discipline of those activities; modern guidance emphasizes practices such as SBOMs, vendor risk assessment, vulnerability management, and secure development frameworks. Historically, that marks a major shift: software was no longer just something a company wrote, but something it assembled, integrated, inherited, and continuously governed.
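
As a concrete illustration of the SBOM practice mentioned above, the sketch below builds a minimal, CycloneDX-flavored bill of materials. The component names, versions, and byte blobs are hypothetical, and real SBOM formats carry many more fields (licenses, suppliers, package URLs); this only shows the core idea of enumerating and fingerprinting what a product ships.

```python
import hashlib
import json

# Illustrative sketch: a minimal software bill of materials (SBOM) in
# a simplified CycloneDX-style shape. The component list and blobs
# below are hypothetical stand-ins for real firmware artifacts.

def make_sbom(product: str, components: list[dict]) -> dict:
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"component": {"name": product, "type": "application"}},
        "components": [
            {
                "name": c["name"],
                "version": c["version"],
                # Hash of the component blob so downstream consumers
                # can verify the integrity of what was actually shipped.
                "hashes": [{"alg": "SHA-256",
                            "content": hashlib.sha256(c["blob"]).hexdigest()}],
            }
            for c in components
        ],
    }

sbom = make_sbom("ecu-firmware", [
    {"name": "rtos-kernel", "version": "10.6.2", "blob": b"...kernel image..."},
    {"name": "tls-library", "version": "3.5.0", "blob": b"...tls library..."},
])
print(json.dumps(sbom, indent=2)[:120])
```

The point is governance: once every dependency is enumerated and fingerprinted, vulnerability disclosures and vendor advisories can be matched against the deployed fleet rather than investigated product by product.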

The modern phase extends this logic even further. In connected products, especially vehicles, software is now a primary means of differentiation, feature delivery, and even business model evolution. That is where the idea of the software-defined vehicle (SDV) comes in. Historically, vehicles were built around many function-specific ECUs with tightly coupled hardware and software, and new capability typically arrived only with a new model year or hardware redesign. The SDV concept reflects a move away from that paradigm toward centralized or zonal computing, richer abstraction layers, and over-the-air updatability, so that features, performance, user experience, and even some platform behavior can evolve after the vehicle is sold. Industry analysts describe this shift as part of a broader transition in automotive E/E architecture, where software and centralized computing become the core enablers of innovation and ongoing value creation. From a historical perspective, the SDV is the endpoint of a long arc: products began as hardware with a little embedded code, became integrated systems whose success depended on software lifecycle management, and are now increasingly understood as updatable software platforms embodied in hardware.
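
To make the over-the-air mechanics behind the SDV concrete, here is a minimal sketch of the checks an updater typically performs before installing a payload: integrity (a hash match), authenticity (here a shared-key HMAC standing in for a real PKI signature scheme), and anti-rollback (the version must move forward). The manifest layout, key, and payload are all hypothetical.

```python
import hashlib
import hmac

# Illustrative sketch: pre-install checks for an over-the-air update.
# A real SDV updater would use asymmetric signatures and a hardware
# root of trust; the shared-key HMAC here is a simplified stand-in.

def verify_update(manifest: dict, payload: bytes, key: bytes,
                  installed_version: tuple) -> bool:
    digest = hashlib.sha256(payload).hexdigest()
    if digest != manifest["sha256"]:
        return False  # payload corrupted or tampered with
    mac = hmac.new(key, manifest["sha256"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, manifest["mac"]):
        return False  # manifest not authentic
    return tuple(manifest["version"]) > installed_version  # anti-rollback

key = b"demo-shared-key"
payload = b"new braking-assist parameter set"
manifest = {"version": (2, 1, 0),
            "sha256": hashlib.sha256(payload).hexdigest()}
manifest["mac"] = hmac.new(key, manifest["sha256"].encode(),
                           hashlib.sha256).hexdigest()

print(verify_update(manifest, payload, key, installed_version=(2, 0, 3)))  # True
print(verify_update(manifest, payload, key, installed_version=(2, 2, 0)))  # False: rollback
```

The anti-rollback check matters as much as the signature: without it, an attacker could legitimately re-sign and redeploy an old build with known vulnerabilities.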

====== Validation and Verification ======

IT-based software is verified through a structured combination of requirements-based testing, code analysis, and runtime validation, augmented by principles from the Carnegie Mellon University Software Engineering Institute (SEI), such as the Capability Maturity Model Integration (CMMI) and other disciplined software engineering practices. Verification begins with ensuring that requirements are well-defined, traceable, and testable—aligned with CMMI's emphasis on requirements management and validation. Development proceeds through unit, integration, and system testing, supported by peer reviews, formal inspections, and static analysis, reflecting SEI's focus on early defect removal and process discipline. Measurement and analysis play a key role, with metrics collected to assess defect density, coverage, and process performance. Configuration management ensures that all artifacts (code, tests, requirements) are version-controlled and reproducible, while process maturity levels guide organizations toward increasingly predictable and optimized verification practices. Continuous integration pipelines automate regression testing, and in higher-maturity environments, quantitative process control and causal analysis are used to systematically improve quality. Finally, verification extends into operations through monitoring and feedback loops, embodying the SEI philosophy of continuous process improvement across the software lifecycle.
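
The measurement-and-analysis practice described above can be as simple as tracking a few indicators per build and gating releases on them. The specific metrics and thresholds below (defect density per KLOC, a coverage floor) are hypothetical examples, not values prescribed by CMMI.

```python
# Illustrative sketch: simple quality metrics of the kind a
# measurement-and-analysis program might track, plus a CI gate that
# blocks a release when the metrics regress. Thresholds are
# hypothetical examples.

def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects / (lines_of_code / 1000)

def regression_gate(coverage: float, density: float,
                    min_coverage: float = 0.80,
                    max_density: float = 1.0) -> bool:
    """Pass only if coverage is high enough and defect density low enough."""
    return coverage >= min_coverage and density <= max_density

d = defect_density(12, 48_000)  # 12 defects across 48 KLOC
print(d)                                          # 0.25
print(regression_gate(coverage=0.86, density=d))  # True
```

In higher-maturity organizations such gates feed quantitative process control: trends in these numbers, not individual failures, drive causal analysis and process change.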

Validation of cyber-physical software places strong emphasis on hardware/software co-verification using a spectrum of simulation and emulation techniques to ensure correct behavior before deployment in the physical world. At the earliest stages, model-in-the-loop (MIL) and software-in-the-loop (SIL) simulations evaluate control algorithms and software logic against mathematical models of the environment and plant dynamics. These are followed by hardware-in-the-loop (HIL) approaches, where real control software executes on target or representative hardware while interacting with simulated sensors, actuators, and physical processes in real time—commonly used in automotive engine control, avionics flight systems, and industrial automation. As system complexity increases, processor-in-the-loop (PIL) and full-system emulation platforms enable timing-accurate execution and validation of embedded software under realistic workloads. In semiconductor and advanced embedded domains, platforms such as QEMU and commercial FPGA-based emulators allow early software bring-up prior to silicon availability. Across these stages, validation focuses not only on functional correctness but also on timing determinism, fault handling, and interaction with physical processes. This layered approach enables progressive risk reduction, bridging the gap between abstract models and real-world deployment while supporting the stringent safety and reliability requirements of cyber-physical systems.
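
A software-in-the-loop setup can be shown in miniature: the controller under test runs against a plant model instead of real hardware, and the closed-loop simulation exposes a defect (here, the steady-state droop of a proportional-only controller) before anything touches a physical system. The thermal plant, gains, and setpoint are all hypothetical.

```python
# Illustrative SIL sketch: a proportional controller under test runs
# against a first-order thermal plant model. All constants are
# hypothetical; MIL/SIL/HIL differ in what is real versus simulated,
# but the closed-loop structure is the same.

def plant_step(temp: float, heater_power: float, dt: float = 0.1) -> float:
    """First-order plant: heated by the actuator, leaking to 20 C ambient."""
    return temp + dt * (heater_power - 0.1 * (temp - 20.0))

def controller(setpoint: float, temp: float) -> float:
    """Proportional controller under test, with actuator saturation."""
    power = 0.5 * (setpoint - temp)
    return max(0.0, min(power, 10.0))  # heater limited to 0-10 units

temp = 20.0                  # start at ambient
for _ in range(2000):        # simulate 200 s of closed-loop operation
    temp = plant_step(temp, controller(65.0, temp))

# The loop settles near 57.5 C, well below the 65 C setpoint: the
# simulation reveals proportional-only droop before deployment.
print(round(temp, 1))
```

Catching this kind of behavior in SIL is exactly the "progressive risk reduction" the text describes: the same controller binary can then move to HIL and finally to the real plant with the control-law defect already fixed.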

====== Summary ======

The dominant IT electronics ecosystem drives the fundamental rhythm of hardware and software development. Cyber-physical systems, produced in considerably lower volumes, have had to adapt to this rhythm in the following ways:

- **Hardware Obsolescence and Reliability:** The IT ecosystem churns through product generations every 18–24 months, while cyber-physical systems have operational lifetimes well beyond five years. This demands very careful supply chain management for semiconductor components.
- **Software Ecosystem:** Operating systems, compilers, open-source software, communication standards, and middleware evolve continuously, and cyber-physical systems must track that evolution. This requires a dedicated architecture in which safety-critical/real-time components can operate alongside IT components (e.g., infotainment systems).
- **Development Cost:** Traditional models of fully encapsulated cyber-physical products (e.g., automobile platforms) are increasingly giving way to IT-style release cycles with over-the-air updates, changing how development cost is incurred and recovered over the product's life.
- **Cybersecurity:** The introduction of communication systems and traditional IT software into cyber-physical systems has opened a new attack surface for bad actors.

Taken together, the move from largely mechanical systems to software-defined vehicles represents a massive shift in design, manufacturing, support, and even legal ownership, since software is typically licensed to the OEM and then to the final customer.