====== Software Systems and Middleware ======
What is Software?

{{:en:safeav:figure4a.jpg?600|}}

**Programmable Hardware and the Emergence of Software Systems**

The previous chapter introduced electronic hardware and the role of electronic components in implementing system functionality. However, the physical nature of hardware—and the inherent complexity of designing across mechanical, electrical, and logical domains—places fundamental limits on the speed and flexibility with which new system capabilities can be developed. To address these limitations, hardware platforms evolved to support programmability after fabrication. This programmability enables a separation between physical implementation and functional behavior, allowing systems to be adapted without redesigning the underlying hardware.

  - **Configuration:** In many modern systems, hardware components can be configured after silicon fabrication to support multiple operating modes or product variants. For example, parameters such as bus widths, cache sizes, or feature sets may be selected through configuration registers or firmware-controlled settings.
  - **Hardware Function Realization:** Certain hardware platforms support the post-silicon realization of hardware functionality through programmable logic structures. A canonical example is the Field Programmable Gate Array (FPGA), which enables designers to implement custom digital circuits after manufacturing. These devices are programmed using hardware description languages (HDLs), such as Verilog or VHDL, and have become foundational in embedded systems, prototyping, and specialized computing.
  - **Programmable Processors:** A broad class of stored-program computing engines based on the von Neumann architecture, including microprocessors and microcontrollers, falls into this category. Historically programmed in assembly language, these devices are now predominantly programmed using high-level languages such as C, along with higher-level abstractions in more complex systems.

These programming paradigms introduce several important system-level considerations:

  - **Development Ecosystem:** Programmability necessitates a supporting software development toolchain, including compilers, assemblers, linkers, and debuggers. This development ecosystem becomes an integral part of the system and must be maintained, validated, and supported throughout the product lifecycle.
  - **Product Lifecycle:** Historically, system programming was performed during manufacturing, resulting in a largely static, well-contained product. Post-deployment reprogramming was relatively rare, with notable exceptions in domains such as space systems. In contrast, modern systems increasingly rely on field updates and continuous software evolution, fundamentally altering lifecycle management.
  - **Peripherals and Interconnects:** System flexibility was further enhanced through standardized hardware peripherals. These devices integrate mechanical, electrical, and computational functions and communicate via well-defined interconnect standards such as PCI and USB. This modularity enables extensibility and interoperability across systems.

The concept of programmable hardware was significantly advanced in the 1960s with the introduction of the IBM System/360, which formalized the notion of a stable computer architecture. This development marked a critical transition from device-specific design to platform-based computing and introduced several enduring properties:

  - **Abstraction and Compatibility:** Computer architectures retained the fundamental von Neumann model while allowing multiple implementations of the same abstraction. This enabled backward compatibility, allowing software developed for one generation of hardware to execute on future systems. As a result, performance improvements could be driven by advances in semiconductor processes and microarchitecture without requiring changes to application software.
  - **Operating Systems:** The presence of a stable hardware abstraction enabled the development of higher-level system software. Concepts such as process isolation, scheduling, and resource management were formalized within operating systems, which provided a consistent execution environment and significantly improved programmability, portability, and system utilization.
  - **Networking:** As computing systems proliferated, the need for communication between geographically distributed machines led to the development of networking. Layered abstractions—from physical transmission to application protocols—enabled reliable data exchange and ultimately supported the emergence of distributed systems and global connectivity.

Since the introduction of computer architectures in the 1960s, rapid advances in semiconductor technology, system design, and networking have driven an exponential expansion in computing capability. These developments have transformed nearly every aspect of modern society through what is broadly referred to as information technology. The programming of these systems—spanning configuration, control, and application logic—is collectively known as software.

Open-source systems have played a transformative role in the evolution of information technology by accelerating innovation, lowering barriers to entry, and standardizing software infrastructure across heterogeneous environments. Foundational platforms such as Linux, the Apache HTTP Server, and languages and ecosystems such as Python and GCC enabled a global, collaborative development model in which individuals, academia, and industry could contribute to shared software stacks. This model fostered rapid iteration, transparency, and portability, allowing software to scale from individual machines to cloud-scale distributed systems. Open-source licensing also enabled companies to build commercial products atop shared infrastructure, leading to the emergence of entire ecosystems around cloud computing, data analytics, and artificial intelligence. As a result, open-source software became a cornerstone of modern IT, underpinning everything from web services to high-performance computing and enabling a pace of innovation that would have been difficult to achieve through proprietary development alone.

====== History of Software and Cyber-Physical Systems ======

While the IT ecosystem drove massive innovations and built incredible capabilities, these capabilities could not be directly used in cyber-physical systems. Cyber-physical software differs from conventional embedded or enterprise software because it operates under strict real-time constraints and requires robust fault tolerance and safety compliance. The historical introduction of software into cyber-physical systems followed different timelines across ground, airborne, marine, and space domains, but in all four cases the long-term trend was the same: software evolved from supporting narrow control functions to becoming the central coordinating layer for sensing, decision-making, communication, and actuation. In the earliest generation of these systems, most functionality was mechanical, hydraulic, analog, or electromechanical. As digital electronics matured, software first entered as a way to improve control precision, reduce weight, support diagnostics, and increase flexibility. Over time, however, software stopped being merely an enhancement and became essential to system operation. This shift was one of the major enablers of autonomy.

In **ground systems**, especially automobiles, software emerged in a practical production role during the 1970s and early 1980s, when tightening emissions regulations pushed manufacturers toward microprocessor-based engine control. Early automotive software was relatively narrow in scope, focused on ignition timing, fuel injection, and engine management. As electronics spread into anti-lock braking, traction control, airbags, steering, body electronics, and infotainment, software grew from embedded control logic into a distributed system running across many electronic control units. The later introduction of in-vehicle networks such as CAN and FlexRay further expanded software’s role, because control units now had to exchange data and coordinate across domains rather than operate as isolated devices. By the 2010s, with electrification and ADAS, software had become inseparable from perception, energy management, diagnostics, communications, and vehicle behavior.

In **airborne systems**, software entered earlier and under stricter safety expectations because avionics quickly became tied to navigation, stability, and flight control. Early aircraft electronics were largely analog and federated, but the move to digital control accelerated in the 1970s and 1980s, culminating in the rise of fly-by-wire systems. NASA notes that its F-8 Digital Fly-By-Wire aircraft became, on May 25, 1972, the first aircraft to fly completely dependent on an electronic flight-control system, marking a major turning point in the acceptance of software within the control loop. Later developments such as glass cockpits, FADEC, and integrated avionics made software central not only to control, but also to displays, redundancy management, fault monitoring, and mission systems. Because software was trusted with flight-critical functions so early, airborne systems also developed rigorous assurance frameworks earlier than most other sectors.

In **marine systems**, software was introduced more gradually and often first appeared as an aid to navigation, propulsion monitoring, and ship management rather than as the immediate core of vessel control. During the 1980s and 1990s, software became increasingly important through GPS integration, electronic charting, digital propulsion governors, alarm monitoring, and networking standards such as NMEA 0183 and NMEA 2000. As ships adopted Integrated Bridge Systems and Integrated Platform Management Systems, software took on a more integrative role, connecting radar, sonar, charting, safety alerts, and propulsion information into shared consoles and coordinated workflows. The marine sector generally moved more slowly than aerospace or automotive because of lower production volumes, long vessel lifecycles, and a historically stronger dependence on mechanical and human-operated systems. Still, the same underlying pattern emerged: software shifted from assisting operators to structuring the flow of information and control across the vessel.

In **space systems**, software became important very early because spacecraft had to function with limited or delayed human intervention. Even early missions required onboard digital logic for guidance, control, telemetry, and fault management. Apollo is a landmark example: NASA records describe the Apollo primary guidance, navigation, and control system as centered on the Apollo Guidance Computer, making software a mission-critical part of spacecraft operation during the 1960s lunar program. In later decades, spacecraft software expanded to support attitude control, payload operation, onboard data handling, autonomous fault detection, and increasingly software-defined mission behavior. Modern space systems add reconfigurable payloads, autonomous navigation, and onboard AI, but the historical pattern remains continuous: because space systems operate remotely and under extreme constraints, software has long been essential not just for convenience, but for basic mission survival and autonomy.

As software methods migrated from traditional computing into cyber-physical systems (CPS), a distinct class of software infrastructure emerged to manage the tight coupling between computation and the physical world. Central to this evolution was the adoption of **real-time operating systems (RTOSes)**, which provide deterministic task scheduling, bounded interrupt latency, and predictable timing behavior—properties essential for interacting with sensors, actuators, and control loops. Unlike general-purpose operating systems, RTOSes are designed to guarantee that critical tasks execute within strict temporal constraints, often using priority-based preemptive scheduling and carefully managed resource sharing. Representative RTOS implementations include VxWorks, widely used in aerospace and defense systems; QNX, common in automotive and industrial platforms; and FreeRTOS, broadly adopted in embedded and IoT devices. In addition to RTOS kernels, CPS software stacks increasingly incorporated device drivers, middleware for communication (e.g., message queues and publish–subscribe frameworks such as DDS), and hardware abstraction layers (HALs) to isolate application logic from platform-specific details. These components enabled modular software architectures while preserving the determinism required for control and safety.
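
The priority-based preemptive scheduling policy described above can be illustrated with a toy discrete-time simulation. This is a sketch of the policy only; the task names, priorities, and tick counts are invented for illustration, and no real RTOS API is used:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                          # lower value = higher priority (common RTOS convention)
    name: str = field(compare=False)
    remaining: int = field(compare=False)  # remaining execution time, in ticks

def schedule(releases, horizon):
    """Simulate priority-based preemptive scheduling for `horizon` ticks.

    `releases` maps a tick to the tasks released at that tick; the returned
    timeline names the task that ran during each tick ("idle" if none).
    """
    ready = []       # min-heap of ready tasks, ordered by priority
    timeline = []
    for tick in range(horizon):
        for task in releases.get(tick, []):
            heapq.heappush(ready, task)   # a newly released task may preempt
        if ready:
            current = ready[0]            # the highest-priority ready task runs
            timeline.append(current.name)
            current.remaining -= 1
            if current.remaining == 0:
                heapq.heappop(ready)
        else:
            timeline.append("idle")
    return timeline

# A low-priority logger is preempted by a high-priority control task at tick 1:
timeline = schedule({0: [Task(2, "logger", 4)], 1: [Task(1, "control", 2)]}, 7)
```

In a real kernel such as FreeRTOS or VxWorks the equivalent decision is made inside the tick interrupt, and context switches complete in bounded time, which is what yields the deterministic behavior these systems depend on.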

Across domains such as ground, airborne, marine, and space systems, RTOS-based architectures became foundational to system design, with domain-specific adaptations. In **ground systems**, automotive platforms standardized software stacks such as AUTOSAR, where RTOS scheduling supports engine control units (ECUs), braking systems (ABS), and advanced driver assistance systems (ADAS). In **airborne systems**, avionics platforms such as the Boeing 787 rely on partitioned RTOS environments (often based on VxWorks) to meet stringent safety certification requirements (e.g., DO-178C), ensuring temporal and spatial isolation between flight-critical functions. In **marine systems**, integrated bridge and navigation systems—such as those used on modern commercial vessels and naval ships—employ real-time software (often QNX-based) to coordinate radar, GPS, and autopilot control loops under standards like IEC 61162 (NMEA). In **space systems**, spacecraft such as the Mars Perseverance Rover utilize RTOS platforms like VxWorks to manage guidance, navigation, and control in environments where remote operation and fault tolerance are essential. Over time, these systems evolved from tightly coupled, monolithic implementations to more layered and componentized architectures, incorporating standardized interfaces and increasingly sophisticated middleware. This progression laid the groundwork for modern trends such as software-defined vehicles, autonomous systems, and distributed CPS platforms, where software not only controls physical processes but also enables continuous updates, adaptability, and higher-level system intelligence.

In cyber-physical systems (CPS), the role of open-source software has been more gradual but increasingly significant, particularly as systems have become more complex, networked, and software-defined. Platforms such as FreeRTOS, Zephyr, and middleware frameworks like ROS have enabled broader access to embedded and robotic system development, fostering innovation in domains such as autonomous vehicles, industrial automation, and drones. Open-source approaches in CPS provide advantages in transparency, flexibility, and community-driven validation, which are particularly valuable for research and prototyping. However, their adoption in safety-critical domains—such as avionics, automotive safety systems, and space missions—has required careful integration with certification processes, long-term support models, and rigorous verification and validation practices. Increasingly, hybrid models are emerging in which open-source components form the foundation of development platforms, while certified, domain-specific layers ensure compliance with safety and reliability requirements, reflecting a convergence between the open innovation model of IT and the stringent assurance needs of cyber-physical systems.
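
At the core of middleware frameworks such as DDS and ROS lies the publish-subscribe pattern, in which publishers and subscribers are decoupled through named topics. A minimal in-process sketch follows; the `MessageBus` class, topic name, and message contents are invented for illustration and do not correspond to any real DDS or ROS API:

```python
from collections import defaultdict
from typing import Any, Callable

class MessageBus:
    """Minimal in-process publish-subscribe bus (illustrative only)."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver to every subscriber of the topic; the publisher never
        # references subscribers directly, which is the point of the pattern.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
readings = []
bus.subscribe("/imu/accel", readings.append)   # e.g. a state estimator
bus.subscribe("/imu/accel", lambda m: None)    # e.g. a data logger
bus.publish("/imu/accel", {"x": 0.1, "y": 0.0, "z": 9.8})
```

Real middleware adds what this sketch omits: network transport, message typing, discovery, and quality-of-service policies such as delivery deadlines and durability.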

====== Software and Safety Standards ======

As software moved from advisory and convenience roles into closed-loop control, fault management, and autonomy, safety standards had to shift from focusing mainly on hardware reliability to addressing software behavior, development process, traceability, and verification evidence. The key historical shift was this: hardware could often be analyzed in terms of random failures and wear-out mechanisms, but software introduced a different kind of risk—systematic faults from requirements errors, design flaws, implementation mistakes, and unexpected interactions. That forced each domain to build standards that emphasized lifecycle rigor, requirements traceability, verification independence, configuration control, and structured safety arguments rather than just component robustness. IEC 61508 became the broad functional-safety reference point for programmable electronic systems and explicitly includes software requirements in Part 3, while later domain-specific standards adapted that logic to their own operating environments.

In ground systems, especially automotive, the early era of software safety was relatively informal: OEMs and suppliers used internal engineering discipline, testing, and FMEA-style thinking, but there was no unified framework tailored to vehicle software. As vehicles became software-intensive—first in engine control, then braking, steering, airbags, networking, and ADAS—the industry needed a standard that treated software as part of a full safety lifecycle. That came through ISO 26262, first published in 2011 as an adaptation of IEC 61508 for road vehicles. ISO 26262 introduced Automotive Safety Integrity Levels (ASILs), hazard analysis and risk assessment, lifecycle processes, and safety measures for both hardware and software, embedding software assurance into vehicle development rather than leaving it as a late-stage test problem. In practical terms, the standard pushed the automotive industry toward stronger requirements engineering, bidirectional traceability, safer software architecture, verification planning, and formal integration of software into system-level safety cases.

In airborne systems, software safety standards emerged earlier and with greater rigor because software entered flight-critical functions sooner. Aviation could not treat software as just another engineering layer once digital flight control, navigation, and avionics displays became mission- and safety-critical. That is why DO-178, originally published in 1981, became so influential: it defined design assurance for airborne software and tied development rigor to the criticality of the function. Over time this matured through DO-178B and then DO-178C in 2011, which remains the core software assurance framework recognized by the FAA through AC 20-115D. The airborne sector’s key historical move was to make software safety depend not on testing alone, but on documented objectives, lifecycle evidence, configuration control, structural coverage, tool qualification where needed, and verification commensurate with software level. In other words, aviation moved earliest and most clearly toward the idea that safe software is demonstrated through a disciplined assurance process, not just by showing that a program “seems to work.”

In marine systems, the evolution was slower and more fragmented. Marine governance historically focused more on mechanical integrity, redundancy, seaworthiness, and prescriptive equipment rules than on software-specific lifecycle assurance. As ships adopted integrated bridge systems, dynamic positioning, digital navigation, and autonomous functions, classification societies such as DNV, ABS, and Lloyd’s Register increasingly had to account for software quality, cyber resilience, and failure behavior in control systems. But unlike aviation and automotive, the marine sector did not converge as early on a single universally dominant software-safety standard. Instead, it has generally relied on a patchwork of class rules, IEC-derived functional-safety thinking, equipment standards, and system-specific assurance practices. So the historical movement in marine has been from equipment approval and redundancy rules toward a more software-aware model, but one that still remains less unified and less process-centered than in aerospace or automotive. That difference reflects the sector’s lower production volumes, varied vessel types, long lifecycles, and less centralized certification structure. As a result, marine governance has remained more prescriptive and performance-based than process-assurance-based.

In space systems, software safety evolved under extreme mission-assurance constraints rather than through a single commercial certification pathway. Space programs recognized early that software errors could be catastrophic because repair is difficult or impossible, communication delays are long, and missions are expensive. For a long time, safety was handled through agency-specific reliability doctrine, redundancy, conservative design, and system engineering discipline rather than a single software certification standard like DO-178. NASA’s own software-safety framework became more explicit with NASA-STD-8719.13, first issued in 1997 and updated since; NASA describes it as specifying the activities necessary to ensure safety is designed into software acquired or developed by the agency. The space sector’s historical movement, then, has been from mission-specific reliability practice toward more formalized software-safety activities, documentation, and risk-scaled rigor. Compared with airborne systems, the emphasis is often less on certifying a product line for repeated operation and more on ensuring that mission-specific software hazards are identified, mitigated, and managed as part of a broader system safety case.

====== Software Supply Chain and Manufacturing ======

Software entered complex engineered products long before anyone talked about “software-defined” anything. In the earliest generations of electronic products, software was small, tightly coupled to a specific hardware function, and often treated almost like firmware: a fixed control layer burned into ROM or maintained by a small engineering team. Productization in that era was primarily a hardware discipline. Once the design was frozen and qualified, the software was expected to stay stable for years, sometimes for the entire product life. Maintainability existed, but mostly in the form of patching defects, issuing service updates, and preserving compatibility with replacement hardware. The supply chain focus was similarly physical: semiconductors, boards, connectors, and mechanical parts dominated risk and planning. Software dependencies were limited enough that organizations could often understand the full stack internally. That began to change as products became networked, feature-rich, and digitally updatable.

From the 1980s through the 2000s, software became a much larger share of product value, especially in embedded systems, telecommunications, aerospace, and automotive electronics. This changed productization from a one-time release activity into an ongoing lifecycle problem. A product now had to be launched, updated, serviced, secured, and sometimes reconfigured in the field. Maintainability became more than clean code or modular design; it came to mean version control across hardware variants, traceability from requirements to deployed binaries, long-term support for aging platforms, and the ability to diagnose failures across interacting subsystems. At the same time, the software supply chain became more complex. Instead of mostly internal code, products increasingly depended on third-party operating systems, middleware, protocol stacks, compilers, libraries, vendor SDKs, and eventually open-source components. NIST now describes the software supply chain as the collection of activities involved in producing and delivering software, noting that its integrity depends on the security and discipline of those activities; modern guidance emphasizes practices such as SBOMs, vendor risk assessment, vulnerability management, and secure development frameworks. Historically, that marks a major shift: software was no longer just something a company wrote, but something it assembled, integrated, inherited, and continuously governed.
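
At its simplest, the SBOM-based vulnerability management mentioned above reduces to cross-referencing a machine-readable component inventory against published advisories. The toy sketch below illustrates the idea; all component names, versions, and advisory entries are invented:

```python
# Toy SBOM check: flag inventoried components with known-vulnerable versions.
# Component names, versions, and the advisory table are invented for illustration.
sbom = [
    {"name": "libfoo", "version": "1.2.0"},
    {"name": "rtos-kernel", "version": "10.4.1"},
    {"name": "tls-stack", "version": "3.0.1"},
]
advisories = {  # component name -> versions with published vulnerabilities
    "tls-stack": {"3.0.0", "3.0.1"},
    "libbar": {"0.9.9"},
}

def vulnerable_components(sbom, advisories):
    """Return SBOM entries whose exact version appears in an advisory."""
    return [c for c in sbom
            if c["version"] in advisories.get(c["name"], set())]

flagged = vulnerable_components(sbom, advisories)
```

Real SBOM formats such as SPDX and CycloneDX carry much richer metadata (suppliers, hashes, license data), and matching is done against vulnerability databases with version-range semantics rather than a hand-written table of exact versions.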

The modern phase extends this logic even further. In connected products, especially vehicles, software is now a primary means of differentiation, feature delivery, and even business model evolution. That is where the idea of the software-defined vehicle (SDV) comes in. Historically, vehicles were built around many function-specific ECUs with tightly coupled hardware and software, and new capability typically arrived only with a new model year or hardware redesign. The SDV concept reflects a move away from that paradigm toward centralized or zonal computing, richer abstraction layers, and over-the-air updatability, so that features, performance, user experience, and even some platform behavior can evolve after the vehicle is sold. Industry analysts describe this shift as part of a broader transition in automotive E/E architecture, where software and centralized computing become the core enablers of innovation and ongoing value creation. From a historical perspective, the SDV is the endpoint of a long arc: products began as hardware with a little embedded code, became integrated systems whose success depended on software lifecycle management, and are now increasingly understood as updatable software platforms embodied in hardware.

====== Validation and Verification ======

IT-based software is verified through a structured combination of requirements-based testing, code analysis, and runtime validation, augmented by principles from the Carnegie Mellon University Software Engineering Institute (SEI), such as the Capability Maturity Model Integration (CMMI) and disciplined software engineering practices. Verification begins with ensuring that requirements are well-defined, traceable, and testable—aligned with CMMI’s emphasis on requirements management and validation. Development proceeds through unit, integration, and system testing, supported by peer reviews, formal inspections, and static analysis, reflecting SEI’s focus on early defect removal and process discipline. Measurement and analysis play a key role, with metrics collected to assess defect density, coverage, and process performance. Configuration management ensures that all artifacts (code, tests, requirements) are version-controlled and reproducible, while process maturity levels guide organizations toward increasingly predictable and optimized verification practices. Continuous integration pipelines automate regression testing, and in higher-maturity environments, quantitative process control and causal analysis are used to systematically improve quality. Finally, verification extends into operations through monitoring and feedback loops, embodying the SEI philosophy of continuous process improvement across the software lifecycle.
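
Metrics-driven verification of this kind is often operationalized as quantitative quality gates in a continuous integration pipeline. The sketch below shows the idea with two common metrics; the threshold values are illustrative assumptions, not taken from CMMI or any standard:

```python
def defect_density(defects, ksloc):
    """Defects per thousand source lines of code (KSLOC)."""
    return defects / ksloc

def passes_quality_gate(defects, ksloc, coverage,
                        max_density=0.5, min_coverage=0.80):
    """Gate a build on defect density and test coverage.

    Thresholds are invented for illustration; real gates are calibrated
    from an organization's own measurement baselines.
    """
    return defect_density(defects, ksloc) <= max_density and coverage >= min_coverage

# A build with 12 known defects in 30 KSLOC and 86% statement coverage:
ok = passes_quality_gate(defects=12, ksloc=30.0, coverage=0.86)
```

In practice such gates run automatically on every commit, so a regression in coverage or an accumulation of open defects blocks integration rather than surfacing late in system testing.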

Validation of cyber-physical software places strong emphasis on hardware/software co-verification using a spectrum of simulation and emulation techniques to ensure correct behavior before deployment in the physical world. At the earliest stages, model-in-the-loop (MIL) and software-in-the-loop (SIL) simulations evaluate control algorithms and software logic against mathematical models of the environment and plant dynamics. These are followed by hardware-in-the-loop (HIL) approaches, where real control software executes on target or representative hardware while interacting with simulated sensors, actuators, and physical processes in real time—commonly used in automotive engine control, avionics flight systems, and industrial automation. As system complexity increases, processor-in-the-loop (PIL) and full-system emulation platforms enable timing-accurate execution and validation of embedded software under realistic workloads. In semiconductor and advanced embedded domains, platforms such as QEMU and commercial FPGA-based emulators allow early software bring-up prior to silicon availability. Across these stages, validation focuses not only on functional correctness but also on timing determinism, fault handling, and interaction with physical processes. This layered approach enables progressive risk reduction, bridging the gap between abstract models and real-world deployment while supporting the stringent safety and reliability requirements of cyber-physical systems.
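
A software-in-the-loop setup of the kind described above can be sketched by closing a control loop entirely in software against a simple plant model. Here a proportional controller regulates a first-order thermal plant; the plant equations, gains, and setpoint are all invented for illustration:

```python
def plant_step(temp, heater, dt=0.1, ambient=20.0, k_loss=0.1, k_heat=2.0):
    """First-order thermal plant: drift toward ambient plus heater input."""
    return temp + dt * (k_loss * (ambient - temp) + k_heat * heater)

def controller(setpoint, temp, kp=0.5):
    """Proportional controller with actuator saturation to [0, 1]."""
    return max(0.0, min(1.0, kp * (setpoint - temp)))

def run_sil(setpoint=35.0, temp=20.0, steps=2000):
    """Close the loop entirely in software and return the final temperature."""
    for _ in range(steps):
        heater = controller(setpoint, temp)
        temp = plant_step(temp, heater)
    return temp

final = run_sil()  # settles below the 35-degree setpoint: classic proportional-control offset
```

At the HIL stage, the same `controller` code would execute on target hardware while `plant_step` is replaced by real-time simulated sensor and actuator I/O, so the control logic under test is unchanged across stages.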

====== Summary ======

A dominant IT electronic ecosystem drives the fundamental rhythm of hardware and software development. Cyber-physical systems, with considerably lower volume, have had to adapt to this dominant rhythm in the following ways:

  - **Hardware Obsolescence and Reliability:** The IT ecosystem churns through product generations every 18–24 months, while cyber-physical systems have operational lifetimes beyond five years. This demands very careful supply chain management for semiconductor components.
  - **Software Ecosystem:** Operating systems, compilers, open-source software, communication standards, and middleware evolve continuously. Cyber-physical systems therefore require a dedicated architecture in which safety-critical/real-time components can work alongside IT components (e.g., infotainment systems).
  - **Development Cost:** Traditional models of fully encapsulated cyber-physical products (e.g., automobile platforms) are increasingly shifting to the IT release cycle with over-the-air updates.
  - **Cybersecurity:** The introduction of communication systems and traditional IT software into cyber-physical systems has opened an attack surface for bad actors.

Taken together, the move from largely mechanical systems to software-defined vehicles represents a massive shift in design, manufacturing, support, and even legal ownership: software is typically licensed to the OEM and then to the final customer.

<WRAP excludefrompdf>
The following chapters contain more details:
  * [[en:safeav:softsys:softstacks]]
  * [[en:safeav:softsys:softmgmt]]
  * [[en:safeav:softsys:softtests]]
  * [[en:safeav:softsys:criticalsys]]
  * [[en:safeav:softsys:vaicomp]]
</WRAP>