Operational assurance

Operational assurance is an approach to maintaining confidence that a deployed system continues to operate within intended bounds during real-world use. The term is used in connection with autonomous, safety-critical, and software-defined systems, where design-time validation may be insufficient because systems can change after deployment through updates, environmental variation, or adaptive behavior.[1][2]

NASA's technology taxonomy uses the phrase "operational assurance of autonomous systems" to describe confirming, before or during operations, that an autonomous system is operating safely and efficiently and is not adversely affecting other systems.[3] Related literature connects operational assurance with runtime monitoring, post-deployment verification, and dynamic assurance cases, which extend assurance activity beyond initial certification or release.[4][5]

Overview

Operational assurance addresses the problem that systems judged acceptable at release may not remain acceptable in operation. This is especially relevant for autonomous and AI-enabled systems, where traditional assurance methods may struggle with complexity, uncertainty, and post-deployment change.[6][7]

In practice, the concept overlaps with continuous monitoring, anomaly detection, enforcement of operational constraints, and the collection of evidence about deployed behavior.[8][9]
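
The following minimal sketch illustrates how such a loop might check a deployed reading against declared operating bounds and retain the result as assurance evidence. It is illustrative only; the names, fields, and limits are assumptions, not drawn from the cited sources.

    # Illustrative sketch of an operational-assurance loop: check deployed
    # behavior against declared operating bounds and record evidence.
    # All names, fields, and limits here are hypothetical.
    import time
    from dataclasses import dataclass

    @dataclass
    class OperatingBounds:
        max_speed: float     # assumed constraint: speed limit in m/s
        max_cpu_temp: float  # assumed constraint: temperature limit in deg C

    def within_bounds(reading: dict, bounds: OperatingBounds) -> bool:
        """True if the observed reading respects every declared bound."""
        return (reading["speed"] <= bounds.max_speed
                and reading["cpu_temp"] <= bounds.max_cpu_temp)

    def assurance_step(read_sensors, bounds, evidence_log):
        """Sample the system once, log evidence, and flag any violation."""
        reading = read_sensors()                         # observe behavior
        ok = within_bounds(reading, bounds)              # check constraints
        evidence_log.append((time.time(), reading, ok))  # keep evidence
        return ok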

Relationship to runtime assurance

Operational assurance overlaps with runtime assurance, an approach that combines design-time analysis with runtime mechanisms intended to preserve required safety or correctness properties during operation. In robotics research, runtime assurance has been described as using runtime monitors and control switching to keep systems within specified safety properties when high-performance controllers cannot be fully trusted on their own.[10]
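
A simplified sketch of such a switching scheme (often organized as a simplex architecture) is shown below. The controllers and the one-dimensional safety property are invented for illustration and are not taken from the cited work.

    # Illustrative simplex-style runtime assurance wrapper. A runtime
    # monitor checks the untrusted controller's proposed action against a
    # safety property and switches to a trusted fallback when it fails.
    # The property and controllers below are toy assumptions.
    SPEED_LIMIT = 10.0  # assumed bound on the system's next speed

    def is_safe(state, action):
        # Safety property: the resulting speed must stay within the limit.
        return abs(state["speed"] + action) <= SPEED_LIMIT

    def advanced_controller(state):
        return 5.0   # high-performance but untrusted acceleration command

    def safe_controller(state):
        return -1.0  # conservative braking action with known guarantees

    def runtime_assured_step(state):
        """Return the advanced action if the monitor accepts it, else fall back."""
        proposed = advanced_controller(state)
        if is_safe(state, proposed):     # runtime monitor
            return proposed
        return safe_controller(state)    # control switching to the safe fallback

    # At speed 8.0, the proposed +5.0 would breach the limit, so the
    # wrapper switches to the safe controller's -1.0 action.
    assert runtime_assured_step({"speed": 8.0}) == -1.0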

Post-deployment monitoring

Public guidance on AI assurance has increasingly emphasized post-deployment monitoring. NIST has argued for a "trust but verify continuously" approach and its AI RMF Playbook discusses real-time monitoring, anomaly detection, incident response, and continuous feedback during deployment and operation.[11][12][13]
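
As a simplified illustration of the kind of real-time anomaly detection such guidance describes, a deployed metric stream can be screened with an online z-score test. The statistics and threshold below are assumptions for illustration, not an implementation from NIST guidance.

    # Illustrative streaming anomaly detector using Welford's online
    # mean/variance update. The z-score threshold is an assumption; a
    # production monitor would feed flagged points into incident response.
    import math

    class StreamingAnomalyDetector:
        def __init__(self, threshold=3.0):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0  # running sum of squared deviations
            self.threshold = threshold

        def observe(self, x):
            """Update running statistics; return True if x looks anomalous."""
            anomalous = False
            if self.n >= 2:
                std = math.sqrt(self.m2 / (self.n - 1))
                if std > 0 and abs(x - self.mean) / std > self.threshold:
                    anomalous = True  # candidate incident for follow-up
            self.n += 1               # Welford's update of mean and variance
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)
            return anomalous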

A 2026 NIST report also noted that terminology and best practices for monitoring deployed AI systems remain immature and fragmented, reflecting the still-emerging state of the field.[14]

