Draft:Operational assurance
Systems assurance concept
From Wikipedia, the free encyclopedia
Operational assurance is an approach to maintaining confidence that a deployed system continues to operate within intended bounds during real-world use. The term is used in connection with autonomous, safety-critical, and software-defined systems, where design-time validation may be insufficient because systems can change after deployment through updates, environmental variation, or adaptive behavior.[1][2]
NASA's technology taxonomy uses the phrase operational assurance of autonomous systems to refer to confirming, before or during operations, that an autonomous system is operating safely and efficiently and is not adversely affecting other systems.[3] Related literature connects operational assurance with runtime monitoring, post-deployment verification, and dynamic assurance cases that extend assurance activity beyond initial certification or release.[4][5]
Overview
Operational assurance addresses the problem that systems judged acceptable at release may not remain acceptable in operation. This is especially relevant for autonomous and AI-enabled systems, where traditional assurance methods may struggle with complexity, uncertainty, and post-deployment change.[6][7]
In practice, the concept overlaps with continuous monitoring, anomaly detection, maintenance of operational constraints, and collection of evidence about deployed behavior.[8][9]
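The combination of constraint checking and evidence collection described above can be sketched as follows. This is an illustrative example only, not a method from the cited sources; the constraint names, thresholds, and log format are hypothetical.

```python
# Sketch: checking deployed behavior against operational constraints
# and recording each check as an evidence record. All names and
# thresholds here are hypothetical.
import time

# Hypothetical operational constraints on deployed-system metrics.
CONSTRAINTS = {
    "latency_ms": lambda v: v <= 200,    # response time bound
    "error_rate": lambda v: v <= 0.01,   # acceptable failure fraction
}

def check(metrics, evidence_log):
    """Evaluate each metric against its constraint and append an
    evidence record for every check, pass or fail."""
    all_passed = True
    for name, predicate in CONSTRAINTS.items():
        passed = predicate(metrics[name])
        evidence_log.append({
            "timestamp": time.time(),
            "constraint": name,
            "value": metrics[name],
            "passed": passed,
        })
        all_passed = all_passed and passed
    return all_passed
```

Every check is logged regardless of outcome, reflecting the idea that operational assurance accumulates evidence about deployed behavior rather than only recording violations.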
Relationship to runtime assurance
Operational assurance overlaps with runtime assurance, a field that combines design-time analysis with runtime mechanisms intended to preserve required safety or correctness properties during operation. In robotics research, runtime assurance has been described as using runtime monitors and control switching to keep systems within specified safety properties when high-performance controllers cannot be fully trusted on their own.[10]
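The monitor-and-switch pattern described above can be illustrated with a minimal Simplex-style loop: an untrusted high-performance controller runs by default, and a runtime monitor switches to a verified fallback whenever the predicted next state would leave the safe set. The controllers, plant model, and safety bound below are all hypothetical simplifications.

```python
# Sketch: Simplex-style runtime assurance on a one-dimensional system.
# Controllers, dynamics, and the safety bound are hypothetical.

def complex_controller(x):
    """High-performance but untrusted controller (aggressive gain)."""
    return -2.0 * x

def safe_controller(x):
    """Verified fallback controller (conservative gain)."""
    return -0.5 * x

def step(x, u, dt=0.1):
    """Simple integrator plant model."""
    return x + dt * u

def monitor_safe(x, bound=1.0):
    """Runtime monitor: the safety property is |x| <= bound."""
    return abs(x) <= bound

def run(x0, steps=50):
    """Run the control loop; switch to the fallback whenever the
    untrusted controller's predicted next state violates safety."""
    x, switched = x0, False
    for _ in range(steps):
        u = complex_controller(x)
        if not monitor_safe(step(x, u)):
            u = safe_controller(x)   # fall back to the verified controller
            switched = True
        x = step(x, u)
    return x, switched
```

Starting inside the safe set, the untrusted controller runs unchecked; starting near or outside the bound, the monitor forces the fallback until the predicted trajectory is safe again.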
Post-deployment monitoring
Public guidance on AI assurance has increasingly emphasized post-deployment monitoring. NIST has argued for a "trust but verify continuously" approach and its AI RMF Playbook discusses real-time monitoring, anomaly detection, incident response, and continuous feedback during deployment and operation.[11][12][13]
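One common form of the real-time monitoring and anomaly detection mentioned in this guidance is flagging observations that deviate sharply from a rolling baseline. The sketch below is illustrative only and is not drawn from NIST's materials; the window size and z-score threshold are arbitrary assumptions.

```python
# Sketch: rolling z-score anomaly check over a deployed metric stream.
# Window size and threshold are hypothetical tuning choices.
from collections import deque
import math

class DriftMonitor:
    """Flags values that deviate sharply from a rolling baseline."""

    def __init__(self, window=100, threshold=3.0, min_samples=10):
        self.values = deque(maxlen=window)  # rolling baseline window
        self.threshold = threshold          # z-score cutoff
        self.min_samples = min_samples      # warm-up before flagging

    def observe(self, x):
        """Record x and return True if it is anomalous relative to
        the recent window of observations."""
        anomalous = False
        if len(self.values) >= self.min_samples:
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            std = math.sqrt(var) or 1e-9    # guard against zero variance
            anomalous = abs(x - mean) / std > self.threshold
        self.values.append(x)
        return anomalous
```

In a deployment pipeline, anomalous observations would typically feed the incident-response and continuous-feedback loops that the guidance describes.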
A 2026 NIST report also noted that terminology and best practices for monitoring deployed AI systems remain immature and fragmented, which reflects the still-emerging nature of the field.[14]
