Axiomatic System Safety Approaches in the Evaluation of AI-related Risks within Complex Systems
There is currently strong motivation to apply AI constructs across all forms of systems, operations, and uses, including safety-related functionality based upon AI processes. Many are attempting to develop generic requirements associated with AI design and implementation, but such efforts are not system-, operation-, or use-specific. Consequently, specific hazards, threats, and vulnerabilities cannot be identified, nor can the associated risks and mitigations be defined. Detailed analyses are required based upon system safety axioms.
System Safety-related Axioms…
There are many axioms that are not readily apparent. These axioms have been acquired through research within system safety, safety management, safety engineering, human factors, and other system assurance disciplines. Such axioms require integration to address system risks, and associated axioms in the context of system assurance are also included.
Axioms equate to truisms, tenets, rules, and principles. Consider formal rules and informal axioms based upon science as well as heuristics, which provide methods of solving problems for which no formula exists. Heuristics are informal methods based upon experience, employing a form of trial-and-error iteration.
System Situation Awareness…
System situation awareness of these axioms must be maintained during decisions that can affect risk elimination and/or control. Maintaining awareness of an evolving situation is a complex process that includes recognizing that perception of reality may differ from reality itself. Situation awareness requires continual questioning, cross-checking, refinement, and updating of perception. Constant, conscious monitoring of the total system and human situation is required. Situation awareness refers to a human’s ability to accurately perceive what is going on within the safety system. It extends throughout the system and adverse-progression life cycles. The human needs to maintain control over the system (including AI).
Axiomatic Design and Universal Design Theory…
In axiomatic design there are underlying rules, or design axioms, on which good design is based: engineering is motivated by observing and enabling good design; two simple axioms concerning independence and information may govern design (much as laws of physics govern nature); and under these axioms, designs must be decomposed into a hierarchical structure. Axiomatic design is practical because automated design tools enable modeling of complex interrelationships.
(For additional information on UDT refer to:
Lossack, R., Grabowski, H., The Axiomatic Approach in the Universal Design Theory, Proceedings of ICAD2000, First International Conference on Axiomatic Design, Cambridge, MA, June 21-23, 2000, ICAD005.
Suh, Nam Pyo, Axiomatic Design: Advances and Applications, Oxford University Press, New York, 2001.)
Discussions of axiomatic design are applicable to system safety in that almost any formal method in science, engineering, and management, once understood, can be adapted to solve safety-related challenges; hence, axiomatic system safety approaches. Keep in mind, however, that the rules defined in axiomatic design remain applicable when a safety application is attempted.
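To make the independence axiom concrete, the following is a minimal sketch, assuming a simple 0/1 design-matrix representation of how functional requirements map to design parameters; the matrix and classification thresholds are illustrative, not a prescribed safety method.

```python
# Minimal sketch: classifying a design matrix per the independence axiom.
# FR_i is affected by DP_j when a[i][j] == 1. An uncoupled design has a
# diagonal matrix, a decoupled design a triangular matrix, anything else
# is coupled (the independence axiom is violated).

def classify_design_matrix(a):
    """Classify a square design matrix (list of rows of 0/1 flags)."""
    n = len(a)
    diagonal = all(a[i][j] == 0 for i in range(n) for j in range(n) if i != j)
    if diagonal:
        return "uncoupled"      # each FR satisfied by exactly one DP
    lower = all(a[i][j] == 0 for i in range(n) for j in range(i + 1, n))
    upper = all(a[i][j] == 0 for i in range(n) for j in range(i))
    if lower or upper:
        return "decoupled"      # FRs satisfiable if DPs are fixed in sequence
    return "coupled"            # independence axiom violated

# Hypothetical example: two safety functional requirements, two design parameters.
matrix = [[1, 0],
          [1, 1]]               # FR2 depends on both DP1 and DP2
print(classify_design_matrix(matrix))  # -> "decoupled"
```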
Axiom 1 (Axiom of finite physical effects) …
This axiom concerns the components, or basic elements, defined at the abstraction level of physical principles. We cannot design anything that is incompatible with natural principles and laws, and these well-known physical principles are finite in number.
Hypothesis 1 (Hypothesis of finite basic elements) …
On every abstraction level there is only a finite number of basic elements.
Hypothesis 2 (Hypothesis of finite abstraction levels) …
There are a finite number of abstraction levels one can use to model an artifact or to describe design processes.
Hypothesis 3 (Hypothesis of finite transitions) …
The number of possible transitions between different abstraction levels is finite. The mapping between the abstraction levels is a mapping between the basic elements defined in each abstraction level.
Hypothesis 4 (Hypothesis of invention) …
New artifacts are always created from a new combination of known basic elements. This applies to the basic elements of the abstraction levels, e.g., a function, a physical principle, an effective surface, etc., which are to be combined.
Hypothesis 5 (Hypothesis of solution finding) …
Each product (safety) requirement points to at least one solution area. From this it follows that a solution is determined unambiguously if the set of requirements is complete, consistent, and valued. Specifics remain important, not generalizations.
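The following is an illustrative sketch of Hypothesis 5: each requirement should point to at least one solution area, and a solution remains ambiguous while requirements are unresolved or unvalued. The requirement names and solution areas below are hypothetical examples, not taken from a specific system.

```python
# Illustrative sketch of the hypothesis of solution finding.
requirement_to_solutions = {
    "limit tank pressure":            ["relief valve", "pressure regulator"],
    "prevent internal corrosion":     ["desiccant dryer"],
    "alert operator on overpressure": [],   # gap: no solution area identified yet
}

def assess_solution_finding(mapping):
    gaps       = [r for r, sols in mapping.items() if not sols]        # incomplete set
    ambiguous  = {r: s for r, s in mapping.items() if len(s) > 1}      # needs valuation
    determined = {r: s[0] for r, s in mapping.items() if len(s) == 1}  # unambiguous
    return gaps, ambiguous, determined

gaps, ambiguous, determined = assess_solution_finding(requirement_to_solutions)
print("Unresolved requirements:", gaps)
print("Requirements still needing valuation:", list(ambiguous))
print("Unambiguously determined:", determined)
```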
Methods of Safety-Related Integration…
Within the safety context there are many forms of integration: the integration of requirements driven by the system assurance disciplines; coordinating requirements development; refining subsystem and system requirements; coordinating system and subsystem specifications; and developing interface control documentation. The aspects of conducting system-level analyses are also refined, including assimilating, modeling, simulation, breadboarding, and testing.
Further Considerations…
Applying life cycle considerations enables the form, fit, and function of the system elements: hardware, software, firmware, logic, the human, and the environment. There are further needs associated with organizational coordination, decision making, system management practices, and communications with the timely interchange of data and information, incorporating protocols and procedures, and developing written literature related to the design. Additional activities include developing and implementing resources, acquiring data and information, data mining, and inclusive research and development. Integration also means incorporating system integration rules, inductive and deductive logic, precepts, consistent practices, naming conventions, and the use of integration tools and methods.
System assurance…
Safety (system) assurance equates to the concept of an acceptable level of risk. In a general context, risk relates to the likelihood of any form of harm to the system. If the system does not perform as intended there may be unfavorable outcomes such as failures, malfunctions, errors, and mistakes, and consequent hazards, threats, and vulnerabilities.
Some events may or may not directly relate to a safety-related outcome. However, when unfavorable situations occur there may be synergistic relationships or synergistic risks. When the system is not in balance, it is not operating within the design envelope or within specification. Such synergistic relationships represent an adverse or unintended integration. System assurance includes integrating the specialty discipline requirements to enable acceptable risk. These efforts include the identification, elimination, and control of the specialty-related system and synergistic risks.
Consider that if the system fails due to an inadequate reliability requirement, such a failure may be a hazard. In this case a reliability-related risk can be directly equated to a safety-related risk. Another synergistic risk may be associated with a cyber threat and vulnerability: a hacker gains access to a safety-critical system and causes an inadvertent action, which is a hazard or a threat. Hence, a security-related risk results in an adverse safety-related outcome.
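A hedged sketch of this idea follows: a risk owned by another specialty discipline is flagged as a synergistic safety risk when its consequence is a hazard or threat. The discipline names and example entries are illustrative assumptions, not drawn from the article.

```python
# Sketch: representing cross-discipline (synergistic) risks.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    discipline: str          # e.g. "reliability", "security", "safety"
    leads_to_hazard: bool    # does the unfavorable outcome create a hazard/threat?

risks = [
    Risk("valve actuator fails below required reliability", "reliability", True),
    Risk("hacker commands inadvertent brake release",       "security",    True),
    Risk("telemetry log fills disk",                        "availability", False),
]

# A risk owned by another specialty becomes a safety-related (synergistic)
# risk when its consequence is a hazard or threat to the system.
synergistic = [r for r in risks if r.discipline != "safety" and r.leads_to_hazard]
for r in synergistic:
    print(f"Synergistic safety risk via {r.discipline}: {r.description}")
```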
Inclusive Analyses…
Scenario-driven hazard, threat, and vulnerability analysis, with risk ranking and risk assessment, has been applied to system analysis because it enables an overall process for systematically analyzing system risks. The technique relies on understanding the dynamics of a system accident or adverse system sequence. Accidents are unplanned sequences of events that result in harm; intentional threats are planned. Accidents and adverse events are not usually the result of a single cause, hazard, or threat; they are the result of many initiators and contributors, threats, and vulnerabilities. In hypothesizing a potential system risk, the analyst thinks in terms of a scenario. The scenario-driven analysis process involves constructing scenarios by identifying initiators, subsequent contributors, threats, and vulnerabilities, and defining the harm.
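The following is a minimal sketch of the scenario construct described above: initiators, contributors, threats/vulnerabilities, and the resulting harm. The 1-5 likelihood and severity scales and the example entries are assumed for illustration; they are not a prescribed ranking scheme.

```python
# Minimal sketch of a scenario record for scenario-driven analysis.
from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    initiators: List[str]
    contributors: List[str]
    threats_vulnerabilities: List[str]
    harm: str
    likelihood: int   # 1 (remote) .. 5 (frequent) -- assumed illustrative scale
    severity: int     # 1 (negligible) .. 5 (catastrophic)

    def risk_rank(self) -> int:
        # Simple likelihood x severity ranking for ordering scenarios.
        return self.likelihood * self.severity

scenario = Scenario(
    initiators=["specification error in AI perception function"],
    contributors=["control not verified", "delayed operator contingency response"],
    threats_vulnerabilities=["unauthenticated maintenance interface"],
    harm="train overruns movement authority",
    likelihood=2,
    severity=5,
)
print("Risk ranking:", scenario.risk_rank())
```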
Determining potential event propagation through a complex system involves extensive analysis. Specific system assurance methods such as software hazard analysis, human interface analysis, additional scenario analysis, and modeling techniques may be applied to determine system, systemic, and synergistic risks, which arise from the inappropriate interaction of software, firmware, logic, human, machine, and environment.
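One way to reason about event propagation is to traverse a directed graph of system elements from an initiating event to the harms it can reach. The sketch below is a simple breadth-first traversal over a hypothetical propagation graph; the nodes and edges are assumptions for illustration only.

```python
# Illustrative sketch: propagating an initiating event through a simple
# directed graph of system elements to see which outcomes are reachable.
from collections import deque

propagates_to = {
    "sensor fault":        ["AI perception error"],
    "AI perception error": ["wrong speed command"],
    "wrong speed command": ["operator alert", "overspeed"],
    "operator alert":      [],             # contingency path; stops here
    "overspeed":           ["derailment"], # harm
}

def reachable(initiator):
    seen, queue = set(), deque([initiator])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(propagates_to.get(node, []))
    return seen

print(reachable("sensor fault"))
```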
The scenario concept first came to mind after studying Willie Hammer’s books and material on system safety, and in later discussions with Hammer. Hammer[1] initially discussed the concepts of initiators, contributors, and primary hazards in the context of hazard analysis. Hammer noted that determining exactly which hazard is or has been directly responsible for an accident is not as simple as it seems. Consequently, Hammer described a set of hazards that form sequences within the potential or actual accident. The sequences comprise initiating, contributory, and primary hazards. Initiating hazards define the start of the adverse sequence; they are latent design defects, errors, or oversights which, under certain conditions, manifest or trigger the adverse flow. Contributory hazards are unsafe acts and/or conditions that contribute within the flow. Primary hazards are the harm. Hammer specifically described an accident sequence involving a series of events that result in the rupture of a high-pressure air tank. The injury and/or damage resulting from the rupture of the tank were considered the primary hazards. The moisture that caused corrosion of the tank was considered the initiating hazard; the corrosion, loss of strength, and pressure were contributory hazards. The author conducted numerous discussions with Hammer to refine the scenario-driven concepts applied.
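Hammer’s air tank example can be written out as an ordered adverse flow. The sketch below simply encodes the sequence described above and checks its structure (initiating hazard first, primary hazard last, contributory hazards in between).

```python
# Hammer's high-pressure air tank sequence, encoded as (hazard, role) pairs
# taken from the example described in the text.
tank_sequence = [
    ("moisture in the tank",          "initiating hazard"),
    ("corrosion of the tank wall",    "contributory hazard"),
    ("loss of structural strength",   "contributory hazard"),
    ("internal pressure",             "contributory hazard"),
    ("tank rupture: injury/damage",   "primary hazard"),
]

def valid_adverse_flow(seq):
    """An adverse flow starts with an initiating hazard, ends with the
    primary hazard (the harm), and everything between contributes."""
    roles = [role for _, role in seq]
    return (roles[0] == "initiating hazard"
            and roles[-1] == "primary hazard"
            and all(r == "contributory hazard" for r in roles[1:-1]))

for hazard, role in tank_sequence:
    print(f"{role:>20}: {hazard}")
print("Well-formed sequence:", valid_adverse_flow(tank_sequence))
```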
System Accidents…
System accidents or adverse system events can arise from complex or simple circumstances. Initiators could be the result of latent hazards, such as software design errors, specification errors, or oversights involving inappropriate assumptions. If controls are not adequate, they themselves become initiators or contributory hazards. If a control is not verified, it may not function when required. Validation considers the adequacy of the control: it is the determination of the sufficiency of the control and whether it has been appropriately designed or applied. Initiating and contributory hazards are unsafe acts or conditions which, under specific conditions, will result in system accidents or intentional adverse events.
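A hedged sketch of this point: a control is credited as a mitigation only when it is both verified (it functions when required) and validated (it is adequate for the hazard); otherwise it should be treated as a potential initiator or contributory hazard. The control name in the example is hypothetical.

```python
# Sketch: crediting a control only when it is both verified and validated.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    verified: bool    # confirmed to function when required
    validated: bool   # confirmed adequate / appropriately designed for the hazard

def credit_control(control: Control) -> str:
    if control.verified and control.validated:
        return f"{control.name}: credited as a mitigation"
    return f"{control.name}: treat as potential initiating/contributory hazard"

print(credit_control(Control("overspeed interlock", verified=True, validated=False)))
```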
Most accidents are the result of human error, and it could be argued that many unsafe conditions associated with a design, or vulnerabilities, are the result of human error, oversight, omission, poor assumptions, or poor decisions. Keeping this human interface in mind, determining hazards/threats requires a separation between the physical condition and an apparent human error. This being the case, a criterion was needed. A line of logic was defined which separates physical unsafe conditions from non-physical unsafe human acts. The line is drawn where the human assumes control of, interacts with, or interfaces with the system; that is, when the human is directly in the loop, such as a train operator. Since the human is directly in the loop, if the human deviates there can be direct harm, such as from a delayed contingency response.
[1] Hammer, W., Handbook of System and Product Safety, Prentice-Hall, Inc., 1972, pp. 63-64.