Engineering Safe and Secure Software Systems, 1st Edition, by C. Warren Axelrod
To Judy, David, Nicole, Elisabeth, Evan, and Jolie,
with wishes for a safer and more secure world for future generations
Contents
Preface xvii
Foreword xxi
1 Introduction 1
Preamble 1
Scope and Structure of the Book 3
Acknowledgments 4
Endnotes 5
2 Engineering Systems 7
Introduction 8
Some Initial Observations 8
Deficient Definitions 11
Rationale 12
What are Systems? 13
Deconstructing Systems Engineering 16
What Is Systems Engineering? 19
Systems Engineering and the Systems Engineering
Management Process 20
The DoD Text 22
Another Observation 22
More on Systems Engineering 23
The Systems Engineering Process (SEP) 23
Summary and Conclusions 26
Endnotes 26
3 Engineering Software Systems 29
Introduction 29
The Great Debate 31
Some Observations 32
Rationale 33
Understanding Software Systems Engineering 34
Deconstructing Software Systems Engineering 34
What Is Software? 35
What Are Software Systems? 36
Are Control Software Systems Different? 42
What is Software Systems Engineering? 42
The Software Systems Engineering Process 44
Steps in the Software Development Process 44
Omissions or Lack of Attention 48
Nonfunctional Requirements 48
Testing Nonfunctional Attributes 49
Verification and Validation 49
Creating Requisite Functional and Nonfunctional Data 52
Resiliency and Availability 55
Decommissioning 56
Summary and Conclusions 56
Endnotes 57
4 Engineering Secure and Safe Systems, Part I 59
Introduction 59
The Approach 60
Security Versus Safety 60
Four Approaches to Developing Critical Systems 63
The Dependability Approach 64
The Safety Engineering Approach 65
The Secure Systems Approach 67
The Real-Time Systems Approach 68
Security-Critical and Safety-Critical Systems 68
Summary and Conclusions 70
Endnotes 70
5 Engineering Secure and Safe Systems, Part 2 73
Introduction 73
Approach 75
Reducing the Safety-Security Deficit 76
Game-Changing and Clean-Slate Approaches 77
A Note on Protection 81
Safety-Security Governance Structure and Risk
Management 83
An Illustration 83
The General Development Life Cycle 84
Structure of the Software Systems Development
Life Cycle 86
Life Cycle Processes 89
Governance Structure for Systems Engineering Projects 92
Risks of Security-Oriented Versus Safety-Oriented
Software Systems 94
Expertise Needed at Various Stages 95
Summary and Conclusions 95
Endnotes 96
6 Software Systems Security and Safety Risk 99
Introduction 99
Understanding Risk 100
Risks of Determining Risk 100
Software-Related Risks 101
Motivations for Risk Mitigation 103
Defining Risk 104
Assessing and Calculating Risk 105
Threats Versus Exploits 107
Threat Risk Modeling 111
Threats from Safety-Critical Systems 114
Creating Exploits and Suffering Events 116
Vulnerabilities 119
Application Risk Management Considerations 120
Subjective vs. Objective vs. Personal Risk 121
Personalization of Risk 122
The Fallacies of Data Ownership, Risk Appetite, and
Risk Tolerance 122
The Dynamics of Risk 124
A Holistic View of Risk 125
Summary and Conclusions 126
Endnotes 128
7 Software System Security and Safety Metrics 131
Introduction 131
Obtaining Meaningful Data 133
Defining Metrics 133
Differentiating Between Metrics and Measures 135
Software Metrics 138
Measuring and Reporting Metrics 140
Metrics for Meeting Requirements 143
Risk Metrics 146
Consideration of Individual Metrics 146
Security Metrics for Software Systems 150
Safety Metrics for Software Systems 151
Summary and Conclusions 152
Endnotes 153
8 Software System Development Processes 157
Introduction 157
Processes and Their Optimization 158
Processes in Relation to Projects and Products/Services 159
Some Definitions 161
Chronology of Maturity Models 164
Security and Safety in Maturity Models 165
FAA Model 165
The +SAFE V1.2 Extension 167
The +SECURE V1.3 Extension 167
The CMMI® Approach 167
General CMMI® 167
CMMI® for Development 168
Incorporating Safety and Security Processes 169
+SAFE V1.2 Comparisons 169
+SECURE V1.2 Comparisons 172
Summary and Conclusions 173
Endnotes 175
9 Secure SSDLC Projects in Greater Detail 177
Introduction 177
Different Terms, Same or Different Meanings 178
Creating and Using Software Systems 180
Phases and Steps of the SSDLC 182
Summary and Conclusions 191
Endnotes 193
10 Safe SSDLC Projects in Greater Detail 195
Introduction 195
Definitions and Terms 196
Hazard Analysis 198
Software Requirements Hazard Analysis 199
Top-Level Design Hazard Analysis 200
Detailed Design Hazard Analysis 201
Code-Level Software Hazard Analysis 201
Software Safety Testing 201
Software/User Interface Analysis 202
Software Change Hazard Analysis 203
The Safe Software System Development Lifecycle 204
Combined Safety and Security Requirements 207
Summary and Conclusions 208
Endnotes 209
11 The Economics of Software Systems’ Safety and
Security 211
Introduction 211
Closing the Gap 212
Technical Debt 214
Application of Technical Debt Concept to Security
and Safety 215
System Obsolescence and Replacement 217
The Responsibility for Safety and Security by
Individuals and Groups 218
Basic Idea 218
Extending the Model 219
Concept and Requirements Phase 219
Design and Architecture Phase 222
Development 223
Verification 224
Validation 224
Deployment, Operations, Maintenance, and
Technical Support 225
Decommissioning and Disposal 226
Overall Impression 226
Methods for Encouraging Optimal Behavior 226
Pricing 227
Chargeback 227
Costs and Risk Mitigation 228
Management Mandate 228
Legislation 229
Regulation 229
Standards and Certifications 229
Going Forward 230
Tampering 231
Tamper Evidence 231
Tamper Resistance 232
Tamperproofing 232
A Brief Note on Patterns 234
Conclusions 236
Endnotes 238
Appendix A: Software Vulnerabilities, Errors,
and Attacks 239
Ranking Errors, Vulnerabilities, and Risks 240
The OWASP Top Security Risks 241
The CWE/SANS Most Dangerous Software Errors 244
Top-Ranking Safety Issues 244
Enumeration and Classification 246
WASC Threat Classification 248
Summary and Conclusions 250
Endnotes 250
Appendix B: Comparison of ISO/IEC 12207 and
CMMI®-DEV Process Areas 253
Appendix C: Security-Related Tasks in the
Secure SSDLC 257
Task Areas for SSDLC Phases 258
Involvement by Teams and Groups for Secure
SSDLC Phases 262
A Note on Sources 288
Endnotes 288
Appendix D: Safety-Related Tasks in the Safe SSDLC 289
Task Areas for Safe SSDLC Phases 289
Levels of Involvement 309
A Note on Sources 309
Endnotes 313
About the Author 315
Index 317
Preface
The best laid plans o’ Mice an’ Men, Gang oft agley ...
—Robert Burns, To a Mouse
The initial concept for this book arose some 3 to 4 years ago. However, it
was quite different from how the book turned out. I had spent much of the
previous decade working on application security, particularly software assur-
ance. Several years ago, I had the good fortune of being the technical lead on
a software assurance initiative for the banking and finance sector, supported
by the Financial Services Technology Consortium. The first phase of the proj-
ect provided insights from thought leaders from independent software vendors
(ISVs), information security tools and services vendors, industry and profes-
sional associations, academia, and a number of leading financial institutions. A
collection of state-of-the-art practices was assembled, with the intention of us-
ing the “best” approaches for assuring the quality of software through industry-
sponsored testing. This work provided a substantial amount of the research that
was behind the BITS publication Software Assurance Framework [1].
The ultimate goal of the software assurance initiative was to establish a
state-of-the-art testing facility for the financial services industry using methods,
tools, and services chosen from the initial research. Unfortunately, the financial
meltdown of 2008 intervened. Various mergers and restructurings took place,
so that attention was turned to other more pressing matters.
The financial services industry and other critical infrastructure sectors
continue to be in dire need of such testing laboratories to ensure that com-
monly used software meets agreed-upon security standards. Others, such as Joel
Brenner [2], also espouse such a concept as some form of Consumers-Union-
like autonomous testing service. But who would support and fund such an
effort?
Perhaps an interesting indicator as to how such testing laboratories might
be created is the example of Huawei, the Chinese telecommunications giant,
which built the Cyber Security Evaluation Centre in England. The Centre’s
purpose is to assure potential British customers of Huawei products and services
that the company's technology can be trusted not only to do what customers
want them to do (and nothing more) but also that Huawei’s products cannot be
successfully attacked by cyber criminals, terrorists or foreign spies, as reported
in The Economist [3].
My initial book concept was to cover software assurance with particular
emphasis on requirements, context, and the like, with some, but relatively little
mention of the differences between developing security-critical and safety-criti-
cal software systems, and how to bring these diverse fields together. Then what
caused me to put so much more emphasis on the engineering of the security
and safety of software systems? It was a realization that professionals in the
software security and safety fields have minimal interaction and are generally
not familiar with the other engineers’ knowledge base and standard procedures.
This argument is supported by personal experience as well as by the ob-
servation that the majority of books and articles covering one area have little if
any mention of the other field. I first became aware of the difference in culture
and approach when I did some work with a small government contractor that
builds safety-critical software systems. Their approach was so different from
what I had been used to throughout an IT career in financial services. The ratio
of programming to testing appeared to be reversed for each type of system, with
security-critical information systems receiving much less testing relative to the
extensive verification and validation processes to which safety-critical control
systems are subjected.
A number of other factors began pointing to the need for developing a
book that would help security software engineers understand safety and vice
versa. However, the most influential event was a serendipitous conversation
at the 2010 IEEE International Conference on Homeland Security Technology
in Waltham, Massachusetts with Peter Gutgarts. He and Aaron Termin were
presenting on their topic “Security-Critical versus Safety-Critical Software.”
Peter Gutgarts and I discussed the subject at length and what I gained from
the conversation was the realization that so little attention was being paid to
combined safety and security aspects of critical software systems. These systems,
which include so-called cyber-physical systems, are rapidly becoming crucial to
the future of modern economies. Before our eyes, software systems are being
developed that merge sophisticated modern web applications with traditional
industrial control systems. Such cyber-physical systems offer the best—and the
worst—of both worlds. For example, the ability to monitor and control elec-
tricity meters remotely creates many new features, such as determining and
reporting unusual activities or cessation of service, but it also exposes the elec-
tricity grid to hackers with malevolent and destructive intentions.
Today, the security of the smart grid and analogous systems for water and
natural gas distribution and the like is being discussed and is a clear concern
of both government and the private sector. Yet, beyond the talk, what is actu-
ally being done is woefully inadequate. Is this a case of the road to hell being
paved with good intentions? It seems that there is agreement in principle as to
the need, yet the lack of subject-matter expertise and processes for coordination
and collaboration are thwarting attempts to make it all happen. We have not yet
built the necessary bridges between security professionals and safety engineers
to enable fully coordinated efforts. The cultural differences and the goals of the
security and safety silos are such that they do not interact and do not under-
stand the needs of the other group.
Seeing the lack of interaction, in the field and in the literature, I deter-
mined that the software engineering field needs to initiate conversations that
will reduce and eliminate the lack of communication between information se-
curity professionals and software safety engineers.
Brenner [2] describes how, to its credit, the U.S. military found a way
to ensure that the efforts of diverse experts would be coordinated and consistent
across the military services by creating the Joint Chiefs of Staff. As Brenner
puts it:
“Joint organization has paid extraordinary operational dividends but it
was a struggle to make it work. Overcoming intense service-level loyalties took
time... The [1986 Goldwater-Nichols Act] is one of the most important organi-
zational reforms in the history of the United States government...”
If the military can do it, so perhaps could the rest of government and the
private sector when it comes to coordinating the efforts to ensure that critical
software systems are both secure and safe. The need is there for solutions that
will overcome the security, safety, and other deficiencies in the software devel-
opment processes and the software systems that they produce.
The original direction of improving the software assurance processes and
building testing facilities continues to be extremely important and needs to
be addressed vigorously in short order. However, I chose to focus this book
on reducing the huge gap that exists between the software safety and security
engineering fields. No doubt others will revisit my original software-assurance
topics at a later date. For now, I will focus on what I see to be one of the most
pressing needs of today’s software engineering world—the collaboration be-
tween software security and safety engineers in the creation of complex systems
of systems, particularly cyber-physical systems.
While I have covered many concepts in the software systems safety-se-
curity space, this work should not be considered an end in itself. It provides a
starting point from which you can examine further the many complex topics
that have been discussed in this book. Efforts on your part to expand upon that
which you read here may be challenging—but they will also be exciting. You
will be rewarded with the knowledge that you are learning more about what is
surely one of the most critical areas of software engineering. Good luck with
your endeavors.
Endnotes
[1] BITS Division of the Financial Services Roundtable, Software Assurance Framework, Janu-
ary 2012. Available at http://www.bits.org/publications/security/BITSSoftwareAssur-
ance0112.pdf. Accessed August 26, 2012.
[2] Brenner, J., America the Vulnerable: Inside the New Threat Matrix of Digital Espionage,
Crime, and Warfare, New York: Penguin Press, 2011.
[3] “Briefing Huawei: The Company that Spooked the World,” The Economist, August 4–10,
2012, pp.19–23.
Foreword
Computer security was once a casual, even polite, discipline that was practiced
in business and government via little tips: “Don’t use your name as a password,”
published policy rules: “Please inform the security team of any Internet-facing
servers,” and off-the-shelf products: “Based on our scan, your system is free of
malware!”
This era of casualness has unfortunately long since passed, and technol-
ogy practitioners, ranging from system administrators, to software engineers,
to casual users, all understand that malicious threats to organizational assets are
real. Just about every major organization on the planet has been hit with ad-
vanced persistent threats (APTs), distributed denial of service (DDOS) threats,
or both—and the effects are often lethal. Safety-critical issues have been simi-
larly intense with often serious consequences for the organization.
Warren Axelrod’s new book is an important contribution to the disci-
plines of security and safety—offering information that will be vital to anyone
connected in any way to the protection of information and assets. While his
book targets the software engineering life cycle, many of its security and safety
points can be expanded and extrapolated to more general contexts. The discus-
sions and examples are practical and provide considerable insights for profes-
sionals interested in immediate-term benefit.
The material on safeguarding public data and intellectual property is par-
ticularly useful, given so many recent public issues. Its emphasis on risk, and
how it relates to security- and safety-critical systems, is also particularly relevant
in the current global environment.
Newly published works on this topic of computer security and the related
topic of system safety seem to appear on a regular basis, perhaps even daily. But
few are produced by someone with the experience and knowledge of this capa-
ble author, a longtime friend of mine. His background and expertise are unique
and I am pleased that he’s taken the time to share a portion of his knowledge
on protecting assets.
If you have already purchased this book, then by all means, please turn
the page and begin reading. If, however, you are flipping through these pages
online (or at your bookstore), then I strongly suggest that you add this book to
your cart. It’s worth the read.
Edward G. Amoroso1
AT&T Chief Security Officer and Senior Vice President
1. Dr. Edward G. Amoroso is responsible for all areas of information security including plan-
ning, design, development, implementation, and operations support for AT&T’s extensive
global network and communications. Ed Amoroso is an adjunct professor of computer sci-
ence at Stevens Institute of Technology and holds a Ph.D. and master's degree in computer
science. He is a frequent expert speaker before Senate subcommittees on the topic of cyber
security.
1
Introduction
A paradigm shift is underway, and a number of recent threads point
towards a fusion of security with software engineering, or at the very
least to an influx of software engineering ideas.
—Ross J. Anderson, Cambridge University, Why Cryptosystems Fail,
1993 [1]
While the general concept of safety and reliability is understood by most
parties, the specialty of software safety and reliability is not.
—Debra S. Herrmann, Software Safety and Reliability [2]
Preamble
Despite heroic efforts by security and safety software engineers, information
and physical security professionals, and those involved with creating and de-
ploying safe and secure software, software systems still succumb to willful and
accidental attacks and cannot completely avoid malicious or unintended
damage to human life and the environment.
One reason might be the shortage of subject-matter experts with in-depth
knowledge and broad experience in application security, software system safety,
software assurance, and related subjects. Even so, we must question why we
aren’t doing a better job of building, operating, and maintaining safe and secure
software-intensive systems. After all, there is a considerable body of knowledge,
much of which will be discussed in this book, about building, deploying, op-
erating, and maintaining safety-critical and security-critical software-intensive
systems.
These problems do not lie with lack of knowledge or ability on the part of
experts; rather, they have to do with the omission of critical elements—not the
least of which is communication among diverse stakeholders—which is needed
for securing software systems and making them safe. Recommendations by re-
searchers, particularly as they relate to information security and system safety,
are often tentative and normative in nature and seldom usable in the real world.
In some cases, this is due to the lack of authority and power of those asserting
the need for action to improve the safety and security of software systems. In
other instances, researchers are unrealistic about the application of certain tech-
niques, recommending activities that may be too difficult or expensive for the
average development shop to implement.
There is also a mind-numbing, imprecise, and often misleading vocabu-
lary that results from the art of creating safe and secure software systems sitting
at the intersection of so many different fields with stakeholders who do not
communicate properly with one another. As a result, conversations and interac-
tions among safety and security experts are frequently obscure and misguided,
and mutual agreement about the relative importance of security and safety for
specific critical systems is seldom reached. This can produce systems where
weaknesses of one segment are detrimental to otherwise high-integrity, high-
assurance systems. This failure to communicate leads to critical systems that are
vulnerable to attack and where successful attacks can result in considerable loss
of assets or harm to life and the environment.
In this book, we shall look beyond the standard texts, although they will
be referenced and studied extensively. Rather, we shall base our inquiries on the
simple question: “What will it take to ensure that the software systems, upon
which our economic and physical lives depend, are trustworthy, dependable,
and sustainable?”
Part of the answer will come from a rigorous examination of the terms
and definitions that we use. For example, the words “security” and “safety”
are commonly used interchangeably, but if examined more closely, their differ-
ences are the key to understanding how to create secure, safe, and dependable
software-intensive systems. Then we have the lack of rigorous testing, needed
to ensure that systems will not misbehave or allow manipulation that could lead
to disastrous outcomes.
It should be noted that even today there is little communication between
security and safety silos involved in building complex security-critical and safe-
ty-critical software-intensive systems. This isolation leads to application software
being built without reference to the platforms and infrastructures upon which it operates.
Context is a key aspect of achieving security, safety, reliability, availability,
integrity, resiliency, and the like. A system has a very different set of demands
if it is exposed to public communications networks such as the Internet, than
if it is isolated from external networks; however, there are ways to bridge such
an “air gap,” as was demonstrated by the Stuxnet attack on the Iranian uranium
processing plants.
Furthermore, the meaningfulness of application security metrics must be
examined. Measurements that we have and use are often not up to the job, as
witnessed by the frequent injection of malware, submission to denial of service
attacks, and common system failures. To what extent is it a question of not
using the right metrics versus not collecting the data upon which to base the
metrics in the first place? The gathering and analysis of appropriate metrics is
the basis of risk analysis. If we don’t know what is going on inside of our soft-
ware systems, then we cannot determine the level of risk being incurred, and
consequently, we are then unable to mitigate those risks effectively.
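To make the role of metrics slightly more concrete, the following minimal sketch shows how a classic likelihood-times-impact calculation depends entirely on data that must first be collected; the metric names, weights, and dollar figures are invented for illustration and are not the author's method (risk assessment is treated properly in Chapter 6).

    # Hypothetical illustration: metric names, weights, and thresholds are
    # invented for this sketch; they are not taken from this book.

    # Metrics that might be collected for a deployed software system.
    metrics = {
        "open_critical_vulnerabilities": 3,       # from scanning tools
        "attack_attempts_per_day": 120,           # from monitoring logs
        "estimated_loss_per_incident": 250_000,   # dollars, from business analysis
    }

    # A crude likelihood estimate derived from observed metrics (0.0 to 1.0).
    likelihood = min(1.0,
                     0.1 * metrics["open_critical_vulnerabilities"]
                     + 0.001 * metrics["attack_attempts_per_day"])

    # Classic annualized exposure: likelihood of a loss event times its impact.
    annual_loss_exposure = likelihood * metrics["estimated_loss_per_incident"]

    print(f"Estimated likelihood: {likelihood:.2f}")
    print(f"Annual loss exposure: ${annual_loss_exposure:,.0f}")

The point of the sketch is simply that both terms of the product rest on measured data; if the underlying metrics are never gathered, the risk estimate cannot be made, let alone acted upon.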
Additionally, we have supply chain issues. Can we trust commercial soft-
ware generally and software developed offshore in particular? Do we know
whose fingers touched the software (and the hardware on which it runs) during
the development life cycle, deployment, and maintenance? Is there reason to
suspect that those with evil intentions have inserted back doors and malware
into the software systems? How can we ensure not only the security and integ-
rity of applications in general but also the specific copy that we are using?
This book addresses what the author considers to be real issues and im-
pediments to the development and operation of safe and secure mission-critical,
software-intensive systems. It does not ignore the common vulnerabilities and
errors found in applications, such as those listed in The Open Web Application
Security Project (OWASP) top ten or The System Administration, Network-
ing, and Security Institute (SANS)/Common Weakness Enumeration (CWE)
top twenty and described in Appendix A. Rather, it takes the view that these
specific risks are important and must necessarily be dealt with, but they alone
will not solve the problem of deficient software systems. As we will discover,
merely solving these problems is not sufficient to achieve a level of assurance
that critical software systems meet high standards of security and safety. What
is needed is a pragmatic view of what it takes to achieve the optimal levels of
security and safety, however optimality is defined. It is the mission of this book
to focus on the neglected aspects of software systems engineering that offer the
chance of making a material improvement in the state of software, a state
that currently is deteriorating rapidly.
Scope and Structure of the Book
As mentioned previously, this book was originally to be a treatise on software assur-
ance. However, it rapidly became apparent that, in order to come up with an
approach that would cover both the security and safety of software systems, it
was necessary to go back in time to see how software engineering has evolved
over recent decades. We also sought to learn why there is so little overlap be-
tween the areas of interest of software security engineers and software safety en-
gineers. Books on software security seldom include references to software safety,
and vice versa; it is as if each party were totally oblivious as to what goes on with
their software engineering counterparts in the other area. This book attempts
to bridge that gap between cyber security and physical safety professionals. This
approach recognizes that these so-called cyber-physical systems evolve over time
and, as they do, there will be an increasing need for those who understand both
security and safety to interrelate.
The first half of this book, Chapters 2 through 5, examines the history
and evolution of systems engineering, covering software systems and the se-
curity and safety attributes of those systems. The next section, Chapters 6 and
7, addresses risk issues and metrics, which are fundamental to the assessment
and management of systems in general and secure and safe software systems in
particular. In the next sections, Chapters 8, 9, and 10, we describe the processes
available for developing software systems, with particular emphasis on secure
and safe software systems. Chapter 11 discusses how economic and behavioral
factors influence the degree to which stakeholders participate in and support
security and safety features and attributes.
Appendix A provides lists and explanations of programming weaknesses
and errors as presented by OWASP, SANS, and others. Appendix B compares
the ISO/IEC 12207 and CMMI®-DEV process areas. Appendix C provides
checklists in support of security-critical systems, and Appendix D does the same
for safety-critical systems. These latter two sets of checklists serve two purposes.
The first is to help software engineers ensure that they have considered a broad,
though not complete, set of issues relating to their area of specialty (namely
security or safety). The second purpose is to enable software security engineers
to learn what factors make for safe systems, and to help software safety engi-
neers determine which issues are important for building and operating security-
critical systems.
Acknowledgments
While I have been around computer systems throughout my career, my interest
in application security in particular came from discussions in the mid-1990s
with Tom Whitman at Pershing LLC, where I was the chief information secu-
rity officer. Tom was a keen advocate of including security signoffs through-
out the software development life cycle and strongly supported my efforts to
introduce a role for security in the life cycle process. Around that time, very
few security professionals were giving much attention to building-in security
as software development passed through its sequence of phases. As well, there
were few experts in the area. However, I was fortunate enough to have Ken van
Wyk train some of Pershing’s software development staff in how to design and
code programs using secure methods. We also held discussions with Dr. Gary
McGraw of Cigital, who is recognized as one of the top thought leaders in
the field.
In 2009–2010, I led a Software Assurance Initiative (SAI) project for Fi-
nancial Services Technology Consortium (FSTC), which is now part of BITS/
FS Roundtable. This gave me an opportunity to work with Dr. Daniel Schutzer
and Roger Lang. Project team members included representatives from finan-
cial institutions, professional associations, software vendors, software security
service providers, and academia. The purpose of the SAI project was to pull
together state-of-the-art practices in application security for use by financial
firms. Team members included pioneers in their respective specialties, and ev-
eryone benefited from the outstanding presentations that they gave. I wish to
thank team participants for greatly enhancing my personal knowledge of the
subject. The ultimate goal was to set up an industry software test center in
which commonly used software products could be evaluated. The industry was
not ready for the test lab at the time, although the concept behind it (namely
testing and certifying software products for a group rather than by individual
firm to save on the cost of testing) is still valid, and was mentioned in Joel
Brenner’s recent book [3].
Dr. Jennifer L. Bayuk, who is the program director for cyber security at
the Stevens Institute of Technology, and was formerly an eminent information
security executive for a large financial services firm, introduced me to leading-
edge software engineering as taught at Stevens.
Thanks are also due to the staff at Decilog Inc., particularly Neal and
Scott Marchesano, Bruce Hennessy, and Jeff Valino, who enabled me to learn
so much more about how safety-critical, software-intensive control systems are
developed and tested. In addition, I owe a great deal to Artech House editors
Deidre Byrne, Samantha Ronan, and Judi Stone, who kept the pressure on
until I finally delivered the manuscript. I really don’t think I could have done it
without their support and urging. Also, my anonymous reviewer was extremely
helpful in pointing out omissions and errors, and in making some very helpful
suggestions.
Finally, I want to thank my family, especially my wife, Judy, for putting
up with so much during the writing of this book. It really did interfere with so
many other things that we wanted to do together. Unfortunately, that seems to
be the price that most authors and their families are required to pay.
Endnotes
[1] See http://web.cs.wpi.edu/~guttman/cs559_website/wcf.pdf, last accessed on July 23,
2012.
[2] Herrmann, D. S., Software Safety and Reliability, Los Alamitos, CA: IEEE Computer So-
ciety, 1999.
[3] Brenner, J., America the Vulnerable: Inside the New Threat Matrix of Digital Espionage,
Crime, and Warfare, New York: Penguin Press, 2011.
2
Engineering Systems
There is nothing more difficult to carry out, nor more doubtful of suc-
cess, nor more dangerous to handle, than to initiate a new order of
things [or create a new system]. For the reformer has enemies in all
those who profit by the old order, and only lukewarm defenders in all
those who would profit by the new order, this lukewarmness arising
partly from fear of their adversaries, who have the laws in their favor;
and partly from the incredulity of mankind, who do not truly believe in
anything new until they have had the actual experience of it.
—Niccolo Machiavelli, The Prince, 1513
You have to be run by ideas, not hierarchy. The best ideas have to win.
—Steve Jobs, Cofounder, Apple Computer
Author’s note: Steve Jobs’s remark raises questions as to the place of creativity and
innovation in the systems engineering process, particularly the engineering of
safe and secure software systems. One might presume that there can be little op-
portunity for creativity and flexibility in the clearly-defined, highly-structured
world of making software systems both safe and secure. However, it can be
argued that the essence of the problems that we face in software engineering is
exactly this; that is, not enough imagination and innovation are being brought
to bear on the traditional systematic engineering processes that produce today’s
software systems. Perhaps we lack the out-of-box thinking that is needed to
anticipate the range of threats to which our systems are subjected and to come
up with innovative approaches to avoidance, deterrence, and remediation. In
this book, we attempt to bring some measure of insight into the somewhat
moribund approaches that, to date, may have lacked sufficient innovative and
effective ideas. First we will look at what exists today and what is missing. We
will then suggest how we might alter the balance between good and evil so that
the good guys have the better ideas…and win.
Introduction
Before embarking on our journey through the maze that is software systems en-
gineering and into the land of safety and security engineering (as these apply to
software systems), we will dabble in the more general field of systems engineer-
ing, which is by far better established and has higher credibility than software
systems security and safety engineering.
It is quite remarkable that, with a foundation as sound and effective as
systems engineering has become over the past 70 or so years, the building and
deployment of safe, secure, dependable, resilient, and reliable software systems
remains so inadequate. It is not that the fundamental principles underlying the
engineering of software systems are particularly lacking, but clearly software
systems engineering has not yet received the attention and support that are
needed to establish the pursuit of safe and secure systems as a respected field in
its own right.
Before diving into the engineering of software systems, we will first ex-
amine the broader field of systems engineering—its approaches, practices, stan-
dards, and so on. The intent is to provide a foundation upon which to build
lower-level approaches relating to software safety and security, as well as to help
identify and resolve the gaps between systems engineering and software engi-
neering. By carrying over some of the structure and processes of the longer-
established discipline of systems engineering, we can apply the experience and
proven practices of systems engineering to the less mature art of engineering
safe and secure software systems.
Some Initial Observations
Later in this chapter, we will work with a specific hierarchy of terms in order
to facilitate more meaningful definitions that we will use throughout the book.
The hierarchy trickles down from general systems to software-oriented and
hardware-oriented systems, and then to the security and safety of those systems.
While the focus of this book is on software systems, there is increasing interest
in the need for codesigning applications and related systems, particularly for
high-performance computing. We will, therefore, consider the implications of
the need to design applications in conjunction with the contexts within which
they operate.
We shall also see that the more one drills down into specific character-
istics, the less we seem to have a good knowledge base, sound practices, and
effective support and understanding. This is indicated by the opposing arrows
of Figure 2.1.
General systems engineering is a long-established, well-regarded field with
many researchers and practitioners having developed an abundance of stan-
dards, processes, and procedures. However, as we descend the hierarchy
to software and hardware systems engineering, we begin to hear complaints
about whether or not the essence of systems engineering has been transferred
to software systems engineering in particular. Quite a number of outspoken
individuals decry the current state of the software systems engineering field as
lacking the credibility, discipline, and effectiveness that would be expected of
such a technical field.
Top class software development shops display high levels of sophistica-
tion, structure, and control throughout their software development life cycle
processes, some accomplishing the highest levels of capability maturity. Perhaps
the best-known example of a process improvement approach is the Capabil-
ity Maturity Model Integration (CMMI) process introduced by the Software
Engineering Institute of Carnegie Mellon University. A considerable amount
of resources are available on their website [1]. It is interesting to note that, even
though the Software Engineering Institute has developed and supported the
CMMI process for over two decades, the approach is not limited to software
development processes, but has expanded into such areas as system design, ac-
quisition, security, risk, and process management. The International Systems
Security Engineering Association (ISSEA) [2], in fact developed a Systems Se-
curity Engineering Capability Maturity Model (SSE-CMM–ISO/IEC 21827)
[3]. However, judging from the ISSEA website, the organization appears to no
longer be active. This is perhaps indicative of the low level of interest and sup-
port for the development of secure software systems.
Figure 2.1 The relationship of management effectiveness to scope.
Yet, while those processes may meet the high capability standards that
certification requires, the quality of software produced can still be deficient.
For example, the software might not function properly (i.e., it does not meet
all stated functional requirements) and may not satisfy nonfunctional require-
ments for security, resiliency, scalability, and so on.
What happens if we drill down one more level to the security and safety
aspects of software systems? Safety-critical software systems are generally more
solidly built; that is, they are more reliable and resilient than security-critical
software systems. This is understandable because human life is often at stake.
However, even safety-critical systems do not always achieve the highest stan-
dards, particularly with respect to their security attributes. Quite a large body of
work has been developed over the past 15 years or so relating to building secu-
rity into software, yet the acceptance of security requirements as part and parcel
of a software system is low in comparison to the acceptance of safety require-
ments. That is not to say that there aren’t any forward-thinking organizations
that excel with respect to their security practices. There are. Some of them have
had their achievements broadly publicized, such as in the Building Security In
Maturity Model (BSIMM) report, which documents the security posture of
firms that participated in the survey based upon a series of attributes [4].
As illustrated in Figure 2.1, we see that, in general, certain aspects of the
engineering of secure and safe software systems, such as interoperability and
portability, receive less and less attention as we move into these areas of more
limited scope.
What we are seeing is essentially a top-down process, with the message
becoming more garbled and less intelligible as we approach more specialized
and detailed areas. From reading a broad range of publications and attending
a number of conferences and seminars on systems engineering, software engi-
neering, and related fields, we see that, in general, systems engineers pay rela-
tively little attention to software systems engineering; software engineers appear
to ignore both the guidance that can be gleaned from systems engineering on
the one hand, and the nonfunctional aspects of software systems, such as secu-
rity, safety, and resiliency on the other hand. With such a top-down hierarchy,
there is little chance of significant improvement in the overall security, safety,
and resiliency of software systems. Indeed, it appears that we might be heading
in the opposite direction.
Despite some excellent software engineering programs being offered by
major universities, such as Carnegie Mellon University’s Software Engineering
Institute and the Stevens Institute of Technology, the field is still grossly under-
served. For example, a Master of Software Assurance Curriculum [5] illustrates
some significant advances that are being made in the academic arena. Unfortu-
nately, the size and scope of these academic initiatives only begins to address the
huge deficiencies in knowledge and training in this field.
The answer to the question of what needs to be done is fairly obvious:
we need to ensure that the supporters of nonfunctional capabilities, such as
security professionals, are involved at the beginning of the decision process and
throughout the development life cycle of software-intensive systems. The chal-
lenge is not to determine what should be done—that has already been estab-
lished—but how to get it done. With all the forces of time to market, cost
savings, competitive advantage, budget constraints, complex functionality, and
the like, working against the goal of safer and more secure software systems, it is
unlikely that needed changes will happen of their own accord. There need to be
incentives for those making such a radical change in approach, or disincentives
for those not adhering to a mandate to change the approach to engineering safe
and secure software-intensive systems.
Deficient Definitions
A significant impediment to solving the problem of attaining the needed level of
proficiency in systems engineering, particularly with regard to software systems
engineering, lies in the ambiguity of language used and the relative looseness of
definitions of terms like systems engineering and software engineering, security
engineering, and safety engineering. In [6], Allman describes, with considerable
insight, the reasons for such ambiguity, as follows:
Our normal human language is often ambiguous; in real life we handle
these ambiguities without difficulty ... but in the technical world they can
cause problems. Extremely precise language, however, is so unnatural to us
that it can be hard to appreciate the subtleties. Standards often use formal
grammar, mathematical equations, and finite-state machines in order to
convey precise information concisely ... but these do not stand on their
own ...
In its definitive guide, the International Council on Systems Engineering
(INCOSE) [7] expresses this same concept with a specific focus on systems
engineering (SE), as follows:
One of the Systems Engineer’s first jobs on a project is to establish
nomenclature and terminology that support clear, unambiguous
communication and definition of the system and its functions, elements,
operations, and associated processes ... It is essential to the advancement of
the field of SE that common definitions and understandings be established
regarding general methods and terminology that in turn support
common processes. As more Systems Engineers accept and use a common
terminology, we will experience improvements in communications,
understanding, and ultimately, productivity.
The INCOSE guide [7] then provides a list of definitions for frequently
used terms, some of which will be repeated below.
A major contributing factor to deficiencies in software systems engineer-
ing appears to be the unwillingness or inability of specialists in one specific area
(such as system engineers, software engineers, and hardware engineers) to com-
municate adequately. There is a particular need to improve communications
between security and safety software systems engineers, as discussed in [8].
This lack of communication causes subject-matter experts and decision-
makers in each narrow area of specialization to be at serious risk of missing
important information regarding system requirements and other factors from
those with the necessary expertise. This is particularly apparent during the criti-
cal requirements, validation and verification stages of the software, and hard-
ware development life cycles. Such omissions can—and have—led to many
inferior decisions.
Systems engineers dealing with entire systems seem to pay little attention
to software considerations, as shown by the short shrift treatment of software
topics in many systems engineering books, publications, and presentations.
Conversely, with their concentration on the development, operation, and main-
tenance of applications and system software, software engineers tend to pay
insufficient attention to many of the broader systems considerations, including
platforms and infrastructures upon which applications and systems software
operate as well as the human-system interactions. As a result, software products
often do not exhibit the levels of confidentiality, integrity, availability, safety,
interoperability, portability, scalability, resiliency, and recovery that should be
incorporated into any critical system. This is mainly due to these nonfunctional
characteristics of applications, systems software, firmware, and hardware not
having been adequately accounted for in the design and development, and par-
ticularly, the integration of these systems.
Rationale
Clearly, the way in which one refers to various topics, subjects, and items can
greatly affect how one deals with them. Therefore, we will now examine some
definitions of systems engineering, as well as its components and subcompo-
nents. We will look into definitions of applications, system software, software
systems, and software engineering (or, more precisely in the last case, software
systems engineering). It will be shown why the broad range of common usage
of these definitions produces large gaps in understanding between what exists
and what is needed.
We will then go through a similar exercise for security and safety engi-
neering: first defining our terms and then describing why the philosophies and
means of addressing issues differ so much between the two disciplines of soft-
ware security engineering and software safety engineering.
We will investigate these gaps further and suggest some specific approach-
es and activities to address and close them. We will provide guidance for adher-
ing to more structured systems-engineering processes for designing, building,
testing, deploying, operating, and decommissioning safe and secure software-
intensive systems. Furthermore, we will suggest how the numerous separate and
independent silos can be brought together into a more collaborative environ-
ment so that each party can learn from the others and consequently arrive at
much more productive and effective modes of operation and more acceptable
results, as described in [8].
What Are Systems?
In order to come up with a viable definition for systems engineering, we must
first develop and describe a structure that relates systems engineering to all the
various components and subcomponents that arguably fall within its scope. We
will select components that have particular relevance to this book from those
shown in Figure 2.2. However, we need to always keep the remaining compo-
nents and subcomponents in mind because they provide appropriate context
for further discussions.
Figure 2.2 Structure and hierarchies of systems engineering.
The hierarchy and relationships of components and subcomponents (as
shown in Figure 2.2) provide a host of terms, and consequent definitions. At
the top of the diagram, there is a box labeled “systems,” and near the bottom is
another box labeled “engineering.”
First, let us examine the definitions for a system and for engineering sepa-
rately, and then combined.
The IEEE Standard 610.12-1990, which has been replaced with IEEE/
ISO/IEC 24765-2010 [9], defines a system as:
… a collection of components organized to accomplish a specific function
or set of functions.
Christensen and Thayer [10] rework the above definition somewhat to
come up with the following:
A system is a collection of related or associated entities that together
accomplish ... one or more specific objectives.
In the U.S. Department of Defense (DoD) text [11], the definition
becomes:
A system is an integrated composite of people, products, and processes that
provide a capability to satisfy a stated need or objective.
When the above definitions are compared, the main difference
is whether the goal is to satisfy a single need, function, or objective, or more
than one of them. The IEEE definition takes care of both. However, the DoD
definition also has merit because it lists the components as people, products
and processes.
In Figure 2.2, the second row of boxes represents six elements that can
make up a system, the first three of which correspond to the DoD definition.
The elements include the following areas: people; technology (or products)—
such as hardware and software; processes—development, operations; facilities;
data; and documents.
As we proceed down Figure 2.2, the next two rows itemize the character-
istics of the elements. For clarity, we only show those items that are of specific
interest. However, it should be noted that each of the elements can be broken
down, and that the breakdowns are also relevant to other areas of interest; this
is not shown here in order to simplify the presentation. As a simple example,
the category “data” can be broken down into structured and unstructured data.
Our particular focus is on technology systems, and within that category we
are focusing on software, although we shall touch upon other elements such as
people (or human factors) and processes such as development and operations.
Software is further split into its functional and nonfunctional character-
istics. This is a common way to divide software characteristics. However, as we
will show subsequently, such a characterization has led to major gaps in how
software systems are tested. In any event, the usual set of nonfunctional soft-
ware characteristics is as follows:
• Security;
• Safety;
• Performance;
• Reliability;
• Compliance.
As the title of this book suggests, our main focus will be on the first two
categories, namely, the safety and security of software systems.
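To make this a little more concrete, the short sketch below shows one simple way of recording nonfunctional characteristics next to an explicit means of verification, so that they are less likely to be dropped during development and testing; the categories echo the list above, but the individual entries, names, and thresholds are illustrative assumptions rather than material from this book.

    from dataclasses import dataclass

    @dataclass
    class NonFunctionalRequirement:
        """One nonfunctional requirement with an explicit means of verification."""
        category: str        # e.g., "security", "safety", "performance"
        statement: str       # the requirement in plain language
        verified_by: str     # how the requirement will be checked

    # Hypothetical entries for an illustrative system.
    checklist = [
        NonFunctionalRequirement(
            "security",
            "All externally supplied input is validated before use.",
            "Static analysis plus fuzz testing of every external interface."),
        NonFunctionalRequirement(
            "safety",
            "No single software fault can command an unsafe actuator state.",
            "Fault-injection testing traced back to the hazard analysis."),
        NonFunctionalRequirement(
            "performance",
            "95th-percentile response time stays below 200 ms at rated load.",
            "Load testing at 1.5 times expected peak traffic."),
    ]

    for req in checklist:
        print(f"[{req.category}] {req.statement} (verified by: {req.verified_by})")

Writing the verification step down alongside each requirement is what turns a nonfunctional aspiration into something that can actually be tested.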
The distinction between safety-critical and security-critical software sys-
tems will be examined in greater detail in Chapter 4. At this point, we present
the definitions introduced by Boehm [12], as follows:
• Safety-critical software: the software must not harm the world.
• Security-critical software: the world must not harm the software.
That is to say, software systems that are judged to be safety-critical must
not present a hazard to human life or the environment. This is an outward-
looking perspective.
On the other hand, security-critical software systems are inward-looking
to the extent that it is necessary to protect the software and any sensitive data
such as nonpublic personal information and intellectual property from misuse,
damage, or destruction from attacks by external or internal persons or com-
puter applications.
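A minimal sketch may help fix these two opposing perspectives; the function names, limits, and validation rules below are hypothetical and are not drawn from Boehm or from this book.

    # Illustrative only: names, limits, and rules are hypothetical.

    def command_heater(requested_power_kw: float) -> float:
        """Safety-critical perspective: the software must not harm the world.

        Whatever the rest of the system computes, the value actually sent to
        the physical heater is clamped to a range known to be safe.
        """
        SAFE_MAX_KW = 5.0
        return max(0.0, min(requested_power_kw, SAFE_MAX_KW))


    def parse_account_id(raw_input: str) -> str:
        """Security-critical perspective: the world must not harm the software.

        Untrusted external input is validated before it can reach internal
        logic or sensitive data.
        """
        if not raw_input.isalnum() or len(raw_input) > 16:
            raise ValueError("rejected untrusted input")
        return raw_input


    print(command_heater(12.0))        # clamped to the safe maximum, 5.0
    print(parse_account_id("AB12CD"))  # accepted as well-formed input

Although the two guards look similar in code, they face in opposite directions: one constrains what the system may do to its environment, while the other constrains what the environment may do to the system.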
As we shall see throughout this book, it is this divergence of viewpoint
between software safety engineers and software security engineers that leads to
major software systems engineering issues, particularly when there is a require-
ment to combine both safety-critical and security-critical software-intensive
systems into what are increasingly termed cyber-physical systems, particularly by
United States government agencies.
Near to the bottom of Figure 2.2, we show a box for “engineering.” Engi-
neering can be defined as follows:
… the creative application of scientific principles to design or develop
structures, machines, apparatus, or manufacturing processes, or works
utilizing them singly or in combination; or to construct or operate the
same with full cognizance of their design; or to forecast their behavior
under specific operating conditions; all as respects an intended function,
economics of operation and safety to life and property [13].
At the bottom of Figure 2.2, we include activities relating to engineering,
namely assurance and management. Assurance comprises testing and repairing.
Management activities related to engineering include monitoring, analyzing,
reporting, and responding.
As the reader can see, some of the boxes in Figure 2.2 are made up of
solid lines, whereas other boxes are bounded by dashed lines. We will bypass
the dash-framed boxes when using the hierarchy to designate a particular com-
bination. Thus, we arrive at the term software security engineering rather than
technology software nonfunctional security engineering.
Deconstructing Systems Engineering
The field of engineering has clearly had, and continues to enjoy, a very long and
auspicious history that goes back over millennia. There is no question that the
building of the Egyptian pyramids, the English Stonehenge, the Easter Island
statues, the Mayan pyramids at Chichen Itza, Mexico, and other marvelous
ancient constructions were amazing feats of engineering. Did those projects
follow what we now would consider to be a systems engineering approach? The
builders of these monuments clearly must have adopted some organizational
structure and management processes in order to accomplish what they did.
Archeologists have discovered highly sophisticated mathematical, architectural,
and construction methods in documents used by engineers of these ancient
monuments. However, the processes would not have followed the specific pre-
cepts of what we now call systems engineering in as formal a manner as today, although
many of the components must have been utilized in order to accomplish these
great works.
In modern times, the specific terminology for the discipline of systems
engineering has only been in common use for a relatively short 70 years or
so, although various documents attribute the creation of the term to different
parties and at different times. Christensen and Thayer [10] quote Alberts [14],
who asserted that the term was originated by a certain H. C. Hitch at Penn
State University in 1956 [15].
The INCOSE website claims that the term systems engineering was coined
by researchers at Bell Telephone Laboratories in the 1940s [16]. The INCOSE
document [7] provides a timeline going back to the development of the Rocket
locomotive by Robert Stephenson and Company in the 1820s, and includes a
British team that analyzed the United Kingdom’s air defense system in 1937.
The INCOSE timeline also includes Bell Labs during 1939–1945, which agrees
with the above claim. INCOSE attributes the invention of systems analysis to
the RAND Corporation in 1956, which appears to differ from the Christensen
and Thayer claim (with respect to who originated which terms). However, both
agree that the term was originated in 1956. It is sufficiently accurate for our
purposes—given the discrepancies in the literature—to state that systems en-
gineering and systems analysis as we now know them originated in the mid-
twentieth century and evolved through the latter half of that century.
A somewhat broad consensus has it that the discipline of systems en-
gineering was driven in the 1940s and 1950s by the need to manage large,
complex projects involving systems of systems, in which the properties of the
sum of the parts of a system were often greater than the sum of the properties
of the individual parts [17]. Much of the motivation for developing the art of
systems engineering came from the requirements of the U.S. Department of
Defense [11] and the National Aeronautics and Space Administration (NASA)
[9]. We will, therefore, lean heavily on the works of these and other U.S. gov-
ernment agencies, as well as the publications of professional associations, such
as the IEEE (Institute of Electrical and Electronics Engineers) [9] and INCOSE
(International Council on Systems Engineering) [7], in our discussion of the
attributes of systems engineering.
Today, there are many researchers and practitioners who are proud to
number themselves among the ranks of systems engineers. Professional orga-
nizations, such as INCOSE and IEEE, which are highly regarded and which
produce impressive lists of publications and other valuable resources, have
sprung up and prospered in response to the evolution of the field.
However, as discussed above, there appears to be a substantial gap between
how software engineers view themselves with respect to systems engineering and
the extent to which software systems engineers should be integrating systems
engineering into their own practices. This will be discussed in the following chapters.
It is not that software developers do not follow structured practices equivalent
to systems engineering guidelines—for the most part they do—particularly in
the U.S. Department of Defense, where precise and detailed software develop-
ment practices have evolved, as described in [11]. It is more that adoption of
the formal processes of systems engineering is not generally accepted in what
are considered related, but distinct, fields relating to software, safety, and secu-
rity. For example, the system development life cycle (SDLC) has indeed been
enthusiastically adopted by many modern software development shops, but is
not usually embraced by smaller, less mature organizations because doing so
introduces considerable overhead.
It is quite confusing to see so many combinations of the words systems,
software, and engineering. What are the precise differences between systems en-
gineering, software engineering, software systems engineering, and systems soft-
ware engineering? Some of the differences may appear to be obvious, but others
are not so obvious. When systems are aggregated into systems of systems, what
terms should be used to describe the characteristics of the much more complex
combinations of systems, which frequently exhibit behaviors beyond those of
their individual component systems? The combination of safety-critical systems
and security-critical systems, such as is occurring with the so-called smart grid,
produces issues well beyond the reach of the individual systems.
In order to get a handle on these issues, we use the elements of the table in
Figure 2.3 to assist with explaining the definitions and the relationships among
components. For example, we see “systems engineering” in the first line of the
table, “software engineering” in the second line, and “software systems engineer-
ing” in the third line. If we were to reverse Columns 2 and 4 in the third line,
we would get “systems software engineering,” which has a completely different
meaning from “software systems engineering.” Systems software engineering
relates to the engineering of systems software. Software systems engineering
refers to the application of systems engineering practices to all types of software-
intensive products. Systems software is a category of software that lies below the
applications software layer and performs many of the administrative functions
that support the applications. It includes operating systems and utilities, such
as sort algorithms.
Figure 2.3 Terminology and combinations of terms.
Some might consider such differentiations as those mentioned above and
illustrated in Figure 2.3 to be splitting hairs, but terminology that is confusing
and ambiguous will almost invariably lead to misunderstandings or worse, as
was suggested above. Such a situation arises when researchers and practitioners
miss whole bodies of applicable work because calling the same item by partially
or completely different names results in their searches for relevant information
being incomplete or, worse yet, inaccurate and misleading. We have already
discussed how software engineers often do not see themselves as systems en-
gineers and vice versa, but this is merely on the surface. As we dig deeper, we
will find that such high-level perceptions lead to problems with the omission
of important requirements for complex systems that are commonly developed
and implemented today.
What Is Systems Engineering?
At this point, we will tackle the seemingly simple task of defining systems en-
gineering. There are many definitions of this term, each of which differs some-
what from the other, sometimes minimally; in other cases, the differences are
substantial.
One widely-quoted definition of systems engineering is the one in the
DoD text [11], namely:
… an interdisciplinary engineering management process that evolves and
verifies an integrated, life-cycle balanced set of systems that satisfy customer
needs.
An important aspect of this definition is that systems engineering is a
process involving a variety of professional disciplines. The sole objective of this
process is to create systems that meet users’ needs. The author does not argue
with this definition, but will raise questions with respect to the real-world im-
plementation of the process. For example, are all necessary disciplines included
at appropriate stages of the process? It will be asserted that they are not. Have
the needs of customers or nonhuman users been adequately identified? It will
be claimed that users likely only express the functional requirements of which
they are personally aware or that directly affect them, and that they omit a host
of requirements that they either don’t know about or don’t care about.
Another definition of systems engineering is:
… an interdisciplinary field of engineering that focuses on how complex
engineering projects should be designed and managed over the life cycle of
the project. [19]
And yet another definition [10, p. 8] is:
… the practical application of the scientific, engineering, and management
skills required to transform a user’s need into a description of a system
configuration that best satisfies the need in an effective and efficient way.
Clearly these definitions suggest a life cycle process that begins with the
identification of user needs and should end up with one or more systems that
meet those needs. The success of systems engineering in general depends on
organizing system design and development processes so that they function
effectively within and between organizational units, which in turn requires
the right level of management and control. In the following section, we examine the
various stages in the process and describe some effective controls for the systems
creation environment.
Systems Engineering and the Systems Engineering Management
Process
As mentioned above, the pioneers of systems engineering were involved in large
government (or public sector) programs relating to space exploration and mili-
tary systems. We will, therefore, place emphasis on those sources initially, and
then look at how the systems engineering (SE) approach has been adopted by
others, particularly in the nongovernment or private sector.
The definitions of public and private sectors can vary with country. In the
United States, “public” is generally equivalent to “government” and “private”
means “nongovernment,” although nonprofit, academic, and religious organi-
zations are often recognized to be different. In Great Britain and many other
European countries, a third sector is defined. This refers to nongovernment,
nonprofit organizations. In other countries, such as Japan and India, they de-
fine a “joint sector,” which refers to industries or companies owned and/or run
by both the public and private sectors [20]. For the purposes of this book, we
differentiate between government and nongovernment sectors.
Christensen and Thayer [10] list the following systems engineering
functions:
• Problem definition is the determination of customer requirements with
respect to product expectations, needs, and constraints.
• Solution analysis is the analysis of options satisfying requirements and
constraints and selection of the optimal solution.
• Process planning is the prioritization of major technical tasks, the estimation
of the effort required for each task, and the determination of potential project
risks.
• Process control is the establishment of control methods for technical ac-
tivities including the measurement of progress, review of intermediate
products, and initiation of corrective action when necessary.
• Product evaluation is the determination of quality and quantity through
testing, demonstration, analysis, examination, and inspection.
These life cycle functions are common throughout software systems engi-
neering projects and their subcomponents, the latter dealing with such attributes
as security and performance. Confusion arises, however, because different types
of projects place varying emphasis on specific activity phases and on the terms
used to describe them, particularly when DoD terminology is compared with
that of other organizations.
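As a rough illustration of how these five functions follow one another, the short Python sketch below encodes them as an ordered sequence and always selects the earliest function not yet completed, so that a later function cannot be started before the earlier ones are finished. The enumeration and the selection rule are illustrative assumptions, not anything prescribed by Christensen and Thayer.

    from enum import IntEnum
    from typing import Optional

    class SEFunction(IntEnum):
        # Function names follow the Christensen and Thayer list quoted above.
        PROBLEM_DEFINITION = 1
        SOLUTION_ANALYSIS = 2
        PROCESS_PLANNING = 3
        PROCESS_CONTROL = 4
        PRODUCT_EVALUATION = 5

    def next_function(completed: set) -> Optional[SEFunction]:
        # Return the earliest function not yet completed, or None when all are done.
        for f in SEFunction:
            if f not in completed:
                return f
        return None

    completed = set()
    while (phase := next_function(completed)) is not None:
        print(phase.name)       # prints the five functions in order
        completed.add(phase)

Real projects iterate among these functions rather than marching through them once; the sketch merely fixes a reference ordering against which different organizations' terminology can be compared.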
In simple terms, requirements are obtained and converted into some form
of system or other by means of a series of activities making up a project. Due
diligence suggests that, prior to release, a system needs to be thoroughly tested
to ensure that it meets original requirements. This sounds simple, but, as we
shall see, if it were that simple then there would be universal standards, which,
if adhered to, would presumably result in systems that meet their requirements
perfectly every time. Clearly, we do not see such a result and this is because
there are always tradeoffs that have been made due to constraints on resources
and funding, time until delivery, and the ability to do as good a job as possible.
A project is always a tradeoff among timeliness, quality, and cost; you can try to
optimize two out of three, but it is not feasible to optimize all three for the same
project. With respect to the project management triangle, practitioners will sum-
marize this as: fast, good, cheap—pick any two [21]! A major exception to this
restriction appears to be well-funded government contracts, where cost over-
runs and time delays are common and seemingly acceptable, although much of
the reason for this may lie in the governance model in which politicians with
little technical knowledge—but a clear political agenda—have responsibility for
oversight of the management of the project.
It is ironic that the origins of systems engineering and by far its most rig-
orous methodologies have emanated from military and space programs, both
of which are government sponsored, and yet these projects appear to frequently
suffer huge overruns in time and money and often lack critical features that
were specified in the requirements documentation. The same appears to be
quite common in academia, where the product itself is often subservient to the
potential accolades and kudos to be gained from well-accepted research papers
and publications in quality journals. That is not to say that the private sector is
doing any better a job in general, but where there is effective management and
governance, large complex projects do get done on time, fall within budget, and
exhibit high quality.
Nevertheless, the private sector can learn a great deal from the methodolo-
gies created by research organizations for government, as long as effective gov-
ernance and strong project management are implemented. We will, therefore,
proceed with further descriptions of systems engineering management derived
from publications by engineering groups, such as the Institute of Electrical and
Electronics Engineers (IEEE) Computer Society, and the U.S. Department of
Defense (DoD) initiatives.
The DoD Text
Perhaps one of the most detailed and complete descriptions of the systems engi-
neering process is that utilized by the DoD. The document, Systems Engineering
Fundamentals, was prepared and published in 2001 by the DoD Systems Man-
agement College [11], and is available online for everyone to read.
For those, such as this author, whose information technology and infor-
mation security career has been in the private sector, the terminology and em-
phasis of the DoD approach can appear quite foreign. The management process
is seen as being highly regimented, with little opportunity for variations. This
likely stifles innovation, as the quotation by Steve Jobs at the beginning of this
chapter would suggest. Of course, given the types of systems developed for the
military, perhaps the last thing that one wants to see is unbridled creativity,
which is nonetheless often the hallmark of leading-edge commercial systems
ventures.
As will be discussed later in this book, this dichotomy of approaches be-
tween military and commercial systems engineering carries through to many
other aspects of the systems. In particular, military systems have (up until re-
cently) emphasized safety, reliability, and predictable operation above all else,
whereas commercial systems generally look to beef up security (although ve-
hicle control systems do focus on safety). Going forward, however, the impor-
tance of security, especially for network-centric systems, is being recognized and
accounted for in all areas of endeavor.
Another Observation
In researching for this book, the author became fascinated with the singular
lack of reference to the DoD standards and practices in books and articles about
nongovernment, especially nonmilitary, systems and software engineering. It is
as if a whole body of knowledge relating to military and other government sys-
tems and software engineering has been excluded from consideration in those
publications aimed at a general audience. This is a shame, to say the least, as
such knowledge transfer would help all involved parties.
More on Systems Engineering
Remaining on the topic of differences in approach between public-sector and
private-sector entities, we will briefly look at how the emphasis of systems
engineering management differs between groups. That emphasis is, on the one
hand, appropriate for the types of system that have traditionally been within
each group's purview; on the other hand, it hampers each group from benefiting
from the knowledge and skills of other groups. Some professional organizations,
such as INCOSE, make creditable attempts to be all-inclusive and to encourage
broader participation, yet many other groups remain quite insular.
It is suggested, therefore, that the reader delve into a broader range of
bodies of knowledge to gain from the wisdom developed within the various
silos, which unfortunately do not communicate adequately. Some of the differ-
ences that you are likely to discover have to do with the more highly developed
governance models that government entities have created and their somewhat
obsessive concentration on requirements going in and on verification and vali-
dation coming out of the process. Not to take anything away from focusing on
these important areas, but much of the private sector seems unable to afford the
luxury of such time- and money-consuming efforts. Despite this drawback, it
should serve us well to take a brief tour of the DoD approach and compare and
contrast various components.
The Systems Engineering Process (SEP)
The DoD document [11] describes the SEP as “... a comprehensive, iterative
and recursive problem-solving process, applied sequentially top-down by in-
tegrated teams.” A significant point here is the use of the term “top-down,”
clearly pointing to the existence of a well-defined hierarchical structure. While
the corporate world also has, for the most part, clearly laid-out organizational
structures, there is generally more room for collaborative teamwork within the
corporate structure. In fact, in many highly creative systems and software devel-
opment shops, particularly with start-ups and fast-moving companies, design-
ers and developers are given considerable freedom and latitude when it comes to
problem-solving and putting together systems requirements. Of course, this lat-
ter environment has both positive and negative characteristics, and these would
not likely work for weapons systems, for example. Larger, more mature private
institutions typically have many formal processes and management controls in
place, and may be more readily compared with the military, which starts out
being large and formal.
When it comes to developing system requirements, government organizations—
particularly the military—have well-defined user populations that may have little
direct input into defining initial requirements, although they will probably be
involved in testing systems in the final stages and providing feedback for tweak-
ing the systems. While the DoD document talks about customers, these recipi-
ents of the end product do not necessarily have much choice about whether or
not they are comfortable with using the resulting systems, despite any deficien-
cies in their design or manufacture.
That is to say, the usual competitive marketplace—while existing among
bidders for the military contracts—is not necessarily in force with respect to end
users when it comes to choosing system features and functionality, for example.
The difference between customers who have free choice and those who
don’t affects their influence on the design of a system and its use. Here is a real-
world example. The information technology department of a highly-structured
financial institution (for which both Asian and American traders worked) de-
signed and developed trading systems based on a minimal set of require-
ments obtained from some of the traders. After the trading system had been
installed, the financial institution insisted that traders use the system as deliv-
ered, whether or not it fully met the traders’ needs. When the same system was
presented to traders in the United States (who typically consider themselves
independent entrepreneurs as opposed to subservient employees), they refused
to use the system. It was very apparent that cultural differences between traders
in each country had a lot to do with how systems were designed and developed
and whether or not they were accepted. To some extent, the same observation
holds true with government agencies and internal corporate systems. However,
when it comes to commercial systems, the marketplace determines what gets
used and what languishes.
The formal requirements process in the DoD SEF document [11] in-
dicates that the system operator is the key customer, and that customers are
required to provide the basic needs for the system. According to the DoD SEF
[11], operational requirements should answer the following questions, some of
which have been slightly modified from the original:
• Where will the system be used?
• How will the system accomplish the mission objective?
• What are the critical system parameters to accomplish the mission?
• How should the various system components be used?
• How effective and efficient must the system be in performing its mis-
sion?
• How long will the system be in use?
• In which environments will the system be expected to operate effec-
tively?
The DoD SEF [11] lists the following system requirements:
• Customer requirements are the expectations of systems in terms of mis-
sion objectives, environment, constraints, and measures of effectiveness
and suitability.
• Functional requirements are the necessary tasks, actions, or activities that
must be completed.
• Performance requirements are the extent to which mission or function
must be executed with respect to quantity, quality, coverage, timeliness,
and readiness.
• Design requirements are the “build to” requirements for products (e.g.,
hardware, software) and “how to execute” requirements for processes.
• Derived requirements are those implied by or transformed from higher-level
requirements.
• Allocated requirements are established from division of high-level require-
ments into a number of lower-level requirements.
It is noteworthy that the above requirements list makes no explicit men-
tion of either security or safety. These latter factors—along with others relat-
ing to such characteristics as resiliency and interoperability—would likely be
subsumed within the performance requirements, were they to be considered at
all. Nevertheless, the absence of these nonfunctional requirements is not
surprising, because broader publications in systems engineering also tend to
omit them. It is usually only in special issues of magazines and journals in
such fields as systems engineering, electrical engineering, or other branches of
engineering focused on software, security, and safety (as well as in books that
specifically cover software, security, and safety engineering and related subjects)
that the reader of the broader engineering literature gets to learn more about
these topics.
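To make the omission concrete, consider a hypothetical project that records its requirements under the SEF categories listed above. The short Python sketch below reports which nonfunctional concerns never appear anywhere in the requirement set; the record format, the sample entries, and the keyword heuristic are all assumptions introduced for illustration and are not part of the SEF.

    # DoD SEF requirement categories as listed above; the sample records are hypothetical.
    DOD_CATEGORIES = ("customer", "functional", "performance", "design", "derived", "allocated")

    requirements = [
        {"id": "C-1", "category": "customer", "text": "Operate in littoral environments"},
        {"id": "F-3", "category": "functional", "text": "Track up to 200 contacts"},
        {"id": "P-2", "category": "performance", "text": "Refresh the display within 0.5 seconds"},
    ]
    assert all(r["category"] in DOD_CATEGORIES for r in requirements)

    def missing_concerns(reqs, keywords=("safety", "security")):
        # Report which keywords never appear in any requirement text.
        present = {kw for r in reqs for kw in keywords if kw in r["text"].lower()}
        return [kw for kw in keywords if kw not in present]

    print(missing_concerns(requirements))   # ['safety', 'security'] for this sample set

Even such a crude check makes the underlying point: unless safety and security are stated explicitly somewhere in the requirements, they are unlikely to be verified explicitly later in the life cycle.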
Summary and Conclusions
In this chapter, we have addressed some of the confusion in regard to the defi-
nitions of systems and systems engineering, and we have discussed some of the
issues that arise in the systems engineering process. The reader can access some
excellent guides and handbooks on systems engineering, some of which (such as
the DoD and NASA documents referenced in this chapter) are available in the
public domain at no charge, to learn more about specific aspects of the subject.
The main take-away from this chapter should be recognition that there
is inadequate sharing among various government and nongovernment players
and among different subcategories, such as those relating to security and safety.
A goal of subsequent chapters is to encourage the engagement of diverse
groups, who today do not communicate sufficiently, and also to achieve some
measure of consistency across fields.
Endnotes
[1] Available at http://www.sei.cmu.edu/cmmi/, last accessed on July 8, 2012.
[2] Available at http://www.issea.org, last accessed on July 8, 2012.
[3] Available at http://www.sse-cmm.org/issea/issea.asp, last accessed on July 8, 2012.
[4] Available at http://bsimm.com, last accessed on July 8, 2012.
[5] Available at http://www.cert.org/mswa/, last accessed on July 8, 2012.
[6] Allman, E., “The Robustness Principle Reconsidered,” Communications of the ACM, Vol.
54, No. 8, 2011, pp. 40–45.
[7] Haskins, C. (ed.), Systems Engineering Handbook: A Guide for System Life Cycle Processes
and Activities, San Diego, CA: International Council on Systems Engineering (INCOSE),
2011.
[8] Axelrod, C. W., “Applying Lessons from Safety-Critical Systems to Security-Critical Soft-
ware,” 2011 IEEE LISAT (Long Island Systems, Applications and Technology) Conference,
Farmingdale, NY, May 2011, published on the IEEE Xplore website.
[9] IEEE/ISO/IEC Standard 24765:2010, Systems and Software Engineering—Vocabulary, 2010.
Available at http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=5733833, last ac-
cessed on July 8, 2012 (registration required to obtain document).
[10] Christensen, M. J., and R. H. Thayer, The Project Manager’s Guide to Software Engineer-
ing’s Best Practices, Los Alamitos, CA: IEEE Computer Society, 2002.
[11] United States Department of Defense (DoD), Systems Engineering Fundamentals, Fort Bel-
voir, VA: Defense Acquisition University Press, 2001. Available at http://www.dau.mil/
pubs/pdf/SEFGuide%2001-01.pdf, last accessed on July 8, 2012.
[12] Boehm, B. W., Characteristics of Software Quality, New York: North-Holland Publishing
Company, 1978.
[13] This definition of engineering is available at http://en.wikipedia.org/wiki/Engineering, last
accessed on July 8, 2012. The definition is attributed to the American Engineers’ Council
of Professional Development (ECPD), which is the predecessor of the Accreditation Board
for Engineering and Technology (ABET).
[14] Alberts, H. C., “System Engineering—Managing the Gestalt,” System Engineering Course
Syllabus, Fort Belvoir, VA: Department of Defense Systems Management College, 1988.
[15] The specific reference to H. C. Alberts can be found in Chapter 1 of The Project Manager’s
Guide to Software Engineering’s Best Practices [10]. This chapter is posted in full on the Wiley
website at http://media.wiley.com/product_data/excerpt/96/07695119/0769511996.pdf,
last accessed on July 8, 2012.
[16] “A Brief History of Systems Engineering,” is available at www.incose.org/mediarelations/
briefhistory.aspx, last accessed on July 8, 2012.
[17] A system of systems is defined in the INCOSE document [7] as applying “to a system-of-
interest whose system elements are themselves systems; typically these entail large scale
inter-disciplinary problems with multiple, heterogeneous, distributed systems.”
[18] National Aeronautics and Space Administration (NASA), NASA Systems Engineering
Handbook, NASA/SP-2007-6105 Rev 1, Washington, DC: NASA, 2007. Available at
http://www.tsgc.utexas.edu/challenge/PDF/NASA-SystemsEngrHandbook.pdf, last accessed
on July 8, 2012.
[19] This definition of systems engineering is available at http://en.wikipedia.org/wiki/Systems_
engineering, last accessed on July 8, 2012.
[20] Available at http://en.wikipedia.org/wiki/Voluntary_sector, last accessed on July 8, 2012.
[21] For an overview of the project triangle, see http://en.wikipedia.org/wiki/Project_triangle,
last accessed on July 8, 2012.