https://www.huffpost.com/entry/online-dating-vs-offline_b_4037867
For your initial post, provide a sentence to share which article
you are referring to so that you can best communicate with your
peers. Include a link to your selection.
· Explain how the argument contains or avoids bias.
i. Provide specific examples to support your explanation.
ii. What assumptions does it make?
· Discuss the credibility of the overall argument.
i. Were the resources the argument was built upon credible?
ii. Does the credibility support or undermine the article’s claims
in any important ways?
In response to your peers, provide an additional resource to
support or refute the argument your peer makes. Do you agree
with their claims of credibility? Are there any other possible
biases not identified?
Response #1
Allysa Tantala posted Sep 22, 2019 10:17 PM
The article that I am looking at is Online Dating Vs. Offline
Dating: Pros and Cons. It was written by Julie Spira, an online
dating expert, bestselling author, and CEO of Cyber-Dating
Expert. The name of the article is spot on in describing what it
is about. The author goes through the pros and cons of dating
online and offline in today's day and age. The author avoids
bias because she examines both options' positive and
negative attributes. She comes at the issues from both angles,
and I believe she does a very good job of remaining unbiased.
She states that “if you're serious about meeting someone
special, you must include a combination of both online and
offline dating in your routine” (Spira, 2013, par. 18). She’s
stating that both options have their pros and cons and that really
a combination of both is needed to find someone. The only bias
I could see anyone pointing out would be that she is a woman,
so you do not get the male perspective on these things. That
being said, I one hundred percent think she covers all of the
questions people may have about online and offline dating in
today's world. The only assumption being made here is that the
reader wants to be out in the dating world and needs to
know what is best. But the title of the article is pretty self-
explanatory, so anyone who did not want to know these things
would not have to waste time reading it; they could tell
what it is about from the title.
The resource that she used was herself, and like I stated above,
she is an online dating expert, bestselling author, and CEO of
Cyber-Dating Expert; so she is more than qualified to give her
perspective on these issues. I find her to be credible and
thought-provoking. Her credibility supports everything the article says
and makes the reader feel like they are being told the truth by
someone who completely understands all of the pros and cons.
Resource:
Spira, J. (2013, December 3). Online Dating Vs. Offline Dating:
Pros and Cons. Retrieved
from https://www.huffpost.com/entry/online-dating-vs-offline_b_4037867
Response #2
Jennifer Caforio posted Sep 22, 2019 3:07 PM
Hello all,
I chose to look at online vs. offline dating. The article I will be
looking at is Online Dating Vs. Offline Dating: Pros and Cons.
The article I have selected avoids bias. It does this by offering
pros and cons of both dating platforms. She does not claim that
one is better; rather, both are very important to today's
society when looking for a partner (Spira, 2013). Spira says, "As
one who believes in casting a wide net, I tell singles that you
really need to do both" (Spira, 2013). I feel that this sentence
clears away any bias toward either dating platform. She also goes on to list
the pros and cons of online dating and the pros and cons of
traditional dating. Listing both platforms shows the
reader the complete picture, which can lead the reader to decide
independently.
This article makes the assumption that online dating is just as
important as offline dating. This platform is valuable and
useful. Although this article is well written and contains no bias,
it is not very credible. It does express valuable and sensible
claims; however, it does not support them. The resources in the
article were not credible in that no other resources are listed.
Julie Spira lists herself as an online dating expert, and she does
appear to have many years' experience in the area. She
references (in general) other dating experts or sites that agree
with her views, but she does not list them by name or by title of
published works. The credibility of this article undermines the
argument's claims by not listing any other credible resources on
the subject. Although she presents logical thinking and unbiased
testimony, it appears to be her opinion alone.
Resources
Spira, J. (2013, December 3). Online Dating Vs. Offline Dating:
Pros and Cons. Retrieved from
https://www.huffpost.com/entry/online-dating-vs-offline_b_4037867
Threat Modeling
Designing for Security
Adam Shostack
Threat Modeling: Designing for Security
Published by
John Wiley & Sons, Inc.
10475 Crosspoint Boulevard
Indianapolis, IN 46256
www.wiley.com
Copyright © 2014 by Adam Shostack
Published by John Wiley & Sons, Inc., Indianapolis, Indiana
Published simultaneously in Canada
ISBN: 978-1-118-80999-0
ISBN: 978-1-118-82269-2 (ebk)
ISBN: 978-1-118-81005-7 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
No part of this publication may be reproduced, stored in a
retrieval system or transmitted in any form or
by any means, electronic, mechanical, photocopying, recording,
scanning or otherwise, except as permitted
under Sections 107 or 108 of the 1976 United States Copyright
Act, without either the prior written permis-
sion of the Publisher, or authorization through payment of the
appropriate per-copy fee to the Copyright
Clearance Center, 222 Rosewood Drive, Danvers, MA 01923,
(978) 750-8400, fax (978) 646-8600. Requests to
the Publisher for permission should be addressed to the
Permissions Department, John Wiley & Sons, Inc.,
111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax
(201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: The publisher and
the author make no representations or
warranties with respect to the accuracy or completeness of the
contents of this work and specifically disclaim
all warranties, including without limitation warranties of
fitness for a particular purpose. No warranty may
be created or extended by sales or promotional materials. The
advice and strategies contained herein may not
be suitable for every situation. This work is sold with the
understanding that the publisher is not engaged in
rendering legal, accounting, or other professional services. If
professional assistance is required, the services
of a competent professional person should be sought. Neither
the publisher nor the author shall be liable for
damages arising herefrom. The fact that an organization or Web
site is referred to in this work as a citation
and/or a potential source of further information does not mean
that the author or the publisher endorses
the information the organization or website may provide or
recommendations it may make. Further, readers
should be aware that Internet websites listed in this work may
have changed or disappeared between when
this work was written and when it is read.
For general information on our other products and services
please contact our Customer Care Department
within the United States at (877) 762-2974, outside the United
States at (317) 572-3993 or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and
by print-on-demand. Some material included
with standard print versions of this book may not be included in
e-books or in print-on-demand. If this book
refers to media such as a CD or DVD that is not included in the
version you purchased, you may download
this material at http://booksupport.wiley.com. For more
information about Wiley products,
visit www.wiley.com.
Library of Congress Control Number: 2013954095
Trademarks: Wiley and the Wiley logo are trademarks or
registered trademarks of John Wiley & Sons, Inc.
and/or its affiliates, in the United States and other countries,
and may not be used without written permission.
All other trademarks are the property of their respective owners.
John Wiley & Sons, Inc. is not associated
with any product or vendor mentioned in this book.
For all those striving to deliver more secure systems
Credits
Executive Editor
Carol Long
Project Editors
Victoria Swider
Tom Dinse
Technical Editor
Chris Wysopal
Production Editor
Christine Mugnolo
Copy Editor
Luann Rouff
Editorial Manager
Mary Beth Wakefield
Freelancer Editorial Manager
Rosemarie Graham
Associate Director of Marketing
David Mayhew
Marketing Manager
Ashley Zurcher
Business Manager
Amy Knies
Vice President and Executive
Group Publisher
Richard Swadley
Associate Publisher
Jim Minatel
Project Coordinator, Cover
Todd Klemme
Technical Proofreader
Russ McRee
Proofreader
Nancy Carrasco
Indexer
Robert Swanson
Cover Image
Courtesy of Microsoft
Cover Designer
Wiley
About the Author

Adam Shostack is currently a program manager at Microsoft.
His security roles there have included security development
processes, usable security, and attack modeling. His attack-
modeling work led to security updates for Autorun being
delivered to hundreds of millions of computers. He shipped
the SDL Threat Modeling Tool and the Elevation of Privilege
threat modeling game. While doing security development
process work, he delivered threat modeling training across
Microsoft and its partners and customers.
Prior to Microsoft, he has been an executive at a number of
successful
information security and privacy startups. He helped found the
CVE, the
Privacy Enhancing Technologies Symposium and the
International Financial
Cryptography Association. He has been a consultant to banks,
hospitals and
startups and established software companies. For the fi rst
several years of his
career, he was a systems manager for a medical research lab.
Shostack is a
prolifi c author, blogger, and public speaker. With Andrew
Stewart, he co-authored
The New School of Information Security (Addison-Wesley,
2008).
About the Technical Editor

Chris Wysopal, Veracode's CTO and Co-Founder, is responsible
for the company’s
software security analysis capabilities. In 2008 he was named
one of InfoWorld’s
Top 25 CTOs and one of the 100 most influential people in IT
by eWeek. One of
the original vulnerability researchers and a member of L0pht
Heavy Industries,
he has testified on Capitol Hill in the US on the subjects of
government computer
security and how vulnerabilities are discovered in software. He
is an author of
L0phtCrack and netcat for Windows. He is the lead author of
The Art of Software
Security Testing (Addison-Wesley, 2006).
Acknowledgments

First and foremost, I'd like to thank countless engineers at
Microsoft and else-
where who have given me feedback about their experiences
threat modeling. I
wouldn’t have had the opportunity to have so many open and
direct conversa-
tions without the support of Eric Bidstrup and Steve Lipner,
who on my first
day at Microsoft told me to go “wallow in the problem for a
while.” I don’t
think either expected “a while” to be quite so long. Nearly eight
years later with
countless deliverables along the way, this book is my most
complete answer to
the question they asked me: “How can we get better threat
models?”
Ellen Cram Kowalczyk helped me make the book a reality in the
Microsoft
context, gave great feedback on both details and aspects that
were missing, and
also provided a lot of the history of threat modeling from the
first security pushes
through the formation of the SDL, and she was a great manager
and mentor.
Ellen and Steve Lipner were also invaluable in helping me
obtain permission
to use Microsoft documents.
The Elevation of Privilege game that opens this book owes
much to Jacqueline
Beauchere, who saw promise in an ugly prototype called
“Threat Spades,” and
invested in making it beautiful and widely available.
The SDL Threat Modeling Tool might not exist if Chris
Peterson hadn’t given
me a chance to build a threat modeling tool for the Windows
team to use. Ivan
Medvedev, Patrick McCuller, Meng Li, and Larry Osterman
built the fi rst version
of that tool. I’d like to thank the many engineers in Windows,
and later across
Microsoft, who provided bug reports and suggestions for
improvements in the
beta days, and acknowledge all those who just flamed at us,
reminding us of the
importance of getting threat modeling right. Without that tool,
my experience
and breadth in threat modeling would be far poorer.
Larry Osterman, Douglas MacIver, Eric Douglas, Michael
Howard, and Bob
Fruth gave me hours of their time and experience in
understanding threat
modeling at Microsoft. Window Snyder’s perspective as I
started the Microsoft
job has been invaluable over the years. Knowing when you’re
done . . . well,
this book is nearly done.
Rob Reeder was a great guide to the field of usable security, and Chapter 15
and Chapter 15
would look very different if not for our years of collaboration. I
can’t discuss
usable security without thanking Lorrie Cranor for her help on
that topic; but
also for the chance to keynote the Symposium on Usable
Privacy and Security,
which led me to think about usable engineering advice, a
perspective that is
now suffused throughout this book.
Andy Steingruebl, Don Ankney, and Russ McRee all taught me
important
lessons related to operational threat modeling, and how the
trade-offs change
as you change context. Guys, thank you for beating on me—
those lessons now
permeate many chapters. Alec Yasinsac, Harold Pardue, and Jeff
Landry were
generous with their time discussing their attack tree experience,
and Chapters
4 and 17 are better for those conversations. Joseph Lorenzo Hall
was also a gem
in helping with attack trees. Wendy Nather argued strongly that
assets and
attackers are great ways to make threats real, and thus help
overcome resistance
to fixing them. Rob Sama checked the Acme financials
example from a CPA's
perspective, correcting many of my errors. Dave Aucsmith
graciously allowed
graciously allowed
me to include his threat personas as a complete appendix. Jason
Nehrboss gave
me some of the best feedback I’ve ever received on very early
chapters.
I’d also like to acknowledge Jacob Appelbaum, Crispin Cowan,
Dana Epp (for
years of help, on both the book and tools), Jeremi Gosney,
Yoshi Kohno, David
LeBlanc, Marsh Ray, Nick Mathewson, Tamara McBride, Russ
McRee, Talhah
Mir, David Mortman, Alec Muffett, Ben Rothke, Andrew
Stewart, and Bryan
Sullivan for helpful feedback on drafts and/or ideas that made it
into the book
in a wide variety of ways.
Of course, none of those acknowledged in this section are
responsible for the
errors which doubtless crept in or remain.
Writing this book “by myself” (an odd phrase given everyone
I’m acknowl-
edging) makes me miss working with Andrew Stewart, my
partner in writing
on The New School of Information Security. Especially since
people sometimes
attribute that book to me, I want to be public about how much I
missed his
collaboration in this project.
This book wouldn’t be in the form it is were it not for Bruce
Schneier’s will-
ingness to make an introduction to Carol Long, and Carol’s
willingness to pick
up the book. It wasn’t always easy to read the feedback and
suggested changes
from my excellent project editor, Victoria Swider, but this thing
is better where I
did. Tom Dinse stepped in as the project ended and masterfully
took control of a
very large number of open tasks, bringing them to resolution on
a tight schedule.
Lastly, and most importantly, thank you to Terri, for all your
help, support,
and love, and for putting up with “it’s almost done” for a very,
very long time.
—Adam Shostack
Contents

Introduction xxi
Part I Getting Started 1
Chapter 1 Dive In and Threat Model! 3
Learning to Threat Model 4
What Are You Building? 5
What Can Go Wrong? 7
Addressing Each Threat 12
Checking Your Work 24
Threat Modeling on Your Own 26
Checklists for Diving In and Threat Modeling 27
Summary 28
Chapter 2 Strategies for Threat Modeling 29
“What’s Your Threat Model?” 30
Brainstorming Your Threats 31
Brainstorming Variants 32
Literature Review 33
Perspective on Brainstorming 34
Structured Approaches to Threat Modeling 34
Focusing on Assets 36
Focusing on Attackers 40
Focusing on Software 41
Models of Software 43
Types of Diagrams 44
Trust Boundaries 50
What to Include in a Diagram 52
Complex Diagrams 52
Labels in Diagrams 53
Color in Diagrams 53
Entry Points 53
Validating Diagrams 54
Summary 56
Part II Finding Threats 59
Chapter 3 STRIDE 61
Understanding STRIDE and Why It’s Useful 62
Spoofing Threats 64
Spoofing a Process or File on the Same Machine 65
Spoofing a Machine 66
Spoofing a Person 66
Tampering Threats 67
Tampering with a File 68
Tampering with Memory 68
Tampering with a Network 68
Repudiation Threats 68
Attacking the Logs 69
Repudiating an Action 70
Information Disclosure Threats 70
Information Disclosure from a Process 71
Information Disclosure from a Data Store 71
Information Disclosure from a Data Flow 72
Denial-of-Service Threats 72
Elevation of Privilege Threats 73
Elevate Privileges by Corrupting a Process 74
Elevate Privileges through Authorization Failures 74
Extended Example: STRIDE Threats against Acme-DB 74
STRIDE Variants 78
STRIDE-per-Element 78
STRIDE-per-Interaction 80
DESIST 85
Exit Criteria 85
Summary 85
Chapter 4 Attack Trees 87
Working with Attack Trees 87
Using Attack Trees to Find Threats 88
Creating New Attack Trees 88
Representing a Tree 91
Human-Viewable Representations 91
Structured Representations 94
Example Attack Tree 94
Real Attack Trees 96
Fraud Attack Tree 96
Election Operations Assessment Threat Trees 96
Mind Maps 98
Perspective on Attack Trees 98
Summary 100
Chapter 5 Attack Libraries 101
Properties of Attack Libraries 101
Libraries and Checklists 103
Libraries and Literature Reviews 103
CAPEC 104
Exit Criteria 106
Perspective on CAPEC 106
OWASP Top Ten 108
Summary 108
Chapter 6 Privacy Tools 111
Solove’s Taxonomy of Privacy 112
Privacy Considerations for Internet Protocols 114
Privacy Impact Assessments (PIA) 114
The Nymity Slider and the Privacy Ratchet 115
Contextual Integrity 117
Contextual Integrity Decision Heuristic 118
Augmented Contextual Integrity Heuristic 119
Perspective on Contextual Integrity 119
LINDDUN 120
Summary 121
Part III Managing and Addressing Threats 123
Chapter 7 Processing and Managing Threats 125
Starting the Threat Modeling Project 126
When to Threat Model 126
What to Start and (Plan to) End With 128
Where to Start 128
Digging Deeper into Mitigations 130
The Order of Mitigation 131
Playing Chess 131
Prioritizing 132
Running from the Bear 132
Tracking with Tables and Lists 133
Tracking Threats 133
Making Assumptions 135
External Security Notes 136
Scenario-Specific Elements of Threat Modeling 138
Customer/Vendor Trust Boundary 139
New Technologies 139
Threat Modeling an API 141
Summary 143
Chapter 8 Defensive Tactics and Technologies 145
Tactics and Technologies for Mitigating Threats 145
Authentication: Mitigating Spoofing 146
Integrity: Mitigating Tampering 148
Non-Repudiation: Mitigating Repudiation 150
Confidentiality: Mitigating Information Disclosure 153
Availability: Mitigating Denial of Service 155
Authorization: Mitigating Elevation of Privilege 157
Tactic and Technology Traps 159
Addressing Threats with Patterns 159
Standard Deployments 160
Addressing CAPEC Threats 160
Mitigating Privacy Threats 160
Minimization 160
Cryptography 161
Compliance and Policy 164
Summary 164
Chapter 9 Trade-Offs When Addressing Threats 167
Classic Strategies for Risk Management 168
Avoiding Risks 168
Addressing Risks 168
Accepting Risks 169
Transferring Risks 169
Ignoring Risks 169
Selecting Mitigations for Risk Management 170
Changing the Design 170
Applying Standard Mitigation Technologies 174
Designing a Custom Mitigation 176
Fuzzing Is Not a Mitigation 177
Threat-Specific Prioritization Approaches 178
Simple Approaches 178
Threat-Ranking with a Bug Bar 180
Cost Estimation Approaches 181
Mitigation via Risk Acceptance 184
Mitigation via Business Acceptance 184
Mitigation via User Acceptance 185
Arms Races in Mitigation Strategies 185
Summary 186
Chapter 10 Validating That Threats Are Addressed 189
Testing Threat Mitigations 190
Test Process Integration 190
How to Test a Mitigation 191
Penetration Testing 191
Checking Code You Acquire 192
Constructing a Software Model 193
Using the Software Model 194
QA’ing Threat Modeling 195
Model/Reality Conformance 195
Task and Process Completion 196
Bug Checking 196
Process Aspects of Addressing Threats 197
Threat Modeling Empowers Testing; Testing Empowers Threat Modeling 197
Validation/Transformation 197
Document Assumptions as You Go 198
Tables and Lists 198
Summary 202
Chapter 11 Threat Modeling Tools 203
Generally Useful Tools 204
Whiteboards 204
Office Suites 204
Bug-Tracking Systems 204
Open-Source Tools 206
TRIKE 206
SeaMonster 206
Elevation of Privilege 206
Commercial Tools 208
ThreatModeler 208
Corporate Threat Modeller 208
SecurITree 209
Little-JIL 209
Microsoft’s SDL Threat Modeling Tool 209
Tools That Don’t Exist Yet 213
Summary 213
Part IV Threat Modeling in Technologies and Tricky Areas 215
Chapter 12 Requirements Cookbook 217
Why a “Cookbook”? 218
The Interplay of Requirements, Threats, and Mitigations 219
Business Requirements 220
Outshining the Competition 220
Industry Requirements 220
Scenario-Driven Requirements 221
Prevent/Detect/Respond as a Frame for Requirements 221
Prevention 221
Detection 225
Response 225
People/Process/Technology as a Frame for Requirements 227
People 227
Process 228
Technology 228
Development Requirements vs. Acquisition Requirements 228
Compliance-Driven Requirements 229
Cloud Security Alliance 229
NIST Publication 200 230
PCI-DSS 231
Privacy Requirements 231
Fair Information Practices 232
Privacy by Design 232
The Seven Laws of Identity 233
Microsoft Privacy Standards for Development 234
The STRIDE Requirements 234
Authentication 235
Integrity 236
Non-Repudiation 237
Confidentiality 238
Availability 238
Authorization 239
Non-Requirements 240
Operational Non-Requirements 240
Warnings and Prompts 241
Microsoft’s “10 Immutable Laws” 241
Summary 242
Chapter 13 Web and Cloud Threats 243
Web Threats 243
Website Threats 244
Web Browser and Plugin Threats 244
Cloud Tenant Threats 246
Insider Threats 246
Co-Tenant Threats 247
Threats to Compliance 247
Legal Threats 248
Threats to Forensic Response 248
Miscellaneous Threats 248
Cloud Provider Threats 249
Threats Directly from Tenants 249
Threats Caused by Tenant Behavior 250
Mobile Threats 250
Summary 251
Chapter 14 Accounts and Identity 253
Account Life Cycles 254
Account Creation 254
Account Maintenance 257
Account Termination 258
Account Life-Cycle Checklist 258
Authentication 259
Login 260
Login Failures 262
Threats to “What You Have” 263
Threats to “What You Are” 264
Threats to “What You Know” 267
Authentication Checklist 271
Account Recovery 271
Time and Account Recovery 272
E-mail for Account Recovery 273
Knowledge-Based Authentication 274
Social Authentication 278
Attacker-Driven Analysis of Account Recovery 280
Multi-Channel Authentication 281
Account Recovery Checklist 281
Names, IDs, and SSNs 282
Names 282
Identity Documents 285
Social Security Numbers and Other National Identity Numbers 286
Identity Theft 289
Names, IDs, and SSNs Checklist 290
Summary 290
Chapter 15 Human Factors and Usability 293
Models of People 294
Applying Behaviorist Models of People 295
Cognitive Science Models of People 297
Heuristic Models of People 302
Models of Software Scenarios 304
Modeling the Software 304
Diagramming for Modeling the Software 307
Modeling Electronic Social Engineering Attacks 309
Threat Elicitation Techniques 311
Brainstorming 311
The Ceremony Approach to Threat Modeling 311
Ceremony Analysis Heuristics 312
Integrating Usability into the Four-Stage Framework 315
Tools and Techniques for Addressing Human Factors 316
Myths That Inhibit Human Factors Work 317
Design Patterns for Good Decisions 317
Design Patterns for a Kind Learning Environment 320
User Interface Tools and Techniques 322
Configuration 322
Explicit Warnings 323
Patterns That Grab Attention 325
Testing for Human Factors 327
Benign and Malicious Scenarios 328
Ecological Validity 328
Perspective on Usability and Ceremonies 329
Summary 331
Chapter 16 Threats to Cryptosystems 333
Cryptographic Primitives 334
Basic Primitives 334
Privacy Primitives 339
Modern Cryptographic Primitives 339
Classic Threat Actors 341
Attacks against Cryptosystems 342
Building with Crypto 346
Making Choices 346
Preparing for Upgrades 346
Key Management 346
Authenticating before Decrypting 348
Things to Remember about Crypto 348
Use a Cryptosystem Designed by Professionals 348
Use Cryptographic Code Built and Tested by Professionals 348
Cryptography Is Not Magic Security Dust 349
Assume It Will All Become Public 349
You Still Need to Manage Keys 349
Secret Systems: Kerckhoffs and His Principles 349
Summary 351
Part V Taking It to the Next Level 353
Chapter 17 Bringing Threat Modeling to Your Organization 355
How To Introduce Threat Modeling 356
Convincing Individual Contributors 357
Convincing Management 358
Who Does What? 359
Threat Modeling and Project Management 359
Prerequisites 360
Deliverables 360
Individual Roles and Responsibilities 362
Group Interaction 363
Diversity in Threat Modeling Teams 367
Threat Modeling within a Development Life Cycle 367
Development Process Issues 368
Organizational Issues 373
Customizing a Process for Your Organization 378
Overcoming Objections to Threat Modeling 379
Resource Objections 379
Value Objections 380
Objections to the Plan 381
Summary 383
Chapter 18 Experimental Approaches 385
Looking in the Seams 386
Operational Threat Models 387
FlipIT 388
Kill Chains 388
The “Broad Street” Taxonomy 392
Adversarial Machine Learning 398
Threat Modeling a Business 399
Threats to Threat Modeling Approaches 400
Dangerous Deliverables 400
Enumerate All Assumptions 400
Dangerous Approaches 402
How to Experiment 404
Define a Problem 404
Find Aspects to Measure and Measure Them 404
Study Your Results 405
Summary 405
Chapter 19 Architecting for Success 407
Understanding Flow 407
Flow and Threat Modeling 409
Stymieing People 411
Beware of Cognitive Load 411
Avoid Creator Blindness 412
Assets and Attackers 412
Knowing the Participants 413
Boundary Objects 414
The Best Is the Enemy of the Good 415
Closing Perspectives 416
“The Threat Model Has Changed” 417
On Artistry 418
Summary 419
Now Threat Model 420
Appendix A Helpful Tools 421
Common Answers to “What’s Your Threat Model?” 421
Network Attackers 421
Physical Attackers 422
Attacks against People 423
Supply Chain Attackers 423
Privacy Attackers 424
Non-Sentient “Attackers” 424
The Internet Threat Model 424
Assets 425
Computers as Assets 425
People as Assets 426
Processes as Assets 426
Intangible Assets 427
Stepping-Stone Assets 427
Appendix B Threat Trees 429
STRIDE Threat Trees 430
Spoofing an External Entity (Client/Person/Account) 432
Spoofing a Process 438
Spoofing of a Data Flow 439
Tampering with a Process 442
Tampering with a Data Flow 444
Tampering with a Data Store 446
Repudiation against a Process (or by an External Entity) 450
Repudiation, Data Store 452
Information Disclosure from a Process 454
Information Disclosure from a Data Flow 456
Information Disclosure from a Data Store 459
Denial of Service against a Process 462
Denial of Service against a Data Flow 463
Denial of Service against a Data Store 466
Elevation of Privilege against a Process 468
Other Threat Trees 470
Running Code 471
Attack via a “Social” Program 474
Attack with Tricky Filenames 476
Appendix C Attacker Lists 477
Attacker Lists 478
Barnard’s List 478
Verizon’s Lists 478
OWASP 478
Intel TARA 479
Personas and Archetypes 480
Aucsmith’s Attacker Personas 481
Background and Definitions 481
Personas 484
David “Ne0phyate” Bradley – Vandal 484
JoLynn “NightLily” Dobney – Trespasser 486
Sean “Keech” Purcell – Defacer 488
Bryan “CrossFyre” Walton – Author 490
Lorrin Smith-Bates – Insider 492
Douglas Hite – Thief 494
Mr. Smith – Terrorist 496
Mr. Jones – Spy 498
Appendix D Elevation of Privilege: The Cards 501
Spoofing 501
Tampering 503
Repudiation 504
Information Disclosure 506
Denial of Service 507
Elevation of Privilege (EoP) 508
Appendix E Case Studies 511
The Acme Database 512
Security Requirements 512
Software Model 512
Threats and Mitigations 513
Acme’s Operational Network 519
Security Requirements 519
Operational Network 520
Threats to the Network 521
Phones and One-Time Token Authenticators 525
The Scenario 526
The Threats 527
Possible Redesigns 528
Sample for You to Model 528
Background 529
The iNTegrity Data Flow Diagrams 530
Exercises 531
Glossary 533
Bibliography 543
Index 567
Introduction

All models are wrong, some models are useful.
— George Box
People who build software, systems, or things with
software need to address the many predictable threats their
systems can face. This book describes the useful models you can
employ to address or mitigate these potential threats.
Threat modeling is a fancy name for something we all do
instinctively. If I
asked you to threat model your house, you might start by
thinking about the
precious things within it: your family, heirlooms, photos, or
perhaps your collec-
tion of signed movie posters. You might start thinking about the
ways someone
might break in, such as unlocked doors or open windows. And
you might start
thinking about the sorts of people who might break in, including
neighborhood
kids, professional burglars, drug addicts, perhaps a stalker, or
someone trying
to steal your Picasso original.
Each of these examples has an analog in the software world, but
for now,
the important thing is not how you guard against each threat,
but that you’re
able to relate to this way of thinking. If you were asked to help
assess a friend’s
house, you could probably help, but you might lack confidence
in how complete
your analysis is. If you were asked to secure an office complex,
you might have
a still harder time, and securing a military base or a prison
seems even more
difficult. In those cases, your instincts are insufficient, and
you’d need tools to
help tackle the questions. This book will give you the tools to
think about threat
modeling technology in structured and effective ways.
In this introduction, you’ll learn about what threat modeling is
and why indi-
viduals, teams, and organizations threat model. Those reasons
include finding
security issues early, improving your understanding of security
requirements,
and being able to engineer and deliver better products. This
introduction has
five main sections describing what the book is about, including
a definition of
threat modeling and reasons it’s important; who should read this
book; how to
use it and what you can expect to gain from the various parts;
and new lessons
in threat modeling.
What Is Threat Modeling?
Everyone threat models. Many people do it out of frustration in
line at the airport,
sneaking out of the house or into a bar. At the airport, you
might idly consider how
to sneak something through security, even if you have no intent
to do so. Sneaking
in or out of someplace, you worry about who might catch you.
When you speed
down the highway, you work with an implicit threat model
where the main threat
is the police, who you probably think are lurking behind a
billboard or overpass.
Threats of road obstructions, deer, or rain might play into your
model as well.
When you threat model, you usually use two types of models.
There’s a model
of what you’re building, and there’s a model of the threats
(what can go wrong).
What you’re building with software might be a website, a
downloadable program
or app, or it might be delivered in a hardware package. It might
be a distributed
system, or some of the “things” that will be part of the “Internet
of things.” You
model so that you can look at the forest, not the trees. A good
model helps you
address classes or groups of attacks, and deliver a more secure
product.
The English word threat has many meanings. It can be used to
describe a
person, such as “Osama bin Laden was a threat to America,” or
people, such
as “the insider threat.” It can be used to describe an event, such
as “There is
a threat of a hurricane coming through this weekend,” and it can
be used to
describe a weakness or possibility of attack, such as “What are
you doing about
confidentiality threats?” It is also used to describe viruses and
malware such as
“This threat incorporates three different methods for
spreading.” It can be used
to describe behavior such as “There’s a threat of operator
error.”
Similarly, the term threat modeling has many meanings, and the
term threat
model is used in many distinct and perhaps incompatible ways,
including:
■ As a verb—for example, “Have you threat modeled?” That is,
have you
gone through an analysis process to figure out what might go
wrong with
the thing you’re building?
■ As a noun, to ask what threat model is being used. For
example, “Our
threat model is someone in possession of the machine,” or “Our
threat
model is a skilled and determined remote attacker.”
■ It can mean building up a set of idealized attackers.
■ It can mean abstracting threats into classes such as
tampering.
There are doubtless other definitions. All of these are useful in
various sce-
narios and thus correct, and there are few less fruitful ways to
spend your time
than debating them. Arguing over definitions is a strange game,
and the only
way to win is not to play. This book takes a big tent approach to
threat model-
ing and includes a wide range of techniques you can apply early
to make what
you’re designing or building more secure. It will also address
the reality that
some techniques are more effective than others, and that some
techniques are
more likely to work for people with particular skills or
experience.
Threat modeling is the key to a focused defense. Without threat
models, you
can never stop playing whack-a-mole.
In short, threat modeling is the use of abstractions to aid in
thinking about
risks.
Reasons to Threat Model
In today’s fast-paced world, there is a tendency to streamline
development activ-
ity, and there are important reasons to threat model, which are
covered in this
section. Those include finding security bugs early,
understanding your security
requirements, and engineering and delivering better products.
Find Security Bugs Early
If you think about building a house, decisions you make early
will have dramatic
effects on security. Wooden walls and lots of ground-level
windows expose you
to more risks than brick construction and few windows. Either
may be a reason-
able choice, depending on where you’re building and other
factors. Once you’ve
chosen, changes will be expensive. Sure, you can put bars over
your windows,
but wouldn’t it be better to use a more appropriate design from
the start? The
same sorts of tradeoffs can apply in technology. Threat
modeling will help you
find design issues even before you’ve written a line of code,
and that’s the best
time to find those issues.
Understand Your Security Requirements
Good threat models can help you ask “Is that really a
requirement?” For example,
does the system need to be secure against someone in physical
possession of
the device? Apple has said yes for the iPhone, which is different
from the tradi-
tional world of the PC. As you find threats and triage what
you’re going to do
with them, you clarify your requirements. With clearer
requirements, you
can devote your energy to a consistent set of security features
and properties.
There is an important interplay between requirements, threats,
and mitiga-
tions. As you model threats, you’ll find that some threats don’t
line up with your
business requirements, and as such may not be worth
addressing. Alternately,
your requirements may not be complete. With other threats,
you’ll find that
addressing them is too complex or expensive. You’ll need to
make a call between
addressing them partially in the current version or accepting
(and communicat-
ing) that you can’t address those threats.
Engineer and Deliver Better Products
By considering your requirements and design early in the
process, you can
dramatically lower the odds that you’ll be re-designing, re-
factoring, or facing
a constant stream of security bugs. That will let you deliver a
better product on
a more predictable schedule. All the effort that would go to
those can be put
into building a better, faster, cheaper or more secure product.
You can focus on
whatever properties your customers want.
Address Issues Other Techniques Won’t
The last reason to threat model is that threat modeling will lead
you to catego-
ries of issues that other tools won’t find. Some of these issues
will be errors of
omission, such as a failure to authenticate a connection. That’s
not something
that a code analysis tool will find. Other issues will be unique
to your design.
To the extent that you have a set of smart developers building
something new,
you might have new ways threats can manifest. Models of what
goes wrong,
by abstracting away details, will help you see analogies and
similarities to
problems that have been discovered in other systems.
A corollary of this is that threat modeling should not focus on
issues that your
other safety and security engineering is likely to find (except
insofar as finding
them early lets you avoid re-engineering). So if, for example,
you’re building a
product with a database, threat modeling might touch quickly on
SQL injection
attacks, and the variety of trust boundaries that might be
injectable. However,
you may know that you’ll encounter those. Your threat
modeling should focus
on issues that other techniques can’t find.
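As an illustration of why such well-trodden threats need only a quick touch: the standard SQL injection mitigation is widely documented, so a threat model can usually just confirm that queries crossing a trust boundary are parameterized. A minimal sketch (the table and sample data here are hypothetical):

```python
# Parameterized queries: the standard, well-known SQL injection
# mitigation. User input travels as bound data, never as SQL text.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern (don't do this): splicing input into SQL text.
# conn.execute(f"SELECT role FROM users WHERE name = '{user_input}'")

# Safe pattern: the ? placeholder keeps the input a bound parameter,
# so the injection attempt is just an oddly named (nonexistent) user.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
```

With the placeholder, `rows` is empty, because no user has that literal name; with string splicing, the same input would have matched every row.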
Who Should Read This Book?
This book is written for those who create or operate complex
technology. That’s
primarily software engineers and systems administrators, but it
also includes
a variety of related roles, including analysts or architects.
There’s also a lot of
information in here for security professionals, so this book
should be useful to
them and those who work with them. Different parts of the book
are designed
for different people—in general, the early chapters are for
generalists (or special-
ists in something other than security), while the end of the book
speaks more
to security specialists.
You don’t need to be a security expert, professional, or even
enthusiast to
get substantial benefit from this book. I assume that you
understand that there
are people out there whose interests and desires don’t line up
with yours. For
example, maybe they’d like to take money from you, or they
may have other
goals, like puffing themselves up at your expense or using your
computer to
attack other people.
This book is written in plain language for anyone who can write
or spec a
program, but sometimes a little jargon helps in precision,
conciseness, or clarity,
so there’s a glossary.
What You Will Gain from This Book
When you read this book cover to cover, you will gain a rich
knowledge of threat
modeling techniques. You’ll learn to apply those techniques to
your projects
so you can build software that’s more secure from the get-go,
and deploy it
more securely. You’ll learn how to make security tradeoffs in
ways that are
considered, measured, and appropriate. You will learn a set of
tools and when
to bring them to bear. You will discover a set of glamorous
distractions. Those
distractions might seem like wonderful, sexy ideas, but they
hide an ugly inte-
rior. You’ll learn why they prevent you from effectively threat
modeling, and
how to avoid them.
You’ll also learn to focus on the actionable outputs of threat
modeling, and
I’ll generally call those “bugs.” There are arguments that it’s
helpful to consider
code issues as bugs, and design issues as flaws. In my book,
those arguments
are a distraction; you should threat model to find issues that
you can address,
and arguing about labels probably doesn’t help you address
them.
Lessons for Different Readers
This book is designed to be useful to a wide variety of people
working in tech-
nology. That includes a continuum from those who develop
software to those
who combine it into systems that meet operational or business
goals to those
who focus on making it more secure.
For convenience, this book pretends there is a bright dividing
line between
development and operations. The distinction is used as a way of
understand-
ing who has what capabilities, choices, and responsibilities. For
example, it is
“easy” for a developer to change what is logged, or to
implement a different
authentication system. Both of these may be hard for operations.
Similarly, it’s
“easy” for operations to ensure that logs are maintained, or to
ensure that a
computer is in a locked cage. As this book was written, there’s
also an important
model of “devops” emerging. The lessons for developers and
operations can
likely be applied with minor adjustments. This book also
pretends that security
expertise is separate from either development or operations
expertise, again,
simply as a convenience.
Naturally, this means that the same parts of the book will bring
different les-
sons for different people. The breakdown below gives a focused
value proposi-
tion for each audience.
Software Developers and Testers
Software developers—those whose day jobs are focused on
creating software—
include software engineers, quality assurance, and a variety of
program or
project managers. If you’re in that group, you will learn to find
and address
design issues early in the software process. This book will
enable you to deliver
more secure software that better meets customer requirements
and expecta-
tions. You’ll learn a simple, effective and fun approach to threat
modeling, as
well as different ways to model your software or find threats.
You’ll learn how
to track threats with bugs that fit into your development
process. You’ll learn to
use threats to help make your requirements more crisp, and vice
versa. You’ll
learn about areas such as authentication, cryptography, and
usability where the
interplay of mitigations and attacks has a long history, so you
can understand
how the recommended approaches have developed to their
current state. You’ll
learn about how to bring threat modeling into your development
process. And
a whole lot more!
Systems Architecture, Operations, and Management
For those whose day jobs involve bringing together software
components,
weaving them together into systems to deliver value, you’ll
learn to find and
address threats as you design your systems, select your
components, and get
them ready for deployment. This book will enable you to deliver
more secure
systems that better meet business, customer, and compliance
requirements.
You’ll learn a simple, effective, and fun approach to threat
modeling, as well as
different ways to model the systems you’re building or have
built. You’ll learn
how to find security and privacy threats against those systems.
You’ll learn
about the building blocks which are available for you to
operationally address
those threats. You’ll learn how to make tradeoffs between the
threats you face,
and how to ensure that those threats are addressed. You’ll learn
about specific
threats to categories of technology, such as web and cloud
systems, and about
threats to accounts, both of which are deeply important to those
in operations.
It will cover issues of usability, and perhaps even change your
perspective on
how to influence the security behavior of people within your
organization and/
or your customers. You will learn about cryptographic building
blocks, which
you may be using to protect systems. And a whole lot more!
Security Professionals
If you work in security, you will learn two major things from
this book: First,
you’ll learn structured approaches to threat modeling that will
enhance your
productivity, and as you do, you’ll learn why many of the
“obvious” parts
of threat modeling are not as obvious, or as right, as you may
have believed.
Second, you’ll learn about bringing security into the
development, operational
and release processes that your organization uses.
Even if you are an expert, this book can help you threat model
better. Here,
I speak from experience. As I was writing the case study
appendix, I found
myself turning to both the tree in Appendix B and the
requirements chapter,
and finding threats that didn’t spring to mind from just
considering the models
of software.
TO MY COLLEAGUES IN INFORMATION SECURITY
I want to be frank. This book is not about how to design
abstractly perfect software.
It is a practical, grounded book that acknowledges that most
software is built in some
business or organizational reality that requires tradeoffs. To the
dismay of purists,
software where tradeoffs were made runs the world these days,
and I’d like to make
such software more secure by making those tradeoffs better.
That involves a great
many elements, two of which are making security more
consistent and more acces-
sible to our colleagues in other specialties.
This perspective is grounded in my time as a systems
administrator, deploying
security technologies, and observing the issues people
encountered. It is grounded
in my time as a startup executive, learning to see security as a
property of a system
which serves a business goal. It is grounded in my
responsibility for threat model-
ing as part of Microsoft’s Security Development Lifecycle. In
that last role, I spoke
with thousands of people at Microsoft, its partners, and its
customers about our
approaches. These individuals ranged from newly hired
developers to those with
decades of experience in security, and included chief security
officers and Microsoft’s
Trustworthy Computing Academic Advisory Board. I learned
that there are an awful
lot of opinions about what works, and far fewer about what does
not. This book aims
to convince my fellow security professionals that pragmatism in
what we ask of devel-
opment and operations helps us deliver more secure software
over time. This perspec-
tive may be a challenge for some security professionals. They
should focus on Parts II,
IV, and V, and perhaps give consideration to the question of the
best as the enemy of
the good.
How To Use This Book
You should start at the very beginning. It’s a very good place to
start, even if
you already know how to threat model, because it lays out a
framework that
will help you understand the rest of the book.
The Four-Step Framework
This book introduces the idea that you should see threat
modeling as composed
of steps which accomplish subgoals, rather than as a single
activity. The essential
questions which you ask to accomplish those subgoals are:
1. What are you building?
2. What can go wrong with it once it’s built?
3. What should you do about those things that can go wrong?
4. Did you do a decent job of analysis?
The methods you use in each step of the framework can be
thought of like
Lego blocks. When working with Legos, you can snap in other
Lego blocks.
In Chapter 1, you’ll use a data flow diagram to model what
you’re building,
STRIDE to help you think about what can go wrong and what
you should do
about it, and a checklist to see if you did a decent job of
analysis. In Chapter 2,
you’ll see how diagrams are the most helpful way to think about
what you’re
building. Different diagram types are like different building
blocks to help
you model what you’re building. In Chapter 3, you’ll go deep
into STRIDE (a
model of threats), while in Chapter 4, you’ll learn to use attack
trees instead of
STRIDE, while leaving everything else the same. STRIDE and
attack trees are
different building blocks for considering what can go wrong
once you’ve built
your new technology.
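To make the building-block idea concrete, the model of what you're building can be crossed with STRIDE, the model of what can go wrong. A minimal sketch, where the element names are hypothetical stand-ins for whatever appears in your own data flow diagram:

```python
# Sketch: crossing model elements with the six STRIDE threat
# categories to generate brainstorming prompts. The element names
# are hypothetical; a real list would come from your own diagram.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information Disclosure",
    "Denial of Service",
    "Elevation of Privilege",
]

elements = ["browser", "web server", "database"]  # from a data flow diagram

def stride_prompts(elements):
    """Yield one question per (element, threat category) pair."""
    for element in elements:
        for threat in STRIDE:
            yield f"How could {threat} affect the {element}?"

for prompt in stride_prompts(elements):
    print(prompt)
```

Swapping `stride_prompts` for an attack-tree walk, while keeping the diagram and the rest of the process the same, is exactly the kind of block substitution Chapters 3 and 4 describe.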
Not every approach can snap with every other approach. It takes
crazy glue
to make an Erector set and Lincoln Logs stick together. Attempts
to glue threat
modeling approaches together have made for some confusing
advice. For example,
trying to consider how terrorists would attack your assets
doesn’t really lead
to a lot of actionable issues. And even with building blocks that
snap together,
you can make something elegant, or something confusing or
bizarre.
So to consider this as a framework, what are the building
blocks? The four-
step framework is shown graphically in Figure I-1.
The steps are:
1. Model the system you’re building, deploying, or changing.
2. Find threats using that model and the approaches in Part II.
3. Address threats using the approaches in Part III.
4. Validate your work for completeness and effectiveness (also
Part III).
Figure I-1: The Four-Step Framework (Model System → Find Threats → Address Threats → Validate)
This framework was designed to align with software
development and opera-
tional deployment. It has proven itself as a way to structure
threat modeling.
It also makes it easier to experiment without replacing the
entire framework.
From here until you reach Part V, almost everything you
encounter is selected
because it plugs into this four-step framework.
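One way to see how blocks plug into the framework is to sketch it in code. Everything below is illustrative scaffolding under my own naming assumptions, not an API from the book; the point is that each step is a slot you can fill with a different technique:

```python
# Hedged sketch of the four-step framework as plain functions.
# Each step accepts a pluggable technique, mirroring the
# building-block idea: swap STRIDE for attack trees in step 2
# without changing the rest of the loop.
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    system: str
    threats: list = field(default_factory=list)
    mitigations: dict = field(default_factory=dict)

def model_system(description):      # Step 1: what are you building?
    return ThreatModel(system=description)

def find_threats(tm, technique):    # Step 2: what can go wrong?
    tm.threats.extend(technique(tm.system))
    return tm

def address_threats(tm, decide):    # Step 3: what will you do about it?
    tm.mitigations = {t: decide(t) for t in tm.threats}
    return tm

def validate(tm):                   # Step 4: did you do a decent job?
    return all(t in tm.mitigations for t in tm.threats)
```

A run might model a hypothetical "payment API", find threats with a lambda standing in for STRIDE, decide on mitigations, and then check that every threat was addressed.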
This book is roughly organized according to the framework:
Part I “Getting Started” is about getting started. The opening
part of the book
(especially Chapter 1) is designed for those without much
security expertise.
The later parts of the book build on the security knowledge
you’ll gain from this
material (or combined with your own experience). You’ll gain
an understanding
of threat modeling, and a recommended approach for those who
are new to the
discipline. You’ll also learn various ways to model your
software, along with
why that’s a better place to start than other options, such as
attackers or assets.
Part II “Finding Threats” is about finding threats. It presents a
collection of
techniques and tools you can use to find threats. It surveys and
analyzes the dif-
ferent ways people approach information technology threat
modeling, enabling
you to examine the pros and cons of the various techniques that
people bring to
bear. They’re grouped in a way that enables you to either read
them from start
to finish or jump in at a particular point where you need help.
Part III “Managing and Addressing Threats” is about managing
and address-
ing threats. It includes processing threats and how to manage
them, the tactics
and technologies you can use to address them, and how to make
risk tradeoffs
you might need to make. It also covers validating that your
threats are addressed,
and tools you can use to help you threat model.
Part IV “Threat Modeling in Technologies and Tricky Areas” is
about threat
modeling in specific technologies and tricky areas where a
great deal of threat
modeling and analysis work has already been done. It includes
chapters on web
and cloud, accounts and identity, and cryptography, as well as a
requirements
“cookbook” that you can use to jump-start your own security
requirements
analysis.
Part V “Taking it to the Next Level” is about taking threat
modeling to the
next level. It targets the experienced threat modeler, security
expert, or process
designer who is thinking about how to build and customize
threat modeling
processes for a given organization.
Appendices include information to help you apply what you’ve
learned. They
include sets of common answers to “what’s your threat model,”
and “what are
our assets”; as well as threat trees that can help you find
threats, lists of attackers
and attacker personas; and details about the Elevation of
Privilege game you’ll
use in Chapter 1; and lastly, a set of detailed example threat
models. These are
followed by a glossary, bibliography, and index.
Website: This book’s website, www.threatmodelingbook.com,
will contain a
PDF of some of the figures in the book, and likely an errata list
to mitigate the
errors that inevitably threaten to creep in.
What This Book Is Not
Many security books today promise to teach you to hack. Their
intent is to teach
you what sort of attacks need to be defended against. The idea
is that if you have
an empirically grounded set of attacks, you can start with that to
create your
defense. This is not one of those books, because despite
millions of such books
being sold, vulnerable systems are still being built and
deployed. Besides, there
are solid, carefully considered defenses against many sorts of
attacks. It may
be useful to know how to execute an attack, but it’s more
important to know
where each attack might be executed, and how to effectively
defend against it.
This book will teach you that.
This book is not focused on a particular technology, platform,
or API set.
Platforms and APIs may offer security features you
can use, or mitigate
some threats for you. The threats and mitigations associated
with a platform
change from release to release, and this book aims to be a useful
reference
volume on your shelf for longer than the release of any
particular technology.
This book is not a magic pill that will make you a master of
threat modeling.
It is a resource to help you understand what you need to know.
Practice will
help you get better, and deliberate practice with feedback and
hard problems
will make you a master.
New Lessons on Threat Modeling
Most experienced security professionals have developed an
approach to threat
modeling that works for them. If you’ve been threat modeling
for years, this
book will give you an understanding of other approaches you
can apply. This
book also gives you a structured understanding of a set of
methods and how
they inter-relate. Lastly, there are some deeper lessons which
are worth bringing
to your attention, rather than leaving them for you to extract.
There’s More Than One Way to Threat Model
If you ask a programmer “What’s the right programming
language for my new
project?” you can expect a lot of clarifying questions. There is
no one ideal pro-
gramming language. There are certainly languages that are
better or worse at
certain types of tasks. For example, it’s easier to write text
manipulation code
in Perl than assembler. Python is easier to read and maintain
than assembler,
but when you need to write ultra-fast code for a device driver, C
or assembler
might be a better choice. In the same way, there are better and
worse ways to
threat model, which depend greatly on your situation: who will
be involved,
what skills they have, and so on.
So you can think of threat modeling like programming. Within
programming
there are languages, paradigms (such as waterfall or agile), and
practices (pair
programming or frequent deployment). The same is true of
threat modeling.
Most past writing on threat modeling has presented “the” way to
do it. This
book will help you see how “there’s more than one way to do it”
is not just the
Perl motto, but also applies to threat modeling.
The Right Way Is the Way That Finds Good Threats
The right way to threat model is the way that empowers a
project team to find
more good threats against a system than other techniques that
could be employed
with the resources available. (A “good threat” is a threat that
illuminates work
that needs to be done.) That’s as true for a project team of one
as it is for a project
team of thousands. That’s also true across all levels of
resources, such as time,
expertise, and tooling. The right techniques empower a team to
really find and
address threats (and gain assurance that they have done so).
There are lots of people who will tell you that they know the
one true way.
(That’s true in fields far removed from threat modeling.) Avoid
a religious war
and find a way that works for you.
Threat Modeling Is Like Version Control
Threat modeling is sometimes seen as a specialist skill that only
a few people
can do well. That perspective holds us back, because threat
modeling is more
like version control than a specialist skill. This is not intended
to denigrate or
minimize threat modeling; rather, no professional developer
would think of
building software of any complexity without a version control
system of some
form. Threat modeling should aspire to be that fundamental.
You expect every professional software developer to know the
basics of a
version control system or two, and similarly, many systems
administrators
will use version control to manage configuration files. Many
organizations get
by with a simple version control approach, and never need an
expert. If you
work at a large organization, you might have someone who
manages the build
tree full time. Threat modeling is similar. With the lessons in
this book, it will
become reasonable to expect professionals in software and
operations to have
basic experience threat modeling.
Threat Modeling Is Also Like Playing a Violin
When you learn to play the violin, you don’t start with the most
beautiful violin
music ever written. You learn to play scales, a few easy pieces,
and then progress
to trickier and trickier music.
Similarly, when you start threat modeling, you need to practice
to learn the
skills, and it may involve challenges or frustration as you learn.
You need to
understand threat modeling as a set of skills, which you can
apply in a variety
of ways, and which take time to develop. You’ll get better if
you practice. If
you expect to compete with an expert overnight, you might be
disappointed.
Similarly, if you threat model only every few years, you should
expect to be
rusty, and it will take you time to rebuild the muscles you need.
Technique versus Repertoire
Continuing with the metaphor, the most talented violinist
doesn’t learn to play
a single piece, but they develop a repertoire, a set of knowledge
that’s relevant
to their field.
As you get started threat modeling, you’ll need to develop both
techniques
and a repertoire—a set of threat examples that you can build
from to imagine
how new systems might be attacked. Attack lists or libraries can
act as a partial
substitute for the mental repertoire of known threats an expert
knows about.
Reading about security issues in similar products can also help
you develop a
repertoire of threats. Over time, this can feed into how you
think about new and
different threats. Learning to think about threats is easier with
training wheels.
“Think Like an Attacker” Considered Harmful
A great deal of writing on threat modeling exhorts people to
“think like an
attacker.” For many people, that’s as hard as thinking like a
professional chef.
Even if you’re a great home cook, a restaurant-managing chef
has to wrestle
with problems that a home cook does not. For example, how
many chickens
should you buy to meet the needs of a restaurant with 78 seats,
each of which
will turn twice an evening? The advice to think like an attacker
doesn’t help
most people threat model.
Worse, you may end up with implicit or incorrect assumptions
about how an
attacker will think and what they will choose to do. Such
models of an attacker’s
mindset may lead you to focus on the wrong threats. You don’t
need to focus
on the attacker to find threats, but personification may help
you find resources
to address them.
The Interplay of Attacks, Mitigations, & Requirements
Threat modeling is all about making more secure software. As
you use models
of software and threats to find potential problems, you’ll
discover that some
threats are hard or impossible to address, and you’ll adjust
requirements to
match. This interplay is a rarely discussed key to useful threat
modeling.
Sometimes it’s a matter of wanting to defend against
administrators, other
times it’s a matter of what your customers will bear. In the
wake of the 9/11
hijackings, the US government reputedly gave serious
consideration to banning
laptops from airplanes. (A battery and a mass of explosives
reportedly look the
same on the x-ray machines.) Business customers, who buy last-minute expensive tickets and keep the airlines aloft, threatened to revolt. So
the government
implemented other measures, whose effectiveness might be
judged with some
of the tools in this book.
This interplay leads to the conclusion that there are threats that
cannot be
effectively mitigated. That’s a painful thought for many security
professionals.
(But as the Man in Black said, "Life is pain, Highness! Anyone who says differently is selling something.") When you find threats that violate your requirements and cannot be mitigated, it generally makes sense to adjust your requirements.
Sometimes it’s possible to either mitigate the threat
operationally, or defer a
decision to the person using the system.
With that, it’s time to dive in and threat model!
Part I: Getting Started

This part of the book is for those who are new to threat modeling, and it assumes no prior knowledge of threat modeling or security. It focuses on the key new skills that you'll need to threat model and lays out a methodology that's designed for people who are new to threat modeling.

Part I also introduces the various ways to approach threat modeling using a set of toy analogies. Much like there are many children's toys for modeling, there are many ways to threat model. There are model kits with precisely molded parts to create airplanes or ships. These kits have a high degree of fidelity and a low level of flexibility. There are also numerous building block systems such as Lincoln Logs, Erector Sets, and Lego blocks. Each of these allows for more flexibility, at the price of perhaps not having a propeller that's quite right for the plane you want to model.

In threat modeling, there are techniques that center on attackers, assets, or software, and these are like Lincoln Logs, Erector Sets, and Lego blocks, in that each is powerful and flexible, each has advantages and disadvantages, and it can be tricky to combine them into something beautiful.
Part I contains the following chapters:
■ Chapter 1: Dive In and Threat Model! contains everything
you need to
get started threat modeling, and does so by focusing on four
questions:
■ What are you building?
■ What can go wrong?
■ What should you do about those things that can go wrong?
■ Did you do a decent job of analysis?
These questions aren’t just what you need to get started, but are
at the
heart of the four-step framework, which is the core of this book.
■ Chapter 2: Strategies for Threat Modeling covers a great
many ways
to approach threat modeling. Many of them are “obvious”
approaches,
such as thinking about attackers or the assets you want to
protect. Each
is explained, along with why it works less well than you hope.
These
and others are contrasted with a focus on software. Software is
what
you can most reasonably expect a software professional to
understand,
and so models of software are the most important lesson of
Chapter 2.
Models of software are one of the two models that you should
focus on
when threat modeling.
Chapter 1: Dive In and Threat Model!

Anyone can learn to threat model, and what's more, everyone should. Threat modeling is about using models to find security problems. Using a model means abstracting away a lot of details to provide a look at a bigger picture, rather than the code itself. You model because it enables you to find issues in things you haven't built yet, and because it enables you to catch a problem before it starts. Lastly, you threat model as a way to anticipate the threats that could affect you.

Threat modeling is first and foremost a practical discipline, and this chapter is structured to reflect that practicality. Even though this book will provide you with many valuable definitions, theories, philosophies, effective approaches, and well-tested techniques, you'll want those to be grounded in experience. Therefore, this chapter avoids focusing on theory and ignores variations for now and instead gives you a chance to learn by experience.

To use an analogy, when you start playing an instrument, you need to develop muscles and awareness by playing the instrument. It won't sound great at the start, and it will be frustrating at times, but as you do it, you'll find it gets easier. You'll start to hit the notes and the timing. Similarly, if you use the simple four-step breakdown of how to threat model that's exercised in Parts I-III of this book, you'll start to develop your muscles. You probably know the old joke about the person who stops a musician on the streets of New York and asks "How do I get to Carnegie Hall?" The answer, of course, is "practice, practice, practice." Some of that includes following along, doing the exercises, and developing an
understanding of the steps involved. As you do so, you’ll start
to understand how
the various tasks and techniques that make up threat modeling
come together.
In this chapter you're going to find security flaws that might exist in a design,
so you can address them. You’ll learn how to do this by
examining a simple
web application with a database back end. This will give you an
idea of what
can go wrong, how to address it, and how to check your work.
Along the way,
you’ll learn to play Elevation of Privilege, a serious game
designed to help you
start threat modeling. Finally you’ll get some hands-on
experience building
your own threat model, and the chapter closes with a set of
checklists that help
you get started threat modeling.
Learning to Threat Model
You begin threat modeling by focusing on four key questions:
1. What are you building?
2. What can go wrong?
3. What should you do about those things that can go wrong?
4. Did you do a decent job of analysis?
In addressing these questions, you start and end with tasks that all technologists should be familiar with: drawing on a whiteboard and managing bugs. In
between, this chapter will introduce a variety of new techniques
you can use to
think about threats. If you get confused, just come back to these
four questions.
Everything in this chapter is designed to help you answer one of
these questions. You're going to first walk through these questions using a three-tier web
app as an example, and after you’ve read that, you should walk
through the
steps again with something of your own to threat model. It
could be software
you’re building or deploying, or software you’re considering
acquiring. If you’re
feeling uncertain about what to model, you can use one of the
sample systems
in this chapter or an exercise found in Appendix E, “Case
Studies.”
The second time you work through this chapter, you’ll need a
copy of the
Elevation of Privilege threat-modeling game. The game uses a
deck of cards
that you can download free from
http://www.microsoft.com/security/sdl/
adopt/eop.aspx. You should get two to four friends or colleagues together for the game part.
You start with building a diagram, which is the first of four major activities involved in threat modeling and is explained in the next section. The other three include finding threats, addressing them, and then checking your work.
What Are You Building?
Diagrams are a good way to communicate what you are
building. There are
lots of ways to diagram software, and you can start with a
whiteboard diagram
of how data flows through the system. In this example, you're working with a simple web app with a web browser, web server, some business logic, and a database (see Figure 1-1).
Figure 1-1: A whiteboard diagram (Web browser → Web server → Business Logic → Database)
Some people will actually start thinking about what goes wrong
right here.
For example, how do you know that the web browser is being
used by the person
you expect? What happens if someone modifies data in the database? Is it OK
for information to move from one box to the next without being
encrypted? You
might want to take a minute to think about some things that
could go wrong
here because these sorts of questions may lead you to ask “is
that allowed?”
You can create an even better model of what you’re building if
you think about
“who controls what” a little. Is this a website for the whole
Internet, or is it an
intranet site? Is the database on site, or at a web provider?
For this example, let’s say that you’re building an Internet site,
and you’re
using the fictitious Acme storage-system. (I'd put a specific product here, but
then I’d get some little detail wrong and someone, certainly not
you, would
get all wrapped around the axle about it and miss the threat
modeling lesson.
Therefore, let’s just call it Acme, and pretend it just works the
way I’m saying.
Thanks! I knew you’d understand.)
Adding boundaries to show who controls what is a simple way
to improve
the diagram. You can pretty easily see that the threats that cross
those boundaries are likely important ones, and may be a good place to start
identifying
threats. These boundaries are called trust boundaries, and you
should draw
www.it-ebooks.info
http://www.it-ebooks.info/
6 Part I ■ Getting Started
c01.indd 11:33:50:AM 01/17/2014 Page 6
them wherever different people control different things. Good
examples of this
include the following:
■ Accounts (UIDs on Unix systems, or SIDs on Windows)
■ Network interfaces
■ Different physical computers
■ Virtual machines
■ Organizational boundaries
■ Almost anywhere you can argue for different privileges
TRUST BOUNDARY VERSUS ATTACK SURFACE
A closely related concept that you may have encountered is
attack surface. For example,
the hull of a ship is an attack surface for a torpedo. The side of
a ship presents a larger
attack surface to a submarine than the bow of the same ship.
The ship may have inter-
nal “trust” boundaries, such as waterproof bulkheads or a
Captain’s safe. A system that
exposes lots of interfaces presents a larger attack surface than
one that presents few
APIs or other interfaces. Network fi rewalls are useful
boundaries because they reduce the
attack surface relative to an external attacker. However, much
like the Captain’s safe, there
are still trust boundaries inside the fi rewall. A trust boundary
and an attack surface are
very similar views of the same thing. An attack surface is a
trust boundary and a direction
from which an attacker could launch an attack. Many people
will treat the terms are inter-
changeable. In this book, you’ll generally see “trust boundary”
used.
In your diagram, draw the trust boundaries as boxes (see Figure 1-2), showing what's inside each with a label (such as "corporate data center") near the
edge of the box.
Figure 1-2: Trust boundaries added to a whiteboard diagram (the web server and business logic sit inside the "Corporate data center" boundary; the database sits inside the "Web storage (offsite)" boundary; the web browser is outside both)
As your diagram gets larger and more complex, it becomes easy to miss a part of it, or to become confused by labels on the data flows. Therefore, it can be very helpful to number each process, data flow, and data store in the diagram, as shown in Figure 1-3. (Because each trust boundary should have a unique name, representing the unique trust inside of it, there's limited value to numbering those.)
Figure 1-3: Numbers and trust boundaries added to a whiteboard diagram (each process, data flow, and data store is numbered 1 through 7)
Regarding the physical form of the diagram: Use whatever
works for you.
If that’s a whiteboard diagram and a camera phone picture,
great. If it’s Visio,
or OmniGraffle, or some other drawing program, great. You
should think of
threat model diagrams as part of the development process, so try to keep them in source control with everything else.
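Keeping the diagram in source control can be made concrete by capturing it as data. The sketch below is one minimal, hypothetical format (the book doesn't prescribe one; the element numbers and boundary assignments mirror Figures 1-2 and 1-3, and all names here are illustrative). It includes a helper that lists the flows crossing trust boundaries, which the text flags as a good place to start identifying threats:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    number: int    # the label from the diagram, as in Figure 1-3
    name: str
    boundary: str  # who controls it, e.g. "corporate data center"

@dataclass(frozen=True)
class Flow:
    number: int
    source: Element
    dest: Element

def boundary_crossings(flows):
    """Return the flows whose endpoints are controlled by different parties."""
    return [f for f in flows if f.source.boundary != f.dest.boundary]

# The system from Figures 1-2 and 1-3 (boundary assignments are assumptions)
browser  = Element(1, "Web browser", "customer")
web      = Element(3, "Web server", "corporate data center")
logic    = Element(5, "Business logic", "corporate data center")
database = Element(7, "Database", "web storage (offsite)")

flows = [
    Flow(2, browser, web),
    Flow(4, web, logic),
    Flow(6, logic, database),
]

for f in boundary_crossings(flows):
    print(f"Flow {f.number}: {f.source.name} -> {f.dest.name} crosses a trust boundary")
```

Here the browser-to-web-server and business-logic-to-database flows are reported as boundary crossings, matching what the boxes in Figure 1-2 suggest.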
Now that you have a diagram, it’s natural to ask, is it the right
diagram? For
now, there’s a simple answer: Let’s assume it is. Later in this
chapter there are
some tips and checklists as well as a section on updating the
diagram, but at
this stage you have a good enough diagram to get started on
identifying threats,
which is really why you bought this book. So let’s identify.
What Can Go Wrong?
Now that you have a diagram, you can really start looking for
what can go wrong
with its security. This is so much fun that I turned it into a game called Elevation
of Privilege. There’s more on the game in Appendix D,
“Elevation of Privilege:
The Cards,” which discusses each card, and in Chapter 11,
“Threat Modeling
Tools,” which covers the history and philosophy of the game,
but you can get
started playing now with a few simple instructions. If you
haven’t already done
so, download a deck of cards from
http://www.microsoft.com/security/sdl/
adopt/eop.aspx. Print the pages in color, and cut them into
individual cards.
Then shuffle the deck and deal it out to those friends you've invited to play.
NOTE: Some people aren't used to playing games at work. Others approach new games with trepidation, especially when those games involve long, complicated instructions. Elevation of Privilege takes just a few lines to explain. You should give it a try.
How To Play Elevation of Privilege
Elevation of Privilege is a serious game designed to help you
threat model. A
sample card is shown in Figure 1-4. You’ll notice that like
playing cards, it has a
number and suit in the upper left, and an example of a threat as
the main text on
the card. To play the game, simply follow the instructions in the
upcoming list.
Figure 1-4: An Elevation of Privilege card (suit: Tampering; text: "An attacker can take advantage of your custom key exchange or integrity control which you built instead of using standard crypto.")
1. Deal the deck. (Shuffling is optional.)
2. The person with the 3 of Tampering leads the first round. (In card games like this, rounds are also called "tricks" or "hands.")
3. Each round works like so:
A. Each player plays one card, starting with the person leading
the round,
and then moving clockwise.
B. To play a card, read it aloud, and try to determine if it
affects the
system you have diagrammed. If you can link it, write it down,
and
score yourself a point. Play continues clockwise with the next
player.
C. When each player has played a card, the player who has
played the
highest card wins the round. That player leads the next round.
4. When all the cards have been played, the game ends and the
person with
the most points wins.
5. If you're threat modeling a system you're building, then you go file any bugs you find.
There are some folks who threat model like this in their sleep,
or even have
trouble switching it off. Not everyone is like that. That’s OK.
Threat modeling
is not rocket science. It’s stuff that anyone who participates in
software devel-
opment can learn. Not everyone wants to dedicate the time to
learn to do it in
their sleep.
Identifying threats can seem intimidating to a lot of people. If
you’re one of
them, don’t worry. This section is designed to gently walk you
through threat identification. Remember to have fun as you do this. As one
reviewer said:
“Playing Elevation of Privilege should be fun. Don’t downplay
that. We play it
every Friday. It’s enjoyable, relaxing, and still has business
value.”
Outside of the context of the game, you can take the next step in threat modeling by thinking of things that might go wrong. For instance, how do you know that the web browser is being used by the person you expect? What happens if someone modifies data in the database? Is it OK for information to move from one box to the next without being encrypted? You don't need to come up with these questions by just staring at the diagram and scratching your chin. (I didn't!) You can identify threats like these using the simple mnemonic STRIDE, described in detail in the next section.
Using the STRIDE Mnemonic to Find Threats
STRIDE is a mnemonic for things that go wrong in security. It stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege:
■ Spoofing is pretending to be something or someone you're not.
■ Tampering is modifying something you're not supposed to modify. It can include packets on the wire (or wireless), bits on disk, or the bits in memory.
■ Repudiation means claiming you didn't do something (regardless of whether you did or not).
■ Information Disclosure is about exposing information to people who are not authorized to see it.
■ Denial of Service refers to attacks designed to prevent a system from providing service, including by crashing it, making it unusably slow, or filling all its storage.
■ Elevation of Privilege is when a program or user is technically able to do things that they're not supposed to do.
NOTE: This is where Elevation of Privilege, the game, gets its name. This book uses Elevation of Privilege, italicized, or abbreviated to EoP, for the game, to avoid confusion with the threat.
Recall the three example threats mentioned in the preceding
section:
■ How do you know that the web browser is being used by the
person you
expect?
■ What happens if someone modifies data in the database?
■ Is it OK for information to go from one box to the next without being encrypted?
These are examples of spoofing, tampering, and information disclosure. Using STRIDE as a mnemonic can help you walk through a diagram and select example threats. Pair that with a little knowledge of security and the right techniques, and you'll find the important threats faster and more reliably. If you have a process in place for ensuring that you develop a threat model and document it, you can increase confidence in your software.
Now that you have STRIDE in your tool belt, walk through your
diagram
again and look for more threats, this time using the mnemonic.
Make a list as
you go with the threat and what element of the diagram it
affects. (Generally,
the software, data flow, or storage is affected, rather than the trust boundary.)
The following list provides some examples of each threat.
■ Spoofing: Someone might pretend to be another customer, so you'll need a way to authenticate users. Someone might also pretend to be your website, so you should ensure that you have an SSL certificate and that you use a single domain for all your pages (to help that subset of customers who read URLs to see if they're in the right place). Someone might also place a deep link to one of your pages, such as logout.html or placeorder.aspx. You should be checking the Referer field before taking action. That's not a complete solution to what are called CSRF (Cross-Site Request Forgery) attacks, but it's a start.
■ Tampering: Someone might tamper with the data in your back end at Acme. Someone might tamper with the data as it flows back and forth between their data center and yours. A programmer might replace the operational code on the web front end without testing it, thinking they're uploading it to staging. An angry programmer might add a coupon code "PayBobMore" that offers a 20 percent discount on all goods sold.
■ Repudiation: Any of the preceding actions might require
digging into
what happened. Are there system logs? Is the right information
being
logged effectively? Are the logs protected against tampering?
■ Information Disclosure: What happens if Acme reads your
database?
Can anyone connect to the database and read or write
information?
■ Denial of Service: What happens if a thousand customers
show up at
once at the website? What if Acme goes down?
■ Elevation of Privilege: Perhaps the web front end is the only place customers should access, but what enforces that? What prevents them from connecting directly to the business logic server, or uploading new code? If there's a firewall in place, is it correctly configured? What controls access to your database at Acme, or what happens if an employee at Acme makes a mistake, or even wants to edit your files?
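The Referer check mentioned in the spoofing example above can be sketched as a small helper. As the text notes, this is only a partial CSRF defense (the header can be absent, stripped by proxies, or forged by non-browser clients); the function and host names here are illustrative:

```python
from urllib.parse import urlsplit

def referer_looks_valid(headers, allowed_host):
    """Reject state-changing requests whose Referer is absent or foreign.

    A partial CSRF defense only; it should be combined with other
    measures such as anti-forgery tokens.
    """
    referer = headers.get("Referer")  # note: HTTP spells it with one 'r'
    if not referer:
        return False
    return urlsplit(referer).hostname == allowed_host

# Example: only honor a deep-linked placeorder request from our own pages
print(referer_looks_valid({"Referer": "https://shop.example.com/cart"},
                          "shop.example.com"))   # True
print(referer_looks_valid({"Referer": "https://evil.example.net/x"},
                          "shop.example.com"))   # False
```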
The preceding possibilities aren’t intended to be a complete list
of how each
threat might manifest against every model. You can find a more complete list
in Chapter 3, “STRIDE.” This shorter version will get you
started though, and
it is focused on what you might need to investigate based on the
very simple
diagram shown in Figure 1-2. Remember the musical instrument
analogy. If
you try to start playing the piano with Ravel’s Gaspard
(regarded as one of the
most complex piano pieces ever written), you’re going to be
frustrated.
Tips for Identifying Threats
Whether you are identifying threats using Elevation of
Privilege, STRIDE, or both,
here are a few tips to keep in mind that can help you stay on the
right track to
determine what could go wrong:
■ Start with external entities: If you’re not sure where to start,
start with
the external entities or events which drive activity. There are
many other
valid approaches though: You might start with the web browser, looking for spoofing, then tampering, and so on. You could also start with
the business logic if perhaps your lead developer for that
component is
in the room. Wherever you choose to begin, you want to aspire
to some
level of organization. You could also go in “STRIDE order”
through the
diagram. Without some organization, it’s hard to tell when
you’re done,
but be careful not to add so much structure that you stifle creativity.
■ Never ignore a threat because it’s not what you’re looking
for right now.
You might come up with some threats while looking at other
categories.
Write them down and come back to them. For example, you
might have
thought about “can anyone connect to our database,” which is
listed under
information disclosure, while you were looking for spoofing threats. If so, that's awesome! Good job! Redundancy in what you find can be tedious, but it helps you avoid missing things. If you find yourself asking whether "someone not authorized to connect to the database who reads information" constitutes spoofing or information disclosure, the answer is, who cares? Record the issue and move along to the next one. STRIDE is a tool
STRIDE is a tool
to guide you to threats, not to ask you to categorize what you’ve
found;
it makes a lousy taxonomy, anyway. (That is to say, there are
plenty of
security issues for which you can make an argument for various
different
categorizations. Compare and contrast it with a good taxonomy,
such
as the taxonomy of life. Does it have a backbone? If so, it's a vertebrate.)
■ Focus on feasible threats: Along the way, you might come up
with threats
like “someone might insert a back door at the chip factory,” or
“someone
might hire our janitorial staff to plug in a hardware key logger
and steal
all our passwords." These are real possibilities but not very likely compared to using an exploit to attack a vulnerability for which you haven't applied the patch, or tricking someone into installing software.
There’s
also the question of what you can do about either, which brings
us to the
next section.
Addressing Each Threat
You should now have a decent-sized list or lists of threats. The
next step in the
threat modeling process is to go through the lists and address
each threat. There
are four types of action you can take against each threat:
Mitigate it, eliminate
it, transfer it, or accept it. The following list looks briefly at each of these ways to address threats, and then in the subsequent sections you will learn how to address each specific threat identified with the STRIDE list in the "What Can Go Wrong?" section.
strategies and techniques
to address these threats, see Chapters 8 and 9, “Defensive
Building Blocks” and
“Tradeoffs When Addressing Threats.”
■ Mitigating threats is about doing things to make it harder to take advantage of a threat. Requiring passwords to control who can log in mitigates the threat of spoofing. Adding password controls that enforce complexity or expiration makes it less likely that a password will be guessed or usable if stolen.
■ Eliminating threats is almost always achieved by eliminating
features. If
you have a threat that someone will access the administrative
function of
a website by visiting the /admin/ URL, you can mitigate it with
passwords
or other authentication techniques, but the threat is still present.
You can
make it less likely to be found by using a URL like
/j8e8vg21euwq/, but
the threat is still present. You can eliminate it by removing the
interface,
handling administration through the command line. (There are
still threats
associated with how people log in on a command line. Moving
away from
HTTP makes the threat easier to mitigate by controlling the
attack surface.
Both threats would be found in a complete threat model.)
Incidentally,
there are other ways to eliminate threats if you’re a mob boss or
you run
a police state, but I don’t advocate their use.
■ Transferring threats is about letting someone or something
else handle the
risk. For example, you could pass authentication threats to the
operating
system, or trust boundary enforcement to a firewall product. You can also
You can also
transfer risk to customers, for example, by asking them to click
through
lots of hard-to-understand dialogs before they can do the work
they
need to do. That’s obviously not a great solution, but sometimes
people
have knowledge that they can contribute to making a security
tradeoff.
For example, they might know that they just connected to a
coffee shop
wireless network. If you believe the person has essential
knowledge to
contribute, you should work to help her bring it to the decision.
There’s
more on doing that in Chapter 15, “Human Factors and
Usability.”
■ Accepting the risk is the final approach to addressing threats. For most organizations most of the time, searching everyone on the way in and out of the building is not worth the expense or the cost to the dignity and job satisfaction of those workers. (However, diamond mines and sometimes government agencies take a different approach.)
Similarly, the cost
of preventing someone from inserting a back door in the
motherboard
is expensive, so for each of these examples you might choose to
accept
the risk. And once you’ve accepted the risk, you shouldn’t
worry over it.
Sometimes worry is a sign that the risk hasn’t been fully
accepted, or that
the risk acceptance was inappropriate.
The strategies listed in the following tables are intended to
serve as examples
to illustrate ways to address threats. Your "go-to" approach should be to mitigate threats. Mitigation is generally the easiest and the best for
your customers.
(It might look like accepting risk is easier, but over time,
mitigation is easier.)
Mitigating threats can be hard work, and you shouldn’t take
these examples
as complete. There are often other valid ways to address each of
these threats,
and sometimes trade-offs must be made in the way the threats
are addressed.
Addressing Spoofing
Table 1-1 and the list that follows show targets of spoofing, mitigation strategies that address spoofing, and techniques to implement those mitigations.
Table 1-1: Addressing Spoofing Threats

THREAT TARGET: Spoofing a person
MITIGATION STRATEGY: Identification and authentication (usernames and something you know/have/are)
MITIGATION TECHNIQUE: Usernames, real names, or other identifiers; passwords; tokens; biometrics; enrollment/maintenance/expiry

THREAT TARGET: Spoofing a "file" on disk
MITIGATION STRATEGY: Leverage the OS
MITIGATION TECHNIQUE: Full paths; checking ACLs; ensuring that pipes are created properly
MITIGATION STRATEGY: Cryptographic authenticators
MITIGATION TECHNIQUE: Digital signatures or authenticators

THREAT TARGET: Spoofing a network address
MITIGATION STRATEGY: Cryptographic
MITIGATION TECHNIQUE: DNSSEC; HTTPS/SSL; IPsec

THREAT TARGET: Spoofing a program in memory
MITIGATION STRATEGY: Leverage the OS
MITIGATION TECHNIQUE: Many modern operating systems have some form of application identifier that the OS will enforce.
■ When you're concerned about a person being spoofed, ensure that each person has a unique username and some way of authenticating. The traditional way to do this is with passwords, which have all sorts of problems as well as all sorts of advantages that are hard to replicate. See Chapter 14, "Accounts and Identity," for more on passwords.
■ When accessing a file on disk, don't ask for the file with open(file). Use open(/path/to/file). If the file is sensitive, after opening, check various security elements of the file descriptor (such as fully resolved name, permissions, and owner). You want to check with the file descriptor to avoid race conditions. This applies doubly when the file is an executable, although checking after opening can be tricky. Therefore, it may help to ensure that the permissions on the executable can't be changed by an attacker. In any case, you almost never want to call exec() with ./file.
■ When you're concerned about a system or computer being spoofed when it connects over a network, you'll want to use DNSSEC, SSL, IPsec, or a combination of those to ensure you're connecting to the right place.
Addressing Tampering
Table 1-2 and the list that follows show targets of tampering, mitigation strategies that address tampering, and techniques to implement those mitigations.
Table 1-2: Addressing Tampering Threats

THREAT TARGET: Tampering with a file
MITIGATION STRATEGY: Operating system
MITIGATION TECHNIQUE: ACLs
MITIGATION STRATEGY: Cryptographic
MITIGATION TECHNIQUE: Digital signatures; keyed MAC

THREAT TARGET: Racing to create a file (tampering with the file system)
MITIGATION STRATEGY: Using a directory that's protected from arbitrary user tampering
MITIGATION TECHNIQUE: ACLs; using private directory structures (randomizing your file names just makes it annoying to execute the attack)

THREAT TARGET: Tampering with a network packet
MITIGATION STRATEGY: Cryptographic
MITIGATION TECHNIQUE: HTTPS/SSL; IPsec
MITIGATION STRATEGY: Anti-pattern
MITIGATION TECHNIQUE: Network isolation (see the note on the network isolation anti-pattern)
■ Tampering with a fi le: Tampering with fi les can be easy if
the attacker
has an account on the same machine, or by tampering with the
network
when the fi les are obtained from a server.
■ Tampering with memory: The threats you want to worry about are those that can occur when a process with less privileges than you, or that you don't trust, can alter memory. For example, if you're getting data from a shared memory segment, is it ACLed so only the other process can see it? For a web app that has data coming in via AJAX, make sure you validate that the data is what you expect after you pull in the right amount.
■ Tampering with network data: Preventing tampering with network data requires dealing with both spoofing and tampering. Otherwise, someone who wants to tamper can simply pretend to be the other end, using what's called a man-in-the-middle attack. The most common solution to these problems is SSL, with IP Security (IPsec) emerging as another possibility. SSL and IPsec both address confidentiality and tampering, and can help address spoofing.
16 Part I ■ Getting Started
■ Tampering with networks anti-pattern: It's somewhat common for people to hope that they can isolate their network, and so not worry about tampering threats. It's also very hard to maintain isolation over time. Isolation doesn't work as well as you would hope. For example, the isolated United States SIPRNet was thoroughly infested with malware, and the operation to clean it up took 14 months (Shachtman, 2010).
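For file tampering, a keyed MAC (listed as a mitigation technique in Table 1-2) lets you detect changes. The sketch below uses Python's standard hmac module; key storage and distribution are omitted here, and they are the hard part in practice.

```python
import hashlib
import hmac

def file_mac(key: bytes, data: bytes) -> str:
    """Compute a keyed MAC over file contents.

    Anyone can still read the data, but without the key an attacker
    cannot produce a matching tag after tampering with it.
    """
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_mac(key: bytes, data: bytes, tag: str) -> bool:
    # compare_digest avoids leaking match position via timing
    return hmac.compare_digest(file_mac(key, data), tag)
```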
N O T E A program can't check whether it's authentic after it loads. It may be possible for something to rely on "trusted bootloaders" to provide a chain of signatures, but the security decisions are being made external to that code. (If you're not familiar with the technology, don't worry; the key lesson is that a program cannot check its own authenticity.)
Addressing Repudiation
Addressing repudiation is generally a matter of ensuring that your system is designed to log and ensuring that those logs are preserved and protected. Some of that can be handled with simple steps such as using a reliable transport for logs. In this sense, syslog over UDP was almost always silly from a security perspective; syslog over TCP/SSL is now available and is vastly better.
Table 1-3 and the list that follows show targets of repudiation, mitigation strategies that address repudiation, and techniques to implement those mitigations.
Table 1-3: Addressing Repudiation Threats

THREAT TARGET | MITIGATION STRATEGY | MITIGATION TECHNIQUE
No logs means you can't prove anything. | Log | Be sure to log all the security-relevant information.
Logs come under attack | Protect your logs. | ❖ Send over the network ❖ ACL
Logs as a channel for attack | Tightly specified logs | Documenting log design early in the development process
■ No logs means you can't prove anything: This is self-explanatory. For example, when a customer calls to complain that they never got their order, how will this be resolved? Maintain logs so that you can investigate what happens when someone attempts to repudiate something.
■ Logs come under attack: Attackers will do things to prevent your logs from being useful, including filling up the log to make it hard to find the attack or forcing logs to "roll over." They may also do things to set off so many alarms that the real attack is lost in a sea of troubles. Perhaps obviously, sending logs over a network exposes them to other threats that you'll need to handle.
■ Logs as a channel for attack: By design, you're collecting data from sources outside your control, and delivering that data to people and systems with security privileges. An example of such an attack might be sending mail addressed to "</html> [email protected]", causing trouble for web-based tools that don't expect inline HTML.
You can make it easier to write secure code to process your logs by clearly communicating what your logs can't contain, such as "Our logs are all plaintext, and attackers can insert all sorts of things," or "Fields 1–5 of our logs are tightly controlled by our software, fields 6–9 are easy to inject data into. Field 1 is time in GMT. Fields 2 and 3 are IP addresses (v4 or 6)..." Unless you have incredibly strict control, documenting what your logs can contain will likely miss things. (For example, can your logs contain Unicode double-wide characters?)
Addressing Information Disclosure
Table 1-4 and the list that follows show targets of information disclosure, mitigation strategies that address information disclosure, and techniques to implement those mitigations.
Table 1-4: Addressing Information Disclosure Threats

THREAT TARGET | MITIGATION STRATEGY | MITIGATION TECHNIQUE
Network monitoring | Encryption | ❖ HTTPS/SSL ❖ IPsec
Directory or filename (for example, layoff-letters/adamshostack.docx) | Leverage the OS. | ACLs
File contents | Leverage the OS. | ACLs
File contents | Cryptography | File encryption such as PGP; disk encryption (FileVault, BitLocker)
API information disclosure | Design | Careful design control; consider pass by reference or value.
■ Network monitoring: Network monitoring takes advantage of the architecture of most networks to monitor traffic. (In particular, most networks now broadcast packets, and each listener is expected to decide if the packet matters to them.) When networks are architected differently, there are a variety of techniques to draw traffic to or through the monitoring station. If you don't address spoofing, much like tampering, an attacker can just sit in the middle and spoof each end. Mitigating network information disclosure threats requires handling both spoofing and tampering threats. If you don't address tampering, then there are all sorts of clever ways to get information out. Here again, SSL and IP Security options are your simplest choices.
■ Names reveal information: When the name of a directory or a filename itself will reveal information, then the best way to protect it is to create a parent directory with an innocuous name and use operating system ACLs or permissions.
■ File content is sensitive: When the contents of the file need protection, use ACLs or cryptography. If you want to protect all the data should the machine fall into unauthorized hands, you'll need to use cryptography. The forms of cryptography that require the person to manually enter a key or passphrase are more secure and less convenient. There's file, filesystem, and database cryptography, depending on what you need to protect.
■ APIs reveal information: When designing an API, or otherwise passing information over a trust boundary, select carefully what information you disclose. You should assume that the information you provide will be passed on to others, so be selective about what you provide. For example, website errors that reveal the username and password to a database are a common form of this flaw; others are discussed in Chapter 3.
Addressing Denial of Service
Table 1-5 and the list that follows show targets of denial of service, mitigation strategies that address denial of service, and techniques to implement those mitigations.
Table 1-5: Addressing Denial of Service Threats

THREAT TARGET | MITIGATION STRATEGY | MITIGATION TECHNIQUE
Network flooding | Look for exhaustible resources. | ❖ Elastic resources ❖ Work to ensure attacker resource consumption is as high as or higher than yours ❖ Network ACLs
Program resources | Careful design | Elastic resource management, proof of work
Program resources | Avoid multipliers. | Look for places where attackers can multiply CPU consumption on your end with minimal effort on their end: do something to require work or enable distinguishing attackers, such as client does crypto first or login before large work factors (of course, that can't mean that logins are unencrypted).
System resources | Leverage the OS. | Use OS settings.
■ Network flooding: If you have static structures for the number of connections, what happens if those fill up? Similarly, to the extent that it's under your control, don't accept a small amount of network data from a possibly spoofed address and return a lot of data. Lastly, firewalls can provide a layer of network ACLs to control where you'll accept (or send) traffic, and can be useful in mitigating network denial-of-service attacks.
■ Look for exhaustible resources: The first set of exhaustible resources are network related, the second set are those your code manages, and the third are those the OS manages. In each case, elastic resourcing is a valuable technique. For example, in the 1990s some TCP stacks had a hardcoded limit of five half-open TCP connections. (A half-open connection is one in the process of being opened. Don't worry if that doesn't make sense, but rather ask yourself why the code would be limited to five of them.) Today, you can often obtain elastic resourcing of various types from cloud providers.
■ System resources: Operating systems tend to have limits or quotas to control the resource consumption of user-level code. Consider those resources that the operating system manages, such as memory or disk usage. If your code runs on dedicated servers, it may be sensible to allow it to chew up the entire machine. Be careful if you unlimit your code, and be sure to document what you're doing.
■ Program resources: Consider resources that your program manages itself. Also, consider whether the attacker can make you do more work than they're doing. For example, if he sends you a packet full of random data and you do expensive cryptographic operations on it, then your vulnerability to denial of service will be higher than if you make him do the cryptography first. Of course, in an age of botnets, there are limits to how well one can reassign this work. There's an excellent paper by Ben Laurie and Richard Clayton, "Proof of work proves not to work," which argues against proof-of-work schemes (Laurie, 2004).
Addressing Elevation of Privilege
Table 1-6 and the list that follows show targets of elevation of privilege, mitigation strategies that address elevation of privilege, and techniques to implement those mitigations.
Table 1-6: Addressing Elevation of Privilege Threats

THREAT TARGET | MITIGATION STRATEGY | MITIGATION TECHNIQUE
Data/code confusion | Use tools and architectures that separate data and code. | ❖ Prepared statements or stored procedures in SQL ❖ Clear separators with canonical forms ❖ Late validation that data is what the next function expects
Control flow/memory corruption attacks | Use a type-safe language. | Writing code in a type-safe language protects against entire classes of attack.
Control flow/memory corruption attacks | Leverage the OS for memory protection. | Most modern operating systems have memory-protection facilities.
Control flow/memory corruption attacks | Use the sandbox. | ❖ Modern operating systems support sandboxing in various ways (AppArmor on Linux, AppContainer or the MOICE pattern on Windows, Sandboxlib on Mac OS). ❖ Don't run as the "nobody" account; create a new one for each app. Postfix and QMail are examples of the good pattern of one account per function.
Command injection attacks | Be careful. | ❖ Validate that your input is the size and form you expect. ❖ Don't sanitize; log and then throw it away if it's weird.
■ Data/code confusion: Problems where data is treated as code are common. As information crosses layers, what's tainted and what's pure can be lost. Attacks such as XSS take advantage of HTML's freely interweaving code and data. (That is, an .html file contains both code, such as JavaScript, and data, such as text to be displayed, and sometimes formatting instructions for that text.) There are a few strategies for dealing with this. The first is to look for ways in which frameworks help you keep code and data separate. For example, prepared statements in SQL tell the database what statements to expect, and where the data will be.

You can also look at the data you're passing right before you pass it, so you know what validation you might be expected to perform for the function you're calling. For example, if you're sending data to a web page, you might ensure that it contains no <, >, #, or & characters, or whatever. In fact, the value of "whatever" is highly dependent on exactly what exists between "you" and the rendition of the web page, and what security checks it may be performing. If "you" means a web server, it may be very important to have a few < and > symbols in what you produce. If "you" is something taking data from a database and sending it to, say, PHP, then the story is quite different. Ideally, the nature of "you" and the additional steps are clear in your diagrams.
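As a minimal sketch of the prepared-statement technique (using Python's sqlite3 module; the users table and query are hypothetical), the ? placeholder tells the database where the data goes, so attacker-supplied input is matched as a literal value rather than parsed as SQL:

```python
import sqlite3

def find_user(conn, name):
    """Look up a user with a parameterized query.

    The ? placeholder keeps code (the SQL statement) and data (the
    name) separate, so input like "x' OR '1'='1" is compared as a
    plain string instead of rewriting the query.
    """
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (name,))
    return cur.fetchall()
```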
■ Control flow/memory corruption attacks: This set of attacks generally takes advantage of weak typing and static structures in C-like languages to enable an attacker to provide code and then jump to that code. If you
use a type-safe language, such as Java or C#, many of these attacks are harder to execute.

Modern operating systems tend to contain memory protection and randomization features, such as Address Space Layout Randomization (ASLR). Sometimes the features are optional, and require a compiler or linker switch. In many cases, such features are almost free to use, and you should at least try all such features your OS supports. (It's not completely effortless; you may need to recompile, test, or make other such small investments.)

The last set of controls to address memory corruption are sandboxes. Sandboxes are OS features that are designed to protect the OS, or the rest of the programs running as the user, from a corrupted program.
N O T E Details about each of these features are outside the scope of this book, but searching on terms such as type safety, ASLR, and sandbox should provide a plethora of details.
■ Command injection attacks: Command injection attacks are a form of code/data confusion where an attacker supplies a control character, followed by commands. For example, in SQL injection, a single quote will often close a dynamic SQL statement; and when dealing with unix shell scripts, the shell can interpret a semicolon as the end of input, taking anything after that as a command.
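One way to avoid the shell-semicolon problem is never to hand attacker-influenced strings to a shell at all. In the sketch below (the use of wc is an arbitrary illustration), the command is passed as an argument list with no shell involved, so a filename like "foo; rm -rf /" reaches wc as a single literal argument rather than two commands.

```python
import subprocess

def count_lines(filename: str) -> int:
    """Count lines in a file by running wc without a shell.

    Because the command is an argument list and shell=True is not
    used, metacharacters such as ; | and $ in filename are never
    interpreted as shell syntax. The "--" ends option parsing, so a
    filename starting with "-" can't inject options either.
    """
    result = subprocess.run(["wc", "-l", "--", filename],
                            capture_output=True, text=True, check=True)
    return int(result.stdout.split()[0])
```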
In addition to working through each STRIDE threat you encounter, a few other recurring themes will come up as you address your threats; these are covered in the following two sections.
Validate, Don’t Sanitize
Know what you expect to see, how much you expect to see, and validate that that's what you're receiving. If you get something else, throw it away and return an error message. Unless your code is perfect, errors in sanitization will hurt a lot, because after you write that sanitize-input function you're going to rely on it. There have been fascinating attacks that rely on a sanitize function to get their code into shape to execute.
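As an illustration of validating rather than sanitizing, the allow-list check below accepts only what it expects and rejects everything else outright; the username pattern and length limit are assumptions for the example, not a universal rule.

```python
import re

# Allow-list of what we expect: a short, lowercase identifier.
USERNAME = re.compile(r"[a-z][a-z0-9_]{0,31}")

def validate_username(value: str) -> str:
    """Return the input unchanged if it matches expectations; else raise.

    Contrast with sanitizing (stripping "bad" characters), which
    attackers have used to reshape their input into working code.
    """
    if USERNAME.fullmatch(value):
        return value
    raise ValueError("invalid username")
```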
Trust the Operating System
One of the themes that recurs in the preceding tables is "trust the operating system." Of course, you may want to discount that because I did much of this
work while working for Microsoft, a purveyor of a variety of fine operating system software, so there might be some bias here. It's a valid point, and good for you for being skeptical. See, you're threat modeling already!
More seriously, trusting the operating system is a good idea for a number of reasons:

■ The operating system provides you with security features so you can focus on your unique value proposition.
■ The operating system runs with privileges that are probably not available to your program or your attacker.
■ If your attacker controls the operating system, you're likely in a world of hurt regardless of what your code tries to do.
With all of that "trust the operating system" advice, you might be tempted to ask why you need this book. Why not just rely on the operating system? Well, many of the building blocks just discussed are discretionary. You can use them well or you can use them poorly. It's up to you to ensure that you don't set the permissions on a file to 777, or the ACLs to allow Guest accounts to write. It's up to you to write code that runs well as a normal or even sandboxed user, and it's certainly up to you in these early days of client/server, web, distributed systems, web 2.0, cloud, or whatever comes next to ensure that you're building the right security mechanisms that these newfangled widgets don't yet offer.
File Bugs
Now that you have a list of threats and ways you would like to mitigate them, you're through the complex, security-centered parts of the process. There are just a few more things to do, the first of which is to treat each line of the preceding tables as a bug. You want to treat these as bugs because if you ship software, you've learned to handle bugs in some way. You presumably have a way to track them, prioritize them, and ensure that you're closing them with an appropriate degree of consistency. This will mean something very different to a three-person start-up versus a medical device manufacturer, but both organizations will have a way to handle bugs. You want to tap into that procedure to ensure that threat modeling isn't just a paper exercise.
You can write the text of the bugs in a variety of ways, based on what your organization does. Examples of filing a bug might include the following:

■ Someone might use the /admin/ interface without proper authorization.
■ The admin interface lacks proper authorization controls.
■ There's no automated security testing for the /admin/ interface.
Whichever way you go, it's great if you can include the entire threat in the bug, and mark it as a security bug if your bug-tracking tool supports that. (If you're a super-agile scrum shop, use a uniquely colored Post-it for security bugs.) You'll also have to prioritize the bugs. Elevation-of-privilege bugs are almost always going to fall into the highest-priority category, because when they're exploited they lead to so much damage. Denial of service often falls toward the bottom of the stack, but you'll have to consider each bug to determine how to rank it.
Checking Your Work
Validation of your threat model is the last thing you do as part of threat modeling. There are a few tasks to be done here, and it is best to keep them aligned with the order in which you did the previous work. Therefore, the validation tasks include checking the model, checking that you've looked for each threat, and checking your tests. You probably also want to validate the model a second time as you get close to shipping or deploying.
Checking the Model

You should ensure that the final model matches what you built. If it doesn't, how can you know that you found the right, relevant threats? To do so, try to arrange a meeting during which everyone looks at the diagram, and answer the following questions:

■ Is this complete?
■ Is it accurate?
■ Does it cover all the security decisions we made?
■ Can I start the next version with this diagram without any changes?

If everyone says yes, your diagram is sufficiently up to date for the next step. If not, you'll need to update it.
Updating the Diagram
As you went through the diagram, you might have noticed that it's missing key data. If it were a real system, there might be extra interfaces that were not drawn in, or there might be additional databases. There might be details that you jumped to the whiteboard to draw in. If so, you need to update the diagram with those details. A few rules of thumb are useful as you create or update diagrams:
■ Focus on data flow, not control flow.
■ Anytime you need to qualify your answer with "sometimes" or "also," you should consider adding more detail to break out the various cases. For example, if you say, "Sometimes we connect to this web service via SSL,
and sometimes we fall back to HTTP," you should draw both of those data flows (and consider whether an attacker can make you fall back like that).
■ Anytime you find yourself needing more detail to explain security-relevant behavior, draw it in.
■ Any place you argued over the design or construction of the system, draw in the agreed-on facts. This is an important way to ensure that everyone ended that discussion on the same page. It's especially important for larger teams when not everyone is in the room for the threat model discussions. If they see a diagram that contradicts their thinking, they can either accept it or challenge the assumptions; but either way, a good clear diagram can help get everyone on the same page.
■ Don't have data sinks: You write the data for a reason. Show who uses it.
■ Data can't move itself from one data store to another: Show the process that moves it.
■ The diagram should tell a story, and support you telling stories while pointing at it.
■ Don't draw an eye chart (a diagram with so much detail that you need to squint to read the tiny print).
Diagram Details
If you’re wondering how to reconcile that last rule of thumb,
don’t draw an eye
chart, with all the details that a real software project can entail,
one technique
is to use a sub diagram that shows the details of one particular
area. You should
look for ways to break things out that make sense for your
project. For example,
if you have one hyper-complex process, maybe everything in
that process should
be covered in one diagram, and everything outside it in another.
If you have
a dispatcher or queuing system, that’s a good place to break
things up. Your
databases or the fail-over system is also a good split. Maybe
there’s a set of a
few elements that really need more detail. All of these are good
ways to break
things out.
The key thing to remember is that the diagram is intended to
help ensure
that you understand and can discuss the system. Recall the
quote that opens
this book: “All models are wrong. Some models are useful.”
Therefore, when
you’re adding additional diagrams, don’t ask, “Is this the right
way to do it?”
Instead, ask, “Does this help me think about what might go
wrong?”
Checking Each Threat
There are two main types of validation activities you should do. The first is checking that you did the right thing with each threat you found. The other is asking if you found all the threats you should find.
In terms of checking that you did the right thing with each threat you did find, the first and foremost question here is "Did I do something with each unique threat I found?" You really don't want to drop stuff on the floor. This is "turning the crank" sort of work. It's rarely glamorous or exciting until you find the thing you overlooked. You can save a lot of time by taking meeting minutes and writing a bug number next to each one, checking that you've addressed each when you do your bug triage.

The next question is "Did I do the right something with each threat?" If you've filed bugs with some sort of security tag, run a query for all the security bugs, and give each one a going-over. This can be as lightweight as reading each bug and asking yourself, "Did I do the right thing?" or you could use a short checklist, an example of which ("Validating Threats") is included at the end of this chapter in the "Checklists for Diving In and Threat Modeling" section.
Checking Your Tests
For each threat that you address, ensure you've built a good test to detect the problem. Your test can be a manual testing process or an automated test. Some of these tests will be easy, and others very tricky. For example, if you want to ensure that no static web page under /beta can be accessed without the beta cookie, you can build a quick script that retrieves all the pages from your source repository, constructs a URL for each, and tries to collect the page. You could extend the script to send a cookie with each request, and then re-request with an admin cookie. Ideally, that's easy to do in your existing web testing framework. It gets a little more complex with dynamic pages, and a lot more complex when the security risk is something such as SQL injection or secure parsing of user input. There are entire books written on those subjects, not to mention entire books on the subject of testing. The key question you should ask is something like "Are my security tests in line with the other software tests and the sorts of risks that failures expose?"
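A check like the /beta example might be sketched as follows. The fetch callable stands in for whatever request helper your web testing framework provides (assumed here to return an HTTP status code); the /beta layout and cookie handling are illustrative assumptions, not a real API.

```python
def find_ungated_pages(paths, fetch):
    """Return the pages that were served without the beta cookie.

    paths: URL paths for every static page under /beta, for example
    built by walking the source repository.
    fetch(path, cookie): issues a request and returns the status code.
    A 200 with no cookie means the gating check failed for that page.
    """
    return [p for p in paths if fetch(p, cookie=None) == 200]
```

Because the request helper is passed in, the same check can run against a test double in unit tests and against the real site in integration tests.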
Threat Modeling on Your Own
You have now walked through your first threat model. Congratulations! Remember though: You're not going to get to Carnegie Hall if you don't practice, practice, practice. That means it is time to do it again, this time on your own, because doing it again is the only way to get better. Pick a system you're working on and threat model it. Follow this simplified, five-step process as you go:

1. Draw a diagram.
2. Use the EoP game to find threats.
3. Address each threat in some way.
4. Check your work with the checklists at the end of this chapter.
5. Celebrate and share your work.
Right now, if you’re new to threat modeling, your best bet is to
do it often,
applying it to the software and systems that matters to you.
After threat model-
ing a few systems, you’ll fi nd yourself getting more
comfortable with the tools
and techniques. For now, the thing to do is practice. Build your
fi rst muscles to
threat model with.
This brings up the question, what should you threat model next?
What you’re working on now is the fi rst place to look for the
next system to
threat model. If it has a trust boundary of some sort, it may be a
good candidate.
If it’s too simple to have trust boundaries, threat modeling it
probably won’t
be very satisfying. If it has too many boundaries, it may be too
big a project to
chew on all at once. If you’re collaborating closely on it with a
few other people
who you trust, that may be a good opportunity to play EoP with
them. If you’re
working on a large team, or across organizational boundaries, or
things are tense,
then those people may not be good fi rst collaborators on threat
modeling. Start
with what you’re working on now, unless there are tangible
reasons to wait.
Checklists for Diving In and Threat Modeling
There’s a lot in this chapter. As you sit down to really do the
work yourself, it can
be tricky to assess how you’re doing. Here are some checklists
that are designed
to help you avoid the most common problems. Each question is
designed to be
read aloud and to have an affi rmative answer from everyone
present. After read-
ing each question out loud, encourage questions or clarifi cation
from everyone
else involved.
Diagramming
1. Can we tell a story without changing the diagram?
2. Can we tell that story without using words such as "sometimes" or "also"?
3. Can we look at the diagram and see exactly where the software will make a security decision?
4. Does the diagram show all the trust boundaries, such as where different accounts interact? Do you cover all UIDs, all application roles, and all network interfaces?
5. Does the diagram reflect the current or planned reality of the software?
6. Can we see where all the data goes and who uses it?
7. Do we see the processes that move data from one data store to another?
Threats
1. Have we looked for each of the STRIDE threats?
2. Have we looked at each element of the diagram?
3. Have we looked at each data flow in the diagram?

N O T E Data flows are a type of element, but they are sometimes overlooked as people get started, so question 3 is a belt-and-suspenders question to add redundancy. (A belt-and-suspenders approach ensures that a gentleman's pants stay up.)
Validating Threats
1. Have we written down or filed a bug for each threat?
2. Is there a proposed/planned/implemented way to address each threat?
3. Do we have a test case per threat?
4. Has the software passed the test?
Summary
Any technical professional can learn to threat model. Threat modeling involves the intersection of two models: a model of what can go wrong (threats), applied to a model of the software you're building or deploying, which is encoded in a diagram. One model of threats is STRIDE: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. This model of threats has been made into the Elevation of Privilege game, which adds structure and hints to the model.

With a whiteboard diagram and a copy of Elevation of Privilege, developers can threat model software that they're building, systems administrators can threat model software they're deploying or a system they're constructing, and security professionals can introduce threat modeling to those with skillsets outside of security.

It's important to address threats, and the STRIDE threats are the inverse of properties you want. There are mitigation strategies and techniques for developers and for systems administrators.

Once you've created a threat model, it's important to check your work by making sure you have a good model of the software in an up-to-date diagram, and that you've checked each threat you've found.
C H A P T E R
2
Strategies for Threat Modeling

The earlier you find problems, the easier it is to fix them. Threat modeling is all about finding problems, and therefore it should be done early in your development or design process, or in preparing to roll out an operational system. There are many ways to threat model. Some ways are very specific, like a model airplane kit that can only be used to build an F-14 fighter jet. Other methods are more versatile, like Lego building blocks that can be used to make a variety of things. Some threat modeling methods don't combine easily, in the same way that Erector set pieces and Lego set blocks don't fit together. This chapter covers the various strategies and methods that have been brought to bear on threat modeling, presents each one in depth, and sets the stage for effectively finding threats.

You'll start with very simple methods such as asking "what's your threat model?" and brainstorming about threats. Those can work for a security expert, and they may work for you. From there, you'll learn about three strategies for threat modeling: focusing on assets, focusing on attackers, and focusing on software. These strategies are more structured, and can work for people with different skillsets. A focus on software is usually the most appropriate strategy. The desire to focus on assets or attackers is natural, and often presented as an unavoidable or essential aspect of threat modeling. It would be wrong not to present each in its best light before discussing issues with those strategies. From there, you'll learn about different types of diagrams you can use to model your system or software.
www.it-ebooks.info
http://www.it-ebooks.info/
30 Part I ■ Getting Started
c02.indd 11:35:5:AM 01/17/2014 Page 30
NOTE This chapter doesn’t include the specific threat building blocks that discover threats, which are the subject of the next few chapters.
“What’s Your Threat Model?”
The question “what’s your threat model?” is a great one because in just four words, it can slice through many conundrums to determine what you are worried about. Answers are often of the form “an attacker with the laptop” or “insiders,” or (unfortunately, often) “huh?” The “huh?” answer is useful because it reveals how much work would be needed to find a consistent and structured approach to defense. Consistency and structure are important because they help you invest in defenses that will stymie attackers. There’s a compendium of standard answers to “what’s your threat model?” in Appendix A, “Helpful Tools,” but a few examples are listed here as well:
■ A thief who could steal your money
■ The company stakeholders (employees, consultants, shareholders, etc.) who access sensitive documents and are not trusted
■ An untrusted network
■ An attacker who could steal your cookie (web or otherwise)
NOTE Throughout this book, you’ll visit and revisit the same example for each of these approaches. Your main targets are the fictitious Acme Corporation’s “Acme/SQL,” which is a commercial database server, and Acme’s operational network. Using Acme examples, you can see how the different approaches play out against the same systems.
Applying the question “what’s your threat model?” to the Acme Corporation example, you might get the following answers:

■ For the Acme SQL database, the threat model would be an attacker who wants to read or change data in the database. A more subtle model might also include people who want to read the data without showing up in the logs.
■ For Acme’s financial system, the answers might include someone getting a check they didn’t deserve, customers who don’t make a payment they owe, and/or someone reading or altering financial results before reporting.
If you don’t have a clear answer to the question “what’s your threat model?” it can lead to inconsistency and wasted effort. For example, start-up Zero-Knowledge Systems didn’t have a clear answer to the question “what’s your threat model?” Because there was no clear answer, there wasn’t consistency in what security features were built. A great deal of energy went into building defenses against the most complex attacks, and these choices to defend against such attackers had performance impacts on the whole system. While preventing governments from spying on customers was a fun technical challenge and an emotionally resonant goal, both the challenge and the emotional impact made it hard to make technical decisions that could have made the business more successful. Eventually, a clearer answer to “what’s your threat model?” let Zero-Knowledge Systems invest in mitigations that all addressed the same subset of possible threats.
So how do you ensure you have a clear answer to this question? Often, the answers are not obvious, even to those who think regularly about security, and the question itself offers little structure for figuring out the answers. One approach, often recommended, is to brainstorm. In the next section, you’ll learn about a variety of approaches to brainstorming and the tradeoffs associated with those approaches.
Brainstorming Your Threats
Brainstorming is the most traditional way to enumerate threats. You get a set of experienced experts in a room, give them a way to take notes (whiteboards or cocktail napkins are traditional) and let them go. The quality of the brainstorm is bounded by the experience of the brainstormers and the amount of time spent brainstorming.

Brainstorming involves a period of idea generation, followed by a period of analyzing and selecting the ideas. Brainstorming for threat modeling involves coming up with possible attacks of all sorts. During the idea generation phase, you should forbid criticism. You want to explore the space of possible threats, and an atmosphere of criticism will inhibit such idea generation. A moderator can help keep brainstorming moving.
During brainstorming, it is key to have an expert on the technology being modeled in the room. Otherwise, it’s easy to make bad assumptions about how it works. However, when you have an expert who’s proud of their technology, you need to ensure that you don’t end up with a “proud parent” offended that their software baby is being called ugly. A helpful rule is that it’s the software being attacked, not the software architects. That doesn’t always suffice, but it’s a good start. There’s also a benefit to bringing together a diverse grouping of experts with a broader set of experience.
Brainstorming can also devolve into out-of-scope attacks. For example, if you’re designing a chat program, attacks by the memory management unit against the CPU are probably out of scope, but if you’re designing a motherboard, these attacks may be the focus of your threat modeling. One way to handle this issue is to list a set of attacks that are out of scope, such as “the administrator is malicious” or “an attacker edits the hard drive on another system,” as well as a set of attack equivalencies, like “an attacker can smash the stack and execute code,” so that those issues can be acknowledged and handled in a consistent way. A variant of brainstorming is the exhortation to “think like an attacker,” which is discussed in more detail in Chapter 18, “Experimental Approaches.”

Some attacks you might brainstorm in threat modeling Acme’s financial statements include breaking in over the Internet, getting the CFO drunk, bribing a janitor, or predicting the URL where the financials will be published. These can be a bit of a grab bag, so the next section provides somewhat more focused approaches.
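One way to keep such scope lists actionable is to record them in a machine-checkable form that the team agrees on. The sketch below is hypothetical (the book prescribes no particular tooling, and all names are illustrative): it triages brainstormed threats against an out-of-scope list and an equivalence map.

```python
# Hypothetical sketch: triaging brainstormed threats against agreed
# out-of-scope attacks and attack equivalences. Names are illustrative.

OUT_OF_SCOPE = {
    "the administrator is malicious",
    "an attacker edits the hard drive on another system",
}

# Map specific attacks to an equivalence class handled in one place.
EQUIVALENCES = {
    "an attacker can smash the stack": "attacker can execute code",
    "an attacker can overwrite a function pointer": "attacker can execute code",
}

def triage(threat: str) -> str:
    """Return how a brainstormed threat should be handled."""
    t = threat.lower().strip()
    if t in OUT_OF_SCOPE:
        return "out of scope (documented)"
    if t in EQUIVALENCES:
        return f"handled as: {EQUIVALENCES[t]}"
    return "in scope: analyze and mitigate"

print(triage("The administrator is malicious"))
# out of scope (documented)
```

Writing the lists down this way keeps the “acknowledged and handled consistently” promise testable rather than tribal knowledge.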
Brainstorming Variants
Free-form or “normal” brainstorming, as discussed in the preceding section, can be used as a method for threat modeling, but there are more specific methods you can use to help focus your brainstorming. The following sections describe variations on classic brainstorming: scenario analyses, pre-mortems, and movie plotting.
Scenario Analysis
It may help to focus your brainstorming with scenarios. If you’re using written scenarios in your overall engineering, you might start from those and ask what might go wrong, or you could use a variant of Chandler’s law (“When in doubt, have a man come through a door with a gun in his hand.”) You don’t need to restrict yourself to a man with a gun, of course; you can use any of the attackers listed in Appendix C, “Attacker Lists.”

For an example of scenario-specific brainstorming, try to threat model for handing your phone to a cute person in a bar. It’s an interesting exercise. The recipient could perhaps text donations to the Red Cross, text an important person to “stop bothering me,” or post to Facebook that “I don’t take hints well” or “I’m skeevy,” not to mention possibilities of running away with the phone or dropping it in a beer.

Less frivolously, your sample scenarios might be based on the product scenarios or use cases for the current development cycle, and therefore cover failover and replication, and how those services could be exploited when not properly authenticated and authorized.
Pre-Mortem
Decision-sciences expert Gary Klein has suggested another brainstorming technique he calls the pre-mortem (Klein, 1999). The idea is to gather those involved in a decision and ask them to assume that it’s shortly after a project deadline, or after a key milestone, and that things have gone totally off the rails. With an “assumption of failure” in mind, the idea is to explore why the participants believe it will go off the rails. The value of calling this a pre-mortem is the framing it brings. The natural optimism that accompanies a project is replaced with an explicit assumption that it has failed, giving you and other participants a chance to express doubts. In threat modeling, the assumption is that the product is being successfully attacked, and you now have permission to express doubts or concerns.
Movie Plotting
Another variant of brainstorming is movie plotting. The key difference between “normal brainstorming” and “movie plotting” is that the attack ideas are intended to be outrageous and provocative to encourage the free flow of ideas. Defending against these threats likely involves science-fiction-type devices that impinge on human dignity, liberty, and privacy without actually defending anyone. Examples of great movies for movie plot threats include Ocean’s Eleven, The Italian Job, and every Bond movie that doesn’t stink. If you’d like to engage in more structured movie plotting, create three lists: flawed protagonists, brilliant antagonists, and whiz-bang gadgetry. You can then combine them as you see fit.

Examples of movie plot threats include a foreign spy writing code for Acme SQL so that a fourth connection attempt lets someone in as admin, a scheming CFO stealing from the firm, and someone rappelling from the ceiling to avoid the pressure mats in the floor while hacking into the database from the console. Note that these movie plots are equally applicable to Acme and its customers.

The term movie plotting was coined by Bruce Schneier, a respected security expert. Announcing his contest to elicit movie plot threats, he said: “The purpose of this contest is absurd humor, but I hope it also makes a point. Terrorism is a real threat, but we’re not any safer through security measures that require us to correctly guess what the terrorists are going to do next” (Schneier, 2006). The point doesn’t apply only to terrorism; convoluted but vividly described threats can be a threat to your threat modeling methodology.
Literature Review
As a precursor to brainstorming (or any other approach to finding threats), reviewing threats to systems similar to yours is a helpful starting point in threat modeling. You can do this using search engines, or by checking the academic literature and following citations. It can be incredibly helpful to search on competitors or related products. To start, search on a competitor, appending terms such as “security,” “security bug,” “penetration test,” “pwning,” or “Black Hat,” and use your creativity. You can also review common threats in this book, especially Part III, “Managing and Addressing Threats,” and the appendixes. Additionally, Ross Anderson’s Security Engineering is a great collection of real-world attacks and engineering lessons you can draw on, especially if what you’re building is similar to what he covers (Wiley, 2008).

A literature review of threats against databases might lead to an understanding of SQL injection attacks, backup failures, and insider attacks, suggesting the need for logs. Doing a review is especially helpful for those developing their skills in threat modeling. Be aware that a lot of the threats that may come up can be super-specific. Treat them as examples of more general cases, and look for variations and related problems as you brainstorm.
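For instance, a literature review that surfaces SQL injection points directly at a concrete coding pattern to look for. A minimal illustration (using Python’s sqlite3 as a stand-in; Acme/SQL itself is fictitious) shows the general case behind the specific threat:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3kr1t')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the query.
unsafe = f"SELECT secret FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())   # leaks every row: [('s3kr1t',)]

# Mitigated: a parameterized query treats the input as pure data.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # []
```

The specific finding (“injection in product X”) generalizes to “any place our code assembles queries from untrusted strings,” which is the kind of variation the paragraph above suggests hunting for.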
Perspective on Brainstorming
Brainstorming and its variants suffer from a variety of problems. Brainstorming often produces threats that are hard or impossible to address. Brainstorming intentionally requires the removal of scoping or boundaries, and the threats are very dependent on the participants and how the session happens to progress. When experts get together, unstructured discussion often ensues. This can be fun for the experts and it usually produces interesting results, but oftentimes, experts are in short supply. Other times, engineers get frustrated with the inconsistency of “ask two experts, get three answers.”

There’s one other issue to consider, and that relates to exit criteria. It’s difficult to know when you’re done brainstorming, and whether you’re done because you have done a good job or if everyone is just tired. Engineering management may demand a time estimate that they can insert into their schedule, and these are difficult to predict. The best approach to avoid this timing issue is simply to set a meeting of defined length. Unfortunately, this option doesn’t provide a high degree of confidence that all interesting threats have been found.

Because of the difficulty of addressing threats illuminated with a limitless brainstorming technique and the poorly defined exit criteria of a brainstorming session, it is important to consider other approaches to threat modeling that are more prescriptive, formal, repeatable, or less dependent on the aptitudes and knowledge of the participants. Such approaches are the subject of the rest of this chapter and also discussed in the rest of Part II, “Finding Threats.”
Structured Approaches to Threat Modeling
When it’s hard to answer “What’s your threat model?” people often use an approach centered on models of their assets, models of attackers, or models of their software. Centering on one of those is preferable to using approaches that attempt to combine them because these combinations tend to be confusing.
Assets are the valuable things you have. The people who might go after your assets are attackers, and the most common way for them to attack is via the software you’re building or deploying.

Each of these is a natural place to start thinking about threats, and each has advantages and disadvantages, which are covered in this section. There are people with very strong opinions that one of these is right (or wrong). Don’t worry about “right” or “wrong,” but rather “usefulness.” That is, does your approach help you find problems? If it doesn’t, it’s wrong for you, however forcefully someone might argue its merits.

These three approaches can be thought of as analogous to Lincoln Log sets, Erector sets, and Lego sets. Each has a variety of pieces, and each enables you to build things, but they may not combine in ways as arbitrary as you’d like. That is, you can’t snap Lego blocks to an Erector set model. Similarly, you can’t always snap attackers onto a software model and have something that works as a coherent whole.
To understand these three approaches, it can be useful to apply them to something concrete. Figure 2-1 shows a data flow diagram of the Acme/SQL system.

Figure 2-1: Data flow diagram of the Acme/SQL database (Web Clients and SQL Clients connect to the Acme Front End(s), which talk to the Database process; the Database manages the Data, Management, and Logs data stores; Logs feed a Log Analysis process; a DBA (human) uses DB Admin tools; a trust boundary separates the clients and human users from the system)
Looking at the diagram, and reading from left to right, you can see two types of clients accessing the front ends and the core database, which manages transactions, access control, atomicity, and so on. Here, assume that the Acme/SQL system comes with an integrated web server, and that authorized clients are given nearly raw access to the data. There could simultaneously be web servers offering deeper business logic, access to multiple back ends, integration with payment systems, and so on. Those web servers would access Acme/SQL via the SQL protocol over a network connection.

Back to the diagram, Figure 2-1 also shows a set of DB Admin tools that the DBA (the human database administrator) uses to manage the system. As shown in the diagram, there are three conceptual data stores: Data, Management (including metadata such as locks, policies, and indices), and Logs. These might be implemented in memory, as files, as custom stores on raw disk, or delegated to network storage. As you dig in, the details matter greatly, but you usually start modeling from the conceptual level, as shown.
Finally, there’s a log analysis package. Note that only the database core has direct access to the data and management information in this design. You should also note that most of the arrows are two-way, except Database ➪ Logs and Logs ➪ Log Analysis. Of course, the Log Analysis process will be querying the logs, but because it’s intended as a read-only interface, it is represented as a one-way arrow. Very occasionally, you might have strictly one-way flows, such as those implemented by SNMP traps or syslog. Some threat modelers prefer two one-way arrows, which can help you see threats in each direction, but also lead to busy diagrams that are hard to label or read. If your diagrams are simple, the pair of one-way arrows helps you find threats, and is therefore better than two-way. If your diagram is complex, either approach can be used.

The data flow diagram is discussed in more detail later in this chapter, in the section “Data Flow Diagrams.” In the next few sections, you’ll see how to apply asset, attacker, and software-centric models to find threats against Acme/SQL.
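A diagram like Figure 2-1 can also be captured in a structured form so it stays current as the system changes. The sketch below is a hypothetical, minimal encoding of the Figure 2-1 model (element names come from the figure; the representation itself is illustrative, not a tool the book prescribes):

```python
# Minimal encoding of the Figure 2-1 model. Element kinds follow DFD
# conventions: external entity, process, data store.
elements = {
    "Web Clients":  "external",
    "SQL Clients":  "external",
    "Front Ends":   "process",
    "Database":     "process",
    "DB Admin":     "process",
    "Log Analysis": "process",
    "Data":         "store",
    "Management":   "store",
    "Logs":         "store",
}

# (source, destination, two_way)
flows = [
    ("Web Clients", "Front Ends", True),
    ("SQL Clients", "Front Ends", True),
    ("Front Ends",  "Database",   True),
    ("DB Admin",    "Database",   True),
    ("Database",    "Data",       True),
    ("Database",    "Management", True),
    ("Database",    "Logs",       False),      # one-way: write-only logging
    ("Logs",        "Log Analysis", False),    # one-way: read-only analysis
]

# Flows originating at external entities cross the trust boundary and
# deserve the closest scrutiny when enumerating threats.
boundary_crossings = [f for f in flows if elements[f[0]] == "external"]
for src, dst, _ in boundary_crossings:
    print(f"{src} -> {dst} crosses the trust boundary")
```

One payoff of this form is that simple checks (every flow endpoint exists, every boundary crossing has been reviewed) can run automatically as the model evolves.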
Focusing on Assets
It seems very natural to center your approach on assets, or things of value. After all, if a thing has no value, why worry about how someone might attack it? It turns out that focusing on assets is less useful than you may hope, and is therefore not the best approach to threat modeling. However, there are a small number of people who will benefit from asset-centered threat modeling. The most likely to benefit are a team of security experts with experience structuring their thinking around assets. (Having found a way that works for them, there may be no reason to change.) Less technical people may be able to contribute to threat modeling by saying “focus on this asset.” If you are in either of these groups, or work with them, congratulations! This section is for you. If you aren’t one of those people, however, don’t be too quick to skip ahead. It is still important to have a good understanding of the role assets can play in threat modeling, even if they’re not in a starring role. It can also help for you to understand why the approach is not as useful as it may appear, so you can have an informed discussion with those advocating it.
The term asset usually refers to something of value. When people bring up assets as part of a threat modeling activity, they often mean something an attacker wants to access, control, or destroy. If everyone who touches the threat modeling process doesn’t have a working agreement about what an asset is, your process will either get bogged down, or participants will just talk past each other.

There are three ways the term asset is commonly used in threat modeling:

■ Things attackers want
■ Things you want to protect
■ Stepping stones to either of these

You should think of these three types of assets as families rather than categories because just as people can belong to more than one family at a time, assets can take on more than one meaning at a time. In other words, the tags that apply to assets can overlap, as shown in Figure 2-2. The most common usage of asset in discussing threat models seems to be a marriage of “things attackers want” and “things you want to protect.”
Figure 2-2: The overlapping definitions of assets (a Venn diagram of “things attackers want,” “things you want to protect,” and “stepping stones”)
NOTE There are a few other ways in which the term asset is used by those who are threat modeling—such as a synonym for computer, or a type of computer (for example, “Targeted assets: mail server, database”). For the sake of clarity, this book only uses asset with explicit reference to one or more of the three families previously defined, and you should try to do the same.
Things Attackers Want
Usually assets that attackers want are relatively tangible things such as “this database of customer medical data.” Good examples of things attackers want include the following:

■ User passwords or keys
■ Social security numbers or other identifiers
■ Credit card numbers
■ Your confidential business data
Things You Want to Protect
There’s also a family of assets you want to protect. Unlike the tangible things attackers want, many of these assets are intangibles. For example, your company’s reputation or goodwill is something you want to protect. Competitors or activists may well attack your reputation by engaging in smear tactics. From a threat modeling perspective, your reputation is usually too diffuse to be able to technologically mitigate threats against it. Therefore, you want to protect it by protecting the things that matter to your customers.

As an example of something you want to protect, if you had an empty safe, you intuitively don’t want someone to come along and stick their stethoscope to it. But there’s nothing in it, so what’s the damage? Changing the combination and letting the right folks (but only the right folks) know the new combination requires work. Therefore you want to protect this empty safe, but it would be an unlikely target for a thief. If that same safe has one million dollars in it, it would be much more likely to pique a thief’s interest. The million dollars is part of the family of things you want to protect that attackers want, too.
Stepping Stones
The final family of assets is stepping stones to other assets. For example, everything drawn in a threat model diagram is something you want to protect because it may be a stepping stone to the targets that attackers want. In some ways, the set of stepping stone assets is an attractive nuisance. For example, every computer has CPU and storage that an attacker can use. Most also have Internet connectivity, and if you’re in systems management or operational security, many of the computers you worry most about will have special access to your organization’s network. They’re behind a firewall or have VPN access. These are stepping stones. If they are uniquely valuable stepping stones in some way, note that. In practice, it’s rarely helpful to include “all our PCs” in an asset list.
NOTE Referring back to the safe example in the previous section, the safe combination is a member of the stepping stone family. It may well be that stepping stones and things you protect are, in practice, very similar. The list of technical elements you protect that are not members of the stepping-stone family appears fairly short.
Implementing Asset-Centric Modeling
If you were to threat model with an asset-focused approach, you would make a list of your assets and then consider how an attacker could threaten each. From there, you’d consider how to address each threat.

After an asset list is created, you should connect each item on the list to particular computer systems or sets of systems. (If an asset includes something like “Amazon Web Services” or “Microsoft Azure,” then you don’t need to be able to point to the computer systems in question, you just need to understand where they are—eventually you’ll need to identify the threats to those systems and determine how to mitigate them.)

The next step is to draw the systems in question, showing the assets and other components as well as interconnections, until you can tell a story about them. You can use this model to apply either an attack set like STRIDE or an attacker-centered brainstorm to understand how those assets could be attacked.
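The steps just described (list assets, tie each to systems, then apply a threat-enumeration technique) can be sketched mechanically. In this hypothetical example the asset names and the pairing with STRIDE are purely illustrative:

```python
# Hypothetical sketch of asset-centric threat modeling for Acme/SQL.
# Each asset is connected to the systems it lives on; "DBA credentials"
# is a stepping-stone asset.
assets = {
    "customer records":  ["DB cluster"],
    "financial results": ["DB cluster", "reporting server"],
    "DBA credentials":   ["DBA workstation"],
}

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

# Cross each asset's systems with a threat taxonomy to seed analysis.
worklist = [
    (asset, system, threat)
    for asset, systems in assets.items()
    for system in systems
    for threat in STRIDE
]

print(len(worklist))  # 24 asset/system/threat questions to consider
```

Note how the cross-product illustrates the critique that follows: the asset list itself does no threat finding; a methodology such as STRIDE still has to be applied to each system.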
Perspective on Asset-Centric Threat Modeling
Focusing on assets appears to be a common-sense approach to threat modeling, to the point where it seems hard to argue with. Unfortunately, much of the time, a discussion of assets does not improve threat modeling. However, the misconception is so common that it’s important to examine why it doesn’t help.

There’s no direct line from assets to threats, and no prescriptive set of steps. Essentially, effort put into enumerating assets is effort you’re not spending finding or fixing threats. Sometimes, that involves a discussion of what’s an asset, or which type of asset you’re discussing. That discussion, at best, results in a list of things to look for in your software or operational model, so why not start by creating such a model? Once you have a list of assets, that list is not (ahem) a stepping stone to finding threats; you still need to apply some methodology or approach. Finally, assets may help you prioritize threats, but if that’s your goal, it doesn’t mean you should start with or focus on assets. Generally, such information comes out naturally when discussing impacts as you prioritize and address threats. Those topics are covered in Part III, “Managing and Addressing Threats.”

How you answer the question “what are our assets?” should help focus your threat modeling. If it doesn’t help, there is no point to asking the question or spending time answering it.
Focusing on Attackers
Focusing on attackers seems like a natural way to threat model. After all, if no one is going to attack your system, why would you bother defending it? And if you’re worried because people will attack your systems, shouldn’t you understand them? Unfortunately, like asset-centered threat modeling, attacker-centered threat modeling is less useful than you might anticipate. But there are also a small number of scenarios in which focusing on attackers can come in handy, and they’re the same scenarios as assets: experts, less-technical input to your process, and prioritization. And similar to the “Focusing on Assets” section, you can also learn for yourself why this approach isn’t optimal, so you can discuss the possibility with those advocating this approach.
Implementing Attacker-Centric Modeling
Security experts may be able to use various types of attacker lists to find threats against a system. When doing so, it’s easy to find yourself arguing about the resources or capabilities of such an archetype, and needing to flesh them out. For example, what if your terrorist is state-sponsored, and has access to government labs? These questions make the attacker-centric approach start to resemble “personas,” which are often used to help think about human interface issues. There’s a spectrum of detail in attacker models, from simple lists to data-derived personas, and examples of each are given in Appendix C, “Attacker Lists.” That appendix may help security experts and will help anyone who wants to try attacker-centric modeling and learn faster than if they have to start by creating a list.

Given a list of attackers, it’s possible to use the list to provide some structure to a brainstorming approach. Some security experts use attacker lists as a way to help elicit the knowledge they’ve accumulated as they’ve become experts. Attacker-driven approaches are also likely to bring up possibilities that are human-centered. For example, when thinking about what a spy would do, it may be more natural (and fun) to think about them seducing your sysadmin or corrupting a janitor, rather than think about technical attacks. Worse, it will probably be challenging to think about what those human attacks mean for your system’s security.
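If you do experiment with this approach, an attacker list can be as simple as a table of archetypes, resources, and goals used to prompt questions. The entries below are illustrative inventions, not drawn from Appendix C:

```python
# Illustrative attacker archetypes (hypothetical; see Appendix C for
# fuller lists and data-derived personas). Used to structure a
# brainstorm: for each attacker, ask what they want and which entry
# points they could plausibly reach.
attackers = [
    {"name": "opportunistic thief", "resources": "low",
     "wants": "anything resellable"},
    {"name": "malicious insider",   "resources": "medium",
     "wants": "data they can access but shouldn't take"},
    {"name": "state-sponsored spy", "resources": "high",
     "wants": "long-term covert access"},
]

for a in attackers:
    print(f"What could a {a['name']} ({a['resources']} resources) "
          f"do to obtain: {a['wants']}?")
```

Even so, as the next sections argue, such prompts structure recall more than they produce reproducible results.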
Where Attackers Can Help You
Talking about human threat agents can help make the threats real. That is, it’s sometimes tough to understand how someone could tamper with a configuration file, or replace client software to get around security checks. Especially when dealing with management or product teams who “just want to ship,” it’s helpful to be able to explain who might attack them and why. There’s real value in this, but it’s not a sufficient argument for centering your approach on those threat agents; you can add that information at a later stage. (The risk associated with talking about attackers is the claim that “no one would ever do that.” Attempting to humanize a risk by adding an actor can exacerbate this, especially if you add a type of actor who someone thinks “wouldn’t be interested in us.”)

You were promised an example, and the spies stole it. More seriously, carefully walking through the attacker lists and personas in Appendix C likely doesn’t help you (or the author) figure out what they might want to do to Acme/SQL, and so the example is left empty to avoid false hope.
Perspective on Attacker-Centric Modeling
Helping security experts structure and recall information is nice, but doesn’t lead to reproducible results. More importantly, attacker lists or even personas are not enough structure for most people to figure out what those people will do. Engineers may subconsciously project their own biases or approaches into what an attacker might do. Given that the attacker has his own motivations, skills, background, and perspective (and possibly organizational priorities), avoiding such projection is tricky.

In my experience, this combination of issues makes attacker-centric approaches less effective than other approaches. Therefore, I recommend against using attackers as the center of your threat modeling process.
Focusing on Software
Good news! You’ve officially reached the “best” structured threat modeling approach. Congrats! Read on to learn about software-centered threat modeling, why it’s the most helpful and effective approach, and how to do it.

Software-centric models are models that focus on the software being built or a system being deployed. Some software projects have documented models of various sorts, such as architecture, UML diagrams, or APIs. Other projects don’t bother with such documentation and instead rely on implicit models. Having large project teams draw on a whiteboard to explain how the software fits together can be a surprisingly useful and contentious activity. Understandings differ, especially on large projects that have been running for a while, but finding where those understandings differ can be helpful in and of itself because it offers a focal point where threats are unlikely to be blocked. (“I thought you were validating that for a SQL call!”)

The same complexity applies to any project that is larger than a few people or has been running longer than a few years. Projects accumulate complexity, which makes many aspects of development harder, including security. Software-centric threat modeling can have a useful side effect of exposing this accumulated complexity.
The security value of this common understanding can also be
substantial,
even before you get to looking for threats. In one project it
turned out that a
library on a trust boundary had a really good threat model, but
unrealistic
assumptions had been made about what the components behind
it were doing.
The work to create a comprehensive model led to an explicit list
of common
assumptions that could and could not be made. The
comprehensive model and
resultant understanding led to a substantial improvement in the
security of
those components.
NOTE As complexity grows, so will the assumptions that are made, and such lists are never complete. They grow as experience requires and feedback loops allow.
Threat Modeling Different Types of Software
The threat discovery approaches covered in Part II can be applied to models of
all sorts of software. They can be applied to software you’re
building for others
to download and install, as well as to software you’re building
into a larger
operational system. The software they can be applied to is
nearly endless, and
is not dependent on the business model or deployment model
associated with
the software.
Even though software no longer comes in boxes sold on store
shelves, the
term boxed software is a convenient label for a category. That category is all the software whose architecture is definable, because there's a clear edge to what is the software: It's everything in the box (installer, application download, or open source repository). This edge can be contrasted with the deployed systems that organizations develop and change over time.
NOTE You may be concerned that the techniques in this book focus on either boxed software or deployed systems, and that the one you're concerned about isn't covered. In the interests of space, the examples and discussion only cover both when there's a clear difference and reason. That's because the recommended ways to model software will work for both with only a few exceptions.
The boundary between boxed software models and network
models gets
blurrier every year. One important difference is that the network
models tend
to include more of the infrastructural components such as
routers, switches, and
data circuits. Trust boundaries are often operationalized by
these components,
or by whatever group operates the network, the platforms, or the
applications.
Chapter 2 ■ Strategies for Threat Modeling 43
Data flow models (which you met in Chapter 1, "Dive In and Threat Model!" and which you'll learn more about in the next section) are usually a good choice
usually a good choice
for both boxed software and operational models. Some large
data center operators
have provided threat models to teams, showing how the data
center is laid out.
The product group can then overlay its models “on top” of that
to align with the
appropriate security controls that they’ll get from operations.
When you’re using
someone else’s data center, you may have discussions about
their infrastructure
choices that make it easy to derive a model, or you might have
to assume the worst.
Perspective on Software-Centric Modeling
I am fond of software-centric approaches because you should
expect software
developers to understand the software they’re developing.
Indeed, there is
nothing else you should expect them to understand better. That
makes software
an ideal place to start the threat-modeling tasks in which you
ask developers
to participate. Almost all software development is done with
software models
that are good enough for the team’s purposes. Sometimes they
require work to
make them good enough for effective threat modeling.
In contrast, you can merely hope that developers understand the
business or
its assets. You may aspire to them understanding the people
who will attack their
product or system. But these are hopes and aspirations, rather
than reasonable
expectations. To the extent that your threat modeling strategy
depends on these
hopes and aspirations, you’re adding places where it can fail.
The remainder of this chapter is about modeling your software in ways that help you find threats, and as such enabling software-centric modeling. (The methods for finding these threats are covered in the rest of Part II.)
Models of Software
Making an explicit model of your software helps you look for
threats without
getting bogged down in the many details that are required to
make the software
function properly. Diagrams are a natural way to model
software.
As you learned in Chapter 1, whiteboard diagrams are an extremely effective way to start threat modeling, and they may be sufficient for you. However, as a system hits a certain level of complexity, drawing and redrawing on whiteboards becomes infeasible. At that point, you need to either simplify the system or bring in a computerized approach.
In this section, you’ll learn about the various types of diagrams,
how they
can be adapted for use in threat modeling, and how to handle
the complexities
of larger systems. You’ll also learn more detail about trust
boundaries, effective
labeling, and how to validate your diagrams.
Types of Diagrams
There are many ways to diagram, and different diagrams will help in different circumstances. The types of diagrams you'll encounter most frequently are probably data flow diagrams (DFDs). However, you may also see UML, swim lane diagrams, and state diagrams. You can think of these diagrams as Lego blocks, looking them over to see which best fits whatever you're building. Each diagram type here can be used with the models of threats in Part II.
The goal of all these diagrams is to communicate how the
system works,
so that everyone involved in threat modeling has the same
understanding. If
you can’t agree on how to draw how the software works, then in
the process of
getting to agreement, you’re highly likely to discover
misunderstandings about
the security of the system. Therefore, use the diagram type that
helps you have
a good conversation and develop a shared understanding.
Data Flow Diagrams
Data flow models are often ideal for threat modeling; problems tend to follow the data flow, not the control flow. Data flow models more commonly exist for network or architected systems than software products, but they can be created for either.

Data flow diagrams are used so frequently they are sometimes called "threat model diagrams." As laid out by Larry Constantine in 1967, DFDs consist of numbered elements (data stores and processes) connected by data flows, interacting with external entities (those outside the developer's or the organization's control).
The data flows that give DFDs their name almost always flow two ways, with exceptions such as radio broadcasts or UDP data sent off into the Ethernet. Despite that, flows are usually represented using one-way arrows, as the threats and their impact are generally not symmetric. That is, if data flowing to a web server is read, it might reveal passwords or other data, whereas a data flow from the web server might reveal your bank balance. This diagramming convention doesn't help clarify channel security versus message security. (The channel might be something like SMTP, with messages being e-mail messages.) Swim lane diagrams may be more appropriate as a model if this channel/message distinction is important. (Swim lane diagrams are described in the eponymous subsection later in this chapter.)
The main elements of a data flow diagram are shown in Table 2-1.
Table 2-1: Elements of a Data Flow Diagram

Process
  Appearance: Rounded rectangle, circle, or concentric circles
  Meaning: Any running code
  Examples: Code written in C, C#, Python, or PHP

Data flow
  Appearance: Arrow
  Meaning: Communication between processes, or between processes and data stores
  Examples: Network connections, HTTP, RPC, LPC

Data store
  Appearance: Two parallel lines with a label between them
  Meaning: Things that store data
  Examples: Files, databases, the Windows Registry, shared memory segments

External entity
  Appearance: Rectangle with sharp corners
  Meaning: People, or code outside your control
  Examples: Your customer, Microsoft.com
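If you want a machine-readable model alongside the drawing, Table 2-1's four element types map naturally onto data structures. Here is a minimal sketch in Python; the class names and the sample elements are illustrative, not from the book:

```python
# The four DFD element types from Table 2-1, as plain Python classes.
from dataclasses import dataclass

@dataclass
class Element:
    name: str

@dataclass
class Process(Element):        # rounded rectangle: any running code
    pass

@dataclass
class DataStore(Element):      # two parallel lines: files, databases, registry
    pass

@dataclass
class ExternalEntity(Element): # sharp-cornered rectangle: people or code outside your control
    pass

@dataclass
class DataFlow:                # arrow: communication between elements
    source: Element
    sink: Element
    label: str

# A fragment in the spirit of the classic model in Figure 2-3:
web_clients = ExternalEntity("Web Clients")
front_ends = Process("Front End(s)")
database = DataStore("Database")

flows = [
    DataFlow(web_clients, front_ends, "queries"),
    DataFlow(front_ends, database, "SQL"),
]

for f in flows:
    print(f"{f.source.name} -> {f.sink.name}: {f.label}")
```

A model in this form can be diffed, reviewed, and checked mechanically, which becomes useful once diagrams outgrow the whiteboard.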
Figure 2-3 shows a classic DFD based on the elements from Table 2-1; however, it's possible to make these models more usable. Figure 2-4 shows this same model with a few changes, which you can use as an example for improving your own models.
Figure 2-3: A classic DFD model, with numbered elements: web clients (1), SQL clients (2), front end(s) (3), database (4), DB admin (5), DBA (human) (6), DB users (7), log analysis (8), data (9), management (10), and logs (11)
Figure 2-4: A modern DFD model (previously shown as Figure 2-1), with Web Clients and SQL Clients as external entities; the Acme front end(s), database, and log analysis processes; the data, management, and logs stores; the DB admin, DBA (human), and DB users (human); and labeled trust boundaries (the original SQL account and the DB cluster)
The following list explains the changes made from classic DFDs to more modern ones:
■ The processes are rounded rectangles, which contain text more efficiently than circles.
■ Straight lines are used, rather than curved, because straight lines are easier to follow, and you can fit more in larger diagrams.
Historically, many descriptions of data flow diagrams contained
both
“process” elements and “complex process” elements. A process
was depicted
as a circle, a complex process as two concentric circles. It isn’t
entirely clear,
however, when to use a normal process versus a complex one.
One possible
rule is that anything that has a subdiagram should be a complex
process. That
seems like a decent rule, if (ahem) a bit circular.
DFDs can be used for things other than software products. For
example,
Figure 2-5 shows a sample operational network in a DFD. This
is a typical model
for a small to mid-sized corporate network, with a
representative sampling
of systems and departments shown. It is discussed in depth in
Appendix E,
“Case Studies.”
Figure 2-5: An operational network model (the Acme corporate network: payroll, HR, HR management, directory, sales/CRM, operations, production, desktop and mobile clients, e-mail and intranet servers, and development servers, connected to the Internet)
UML
UML is an abbreviation for Unified Modeling Language. If you use UML in your software development process, it's likely that you can adapt UML diagrams for threat modeling, rather than redrawing them. The most important way to adapt UML for threat modeling diagrams is the addition of trust boundaries.
UML is fairly complex. For example, the Visio stencils for UML offer roughly 80 symbols, compared to six for DFDs. This complexity brings a good deal of nuance and expressiveness as people draw structure diagrams, behavior diagrams, and interaction diagrams. If anyone involved in the threat modeling isn't up on all the UML symbols, or if there's misunderstanding about what those symbols mean, then the diagram's effectiveness as a tool is greatly diminished. In theory, anyone who's confused can just ask, but that requires them to know they're confused (they might assume that the symbol for fish
excludes sharks). It also requires a willingness to expose one’s
ignorance by
asking a “simple” question. It’s probably easier for a team
that’s invested in
UML to add trust boundaries to those diagrams than to create
new diagrams
just for threat modeling.
Swim Lane Diagrams
Swim lane diagrams are a common way to represent flows between various participants. They're drawn using long lines, each representing participants in a protocol, with each participant getting a line. Each lane edge is labeled to identify the participant; each message is represented by a line between participants; and time is represented by flow down the diagram lanes. The diagrams end up looking a bit like swim lanes, thus the name. Messages should be labeled with their contents; or if the contents are complex, it may make more sense to have a diagram key that abstracts out some details. Computation done by the parties or state should be noted along that participant's line. Generally, participants in such protocols are entities like computers; and as such, swim lane diagrams usually have implicit trust boundaries between each participant. Cryptographer and protocol designer Carl Ellison has extended swim lanes to include the human participants as a way to structure discussion of what people are expected to know and do. He calls this extension ceremonies, which is discussed in more detail in Chapter 15, "Human Factors and Usability."
A sample swim lane diagram is shown in Figure 2-6.

Figure 2-6: Swim lane diagram (showing the start of a TCP connection: SYN, SYN-ACK, and ACK, followed by data, between client and server lanes)
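The handshake in Figure 2-6 can also be written down as a simple ordered message list, with time flowing down the list just as it flows down the lanes. A hypothetical sketch (the rendering is crude, but the data structure is the point):

```python
# Each message in a swim lane diagram is (sender, receiver, label).
handshake = [
    ("Client", "Server", "SYN"),
    ("Server", "Client", "SYN-ACK"),
    ("Client", "Server", "ACK"),
    ("Client", "Server", "Data"),
]

# Render a crude textual swim lane: one arrow per message, time flowing down.
for sender, receiver, label in handshake:
    arrow = "--->" if sender == "Client" else "<---"
    print(f"Client {arrow} Server  {label}")
```

Because each participant is usually a separate machine, every line in such a list implicitly crosses a trust boundary, which is what makes this form handy for threat modeling.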
State Diagrams
State diagrams represent the various states a system can be in, and the transitions between those states. A computer system is modeled as a machine with state, memory, and rules for moving from one state to another, based on the valid messages it receives, and the data in its memory. (The computer should, of course, test the messages it receives for validity according to some rules.) Each box is labeled with a state, and the lines between them are labeled with the conditions that cause the state transition. You can use state diagrams in threat modeling by checking whether each transition is managed in accordance with the appropriate security validations.
A very simple state machine for a door is shown in Figure 2-7 (derived from Wikipedia). The door has three states: opened, closed, and locked. Each state is entered by a transition. The "deadbolt" system is much easier to draw than locks on the knob, which can be locked from either state, creating a more complex diagram and user experience. Obviously, state diagrams can become complex quickly. You could imagine a more complex state diagram that includes "ajar," a state that can result from either open or closed. (I started drawing that but had trouble deciding on labels. Obviously, doors that can be ajar are poorly specified and should not be deployed.) You don't want to make architectural decisions just to make modeling easier, but often simple models are easier to work with, and reflect better engineering.
Figure 2-7: A state machine diagram (opened, closed, and locked states; transitions labeled open door, close door, lock deadbolt, and unlock deadbolt)
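The door diagram translates directly into a transition table, and the threat modeling question, whether each transition is managed with the appropriate validations, becomes a mechanical check that illegal transitions are rejected. A sketch, with state and event names taken from the figure (the code itself is illustrative, not from the book):

```python
# The transition table IS the diagram: states are keys, and each
# labeled transition names the state it leads to.
TRANSITIONS = {
    "opened": {"close door": "closed"},
    "closed": {"open door": "opened", "lock deadbolt": "locked"},
    "locked": {"unlock deadbolt": "closed"},
}

def step(state: str, event: str) -> str:
    """Apply one transition, rejecting anything the diagram doesn't allow."""
    allowed = TRANSITIONS[state]
    if event not in allowed:
        raise ValueError(f"illegal transition {event!r} from {state!r}")
    return allowed[event]

# Walk the door around the diagram and back to its starting state.
state = "opened"
for event in ["close door", "lock deadbolt", "unlock deadbolt", "open door"]:
    state = step(state, event)
print(state)  # back to "opened"
```

Note that "lock deadbolt" from "opened" raises an error: the model itself enforces that you can't lock a door that's standing open, which is exactly the kind of guard you'd look for at each transition.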
Trust Boundaries
As you saw in Chapter 1, a trust boundary is anyplace where
various principals
come together—that is, where entities with different privileges
interact.
Drawing Boundaries
After a software model has been drawn, there are two ways to
add boundaries:
You can add the boundaries you know and look for more, or you
can enumerate
principals and look for boundaries. To start from boundaries, add any sort of enforced trust boundary you can. Boundaries between Unix UIDs, Windows sessions, machines, network segments, and so on should be drawn in as boxes, and the principal inside each box should be shown with a label.
To start from principals, begin from one end or the other of the
privilege
spectrum (often that’s root/admin or anonymous Internet users),
and then add
boundaries each time they talk to “someone else.”
You can always add at least one boundary, as all computation takes place in some context. (So you might criticize Figure 2-1 for showing Web Clients and SQL Clients without an identified context.)
If you don’t see where to draw trust boundaries of any sort,
your diagram
may be detailed as everything is inside a single trust boundary,
or you may
be missing boundaries. Ask yourself two questions. First, does
everything in
the system have the same level of privilege and access to
everything else on
the system? Second, is everything your software communicates
with inside
that same boundary? If either of these answers are a no, then
you should now
have clarifi ed either a missing boundary or a missing element
in the diagram,
or both. If both are yes, then you should draw a single trust
boundary around
everything, and move on to other development activities. (This
state is unlikely
except when every part of a development team has to create a
software model.
That “bottom up” approach is discussed in more detail in
Chapter 7, “Processing
and Managing Threats.”)
A lot of writing on threat modeling claims that trust boundaries should only cross data flows. This is useful advice for the most detailed level of your model. If a trust boundary crosses over a data store (that is, a database), that might indicate that there are different tables or stored procedures with different trust levels. If a boundary crosses over a given host, it may reflect that members of, for example, the group "software installers," have different rights from the "web content updaters." If you find a trust boundary crossing an element of a diagram other than a data flow, either break that element into two (in the model, in reality, or both), or draw a subdiagram to show them separated into multiple entities. What enables good threat models is clarity about what boundaries exist and how those boundaries are to be protected. Contrariwise, a lack of clarity will inhibit the creation of good models.
Using Boundaries
Threats tend to cluster around trust boundaries. This may seem
obvious: The
trust boundaries delineate the attack surface between principals.
This leads some
to expect that threats appear only between the principals on the
boundary, or
only matter on the trust boundaries. That expectation is
sometimes incorrect. To
see why, consider a web server performing some complex order
processing. For
example, imagine assembling a computer at Dell’s online store
where thousands
of parts might be added, but only a subset of those have been
tested and are
on offer. A model of that website might be constructed as shown
in Figure 2-8.
Figure 2-8: Trust boundaries in a web server (a web browser talking through a TCP/IP stack to a web server, behind which sit sales, order processing, and marketing components, with kernel, platform, and application layers marked)
The web server in Figure 2-8 is clearly at risk of attack from the web browser, even though it talks through a TCP/IP stack that it presumably trusts. Similarly, the sales module is at risk; plus an attacker might be able to insert random part numbers into the HTML post in which the data is checked in an order processing module. Even though there's no trust boundary between the sales module and the order processing module, and even though data might be checked at three boundaries, the threats still follow the data flows. The client is shown simply as a web browser because the client is an external entity. Of course, there are many other components around that web browser, but you can't do anything about threats to them, so why model them?

Therefore, it is more accurate to say that threats tend to cluster around trust boundaries and complex parsing, but may appear anywhere that information is under the control of an attacker.
What to Include in a Diagram
So what should be in your diagram? Some rules of thumb
include the following:
■ Show the events that drive the system.
■ Show the processes that are driven.
■ Determine what responses each process will generate and send.
■ Identify data sources for each request and response.
■ Identify the recipient of each response.
■ Ignore the inner workings, focus on scope.
■ Ask if something will help you think about what goes wrong, or what will help you find threats.
This list is derived from Howard and LeBlanc's Writing Secure Code, Second Edition (Microsoft Press, 2009).
Complex Diagrams
When you’re building complex systems, you may end up with
complex diagrams.
Systems do become complex, and that complexity can make
using the diagrams
(or understanding the full system) diffi cult.
One rule of thumb is "don't draw an eye chart." It is important to balance all the details that a real software project can entail with what you include in your actual model. As mentioned in Chapter 1, one technique you can use to help you do this is a subdiagram showing the details of one particular area. You should look for ways to break out highly detailed areas that make sense for your project. For example, if you have one very complex process, maybe everything inside it is one diagram, and everything outside it is another. If you have a dispatcher or queuing system, that might be a good place to break things up. Maybe your databases or the failover system is a good place to split. Maybe there are a few elements that really need more detail. All of these are good ways to break things out.
One helpful approach to subdiagrams is to ensure that there are
not more
subdiagrams than there are processes. Another approach is to
use different
diagrams to show different scenarios.
Sometimes it’s also useful to simplify diagrams. When two
elements of the
diagram are equivalent from a security perspective, you can
combine them.
Equivalent means inside the same trust boundary, relying on the
same technol-
ogy, and handling the same sort of data.
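That equivalence test can be stated as a small predicate over exactly those three attributes. A hypothetical sketch (the field names and sample nodes are invented for illustration):

```python
# Two diagram elements are security-equivalent when they share a trust
# boundary, a technology, and a kind of data; equivalent elements can
# be drawn as one to simplify the diagram.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str
    boundary: str
    technology: str
    data_kind: str

def equivalent(a: Node, b: Node) -> bool:
    return (a.boundary, a.technology, a.data_kind) == (b.boundary, b.technology, b.data_kind)

web1 = Node("web-1", "dmz", "nginx", "public pages")
web2 = Node("web-2", "dmz", "nginx", "public pages")
db = Node("db-1", "internal", "postgres", "customer records")

print(equivalent(web1, web2))  # the two web servers can be drawn as one
print(equivalent(web1, db))    # the database cannot be merged with them
```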
The key thing to remember is that the diagram is intended to
help ensure that
you understand and can discuss the system. Remember the quote
that opens
this book: “All models are wrong, some models are useful.”
Therefore, when
you’re adding additional diagrams, don’t ask “is this the right
way to do it?”
Instead, ask “does this help us think about what might go
wrong?”
Labels in Diagrams
Labels in diagrams should be short, descriptive, and meaningful. Because you want to use these names to tell stories, start with the outsiders who are driving the system; those are nouns, such as "customer" or "vibration sensor." They communicate information via data flows, which are nouns or noun phrases, such as "books to buy" or "vibration frequency." Data flows should almost never be labeled using verbs. Even though it can be hard, you should work to find more descriptive labels than "read" or "write," which are implied by the direction of the arrows. In other words, data flows communicate their information (nouns) to processes, which are active: verbs, verb phrases, or verb/noun chains.
Many people find it helpful to label data flows with sequence numbers to help keep track of what happens in what order. It can also be helpful to number elements within a diagram to help with completeness or communication. You can number each thing (data flow 1, process 1, et cetera) or you can have a single count across the diagram, with external entity 1 talking over data flows 2 and 3 to process 4. Generally, using a single counter for everything is less confusing. You can say "number 1" rather than "data flow 1, not process 1."
Color in Diagrams
Color can add substantial amounts of information without appearing overwhelming. For example, Microsoft's Peter Torr uses green for trusted, red for untrusted, and blue for what's being modeled (Torr, 2005).
Relying on color alone
can be problematic. Roughly one in twelve people suffer from
color blindness,
the most common being red/green confusion (Heitgerd, 2008).
The result is that
even with a color printer, a substantial number of people are
unable to easily
access this critical information. Box boundaries with text labels
address both
problems. With box trust boundaries, there is no reason not to
use color.
Entry Points
One early approach to threat modeling was the "asset/entry point" approach, which can be effective at modeling operational systems. This approach can be partially broken down into the following steps:
1. Draw a DFD.
2. Find the points where data flows cross trust boundaries.
3. Label those intersections as "entry points."
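Step 2 is mechanical enough to sketch in code: if you record which trust boundary each element sits in, the entry points are simply the flows whose endpoints are in different boundaries. A hypothetical sketch, with element and boundary names in the spirit of the Acme example (none of this is from the book):

```python
# Which trust boundary does each element sit in?
boundary = {
    "Web Clients": "internet",
    "SQL Clients": "internet",
    "Front End(s)": "acme",
    "Database": "acme",
}

# The data flows between elements, as (source, destination) pairs.
flows = [
    ("Web Clients", "Front End(s)"),
    ("SQL Clients", "Front End(s)"),
    ("Front End(s)", "Database"),
]

# Entry points: flows that cross from one boundary into another.
entry_points = [
    (src, dst) for src, dst in flows
    if boundary[src] != boundary[dst]
]
print(entry_points)  # only the two client-to-front-end flows cross a boundary
```

The front-end-to-database flow stays inside the same boundary, so it is not an entry point under this rule, though as discussed earlier, threats can still follow it.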
NOTE There were other steps and variations in the approaches, but we as a community have learned a lot since then, and a full explanation would be tedious and distracting.
In the Acme/SQL example (as shown in Figure 2-1) the entry points are the "front end(s)" and the "database admin" console process. "Database" would also be an entry point, because nominally, other software could alter data in the databases and use failures in the parsers to gain control of the system. For the financials, the entry points shown are "external reporting," "financial planning and analysis," "core finance software," "sales" and "accounts receivable."
Validating Diagrams
Validating that a diagram is a good model of your software has two main goals: ensuring accuracy and aspiring to goodness. The first is easier, as you can ask whether it reflects reality. If important components are missing, or the diagram shows things that are not being built, then you can see that it doesn't reflect reality. If important data flows are missing, or nonexistent flows are shown, then it doesn't reflect reality. If you can't tell a story about the software without editing the diagram, then it's not accurate.
Of course, there’s that word “important” in there, which leads
to the second
criterion: aspiring to goodness. What’s important is what helps
you fi nd issues.
Finding issues is a matter of asking questions like “does this
element have any
security impact?” and “are there things that happen sometimes
or in special
circumstances?” Knowing the answers to these questions is a
matter of expe-
rience, just like many aspects of building software. A good and
experienced
architect can quickly assess requirements and address them, and
a good threat
modeler can quickly see which elements will be important. A
big part of gain-
ing that experience is practice. The structured approaches to fi
nding threats in
Part II, are designed to help you identify which elements are
important.
How To Validate Diagrams
To best validate your diagrams, bring together the people who
understand the
system best. Someone should stand in front of the diagram and
walk through
the important use cases, ensuring the following:
■ They can talk through stories about the diagram.
■ They don’t need to make changes to the diagram in its
current form.
■ They don’t need to refer to things not present in the diagram.
The following rules of thumb will be useful as you update your diagram and gain experience:
■ Anytime you find someone saying "sometimes" or "also" you should consider adding more detail to break out the various cases. For example, if you say, "Sometimes we connect to this web service via SSL, and sometimes we fall back to HTTP," you should draw both of those data flows (and consider whether an attacker can make you fall back like that).
■ Anytime you need more detail to explain security-relevant behavior, draw it in.
■ Each trust boundary box should have a label inside it.
■ Anywhere you disagreed over the design or construction of the system, draw in those details. This is an important step toward ensuring that everyone ended that discussion on the same page. It's especially important for larger teams where not everyone is in the room for the threat model discussions. If anyone sees a diagram that contradicts their thinking, they can either accept it or challenge the assumptions; but either way, a good clear diagram can help get everyone on the same page.
■ Don't have data sinks: You write the data for a reason. Show who uses it.
■ Data can't move itself from one data store to another: Show the process that moves it.
■ All ways data can arrive should be shown.
■ If there are mechanisms for controlling data flows (such as firewalls or permissions) they should be shown.
■ All processes must have at least one entry data flow and one exit data flow.
■ As discussed earlier in the chapter, don't draw an eye chart.
■ Diagrams should be visible on a printable page.
NOTE Writing Secure Code author David LeBlanc notes that "A process without input is a miracle, while one without output is a black hole. Either you're missing something, or have mistaken a process for people, who are allowed to be black holes or miracles."
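Several of these rules of thumb can be checked mechanically over a list of flows. The sketch below (the tiny model format and the function name are invented for illustration) flags miracle processes, black-hole processes, and data sinks:

```python
# Lint a diagram: every process needs an entry and an exit flow,
# and every data store that is written should also be read.
def lint(processes, stores, flows):
    problems = []
    for p in processes:
        if not any(dst == p for _, dst in flows):
            problems.append(f"process {p!r} has no entry data flow (a miracle)")
        if not any(src == p for src, _ in flows):
            problems.append(f"process {p!r} has no exit data flow (a black hole)")
    for s in stores:
        written = any(dst == s for _, dst in flows)
        read = any(src == s for src, _ in flows)
        if written and not read:
            problems.append(f"data store {s!r} is a data sink: show who uses it")
    return problems

# A deliberately broken fragment: logs are written but never read,
# and nothing flows into the front end.
flows = [("Front End(s)", "Logs")]
for problem in lint(["Front End(s)"], ["Logs"], flows):
    print(problem)
```

Checks like these won't find threats by themselves, but they catch the diagram mistakes that would otherwise hide threats.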
When to Validate Diagrams
For software products, there are two main times to validate
diagrams: when you
create them and when you’re getting ready to ship a beta.
There’s also a third
triggering event (which is less frequent), which is if you add a
security boundary.
For operational software diagrams, you also validate when you
create them,
and then again using a sensible balance between effort and up-
to-dateness. That
sensible balance will vary according to the maturity of a system,
its scale, how
tightly the components are coupled, the cadence of rollouts, and
the nature of
new rollouts. Here are a few guidelines:
■ Newer systems will experience more diagram changes than
mature ones.
■ Larger systems will experience more diagram changes than
smaller ones.
■ Tightly coupled systems will experience more diagram
changes than
loosely coupled systems.
■ Systems that roll out changes quickly will likely experience
fewer diagram
changes per rollout.
■ Rollouts or sprints focused on refactoring or paying down
technical
debt will likely see more diagram changes. In either case, create
an
appropriate tracking item to ensure that you recheck your
diagrams
at a good time. The appropriate tracking item is whatever you
use to
gate releases or rollouts, such as bugs, task management
software,
or checklists. If you have no formal way to gate releases, then you might focus on a clearly defined release process before worrying about rechecking threat models. Describing such a process is beyond the scope of this book.
Summary
There’s more than one way to threat model, and some of the
strategies you can
employ include modeling assets, modeling attackers, or
modeling software.
“What’s your threat model” and brainstorming are good for
security experts,
but they lack structure that less experienced threat modelers
need. There are
more structured approaches to brainstorming, including scenario
analysis,
pre-mortems, movie plotting, and literature reviews, which can
help bring a
little structure, but they’re still not great.
If your threat modeling starts from assets, the multiple overlapping definitions of the term, including things attackers want, things you're protecting, and stepping stones, can trip you up. An asset-centered approach offers no route to figure out what will go wrong with the assets.
Attacker modeling is also attractive, but trying to predict how
another person
will attack is hard, and the approach can invite arguments that
“no one would
do that.” Additionally, human-centered approaches may lead
you to human-
centered threats that can be hard to address.
Software models are focused on what software people understand. The best models are diagrams that help participants understand the software and find threats against it. There are a variety of ways you can diagram your software, and DFDs are the most frequently useful.

Once you have a model of the software, you’ll need a way to find threats against it, and that is the subject of Part II.
Part II: Finding Threats

At the heart of threat modeling are the threats. There are many approaches to finding threats, and they are the subject of Part II. Each has advantages and disadvantages, and different approaches may work in different circumstances. Each of the approaches in this part is like a Lego block. You can substitute one for another in the midst of this second step in the four-step framework and expect to get good results.

Knowing what aspects of security can go wrong is the unique element that makes threat modeling threat modeling, rather than some other form of modeling. The models in this part are abstractions of threats, designed to help you think about these security problems. The more specific models (such as attack libraries) will be more useful to those new to threat modeling, and are less freewheeling. As you become more experienced, the less structured approaches such as STRIDE become more useful.

In this part, you’ll learn about the following approaches to finding threats:

■ Chapter 3: STRIDE covers the STRIDE mnemonic you met in Chapter 1, and its many variants.
■ Chapter 4: Attack Trees are either a way for you to think through threats against your system, or a way to help others structure their thinking about those threats. Both uses of attack trees are covered in this chapter.
■ Chapter 5: Attack Libraries are libraries constructed to track and organize threats. They can be very useful to those new to security or threat modeling.
■ Chapter 6: Privacy Tools covers a collection of tools for finding privacy threats.

Part II focuses on the second question in the four-step framework: What can go wrong? As you’ll recall from Part I, before you start finding threats with any of the techniques in this part, you should first have an idea of scope: Where are you looking for threats? A diagram, such as a data flow diagram discussed in Part I, can help scope the threat modeling session, and thus is an excellent input condition. As you discuss threats, however, you’ll likely find imperfections in the diagram, so it isn’t necessary to “perfect” your diagram before you start finding threats.
Chapter 3: STRIDE

As you learned in Chapter 1, “Dive In and Threat Model!,” STRIDE is an acronym that stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. The STRIDE approach to threat modeling was invented by Loren Kohnfelder and Praerit Garg (Kohnfelder, 1999). This framework and mnemonic was designed to help people developing software identify the types of attacks that software tends to experience.

The method or methods you use to think through threats have many different labels: finding threats, threat enumeration, threat analysis, threat elicitation, threat discovery. Each connotes a slightly different flavor of approach. Do the threats exist in the software or the diagram? Then you’re finding them. Do they exist in the minds of the people doing the analysis? Then you’re doing analysis or elicitation. No single description stands out as always or clearly preferable, but this book generally talks about finding threats as a superset of all these ideas.

Using STRIDE is more like an elicitation technique, with an expectation that you or your team understand the framework and know how to use it. If you’re not familiar with STRIDE, the extensive tables and examples are designed to teach you how to use it to discover threats.
This chapter explains what STRIDE is and why it’s useful, including sections covering each component of the STRIDE mnemonic. Each threat-specific section provides a deeper explanation of the threat, a detailed table of examples for that threat, and then a discussion of the examples. The tables and examples are designed to teach you how to use STRIDE to discover threats. You’ll also learn about approaches built on STRIDE: STRIDE-per-element, STRIDE-per-interaction, and DESIST. The other approach built on STRIDE, the Elevation of Privilege game, is covered in Chapters 1, “Dive In and Threat Model!” and 12, “Requirements Cookbook,” and Appendix C, “Attacker Lists.”
Understanding STRIDE and Why It’s Useful

The STRIDE threats are the opposite of some of the properties you would like your system to have: authenticity, integrity, non-repudiation, confidentiality, availability, and authorization. Table 3-1 shows the STRIDE threats, the corresponding property that you’d like to maintain, a definition, the most typical victims, and examples.

Table 3-1: The STRIDE Threats

Spoofing
  Property violated: Authentication
  Definition: Pretending to be something or someone other than yourself
  Typical victims: Processes, external entities, people
  Examples: Falsely claiming to be Acme.com, winsock.dll, Barack Obama, a police officer, or the Nigerian Anti-Fraud Group

Tampering
  Property violated: Integrity
  Definition: Modifying something on disk, on a network, or in memory
  Typical victims: Data stores, data flows, processes
  Examples: Changing a spreadsheet, the binary of an important program, or the contents of a database on disk; modifying, adding, or removing packets over a network, either local or far across the Internet, wired or wireless; changing either the data a program is using or the running program itself

Repudiation
  Property violated: Non-repudiation
  Definition: Claiming that you didn’t do something, or were not responsible. Repudiation can be honest or false, and the key question for system designers is, what evidence do you have?
  Typical victims: Process
  Examples: Process or system: “I didn’t hit the big red button” or “I didn’t order that Ferrari.” Note that repudiation is somewhat the odd-threat-out here; it transcends the technical nature of the other threats to the business layer.

Information Disclosure
  Property violated: Confidentiality
  Definition: Providing information to someone not authorized to see it
  Typical victims: Processes, data stores, data flows
  Examples: The most obvious example is allowing access to files, e-mail, or databases, but information disclosure can also involve filenames (“Termination for John Doe.docx”), packets on a network, or the contents of program memory

Denial of Service
  Property violated: Availability
  Definition: Absorbing resources needed to provide service
  Typical victims: Processes, data stores, data flows
  Examples: A program that can be tricked into using up all its memory, a file that fills up the disk, or so many network connections that real traffic can’t get through

Elevation of Privilege
  Property violated: Authorization
  Definition: Allowing someone to do something they’re not authorized to do
  Typical victims: Process
  Examples: Allowing a normal user to execute code as admin; allowing a remote person without any privileges to run code
In Table 3-1, “typical victims” are those most likely to be victimized. For example, you can spoof a program by starting a program of the same name, or by putting a program with that name on disk. You can spoof an endpoint on the same machine by squatting or splicing. You can spoof users by capturing their authentication info by spoofing a site, by assuming they reuse credentials across sites, by brute forcing (online or off), or by elevating privilege on their machine. You can also tamper with the authentication database and then spoof with falsified credentials.

Note that as you’re using STRIDE to look for threats, you’re simply enumerating the things that might go wrong. The exact mechanisms for how it can go wrong are something you can develop later. (In practice, this can be easy or it can be very challenging. There might be defenses in place, and if you say, for example, “Someone could modify the management tables,” someone else can say, “No, they can’t because...”) It can be useful to record those possible attacks, because even if there is a mitigation in place, that mitigation is a testable feature, and you should ensure that you have a test case.
You’ll sometimes hear STRIDE referred to as “STRIDE categories” or “the STRIDE taxonomy.” This framing is not helpful, because STRIDE was not intended as, nor is it generally useful for, categorization. It is easy to find things that are hard to categorize with STRIDE. For example, earlier you learned about tampering with the authentication database and then spoofing. Should you record that as a tampering threat or a spoofing threat? The simple answer is that it doesn’t matter. If you’ve already come up with the attack, why bother putting it in a category? The goal of STRIDE is to help you find attacks. Categorizing them might help you figure out the right defenses, or it may be a waste of effort. Trying to use STRIDE to categorize threats can be frustrating, and those efforts cause some people to dismiss STRIDE, but this is a bit like throwing out the baby with the bathwater.
Spoofing Threats

Spoofing is pretending to be something or someone other than yourself. Table 3-1 includes the examples of claiming to be Acme.com, winsock.dll, Barack Obama, or the Nigerian Anti-Fraud Office. Each of these is an example of a different subcategory of spoofing. The first example, pretending to be Acme.com (or Google.com, etc.), entails spoofing the identity of an entity across a network. There is no mediating authority that takes responsibility for telling you that Acme.com is the site I mean when I write these words. This differs from the second example, as Windows includes a winsock.dll. You should be able to ask the operating system to act as a mediating authority and get you to winsock. If you have your own DLLs, then you need to ensure that you’re opening them with the appropriate path (%installdir%dll); otherwise, someone might substitute one in a working directory, and get your code to do what they want. (Similar issues exist with unix and LD_PATH.) The third example, spoofing Barack Obama, is an instance of pretending to be a specific person. Contrast that with the fourth example, pretending to be the President of the United States or the Nigerian Anti-Fraud Office. In those cases, the attacker is pretending to be in a role. These spoofing threats are laid out in Table 3-2.
Table 3-2: Spoofing Threats

Spoofing a process on the same machine
  - Creates a file before the real process
  - Renaming/linking: creating a Trojan “su” and altering the path
  - Renaming: naming your process “sshd”

Spoofing a file
  - Creates a file in the local directory (this can be a library, executable, or config file)
  - Creates a link and changes it (from the attacker’s perspective, the change should happen between the link being checked and the link being accessed)
  - Creates many files in the expected directory (automation makes it easy to create 10,000 files in /tmp, to fill the space of files called /tmp/”pid.NNNN, or similar)

Spoofing a machine
  - ARP spoofing
  - IP spoofing
  - DNS spoofing (forward or reverse)
  - DNS compromise (compromise TLD, registrar, or DNS operator)
  - IP redirection (at the switch or router level)

Spoofing a person
  - Sets e-mail display name
  - Takes over a real account

Spoofing a role
  - Declares themselves to be that role, sometimes opening a special account with a relevant name
Spoofing a Process or File on the Same Machine

If an attacker creates a file before the real process, then if your code is not careful to create a new file, the attacker may supply data that your code interprets, thinking that your code (or a previous instantiation or thread) wrote that data, and it can be trusted. Similarly, if file permissions on a pipe, local procedure call, and so on, are not managed well, then an attacker can create that endpoint, confusing everything that attempts to use it later.

Spoofing a process or file on a remote machine can work either by creating spoofed files or processes on the expected machine (possibly having taken admin rights) or by pretending to be the expected machine, covered next.
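The pre-created-file variant can often be blunted by refusing to trust anything that already exists at the path. A minimal sketch in Python (the path name is illustrative; the key idea is the O_EXCL flag):

```python
import os
import tempfile

def create_private_file(path: str) -> int:
    """Create a file that must not already exist, readable only by us.

    O_EXCL makes open() fail if an attacker pre-created a file (or a
    symlink) at this path, instead of silently trusting their content.
    """
    flags = os.O_WRONLY | os.O_CREAT | os.O_EXCL
    if hasattr(os, "O_NOFOLLOW"):       # refuse to follow a planted symlink
        flags |= os.O_NOFOLLOW
    return os.open(path, flags, 0o600)  # owner-only permissions from creation

# Usage: fail loudly rather than consume attacker-supplied data.
path = os.path.join(tempfile.mkdtemp(), "state.dat")
fd = create_private_file(path)
os.write(fd, b"trusted state")
os.close(fd)
```

Because the open fails on any pre-existing file, an attacker who guessed the path gains only a denial of service, which is far easier to notice than your code silently trusting their data.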
Spoofing a Machine

Attackers can spoof remote machines at a variety of levels of the network stack. These spoofing attacks can influence your code’s view of the world as a client, server, or peer. They can spoof ARP requests if they’re local, they can spoof IP packets to make it appear that they’re coming from somewhere they are not, and they can spoof DNS packets. DNS spoofing can happen when you do a forward or reverse lookup. An attacker can spoof a DNS reply to a forward query they expect you to make. They can also adjust DNS records for machines they control such that when your code does a reverse lookup (translating IP to FQDN), their DNS server returns a name in a domain that they do not control—for example, claiming that 10.1.2.3 is update.microsoft.com. Of course, once attackers have spoofed a machine, they can either spoof or act as a man-in-the-middle for the processes on that machine. Second-order variants of this threat involve stealing machine authenticators such as cryptographic keys and abusing them as part of a spoofing attack.

Attackers can also spoof at higher layers. For example, phishing attacks involve many acts of spoofing. There’s usually spoofing of e-mail from “your” bank, and spoofing of that bank’s website. When someone falls for that e-mail, clicks the link, and visits the bank, they then enter their credentials, sending them to that spoofed website. The attacker then engages in one last act of spoofing: They log into your bank account and transfer your money to themselves or an accomplice. (It may be one attacker, or it may be a set of attackers, contracting with one another for services rendered.)
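The reverse-lookup trick above is why reverse DNS should be forward-confirmed before it is trusted. A sketch of that check using Python’s standard socket module (results depend on the resolver you are behind, so treat this as illustrative):

```python
import socket
from typing import Optional

def forward_confirmed_reverse_dns(ip: str) -> Optional[str]:
    """Trust a PTR name only if the forward lookup maps back to the IP.

    Whoever controls an address block controls its PTR records, so a
    reverse lookup alone lets 10.1.2.3 claim to be update.microsoft.com.
    Resolving the claimed name and requiring the original address to
    appear in the answers defeats that particular lie.
    """
    try:
        name, _aliases, _addrs = socket.gethostbyaddr(ip)
        forward = {info[4][0] for info in socket.getaddrinfo(name, None)}
    except OSError:
        return None            # no PTR record, or the name doesn't resolve
    return name if ip in forward else None
```

Even a confirmed name only tells you what the domain’s owner asserts; it is no substitute for cryptographic authentication such as TLS.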
Spoofing a Person

Major categories of spoofing people include access to the person’s account and pretending to be them through an alternate account. Phishing is a common way to get access to someone else’s account. However, there’s often little to prevent anyone from setting up an account and pretending to be you. For example, an attacker could set up accounts on sites like LinkedIn, Twitter, or Facebook and pretend to be you, the Adam Shostack who wrote this book, or a rich and deposed prince trying to get their money out of the country.
Tampering Threats

Tampering is modifying something, typically on disk, on a network, or in memory. This can include changing data in a spreadsheet (using either a program such as Excel or another editor), changing a binary or configuration file on disk, or modifying a more complex data structure, such as a database on disk. On a network, packets can be added, modified, or removed. It’s sometimes easier to add packets than to edit them as they fly by, and programs are remarkably bad about handling extra copies of data securely. More examples of tampering are in Table 3-3.
Table 3-3: Tampering Threats

Tampering with a file
  - Modifies a file they own and on which you rely
  - Modifies a file you own
  - Modifies a file on a file server that you own
  - Modifies a file on their file server (loads of fun when you include files from remote domains)
  - Modifies a file on their file server (ever notice how much XML includes remote schemas?)
  - Modifies links or redirects

Tampering with memory
  - Modifies your code (hard to defend against once the attacker is running code as the same user)
  - Modifies data they’ve supplied to your API (pass by value, not by reference, when crossing a trust boundary)

Tampering with a network
  - Redirects the flow of data to their machine (often stage 1 of tampering)
  - Modifies data flowing over the network (even easier and more fun when the network is wireless: WiFi, 3G, et cetera)
  - Enhances spoofing attacks
Tampering with a File

Attackers can modify files wherever they have write permission. When your code has to rely on files others can write, there’s a possibility that the file was written maliciously. While the most obvious form of tampering is on a local disk, there are also plenty of ways to do this when the file is remotely included, like most of the JavaScript on the Internet. The attacker can breach your security by breaching someone else’s site. They can also (because of poor privileges, spoofing, or elevation of privilege) modify files you own. Lastly, they can modify links or redirects of various sorts. Links are often left out of integrity checks. There’s a somewhat subtle variant of this when there are caches between things you control (such as a server) and things you don’t (such as a web browser on the other side of the Internet). For example, cache poisoning attacks insert data into web caches through poor security controls at caches (OWASP, 2009).
Tampering with Memory

Attackers can modify your code if they’re running at the same privilege level. At that point, defense is tricky. If your API handles data by reference (a pattern often chosen for speed), then an attacker can modify it after you perform security checks.
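The pass-by-reference problem is a classic time-of-check/time-of-use flaw, and the pass-by-value advice can be as simple as copying before validating. A hypothetical sketch (the size and content checks stand in for whatever validation your API performs):

```python
def handle_request(shared_buf: bytearray) -> bytes:
    """Snapshot attacker-reachable data before validating it.

    If we validated shared_buf in place, another thread or process with
    access to it could rewrite the bytes between our check and our use
    (a time-of-check/time-of-use flaw). Copying first means the bytes we
    check are exactly the bytes we act on.
    """
    data = bytes(shared_buf)            # private copy: pass by value
    if len(data) > 1024 or b"\x00" in data:
        raise ValueError("rejected input")
    return data                         # immutable; the caller can't change it now
```

The copy costs a little speed, which is exactly the trade the by-reference pattern was making in the other direction.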
Tampering with a Network

Network tampering often involves a variety of tricks to bring the data to the attacker’s machine, where he forwards some data intact and some data modified. However, tricks to bring you the data are not always needed; with radio interfaces like WiFi and Bluetooth, more and more data flows through the air. Many network protocols were designed with the assumption that you needed special hardware to create or read arbitrary packets, and that requirement for special hardware was the defense against tampering (and often spoofing). The rise of software-defined radio (SDR) has silently invalidated the need for special hardware. It is now easy to buy an inexpensive SDR unit that can be programmed to tamper with wireless protocols.
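When you can’t keep attackers off the wire or out of the air, the usual mitigation is to make tampering detectable rather than impossible, by authenticating each message under a shared key. A sketch using Python’s hmac module (the random in-process key stands in for a properly shared and managed key):

```python
import hmac
import hashlib
import os

KEY = os.urandom(32)  # illustration only; real systems share a key out of band

def seal(message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so tampering in transit is detectable."""
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return message + tag

def open_sealed(packet: bytes) -> bytes:
    """Verify the tag in constant time; reject modified or truncated packets."""
    message, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed integrity check")
    return message
```

A MAC detects modification but not replay or reordering; real protocols add sequence numbers, and in practice you would reach for TLS or a similar protocol rather than rolling your own.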
Repudiation Threats

Repudiation is claiming you didn’t do something, or were not responsible for what happened. People can repudiate honestly or deceptively. Given the increasing knowledge often needed to understand the complex world, those honestly repudiating may really be exposing issues in your user experiences or service architectures. Repudiation threats are a bit different from other security threats, as they often appear at the business layer. (That is, above the network layer, such as TCP/IP; above the application layer, such as HTTP/HTML; and where the business logic of buying products would be implemented.)

Repudiation threats are also associated with your logging system and process. If you don’t have logs, don’t retain logs, or can’t analyze logs, repudiation threats are hard to dispute. There is also a class of attacks in which attackers will drop data in the logs to make log analysis tricky. For example, if you display your logs in HTML and the attacker sends </tr> or </html>, your log display needs to treat those as data, not code. More repudiation threats are shown in Table 3-4.
Table 3-4: Repudiation Threats

Repudiating an action
  - Claims to have not clicked (maybe they really did)
  - Claims to have not received (receipt can be strange: does mail being downloaded by your phone mean you’ve read it? Did a network proxy pre-fetch images? Did someone leave a package on the porch?)
  - Claims to have been a fraud victim
  - Uses someone else’s account
  - Uses someone else’s payment instrument without authorization

Attacking the logs
  - Notices you have no logs
  - Puts attacks in the logs to confuse logs, log-reading code, or a person reading the logs
Attacking the Logs

Again, if you don’t have logs, don’t retain logs, or can’t analyze logs, repudiation actions are hard to dispute. So if you aren’t logging, you probably need to start. If you have no log centralization or analysis capability, you probably need that as well. If you don’t properly define what you will be logging, an attacker may be able to break your log analysis system. It can be challenging to work through the layers of log production and analysis to ensure reliability, but if you don’t, it’s easy to have attacks slip through the cracks or inconsistencies.
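For the HTML log-display example above, the defense is to treat every attacker-influenced string as data at the point of display. A minimal sketch:

```python
import html

def log_line(event: str) -> str:
    """Render an attacker-influenced string safely for an HTML log viewer.

    Escaping turns markup such as </tr> or </html> into inert text, so a
    hostile entry can't break the table or smuggle script into the page.
    Control characters are also stripped so one entry can't forge
    additional lines in the display.
    """
    cleaned = "".join(ch for ch in event if ch >= " " or ch == "\t")
    return html.escape(cleaned)
```

The same idea applies to other sinks: escape for whatever context the log is viewed in (HTML, terminal, SQL), not just for the one you thought of first.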
Repudiating an Action

When you’re discussing repudiation, it’s helpful to discuss “someone” rather than “an attacker.” You want to do this because those who repudiate are often not actually attackers, but people who have been failed by technology or process. Maybe they really didn’t click (or didn’t perceive that they clicked). Maybe the spam filter really did eat that message. Maybe UPS didn’t deliver, or maybe UPS delivered by leaving the package on a porch. Maybe someone claims to have been a victim of fraud when they really were not (or maybe someone else in a household used their credit card, with or without their knowledge). Good technological systems that both authenticate and log well can make it easier to handle repudiation issues.
Information Disclosure Threats

Information disclosure is about allowing people to see information they are not authorized to see. Some information disclosure threats are shown in Table 3-5.

Table 3-5: Information Disclosure Threats

Information disclosure against a process
  - Extracts secrets from error messages
  - Reads the error messages, from username/passwords to entire database tables
  - Extracts machine secrets from error cases (can make defenses against memory corruption such as ASLR far less useful)
  - Extracts business/personal secrets from error cases

Information disclosure against data stores
  - Takes advantage of inappropriate or missing ACLs
  - Takes advantage of bad database permissions
  - Finds files protected by obscurity
  - Finds crypto keys on disk (or in memory)
  - Sees interesting information in filenames
  - Reads files as they traverse the network
  - Gets data from logs or temp files
  - Gets data from swap or other temp storage
  - Extracts data by obtaining the device and changing the OS

Information disclosure against a data flow
  - Reads data on the network
  - Redirects traffic to enable reading data on the network
  - Learns secrets by analyzing traffic
  - Learns who’s talking to whom by watching the DNS
  - Learns who’s talking to whom by social network info disclosure
Information Disclosure from a Process

Many instances in which a process will disclose information are those that inform further attacks. A process can do this by leaking memory addresses, extracting secrets from error messages, or extracting design details from error messages. Leaking memory addresses can help bypass ASLR and similar defenses. Leaking secrets might include database connection strings or passwords. Leaking design details might mean exposing anti-fraud rules like “your account is too new to order a diamond ring.”
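A common pattern for keeping error details out of attackers’ hands without losing them entirely is to log the specifics privately and hand the caller an opaque reference. A hypothetical sketch:

```python
import logging
import uuid

logger = logging.getLogger("app")

def safe_error_response(exc: Exception) -> dict:
    """Log the detailed failure privately; show the caller only a reference.

    The client learns nothing about connection strings, table names, or
    stack traces; an operator can correlate the report with the full
    server-side log entry via the opaque incident id.
    """
    incident = uuid.uuid4().hex
    logger.error("incident %s: %r", incident, exc)
    return {"error": "internal error", "incident": incident}
```

The incident id also helps with the repudiation threats discussed earlier, since the user’s report and your log now share a key.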
Information Disclosure from a Data Store

As data stores, well, store data, there’s a profusion of ways they can leak it. The first set of causes are failures to properly use security mechanisms. Not setting permissions appropriately, or hoping that no one will find an obscure file, are common ways in which people fail to use security mechanisms. Cryptographic keys are a special case whereby information disclosure allows additional attacks. Files read from a data store over the network are often readable as they traverse the network.

An additional attack, often overlooked, is data in filenames. If you have a directory named “May 2013 layoffs,” the filename itself, “Termination Letter for Alice.docx,” reveals important information.

There’s also a group of attacks whereby a program emits information into the operating environment. Logs, temp files, swap, or other places can contain data. Usually, the OS will protect data in swap, but for things like crypto keys, you should use OS facilities for preventing those from being swapped out.

Lastly, there is the class of attacks whereby data is extracted from the device using an operating system under the attacker’s control. Most commonly (in 2013), these attacks affect USB keys, but they also apply to CDs, backup tapes, hard drives, or stolen laptops or servers. Hard drives are often decommissioned without full data deletion. (You can address the need to delete data from hard drives by buying a hard drive chipper or smashing machine, and since such machines are awesome, why on earth wouldn’t you?)
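Many of the data-store leaks in Table 3-5 come down to permission bits nobody ever checked. A small Unix-oriented audit sketch (on Windows these mode bits don’t reflect the real ACLs, so treat this as illustrative):

```python
import os
import stat

def world_readable_files(root: str) -> list:
    """Flag regular files under root that any local user can read.

    A quick audit for the 'missing ACL' failures: files whose permission
    bits grant read access to 'other'. lstat() is used so planted
    symlinks are not followed.
    """
    flagged = []
    for dirpath, _dirs, filenames in os.walk(root):
        for name in filenames:
            p = os.path.join(dirpath, name)
            mode = os.lstat(p).st_mode
            if stat.S_ISREG(mode) and mode & stat.S_IROTH:
                flagged.append(p)
    return flagged
```

Running such a check periodically turns “hoping no one finds the obscure file” into a testable property of the data store.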
Information Disclosure from a Data Flow

Data flows are particularly susceptible to information disclosure attacks when information is flowing over a network. However, data flows on a single machine can still be attacked, particularly when the machine is shared by cloud co-tenants or many mutually distrustful users of a compute server. Beyond the simple reading of data on the network, attackers might redirect traffic to themselves (often by spoofing some network control protocol) so they can see it when they’re not on the normal path. It’s also possible to obtain information even when the network traffic itself is encrypted. There are a variety of ways to learn secrets about who’s talking to whom, including watching DNS, friend activity on a site such as LinkedIn, or other forms of social network analysis.

NOTE: Security mavens may be wondering if side channel attacks and covert channels are going to be mentioned. These attacks can be fun to work on (and side channels are covered a bit in Chapter 16, “Threats to Cryptosystems”), but they are not relevant until you’ve mitigated the issues covered here.
Denial-of-Service Threats

Denial-of-service attacks absorb a resource that is needed to provide service. Examples are described in Table 3-6.

Table 3-6: Denial-of-Service Threats

Denial of service against a process
  - Absorbs memory (RAM or disk)
  - Absorbs CPU
  - Uses process as an amplifier

Denial of service against a data store
  - Fills data store up
  - Makes enough requests to slow down the system

Denial of service against a data flow
  - Consumes network resources
Denial-of-service attacks can be split into those that work while the attacker is attacking (say, filling up bandwidth) and those that persist. Persistent attacks can remain in effect until a reboot (for example, while(1){fork();}) or even past a reboot (for example, filling up a disk). Denial-of-service attacks can also be divided into amplified and unamplified. Amplified attacks are those whereby small attacker effort results in a large impact. An example would take advantage of the old unix chargen service, whose purpose was to generate a semi-random character stream for testing. An attacker could spoof a single packet from the chargen port on machine A to the chargen port on machine B. The hilarity continues until someone pulls a network cable.
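Many unamplified denial-of-service attacks can be blunted by bounding how fast any one client may consume a resource. A minimal token-bucket sketch (the rate and burst numbers are illustrative tuning knobs):

```python
import time

class TokenBucket:
    """Cap the rate at which one client can consume a scarce resource.

    Each request spends a token; tokens refill at a fixed rate up to a
    burst ceiling, so a flood of requests is shed instead of being
    allowed to absorb CPU, memory, or connections.
    """

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

This only throttles one process’s view of one client; volumetric floods need upstream defenses such as rate limiting at the network edge.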
Elevation of Privilege Threats

Elevation of privilege is allowing someone to do something they’re not authorized to do—for example, allowing a normal user to execute code as admin, or allowing a remote person without any privileges to run code. Two important ways to elevate privileges involve corrupting a process and getting past authorization checks. Examples are shown in Table 3-7.

Table 3-7: Elevation of Privilege Threats

Elevation of privilege against a process by corrupting the process
  - Sends inputs that the code doesn’t handle properly (these errors are very common, and are usually high impact)
  - Gains access to read or write memory inappropriately (writing memory is, hopefully obviously, bad, but reading memory can enable further attacks)

Elevation through missed authorization checks

Elevation through buggy authorization checks
  - Centralizing such checks makes bugs easier to manage

Elevation through data tampering
  - Modifies bits on disk to do things other than what the authorized user intends
Elevate Privileges by Corrupting a Process

Corrupting a process involves things like smashing the stack, exploiting data on the heap, and a whole variety of exploitation techniques. The impact of these techniques is that the attacker gains influence or control over a program’s control flow. It’s important to understand that these exploits are not limited to the attack surface. The first code that attacker data can reach is, of course, an important target. Generally, that code can only validate data against a limited subset of purposes. It’s important to trace the data flows further to see where else elevation of privilege can take place. There’s a somewhat unusual case whereby a program relies on and executes things from shared memory, which is a trivial path for elevation if everything with permissions to that shared memory is not running at the same privilege level.
Elevate Privileges through Authorization Failures

There is also a set of ways to elevate privileges through authorization failures. The simplest failure is to not check authorization on every path. More complex for an attacker is taking advantage of buggy authorization checks. Lastly, if a program relies on other programs, configuration files, or datasets being trustworthy, it’s important to ensure that permissions are set so that each of those dependencies is properly secured.
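The advice to centralize checks is often realized as a single authorization gate that every sensitive operation must pass through. A hypothetical sketch (the permission names and the dict-based user representation are invented for illustration):

```python
import functools

class AuthorizationError(Exception):
    pass

def requires(permission: str):
    """Centralize the authorization check so no code path can forget it.

    Every sensitive operation is wrapped; the decision logic lives in one
    place, so a bug in the check is fixed once rather than hunted down
    across every call site.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(user, *args, **kwargs):
            if permission not in user.get("permissions", set()):
                raise AuthorizationError(f"{user.get('name')} lacks {permission}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("db.admin")
def drop_table(user, table: str) -> str:
    # The body runs only after the centralized check has passed.
    return f"dropped {table}"
```

Frameworks differ in mechanism (decorators, middleware, policy engines), but the property to aim for is the same: there is no path to the operation that bypasses the check.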
Extended Example: STRIDE Threats against Acme-DB
This extended example discusses how STRIDE threats could
manifest against
the Acme/SQL database described in Chapter 1, “Dive In and
Threat Model!”
and 2, “Strategies for Threat Modeling,” and shown in Figure 2-
1. You’ll fi rst look
at these threats by STRIDE category, and then examine the
same set according
to who can address them.
Spoofi ng
■ A web client could attempt to log in with random credentials
or stolen
credentials, as could a SQL client.
■ If you assume that the SQL client is the one you wrote and
allow it to
make security decisions, then a spoofed (or tampered with)
client could
bypass security checks.
■ The web client could connect to a false (spoofed) front end,
and end up
disclosing credentials.
■ A program could pretend to be the database or log analysis
program, and
try to read data from the various data stores.
Chapter 3 ■ STRIDE 75
Tampering
■ Someone could also tamper with the data they’re sending, or
with any of
the programs or data files.
■ Someone could tamper with the web or SQL clients. (This is
nominally
out of scope, as you shouldn’t be trusting external entities
anyway.)
N O T E These threats, once you consider them, can easily be addressed with
operating system permissions. More challenging is controlling who (or what)
can alter which data within the database. Operating system permissions will
only help a little there; the database will need to implement an access
control system of some sort.
Repudiation
■ The customers using either SQL or web clients could claim
not to have
done things. These threats may already be mitigated by the
presence of
logs and log analysis. So why bother with these threats? They
remind you
that you need to configure logging to be on, and that you need
to log the
“right things,” which probably include successes and failures of
authen-
tication attempts, access attempts, and in particular, the server
needs to
track attempts by clients to access or change logs.
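A minimal sketch of the logging this bullet calls for, recording both successes and failures of authentication attempts (names and structure are illustrative, not part of Acme/SQL):

```python
# Illustrative audit trail: record every authentication attempt,
# successful or not, so later repudiation claims can be checked.

audit_log = []

def authenticate(user, password, password_db):
    ok = password_db.get(user) == password
    # Log the attempt either way; failed attempts are evidence too.
    audit_log.append({"event": "auth", "user": user,
                      "outcome": "success" if ok else "failure"})
    return ok
```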
Information Disclosure
■ The most obvious information disclosure issues occur when
confidential
information in the database is exposed to the wrong client. This
informa-
tion may be either data (the contents of the salaries table) or
metadata (the
existence of the termination plans table). The information
disclosure may
be accidental (failure to set an ACL) or malicious
(eavesdropping on the
network). Information disclosure may also occur by the front
end(s)—
for example, an error message like “Can’t connect to database
foo with
password bar!”
■ The database files (partitions, SAN attached storage) need to
be protected
by the operating system and by ACLs for data within the files.
■ Logs often store confidential information, and therefore need
to be protected.
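The error-message leak above ("Can't connect to database foo with password bar!") suggests a simple discipline, sketched here with hypothetical names: return a generic message to the caller, keep the detail in a protected log, and write the secret to neither.

```python
# Illustrative only: detail goes to a protected server log; the client
# sees a generic message; the password is written nowhere.

server_log = []

def connect(db_name, password, known_dbs):
    if known_dbs.get(db_name) != password:
        server_log.append(f"connect failed: db={db_name}")  # no password
        return "Connection failed."  # no db name, no password
    return "Connected."
```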
Denial of Service
■ The front ends could be overwhelmed by random or crafted
requests,
especially if there are anonymous (or free) web accounts that
can craft
requests designed to be slow to execute.
■ The network connections could be overwhelmed with data.
■ The database or logs could be filled up.
76 Part II ■ Finding Threats
■ If the network between the main processes, or the processes
and databases,
is shared, it may become congested.
Elevation of Privilege
■ Clients, either web or SQL, could attempt to run queries
they’re not autho-
rized to run.
■ If the client is enforcing security, then anyone who tampers
with their
client or its network stream will be able to run queries of their
choice.
■ If the database is capable of running arbitrary commands,
then that capa-
bility is available to the clients.
■ The log analysis program (or something pretending to be the
log analysis
program) may be able to run arbitrary commands or queries.
N O T E The log analysis program may be thought of as
trusted, but it’s drawn outside
the trust boundaries. So either the thinking or the diagram (in
Figure 2-1) is incorrect.
■ If the DB cluster is connected to a corporate directory
service and no action
is taken to restrict who can log in to the database servers (or
file servers),
then anyone in the corporate directory, including perhaps
employees,
contractors, build labs, and partners can make changes on those
systems.
N O T E The preceding lists in this extended example are
intended to be illustrative;
other threats may exist.
It is also possible to consider these threats according to the
person or team
that must address them, divided between Acme and its
customers. As shown
in Table 3-8, this illustrates the natural overlap of threat and
mitigation, foreshadowing Part III, "Managing and Addressing Threats," on
how to mitigate
threats. It also starts to enumerate things that are not
requirements for Acme/
SQL. These non-requirements should be documented and
provided to customers,
as covered in Chapter 12. In this table, you’re seeing more and
more actionable
threats. As a developer or a systems administrator, you can start
to see how to
handle these sorts of issues. It’s tempting to start to address
threats in the table
itself, and a natural extension to the table would be a set of
ways for each actor
to address the threats that apply.
Table 3-8: Addressing Threats According to Who Handles Them

Spoofing
  Instances that Acme must handle: Web/SQL/other client brute-forcing logins;
  DBA (human); DB users.
  Instances that IT departments must handle: Web client; SQL client; DBA
  (human); DB users.

Tampering
  Instances that Acme must handle: Data; management; logs; front end(s);
  database.
  Instances that IT departments must handle: DB admin.

Repudiation
  Instances that Acme must handle: Logs (log analysis must be protected);
  certain actions from web and SQL clients will need careful logging; certain
  actions from DBAs will need careful logging.
  Instances that IT departments must handle: Logs (log analysis must be
  protected); if DBAs are not fully trusted, a system in another privilege
  domain to log all commands might be required.

Information disclosure
  Instances that Acme must handle: Data, management, and logs must be
  protected; front ends must implement access control; only the front ends
  should be able to access the data.
  Instances that IT departments must handle: ACLs and security groups must be
  managed; backups must be protected.

Denial of service
  Instances that Acme must handle: Front ends must be designed to minimize
  DoS risks.
  Instances that IT departments must handle: The system must be deployed with
  sufficient resources.

Elevation of privilege
  Instances that Acme must handle: Trusting the client; the DB should support
  prepared statements to make injection harder; no "run this command" tools
  should be in the default install; no default way to run commands on the
  server, and calls like exec() and system() must be permissioned and
  configurable if they exist.
  Instances that IT departments must handle: Inappropriately trusting clients
  that are written locally; configure the DB appropriately.
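The table's suggestion that the DB support prepared statements to make injection harder can be illustrated with a small sketch (using Python's sqlite3 purely as a stand-in for Acme/SQL):

```python
# A parameterized query binds attacker input as data, never as SQL
# text, so a classic injection string simply finds nothing.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, salary INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 100)")

def salary_for(name):
    # The ? placeholder is the prepared-statement mechanism.
    row = conn.execute(
        "SELECT salary FROM users WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None
```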
STRIDE Variants
STRIDE can be a very useful mnemonic when looking for
threats, but it’s not
perfect. In this section, you’ll learn about variants of STRIDE
that may help
address some of its weaknesses.
STRIDE-per-Element
STRIDE-per-element makes STRIDE more prescriptive by
observing that certain
threats are more prevalent with certain elements of a diagram.
For example, a
data store is unlikely to spoof another data store (although
running code can
be confused as to which data store it's accessing). By focusing
on a set of threats
against each element, this approach makes it easier to find
threats. For example,
Microsoft uses Table 3-9 as a core part of its Security
Development Lifecycle
threat modeling training.
Table 3-9: STRIDE-per-Element

                  S   T   R   I   D   E
External Entity   x       x
Process           x   x   x   x   x   x
Data Flow             x       x   x
Data Store            x   ?   x   x
Applying this chart, you can focus threat analysis on how an
attacker might
tamper with, read data from, or prevent access to a data flow.
For example, if data
is flowing over a network such as Ethernet, it’s trivial for
someone attached to
that same Ethernet to read all the content, modify it, or send a
flood of packets
to cause a TCP timeout. You might argue that you have some
form of network
segmentation, and that may mitigate the threats sufficiently for
you. The ques-
tion mark under repudiation indicates that logging data stores
are involved in
addressing repudiation, and sometimes logs will come under
special attack to
allow repudiation attacks.
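One way to make the chart operational is as a simple lookup, sketched here in Python (the mapping transcribes Table 3-9; treating the data store's "?" as an R is a judgment call):

```python
# Table 3-9 as a lookup table driving per-element analysis.
STRIDE_PER_ELEMENT = {
    "external entity": {"S", "R"},
    "process":         {"S", "T", "R", "I", "D", "E"},
    "data flow":       {"T", "I", "D"},
    "data store":      {"T", "R", "I", "D"},  # "R" is the table's "?"
}

def threats_for(element_type):
    """Return the STRIDE letters to consider for a diagram element."""
    return sorted(STRIDE_PER_ELEMENT[element_type])
```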
The threat is to the element listed in Table 3-9. Each element is
the victim,
not the perpetrator. Therefore, if you’re tampering with a data
store, the threat
is to the data store and the data within. If you’re spoofing in a
way that affects
a process, then the process is the victim. So, spoofing by
tampering with the
network is really a spoof of the endpoint, regardless of the
technical details.
In other words, the other endpoint (or endpoints) are confused
about what’s
at the other end of the connection. The chart focuses on
spoofing of a process,
not spoofing of the data flow. Of course, if you happen to find
spoofing when
looking at the data flow, obviously you should record the threat
so you can
address it, not worry about what sort of threat it is. STRIDE-
per-element has
the advantage of being prescriptive, helping you identify what
to look for where
without being a checklist of the form “web component: XSS,
XSRF...” In skilled
hands, it can be used to find new types of weaknesses in
components. In less
skilled hands, it can still find many common issues.
STRIDE-per-element does have two weaknesses. First, similar
issues tend to
crop up repeatedly in a given threat model; second, the chart
may not represent
your issues. In fact, Table 3-9 is somewhat specific to
Microsoft. The easiest
place to see this is “information disclosure by external entity,”
which is a good
description of some privacy issues. (It is by no means a
complete description
of privacy.) However, the table doesn’t indicate that this could
be a problem.
That’s because Microsoft has a separate set of processes for
analyzing privacy
problems. Those privacy processes are outside the security
threat modeling
space. Therefore, if you’re going to adopt this approach, it’s
worth analyzing
whether the table covers the set of issues you care about, and if
it doesn’t, create
a version that suits your scenario. Another place you might see the
specificity is that many people want to discuss spoofing of data flows.
Should that be part of STRIDE-per-element? The spoofing action is a spoofing
of the endpoint, but
that description may help some people to look for those threats.
Also note that
the more “x” marks you add, the closer you come to “consider
STRIDE for each
element of the diagram.” The editors ask if that’s a good or bad
thing, and it’s
a fine question. If you want to be comprehensive, this is
helpful; if you want to
focus on the most likely issues, however, it will likely be a
distraction.
So what are the exit criteria for STRIDE-per-element? When
you have a threat
per checkbox in the STRIDE-per-element table, you are doing
reasonably well.
If you circle around and consider threats against your
mitigations (or ways to
bypass them) you’ll be doing pretty well.
STRIDE-per-Interaction
STRIDE-per-element is a simplified approach to identifying
threats, designed
to be easily understood by the beginner. However, in reality,
threats don’t show
up in a vacuum. They show up in the interactions of the system.
STRIDE-per-
interaction is an approach to threat enumeration that considers
tuples of (origin,
destination, interaction) and enumerates threats against them.
Initially, another
goal of this approach was to reduce the number of things that a
modeler would
have to consider, but that didn’t work out as planned. STRIDE-
per-interaction
leads to the same number of threats as STRIDE-per-element, but
the threats
may be easier to understand with this approach. This approach
was developed
by Larry Osterman and Douglas MacIver, both of Microsoft.
The STRIDE-
per-interaction approach is shown in Tables 3-10 and 3-11. Both
reference two
processes, Contoso.exe and Fabrikam.dll. Table 3-10 shows
which threats apply
to each interaction, and Table 3-11 shows an example of
STRIDE per interaction
applied to Figure 3-1. The relationships and trust boundaries
used for the named
elements in both tables are shown in Figure 3-1.
Figure 3-1: The system referenced in Table 3-10. (Elements: a browser, the
Contoso.exe process, Fabrikam.dll, and a database; the data flows are
commands/responses, Create(widgets)/results, and writes of widgets.)
In Table 3-10, the table columns are as follows:
■ A number for referencing a line (For example, "Looking at line 2, let's
look for spoofing and information disclosure threats.")
■ The main element you’re looking at
■ The interactions that element has
■ The STRIDE threats applicable to the interaction
Table 3-10: STRIDE-per-Interaction: Threat Applicability

#   ELEMENT                         INTERACTION                                              THREATS
1   Process (Contoso)               Process has outbound data flow to data store.            S, I
2                                   Process sends output to another process.                 S, R, I, D, E
3                                   Process sends output to external interactor (code).      S, R, I, D
4                                   Process sends output to external interactor (human).     R
5                                   Process has inbound data flow from data store.           S, T, D, E
6                                   Process has inbound data flow from a process.            S, R, D, E
7                                   Process has inbound data flow from external interactor.  S, D, E
8   Data Flow (commands/responses)  Crosses machine boundary.                                T, I, D
9   Data Store (database)           Process has outbound data flow to data store.            T, R, I, D
10                                  Process has inbound data flow from data store.           R, I, D
11  External Interactor (browser)   External interactor passes input to process.             S, R, I
12                                  External interactor gets input from process.             S
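The (origin, destination, interaction) tuples that drive STRIDE-per-interaction can likewise be sketched as a lookup; here a few rows of Table 3-10 are transcribed (illustrative Python, not tooling from the book):

```python
# A few rows of Table 3-10 as (origin, destination, interaction) tuples.
INTERACTION_THREATS = {
    ("Contoso", "database", "outbound data flow"): {"S", "I"},                 # row 1
    ("Contoso", "Fabrikam", "sends output"):       {"S", "R", "I", "D", "E"},  # row 2
    ("browser", "Contoso", "passes input"):        {"S", "R", "I"},            # row 11
}

def applicable(origin, destination, interaction):
    """STRIDE letters to consider for one interaction (empty if unlisted)."""
    return sorted(INTERACTION_THREATS.get((origin, destination, interaction), ()))
```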
Table 3-11: STRIDE-per-Interaction (Example)

1. Process (Contoso) — Process has outbound data flow to data store.
   S: "Database" is spoofed, and Contoso writes to the wrong place.
   I: P2: Contoso writes information in "database" which should not be in the
   database (e.g., passwords).

2. Process sends output to another process.
   S: Fabrikam is spoofed, and Contoso writes to the wrong place.
   R: Fabrikam claims not to have been called by Contoso.
   I: P2: Fabrikam is not authorized to receive data.
   D: None unless calls are synchronous.
   E: Fabrikam can impersonate Contoso and use its privileges.

3. Process sends output to external interactor (with the interactor being code).
   S: Contoso is confused about the identity of the browser.
   R: Browser disclaims and doesn't acknowledge the output.
   I: P2: Browser gets data it's not authorized to get.
   D: None unless calls are synchronous.

4. Process sends output to external interactor (for a human interactor).
   R: Human disclaims seeing the output.

5. Process has inbound data flow from data store.
   S: "Database" is spoofed, and Contoso reads the wrong data.
   T: Contoso state is corrupted by data read from the data store.
   D: Process state is corrupted by the data retrieved from the data store.
   E: Process internal state is corrupted based on data read from the file,
   leading to code execution.

6. Process has inbound data flow from a process.
   S: Contoso believes it's getting data from Fabrikam.
   R: Contoso denies getting data from Fabrikam.
   D: Contoso crashes/stops due to Fabrikam interaction.
   E: Fabrikam passes data or args that allow it to change the flow of
   execution of Contoso.

7. Process has inbound data flow from external interactor.
   S: Contoso believes it's getting data from the browser, when in fact it's
   a random attacker.
   D: Contoso crashes/stops due to browser interaction.
   E: Browser passes data or args that allow it to change the flow of
   execution of Contoso.

8. Data Flow (commands/responses) — Crosses machine boundary.
   T: Data flow is modified by a MITM attack.
   I: The contents of the data flow are sniffed on the wire.
   D: The data flow is interrupted by an external entity (e.g., messing with
   TCP sequence numbers).

9. Data Store (database) — Process has outbound data flow to data store.
   T: Database is corrupted.
   R: Contoso claims not to have written to the database.
   I: Database reveals information.
   D: Database cannot be written to.

10. Process has inbound data flow from data store.
    R: Contoso claims not to have read from the database.
    I: Database discloses information.
    D: Database cannot be read from.

11. External Interactor (browser) — External interactor passes input to process.
    S: Contoso is confused about the identity of the browser.
    R: Contoso claims not to have received the data.
    I: P2: Process not authorized to receive the data. (We can't stop it.)

12. External interactor gets input from process.
    S: Browser is confused about the identity of Contoso.
    R: Contoso claims not to have sent the data. (Not our problem.)
When you have a threat per checkbox in the STRIDE-per-
interaction table,
you are doing reasonably well. If you circle through and
consider threats against
your mitigations (or ways to bypass them) you’ll be doing
pretty well.
STRIDE-per-interaction is too complex to use without a
reference chart handy.
(In contrast, STRIDE is an easy mnemonic, and STRIDE-per-
element is simple
enough that the chart can be memorized or printed on a wallet
card.)
DESIST
DESIST is a variant of STRIDE created by Gunnar Peterson.
DESIST stands for
Dispute, Elevation of privilege, Spoofing, Information
disclosure, Service denial,
and Tampering. (Dispute replaces repudiation with a less fancy
word, and
Service denial replaces Denial of Service to make the acronym
work.) Starting
from scratch, it might make sense to use DESIST over STRIDE,
but after more
than a decade of STRIDE, it would be expensive to displace at
Microsoft. (CEO
of Scorpion Software, Dana Epp, has pointed out that acronyms
with repeated
letters can be challenging, a point in STRIDE’s favor.)
Therefore, STRIDE-per-
element, rather than DESIST-per-element, exists as the norm.
Either way, it’s
always useful to have mnemonics for helping people look for
threats.
Exit Criteria
There are three ways to judge whether you're done finding
threats with STRIDE.
The easiest way is to see if you have a threat of each type in
STRIDE. Slightly
harder is ensuring you have one threat per element of the
diagram. However,
both of these criteria will be reached before you've found all
threats. For more
comprehensiveness, use STRIDE-per-element, and ensure you
have one threat
per check.
Not having met these criteria will tell you that you’re not done,
but having
met them is not a guarantee of completeness.
Summary
STRIDE is a useful mnemonic for finding threats against all sorts of techno-
sorts of techno-
logical systems. STRIDE is more useful with a repertoire of
more detailed
threats to draw on. The tables of threats can provide that for
those who are
new to security, or act as reference material for security experts
(a function
also served by Appendix B, “Threat Trees”). There are variants
of STRIDE that
attempt to add focus and attention. STRIDE-per-element is very
useful for
this purpose, and can be customized to your needs. STRIDE-
per-interaction
provides more focus, but requires a crib sheet (or perhaps
software) to use. If
threat modeling experts were to start over, perhaps DESIST
would help us make
better ... progress in finding threats.
As Bruce Schneier wrote in his introduction to the subject,
“Attack trees provide
a formal, methodical way of describing the security of systems,
based on vary-
ing attacks. Basically, you represent attacks against a system in
a tree structure,
with the goal as the root node and different ways of achieving
that goal as leaf
nodes” (Schneier, 1999).
In this chapter you’ll learn about the attack tree building block
as an alterna-
tive to STRIDE. You can use attack trees as a way to find
threats, as a way to
organize threats found with other building blocks, or both.
You’ll start with
how to use an attack tree that’s provided to you, and from there
learn various
ways you can create trees. You’ll also examine several example
and real attack
trees and see how they fit into finding threats. The chapter
closes with some
additional perspective on attack trees.
Working with Attack Trees
Attack trees work well as a building block for threat
enumeration in the four-
step framework. They have been presented as a full approach to
threat modeling
(Salter, 1998), but the threat modeling community has learned a
lot since then.
There are three ways you can use attack trees to enumerate
threats: You can
use an attack tree someone else created to help you find threats.
You can create
a tree to help you think through threats for a project you’re
working on. Or you
can create trees with the intent that others will use them.
Creating new trees
for general use is challenging, even for security experts.
Using Attack Trees to Find Threats
If you have an attack tree that is relevant to the system you’re
building, you
can use it to find threats. Once you've modeled your system
with a DFD or
other diagram, you use an attack tree to analyze it. The attack
elicitation
task is to iterate over each node in the tree and consider if that
issue (or a
variant of that issue) impacts your system. You might choose to
track either
the threats that apply or each interaction. If your system or trees
are complex,
or if process documentation is important, each interaction may
be help-
ful, but otherwise that tracking may be distracting or tedious.
You can use
the attack trees in this chapter or in Appendix B “Threat Trees”
for this
purpose.
If there’s no tree that applies to your system, you can either
create one, or use
a different threat enumeration building block.
Creating New Attack Trees
If there are no attack trees that you can use for your system, you
can create a
project-specific tree. A project-specific tree is a way to
organize your thinking
about threats. You may end up with one or more trees, but this
section assumes
you’re putting everything in one tree. The same approach
enables you to create
trees for a single project or trees for general use.
The basic steps to create an attack tree are as follows:
1. Decide on a representation.
2. Create a root node.
3. Create subnodes.
4. Consider completeness.
5. Prune the tree.
6. Check the presentation.
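The AND/OR choice in step 1 can be made concrete with a tiny sketch (illustrative Python, not from the book): an OR node is attainable if any child is, an AND node only if all children are.

```python
# Minimal attack-tree node. Leaves carry a "feasible" flag set during
# analysis; interior nodes combine children with AND or OR semantics.
class Node:
    def __init__(self, label, kind="OR", children=None, feasible=False):
        self.label = label
        self.kind = kind                  # "AND" or "OR"
        self.children = children or []
        self.feasible = feasible          # meaningful for leaf nodes

    def attainable(self):
        if not self.children:
            return self.feasible
        results = [c.attainable() for c in self.children]
        return all(results) if self.kind == "AND" else any(results)
```

With an OR root "log in as victim" over leaves "guess password" (feasible) and "steal token" (not), the goal is attainable; under AND semantics it would not be.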
Decide on a Representation
There are AND trees, where the state of a node depends on all
of the nodes
below it being true, and OR trees, where a node is true if any of
its subnodes
are true. You need to decide, will your tree be an AND or an OR
tree? (Most
will be OR trees.) Your tree can be created or presented
graphically or as an
outline. See the section “Representing a Tree” later in this
chapter for more on
the various forms of representation.
Chapter 4 ■ Attack Trees 89
Create a Root Node
To create an attack tree, start with a root node. The root node
can be the com-
ponent that prompts the analysis, or an adversary’s goal. Some
attack trees use
the problematic state (rather than the goal) as the root. Which
you should use
is a matter of preference. If the root node is a component, the
subnodes should
be labeled with what can go wrong for the node. If the root node
is an attacker
goal, consider ways to achieve that goal. Each alternative way
to achieve the
goal should be drawn in as a subnode.
The guidance in “Toward a Secure System Engineering
Methodology” (Salter,
1999) is helpful to security experts; however, it doesn’t shed
much light on how to
actually generate the trees, comparative advice about what a
root node should be
(in other words, whether it’s a goal or a system component and,
most important,
when one is better than the other), or how to evaluate trees in a
structured fashion
that would be suitable for those who are not security experts. To
be prescriptive:
■ Create a root node with an attacker goal or high-impact
action.
■ Use OR trees.
■ Draw them into a grid that the eye can track linearly.
Create Subnodes
You can create subnodes by brainstorming, or you can look for
a structured way
to find more nodes. The relation between your nodes can be AND or OR, and
AND or OR, and
you’ll have to make a choice and communicate it to those who
are using your
tree. Some possible structures for first-level subnodes include:
■ Attacking a system:
■ physical access
■ subvert software
■ subvert a person
■ Attacking a system via:
■ People
■ Process
■ Technology
■ Attacking a product during:
■ Design
■ Production
■ Distribution
■ Usage
■ Discard
You can use these as a starting point, and make them more
specific to your
system. Iterate on the trees, adding subnodes as appropriate.
N O T E Here the term subnode is used to include leaf (end)
nodes and nodes with
children, because as you create something you may not always
know whether it is a
leaf or whether it has more branches.
Consider Completeness
For this step, you want to determine whether your set of attack
trees is complete
enough. For example, if you are using components, you might
need to add addi-
tional trees for additional components. You can also look at
each node and ask
“is there another way that could happen?” If you’re using
attacker motivations,
consider additional attackers or motivations. The lists of
attackers in Appendix
C “Attacker Lists” can be used as a basis.
An attack tree can be checked for quality by iterating over the
nodes, look-
ing for additional ways to reach the goal. It may be helpful to
use STRIDE,
one of the attack libraries in the next chapter, or a literature
review to help
you check the quality.
Prune the Tree
In this step, go through each node in the tree and consider
whether the action in
each subnode is prevented or duplicative. (An attack that’s
worth putting in a tree
will generally only be prevented in the context of a project.) If
an attack is prevented by some mitigation, you can mark those nodes to indicate that
they don’t need to
be analyzed. (For example, you can use the test case ID, an “I”
for impossible, put a
slash through the node, or shade it gray.) Marking the nodes
(rather than deleting
them) helps people see that the attacks were considered. You
might choose to test
the assumption that a given node is impossible. See the “Test
Process Integration”
section in Chapter 10 “Validating That Threats Are Addressed”
for more details.
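Marking rather than deleting can be sketched as follows (hypothetical structure, not from the book): mitigated nodes stay in the tree with a visible status, so reviewers can see that the attack was considered.

```python
# Illustrative pruning pass: keep mitigated nodes, but label them.
def annotate(tree):
    """Return (label, status, children), preserving mitigated nodes."""
    status = "mitigated" if tree.get("mitigated") else "open"
    children = [annotate(c) for c in tree.get("children", [])]
    return (tree["label"], status, children)
```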
Check the Presentation
Regardless of graphical form, you should aim to present each
tree or subtree
in no more than a page. If your tree is hard to see on a page, it
may be help-
ful to break it into smaller trees. Each top level subnode can be
the root of a
new tree, with a “context” tree that shows the overall relations.
You may also
be able to adjust presentation details such as font size, within
the constraints
of usability.
The node labels should be of the same form, focusing on active
terms. Finally,
draw the tree on a grid to make it easy to track. Ideally, the
equivalent level
subnodes will show on a single line. That becomes more
challenging as you go
deeper into a tree.
Representing a Tree
Trees can be represented in two ways: as a free-form (human-
viewable) model
without any technical structure, or as a structured representation
with variable
types and/or metadata to facilitate programmatic analysis.
Human-Viewable Representations
Attack trees can be drawn graphically or shown in outline form.
Graphical
representations are a bit more work to create but have more
potential to focus
attention. In either case, if your nodes are not all related by the
same logic
(AND/OR), you’ll need to decide on a way to represent the
relationship and
communicate that decision. If your tree is being shown
graphically, you’ll also
want to decide if you use a distinct shape for a terminal node.
The labels in a
node should be carefully chosen to be rich in information,
especially if you’re
using a graphical tree. Words such as “attack” or “via” can
distract from the key
information. Choose "modify file" over "attack via modifying file." Words such
as “weak” are more helpful when other nodes say “no.” So
“weak cryptography”
is a good contrast to “no cryptography.”
As always, care should be taken to ensure that the graphics are
actually
information-rich and communicative. For instance, consider the
three repre-
sentations of a tree shown in Figure 4–1.
Figure 4–1: Three representations of a tree. (Each shows a root node,
"Asset/Revenue Overstatement," with leaf nodes "Timing Differences" and
"Fictitious Revenue.")
The left tree shows an example of a real tree that simply uses
boxes. This rep-
resentation does not clearly distinguish hierarchy, making it
hard to tell which
nodes are at the same level of the tree. Compare that to the
center tree, which
uses a tree to show the equivalence of the leaf nodes. The
rightmost tree adds
the “OR gate” symbol from circuit design to show that any of
the leaf nodes
lead to the parent condition.
Additionally, tree layout should make considered use of space.
In the very
small tree in Figure 4–2, note the pleasant grid that helps your
eye follow the
layout. In contrast, consider the layout of Figure 4–3, which
feels jumbled. To
focus your attention on the layout, both are shown too small to
read.
Figure 4–2: A tree drawn on a grid. (A fraud tree whose top-level branches
are Corruption, Asset Misappropriation, and Fraudulent Statements, each
subdivided into specific schemes such as bribery, skimming, check tampering,
and improper disclosures.)
Figure 4–3: A tree drawn without a grid
Chapter 4 ■ Attack Trees 93
NOTE In Writing Secure Code 2 (Microsoft Press, 2003),
Michael Howard and David
LeBlanc suggest the use of dotted lines for unlikely threats,
solid lines for likely
threats, and circles to show mitigations, although including
mitigations may make
the trees too complex.
Outline representations are easier to create than graphical
representations,
but they tend to be less attention-grabbing. Ideally, an outline
tree is shown on
a single page, not crossing pages. The question of how to
effectively represent
AND/OR is not simple. Some representations leave them out,
others include
an indicator either before or after a line. The next three samples
are modeled
after the trees in “Election Operations Assessment Threat
Trees” later in this
chapter. As you look at them, ask yourself precisely what is
needed to achieve
the goal in node 1, “Attack voting equipment.”
1. Attack voting equipment
1.1 Gather knowledge
1.1.1 From insider
1.1.2 From components
1.2 Gain insider access
1.2.1 At voting system vendor
1.2.2 By illegal insider entry
The preceding excerpt isn’t clear. Should the outline be read as
a need to do
each of these steps, or one or the other to achieve the goal of
attacking voting
equipment? Contrast that with the next tree, which is somewhat
better:
1. Attack voting equipment
1.1 Gather knowledge (and)
1.1.1 From insider (or)
1.1.2 From components
1.2 Gain insider access (and)
1.2.1 At voting system vendor (or)
1.2.2 By illegal insider entry
This representation is useful at the end nodes: It is clearly 1.1.1
or 1.1.2. But
what does the “and” on line 1.1 refer to? 1.1.1 or 1.1.2? The
representation is not
clear. Another possible form is shown next:
1. Attack voting equipment
O 1.1 Gather knowledge
T 1.1.1 From insider
O 1.1.2 From components
O 1.2 Gain insider access
T 1.2.1 At voting system vendor
T 1.2.2 By illegal insider entry
This is intended to be read as “AND Node: 1: Attack voting
equipment, involves
1.1, gather knowledge either from insider or from components
AND 1.2, gain
insider access . . .” This can be confusing if read as the children
of that node
are to be ORed, rather than being ORed with its sibling nodes.
This is much
clearer in the graphical presentation. Also note that the steps
are intended to
be sequential. You must gather knowledge, then gain insider
access, then attack
the components to pull off the attack.
As you can see from the preceding examples, the question of
how to use an out-
line representation of a tree is less simple than you might
expect. If you are using
someone else’s tree, be sure you understand their intent. If you
are creating a tree, be
sure you are clear on your intent, and clear in your
communication of your intent.
Structured Representations
Graphical and outline presentation of trees are useful for
humans, but a tree
is also a data structure, and a structured representation of a tree
makes it pos-
sible to apply logic to the tree and in turn, the system you’re
modeling. Several
software packages enable you to create and manage complex
trees. One such
package allows the modeler to add costs to each node, and then
assess what
attacks an attacker with a given budget can execute. As your
trees become more
complex, such software is more likely to be worthwhile. See
Chapter 11 “Threat
Modeling Tools” for a list of tree management software.
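The budget analysis described above can be sketched in a few lines of code. This is a minimal illustration, not the API of any real tree-management package: the `Node` class, the costs, and the budget are all invented, and the tree fragment reuses the voting-equipment outline from earlier in this chapter.

```python
# A minimal sketch of a structured attack-tree representation with
# AND/OR semantics and per-node costs. All names and numbers here are
# illustrative, not drawn from any real tool or assessment.

class Node:
    def __init__(self, label, kind="OR", cost=0, children=None):
        self.label = label          # human-readable node label
        self.kind = kind            # "OR": any child suffices; "AND": all children required
        self.cost = cost            # cost of the step at this node itself
        self.children = children or []

    def min_cost(self):
        """Cheapest way for an attacker to achieve this node's goal."""
        if not self.children:
            return self.cost
        child_costs = [c.min_cost() for c in self.children]
        combined = min(child_costs) if self.kind == "OR" else sum(child_costs)
        return self.cost + combined

# Fragment of the voting-equipment tree, with invented costs.
tree = Node("Attack voting equipment", kind="AND", children=[
    Node("Gather knowledge", kind="OR", children=[
        Node("From insider", cost=5000),
        Node("From components", cost=1000),
    ]),
    Node("Gain insider access", kind="OR", children=[
        Node("At voting system vendor", cost=20000),
        Node("By illegal insider entry", cost=8000),
    ]),
])

budget = 10000
print(tree.min_cost())            # → 9000 (components + illegal entry)
print(tree.min_cost() <= budget)  # → True: this attacker can afford it
```

Once a tree lives in a structure like this, the same walk can answer other questions, such as enumerating every attack path under a given budget.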
Example Attack Tree
The following simple example of an attack tree (and a useful
component for other
attack tree activity) models how an attacker might get into a
building. The entire
tree is an OR tree; any of the methods listed will achieve the
goal. (This tree is
derived from “An Attack Tree for the Border Gateway Protocol”
[Convery, 2004].)
Goal: Access to the building
1. Go through a door
a. When it’s unlocked:
i. Get lucky.
ii. Obstruct the latch plate (the “Watergate Classic”).
iii. Distract the person who locks the door at night.
b. Drill the lock.
c. Pick the lock.
d. Use the key.
i. Find a key.
ii. Steal a key.
iii. Photograph and reproduce the key.
iv. Social engineer a key from someone.
1. Borrow the key.
2. Convince someone to post a photo of their key ring.
e. Social engineer your way in.
i. Act like you’re authorized and follow someone in.
ii. Make friends with an authorized person.
iii. Carry a box, a cup of coffee in each hand, etc.
2. Go through a window.
a. Break a window.
b. Lift the window.
3. Go through a wall.
a. Use a sledgehammer or axe.
b. Use a truck to go through the wall.
4. Gain access via other means.
a. Use a fire escape.
b. Use roof access from a helicopter (preferably black) or
adjacent
building.
c. Enter another part of the building, using another tenant’s
access.
Real Attack Trees
A variety of real attack trees have been published. These trees
may be helpful
to you either directly, because they model systems like the one
you’re model-
ing, or as examples of how to build an attack tree. The three
attack trees in this
section show how insiders commit financial fraud, how to
attack elections, and
threats against SSL.
Each of these trees has the nice property of being available
now, either as
an extended example, as a model for you to build from, or (if
you’re working
around fraud, elections, or SSL), to use directly in analyzing a
system which
matters to you.
The fraud tree is designed for you to use. In contrast, the
election trees
were developed to help the team think through their threats and
organize the
possibilities.
Fraud Attack Tree
An attack tree from the Association of Certified Fraud
Examiners is shown with
their gracious permission in Figure 4–4, and it has a number of
good qualities.
First, it’s derived from actual experience in finding and
exposing fraud. Second,
it has a structure put together by subject matter experts, so it’s
not a random
collection of threats. Finally, it has an associated set of
mitigations, which are
discussed at great length in Joseph Wells’ Corporate Fraud
Handbook (Wiley, 2011).
Election Operations Assessment Threat Trees
The largest publicly accessible set of threat trees was created
for the Elections
Assistance Commission by a team centered at the University of
Southern Alabama.
There are six high-level trees. They are useful both as an
example and for you
to use directly, and there are some process lessons you can
learn.
NOTE This model covers a wider scope of attacks than
typical for software threat
models, but is scoped like many operational threat models.
1. Attack voting equipment.
2. Perform an insider attack.
3. Subvert the voting process.
4. Experience technical failure.
5. Attack audit.
6. Disrupt operations.
Figure 4–4: The ACFE fraud tree
If your system is vulnerable to threats such as equipment attack,
insider attack,
process subversion or disruption, these attack trees may work
well to help you
find threats against those systems.
The team created these trees to organize their thinking around
what might
go wrong. They described their process as having found a very
large set of
issues via literature review, brainstorming, and original
research. They then
broke the threats into high-level sets, and had individuals
organize them into
trees. An attempt to sort the sets into a tree in a facilitated
group process did not
work (Yanisac, 2012). The organization of trees may require a
single person or a
very close-knit team; you should be cautious about trying for
consensus trees.
Mind Maps
Application security specialist Ivan Ristic (Ristić, 2009)
conducted an interesting
experiment using a mind map for what he calls an SSL threat
model, as shown
in Figure 4–5.
This is an interesting way to present a tree. There are very few
mind-map trees
out there. This tree, like the election trees, shows a set of
editorial decisions and those
who use mind maps may find the following perspective on this
mind map helpful:
■ The distinction between “Protocols/Implementation bugs”
and “End
points/Client side/secure implementation” is unclear.
■ There’s “End points/Client side/secure implementation” but
no “server
side” counterpart to that.
■ Under “End points/server side/server config” there’s a large
subtree.
Compare that to Client side where there’s no subtree at all.
■ Some items have an asterisk (*) but it’s unclear what that
means. After
discussion with Ivan, it turns out that those “may not apply to
everyone.”
■ There’s an entire set of traffic analytic threats that allow
you to see where
on a secure site someone is. These issues are made worse by
AJAX, but
more important here, how should they fit into this mind map?
Perhaps
under “Protocols/specifications/scope limits”?
■ It’s hard to find elements of the map, as it draws the eye in
various direc-
tions, some of which don’t align with the direction of reading.
Perspective on Attack Trees
Attack trees can be a useful way to convey information about
threats. They can
be helpful even to security experts as a way to quickly consider
possible attack
types. However, despite their surface appeal, it is very hard to
create attack trees.
Figure 4–5: Ristic’s SSL mind map
I hope that we’ll see experimentation and perhaps progress in
the quality of
advice. There are also a set of issues that can make trees hard to
use, including
completeness, scoping, and meaning:
■ Completeness: Without the right set of root nodes, you could
miss entire
attack groupings. For example, if your threat model for a safe
doesn’t
include “pour liquid nitrogen on the metal, then hit with a
hammer,” then
your safe is unlikely to resist this attack. Drawing a tree
encourages specific
questions, such as “how could I open the safe without the
combination?”
It may or may not bring you to the specific threat. Because
there’s no way
of knowing how many nodes a branch should have, you may
never reach
that point. A close variant of this is: how do you know that
you’re
done? (Schneier’s attack tree article alludes to these problems.)
■ Scoping: It may be unreasonable to consider what happens
when the
computer’s main memory is rapidly cooled and removed from
the moth-
erboard. If you write commercial software for word processing,
this may
seem like an operating system issue. If you create commercial
operating
systems, it may seem like a hardware issue. The nature of attack
trees means
many of the issues discovered will fall under the category of
“there’s no
way for us to fix that.”
■ Meaning: There is no consistency around AND/OR, or around
sequence,
which means that understanding a new tree takes longer.
Summary
Attack trees fit well into the four-step framework for threat
modeling. They can
be a useful tool for finding threats, or a way to organize
thinking about threats
(either for your project or more broadly).
To create a new attack tree to help you organize thinking, you
need to decide
on a representation, and then select a root node. With that root
node, you can
brainstorm, use STRIDE, or use a literature review to find
threats to add to
nodes. As you iterate over the nodes, consider if the tree is
complete or overly full,
aiming to ensure the right threats are in the tree. When
you’re happy with
the content of the tree, you should check the presentation so
others can use it.
Attack trees can be represented as graphical trees, as outlines,
or in software.
You saw a sample tree for breaking into a building, and real
trees for fraud,
elections, and SSL. Each can be used as presented, or as an
example for you to
consider how to construct trees of your own.
Chapter 5: Attack Libraries
Some practitioners have suggested that STRIDE is too high
level, and should
be replaced with a more detailed list of what can go wrong.
Insofar as STRIDE
is abstract, they’re right. It could well be useful to have a
more detailed list
of common problems.
A library of attacks can be a useful tool for finding threats
against the system
you’re building. There are a number of ways to construct such a
library. You
could collect sets of attack tools; either proof-of-concept code
or fully developed
(“weaponized”) exploit code can help you understand the
attacks. Such a collec-
tion, where no modeling or abstraction has taken place, means
that each time
you pick up the library, each participant needs to spend time
and energy creat-
ing a model from the attacks. Therefore, a library that provides
that abstraction
(and at a more detailed level than STRIDE) could well be
useful. In this chapter,
you’ll learn about several higher-level libraries, including how
they compare
to checklists and literature reviews, and a bit about the costs
and benefi ts of
creating a new one.
Properties of Attack Libraries
As stated earlier, there are a number of ways to construct an
attack library, so
you probably won’t be surprised to learn that selecting one
involves trade-offs,
and that different libraries address different goals. The major
decisions to be
made, either implicitly or explicitly, are as follows:
■ Audience
■ Detail versus abstraction
■ Scope
Audience refers to whom the library targets. Decisions about
audience dra-
matically influence the content and even structure of a library.
For example,
the “Library of Malware Traffic Patterns” is designed for
authors of intrusion
detection tools and network operators. Such a library doesn’t
need to spend
much, if any, time explaining how malware works.
The question of detail versus abstraction is about how many
details are included
in each entry of the library. Detail versus abstraction is, in
theory, simple. You
pick the level of detail at which your library should deliver, and
then make sure
it lands there. Closely related is structure, both within entries
and between them.
Some libraries have very little structure, others have a great
deal. Structure
between entries helps organize new entries, while structure
within an entry
helps promote consistency between entities. However, all that
structure comes at
a cost. Elements that are hard to categorize are inevitable, even
when the things
being categorized have some form of natural order, such as they
all descend
from the same biological origin. Just ask that egg-laying
mammal, the duck-
billed platypus. When there is less natural order (so to speak),
categorization is
even harder. You can conceptualize this as shown in Figure 5-1.
DetailedAbstract
STRIDE OWASP Top 10 CAPEC Checklist
Figure 5-1: Abstraction versus detail
Scope is also an important characteristic of an attack library. If
it isn’t shown
by a network trace, it probably doesn’t fit the malware traffic
attack library. If
it doesn’t impact the web, it doesn’t make sense to include it in
the OWASP
attack libraries.
There’s probably more than one sweet spot for libraries. They
are a balance
of listing detailed threats while still being thought provoking.
The thought-
provoking nature of a library is important for good threat
modeling. A thought-
provoking list means that some of the engineers using it will
find interesting and
different threats. When the list of threats reaches a certain level
of granularity,
it stops prompting thinking, risks being tedious to apply, and
becomes more
and more of a checklist.
Chapter 5 ■ Attack Libraries 103
The library should contain something to help remind people
using it that
it is not a complete enumeration of what could go wrong. The
precise form of
that reminder will depend on the form of the library. For
example, in Elevation
of Privilege, it is the ace card(s), giving an extra point for a
threat not in the game.
Closely related to attack libraries are checklists and literature
reviews, so
before examining the available libraries, the following section
looks at checklists
and literature reviews.
Libraries and Checklists
Checklists are tremendously useful tools for preventing certain
classes of prob-
lems. If a short list of problems is routinely missed for some
reason, then a
checklist can help you ensure they don’t recur. Checklists must
be concise and
actionable.
Many security professionals are skeptical, however, of
“checklist security”
as a substitute for careful consideration of threats. If you hate
the very idea
of checklists, you should read The Checklist Manifesto by Atul
Gawande. You
might be surprised by how enjoyable a read it is. But even if
you take a big-tent
approach to threat modeling, that doesn’t mean checklists can
replace the work
of trained people using their judgment.
A checklist helps people avoid common problems, but the
modeling of threats
has already been done when the checklist is created. Therefore,
a checklist can
help you avoid whatever set of problems the checklist creators
included, but it is
unlikely to help you think about security. In other words, using
a checklist won’t
help you find any threats not on the list. It is thus narrower
than threat modeling.
Because checklists can still be useful as part of a larger threat
modeling
process, you can find a collection of them at the end of Chapter
1, “Dive In and
Threat Model!” and throughout this book as appropriate. The
Elevation of Privilege
game, by the way, is somewhat similar to a checklist. Two
things distinguish
it. The first is the use of aces to elicit new threats. The second
is that by making
threat modeling into a game, players are given social permission
to playfully
analyze a system, to step beyond the checklist, and to engage
with the security
questions in play. The game implicitly abandons the “stop and
check in” value
that a checklist provides.
Libraries and Literature Reviews
A literature review is roughly consulting the library to learn
what has happened
in the past. As you saw in Chapter 2, “Strategies for Threat
Modeling,” reviewing
threats to systems similar to yours is a helpful starting point in
threat modeling.
If you write up the input and output of such a review, you may
have the start of
an attack library that you can reuse later. It will be more like an
attack library
if you abstract the attacks in some way, but you may defer that
to the second or
third time you review the attack list.
Developing a new library requires a very large time investment,
which is
probably part of why there are so few of them. However,
another reason might
be the lack of prescriptive advice about how to do so. If you
want to develop a
literature review into a library, you need to consider how the
various attacks
are similar and how they differ. One model you can use for this
is a zoo. A zoo
is a grouping—whether of animals, malware, attacks, or other
things—that
taxonomists can use to test their ideas for categorization. To
track your zoo of
attacks, you can use whatever form suits you. Common choices
include a wiki,
or a Word or Excel document. The main criteria are ease of use
and a space for
each entry to contain enough concrete detail to allow an analyst
to dig in.
As you add items to such a zoo, consider which are similar, and
how to
group them. Be aware that all such categorizations have tricky
cases, which
sometimes require reorganization to reflect new ways of
thinking about them.
If your categorization technique is intended to be used by
multiple independent
people, and you want what’s called “inter-rater consistency,”
then you need to
work on a technique to achieve that. One such technique is to
create a flowchart,
with specific questions from stage to stage. Such a flowchart
can help produce
consistency.
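One way to make such a flowchart concrete is to express it directly as code, so the stage-to-stage questions are explicit and two analysts applying it to the same record reach the same bucket. The questions and category names below are invented for illustration; a real zoo would use whatever distinctions matter to its taxonomists.

```python
# Illustrative sketch of a categorization "flowchart" as code. Each
# stage asks one specific yes/no question about an attack record; the
# first question that matches decides the category. The questions and
# category names are invented, not a recommended taxonomy.

def categorize(attack):
    """attack: dict of observed facts about one zoo entry."""
    if attack.get("requires_physical_presence"):
        return "physical"
    if attack.get("targets_person"):        # pretexting, phishing, bribery...
        return "social engineering"
    if attack.get("over_network"):
        return "network"
    return "local software"

sample = {"over_network": True, "targets_person": False}
print(categorize(sample))  # → network
```

Because the questions are ordered and explicit, disagreements between raters point at a specific question to refine, rather than at a vague category boundary.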
The work of grouping and regrouping can be a considerable and
ongoing
investment. If you’re going to create a new library, consider
spending some time
first researching the history and philosophy of taxonomies.
Books like Sorting
Things Out: Classification and Its Consequences (Bowker,
2000) can help.
CAPEC
The CAPEC is MITRE’s Common Attack Pattern Enumeration
and Classification.
As of this writing, it is a highly structured set of 476 attack
patterns, organized
into 15 groups:
■ Data Leakage Attacks
■ Resource Depletion
■ Injection (Injecting Control Plane content through the Data
Plane)
■ Spoofing
■ Time and State Attacks
■ Abuse of Functionality
■ Probabilistic Techniques
■ Exploitation of Authentication
■ Exploitation of Privilege/Trust
■ Data Structure Attacks
■ Resource Manipulation
■ Network Reconnaissance
■ Social Engineering Attacks
■ Physical Security Attacks
■ Supply Chain Attacks
Each of these groups contains a sub-enumeration, which is
available via MITRE
(2013b). Each pattern includes a description of its
completeness, with values
ranging from “hook” to “complete.” A complete entry includes
the following:
■ Typical severity
■ A description, including:
■ Summary
■ Attack execution flow
■ Prerequisites
■ Method(s) of attack
■ Examples
■ Attacker skills or knowledge required
■ Resources required
■ Probing techniques
■ Indicators/warnings of attack
■ Solutions and mitigations
■ Attack motivation/consequences
■ Vector
■ Payload
■ Relevant security requirements, principles and guidance
■ Technical context
■ A variety of bookkeeping fields (identifier, related attack
patterns and
vulnerabilities, change history, etc.)
An example CAPEC is shown in Figure 5-2.
You can use this very structured set of information for threat
modeling in
a few ways. For instance, you could review a system being built
against either
each CAPEC entry or the 15 CAPEC categories. Reviewing
against the individual
entries is a large task, however; if a reviewer averages five
minutes for each of the
476 entries, that’s a full 40 hours of work. Another way to use
this information
is to train people about the breadth of threats. Using this
approach, it would be
possible to create a training class, probably taking a day or
more.
Exit Criteria
The appropriate exit criteria for using CAPEC depend on the
mode in which
you’re using it. If you are performing a category review, then
you should have at
least one issue per categories 1–11 (Data Leakage, Resource
Depletion, Injection,
Spoofing, Time and State, Abuse of Functionality, Probabilistic
Techniques,
Exploitation of Authentication, Exploitation of Privilege/Trust,
Data Structure
Attacks, and Resource Manipulation) and possibly one for
categories 12–15
(Network Reconnaissance, Social Engineering, Physical
Security, Supply Chain).
Perspective on CAPEC
Each CAPEC entry includes an assessment of its completion,
which is a nice
touch. CAPECs include a variety of sections, and its scope
differs from STRIDE
in ways that can be challenging to unravel. (This is neither a
criticism of CAPEC,
which existed before this book, nor a suggestion that CAPEC
change.)
The impressive size and scope of CAPEC may make it
intimidating for people to
jump in. At the same time, that specificity may make it easier
to use for someone
who’s just getting started in security, where specificity helps to
identify attacks.
For those who are more experienced, the specificity and
apparent completeness
of CAPEC may result in less creative thinking. I personally find
that CAPEC’s
impressive size and scope make it hard for me to wrap my head
around it.
CAPEC is a classification of common attacks, whereas STRIDE
is a set of
security properties. This leads to an interesting contrast.
CAPEC, as a set of
attacks, is a richer elicitation technique. However, when it
comes to addressing
the CAPEC attacks, the resultant techniques are far more
complex. The STRIDE
defenses are simply those approaches that preserve the property.
However,
looking up defenses is simpler than finding the attacks. As
such, CAPEC may
have more promise than STRIDE for many populations of threat
modelers. It
would be fascinating to see efforts made to improve CAPEC’s
usability, perhaps
with cheat sheets, mnemonics, or software tools.
Figure 5-2: A sample CAPEC entry
OWASP Top Ten
OWASP, The Open Web Application Security Project, offers a
Top Ten Risks list
each year. In 2013, the list was as follows:
■ Injection
■ Broken Authentication and Session Management
■ Cross-Site Scripting
■ Insecure Direct Object References
■ Security Misconfiguration
■ Sensitive Data Exposure
■ Missing Function Level Access Control
■ Cross Site Request Forgery
■ Components with Known Vulnerabilities
■ Unvalidated Redirects and Forwards
This is an interesting list from the perspective of the threat
modeler. The list
is a good length, and many of these attacks seem like they are
well-balanced
in terms of attack detail and its power to provoke thought. A
few (cross-site
scripting and cross-site request forgery) seem overly specific
with respect to
threat modeling. They may be better as input into test planning.
Each has backing information, including threat agents, attack
vectors, security
weaknesses, technical and business impacts, as well as details
covering whether
you are vulnerable to the attack and how you prevent it.
To the extent that what you’re building is a web project, the
OWASP Top Ten
list is probably a good adjunct to STRIDE. OWASP updates the
Top Ten list each
year based on the input of its volunteer membership. Over time,
the list may be
more or less valuable as a threat modeling attack library.
The OWASP Top Ten are incorporated into a number of
OWASP-suggested
methodologies for web security. Turning the Top Ten into a
threat modeling
methodology would likely involve creating something like a
STRIDE-per-element
approach (Top Ten per Element?) or looking for risks in the list
at each point
where a data flow has crossed a trust boundary.
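A "Top Ten per Element" pass could be sketched as a simple cross product: pair each data flow that crosses a trust boundary with each Top Ten risk to generate review questions. The flow names below are invented; the risk list is OWASP's 2013 Top Ten.

```python
# Sketch of a "Top Ten per Element" elicitation pass. The boundary
# crossings are invented placeholders for elements of your own model.

TOP_TEN_2013 = [
    "Injection",
    "Broken Authentication and Session Management",
    "Cross-Site Scripting",
    "Insecure Direct Object References",
    "Security Misconfiguration",
    "Sensitive Data Exposure",
    "Missing Function Level Access Control",
    "Cross-Site Request Forgery",
    "Components with Known Vulnerabilities",
    "Unvalidated Redirects and Forwards",
]

boundary_crossings = ["browser -> web front end",
                      "front end -> payments API"]

# One review question per (flow, risk) pair.
checklist = [(flow, risk) for flow in boundary_crossings
             for risk in TOP_TEN_2013]
print(len(checklist))  # → 20 questions to consider
for flow, risk in checklist[:2]:
    print(f"Consider {risk} where {flow}")
```

As with STRIDE-per-element, most pairs will be quickly dismissed; the value is in forcing each risk to be considered at each crossing rather than in bulk.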
Summary
By providing more specifics, attack libraries may be useful to
those who are not
deeply familiar with the ways attackers work. It is challenging
to find generally
useful sweet spots between providing lots of details and
becoming tedious. It
is also challenging to balance details with the threat of fooling a
reader into
thinking a checklist is comprehensive. Performing a literature
review and cap-
turing the details in an attack library is a good way for someone
to increase
their knowledge of security.
There are a number of attack libraries available, including
CAPEC and the
OWASP Top Ten. Other libraries may also provide value
depending on the
technology or system on which you’re working.
Chapter 6
Privacy Tools
Threat modeling for privacy issues is an emergent and important area. Much like security threats violate a required security property, privacy threats are where a required privacy property is violated. Defining privacy requirements is a delicate balancing act, however, for a few reasons: First, the organization offering a service may want or even need a lot of information that the people using the service don't want to provide. Second, people have very different perceptions of what privacy is and what data is private, and those perceptions can change with time. (For example, someone leaving an abusive relationship may become newly sensitive to the value of location privacy, and perhaps consider their address private for the first time.) Lastly, most people are "privacy pragmatists" and will make value tradeoffs for personal information.
Some people take all of this ambiguity to mean that engineering
for privacy
is a waste. They’re wrong. Others assert that concern over
privacy is a waste,
as consumers don’t behave in ways that expose privacy
concerns. That’s also
wrong. People often pay for privacy when they understand the
threat and the
mitigation. That’s why advertisements for curtains, mailboxes,
and other privacy-
enhancing technologies often lead with the word “privacy.”
Unlike the previous three chapters, each of which focused on a single type of tool, this chapter is an assemblage of tools for finding privacy threats. The approaches described in this chapter are more developed than "worry about privacy," yet somewhat less developed than security attack libraries such as CAPEC (discussed in Chapter 5, "Attack Libraries"). In either event, they are important enough to include. Because this is an emergent area, appropriate exit criteria are less clear, so there are no exit criteria sections here.
In this chapter, you'll learn about ways to threat model for privacy, including Solove's taxonomy of privacy harms, the IETF's "Privacy Considerations for Internet Protocols," privacy impact assessments (PIAs), the nymity slider, contextual integrity, and the LINDDUN approach, a mirror of STRIDE created to find privacy threats. It may be reasonable to treat one or more of contextual integrity, Solove's taxonomy, or (a subset of) LINDDUN as a building block that can snap into the four-stage model, either replacing or complementing the security threat discovery.
N O T E Many of these techniques are easier to execute when
threat modeling
operational systems, rather than boxed software. (Will your
database be used to
contain medical records? Hard to say!) The IETF process is
more applicable than other
processes to “boxed software” designs.
Solove’s Taxonomy of Privacy
In his book Understanding Privacy (Harvard University Press, 2008), George Washington University law professor Daniel Solove puts forth a taxonomy of privacy harms. These harms are analogous to threats in many ways, but also include impact. Despite Solove's clear writing, the descriptions may be most helpful to those with some background in privacy, and challenging for technologists to apply to their systems. It may be possible to use the taxonomy as a tool, applying it to a system under development and considering whether each of the harms presented is enabled. The following list presents a version of this taxonomy derived from Solove, but with two changes. First, I have added "identifier creation," in parentheses. I believe that the creation of an identifier is a discrete harm because it enables so many of the other harms in the taxonomy. (Professor Solove and I have agreed to disagree on this issue.) Second, exposure is in brackets, because those using the other threat modeling techniques in this part should already be handling such threats.
■ (Identifier creation)
■ Information collection: surveillance, interrogation
■ Information processing: aggregation, identification, insecurity, secondary use, exclusion
■ Information dissemination: breach of confidentiality, disclosure, increased accessibility, blackmail, appropriation, distortion, [exposure]
■ Invasion: intrusion, decisional interference
Many of the elements of this list are self-explanatory, and all are explained in depth in Solove's book. A few may benefit from a brief discussion. The harm of surveillance is twofold: first, the uncomfortable feeling of being watched, and second, the behavioral changes it may cause. Identification means the association of information with a flesh-and-blood person. Insecurity refers to the psychological state of a person made to feel insecure, rather than a technical state. The harm of secondary use of information relates to societal trust. Exclusion is the use of information provided to exclude the provider (or others) from some benefit.
Solove's taxonomy is most usable by privacy experts, in the same way that STRIDE as a mnemonic is most useful for security experts. To make use of it in threat modeling, start by creating a model of the data flows, paying particular attention to personal data.
Finding these harms may be possible in parallel to, or in place of, security threat modeling. Below is advice on where and how to focus the search.
■ Identifier creation should be reasonably easy for a developer to identify.
■ Surveillance is where data is collected about a broad swath of people or where data is gathered in a way that's hard for a person to notice.
■ Interrogation risks tend to cluster around data collection points, for example, the many "* required" fields on web forms. The tendency to lie on such forms may be seen as a response to the interrogation harm.
■ Aggregation is most frequently associated with inbound data flows from external entities.
■ Identification is likely to be found in conjunction with aggregation or where your system has in-person interaction.
■ Insecurity may associate with places where data is brought together for decision purposes.
■ Secondary use may cross trust boundaries, possibly including boundaries that your customers expect to exist.
■ Exclusion happens at decision points, often fraud management decisions.
■ Information dissemination threats (all of them) are likely to be associated with outbound data flows; look for them where data crosses trust boundaries.
■ Intrusion is an in-person intrusion; if your system has no such features, you may not need to look at these.
■ Decisional interference is largely focused on ways in which information collection and processing may influence decisions, and as such it most likely plays into a requirements discussion.
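The advice above can be condensed into a lookup table that a review checklist or tool might consult. The element-kind names and the exact harm assignments below are my own condensation of the list, not Solove's:

```python
# Where in a data-flow model each Solove harm is most likely to surface.
# Keys describe model elements; values are the harms to check there.
HARM_HINTS = {
    "collection point": ["surveillance", "interrogation"],
    "inbound data flow": ["aggregation"],
    "aggregation point": ["identification", "insecurity"],
    "decision point": ["insecurity", "exclusion"],
    "outbound data flow": ["breach of confidentiality", "disclosure",
                           "increased accessibility", "blackmail",
                           "appropriation", "distortion"],
    "boundary-crossing flow": ["secondary use"],
}

def harms_to_check(element_kind):
    """Return the Solove harms worth reviewing for a model element."""
    return HARM_HINTS.get(element_kind, [])
```

Walking each element of a data-flow diagram through `harms_to_check` gives a starting worklist; harms like intrusion and decisional interference stay in the requirements discussion rather than the diagram.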
Privacy Considerations for Internet Protocols
The Internet Engineering Task Force (IETF) requires consideration of security threats, and has a threat modeling process focused on its organizational needs, as discussed in Chapter 17, "Bringing Threat Modeling to Your Organization." As of 2013, it sometimes requires consideration of privacy threats. An informational RFC, "Privacy Considerations for Internet Protocols" (RFC 6973), outlines a set of combined security-privacy threats and a set of pure privacy threats, and offers a set of mitigations and some general guidelines for protocol designers (Cooper, 2013). The combined security-privacy threats are as follows:
■ Surveillance
■ Stored data compromise
■ Misattribution
■ Intrusion (in the sense of unsolicited messages and denial-of-service attacks, rather than break-ins)
The privacy-specific threats are as follows:
■ Correlation
■ Identification
■ Secondary use
■ Disclosure
■ Exclusion (users are unaware of the data that others may be collecting)
Each is considered in detail in the RFC. The set of mitigations includes data minimization, anonymity, pseudonymity, identity confidentiality, user participation, and security. While somewhat specific to the design of network protocols, the document is clear, free, and likely a useful tool for those attempting to threat model privacy. The model, in terms of the abstracted threats and methods to address them, is an interesting step forward, and is designed to be helpful to protocol engineers.
Privacy Impact Assessments (PIA)
As outlined by Australian privacy expert Roger Clarke in his "An Evaluation of Privacy Impact Assessment Guidance Documents," a PIA "is a systematic process that identifies and evaluates, from the perspectives of all stakeholders, the potential effects on privacy of a project, initiative, or proposed system or scheme, and includes a search for ways to avoid or mitigate negative privacy impacts." Thus, a PIA is, in several important respects, a privacy analog to security threat modeling. Those respects include the systematic tools for identification and evaluation of privacy issues, and the goal of not simply identifying issues, but also mitigating them. However, as usually presented, PIAs have too much integration between their steps to snap into the four-stage framework used in this book.
There are also important differences between PIAs and threat modeling. PIAs are often focused on a system as situated in a social context, and the evaluation is often of a less technical nature than security threat modeling. Clarke's evaluation criteria include things such as the status, discoverability, and applicability of the PIA guidance document; the identification of a responsible person; and the role of an oversight agency, all of which would often be considered out of scope for threat modeling. (This is not a critique, but simply a contrast.) One sample PIA guideline from the Office of the Victorian Privacy Commissioner states the following:
"Your PIA Report might have a Table of Contents that looks something like this:
1. Description of the project
2. Description of the data flows
3. Analysis against 'the' Information Privacy Principles
4. Analysis against the other dimensions to privacy
5. Analysis of the privacy control environment
6. Findings and recommendations"
Note that step 2, "description of the data flows," is highly reminiscent of data flow diagrams, while steps 3 and 4 are very similar to the "threat finding" building blocks. Therefore, this approach might be highly complementary to the four-step model of threat modeling.
The appropriate privacy principles or other dimensions to consider are somewhat dependent on jurisdiction, but they can also focus on classes of intrusion, such as those offered by Solove, or a list of concerns such as informational, bodily, territorial, communications, and locational privacy. Some of these documents, such as those from the Office of the Victorian Privacy Commissioner (2009a), have extensive lists of common privacy threats that can be used to support a guided brainstorming approach, even if the documents are not legally required. Privacy impact assessments performed to comply with a law will often have a formal structure for assessing sufficiency.
The Nymity Slider and the Privacy Ratchet
University of Waterloo professor Ian Goldberg has defined a measurement he calls nymity: the "amount of information about the identity of the participants that is revealed [in a transaction]." Nymity derives from the Greek root for name, from which anonymous ("without a name") and pseudonym ("false name") are also derived.
Goldberg has pointed out that you can graph nymity on a continuum (Goldberg, 2000). Figure 6-1 shows the nymity slider. On the left-hand side, there is less privacy than on the right-hand side. As Goldberg points out, it is easy to move toward more nymity, and extremely difficult to move away from it. For example, there are protocols for electronic cash that have most of the privacy-preserving properties of physical cash, but if you deliver it over a TCP connection, you lose many of those properties. As such, the nymity slider can be used to examine how privacy-threatening a protocol is, and to compare the amount of nymity a system uses. To the extent that a system can be designed to use less identifying information, other privacy features will be easier to achieve.
Figure 6-1: The nymity slider. From most identifying (left) to least (right): verinymity (government ID, credit card number, address), persistent pseudonym (pen name), linkable anonymity (prepaid phone cards), unlinkable anonymity (cash payments).
When using the nymity slider in threat modeling, the goal is to measure how much information a protocol, system, or design exposes or gathers. This enables you to compare it to other possible protocols, systems, or designs. The nymity slider is thus an adjunct to other threat-finding building blocks, not a replacement for them.
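For comparisons, the slider can be encoded as an ordered scale. The numeric encoding and the two design examples are assumptions for illustration, not part of Goldberg's formulation:

```python
# Slider positions from most to least identifying.
NYMITY_LEVELS = [
    "verinymity",            # government ID, credit card number, address
    "persistent pseudonym",  # pen name
    "linkable anonymity",    # prepaid phone cards
    "unlinkable anonymity",  # cash payments
]

def less_identifying(design_a, design_b):
    """Return the design whose required identity level reveals less."""
    rank = {level: i for i, level in enumerate(NYMITY_LEVELS)}
    return max((design_a, design_b), key=lambda d: rank[d["requires"]])

# Two hypothetical payment designs to compare.
card_checkout = {"name": "credit card checkout", "requires": "verinymity"}
cash_kiosk = {"name": "prepaid card kiosk", "requires": "linkable anonymity"}
winner = less_identifying(card_checkout, cash_kiosk)
```

Such a comparison only ranks the identifying information a design demands; it says nothing about the other security properties of either design.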
Closely related to nymity is the idea of linkability. Linkability is the ability to bring two records together, combining the data in each into a single record or virtual record. Consider several databases: one containing movie preferences, another containing book purchases, and a third containing telephone records. If each contains an e-mail address, you can learn that the holder of a given address likes religious movies, that he's bought books on poison, and that several of the people he talks with are known religious extremists. Such intersections might be of interest to the FBI, and it's a good thing you can link them all together! (Unfortunately, no one bothered to include the professional database showing he's a doctor, but that's beside the point!) The key is that you've linked several datasets based on an identifier. There is a set of identifiers, including e-mail addresses, phone numbers, and government-issued ID numbers, that are often used to link data, and which can be considered strong evidence that multiple records refer to the same person. The presence of these strongly linkable data points increases linkability threats.
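Mechanically, the linking described above is just a join on a shared identifier. The datasets and the e-mail address below are invented stand-ins for the movie, book, and telephone databases:

```python
# Three toy datasets, each keyed on the same strongly linkable identifier.
movies = {"subject@example.com": ["religious documentaries"]}
books = {"subject@example.com": ["poisons handbook"]}
calls = {"subject@example.com": ["+1-555-0100", "+1-555-0199"]}

def link(identifier, **datasets):
    """Build one virtual record from every dataset holding the identifier."""
    return {name: data[identifier]
            for name, data in datasets.items()
            if identifier in data}

profile = link("subject@example.com", movies=movies, books=books, calls=calls)
```

One shared identifier is enough to merge everything that carries it; that is why removing or salting strong identifiers is a common first mitigation.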
Linkability as a concept relates closely to Solove's concepts of identification and aggregation. Linkability can be seen as a spectrum, from strongly linkable with multiple validated identifiers to weakly linkable based on similarities in the data. ("John Doe and John E. Doe are probably the same person.") As data becomes richer, the threat of linkage increases, even if the strongly linkable data points are removed. For example, Harvard professor Latanya Sweeney has shown that date of birth, gender, and ZIP code alone uniquely identify 87 percent of the U.S. population (Sweeney, 2002). There is an emergent scientific research stream into "re-identification" or "de-anonymization," which discloses more such results on a regular basis. The release of "anonymous" datasets carries a real threat of re-identification, as AOL, Netflix, and others have discovered (McCullagh, 2006; Narayanan, 2008; Buley, 2010).
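A Sweeney-style check is easy to run on your own data before release. The four-row dataset is invented; a real check would run over the production table:

```python
from collections import Counter

# "Anonymous" rows: no names, but (date of birth, gender, ZIP) remain.
rows = [
    ("1975-03-02", "F", "02138"),
    ("1975-03-02", "F", "02139"),
    ("1980-07-14", "M", "02138"),
    ("1980-07-14", "M", "02138"),
]

def unique_fraction(records):
    """Fraction of records that are unique on the quasi-identifier tuple."""
    counts = Counter(records)
    return sum(1 for r in records if counts[r] == 1) / len(records)
```

Here half the rows are unique on the quasi-identifier, so half the people could be singled out by anyone who can join this table against a voter roll or similar public dataset.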
Contextual Integrity
Contextual integrity is a framework put forward by New York University professor Helen Nissenbaum. It is based on the insight that many privacy issues occur when information is taken from one context and brought into another. A context is a term of art with a deep grounding in discussions of the spheres, or arenas, of our lives. A context has associated roles, activities, norms, and values. Nissenbaum's approach focuses on understanding contexts and changes to those contexts. This section draws very heavily from Chapter 7 of her book Privacy in Context (Stanford University Press, 2009) to explain how you might apply the framework to product development.
Start by considering what a context is. If you look at a hospital
as a context,
then the roles might include doctors, patients, and nurses, but
also family mem-
bers, administrators, and a host of other roles. Each has a reason
for being in a
hospital, and associated with that reason are activities that they
tend to perform
there, norms of behavior, and values associated with those
norms and activities.
Contexts are places or social areas such as restaurants, hospitals, work, the Boy Scouts, and schools (or a type of school, or even a specific school). An event can be "in a work context" even if it takes place somewhere other than your normal office. Any instance in which there is a defined or expected set of "normal" behaviors can be treated as a context. Contexts nest and overlap. For example, normal behavior in a church in the United States is influenced by the norms within the United States, as well as the narrower context of the parishioners. Thus, what is normal at a Catholic church in Boston or a Baptist revival in Mississippi may be inappropriate at a Unitarian congregation in San Francisco (or vice versa). Similarly, there are shared roles across all schools, those of student or teacher, and more specific roles as you specify an elementary school versus a university. There are specific contexts within a university, or even within the particular departments of a university.
Contextual integrity is violated when the informational norms of a context are breached. Norms, in Nissenbaum's sense, are "characterized by four key parameters: context, actors, attributes, and transmission principles." Context is roughly as just described. Actors are senders, recipients, and information subjects. Attributes refer to the nature of the information, for example, the nature or particulars of a disease from which someone is suffering. A transmission principle is "a constraint on the flow (distribution, dissemination, transmission) of information from party to party." Nissenbaum first provides two presentations of contextual integrity, followed by an augmented contextual integrity heuristic. As the technique is new, and the "augmented" approach is not a strict superset of the initial presentation, it may help you to see both.
Contextual Integrity Decision Heuristic
Nissenbaum first presents contextual integrity as a post-incident analytic tool. The essence of this is to document the context as follows:
1. Establish the prevailing context.
2. Establish key actors.
3. Ascertain what attributes are affected.
4. Establish changes in principles of transmission.
5. Red flag.
Step 5 means "if the new practice generates changes in actors, attributes, or transmission principles, the practice is flagged as violating entrenched informational norms and constitutes a prima facie violation of contextual integrity."
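The five steps reduce naturally to a comparison of two context descriptions. The hospital example and the field names below are illustrative choices, not Nissenbaum's notation:

```python
# Steps 1-4 amount to documenting these three parameters; step 5 flags
# any change between the prevailing context and the new practice.
def red_flag(prevailing, new_practice):
    """Prima facie violation if actors, attributes, or principles change."""
    keys = ("actors", "attributes", "transmission_principles")
    return any(prevailing[k] != new_practice[k] for k in keys)

prevailing = {
    "actors": {"doctor", "patient", "nurse"},
    "attributes": {"diagnosis", "treatment"},
    "transmission_principles": {"confidentiality"},
}
# New practice: the same data now also flows to an insurer.
with_insurer = dict(prevailing, actors=prevailing["actors"] | {"insurer"})
```

A red flag is only prima facie: it triggers the evaluation of whether the change is justified, which the heuristic deliberately leaves to humans.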
You might have noticed a set of interesting potential overlaps with software development and threat modeling methodologies. In particular, actors overlap fairly strongly with personas, in Cooper's sense of personas (discussed in Appendix B, "Threat Trees"). A contextual integrity analysis probably does not require a set of personas for bad actors, as any data flow outside the intended participants (and perhaps some between them) is a violation. The information transmissions and the associated attributes are likely visible in data flow or swim lane diagrams developed for normal security threat modeling.
Thus, to the extent that threat models are being enhanced from version to version, a set of change types could be used to trigger contextual integrity analysis. The extant diagram is the "prevailing context." The important change types would include the addition of new human entities or new data flows. Nissenbaum takes pains to explore the question of whether a violation of contextual integrity is a worthwhile reason to avoid the change. From the perspective of threat elicitation, such discussions are out of scope. Of course, they are in scope as you decide what to do with the identified privacy threats.
Augmented Contextual Integrity Heuristic
Nissenbaum also presents a longer, "augmented" heuristic, which is more prescriptive about steps and may work better to predict privacy issues.
1. Describe the new practice in terms of information flows.
2. Identify the prevailing context.
3. Identify information subjects, senders, and recipients.
4. Identify transmission principles.
5. Locate applicable norms; identify significant changes.
6. Prima facie assessment.
7. Evaluation:
a. Consider moral and political factors.
b. Identify threats to autonomy and freedom.
c. Identify effects on power structures.
d. Identify implications for justice, fairness, equality, social hierarchy, democracy, and so on.
8. Evaluation 2:
a. Ask how the system directly impinges on the values, goals, and ends of the context.
b. Consider moral and ethical factors in light of the context.
9. Decide.
This is, perhaps obviously, not an afternoon's work. However, in considering how to tie this to a software engineering process, note that steps 1, 3, and 4 look very much like creating data flow diagrams. The context of most organizations is unlikely to change substantially, so descriptions of the context may be reusable, as may be the work products that support the evaluations of steps 7 and 8.
Perspective on Contextual Integrity
I very much like contextual integrity. It strikes me as providing deep insight into, and explanations for, a great number of privacy problems. That is, it may be possible to use it to predict privacy problems for products under design. However, that's an untested hypothesis. One area of concern is that the effort to spell out all the aspects of a context may be quite time consuming, but without spelling out all the aspects, the privacy threats may be missed. This sort of work is challenging when you're trying to ship software, and Nissenbaum goes so far as to describe it as "tedious" (Privacy in Context, page 142). Additionally, the act of fixing a context in software or structured definitions presents the risk that the fixed representation will deviate from social norms as they evolve.
This presents a somewhat complex challenge to the idea of using contextual integrity as a threat modeling methodology within a software engineering process. The process of creating taxonomies or categories is an essential step in structuring data in a database. Software engineers do it as a matter of course as they develop software, and even those who are deeply cognizant of taxonomies often treat it as an implicit step. These taxonomies can thus restrict the evolution of a context, or worse, generate dissonance between the software-engineered version of the context and the evolving social context. I encourage security and privacy experts to grapple with these issues.
LINDDUN
LINDDUN is a mnemonic developed by Mina Deng for her PhD at the Katholieke Universiteit Leuven in Belgium (Deng, 2010). LINDDUN is an explicit mirroring of STRIDE-per-element threat modeling. It stands for the following violations of privacy properties:
■ Linkability
■ Identifiability
■ Non-repudiation
■ Detectability
■ Disclosure of information
■ Content unawareness
■ Policy and consent noncompliance
LINDDUN is presented as a complete approach to threat modeling, with a process, threats, and a requirements discovery method. It may be reasonable to use the LINDDUN threats or a derivative as a tool for privacy threat enumeration in the four-stage framework, snapping it in place of or next to STRIDE security threat enumeration. However, the threats in LINDDUN use somewhat unusual terminology; therefore, the training requirements may be higher, or the learning curve steeper, than for other privacy approaches.
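A LINDDUN-per-element enumeration could be sketched the same way as STRIDE-per-element. Note that the applicability table below is an illustrative guess for the sketch, not the mapping published in Deng's work:

```python
# The LINDDUN categories, in mnemonic order.
LINDDUN = [
    "Linkability", "Identifiability", "Non-repudiation", "Detectability",
    "Disclosure of information", "Content unawareness",
    "Policy and consent noncompliance",
]

# Which categories to consider per DFD element type (assumed, for sketch).
APPLICABLE = {
    "external entity": {"Linkability", "Identifiability",
                        "Content unawareness"},
    "data flow": {"Linkability", "Identifiability", "Non-repudiation",
                  "Detectability", "Disclosure of information"},
    "data store": {"Linkability", "Identifiability", "Non-repudiation",
                   "Detectability", "Disclosure of information"},
    "process": {"Linkability", "Identifiability", "Non-repudiation",
                "Detectability", "Disclosure of information",
                "Policy and consent noncompliance"},
}

def linddun_per_element(element_type):
    """Return the categories to review, kept in mnemonic order."""
    applicable = APPLICABLE.get(element_type, set())
    return [t for t in LINDDUN if t in applicable]
```

As with STRIDE-per-element, the table is a prompt for investigation, not a guarantee that other categories are irrelevant to a given element.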
N O T E LINDDUN leaves your author deeply conflicted. The privacy terminology it relies on will be challenging for many readers. However, it is, in many ways, one of the most serious and thought-provoking approaches to privacy threat modeling, and those seriously interested in privacy threat modeling should take a look. As an aside, the tension between non-repudiation as a privacy threat and repudiation as a security threat is delicious.
Summary
Privacy is no less important to society than security. People will usually act to protect their privacy given an understanding of the threats and of how they can address them. As such, it may help you to look for privacy threats in addition to security threats. The ways to do so are less prescriptive than the ways to look for security threats.
There are many tools you can use to find privacy issues, including Solove's taxonomy of privacy harms. (A harm is a threat with its impact.) Solove's taxonomy helps you understand the harm associated with a privacy violation, and thus, perhaps, how best to prioritize it. The IETF has an approach to privacy threats for new Internet protocols; that approach may complement or substitute for privacy impact assessments. PIAs and the IETF's processes are appropriate when a regulatory or protocol design context calls for their use. Both are more prescriptive than the nymity slider, a tool for assessing the amount of personal information in a system and measuring privacy invasion for comparative purposes. They are also more prescriptive than contextual integrity, an approach that attempts to tease out the social norms of privacy. If your goal is to identify when a design is likely to raise privacy concerns, however, then contextual integrity may be the most helpful. Far more closely related to STRIDE-style threat identification is LINDDUN, which considers privacy violations in the manner that STRIDE considers security violations.
Part III
Managing and Addressing Threats
Part III is all about managing threats and the activities involved in threat modeling. While threats themselves are at the heart of threat modeling, the reason you threat model is so that you can deliver more secure products, services, or technologies. This part of the book focuses on the third step in the four-step framework: what to do after you've found threats and need to do something about them. It also covers the final step: validation.
Chapters in this part include the following:
■ Chapter 7: Processing and Managing Threats describes how to start a threat modeling project, how to iterate across threats, the tables and lists you may want to use, and some scenario-specific process elements.
■ Chapter 8: Defensive Tactics and Technologies covers the tools you can use to address threats, ranging from simple to complex. This chapter focuses on a STRIDE breakdown of security threats and a variety of ways to address privacy.
■ Chapter 9: Trade-Offs When Addressing Threats includes risk management strategies, how to use those strategies to select mitigations, and threat-modeling-specific prioritization approaches.
■ Chapter 10: Validating That Threats Are Addressed includes how to test your threat mitigations, QA'ing threat modeling, and process aspects of addressing threats. This is the last step of the four-step approach.
■ Chapter 11: Threat Modeling Tools covers the various tools that you can use to help you threat model, ranging from the generic to the specific.
Chapter 7
Processing and Managing Threats
Finding threats against arbitrary things is fun, but when you're building something with many moving parts, you need to know where to start and how to approach it. While Part II is about the tasks you perform and the methodologies you can use to perform them, this chapter is about the processes in which those tasks are performed. Questions of "what to do when" naturally come up as you move from the specifics of looking at a particular element of a system to looking at a complete system. To the extent that these are questions of what an individual or small team does, they are addressed in this chapter; questions about what an organization does are covered in Chapter 17, "Bringing Threat Modeling to Your Organization."
Each of the approaches covered here should work with any of the "Lego blocks" covered in Part II. In this chapter, you'll learn how to get started looking for threats, including when and where to start and how to iterate through a diagram. The chapter continues with a set of tables and lists that you might use as you threat model, and ends with a set of scenario-specific guidelines, including the importance of the vendor-customer trust boundary, threat modeling new technologies, and how to threat model an API.
Starting the Threat Modeling Project
The basic approach of "draw a diagram and use the Elevation of Privilege game to find threats" is functional, but people prefer different amounts of prescriptiveness, so this section provides some additional structure that may help you get started.
When to Threat Model
You can threat model at various times during a project, with each choice having a different value. Most important, you should threat model as you get started on a project. The act of drawing trust boundaries early on can greatly help you improve your architecture. You can also threat model as you work through features. This allows you to have smaller, more focused threat modeling efforts, keeping your skills sharp and reducing the chance that you'll find big problems at the end. It is also a good idea to revisit the threat model as you get ready to deliver, to ensure that you haven't made decisions that accidentally altered the reality underlying the model.
Starting at the Beginning
Threat modeling as you get started involves modeling the system you're planning or building, finding threats against the model, and filing bugs that you'll track and manage just as you do other issues discovered throughout the development process. Some of those bugs may be test cases, some might be feature work, and some may be deployment decisions. It depends on what you're threat modeling.
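Filing a threat as a tracked bug might look like the sketch below. The record fields and the work-type values are assumptions about what a tracker would want, not a prescribed schema:

```python
# Turn a discovered threat into a trackable work item.
def file_threat_bug(threat, element, work_type="test case"):
    """Create a bug record; work_type might be 'test case',
    'feature work', or 'deployment decision'."""
    return {
        "title": f"Threat: {threat} against {element}",
        "work_type": work_type,
        "status": "open",
        "source": "threat model",
    }

bug = file_threat_bug("spoofing", "login endpoint", work_type="feature work")
```

Tagging the record with its source makes it easy to query the tracker later, when you revisit the model close to delivery.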
Working through Features
As you develop each feature, there may be a small amount of threat modeling work to do. That work involves looking deeply at the threats to that feature (and possibly refreshing or validating your understanding of the context by checking the software model). As you start work on a feature or component, it can also be a good time to work through second- or third-order threats. These are the threats in which an attacker will try to bypass the features or design elements that you put in place to block the most immediate threats. For example, if the primary threat is a car thief breaking a window, a secondary threat is the thief jumping the ignition. You can mitigate that with a steering-wheel lock, which is thus a second-order mitigation. There's more on this concept of ordered threats in the "Digging Deeper into Mitigations" section later in this chapter,
as well as more on considering planned mitigations, and how an attacker might work around them.
Threat modeling as you work through a feature has several important value propositions. One is that if you do a small threat model as you start a component or feature, the design is probably closer to mind. In other words, you'll have a more detailed model with which to look for threats. Another is that if you find threats, they are closer to mind as you're working on that feature. Threat modeling as you work through features can also help you maintain your awareness of threats and your skills at threat modeling. (This is especially true if your project is a long one.)
Close to Delivery
Lastly, you should threat model as you get ready to ship by reexamining the model and checking your bugs. (Shipping here is inclusive of delivering, deploying, or going live.) Reexamining the model means ensuring that everyone still agrees it's a decent model of what you're building, and that it includes all the trust boundaries and data flows that cross them. Checking your bugs involves checking each bug that's tagged threat modeling (or however else you're tracking them), and ensuring it didn't slip through the cracks.
Time Management
So how long should all this take? The answer to that varies according to system size and complexity, the familiarity of the participants with the system, their skill in threat modeling, and even the culture of meetings in an organization. Some very rough rules of thumb are that you should be able to diagram and find threats against a "component," and decide if you need to do more enumeration, in a one-hour session with an experienced threat modeler to moderate or help. For the sort of system that a small start-up might build, the end-to-end threat modeling could take a few hours to a week, or possibly longer if the data the system holds is particularly sensitive. At the larger end of the spectrum, a project to diagram the data flows of a large online service has been known to require four people working for several months. That level of effort was required to help find threat variations and alternate routes through a system that had grown to serve millions of people.
Whatever you're modeling, familiarity with threat modeling helps. If you need to refer back to this book every few minutes, your progress will be slower. One of the reasons to threat model regularly is to build skill and familiarity with the tasks and techniques. Organizational culture also plays a part. Organizations that run meetings with nowhere to sit will likely create a list of threats faster
than a consensus-oriented organization that encourages exploring ideas. (Which list will be better is a separate and fascinating question.)
What to Start and (Plan to) End With
When you start looking for threats, a diagram is something between useful and essential input. Experienced modelers may be able to start without it, but will likely iterate through creating a diagram as they find threats. The diagram is likely to change as you use it to find threats; you'll discover things you missed or don't need. That's normal, and unless you run a strict waterfall approach to engineering, it's a process that evolves much like the way requirements evolve as you discover what's easy or hard to build.
Use the following two testable states to help assess when you're done:
■ You have filed bugs.
■ You have a diagram or diagrams that everyone agrees represents the system.
To be more specific, you should probably have a number of bugs that's roughly scaled to the number of things in your diagram. If you're using a data flow diagram and STRIDE, expect to have about five threats per diagram element.
N O T E Originally, this text suggested that you should have (# of processes * 6) + (# of data flows * 3) + (# of data stores * 3.5) + (# of distinct external entities * 2) threats, but that requires keeping four separate counts, and is thus more work to get approximately the same answer.
You might notice that says "STRIDE" rather than "STRIDE-per-element" or "STRIDE-per-interaction," and five turns out to match the number you get if you tally up the checkmarks in those charts. That's because those charts are derived from where the threats usually show up.
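The two counting heuristics can be compared with a quick calculation. This is a minimal sketch; the element counts used below are a hypothetical reading of the diagram in Figure 7-1, not figures given in the text.

```python
# Rough "expected threat count" heuristics, as a sanity check for a
# threat-modeling session.

def detailed_estimate(processes, data_flows, data_stores, external_entities):
    """The original four-count formula from the note above."""
    return processes * 6 + data_flows * 3 + data_stores * 3.5 + external_entities * 2

def simple_estimate(processes, data_flows, data_stores, external_entities):
    """The simpler rule of thumb: about five threats per diagram element."""
    return 5 * (processes + data_flows + data_stores + external_entities)

# Hypothetical counts for a diagram like Figure 7-1: two processes,
# seven data flows, two data stores, one external entity.
counts = dict(processes=2, data_flows=7, data_stores=2, external_entities=1)
print(detailed_estimate(**counts))  # 42.0
print(simple_estimate(**counts))    # 60
```

Either number is only a check on completeness: if your table has far fewer threats than the estimate, you probably stopped enumerating too early.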
Where to Start
When you are staring at a blank whiteboard and wondering where to start, there are several commonly recommended places. Many people have recommended assets or attackers, but as you learned in Chapter 2, "Strategies for Threat Modeling," the best starting place is a diagram that covers the system as a whole, and from there start looking at the trust boundaries. For advice on how to create diagrams, see Chapter 2.
When you assemble a group of people in a room to look for threats, you should include people who know about the software, the data flows, and (if possible) threat modeling. Begin the process with the component(s) on which the participants in the room are working. You'll want to start top-down, and
then work across the system, going "breadth first," rather than delving deep into any component ("depth first").
Finding Threats Top Down
Almost any system should be modeled from the highest-level view you can build of the entire system, for some appropriate value of "the entire system." What constitutes an entire system is, of course, up for debate; what constitutes the entire Internet, or the entirety of (say) Amazon's website, is not a simple question. In such cases more scoping is needed. The ideal is probably what is within an organization's control and, to the extent possible, cross-reviews with those responsible for other components.
In contrast, bottom-up threat modeling starts from features, and then attempts to derive a coherent model from those feature-level models. This doesn't work well, but advice that implies you should do this is common, so a bit of discussion may be helpful. The reason this doesn't work is that it turns out to be very challenging to bring threat models together when they are not derived from a system-level view. As such, you should start from the highest-level view you can build of the entire system.
It may help to understand the sorts of issues that can lead to a bottom-up approach. At Microsoft in the mid-2000s, there was an explosion of bottom-up threat modeling. There were three drivers for this: specific words in the Security Development Lifecycle (SDL) threat model requirement, aspects of Microsoft's approach to function teams, and the work involved in creating top-level models. The SDL required that "all new features" be threat modeled. This intersected with an approach to features whereby a particular team of developer, tester, and program manager owns a feature and collaborates to ship it. Because the team owned its feature, it was natural to ask it to add threat models to the specifications involved in producing it. As Microsoft's approach to security evolved, product security teams had diverse sets of important tasks to undertake. Creating all-up threat models was usually not near the top of the list. (Many large product teams have now done that work and found it worthwhile. Some of these diagrams require more than one poster-size sheet of paper.)
Finding Threats “Across”
Even with a top-down approach, you want to go breadth first, and there are three different lists you can iterate "across": a list of the trust boundaries, a list of diagram elements, or a list of threats. A structure can help you look
for threats, either because you and your team like structure, or because the task feels intimidating and you want to break it down. Table 7-1 shows three approaches.
Table 7-1: Lists to Iterate Across

METHOD: Start from what goes across trust boundaries.
SAMPLE STATEMENT: "What can go wrong as foo comes across this trust boundary?"
COMMENTS: This is likely to identify the highest-value threats.

METHOD: Iterate across diagram elements.
SAMPLE STATEMENT: "What can go wrong with this database file?" "What can go wrong with the logs?"
COMMENTS: Focusing on diagram elements may work well when a lot of teams are collaborating.

METHOD: Iterate across the threats.
SAMPLE STATEMENT: "Where are the spoofing threats in this diagram?" "Where are the tampering threats?"
COMMENTS: Making threats the focus of discussion may help you find related threats.
Each of these approaches can be a fine way to start, as long as you don't let them become straightjackets. If you don't have a preference, try starting from what crosses trust boundaries, as threats tend to cluster there. However you iterate, ensure that you capture each threat as it comes up, regardless of the planned approach.
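The iteration styles in Table 7-1 can be sketched as simple prompt generators. This is a sketch only; the element names are hypothetical.

```python
# "Iterating across" a list: pick one list (diagram elements, or STRIDE
# threat types) and walk it, prompting a question for each entry.

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

elements = ["web browser", "browser-to-server data flow", "web server"]

def questions_across_elements(elements):
    # "What can go wrong with this database file?"-style prompts.
    return [f"What can go wrong with the {e}?" for e in elements]

def questions_across_threats(threats=STRIDE):
    # "Where are the spoofing threats in this diagram?"-style prompts.
    return [f"Where are the {t.lower()} threats in this diagram?" for t in threats]

for q in questions_across_elements(elements):
    print(q)
```

Whichever list you walk, the point is the same: the structure keeps you moving breadth first instead of rat-holing on the first interesting threat.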
Digging Deeper into Mitigations
Many times threats will be mitigated by the addition of features, which can be designed, developed, tested, and delivered much like other features. (Other times, mitigation might be a configuration change, or at the other end of the effort scale, require re-design.) However, mitigations are not quite like other features. An attacker will rarely try to work around the bold button and find an unintended, unsupported way to bold their text.
Finding threats is great, and to the extent that you plan to be attacked only by people who are exactly lazy enough to find a threat but not enthusiastic enough to try to bypass your mitigation, you don't need to worry about going deeper into the mitigations. (You may have to worry about a new job, but that, as they say, is beyond the scope of this book. I recommend Mike Murray's Forget the Parachute: Let Me Fly the Plane.) In this section, you'll learn about how to go deeper into the interplay of how attackers can attempt to bypass the design choices and features you put in place to make their lives harder.
The Order of Mitigation
Going back to the example of threat modeling a home from the introduction, it's easy and fun for security experts to focus on how attackers could defeat the security system by cutting the alarm wire. If you consider the window to be the attack surface, then threats include someone smashing through it and someone opening it. The smashing is addressed by reinforced glass, which is thus a "first-order" mitigation. The smashing threat is also addressed by an alarm, which is a second-order defense. But, oh no! Alarms can be defeated by cutting power. To address that third-level threat, the system designer can add more defenses. For example, alarm systems can include an alert if the line is ever dropped. Therefore, the defender can add a battery, or perhaps a cell phone or some other radio. (See how much fun this is?) These multiple layers, or orders, of attack and defense are shown in Table 7-2.
N O T E If you become obsessed with the window-smashing threat and forget to put a lock on the window, or you never discover that there's a door key under the mat, you have unaddressed problems, and are likely mis-investing your resources.
Table 7-2: Threat and Mitigation "Orders" or "Layers"

ORDER  THREAT           MITIGATION
1st    Window smashing  Reinforced glass
2nd    Window smashing  Alarm
3rd    Cut alarm wire   Heartbeat
4th    Fake heartbeat   Cryptographic signal integrity
Threat modeling should usually proceed from attack surfaces, and ensure that all first-order threats are mitigated before attention is paid to the second-order threats. Even if a methodology is in place to ensure that the full range of first-order threats is addressed, a team may run out of time to follow the methodology. Therefore, you should find threats breadth-first.
Playing Chess
It's also important to think about what an attacker will do next, given your mitigations. Maybe that means following the path to faking a heartbeat on the alarm wire, but maybe the attacker will find that door key, or maybe they'll move on to another victim. Don't think of attacks and mitigations as static. There's a
dynamic interplay between them, usually driven by attackers. You might think of threats and mitigations like the black and white pieces on a chess board. The attacker can move, and when they move, their relationship to other pieces can change. As you design mitigation, ask what an attacker could do once you deliver that mitigation. How could they work around it? (This is subtly different from asking what they will do. As Nobel-prize winning physicist Niels Bohr said, "Prediction is very difficult, especially about the future.")
Generally, attackers look for the weakest link they can easily find. As you consider your threats and where to go into depth, start with the weakest links. This is an area where experience, including a repertoire of real scenarios, can be very helpful. If you don't have that repertoire, a literature review can help, as you saw in Chapter 2. This interplay is a place where artful threat modeling and clever redesign can make a huge difference.
Believing that an attacker will stop because you put a mitigation in place is optimistic, or perhaps naive would be a better word. What happens several moves ahead can be important. (Attackers are tricky like that.) This differs from thinking through threat ordering in that your attacker will likely move to the "next easiest" attack available. That is, an attacker doesn't need to stick to the attack you're planning for or coding against at the moment, but rather can go anywhere in your system. The attacker gets to choose where to go, so you need to defend everywhere. This isn't really fair, but no one promised you fair.
Prioritizing
You might feel that the advice in this section about the layers of mitigations and the suggestion to find threats across first is somewhat contradictory. Which should you do first? Consider the chess game, or cover everything? Covering breadth first is probably wise. As you manage the bugs and select ways to mitigate threats, you can consider the chess game and deeper threat variants. However, it's important to cover both.
The unfortunate reality is that attackers with enough interest in your technology will try to find places where you didn't have enough time to investigate or build defenses. Good requirements, along with their interplay with threats and mitigations, can help you create a target that is consistently hard to attack.
Running from the Bear
There's an old joke about Alice and Bob hiking in the woods when they come across an angry bear. Alice takes off running, while Bob pauses to put on some running shoes. Alice stops and says, "What the heck are you doing?"
Bob looks at her and replies, "I don't need to outrun the bear, I just need to outrun you."
OK, it's not a very good joke. But it is a good metaphor for bad threat modeling. You shouldn't assume that there's exactly one bear out there in the woods. There are a lot of people out there actively researching new vulnerabilities, and they publish information about not only the vulnerabilities they find, but also about their tools and techniques. Thus, more vulnerabilities are being found more efficiently than in past years. Therefore, not only are there multiple bears, but they have conferences in which they discuss techniques for eating both Alice and Bob for lunch.
Worse, many of today's attacks are largely automated, and can be scaled up nearly infinitely. It's as if the bears have machine guns. Lastly, it will get far worse as the rise of social networking empowers automated social engineering attacks (just consider the possibilities when attackers start applying modern behavior-based advertising to the malware distribution business).

If your iteration ends with "we just have to run faster than the next target," you may well be ending your analysis early.
Tracking with Tables and Lists
Threat modeling can lead you to generate a lot of information, and good tracking mechanisms can make a big difference. Discovering what works best for you may require a bit of experimentation. This section lays out some sample lists and sample entries in such lists. These are all intended as advice, not straightjackets. If you regularly find something that you're writing on the side, give yourself a way to track it.
Tracking Threats
The first type of table to consider is one that tracks threats. There are (at least) three major ways to organize such a table, including by diagram element (see Table 7-3), by threat type (see Table 7-4), or by order of discovery (see Table 7-5). Each of these tables uses these methods to examine threats against the super-simple diagram from Chapter 1, "Dive In and Threat Model!" reprised here in Figure 7-1.
If you organize by diagram element, the column headers are Diagram Element, Threat Type, Threat, and Bug ID/Title. You can check your work by validating that the expected threats are all present in the table. For example, if you're using STRIDE-per-element, then you should have at least one tampering, information disclosure, and denial-of-service threat for each data flow. An example table for use in iterating over diagram elements is shown in Table 7-3.
[Figure: a Web browser; a Web server, Business Logic, and Database inside the corporate data center boundary; Web storage (offsite); the elements connected by data flows numbered 1 through 7.]
Figure 7-1: The diagram considered in threat tables
Table 7-3: A Table of Threats, Indexed by Diagram Element (excerpt)

DIAGRAM ELEMENT: Data flow (4), web server to Biz Logic
  THREAT TYPE: Tampering
  THREAT: Add orders without payment checks.
  BUG ID AND TITLE: 4553 "need integrity controls on channel"

  THREAT TYPE: Information disclosure
  THREAT: Payment instruments in the clear
  BUG ID AND TITLE: 4554 "need crypto" #PCI #p0

  THREAT TYPE: Denial of service
  THREAT: Can we just accept all these inside the data center?
  BUG ID AND TITLE: 4555 "Check with Alice in IT if these are acceptable"
To check the completeness of Table 7-3, confirm that each element has at least one threat. If you're using STRIDE-per-element, each process should have six threats, each data flow three, each external entity two, and each data store three, or four if the data store is a log.
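That completeness check is mechanical enough to automate. The sketch below assumes threats are tracked as simple (element, threat) pairs; the element names are hypothetical.

```python
# Completeness check for STRIDE-per-element: each process should carry six
# threats, each data flow three, each external entity two, and each data
# store three (four if it is a log).

EXPECTED = {"process": 6, "data flow": 3, "external entity": 2,
            "data store": 3, "log data store": 4}

def missing_threat_counts(elements, threats):
    """elements: {name: element_type}; threats: list of (element_name, threat).
    Returns the elements that have fewer threats than the rule of thumb expects,
    mapped to how many more they need."""
    gaps = {}
    for name, etype in elements.items():
        have = sum(1 for elem, _ in threats if elem == name)
        want = EXPECTED[etype]
        if have < want:
            gaps[name] = want - have
    return gaps

elements = {"web server": "process", "flow 4": "data flow"}
threats = [("flow 4", "tampering"),
           ("flow 4", "information disclosure"),
           ("flow 4", "denial of service")]
print(missing_threat_counts(elements, threats))  # {'web server': 6}
```

A gap here doesn't mean you missed a real threat; it means the table is thinner than the heuristic expects and deserves another look.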
You can also organize a table by threats, and use threats as what's being iterated "across." If you do that, you end up with a table like the one shown in Table 7-4.
Table 7-4: A Table of Threats, Organized by Threat (excerpt)

THREAT TYPE: Tampering
DIAGRAM ELEMENT: Web browser (1)
THREAT: Attacker modifies our JavaScript order checker.
BUG ID AND TITLE: 4556 "Add order-checking logic to the server"

DIAGRAM ELEMENT: Data flow (2) from browser to server
THREAT: Failure to use HTTPS*
BUG ID AND TITLE: 4557 "Build unit tests to ensure that there are no HTTP listeners for endpoints to these data flows."
DIAGRAM ELEMENT: Web server
THREAT: Someone who tampers with the web server can attack our customers.
BUG ID AND TITLE: 4558 "Ensure all changes to server code are pulled from source control so changes are accountable"

DIAGRAM ELEMENT: Web server
THREAT: Web server can add items to orders.
BUG ID AND TITLE: 4559 "Investigate moving controls to Biz Logic, which is less accessible to attackers"
* The entry for "failure to use HTTPS" is an example that illustrates how knowledge of the scenario and mitigating techniques can lead you to jump to a fix, rather than perhaps tediously working through the threats. Be careful that when you (literally) jump to a conclusion like this, you're not missing other threats.
N O T E You may have noticed that Table 7-4 has two entries for web server—this is totally fine. You shouldn't let having something in a space prevent you from finding more threats against that thing, or recording additional threats that you discover.
The last way to formulate a table is by order of discovery, as shown in Table 7-5. This is the easiest to fill out, as the next threat is simply added to the next line. It is, however, the hardest to validate, because the threats will be "jumbled together."
Table 7-5: Threats by Order of Discovery (excerpt)

THREAT TYPE: Tampering
DIAGRAM ELEMENT: Web browser (1)
THREAT: Attacker modifies the JavaScript order checker.
BUG ID: 4556 "Add order-checking logic to the server"
With three variations, the obvious question is which one should you use? If you're new to threat modeling and need structure, then either of the first two forms helps you organize your approach. The third is more natural, but requires checking at the end; so as you become more comfortable threat modeling, jumping around will become natural, and therefore the third type of table will become more useful.
Making Assumptions
The key reason to track assumptions as you discover them is so that you can follow up and ensure that you're not assuming your way into a problem. To do that, you should track the following:
■ The assumption
■ The impact if it's wrong
■ Who can tell you if it’s wrong
■ Who's going to follow up
■ When they need to do so
■ A bug for tracking
Table 7-6 shows an example entry for such a table of assumptions.
Table 7-6: A Table for Recording Assumptions

ASSUMPTION: It's OK to ignore denial of service within the data center.
IMPACT IF WRONG: Unhandled vulnerabilities
WHO TO TALK TO: Alice
WHO'S FOLLOWING UP: Bob
BY DATE: April 15
BUG #: 4555
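The same fields can be tracked as a record in code, so each assumption carries its follow-up owner, deadline, and tracking bug. A minimal sketch; the field names mirror Table 7-6, and the sample values come from its example row.

```python
# One assumption, tracked with everything needed to follow up on it.
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    text: str              # the assumption itself
    impact_if_wrong: str   # what breaks if it turns out to be false
    who_to_ask: str        # who can tell you whether it holds
    follow_up_owner: str   # who's going to follow up
    due: date              # when they need to do so
    bug_id: int            # a bug for tracking

    def overdue(self, today: date) -> bool:
        return today > self.due

a = Assumption(
    text="It's OK to ignore denial of service within the data center.",
    impact_if_wrong="Unhandled vulnerabilities",
    who_to_ask="Alice",
    follow_up_owner="Bob",
    due=date(2014, 4, 15),
    bug_id=4555)

print(a.overdue(date(2014, 5, 1)))  # True
```

The `overdue` check is the point of tracking a date at all: an assumption nobody revisits is indistinguishable from a blind spot.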
External Security Notes
Many of the documented Microsoft threat-modeling approaches have a section for "external security notes." That name frames these notes with respect to the threat model. That is, they're notes for those outside the threat discovery process in some way, and they'll probably emerge or crystalize as you look for threats. Therefore, like tracking threats and assumptions, you want to track external security notes. You can be clearer by framing the notes in terms of two sets of audiences: your customers and those calling your APIs. One of the most illustrative forms of these notes appears in the IETF "RFC Security Considerations" section, so you'll get a brief tour of those here.
Notes for Customers
Security notes that are designed for your customers or the people using your system are generally of the form "we can't fix problem X." Not being able to fix problem X may be acceptable, and it's more likely to be acceptable if it's not a surprise—for example, "This product is not designed to defend against the system administrator." This sort of note is better framed as "non-requirements," and they are discussed at length in Chapter 12, "Requirements Cookbook."
Notes for API Callers
The right design for an API involves many trade-offs, including utility, usability, and security. Threat modeling that leads to security notes for your callers can serve two functions. First, those notes can help you understand the security implications of your design decisions before you finalize those decisions. Second, they can help your customers understand those implications.
These notes address the question "What does someone calling your API need to do to use it in a secure way?" The notes help those callers know what threats you address (what security checks you perform), and thus you can tell them about a subset of the checks they'll need to perform. (If your security depends on not telling them about threats, then you have a problem; see the section on Kerckhoffs's Principles in Chapter 16, "Threats to Cryptosystems.") Notes to API callers are generally one of the following types:
■ Our threat model is [description]—That is, the things you worry about are… This should hopefully be obvious to readers of this book.
■ Our code will only accept input that looks like [some description]—What you'll accept is simply that—a description of what validation you'll perform, and thus what inputs you would reject. This is, at a surface level, at odds with the Internet robustness principle of "be conservative in what you send, and liberal in what you accept"; but being liberal does not require foolishness. Ideally, this description is also matched by a set of unit tests.
■ Common mistakes in using our code include [description]—This is a set of things you know callers should or should not do, and they are two sides of the same coin:
  ■ We do not validate this property that callers might expect us to validate—In other words, what should callers check for themselves in their context, especially when they might expect you to have done something?
  ■ Common mistakes that our code doesn't or can't address—In other words, if you regularly see bug reports (or security issues) because your callers are not doing something, consider treating that as a design flaw and fixing it.
For an example of callers having trouble with an API, consider the strcpy function. According to the manual pages included with various flavors of unix, "strcpy does not validate that s2 will fit buffer s1," or "strcpy requires that s1 be of at least length (s2) + 1." These examples are carefully chosen, because as the fine manual continues, "Avoid using strcat." (You should now use SafeStr* on Windows, strL* on unix.) The manual says this because, although notes
about safer use were eventually added, the function was simply too hard to use correctly, and no amount of notes to callers was going to overcome that. If your notes to those calling your API boil down to "it is impossible to use this API without jumping through error-prone hoops," then your API is going to need to change (or deliver some outstanding business value).
Sometimes, however, it's appropriate to use these sorts of notes to API callers—for example, "This API validates that the SignedDataBlob you pass in is signed by a valid root CA. You will still need to ensure that the OrganizationName field matches the name in the URL, as you do not pass us the URL." That's a reasonable note, because the blob might not be associated with a URL. It might also be reasonable to have a ValidateSignatureURL() API call.
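One place to put such caller notes is in the API's own documentation, where callers will actually see them. The sketch below is a toy stand-in: no real signature verification happens, and the function, field, and root names are all hypothetical.

```python
# "Notes for API callers" embedded in the function's docstring: the code
# states what it validates and what the caller must still check themselves.

TRUSTED_ROOTS = {"ExampleRootCA"}  # hypothetical trust anchor set

def validate_signed_blob(blob: dict) -> bool:
    """Check that the blob claims a signer in our trusted root set.

    Security notes for callers:
    - This call checks the signer against the trusted roots only.
    - It does NOT check that the blob's organization_name matches the
      URL you fetched it from; the URL is never passed in, so that
      check is yours to perform.
    """
    return blob.get("signer") in TRUSTED_ROOTS

blob = {"signer": "ExampleRootCA", "organization_name": "Example Corp"}
print(validate_signed_blob(blob))  # True
```

Keeping the note next to the signature makes the trust split explicit: the API's checks end where the caller's context begins.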
RFC Security Considerations
IETF RFCs are a form of external security notes, so it's worth looking at them as an evolved example of what such notes might contain (Rescorla, 2003). If you need a more structured form of note, this framework is a good place to start. The security considerations of a modern RFC include discussion of the following:
■ What is in scope
■ What is out of scope—and why
■ Foreseeable threats that the protocol is susceptible to
■ Residual risk to users or operators
■ Threats the protocol protects against
■ Assumptions which underlie security
Scope is reasonably obvious, and the RFC contains interesting discussion about not arbitrarily defining either foreseeable threats or residual risk as out of scope. The point about residual risk is similar to non-requirements, as covered in Chapter 12. It discusses those things that the protocol designers can't address at their level.
Scenario-Specific Elements of Threat Modeling
There are a few scenarios where the same issues with threat modeling show up again and again. These scenarios include issues with customer/vendor boundaries, threat modeling new technologies, and threat modeling an API (which is broader than just writing up external security notes). The customer/vendor trust boundary is dropped with unfortunate regularity, and how to approach
an API or new technology often appears intimidating. The following sections address each scenario separately.
Customer/Vendor Trust Boundary
It is easy to assume that because someone is running your code, they trust you, and/or you can trust them. This can lead to things like Acme engineers saying, "Acme.com isn't really an external entity..." While this may be true, it may also be wrong. Your customers may have carefully audited the code they received from you. They may believe that your organization is generally trustworthy without wanting to expose their secrets to you. You holding those secrets is a different security posture. For example, if you hold backups of their cryptographic keys, they may be subject to an information disclosure threat via a subpoena or other legal demand that you can't reveal to them. Good security design involves minimizing risk by appropriate design and enforcement of the customer/vendor trust boundary.
This applies to traditional programs such as installed software packages, and it also applies to the web. Believing that a web browser is faithfully executing the code you sent it is optimistic. The other end of an HTTPS connection might not even be a browser. If it is a browser, an attacker may have modified your JavaScript, or be altering the data sent back to you via a proxy. It is important to pay attention to the trust boundary once your code has left your trust context.
New Technologies
From mobile to cloud to industrial control systems to the emergent "Internet of Things," technologists are constantly creating new and exciting technologies. Sometimes these technologies genuinely involve new threat categories. More often, the same threats manifest themselves. Models of threats that are intended to elicit or organize the thinking of skilled threat modelers (such as STRIDE in its mnemonic form) can help in threat modeling these new technologies. Such models of threats enable skilled practitioners to find many of the threats that can occur even as the new technologies are being imagined.
As your threat elicitation technique moves from the abstract to the detailed, changes in the details of both the technologies and the threats may inhibit your ability to apply the technique.
From a threat-modeling perspective, the most important thing that designers of new technologies can do is clearly define and communicate the trust relationships by drawing their trust boundaries. What's essential is not just identification, but also communication. For example, in the early web, the trust model was roughly as shown in Figure 7-2.
Part III ■ Managing and Addressing Threats
Figure 7-2: The early web threat model (a browser context containing Origin 1 and Origin 2, separated by a trust boundary from a server context containing the web server)
In that model, servers and clients both ran code, and what passed between them via HTTP was purely data in the form of HTML and images. (As a model, this is simplified; early web browsers supported more than HTTP, including gopher and FTP.) However, the boundaries were clearly drawable. As the web evolved and web developers pushed the boundary of what was possible with the browser's built-in functionality, a variety of ways for the server to run code on the client were added, including JavaScript, Java, ActiveX, and Flash. This was an active transformation of the security model, which now looks more like Figure 7-3.
Figure 7-3: The evolved web threat model (the browser context now also hosts active content from the server, such as JavaScript, Java, and Flash, alongside Origin 1 and Origin 2)
In this model, content from the server has dramatically more access to the browser, leading to two new categories of threats. One is intra-page threats, whereby code from different servers can attack other information in the browser. The other is escape threats, whereby code from the server can find a way to influence what's happening on the client computer. In and of themselves, these changes are neither good nor bad. The new technologies created a dramatic transformation of what's possible on the web for both web developers and web attackers. The transformation of the web would probably have been accomplished with more security if boundaries had been clearly identified. Thus, those using new technology will get substantial security benefits from its designers by defining, communicating, and maintaining clear trust boundaries.
Chapter 7 ■ Processing and Managing Threats
Threat Modeling an API
API threat models are generally very similar. Each API has a low trust side, regardless of whether it is called from a program running on the local machine or called by anonymous parties on the far side of the Internet. On a local machine, the low trust side is more often clear: The program running as a normal user is running at a low trust level compared to the kernel. (It may also be running at the same trust level as other code running with the same user ID, but with the introduction of things like AppContainer on Windows and the Mac OS sandbox, two programs running with the same UID may not be fully equivalent. Each should treat the other as being untrusted.) In situations where there's a clear "lower trust" side, that unprivileged code has little to do, except ensure that data is validated for the context in which that data will be used. If it really is at a lower trust level, then it will have a hard time defending itself from a malicious kernel, and should generally not try. This applies only to relationships like that of a program to a kernel, where there's a defined hierarchy of privilege. Across the Internet, each side must treat the other as untrusted. The "high trust" side of the API needs to do the following seven things for security:
■ Perform all security checks inside the trust boundary. A system has a trust boundary because the other side is untrusted or less trusted. To allow the less trusted side to perform security checks for you is missing the point of having a boundary. It is often useful to test input before sending it (for example, a user might fill out a long web form, miss something, and then get back an error message). However, that's a usability feature, not a security feature. You must test inside the trust boundary. Additionally, for networked APIs/restful APIs/protocol endpoints, it is important to consider authentication, authorization, and revocation.
■ When reading, ensure that all data is copied in before validation for purpose (see next bullet). The low, or untrusted, side can't be trusted to validate data. Nor can it be trusted not to change data under its control after you've checked it. There is an entire genus of security flaws called TOCTOU ("time of check, time of use") in which this pattern is violated. The data that you take from the low side needs to be copied into secured memory, validated for some purpose, and then used without further reference to the data on the low side.
■ Know what purpose the data will be put to, and validate that it matches what the rest of your system expects from it. Knowing what the data will be used for enables you to check it for that purpose—for example, ensure an IPv4 address is four octets, or that an e-mail address matches some regular expression (an e-mail regular expression is an easily grasped example, but sending e-mail to the address is better [Celis, 2006]). If the API is a pass-through (for example, listening on a socket), then you may be restricted to validating length, length to a C-style string, or perhaps nothing at all. In that case, your documentation should be very clear that you are doing minimal or no validation, and callers should be cautious.
■ Ensure that your messages enable troubleshooting without giving away secrets. Trusted code will often know things that untrusted code should not. You need to balance the capability to debug problems with returning too much data. For example, if you have a database connection, you might want to return an error like "can't connect to server instance with username dba password dfug90845b4j," but anyone who received that error now knows the DBA's password. Oops! Messages of the form "An error of type X occurred. This is instance Y in the error log Z" are helpful enough to a systems administrator, who can search for Y in Z while only disclosing the existence of the logs to the attacker. Even better are errors that include information about whom to contact. Messages that merely say "Contact your system administrator" are deeply frustrating.
■ Document what security checks you perform, and the checks
you expect
callers to perform for themselves. Very few APIs take
unconstrained
input. The HTTP interface is a web interface: It expects a verb
(GET, POST,
HEAD) and data. For a GET or HEAD, the data is a URL, an
HTTP version,
and a set of HTTP headers. Old-fashioned CGI programs knew
that the
web server would pass them a set of headers as environment
variables
and then a set of name-value pairs. The environment variables
were not
always unique, leading to a number of bugs. What a CGI could
rely on
could be documented, but the diverse set of attacks and
assumptions that
people could read into it was not documentable.
■ Ensure that any cryptographic function runs in constant time. All your crypto functions should run in constant time from the perspective of the low trust side. It should be obvious that cryptographic keys (except the public portion of asymmetric systems) are a critical subset of the things you should not expose to the low side. Crypto keys are usually both a stepping-stone asset and an asset you want to protect in their own right. See also Chapter 16 on threats to cryptosystems.
■ Handle the unique security demands of your API. While the
preceding
issues show up with great consistency, you’re hopefully
building a new
API to deliver new value, and that API may also bring new risks
that you
should consider. Sometimes it’s useful to use the “rogue
insider” model
to help ask “what could we do wrong with this?”
In addition to the preceding checklist, it may be helpful to look at similar or competitive APIs and see what security changes they have made, although those changes may not be documented as such.
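To make the first few checklist items concrete, here is a minimal Python sketch of the high-trust side of an API. The field names, the token scheme, and the (deliberately loose) e-mail pattern are illustrative assumptions, not part of any particular API:

```python
import hmac
import re

# Deliberately loose, easily grasped e-mail check; as the text notes,
# sending mail to the address is a better validation.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def handle_request(untrusted: dict, expected_token: str) -> str:
    # 1. Copy the data inside the trust boundary before checking it, so
    #    the low-trust side cannot change it after validation (TOCTOU).
    email = str(untrusted.get("email", ""))
    token = str(untrusted.get("token", ""))

    # 2. Authenticate with a constant-time comparison, so the check
    #    leaks no timing information to the low-trust side.
    if not hmac.compare_digest(token, expected_token):
        return "error 401: see server log entry"  # no secrets in the message

    # 3. Validate for the purpose the data will be put to.
    if not EMAIL_RE.match(email):
        return "error 400: invalid email format"

    # From here on, use only the validated copies, never `untrusted`.
    return "ok: " + email
```

Note that the error strings follow the "instance Y in log Z" advice: they identify the failure without disclosing credentials or internals.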
Summary
There are a set of tools and techniques that you can use to help threat modeling fit with other development, architecture, or technology deployments. Threat modeling tasks that use these tools happen at the start of a project, as you're working through features, and as you're close to delivery.
Threat modeling should start with the creation of a software diagram (or updating the diagram from the previous release). It should end with a set of security bugs being filed, so that the normal development process picks up and manages those bugs.
When you’re creating the diagram, start with the broadest
description you
can, and add details as appropriate. Work top down, and as you
do, at each level
of the diagram(s), work across something: trust boundaries,
software elements
or your threat discovery technique.
As you look to create mitigations, be aware that attackers may try to bypass those mitigations. You want to mitigate the most accessible (aka "first order") threats first, and then mitigate attacks against your mitigations. You have to consider your threats and mitigations not as a static environment, but as a game where the attacker can move pieces, and possibly cheat.
As you go through these analyses, you'll want to track discoveries, including threats, assumptions, and things your customers need to know. "Customers" here includes your customers, who need to understand what your goals and non-goals are, and API callers, who need to understand what security checks you perform, and what checks they need to perform.
There are some scenario-specific call outs: It is important to respect the customer/vendor security boundary; new technologies can and should be threat modeled, especially with respect to all the trust boundaries, not just the customer/vendor one; all APIs have very similar threat models, although there may be new and interesting security properties of your new and interesting API.
Chapter 8
Defensive Tactics and Technologies
So far you’ve learned to model your software using diagrams
and learned to
fi nd threats using STRIDE, attack trees, and attack libraries.
The next step in
the threat modeling process is to address every threat you’ve
found.
When it works, the fastest and easiest way to address threats is through technology-level implementations of defensive patterns or features. This chapter covers the standard tactics and technologies that you will use to mitigate threats. These are often operating system or program features that you can configure, activate, apply or otherwise rapidly engage to defend against one or more threats. Sometimes, they involve additional code that is widely available and designed to quickly plug in. (For example, tunneling connections over SSH to add security is widely supported, and some unix packages even have options to make that easier.)
Because you likely found your threats via STRIDE, the bulk of
this chapter
is organized according to STRIDE. The main part of the chapter
addresses
STRIDE and privacy threats, because most pattern collections
already include
information about how to address the threats.
Tactics and Technologies for Mitigating Threats
The mitigation tactics and technologies in this chapter are organized by STRIDE because that's most likely how you found them. This section is therefore organized by ways to mitigate each of the STRIDE threats, each of which includes
a brief recap of the threat, tactics that can be brought to bear against it, and the techniques for accomplishing that by people with various skills and responsibilities. For example, if you're a developer who wants to add cryptographic authentication to address spoofing, the techniques you use are different from those used by a systems administrator. Each subsection ends with a list of specific technologies.
Authentication: Mitigating Spoofing
Spoofing threats against code come in a number of forms: faking the program on disk, squatting a port (IP, RPC, etc.), splicing a port, spoofing a remote machine, or faking the program in memory (related problems with libraries and dependencies are covered under tampering); but in general, only programs running at the same or a lower level of trust are spoofable, and you should endeavor to trust only code running at a higher level of trust, such as in the OS.
There is also spoofing of people, of course, a big, complex subject covered in Chapter 14, "Accounts and Identity." Mitigating spoofing threats often requires unusually tight integration between layers of systems. For example, a maintenance engineer from Acme, Inc. might want remote (or even local) access to your database. Is it enough to know that the person is an employee of Acme? Is it enough to know that he or she can initiate a connection from Acme's domain? You might reasonably want to create an account on your database to allow Joe Engineer to log in to it, but how do you bind that to Acme's employee database? When Joe leaves Acme and gets a job at Evil Geniuses for a Better Tomorrow, what causes his access to your database to go away?
N O T E Authentication and authorization are related concepts, and sometimes confused. Knowing that someone really is Adam Shostack should not authorize a bank to take money from my account (there are several people of that name in the U.S.). Addressing authorization is covered in the "Authorization: Mitigating Elevation of Privilege" section later in this chapter.
From here, let’s dig into the specifi c ways in which you can
ensure authen-
tication is done well.
Tactics for Authentication
You can authenticate a remote machine either with or without cryptographic trust mechanisms. Authenticating without crypto involves verifying via IP addresses or "classic" DNS entries. All the noncryptographic methods are unreliable. Before they existed, there were attempts to make hostnames reliable, such as the double-reverse DNS lookup. At the time, this was sometimes the best tactic for authentication.
Today, you can do better, and there’s rarely an excuse for doing
worse. (SNMP
may be an excuse, and very small devices may be another). As
mentioned
earlier, authenticating a person is a complex subject, covered in
Chapter 14.
Authenticating on-system entities is somewhat operating system dependent. Whatever the underlying technical mechanisms are, at some point cryptographic keys are being managed to ensure that there's a correspondence between technical names and names people use. That validation cannot be delegated entirely to machines. You can choose to delegate it to one of the many companies that assert they validate these things. These companies often do business as "PKI" or "public key infrastructure" companies, and are often referred to as "certification authorities" or "CAs". You should be careful about relying on that delegation for any transaction valued at more than what the company will accept for liability. (In most cases, certificate authorities limit their liability to nothing.) Why you should assign it a higher value is a question their marketing departments hope will not be asked, but the answer roughly boils down to convenience, limited alternatives, and accepted business practice.
Developer Ways to Address Spoofing
Within an operating system, you should aim to use full and canonical path names for libraries, pipes, and so on to help mitigate spoofing. If you are relying on something being protected by the operating system, ensure that the permissions do what you expect. (In particular, unix files in /tmp are generally unreliable, and Windows historically has had similarly shared directories.) For networked systems in a single trust domain, using operating system mechanisms such as Active Directory or LDAP makes sense. If the system spans multiple trust domains, you might use persistence or a PKI. If the domains change only rarely, it may be appropriate to manually cross-validate keys, or to use a contract to specify who owns what risks.
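The canonical-path advice can be sketched as follows. The allow-listed location is a hypothetical example; the sketch resolves the path before trusting it, opens with O_NOFOLLOW where available, and then checks the opened file's metadata rather than re-checking the name:

```python
import os
import stat

def open_trusted_config(path: str, expected: str):
    """Open `path` only if it canonically resolves to `expected`.

    `expected` is a hypothetical allow-listed location decided at
    install time; this is a sketch, not a complete hardening recipe.
    """
    real = os.path.realpath(path)  # resolve symlinks, "..", and so on
    if real != expected:
        raise PermissionError(f"refusing {path!r}: resolves to {real!r}")
    # Open, then check the *opened* file's metadata via fstat, so the
    # file can't be swapped between the check and the use (TOCTOU).
    fd = os.open(real, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    info = os.fstat(fd)
    if info.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        os.close(fd)
        raise PermissionError("refusing group- or world-writable file")
    return os.fdopen(fd, "r")
```

The fstat-after-open ordering is the point: checking permissions on the name and then opening it would reintroduce the race this pattern avoids.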
You can also use cryptographic ways to address spoofing, and these are covered in Chapter 16, "Threats to Cryptosystems." Essentially, you tie a key to a person, and then work to authenticate that the key is correctly associated with the person who's connecting or authenticating.
Operational Ways to Address Spoofing
Once a system is built, a systems administrator has limited options for improving spoofing defenses. To the extent that the system is internal, pressure can be brought to bear on system developers to improve authentication. It may also be possible to use DNSSEC, SSH, or SSL tunneling to add or improve authentication. Some network providers will filter outbound traffic to make spoofing harder. That's helpful, but you cannot rely on it.
Authentication Technologies
Technologies for authenticating computers (or computer
accounts) include the
following:
■ IPSec
■ DNSSEC
■ SSH host keys
■ Kerberos authentication
■ HTTP Digest or Basic authentication
■ “Windows authentication” (NTLM)
■ PKI systems, such as SSL or TLS with certificates
Technologies for authenticating bits (files, messages, etc.) include the following:
■ Digital signatures
■ Hashes
Methods for authenticating people can involve any of the
following:
■ Something you know, such as a password
■ Something you have, such as an access card
■ Something you are, such as a biometric, including
photographs
■ Someone you know who can authenticate you
Technologies for maintaining authentication across connections
include the
following:
■ Cookies
Maintaining authentication across connections is a common issue as you integrate systems. The cookie pattern has flaws, but generally, it has fewer flaws than re-authenticating with passwords.
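The cookie pattern can be sketched as an HMAC over the session data, so that a tampered cookie fails verification. The secret and the cookie layout here are illustrative assumptions; a real deployment also needs expiry, key rotation, and transport encryption:

```python
import hashlib
import hmac

# Illustrative server-side secret; in practice this is generated
# randomly and stored securely, never hard-coded.
SECRET = b"server-side-secret-key"

def issue_cookie(user: str) -> str:
    mac = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}|{mac}"

def verify_cookie(cookie: str):
    user, _, mac = cookie.rpartition("|")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison, so verification leaks no timing signal.
    return user if hmac.compare_digest(mac, expected) else None
```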
Integrity: Mitigating Tampering
Tampering threats come in several flavors, including tampering with bits on disk, bits on a network, and bits in memory. Of course, no one is limited to tampering with a single bit at a time.
Tactics for Integrity
There are three main ways to address tampering threats: relying
on system
defenses such as permissions, use of cryptographic mechanisms,
and use of
logging technology and audit activities as a deterrent.
Permission mechanisms can protect things that are within their scope of control, such as files on disk, data in a database, or paths within a web server. Examples of such permissions include ACLs on Windows, unix file permissions, or .htaccess files on a web server.
There are two main cryptographic primitives for integrity: hashes and signatures. A hash takes an input of some arbitrary length, and produces a fixed-length digest or hash of the input. Ideally, any change to the input completely transforms the output. If you store a protected hash of a digital object, you can later detect tampering. Actually, anyone with that hash can detect tampering, so, for example, many software projects list a hash of the software on their website. Anyone who gets the bits from any source can rely on them being the bits described on the project website, to a level of security based on the security of the hash and the operation of the web site. A signature is a cryptographic operation with a private key and a hash that does much the same thing. It has the advantage that once someone has obtained the right public key, they can validate a lot of hashes. Hashes can also be used in binary trees of various forms, where large sets of hashes are collected together and signed. This can enable, for example, inserting data into a tree and noting the time in a way that's hard to alter. There are also systems for using hashes and signatures to detect changes to a file system. The first was co-invented by Gene Kim, and later commercialized by Tripwire, Inc. (Kim, 1994).
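A hash-based tampering check can be sketched in a few lines; the "published" digest stands in for a hash listed on a project website:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Fixed-length digest of arbitrary-length input."""
    return hashlib.sha256(data).hexdigest()

def matches_published_hash(data: bytes, published_hex: str) -> bool:
    # Anyone holding the published hash can detect tampering with the
    # bits, to the level of security of the hash and of the channel
    # that delivered the published value.
    return sha256_hex(data) == published_hex

# Any change to the input completely transforms the output:
release = b"pretend software release"
published = sha256_hex(release)
```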
Logging technology is a weak third in this list. If you log how files change, you may be able to recover from integrity failures.
Implementing Integrity
If you’re implementing a permission system, you should ensure
that there’s a
single permissions kernel, also called a reference monitor. That
reference monitor
should be the one place that checks all permissions for
everything. This has
two main advantages. First, you have a single monitor, so there
are no bugs,
synchronization failures, or other issues based on which code
path called.
Second, you only have to fi x bugs in one place.
Creating a good reference monitor is a fairly intricate bit of work. It's hard to get right, and easy to get wrong. For example, it's easy to run checks on references (such as symlinks) that can change when the code finally opens the file. If you need to implement a reference monitor, perform a literature review first.
If you’re implementing a cryptographic defense, see Chapter 16.
If you’re
implementing an auditing system, you need to ensure it is suffi
ciently perfor-
mant that people will leave it on, that security successes and
failures are both
logged, and that there’s a usable way to access the logs. You
also need to ensure
that the data is protected from attackers. Ideally, this involves
moving it off the
generating system to an isolated logging system.
Operational Assurance of Integrity
The most important element of assuring integrity is about
process, not
technology. Mechanisms for ensuring integrity only work to the
extent that
integrity failures generate operational exceptions or
interruptions that are
addressed by a person. All the cryptographic signatures in the
world only help
if someone investigates the failure, or if the user cannot or does
not override
the message about a failure. You can devote all your disk access
operations to
running checksums, but if no one investigates the alarms, they
won’t do any
good. Some systems use “whitelists” of applications so only
code on the whitelist
runs. That reduces risk, but carries an operational cost.
It may be possible to use SSH or SSL tunneling or IPSec to
address network
tampering issues. Systems like Tripwire, OSSEC, or L5 can help
with system
integrity.
Integrity Technologies
Technologies for protecting files include:
■ ACLs or permissions
■ Digital signatures
■ Hashes
■ Windows Mandatory Integrity Control (MIC) feature
■ Unix immutable bits
Technologies for protecting network traffic include:
■ SSL
■ SSH
■ IPSec
■ Digital signatures
Non-Repudiation: Mitigating Repudiation
Repudiation is a somewhat different threat because it bridges into the business realm, in which there are four elements to addressing it: preventing fraudulent transactions, taking note of contested issues, investigating them, and responding to them. In an age when anyone can instantly be a publisher, assuming that you can ignore the possibility of a customer (or noncustomer) complaint or contested charge is foolish. Ensuring you can accept customer complaints and investigate them is outside the scope of this book, but the output from such a system provides a key validation that you have the right logs.
Note that repudiation is sometimes a feature. As Professor Ian
Goldberg
pointed out when introducing his Off-the-Record messaging
protocol, signed
conversations can be embarrassing, incriminating, or otherwise
undesirable
(Goldberg, 2008). Two features of the Off-the-Record (OTR)
messaging system
are that it’s secure (encrypted and authenticated) and deniable.
This duality of
feature or threat also comes up in the LINDDUN approach to
privacy threat
modeling.
Tactics for Non-Repudiation
The technical elements of addressing repudiation are fraud
prevention, logs,
and cryptography. Fraud prevention is sometimes considered
outside the scope
of repudiation. It’s included here because managing repudiation
is easier if you
have fewer contested transactions. Fraud prevention can be
divided into fraud
by internal actors (embezzlement and the like) and external
fraud. Internal
fraud prevention is a complex matter; for a full treatment see
The Corporate Fraud
Handbook (Wells, 2011). You should have good account
management practices,
including ensuring that your tools work well enough that people
are not tempted
or forced to share passwords as part of getting their jobs done.
Be sure you log
and audit the data in those logs.
Logs are the traditional technical core of addressing repudiation
issues. What
is logged depends on the transaction, but generally includes
signatures or an
IP address and all related information. There are also
cryptographic ways to
address repudiation, which are currently mostly used between
larger businesses.
Tactics for Preventing Fraud by External Parties
External fraud prevention can be seen as a matter of payment fraud prevention, and of ensuring that your customers remain in control of their accounts. In both cases, the details of the state of the art change quickly, so talk to your peers. Even the most tight-lipped companies have been willing to have very frank discussions with peers under NDA.
In essence, stability is good. For example, someone who has been buying two romance novels a month from you for a decade and is still living at the same address is likely the person who just ordered another one. If that person suddenly moves to the other side of the world, and orders technical books in Slovakian with a new credit card with a billing address in the Philippines, you might have a problem. (Then again, they might have finally found true love, and you don't want to upset your loyal customers.)
Tools for Preventing Fraud by External Parties
In their annual report on online fraud, CyberSource includes a
survey of
popular fraud detection tools and their perceived effectiveness
(CyberSource,
2013). Their 2013 survey includes a set of automated tools:
■ Validation services
■ Proprietary data/customer history
■ Multi-merchant data
■ Purchase device tracking
Validation services include card verification numbers (aka CVN/CVV), address verification services, postal address verification, Verified by Visa/MasterCard SecureCode, telephone number verification/reverse lookups, public records services, credit checks, and "out-of-wallet/in-wallet" verification services.
Proprietary data and customer history includes customer order history, in-house "negative lists" of problematic customers, "positive lists" of VIP or reliable customers, order velocity monitoring, company-specific fraud models (these are usually built with manual, statistical, or machine learning analyses of past fraudulent orders), and customer website behavioral analysis.
Multi-merchant data focuses on shared negative lists or multi-
merchant
purchase velocity analyzed by the merchant. (This analysis is
nominally also
performed by the card processors and clearing houses, so the
additional value
may be transient.)
Finally, purchase device tracking includes device "fingerprinting" and IP address geolocation. The CyberSource report also discusses the importance of tools to help manual review, and how a varied list is both very helpful and time consuming. Because manual review is one of the most expensive components of an anti-fraud approach to repudiation threats, it may be worth investing in tools to gather all the data into one (or at least fewer) places to improve analyst productivity.
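Order velocity monitoring, one of the proprietary-data signals above, can be sketched as a sliding-window count per payment card. The threshold and window are illustrative assumptions, not recommended values:

```python
from collections import defaultdict

WINDOW_SECONDS = 3600        # illustrative: one hour
MAX_ORDERS_PER_WINDOW = 5    # illustrative threshold

_orders = defaultdict(list)  # card identifier -> order timestamps

def record_order(card: str, now: float) -> bool:
    """Record an order; return True if its velocity looks suspicious."""
    # Keep only orders inside the window, then add this one.
    recent = [t for t in _orders[card] if now - t < WINDOW_SECONDS]
    recent.append(now)
    _orders[card] = recent
    return len(recent) > MAX_ORDERS_PER_WINDOW
```

In practice such a signal would feed a fraud model or a manual-review queue rather than block an order outright, for exactly the loyal-customer reasons given earlier.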
Implementing Non-Repudiation
The two key tools for non-repudiation are logging and digital
signatures. Digital
signatures are probably most useful for business-to-business
systems.
Log as much as you can keep for as long as you need to keep it. As the price of storage continues to fall, this advice becomes easier and easier to follow. For example, with a web transaction, you might log the IP address, current geolocation of that address, and browser details. You might also consider services that either provide information on fraud or allow you to request decision advice. To the extent that these companies specialize, and may have broader visibility into fraud, this may be a good area of security to outsource. Some of the information you log or transfer may interact with your privacy policies, and it's important to check.
There are also cryptographic digital signatures. A digital signature should be distinguished from an electronic signature, which is a term of art under U.S. law referring to a variety of mechanisms with which to produce a signature, some as minimalistic as "press 1 to agree to these terms and conditions." In contrast, a digital signature is a mathematical transformation that demonstrates irrefutably that someone in possession of a mathematical key took an action to cause a signature to be made. The strength of "irrefutable" here depends on the strength of the math, and the tricky bits are possession of the key and what human intent (if any) may have lain behind the signature.
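Secured log storage can be made tamper-evident by hash chaining: each entry commits to the hash of the previous one, so altering an old record breaks every later link. A sketch, with the logged fields following the web-transaction example above:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def append_entry(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(record, sort_keys=True)  # canonical serialization
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": entry_hash})

def chain_intact(log: list) -> bool:
    prev = GENESIS
    for e in log:
        body = json.dumps(e["record"], sort_keys=True)
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```

To stop an attacker from simply re-hashing the whole chain, the latest hash must be anchored somewhere the attacker can't reach: signed, timestamped by a trusted third party, or shipped to an isolated logging system.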
Operational Assurance of Non-Repudiation
When a customer or partner attempts to repudiate a transaction,
someone needs
to investigate it. If repudiation attempts are frequent, you may
need dedicated
people, and those people might require specialized tools.
Non-Repudiation Technologies
Technologies you can use to address repudiation include:
■ Logging
■ Log analysis tools
■ Secured log storage
■ Digital signatures
■ Secure time stamps
■ Trusted third parties
■ Hash trees
■ The fraud-prevention tools described earlier in this chapter
Confidentiality: Mitigating Information Disclosure
Information disclosure can happen with information at rest (in storage) or in motion (over a network). The information disclosed can range from the content of communication to the existence of an entity with which someone is communicating.
Tactics for Confidentiality
Much like with integrity, there are two main ways to prevent information disclosure: Within the confines of a system, you can use ACLs, and outside of it you must use cryptography.
If what must be protected is the content of the communication, then traditional cryptography will be sufficient. If you need to hide who is communicating with whom and how often, you'll need a system that protects that data, such as a cryptographic mix or onion network. If you must hide the fact that communication is taking place at all, steganography will be required.
Implementing Confidentiality
If your system can act as a reference monitor and control all
access to the data,
you can use a permissions system. Otherwise, you’ll need to
encrypt either the
data or its “container.” The data might be a fi le on disk, a
record in a database, or
an e-mail message as it transits over the network. The container
might be a fi le
system, database, or network channel, such as all e-mail
between two systems,
or all packets between a web client and a web server.
In each cryptographic case, you have to consider who needs
access to the keys
for encrypting and the decrypting data. For fi le encryption, that
might be as
simple as asking the operating system to securely store the key
for the user so
that the user can get to it later. Also, note that encrypted data
is not integrity
controlled. The details can be complex and tricky, but consider
a database of
salaries, where the cells are encrypted. You don’t need to know
the CEO’s sal-
ary to know that replacing your salary with it is likely a good
thing (for you);
and if there’s no integrity control, replacing the encrypted value
of your salary
with the CEO’s salary will do just fi ne.
An important subset of information disclosure cases related to the storage of passwords or backup authentication mechanisms is considered in depth in Chapter 14.
Operational Assurance of Confidentiality
It may be possible to add ACLs to an already developed system, or to use chroot or similar sandboxes to restrict what it can access. On Windows, the addition of a SID to a program and an inherited deny ACL for that SID may help (or it may break things). It is usually possible to add a disk or file encryption layer to protect information at rest from disclosure. Disk crypto will work “by default” with all the usual caveats about how keys are managed. It works for adversarial custody of the machine, but not if the password is written down or otherwise stored with the machine. With regard to a network, it may be possible to use SSH or SSL tunneling or IPsec to address network information disclosure issues.
Chapter 8 ■ Defensive Tactics and Technologies
Confidentiality Technologies
Technologies for confidentiality include:
■ Protecting files:
■ ACLs/permissions
■ Encryption
■ Appropriate key management
■ Protecting network data:
■ Encryption
■ Appropriate key management
■ Protecting communication headers or the fact of communication:
■ Mix networks
■ Onion routing
■ Steganography
NOTE In the preceding lists, “appropriate key management” is not quite a technology, but is so important that it’s included.
Availability: Mitigating Denial of Service
Denial-of-service attacks work by exhausting some resource. Traditionally, those resources are CPU, memory (both RAM and hard drive space can be exhausted), and bandwidth. Denial-of-service attacks can also exhaust human availability. Consider trying to call the reservations line of a very exclusive restaurant: the French Laundry in Napa Valley books all its tables within 5 minutes of the phone line opening every day (for a day 30 days in the future). The resource under contention is the phone lines, and in particular the people answering them.
Tactics for Availability
There are two forms of denial-of-service attacks: brute force and clever. Using the restaurant example, brute force involves bringing 100 people to a restaurant that can seat only 25. Clever attacks bring 20 people, each of whom makes an ever-escalating list of requests and changes, and runs the staff ragged. In the online world, brute force attacks on networks are somewhat common under the name DDoS (Distributed Denial of Service). They can also be carried out against CPU (for example, while(1) fork()) or disk. It’s simple to construct a small zip file that will expand to whatever limit might be in place: the maximum size of a file or space on the file system. Recall that a zip file is structured to describe the contents of the real file as simply as possible, such as 65,535 0s. That three-byte description will expand to 64K, for a magnification effect of over 21,000, which is awfully cool if you’re an attacker.
Clever denial-of-service attacks involve a small amount of work by an attacker that causes you to do a lot of work. For example, when connecting to an SSL v2 server, the client sends a client master key challenge, which is a random key encrypted such that the server does (relatively) expensive public key operations to decrypt it. The client does very little work compared to the server. This can be partially addressed in a variety of ways, most notably the Photuris key management protocol. The core of such protocols is proof that the client has done more work than the server, and the body of approaches is called proof of work. However, in a world of abundant bots and volunteers to run DDoS software for political causes, Ben Laurie and Richard Clayton have shown reasonably conclusively that “Proof-of-Work Proves Not to Work” (in a paper of that name [Laurie]).
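A hashcash-style proof of work looks roughly like the following sketch: the server hands out a challenge, the client must find a nonce whose hash clears a difficulty target, and verification costs the server a single hash. Per the Laurie and Clayton result, this shifts costs rather than eliminating them, so treat it as a sketch of the mechanism, not a recommendation; the challenge bytes and difficulty are illustrative.

```python
import hashlib
import itertools

# Hashcash-style proof of work: the client searches for a nonce whose
# SHA-256 output falls below a target with `bits` leading zero bits.
def solve(challenge: bytes, bits: int) -> int:
    target = 1 << (256 - bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify(challenge: bytes, bits: int, nonce: int) -> bool:
    # The server's side: one hash, regardless of how hard the search was.
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - bits))

nonce = solve(b"server-challenge-123", 12)  # ~4,096 hashes for the client
assert verify(b"server-challenge-123", 12, nonce)  # one hash for the server
```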
A second important strategy for defending against denial-of-service attacks is to ensure your attacker can receive data from you. For example, defenses against SYN flooding attacks now take this form. In a SYN flood attack, a host receives a lot of connection attempts (TCP SYNchronize) and it needs to keep track of each one to set up new connections. By sending a slew of those, operating systems in the 1990s could be run out of memory in the fixed-size buffers allocated to track SYNs, and no new connections could be established. Modern TCP stacks calculate certain parts of their response to a SYN packet using some cryptography. They maintain no state for incoming packets, and use the cryptographic tools to validate that new connections are real (Rescorla, 2003).
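A simplified SYN-cookie-style computation might look like this sketch: the server derives its initial sequence number from the connection's addressing information and a secret, keeps no per-SYN state, and recomputes the value when the final ACK arrives. Real stacks also fold a coarse timestamp and an MSS encoding into the cookie; the field handling and addresses here are illustrative.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(16)  # rotated periodically in real stacks

def syn_cookie(src: str, sport: int, dst: str, dport: int) -> int:
    # Derive a 32-bit initial sequence number from the 4-tuple; no state
    # is stored for the half-open connection.
    msg = f"{src}:{sport}->{dst}:{dport}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big")

def ack_is_valid(src: str, sport: int, dst: str, dport: int, ack: int) -> bool:
    # A legitimate client echoes our sequence number + 1 in its ACK,
    # proving it received our SYN-ACK at the claimed source address.
    return ack == (syn_cookie(src, sport, dst, dport) + 1) % 2**32

seq = syn_cookie("203.0.113.7", 49152, "192.0.2.1", 443)
assert ack_is_valid("203.0.113.7", 49152, "192.0.2.1", 443, (seq + 1) % 2**32)
```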
Implementing Availability
If you’re implementing a system, consider what resources an attacker might consume, and look for ways to limit those resources on a per-user basis. Understand that there are limits to what you can achieve when dealing with systems on the other side of a trust boundary, and some of the response needs to be operational. Ensure that the operators have such mechanisms.
Operational Assurance of Availability
Addressing brute force denial-of-service attacks is simple: acquire more resources so that they don’t run out, or apply limits so that one bad apple can’t spoil things for others. For example, multi-user operating systems implement quota systems, and business ISPs may be able to filter traffic coming from certain sources.
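A per-user quota of the kind described is often implemented as a token bucket: each client may spend up to `capacity` requests in a burst and regains `rate` tokens per second afterward, so one heavy client can't starve the rest. The parameter values below are illustrative.

```python
# Minimal token-bucket rate limiter, one bucket per client.
class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill based on elapsed time, capped at capacity, then spend
        # one token per permitted request.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=2.0, capacity=5.0)
burst = [bucket.allow(now=0.0) for _ in range(7)]
assert burst == [True] * 5 + [False] * 2  # the burst is capped at 5
assert bucket.allow(now=1.0)              # tokens refill as time passes
```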
Addressing clever attacks is generally in the realm of implementation, not operations.
Availability Technologies
Technologies for availability include:
■ ACLs
■ Filters
■ Quotas (rate limiting, thresholding, throttling)
■ High-availability design
■ Extra bandwidth (rate limiting, throttling)
■ Cloud services
Authorization: Mitigating Elevation of Privilege
Elevation of privilege threats are one category of unauthorized use, and the only one addressed in this section. The overall question of designing authorization systems fills other books.
Tactics for Authorization
As discussed in the section “Implementing Integrity,” having a reference monitor that can control access between objects is a precursor to avoiding several forms of a problem, including elevation of privilege. Limiting the attack surface makes the problem more tractable. For example, limiting the number of setuid programs limits the opportunity for a local user to become root. (Technically, programs can be setuid to something other than root, but generally those other accounts are also privileged.) Each program should do a small number of things, and carefully manage its input, including user input, environment, and so on. Each should be sandboxed to the extent that the system supports it.
Ensure that you have layers of defense, such that an anonymous Internet user can’t elevate to administrator with a single bug. You can do this by having the code that listens on the network run as a limited user. An attacker who exploits a bug will not have complete run of the system. (If they’re a normal user, they may well have easy access to many elevation paths, so lock down the account.)
The permission system needs to be comprehensible, both to administrators trying to check things and to people trying to set things. A permission system that’s hard to use often results in people incorrectly setting permissions, (technically) enabling actions that policy and intent mean to forbid.
Implementing Authorization
Having limited the attack surface, you’ll need to very carefully manage the input you accept at each point on the attack surface. Ensure that you know what you want to accept and how you’re going to use that input. Reject anything that doesn’t match, rather than trying to make a complete list of bad characters. Also, if you get a non-match, reject it rather than trying to clean it up.
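In code, “reject anything that doesn’t match” means validating against an explicit allow-list pattern and failing closed, as in this sketch. The username policy itself is an invented example; the point is the shape: one anchored pattern, and a hard rejection with no attempt to repair the input.

```python
import re

# Allow-list: lowercase letter, then 2-31 more lowercase letters, digits,
# or underscores. \A and \Z anchor the whole string, so nothing can be
# smuggled in before or after the matching portion.
USERNAME = re.compile(r"\A[a-z][a-z0-9_]{2,31}\Z")

def validate_username(raw: str) -> str:
    # Reject on non-match; never strip or rewrite the input into shape.
    if USERNAME.match(raw) is None:
        raise ValueError("rejected: input does not match the allowed form")
    return raw

assert validate_username("alice_01") == "alice_01"
for bad in ("Al", "bob;rm -rf /", "a" * 40, "../etc/passwd"):
    try:
        validate_username(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"should have been rejected: {bad!r}")
```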
Operational Assurance of Authorization
Operational details, such as “we need to expose this to the Internet,” can often lead those deploying technology to want to improve their defensive stance. This usually involves adding what can be referred to as defense in depth or layered defense. There are several ways to do this.
First, run as a normal or limited user, not as administrator/root. While technically that’s not a mitigation against an elevation-of-privilege threat, but a harbinger of such, it’s in line with the “principle of least privilege.” Each program should run as its own limited user. When Unix made “nobody” the default account for services, the nobody account ended up with tremendous levels of authorization. Second, apply all the sandboxing you can.
Authorization Technologies
Technologies for improving authorization include:
■ ACLs
■ Group or role membership
■ Role-based access control
■ Claims-based access control
■ Windows privileges (runas)
■ Unix sudo
■ chroot, AppArmor, or other Unix sandboxes
■ The “MOICE” Windows sandbox pattern
■ Input validation for a defined purpose
NOTE MOICE is the “Microsoft Office Isolated Conversion Environment.” The name comes from the problem that led to the pattern being invented, but the approach can now be considered a pattern for sandboxing on Windows. For more on MOICE, see (LeBlanc, 2007).
NOTE Many Windows privileges are functionally equivalent to administrator, and may not be as helpful as you desire. See (Margosis, 2006) for more details.
Tactic and Technology Traps
There are two places where it’s easy to get pulled into wasting time when working through these technologies and tactics. The first distraction is risk management. The tactics and technologies in this chapter aren’t the only ways to address threats, but they are the best place to start. When you can use them, they will be easier to implement and work better than more complex or nuanced risk management approaches. For example, if you can address a threat by changing a network endpoint to a local endpoint, there’s no point in engaging in the more time-consuming risk management approaches covered in the next chapter. The second distraction is trying to categorize threats. If you found a threat via brainstorming or just the free flow of ideas, don’t let the organization of this chapter fool you into thinking you should try to categorize that threat. Instead, focus on finding the best way to address it. (Teams can spend longer in debate around categorization than it would take to implement the fix they identified, such as changing permissions on a file.)
Addressing Threats with Patterns
In his book A Pattern Language, architect Christopher Alexander and his colleagues introduced the concept of architectural patterns (Alexander, 1977). A pattern is a way of expressing how experts capture ways of solving recurring problems. Patterns have since been adapted to software. There are well-understood development patterns, such as the three-tier enterprise app.
Security patterns seem like a natural way to group tactics and technologies for addressing security problems, and to communicate about them as something larger. You can create and distribute patterns in a variety of ways, and this section discusses some of them. However, in practice, these patterns have not been popular. The reasons for this are not clear, and those investing in using patterns to address security problems would likely benefit from studying the factors that have limited their popularity.
Some of those factors might include engineers not knowing when to reach for such a text, or the presentation of security patterns as a distinct subset, apart from other patterns. At least one web patterns book (Van Duyne, 2007) includes a chapter on security patterns. Embedding security patterns where non-specialists are likely to find them seems like a good pattern.
Standard Deployments
In many larger organizations, an operations group will have a standard way to deploy systems, or possibly several standard ways, depending on the data’s sensitivity. In these cases, the operations group can document what sorts of threats their standard deployment mitigates, and provide that document as part of their “on-boarding” process. For example, a standard data center at an organization might include defenses against DDoS, or state that “network information disclosure is an accepted risk for risk categories 1–3.”
Addressing CAPEC Threats
CAPEC (MITRE’s Common Attack Pattern Enumeration and Classification) is primarily a collection of attack patterns, but most CAPEC threat patterns include defenses. This chapter has primarily organized threats according to STRIDE. If you are using CAPEC, each CAPEC pattern includes advice about how to address it in its “
httpswww.huffpost.comentryonline-dating-vs-offline_b_4037867.docx

More Related Content

PDF
A World-Class Dating Coach
PDF
5 steps to_online_dating_success
PDF
5 steps to online dating success
PDF
Online Dating Flipboard - Scott O´Brien
PDF
Essays On Social Networking. ️ Pros of social networking essay. Social media ...
PDF
A Short Essay About Christmas
PDF
Online Romance is a Myth.
PPTX
Love on the web presentation
A World-Class Dating Coach
5 steps to_online_dating_success
5 steps to online dating success
Online Dating Flipboard - Scott O´Brien
Essays On Social Networking. ️ Pros of social networking essay. Social media ...
A Short Essay About Christmas
Online Romance is a Myth.
Love on the web presentation

Similar to httpswww.huffpost.comentryonline-dating-vs-offline_b_4037867.docx (20)

PDF
Pin On Teaching Ideas
PDF
How To Write An Essay In English. Online assignment writing service.
PPTX
Dating app
DOCX
Information Technology and EthicsSocial Networking and Business.docx
DOCX
All this just need to be reworded. NOT APA nor MLA needed.Questi.docx
PDF
Sample Business Essay
PPTX
Cloud computing & disadvantages of technology
PDF
PDF
PDF
PDF
C Level
PDF
Romantic Relationships
PDF
TIPS FOR WRITING A RESEARCH PAPER Scie
PDF
E book the-art_of_internet_dating
PDF
IS Undergrads Class 4
PDF
Cision Influencer Lists - Technology
DOCX
PHL 111 Milestone Two WorksheetUse the following guiding q.docx
PDF
July 2023 Top Cyber News MAGAZINE. Dr. Djalila RAHALI on Human Factors.pdf
PDF
Husband And Wife Store. Com manual
PDF
Electronic Media Essay.pdf
Pin On Teaching Ideas
How To Write An Essay In English. Online assignment writing service.
Dating app
Information Technology and EthicsSocial Networking and Business.docx
All this just need to be reworded. NOT APA nor MLA needed.Questi.docx
Sample Business Essay
Cloud computing & disadvantages of technology
C Level
Romantic Relationships
TIPS FOR WRITING A RESEARCH PAPER Scie
E book the-art_of_internet_dating
IS Undergrads Class 4
Cision Influencer Lists - Technology
PHL 111 Milestone Two WorksheetUse the following guiding q.docx
July 2023 Top Cyber News MAGAZINE. Dr. Djalila RAHALI on Human Factors.pdf
Husband And Wife Store. Com manual
Electronic Media Essay.pdf
Ad

More from pooleavelina (20)

DOCX
httpswww.azed.govoelaselpsUse this to see the English Lang.docx
DOCX
httpscdnapisec.kaltura.comindex.phpextwidgetpreviewpartner_.docx
DOCX
httpsifes.orgsitesdefaultfilesbrijuni18countryreport_fi.docx
DOCX
httpfmx.sagepub.comField Methods DOI 10.117715258.docx
DOCX
httpsiexaminer.orgfake-news-personal-responsibility-must-trump.docx
DOCX
http1500cms.comBECAUSE THIS FORM IS USED BY VARIOUS .docx
DOCX
httpswww.medicalnewstoday.comarticles323444.phphttpsasco.docx
DOCX
httpstheater.nytimes.com mem theater treview.htmlres=9902e6.docx
DOCX
httpsfitsmallbusiness.comemployee-compensation-planThe pu.docx
DOCX
httpsdoi.org10.11770002764219842624American Behaviora.docx
DOCX
httpsdoi.org10.11770896920516649418Critical Sociology.docx
DOCX
httpsdoi.org10.11770894318420903495Nursing Science Qu.docx
DOCX
httpswww.youtube.comwatchtime_continue=8&v=rFV0aes0vYAN.docx
DOCX
httphps.orgdocumentspregnancy_fact_sheet.pdfhttpswww.docx
DOCX
httpswww.worldbank.orgencountryvietnamoverview---------.docx
DOCX
HTML WEB Page solutionAbout.htmlQuantum PhysicsHomeServicesAbou.docx
DOCX
httpswww.vitalsource.comproductscomparative-criminal-justice-.docx
DOCX
httpswww.nationaleatingdisorders.orglearnby-eating-disordera.docx
DOCX
httpswww.youtube.comwatchtime_continue=59&v=Bh_oEYX1zNM&featu.docx
DOCX
httpswww.fbi.govnewspressrelpress-releasesfbi-expects-a-ris.docx
httpswww.azed.govoelaselpsUse this to see the English Lang.docx
httpscdnapisec.kaltura.comindex.phpextwidgetpreviewpartner_.docx
httpsifes.orgsitesdefaultfilesbrijuni18countryreport_fi.docx
httpfmx.sagepub.comField Methods DOI 10.117715258.docx
httpsiexaminer.orgfake-news-personal-responsibility-must-trump.docx
http1500cms.comBECAUSE THIS FORM IS USED BY VARIOUS .docx
httpswww.medicalnewstoday.comarticles323444.phphttpsasco.docx
httpstheater.nytimes.com mem theater treview.htmlres=9902e6.docx
httpsfitsmallbusiness.comemployee-compensation-planThe pu.docx
httpsdoi.org10.11770002764219842624American Behaviora.docx
httpsdoi.org10.11770896920516649418Critical Sociology.docx
httpsdoi.org10.11770894318420903495Nursing Science Qu.docx
httpswww.youtube.comwatchtime_continue=8&v=rFV0aes0vYAN.docx
httphps.orgdocumentspregnancy_fact_sheet.pdfhttpswww.docx
httpswww.worldbank.orgencountryvietnamoverview---------.docx
HTML WEB Page solutionAbout.htmlQuantum PhysicsHomeServicesAbou.docx
httpswww.vitalsource.comproductscomparative-criminal-justice-.docx
httpswww.nationaleatingdisorders.orglearnby-eating-disordera.docx
httpswww.youtube.comwatchtime_continue=59&v=Bh_oEYX1zNM&featu.docx
httpswww.fbi.govnewspressrelpress-releasesfbi-expects-a-ris.docx
Ad

Recently uploaded (20)

PPTX
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PDF
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
PPTX
Institutional Correction lecture only . . .
PDF
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
PDF
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PPTX
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
PPTX
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PPTX
Final Presentation General Medicine 03-08-2024.pptx
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
PPTX
human mycosis Human fungal infections are called human mycosis..pptx
PPTX
Lesson notes of climatology university.
PPTX
master seminar digital applications in india
PDF
O7-L3 Supply Chain Operations - ICLT Program
PDF
Anesthesia in Laparoscopic Surgery in India
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PPTX
Cell Structure & Organelles in detailed.
PDF
RMMM.pdf make it easy to upload and study
Introduction-to-Literarature-and-Literary-Studies-week-Prelim-coverage.pptx
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
Institutional Correction lecture only . . .
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
Tissue processing ( HISTOPATHOLOGICAL TECHNIQUE
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
Final Presentation General Medicine 03-08-2024.pptx
Supply Chain Operations Speaking Notes -ICLT Program
STATICS OF THE RIGID BODIES Hibbelers.pdf
human mycosis Human fungal infections are called human mycosis..pptx
Lesson notes of climatology university.
master seminar digital applications in india
O7-L3 Supply Chain Operations - ICLT Program
Anesthesia in Laparoscopic Surgery in India
Pharmacology of Heart Failure /Pharmacotherapy of CHF
Cell Structure & Organelles in detailed.
RMMM.pdf make it easy to upload and study

httpswww.huffpost.comentryonline-dating-vs-offline_b_4037867.docx

  • 1. https://www.huffpost.com/entry/online-dating-vs- offline_b_4037867 For your initial post, provide a sentence to share which article you are referring to so that you can best communicate with your peers. Include a link to your selection. · Explain how the argument contains or avoids bias. i. Provide specific examples to support your explanation. ii. What assumptions does it make? · Discuss the credibility of the overall argument. i. Were the resources the argument was built upon credible? ii. Does the credibility support or undermine the article’s claims in any important ways? In response to your peers, provide an additional resource to support or refute the argument your peer makes. Do you agree with their claims of credibility? Are there any other possible bias not identified? Response #1 Allysa Tantala posted Sep 22, 2019 10:17 PM Subscribe The article that I am looking at is Online Dating Vs. Offline Dating: Pros and Cons.It was written by Julie Spira, an online dating expert, bestselling author, and CEO of Cyber-Dating Expert. The name of the article is spot on in describing what it is about. The author goes through the pros and cons of dating online and offline in today’s day and age. The author avoids bias because she looks at both options in both their positive and negative attributes. She comes at the issues from both angles and I believe she does a very good job at remaining unbiased. She states that “if you're serious about meeting someone special, you must include a combination of both online and offline dating in your routine” (Spira, 2013, par. 18). She’s stating that both options have their pros and cons and that really
  • 2. a combination of both is needed to find someone. The only bias I could see anyone pointing out would be that she is a woman, so you do not get the male perspective on these things. That being said, I one hundred percent think she covers all of the questions people may have about online and offline dating in today’s world. The only assumption being made here is that the reader wants to be out in the dating world and they need to know what is best. But, the title of the article is pretty self- explanatory so if someone did not want to know these things, they would not have to waste their time reading it all because they could tell what it would be about by the title. The resource that she used was herself, and like I stated above, she is an online dating expert, bestselling author, and CEO of Cyber-Dating Expert; so she is more than qualified to give her perspective on these issues. I find her to be credible and thought provoking. Her credibility supports everything the article says and makes the reader feel like they are being told the truth by someone who completely understands all of the pros and cons. Resource: Spira, J. (2013, December 3). Online Dating Vs. Offline Dating: Pros and Cons. Retrieved from https://www.huffpost.com/entry/online-dating-vs- offline_b_4037867 Response #2 Jennifer Caforio posted Sep 22, 2019 3:07 PM Subscribe Hello all, I chose to look at online vs. offline dating. The article I will be looking at is Online Dating Vs. Offline Dating: Pros and Cons. The article I have selected avoids bias. It does this by offering pros and cons of both dating platforms. She does not claim that one is better, rather, they are both very important to today’s society when looking for a partner (Spira, 2013). Spira says “As
  • 3. one who believes in casting a wide net, I tell singles that you really need to do both. (Spira, 2013) I feel that this sentence clears any bias of any dating platform. She also goes on to list the pros and cons of online dating and the pros and cons on traditional dating. The listing of both platforms shows the reader the complete picture, this can lead the reader to decide independently. This article makes the assumption that online dating is just as important as offline dating. This platform is valuable and useful. Although this article is well written and contains no bias it is not very credible. It does express valuable and sensible claims; however, it does not support them. the resources in the article were not credible in that no other resources are listed. Julia Spira lists herself as an online dating expert, she does appear to have many years’ experience in the area. She references (in general) other dating experts or sites that agree with her views, she does not list them with a name or title of works published. The credibility of this article undermines the argument’s claims by not listing any other credible resources on the subject. Although she presents logical thinking and non-bias testimony it appears it is the opinion of only herself. Resources Spira, J. (2013, December 3). Online Dating Vs. Offline Dating: Pros and Cons. Retrieved from https://www.huffpost.com/entry/online-dating-vs- offline_b_4037867 www.it-ebooks.info http://www.it-ebooks.info/
  • 4. fl ast.indd 11:57:23:AM 01/17/2014 Page xx www.it-ebooks.info http://www.it-ebooks.info/ ffi rs.indd 12:57:18:PM 01/17/2014 Page i Threat Modeling Designing for Security Adam Shostack www.it-ebooks.info http://www.it-ebooks.info/ ffi rs.indd 12:57:18:PM 01/17/2014 Page ii Threat Modeling: Designing for Security Published by John Wiley & Sons, Inc. 10475 Crosspoint BoulevardIndianapolis, IN 46256 www.wiley.com Copyright © 2014 by Adam Shostack Published by John Wiley & Sons, Inc., Indianapolis, Indiana
  • 5. Published simultaneously in Canada ISBN: 978-1-118-80999-0 ISBN: 978-1-118-82269-2 (ebk) ISBN: 978-1-118-81005-7 (ebk) Manufactured in the United States of America 10 9 8 7 6 5 4 3 2 1 No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permis- sion of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley .com/go/permissions. Limit of Liability/Disclaimer of Warranty: The publisher and
  • 6. the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifi cally disclaim all warranties, including without limitation warranties of fi tness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Web site is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that Internet websites listed in this work may have changed or disappeared between when this work was written and when it is read. For general information on our other products and services please contact our Customer Care Department
  • 7. within the United States at (877) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com. Library of Congress Control Number: 2013954095 Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affi liates, in the United States and other countries, and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book. www.it-ebooks.info http://www.it-ebooks.info/ ffi rs.indd 12:57:18:PM 01/17/2014 Page iii For all those striving to deliver more secure systems
  • 8. www.it-ebooks.info http://www.it-ebooks.info/ iv ffi rs.indd 12:57:18:PM 01/17/2014 Page iv Credits Executive Editor Carol Long Project Editors Victoria Swider Tom Dinse Technical Editor Chris Wysopal Production Editor Christine Mugnolo Copy Editor Luann Rouff Editorial Manager Mary Beth Wakefi eld Freelancer Editorial Manager Rosemarie Graham Associate Director of Marketing
  • 9. David Mayhew Marketing Manager Ashley Zurcher Business Manager Amy Knies Vice President and Executive Group Publisher Richard Swadley Associate Publisher Jim Minatel Project Coordinator, Cover Todd Klemme Technical Proofreader Russ McRee Proofreader Nancy Carrasco Indexer Robert Swanson Cover Image Courtesy of Microsoft Cover Designer Wiley www.it-ebooks.info http://www.it-ebooks.info/
About the Author

Adam Shostack is currently a program manager at Microsoft. His security roles there have included security development processes, usable security, and attack modeling. His attack-modeling work led to security updates for Autorun being delivered to hundreds of millions of computers. He shipped the SDL Threat Modeling Tool and the Elevation of Privilege threat modeling game. While doing security development process work, he delivered threat modeling training across Microsoft and its partners and customers. Prior to Microsoft, he was an executive at a number of successful information security and privacy startups. He helped found the CVE, the Privacy Enhancing Technologies Symposium, and the International Financial Cryptography Association. He has been a consultant to banks, hospitals, startups, and established software companies. For the first several years of his career, he was a systems manager for a medical research lab. Shostack is a prolific author, blogger, and public speaker. With Andrew Stewart, he co-authored The New School of Information Security (Addison-Wesley, 2008).

About the Technical Editor

Chris Wysopal, Veracode’s CTO and Co-Founder, is responsible for the company’s software security analysis capabilities. In 2008 he was named one of InfoWorld’s Top 25 CTOs and one of the 100 most influential people in IT by eWeek. One of the original vulnerability researchers and a member of L0pht Heavy Industries, he has testified on Capitol Hill in the US on the subjects of government computer security and how vulnerabilities are discovered in software. He is an author of L0phtCrack and netcat for Windows. He is the lead author of The Art of Software Security Testing (Addison-Wesley, 2006).

Acknowledgments

First and foremost, I’d like to thank countless engineers at Microsoft and elsewhere who have given me feedback about their experiences threat modeling. I wouldn’t have had the opportunity to have so many open and direct conversations without the support of Eric Bidstrup and Steve Lipner, who on my first day at Microsoft told me to go “wallow in the problem for a while.” I don’t think either expected “a while” to be quite so long. Nearly eight years later with countless deliverables along the way, this book is my most
complete answer to the question they asked me: “How can we get better threat models?” Ellen Cram Kowalczyk helped me make the book a reality in the Microsoft context, gave great feedback on both details and aspects that were missing, and also provided a lot of the history of threat modeling from the first security pushes through the formation of the SDL, and she was a great manager and mentor. Ellen and Steve Lipner were also invaluable in helping me obtain permission to use Microsoft documents.

The Elevation of Privilege game that opens this book owes much to Jacqueline Beauchere, who saw promise in an ugly prototype called “Threat Spades,” and invested in making it beautiful and widely available. The SDL Threat Modeling Tool might not exist if Chris Peterson hadn’t given me a chance to build a threat modeling tool for the Windows team to use. Ivan Medvedev, Patrick McCuller, Meng Li, and Larry Osterman built the first version of that tool. I’d like to thank the many engineers in Windows, and later across Microsoft, who provided bug reports and suggestions for improvements in the beta days, and acknowledge all those who just flamed at us, reminding us of the importance of getting threat modeling right. Without that tool, my experience and breadth in threat modeling would be far poorer.

Larry Osterman, Douglas MacIver, Eric Douglas, Michael Howard, and Bob Fruth gave me hours of their time and experience in understanding threat modeling at Microsoft. Window Snyder’s perspective as I started the Microsoft job has been invaluable over the years. Knowing when you’re done . . . well, this book is nearly done. Rob Reeder was a great guide to the field of usable security, and Chapter 15 would look very different if not for our years of collaboration. I can’t discuss usable security without thanking Lorrie Cranor for her help on that topic, but also for the chance to keynote the Symposium on Usable Privacy and Security, which led me to think about usable engineering advice, a perspective that is now suffused throughout this book. Andy Stiengrubel, Don Ankney, and Russ McRee all taught me important lessons related to operational threat modeling, and how the trade-offs change as you change context. Guys, thank you for beating on me; those lessons now permeate many chapters. Alec Yasinac, Harold Pardue, and Jeff Landry were generous with their time discussing their attack tree experience, and Chapters 4 and 17 are better for those conversations. Joseph Lorenzo Hall was also a gem in helping with attack trees. Wendy Nather argued strongly that assets and attackers are great ways to make threats real, and thus help overcome resistance to fixing them. Rob Sama checked the Acme financials example from a CPA’s perspective, correcting many of my errors. Dave Awksmith graciously allowed me to include his threat personas as a complete appendix. Jason Nehrboss gave me some of the best feedback I’ve ever received on very early chapters.

I’d also like to acknowledge Jacob Appelbaum, Crispin Cowan, Dana Epp (for years of help, on both the book and tools), Jeremi Gosney, Yoshi Kohno, David LeBlanc, Marsh Ray, Nick Mathewson, Tamara McBride, Russ McRee, Talhah Mir, David Mortman, Alec Muffet, Ben Rothke, Andrew Stewart, and Bryan Sullivan for helpful feedback on drafts and/or ideas that made it into the book in a wide variety of ways. Of course, none of those acknowledged in this section are responsible for the errors which doubtless crept in or remain.

Writing this book “by myself” (an odd phrase given everyone I’m acknowledging) makes me miss working with Andrew Stewart, my partner in writing The New School of Information Security. Especially since people sometimes attribute that book to me, I want to be public about how much I missed his collaboration in this project.

This book wouldn’t be in the form it is were it not for Bruce Schneier’s willingness to make an introduction to Carol Long, and Carol’s willingness to pick up the book. It wasn’t always easy to read the feedback and suggested changes from my excellent project editor, Victoria Swider, but this book is better where I did. Tom Dinse stepped in as the project ended and masterfully took control of a very large number of open tasks, bringing them to resolution on
a tight schedule. Lastly, and most importantly, thank you to Terri, for all your help, support, and love, and for putting up with “it’s almost done” for a very, very long time.

—Adam Shostack

Contents

Introduction
Part I: Getting Started
Chapter 1: Dive In and Threat Model!
  Learning to Threat Model
  What Are You Building?
  What Can Go Wrong?
  Addressing Each Threat
  Checking Your Work
  Threat Modeling on Your Own
  Checklists for Diving In and Threat Modeling
  Summary
Chapter 2: Strategies for Threat Modeling
  “What’s Your Threat Model?”
  Brainstorming Your Threats
  Brainstorming Variants
  Literature Review
  Perspective on Brainstorming
  Structured Approaches to Threat Modeling
  Focusing on Assets
  Focusing on Attackers
  Focusing on Software
  Models of Software
  Types of Diagrams
  Trust Boundaries
  What to Include in a Diagram
  Complex Diagrams
  Labels in Diagrams
  Color in Diagrams
  Entry Points
  Validating Diagrams
  Summary
Part II: Finding Threats
Chapter 3: STRIDE
  Understanding STRIDE and Why It’s Useful
  Spoofing Threats
  Spoofing a Process or File on the Same Machine
  Spoofing a Machine
  Spoofing a Person
  Tampering Threats
  Tampering with a File
  Tampering with Memory
  Tampering with a Network
  Repudiation Threats
  Attacking the Logs
  Repudiating an Action
  Information Disclosure Threats
  Information Disclosure from a Process
  Information Disclosure from a Data Store
  Information Disclosure from a Data Flow
  Denial-of-Service Threats
  Elevation of Privilege Threats
  Elevate Privileges by Corrupting a Process
  Elevate Privileges through Authorization Failures
  Extended Example: STRIDE Threats against Acme-DB
  STRIDE Variants
  STRIDE-per-Element
  STRIDE-per-Interaction
  DESIST
  Exit Criteria
  Summary
Chapter 4: Attack Trees
  Working with Attack Trees
  Using Attack Trees to Find Threats
  Creating New Attack Trees
  Representing a Tree
  Human-Viewable Representations
  Structured Representations
  Example Attack Tree
  Real Attack Trees
  Fraud Attack Tree
  Election Operations Assessment Threat Trees
  Mind Maps
  Perspective on Attack Trees
  Summary
Chapter 5: Attack Libraries
  Properties of Attack Libraries
  Libraries and Checklists
  Libraries and Literature Reviews
  CAPEC
  Exit Criteria
  Perspective on CAPEC
  OWASP Top Ten
  Summary
Chapter 6: Privacy Tools
  Solove’s Taxonomy of Privacy
  Privacy Considerations for Internet Protocols
  Privacy Impact Assessments (PIA)
  The Nymity Slider and the Privacy Ratchet
  Contextual Integrity
  Contextual Integrity Decision Heuristic
  Augmented Contextual Integrity Heuristic
  Perspective on Contextual Integrity
  LINDDUN
  Summary
Part III: Managing and Addressing Threats
Chapter 7: Processing and Managing Threats
  Starting the Threat Modeling Project
  When to Threat Model
  What to Start and (Plan to) End With
  Where to Start
  Digging Deeper into Mitigations
  The Order of Mitigation
  Playing Chess
  Prioritizing
  Running from the Bear
  Tracking with Tables and Lists
  Tracking Threats
  Making Assumptions
  External Security Notes
  Scenario-Specific Elements of Threat Modeling
  Customer/Vendor Trust Boundary
  New Technologies
  Threat Modeling an API
  Summary
Chapter 8: Defensive Tactics and Technologies
  Tactics and Technologies for Mitigating Threats
  Authentication: Mitigating Spoofing
  Integrity: Mitigating Tampering
  Non-Repudiation: Mitigating Repudiation
  Confidentiality: Mitigating Information Disclosure
  Availability: Mitigating Denial of Service
  Authorization: Mitigating Elevation of Privilege
  Tactic and Technology Traps
  Addressing Threats with Patterns
  Standard Deployments
  Addressing CAPEC Threats
  Mitigating Privacy Threats
  Minimization
  Cryptography
  Compliance and Policy
  Summary
Chapter 9: Trade-Offs When Addressing Threats
  Classic Strategies for Risk Management
  Avoiding Risks
  Addressing Risks
  Accepting Risks
  Transferring Risks
  Ignoring Risks
  Selecting Mitigations for Risk Management
  Changing the Design
  Applying Standard Mitigation Technologies
  Designing a Custom Mitigation
  Fuzzing Is Not a Mitigation
  Threat-Specific Prioritization Approaches
  Simple Approaches
  Threat-Ranking with a Bug Bar
  Cost Estimation Approaches
  Mitigation via Risk Acceptance
  Mitigation via Business Acceptance
  Mitigation via User Acceptance
  Arms Races in Mitigation Strategies
  Summary
Chapter 10: Validating That Threats Are Addressed
  Testing Threat Mitigations
  Test Process Integration
  How to Test a Mitigation
  Penetration Testing
  Checking Code You Acquire
  Constructing a Software Model
  Using the Software Model
  QA’ing Threat Modeling
  Model/Reality Conformance
  Task and Process Completion
  Bug Checking
  Process Aspects of Addressing Threats
  Threat Modeling Empowers Testing; Testing Empowers Threat Modeling
  Validation/Transformation
  Document Assumptions as You Go
  Tables and Lists
  Summary
Chapter 11: Threat Modeling Tools
  Generally Useful Tools
  Whiteboards
  Office Suites
  Bug-Tracking Systems
  Open-Source Tools
  TRIKE
  SeaMonster
  Elevation of Privilege
  Commercial Tools
  ThreatModeler
  Corporate Threat Modeller
  SecurITree
  Little-JIL
  Microsoft’s SDL Threat Modeling Tool
  Tools That Don’t Exist Yet
  Summary
Part IV: Threat Modeling in Technologies and Tricky Areas
Chapter 12: Requirements Cookbook
  Why a “Cookbook”?
  The Interplay of Requirements, Threats, and Mitigations
  Business Requirements
  Outshining the Competition
  Industry Requirements
  Scenario-Driven Requirements
  Prevent/Detect/Respond as a Frame for Requirements
  Prevention
  Detection
  Response
  People/Process/Technology as a Frame for Requirements
  People
  Process
  Technology
  Development Requirements vs. Acquisition Requirements
  Compliance-Driven Requirements
  Cloud Security Alliance
  NIST Publication 200
  PCI-DSS
  Privacy Requirements
  Fair Information Practices
  Privacy by Design
  The Seven Laws of Identity
  Microsoft Privacy Standards for Development
  The STRIDE Requirements
  Authentication
  Integrity
  Non-Repudiation
  Confidentiality
  Availability
  Authorization
  Non-Requirements
  Operational Non-Requirements
  Warnings and Prompts
  Microsoft’s “10 Immutable Laws”
  Summary
Chapter 13: Web and Cloud Threats
  Web Threats
  Website Threats
  Web Browser and Plugin Threats
  Cloud Tenant Threats
  Insider Threats
  Co-Tenant Threats
  Threats to Compliance
  Legal Threats
  Threats to Forensic Response
  Miscellaneous Threats
  Cloud Provider Threats
  Threats Directly from Tenants
  Threats Caused by Tenant Behavior
  Mobile Threats
  Summary
Chapter 14: Accounts and Identity
  Account Life Cycles
  Account Creation
  Account Maintenance
  Account Termination
  Account Life-Cycle Checklist
  Authentication
  Login
  Login Failures
  Threats to “What You Have”
  Threats to “What You Are”
  Threats to “What You Know”
  Authentication Checklist
  Account Recovery
  Time and Account Recovery
  E-mail for Account Recovery
  Knowledge-Based Authentication
  Social Authentication
  Attacker-Driven Analysis of Account Recovery
  Multi-Channel Authentication
  Account Recovery Checklist
  Names, IDs, and SSNs
  Names
  Identity Documents
  Social Security Numbers and Other National Identity Numbers
  Identity Theft
  Names, IDs, and SSNs Checklist
  Summary
Chapter 15: Human Factors and Usability
  Models of People
  Applying Behaviorist Models of People
  Cognitive Science Models of People
  Heuristic Models of People
  Models of Software Scenarios
  Modeling the Software
  Diagramming for Modeling the Software
  Modeling Electronic Social Engineering Attacks
  Threat Elicitation Techniques
  Brainstorming
  The Ceremony Approach to Threat Modeling
  Ceremony Analysis Heuristics
  Integrating Usability into the Four-Stage Framework
  Tools and Techniques for Addressing Human Factors
  Myths That Inhibit Human Factors Work
  Design Patterns for Good Decisions
  Design Patterns for a Kind Learning Environment
  User Interface Tools and Techniques
  Configuration
  Explicit Warnings
  Patterns That Grab Attention
  Testing for Human Factors
  Benign and Malicious Scenarios
  Ecological Validity
  Perspective on Usability and Ceremonies
  Summary
Chapter 16: Threats to Cryptosystems
  Cryptographic Primitives
  Basic Primitives
  Privacy Primitives
  Modern Cryptographic Primitives
  Classic Threat Actors
  Attacks against Cryptosystems
  Building with Crypto
  Making Choices
  Preparing for Upgrades
  Key Management
  Authenticating before Decrypting
  Things to Remember about Crypto
  Use a Cryptosystem Designed by Professionals
  Use Cryptographic Code Built and Tested by Professionals
  Cryptography Is Not Magic Security Dust
  Assume It Will All Become Public
  You Still Need to Manage Keys
  Secret Systems: Kerckhoffs and His Principles
  Summary
Part V: Taking It to the Next Level
Chapter 17: Bringing Threat Modeling to Your Organization
  How to Introduce Threat Modeling
  Convincing Individual Contributors
  Convincing Management
  Who Does What?
  Threat Modeling and Project Management
  Prerequisites
  Deliverables
  Individual Roles and Responsibilities
  Group Interaction
  Diversity in Threat Modeling Teams
  Threat Modeling within a Development Life Cycle
  Development Process Issues
  Organizational Issues
  Customizing a Process for Your Organization
  Overcoming Objections to Threat Modeling
  Resource Objections
  Value Objections
  Objections to the Plan
  Summary
Chapter 18: Experimental Approaches
  Looking in the Seams
  Operational Threat Models
  FlipIT
  Kill Chains
  The “Broad Street” Taxonomy
  Adversarial Machine Learning
  Threat Modeling a Business
  Threats to Threat Modeling Approaches
  Dangerous Deliverables
  Enumerate All Assumptions
  Dangerous Approaches
  How to Experiment
  Define a Problem
  Find Aspects to Measure and Measure Them
  Study Your Results
  Summary
Chapter 19: Architecting for Success
  Understanding Flow
  Flow and Threat Modeling
  Stymieing People
  Beware of Cognitive Load
  Avoid Creator Blindness
  Assets and Attackers
  Knowing the Participants
  Boundary Objects
  The Best Is the Enemy of the Good
  Closing Perspectives
  “The Threat Model Has Changed”
  On Artistry
  Summary
  Now Threat Model
Appendix A: Helpful Tools
  Common Answers to “What’s Your Threat Model?”
  Network Attackers
  Physical Attackers
  Attacks against People
  Supply Chain Attackers
  Privacy Attackers
  Non-Sentient “Attackers”
  The Internet Threat Model
  Assets
  Computers as Assets
  People as Assets
  Processes as Assets
  Intangible Assets
  Stepping-Stone Assets
Appendix B: Threat Trees
  STRIDE Threat Trees
  Spoofing an External Entity (Client/Person/Account)
  Spoofing a Process
  Spoofing of a Data Flow
  Tampering with a Process
  Tampering with a Data Flow
  Tampering with a Data Store
  Repudiation against a Process (or by an External Entity)
  Repudiation, Data Store
  Information Disclosure from a Process
  Information Disclosure from a Data Flow
  Information Disclosure from a Data Store
  Denial of Service against a Process
  Denial of Service against a Data Flow
  Denial of Service against a Data Store
  Elevation of Privilege against a Process
  Other Threat Trees
  Running Code
  Attack via a “Social” Program
  Attack with Tricky Filenames
Appendix C: Attacker Lists
  Attacker Lists
  Barnard’s List
  Verizon’s Lists
  OWASP
  Intel TARA
  Personas and Archetypes
  Aucsmith’s Attacker Personas
  Background and Definitions
  Personas
  David “Ne0phyate” Bradley – Vandal
  JoLynn “NightLily” Dobney – Trespasser
  Sean “Keech” Purcell – Defacer
  Bryan “CrossFyre” Walton – Author
  Lorrin Smith-Bates – Insider
  Douglas Hite – Thief
  Mr. Smith – Terrorist
  Mr. Jones – Spy
Appendix D: Elevation of Privilege: The Cards
  Spoofing
  Tampering
  Repudiation
  Information Disclosure
  Denial of Service
  Elevation of Privilege (EoP)
Appendix E: Case Studies
  The Acme Database
  Security Requirements
  Software Model
  Threats and Mitigations
  Acme’s Operational Network
  Security Requirements
  Operational Network
  Threats to the Network
  Phones and One-Time Token Authenticators
  The Scenario
  The Threats
  Possible Redesigns
  Sample for You to Model
  Background
  The iNTegrity Data Flow Diagrams
  Exercises
Glossary
Bibliography
Index

Introduction

All models are wrong, some models are useful.
— George Box

People who build software, systems, or things with software need to address the many predictable threats their systems can face. This book describes the useful models you can employ to address or mitigate these potential threats.

Threat modeling is a fancy name for something we all do instinctively. If I asked you to threat model your house, you might start by thinking about the precious things within it: your family, heirlooms, photos, or perhaps your collection of signed movie posters. You might start thinking about the ways someone might break in, such as unlocked doors or open windows. And you might start thinking about the sorts of people who might break in, including neighborhood kids, professional burglars, drug addicts, perhaps a stalker, or someone trying to steal your Picasso original. Each of these examples has an analog in the software world, but for now, the important thing is not how you guard against each threat, but that you’re able to relate to this way of thinking.

If you were asked to help assess a friend’s house, you could probably help, but you might lack confidence in how complete your analysis is. If you were asked to secure an office complex, you might have a still harder time, and securing a military base or a prison seems even more difficult. In those cases, your instincts are insufficient, and you’d need tools to help tackle the questions. This book will give you the tools to think about threat modeling technology in structured and effective ways.

In this introduction, you’ll learn about what threat modeling is and why individuals, teams, and organizations threat model. Those reasons include finding security issues early, improving your understanding of security requirements, and being able to engineer and deliver better products. This introduction has
five main sections describing what the book is about, including a definition of threat modeling and reasons it’s important; who should read this book; how to use it; what you can expect to gain from its various parts; and new lessons in threat modeling.

What Is Threat Modeling?

Everyone threat models. Many people do it out of frustration in line at the airport, sneaking out of the house or into a bar. At the airport, you might idly consider how to sneak something through security, even if you have no intent to do so. Sneaking in or out of someplace, you worry about who might catch you. When you speed down the highway, you work with an implicit threat model where the main threat is the police, who you probably think are lurking behind a billboard or overpass. Threats of road obstructions, deer, or rain might play into your model as well.

When you threat model, you usually use two types of models. There’s a model of what you’re building, and there’s a model of the threats (what can go wrong). What you’re building with software might be a website, a downloadable program or app, or it might be delivered in a hardware package. It might be a distributed system, or some of the “things” that will be part of the “Internet of things.” You model so that you can look at the forest, not the trees. A good model helps you address classes or groups of attacks, and deliver a more secure product.

The English word threat has many meanings. It can be used to describe a person, such as “Osama bin Laden was a threat to America,” or people, such as “the insider threat.” It can be used to describe an event, such as “There is a threat of a hurricane coming through this weekend,” and it can be used to describe a weakness or possibility of attack, such as “What are you doing about confidentiality threats?” It is also used to describe viruses and malware, such as “This threat incorporates three different methods for spreading.” It can be used to describe behavior, such as “There’s a threat of operator error.”

Similarly, the term threat modeling has many meanings, and the term threat model is used in many distinct and perhaps incompatible ways, including:

■ As a verb—for example, “Have you threat modeled?” That is, have you gone through an analysis process to figure out what might go wrong with the thing you’re building?
■ As a noun, to ask what threat model is being used. For example, “Our threat model is someone in possession of the machine,” or “Our threat model is a skilled and determined remote attacker.”
■ It can mean building up a set of idealized attackers.
■ It can mean abstracting threats into classes such as tampering.

There are doubtless other definitions. All of these are useful in various scenarios and thus correct, and there are few less fruitful ways to spend your time
  • 50. techniques are more likely to work for people with particular skills or experience. Threat modeling is the key to a focused defense. Without threat models, you can never stop playing whack-a-mole. In short, threat modeling is the use of abstractions to aid in thinking about risks. Reasons to Threat Model In today’s fast-paced world, there is a tendency to streamline development activ- ity, and there are important reasons to threat model, which are covered in this section. Those include fi nding security bugs early, understanding your security requirements, and engineering and delivering better products. Find Security Bugs Early If you think about building a house, decisions you make early will have dramatic effects on security. Wooden walls and lots of ground-level windows expose you
to more risks than brick construction and few windows. Either may be a reasonable choice, depending on where you’re building and other factors. Once you’ve chosen, changes will be expensive. Sure, you can put bars over your windows, but wouldn’t it be better to use a more appropriate design from the start? The same sorts of tradeoffs can apply in technology. Threat modeling will help you find design issues even before you’ve written a line of code, and that’s the best time to find those issues.

Understand Your Security Requirements

Good threat models can help you ask “Is that really a requirement?” For example, does the system need to be secure against someone in physical possession of the device? Apple has said yes for the iPhone, which is different from the traditional world of the PC. As you find threats and triage what you’re going to do with them, you clarify your requirements. With clearer requirements, you can devote your energy to a consistent set of security features and properties.

There is an important interplay between requirements, threats, and mitigations. As you model threats, you’ll find that some threats don’t line up with your business requirements, and as such may not be worth addressing. Alternately, your requirements may not be complete. With other threats, you’ll find that addressing them is too complex or expensive. You’ll need to make a call between addressing them partially in the current version or accepting (and communicating) that you can’t address those threats.

Engineer and Deliver Better Products

By considering your requirements and design early in the process, you can dramatically lower the odds that you’ll be re-designing, re-factoring, or facing a constant stream of security bugs. That will let you deliver a better product on a more predictable schedule. All the effort that would go to those can be put into building a better, faster, cheaper, or more secure product. You can focus on whatever properties your customers want.

Address Issues Other Techniques Won’t

The last reason to threat model is that threat modeling will lead you to categories of issues that other tools won’t find. Some of these issues will be errors of omission, such as a failure to authenticate a connection. That’s not something that a code analysis tool will find. Other issues will be unique to your design. To the extent that you have a set of smart developers building something new, you might have new ways threats can manifest. Models of what goes wrong,
by abstracting away details, will help you see analogies and similarities to problems that have been discovered in other systems.

A corollary of this is that threat modeling should not focus on issues that your other safety and security engineering is likely to find (except insofar as finding them early lets you avoid re-engineering). So if, for example, you’re building a product with a database, threat modeling might touch quickly on SQL injection attacks, and the variety of trust boundaries that might be injectable. However, you may know that you’ll encounter those. Your threat modeling should focus on issues that other techniques can’t find.

Who Should Read This Book?

This book is written for those who create or operate complex technology. That’s primarily software engineers and systems administrators, but it also includes a variety of related roles, including analysts or architects. There’s also a lot of information in here for security professionals, so this book should be useful to them and those who work with them. Different parts of the book are designed for different people—in general, the early chapters are for generalists (or specialists in something other than security), while the end of the book speaks more to security specialists.

You don’t need to be a security expert, professional, or even enthusiast to get substantial benefit from this book. I assume that you understand that there are people out there whose interests and desires don’t line up with yours. For example, maybe they’d like to take money from you, or they may have other goals, like puffing themselves up at your expense or using your computer to attack other people. This book is written in plain language for anyone who can write or spec a program, but sometimes a little jargon helps in precision, conciseness, or clarity, so there’s a glossary.

What You Will Gain from This Book

When you read this book cover to cover, you will gain a rich knowledge of threat modeling techniques. You’ll learn to apply those techniques to your projects so you can build software that’s more secure from the get-go, and deploy it more securely. You’ll learn how to make security tradeoffs in ways that are considered, measured, and appropriate. You will learn a set of tools and when to bring them to bear. You will also discover a set of glamorous distractions. Those distractions might seem like wonderful, sexy ideas, but they hide an ugly interior. You’ll learn why they prevent you from effectively threat
modeling, and how to avoid them. You'll also learn to focus on the actionable outputs of threat modeling, and I'll generally call those "bugs." There are arguments that it's helpful to consider code issues as bugs, and design issues as flaws. In my book, those arguments are a distraction; you should threat model to find issues that you can address, and arguing about labels probably doesn't help you address them.

Lessons for Different Readers

This book is designed to be useful to a wide variety of people working in technology. That includes a continuum from those who develop software to those who combine it into systems that meet operational or business goals to those who focus on making it more secure. For convenience, this book pretends there is a bright dividing line between development and operations. The distinction is used as a way of
understanding who has what capabilities, choices, and responsibilities. For example, it is "easy" for a developer to change what is logged, or to implement a different authentication system. Both of these may be hard for operations. Similarly, it's "easy" for operations to ensure that logs are maintained, or to ensure that a computer is in a locked cage. As this book was written, there's also an important model of "devops" emerging. The lessons for developers and operations can likely be applied with minor adjustments. This book also pretends that security expertise is separate from either development or operations expertise, again, simply as a convenience.
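The developer/operations split above can be made concrete with logging. In this illustrative sketch (the module name and function are hypothetical, not from the book), what gets logged is a code-level decision a developer controls, while whether those logs are retained is decided outside the code:

```python
import logging

# Developer-controlled: what is logged, and at what level.
logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("payments")

def charge(user, amount):
    # A developer can add, change, or remove this line in a release;
    # operations usually cannot.
    log.info("charge user=%s amount=%s", user, amount)
    return True

charge("alice", 42)

# Operations-controlled (outside this program entirely): whether the log
# stream is retained, rotated, or shipped elsewhere -- for example, a
# logrotate policy or a locked-down log server. None of that is visible
# from the code above, which is the point of the distinction.
```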
Naturally, this means that the same parts of the book will bring different lessons for different people. The breakdown below gives a focused value proposition for each audience.

Software Developers and Testers

Software developers—those whose day jobs are focused on creating software—include software engineers, quality assurance, and a variety of program or project managers. If you're in that group, you will learn to find and address design issues early in the software process. This book will enable you to deliver more secure software that better meets customer requirements and expectations. You'll learn a simple, effective and fun approach to threat modeling, as well as different ways to model your software or find threats. You'll learn how to track threats with bugs that fit into your development process. You'll learn to use threats to help make your requirements more crisp, and vice
versa. You'll learn about areas such as authentication, cryptography, and usability where the interplay of mitigations and attacks has a long history, so you can understand how the recommended approaches have developed to their current state. You'll learn about how to bring threat modeling into your development process. And a whole lot more!

Systems Architecture, Operations, and Management

For those whose day jobs involve bringing together software components, weaving them together into systems to deliver value, you'll learn to find and address threats as you design your systems, select your components, and get them ready for deployment. This book will enable you to deliver more secure systems that better meet business, customer, and compliance requirements. You'll learn a simple, effective, and fun approach to threat modeling, as well as
different ways to model the systems you're building or have built. You'll learn how to find security and privacy threats against those systems. You'll learn about the building blocks which are available for you to operationally address those threats. You'll learn how to make tradeoffs between the threats you face, and how to ensure that those threats are addressed. You'll learn about specific threats to categories of technology, such as web and cloud systems, and about threats to accounts, both of which are deeply important to those in operations. It will cover issues of usability, and perhaps even change your perspective on how to influence the security behavior of people within your organization and/or your customers. You will learn about cryptographic building blocks, which you may be using to protect systems. And a whole lot more!

Security Professionals

If you work in security, you will learn two major things from this book: First, you'll learn structured approaches to threat modeling that will enhance your productivity, and as you do, you'll learn why many of the "obvious" parts of threat modeling are not as obvious, or as right, as you may have believed. Second, you'll learn about bringing security into the development, operational and release processes that your organization uses. Even if you are an expert, this book can help you threat model better. Here, I speak from experience. As I was writing the case study appendix, I found myself turning to both the tree in Appendix B and the requirements chapter, and finding threats that didn't spring to mind from just considering the models
of software.

TO MY COLLEAGUES IN INFORMATION SECURITY

I want to be frank. This book is not about how to design abstractly perfect software. It is a practical, grounded book that acknowledges that most software is built in some business or organizational reality that requires tradeoffs. To the dismay of purists, software where tradeoffs were made runs the world these days, and I'd like to make such software more secure by making those tradeoffs better. That involves a great many elements, two of which are making security more consistent and more accessible to our colleagues in other specialties.

This perspective is grounded in my time as a systems administrator, deploying security technologies, and observing the issues people encountered. It is grounded in my time as a startup executive, learning to see security as a property of a system which serves a business goal. It is grounded in my responsibility for threat modeling as part of Microsoft's Security Development Lifecycle. In that last role, I spoke with thousands of people at Microsoft, its partners, and its customers about our approaches. These individuals ranged from newly hired developers to those with decades of experience in security, and included chief security officers and Microsoft's Trustworthy Computing Academic Advisory Board. I learned that there are an awful lot of opinions about what works, and far fewer about what does not. This book aims to convince my fellow security professionals that pragmatism in what we ask of development and operations helps us deliver more secure software over time. This perspective may be a challenge for some security professionals. They should focus on Parts II, IV, and V, and perhaps give consideration to the question of the best as the enemy of the good.
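As an aside, the SQL injection threat mentioned earlier in this introduction is a good example of a cross-boundary threat with a well-understood mitigation. This minimal sketch (using Python's sqlite3 purely for illustration; it is not an example from the book) contrasts an injectable query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Hypothetical attacker-controlled input arriving across a trust boundary.
user_input = "alice' OR '1'='1"

# Vulnerable: the input is spliced into the SQL text, so it becomes code.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % user_input
).fetchall()

# Mitigated: a parameterized query keeps the input as data, not code.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # the injected OR clause matches every row
print(safe)    # no user is literally named "alice' OR '1'='1"
```

The point for threat modeling is not the fix itself, but that a data flow crossing a trust boundary is where you look for this class of problem.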
How To Use This Book

You should start at the very beginning. It's a very good place to start, even if you already know how to threat model, because it lays out a framework that will help you understand the rest of the book.

The Four-Step Framework

This book introduces the idea that you should see threat modeling as composed of steps which accomplish subgoals, rather than as a single activity. The essential questions which you ask to accomplish those subgoals are:

1. What are you building?
2. What can go wrong with it once it's built?
3. What should you do about those things that can go wrong?
4. Did you do a decent job of analysis?

The methods you use in each step of the framework can be thought of like
Lego blocks. When working with Legos, you can snap in other Lego blocks. In Chapter 1, you'll use a data flow diagram to model what you're building, STRIDE to help you think about what can go wrong and what you should do about it, and a checklist to see if you did a decent job of analysis. In Chapter 2, you'll see how diagrams are the most helpful way to think about what you're building. Different diagram types are like different building blocks to help you model what you're building. In Chapter 3, you'll go deep into STRIDE (a model of threats), while in Chapter 4, you'll learn to use attack trees instead of STRIDE, while leaving everything else the same. STRIDE and attack trees are different building blocks for considering what can go wrong once you've built your new technology. Not every approach can snap with every other approach. It takes crazy glue
to make an Erector set and Lincoln Logs stick together. Attempts to glue threat modeling approaches together have made for some confusing advice. For example, trying to consider how terrorists would attack your assets doesn't really lead to a lot of actionable issues. And even with building blocks that snap together, you can make something elegant, or something confusing or bizarre. So to consider this as a framework, what are the building blocks? The four-step framework is shown graphically in Figure I-1. The steps are:

1. Model the system you're building, deploying, or changing.
2. Find threats using that model and the approaches in Part II.
3. Address threats using the approaches in Part III.
4. Validate your work for completeness and effectiveness (also Part III).

Figure I-1: The Four-Step Framework (Model System → Find Threats → Address Threats → Validate)

This framework was designed to align with software development and operational deployment. It has proven itself as a way to structure threat modeling. It also makes it easier to experiment without replacing the entire framework. From here until you reach Part V, almost everything you encounter is selected because it plugs into this four-step framework. This book is roughly organized according to the framework:

Part I "Getting Started" is about getting started. The opening part of the book (especially Chapter 1) is designed for those without much
security expertise. The later parts of the book build on the security knowledge you'll gain from this material (or combined with your own experience). You'll gain an understanding of threat modeling, and a recommended approach for those who are new to the discipline. You'll also learn various ways to model your software, along with why that's a better place to start than other options, such as attackers or assets.

Part II "Finding Threats" is about finding threats. It presents a collection of techniques and tools you can use to find threats. It surveys and analyzes the different ways people approach information technology threat modeling, enabling you to examine the pros and cons of the various techniques that people bring to bear. They're grouped in a way that enables you to either read them from start to finish or jump in at a particular point where you need help.

Part III "Managing and Addressing Threats" is about managing and addressing threats. It includes processing threats and how to manage
them, the tactics and technologies you can use to address them, and how to make risk tradeoffs you might need to make. It also covers validating that your threats are addressed, and tools you can use to help you threat model.

Part IV "Threat Modeling in Technologies and Tricky Areas" is about threat modeling in specific technologies and tricky areas where a great deal of threat modeling and analysis work has already been done. It includes chapters on web and cloud, accounts and identity, and cryptography, as well as a requirements "cookbook" that you can use to jump-start your own security requirements analysis.

Part V "Taking it to the Next Level" is about taking threat
modeling to the next level. It targets the experienced threat modeler, security expert, or process designer who is thinking about how to build and customize threat modeling processes for a given organization.

Appendices include information to help you apply what you've learned. They include sets of common answers to "what's your threat model" and "what are our assets"; threat trees that can help you find threats; lists of attackers and attacker personas; details about the Elevation of Privilege game you'll use in Chapter 1; and lastly, a set of detailed example threat models. These are followed by a glossary, bibliography, and index.

Website: This book's website, www.threatmodelingbook.com, will contain a PDF of some of the figures in the book, and likely an errata list to mitigate the errors that inevitably threaten to creep in.

What This Book Is Not

Many security books today promise to teach you to hack. Their intent is to teach
you what sort of attacks need to be defended against. The idea is that if you have an empirically grounded set of attacks, you can start with that to create your defense. This is not one of those books, because despite millions of such books being sold, vulnerable systems are still being built and deployed. Besides, there are solid, carefully considered defenses against many sorts of attacks. It may be useful to know how to execute an attack, but it's more important to know where each attack might be executed, and how to effectively defend against it. This book will teach you that.

This book is not focused on a particular technology, platform, or API set. Platforms and APIs may offer security features you can use, or mitigate some threats for you. The threats and mitigations associated with a platform change from release to release, and this book aims to be a useful reference volume on your shelf for longer than the release of any particular technology.

This book is not a magic pill that will make you a master of threat modeling. It is a resource to help you understand what you need to know. Practice will
help you get better, and deliberative practice with feedback and hard problems will make you a master.

New Lessons on Threat Modeling

Most experienced security professionals have developed an approach to threat modeling that works for them. If you've been threat modeling for years, this book will give you an understanding of other approaches you can apply. This book also gives you a structured understanding of a set of methods and how they inter-relate. Lastly, there are some deeper lessons which
are worth bringing to your attention, rather than leaving them for you to extract.

There's More Than One Way to Threat Model

If you ask a programmer "What's the right programming language for my new project?" you can expect a lot of clarifying questions. There is no one ideal programming language. There are certainly languages that are better or worse at certain types of tasks. For example, it's easier to write text manipulation code in Perl than assembler. Python is easier to read and maintain than assembler, but when you need to write ultra-fast code for a device driver, C or assembler might be a better choice. In the same way, there are better and worse ways to threat model, which depend greatly on your situation: who will be involved, what skills they have, and so on. So you can think of threat modeling like programming. Within programming there are languages, paradigms (such as waterfall or agile), and
practices (pair programming or frequent deployment). The same is true of threat modeling. Most past writing on threat modeling has presented "the" way to do it. This book will help you see how "there's more than one way to do it" is not just the Perl motto, but also applies to threat modeling.

The Right Way Is the Way That Finds Good Threats

The right way to threat model is the way that empowers a project team to find more good threats against a system than other techniques that could be employed with the resources available. (A "good threat" is a threat that illuminates work that needs to be done.) That's as true for a project team of one as it is for a project team of thousands. That's also true across all levels of resources, such as time, expertise, and tooling. The right techniques empower a team to really find and address threats (and gain assurance that they have done so).
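One way to read the definition above: a threat is "good" when it turns into a work item. As a toy illustration (the data model and field names are my own, not from the book), threats found during modeling can be filed directly as bugs in whatever tracker a team already uses, which keeps threat modeling output inside the normal engineering process:

```python
from dataclasses import dataclass

# Illustrative only: a "good threat" becomes a tracked bug with an action,
# so the work it illuminates is visible alongside every other bug.
@dataclass
class Bug:
    title: str
    action: str          # the work the threat illuminates
    status: str = "open"

def file_threats(threats):
    """Turn (threat, action) pairs into bug records a tracker could ingest."""
    return [Bug(title=t, action=a) for t, a in threats]

bugs = file_threats([
    ("Spoofing: no authentication on admin endpoint", "require login"),
    ("Tampering: config file is world-writable", "tighten permissions"),
])
for b in bugs:
    print(b.status, "-", b.title)
```

A threat that cannot be phrased this way, with a concrete action attached, is a candidate for the "glamorous distractions" the introduction warned about.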
There are lots of people who will tell you that they know the one true way. (That's true in fields far removed from threat modeling.) Avoid a religious war and find a way that works for you.

Threat Modeling Is Like Version Control

Threat modeling is sometimes seen as a specialist skill that only a few people can do well. That perspective holds us back, because threat modeling is more like version control than a specialist skill. This is not intended to denigrate or minimize threat modeling; rather, no professional developer would think of building software of any complexity without a version control system of some form. Threat modeling should aspire to be that fundamental. You expect every professional software developer to know the
basics of a version control system or two, and similarly, many systems administrators will use version control to manage configuration files. Many organizations get by with a simple version control approach, and never need an expert. If you work at a large organization, you might have someone who manages the build tree full time. Threat modeling is similar. With the lessons in this book, it will become reasonable to expect professionals in software and operations to have basic experience threat modeling.

Threat Modeling Is Also Like Playing a Violin

When you learn to play the violin, you don't start with the most beautiful violin music ever written. You learn to play scales, a few easy pieces, and then progress to trickier and trickier music. Similarly, when you start threat modeling, you need to practice to learn the skills, and it may involve challenges or frustration as you learn.
You need to understand threat modeling as a set of skills, which you can apply in a variety of ways, and which take time to develop. You'll get better if you practice. If you expect to compete with an expert overnight, you might be disappointed. Similarly, if you threat model only every few years, you should expect to be rusty, and it will take you time to rebuild the muscles you need.

Technique versus Repertoire

Continuing with the metaphor, the most talented violinist doesn't learn to play a single piece, but they develop a repertoire, a set of knowledge that's relevant to their field. As you get started threat modeling, you'll need to develop both techniques and a repertoire—a set of threat examples that you can build from to imagine how new systems might be attacked. Attack lists or libraries can act as a partial
substitute for the mental repertoire of known threats an expert knows about. Reading about security issues in similar products can also help you develop a repertoire of threats. Over time, this can feed into how you think about new and different threats. Learning to think about threats is easier with training wheels.

"Think Like an Attacker" Considered Harmful

A great deal of writing on threat modeling exhorts people to "think like an attacker." For many people, that's as hard as thinking like a professional chef. Even if you're a great home cook, a restaurant-managing chef has to wrestle with problems that a home cook does not. For example, how many chickens should you buy to meet the needs of a restaurant with 78 seats, each of which
will turn twice an evening? The advice to think like an attacker doesn't help most people threat model. Worse, you may end up with implicit or incorrect assumptions about how an attacker will think and what they will choose to do. Such models of an attacker's mindset may lead you to focus on the wrong threats. You don't need to focus on the attacker to find threats, but personification may help you find resources to address them.

The Interplay of Attacks, Mitigations, & Requirements

Threat modeling is all about making more secure software. As you use models of software and threats to find potential problems, you'll discover that some threats are hard or impossible to address, and you'll adjust requirements to match. This interplay is a rarely discussed key to useful threat modeling. Sometimes it's a matter of wanting to defend against administrators, other
  • 81. times it’s a matter of what your customers will bear. In the wake of the 9/11 hijackings, the US government reputedly gave serious consideration to banning laptops from airplanes. (A battery and a mass of explosives reportedly look the same on the x-ray machines.) Business customers, who buy last minute expen- sive tickets and keep the airlines aloft, threatened to revolt. So the government implemented other measures, whose effectiveness might be judged with some of the tools in this book. This interplay leads to the conclusion that there are threats that cannot be effectively mitigated. That’s a painful thought for many security professionals. (But as the Man in Black said, “Life is pain, Highness! Anyone who says differ- ently is selling something.”) When you fi nd threats that violate your requirements and cannot be mitigated, it generally makes sense to adjust your requirements.
Sometimes it's possible to either mitigate the threat operationally, or defer a decision to the person using the system. With that, it's time to dive in and threat model!

Part I: Getting Started

This part of the book is for those who are new to threat modeling, and it assumes no prior knowledge of threat modeling or security. It focuses on the key new skills that you'll need to threat model and lays out a methodology that's designed for people who are new to threat modeling.

Part I also introduces the various ways to approach threat modeling using a set of toy analogies. Much like there are many children's toys for modeling, there
are many ways to threat model. There are model kits with precisely molded parts to create airplanes or ships. These kits have a high degree of fidelity and a low level of flexibility. There are also numerous building block systems such as Lincoln Logs, Erector Sets, and Lego blocks. Each of these allows for more flexibility, at the price of perhaps not having a propeller that's quite right for the plane you want to model. In threat modeling, there are techniques that center on attackers, assets, or software, and these are like Lincoln Logs, Erector Sets, and Lego blocks, in that each is powerful and flexible, each has advantages and disadvantages, and it can be tricky to combine them into something beautiful.
Part I contains the following chapters:

■ Chapter 1: Dive In and Threat Model! contains everything you need to get started threat modeling, and does so by focusing on four questions:
  ■ What are you building?
  ■ What can go wrong?
  ■ What should you do about those things that can go wrong?
  ■ Did you do a decent job of analysis?
These questions aren't just what you need to get started, but are at the heart of the four-step framework, which is the core of this book.
■ Chapter 2: Strategies for Threat Modeling covers a great many ways to approach threat modeling. Many of them are "obvious" approaches, such as thinking about attackers or the assets you want to protect. Each is explained, along with why it works less well than you hope. These
and others are contrasted with a focus on software. Software is what you can most reasonably expect a software professional to understand, and so models of software are the most important lesson of Chapter 2. Models of software are one of the two models that you should focus on when threat modeling.

Chapter 1: Dive In and Threat Model!

Anyone can learn to threat model, and what's more, everyone should. Threat modeling is about using models to find security problems. Using a model means abstracting away a lot of details to provide a look at a bigger picture, rather than the code itself. You model because it enables you to find issues in things you
  • 86. haven’t built yet, and because it enables you to catch a problem before it starts. Lastly, you threat model as a way to anticipate the threats that could affect you. Threat modeling is fi rst and foremost a practical discipline, and this chapter is structured to refl ect that practicality. Even though this book will provide you with many valuable defi nitions, theories, philosophies, effective approaches, and well-tested techniques, you’ll want those to be grounded in experience. Therefore, this chapter avoids focusing on theory and ignores variations for now and instead gives you a chance to learn by experience. To use an analogy, when you start playing an instrument, you need to develop muscles and awareness by playing the instrument. It won’t sound great at the start, and it will be frustrating at times, but as you do it, you’ll fi nd it gets easier. You’ll start to hit the notes and the timing. Similarly, if you use the simple four- step breakdown of how to threat model that’s exercised in Parts
  • 87. I-III of this book, you’ll start to develop your muscles. You probably know the old joke about the person who stops a musician on the streets of New York and asks “How do I get to Carnegie Hall?” The answer, of course, is “practice, practice, practice.” Some of that includes following along, doing the exercises, and developing an C H A P T E R 1 Dive In and Threat Model! www.it-ebooks.info http://www.it-ebooks.info/ 4 Part I ■ Getting Started c01.indd 11:33:50:AM 01/17/2014 Page 4 understanding of the steps involved. As you do so, you’ll start to understand how the various tasks and techniques that make up threat modeling come together. In this chapter you’re going to fi nd security fl aws that might
  • 88. exist in a design, so you can address them. You’ll learn how to do this by examining a simple web application with a database back end. This will give you an idea of what can go wrong, how to address it, and how to check your work. Along the way, you’ll learn to play Elevation of Privilege, a serious game designed to help you start threat modeling. Finally you’ll get some hands-on experience building your own threat model, and the chapter closes with a set of checklists that help you get started threat modeling. Learning to Threat Model You begin threat modeling by focusing on four key questions: 1. What are you building? 2. What can go wrong? 3. What should you do about those things that can go wrong? 4. Did you do a decent job of analysis? In addressing these questions, you start and end with tasks that all technolo-
  • 89. gists should be familiar with: drawing on a whiteboard and managing bugs. In between, this chapter will introduce a variety of new techniques you can use to think about threats. If you get confused, just come back to these four questions. Everything in this chapter is designed to help you answer one of these ques- tions. You’re going to fi rst walk through these questions using a three-tier web app as an example, and after you’ve read that, you should walk through the steps again with something of your own to threat model. It could be software you’re building or deploying, or software you’re considering acquiring. If you’re feeling uncertain about what to model, you can use one of the sample systems in this chapter or an exercise found in Appendix E, “Case Studies.” The second time you work through this chapter, you’ll need a copy of the Elevation of Privilege threat-modeling game. The game uses a deck of cards that you can download free from
  • 90. http://www.microsoft.com/security/sdl/ adopt/eop.aspx. You should get two–four friends or colleagues together for the game part. You start with building a diagram, which is the fi rst of four major activities involved in threat modeling and is explained in the next section. The other three include fi nding threats, addressing them, and then checking your work. www.it-ebooks.info http://www.it-ebooks.info/ Chapter 1 ■ Dive In and Threat Model! 5 c01.indd 11:33:50:AM 01/17/2014 Page 5 What Are You Building? Diagrams are a good way to communicate what you are building. There are lots of ways to diagram software, and you can start with a whiteboard diagram of how data fl ows through the system. In this example, you’re working with
  • 91. a simple web app with a web browser, web server, some business logic and a database (see Figure 1-1). Web browser Web server Business Logic Database Figure 1-1: A whiteboard diagram Some people will actually start thinking about what goes wrong right here. For example, how do you know that the web browser is being used by the person you expect? What happens if someone modifi es data in the database? Is it OK for information to move from one box to the next without being encrypted? You might want to take a minute to think about some things that could go wrong here because these sorts of questions may lead you to ask “is that allowed?” You can create an even better model of what you’re building if you think about “who controls what” a little. Is this a website for the whole Internet, or is it an intranet site? Is the database on site, or at a web provider? For this example, let’s say that you’re building an Internet site,
  • 92. and you’re using the fi ctitious Acme storage-system. (I’d put a specifi c product here, but then I’d get some little detail wrong and someone, certainly not you, would get all wrapped around the axle about it and miss the threat modeling lesson. Therefore, let’s just call it Acme, and pretend it just works the way I’m saying. Thanks! I knew you’d understand.) Adding boundaries to show who controls what is a simple way to improve the diagram. You can pretty easily see that the threats that cross those bound- aries are likely important ones, and may be a good place to start identifying threats. These boundaries are called trust boundaries, and you should draw www.it-ebooks.info http://www.it-ebooks.info/ 6 Part I ■ Getting Started c01.indd 11:33:50:AM 01/17/2014 Page 6
  • 93. them wherever different people control different things. Good examples of this include the following: ■ Accounts (UIDs on unix systems, or SIDS on Windows) ■ Network interfaces ■ Different physical computers ■ Virtual machines ■ Organizational boundaries ■ Almost anywhere you can argue for different privileges T R U S T B O U N DA RY V E R S U S AT TAC K S U R FAC E A closely related concept that you may have encountered is attack surface. For example, the hull of a ship is an attack surface for a torpedo. The side of a ship presents a larger attack surface to a submarine than the bow of the same ship. The ship may have inter- nal “trust” boundaries, such as waterproof bulkheads or a Captain’s safe. A system that exposes lots of interfaces presents a larger attack surface than one that presents few
APIs or other interfaces. Network firewalls are useful boundaries because they reduce the attack surface relative to an external attacker. However, much like the Captain's safe, there are still trust boundaries inside the firewall. A trust boundary and an attack surface are very similar views of the same thing. An attack surface is a trust boundary and a direction from which an attacker could launch an attack. Many people treat the terms as interchangeable. In this book, you'll generally see "trust boundary" used.

In your diagram, draw the trust boundaries as boxes (see Figure 1-2), showing what's inside each with a label (such as "corporate data center") near the edge of the box.

[Figure 1-2 shows the Figure 1-1 elements with trust-boundary boxes labeled "Corporate data center" and "Web storage (offsite)".]

Figure 1-2: Trust boundaries added to a whiteboard diagram
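The same model can also be captured as plain data and kept alongside the diagram. The sketch below is a hypothetical encoding, not notation from this book, and the zone assignments are an illustrative reading of Figure 1-2: flows whose endpoints are controlled by different parties cross a trust boundary, and those are a good place to start identifying threats.

```python
# Hypothetical text-form encoding of the Figure 1-2 whiteboard model.
# Each element is assigned to a zone of control (assumed here for
# illustration); flows between different zones cross a trust boundary.
ZONES = {
    "web browser": "internet",
    "web server": "corporate data center",
    "business logic": "corporate data center",
    "database": "web storage (offsite)",
}

FLOWS = [
    ("web browser", "web server"),
    ("web server", "business logic"),
    ("business logic", "database"),
]

def boundary_crossings(flows, zones):
    """Return flows whose endpoints are controlled by different parties."""
    return [(a, b) for a, b in flows if zones[a] != zones[b]]

for a, b in boundary_crossings(FLOWS, ZONES):
    print(f"{a} -> {b} crosses a trust boundary")
```

Listing the model this way also makes it easy to diff when the system changes.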
Chapter 1 ■ Dive In and Threat Model!

As your diagram gets larger and more complex, it becomes easy to miss a part of it, or to become confused by labels on the data flows. Therefore, it can be very helpful to number each process, data flow, and data store in the diagram, as shown in Figure 1-3. (Because each trust boundary should have a unique name, representing the unique trust inside of it, there's limited value to numbering those.)

[Figure 1-3 shows the same elements and trust boundaries, with the processes, data flows, and data stores numbered 1–7.]

Figure 1-3: Numbers and trust boundaries added to a whiteboard
diagram

Regarding the physical form of the diagram: Use whatever works for you. If that's a whiteboard diagram and a camera phone picture, great. If it's Visio, or OmniGraffle, or some other drawing program, great. You should think of threat model diagrams as part of the development process, so try to keep it in source control with everything else.

Now that you have a diagram, it's natural to ask, is it the right diagram? For now, there's a simple answer: Let's assume it is. Later in this chapter there are some tips and checklists as well as a section on updating the diagram, but at this stage you have a good enough diagram to get started on identifying threats, which is really why you bought this book. So let's identify.

What Can Go Wrong?

Now that you have a diagram, you can really start looking for what can go wrong with its security. This is so much fun that I turned it into a
game called Elevation of Privilege. There's more on the game in Appendix D, "Elevation of Privilege: The Cards," which discusses each card, and in Chapter 11, "Threat Modeling Tools," which covers the history and philosophy of the game, but you can get started playing now with a few simple instructions. If you haven't already done so, download a deck of cards from http://www.microsoft.com/security/sdl/adopt/eop.aspx. Print the pages in color, and cut them into individual cards. Then shuffle the deck and deal it out to those friends you've invited to play.

NOTE: Some people aren't used to playing games at work. Others approach new games with trepidation, especially when those games involve long, complicated instructions. Elevation of Privilege takes just a few lines to explain. You should give it a try.

How To Play Elevation of Privilege

Elevation of Privilege is a serious game designed to help you threat model. A sample card is shown in Figure 1-4. You'll notice that, like playing cards, it has a number and suit in the upper left, and an example of a threat as the main text on the card. To play the game, simply follow the instructions in the upcoming list.

[Figure 1-4 shows a Tampering card reading: "An attacker can take advantage of your custom key exchange or integrity control which you built instead of using standard crypto."]

Figure 1-4: An Elevation of Privilege card

1. Deal the deck. (Shuffling is optional.)
2. The person with the 3 of Tampering leads the first round. (In card games like this, rounds are also called "tricks" or "hands.")
3. Each round works like so:
   A. Each player plays one card, starting with the person leading the round, and then moving clockwise.
   B. To play a card, read it aloud, and try to determine if it affects the system you have diagrammed. If you can link it, write it down, and score yourself a point. Play continues clockwise with the next player.
   C. When each player has played a card, the player who has played the highest card wins the round. That player leads the next round.
4. When all the cards have been played, the game ends and the person with the most points wins.
5. If you're threat modeling a system you're building, then you go file any bugs you find.
There are some folks who threat model like this in their sleep, or even have trouble switching it off. Not everyone is like that. That's OK. Threat modeling is not rocket science. It's stuff that anyone who participates in software development can learn. Not everyone wants to dedicate the time to learn to do it in their sleep.

Identifying threats can seem intimidating to a lot of people. If you're one of them, don't worry. This section is designed to gently walk you through threat identification. Remember to have fun as you do this. As one reviewer said: "Playing Elevation of Privilege should be fun. Don't downplay that. We play it every Friday. It's enjoyable, relaxing, and still has business value."

Outside of the context of the game, you can take the next step in threat modeling by thinking of things that might go wrong. For instance, how do you know that the web browser is being used by the person you expect? What happens
if someone modifies data in the database? Is it OK for information to move from one box to the next without being encrypted? You don't need to come up with these questions by just staring at the diagram and scratching your chin. (I didn't!) You can identify threats like these using the simple mnemonic STRIDE, described in detail in the next section.

Using the STRIDE Mnemonic to Find Threats

STRIDE is a mnemonic for things that go wrong in security. It stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege:

■ Spoofing is pretending to be something or someone you're not.
■ Tampering is modifying something you're not supposed to modify. It can include packets on the wire (or wireless), bits on disk, or the bits in memory.
■ Repudiation means claiming you didn't do something (regardless of whether you did or not).
■ Denial of Service covers attacks designed to prevent a system from providing service, including by crashing it, making it unusably slow, or filling all its storage.
■ Information Disclosure is about exposing information to people who are not authorized to see it.
■ Elevation of Privilege is when a program or user is technically able to do things that they're not supposed to do.

NOTE: This is where Elevation of Privilege, the game, gets its name. This book uses Elevation of Privilege, italicized, or abbreviated to EoP, for the game, to avoid confusion with the threat.

Recall the three example threats mentioned in the preceding section:
■ How do you know that the web browser is being used by the person you expect?
■ What happens if someone modifies data in the database?
■ Is it OK for information to go from one box to the next without being encrypted?

These are examples of spoofing, tampering, and information disclosure. Using STRIDE as a mnemonic can help you walk through a diagram and select example threats. Pair that with a little knowledge of security and the right techniques, and you'll find the important threats faster and more reliably. If you have a process in place for ensuring that you develop a threat model, document it, and you can increase confidence in your software.

Now that you have STRIDE in your tool belt, walk through your diagram again and look for more threats, this time using the mnemonic. Make a list as you go with the threat and what element of the diagram it affects. (Generally,
the software, data flow, or storage is affected, rather than the trust boundary.) The following list provides some examples of each threat.

■ Spoofing: Someone might pretend to be another customer, so you'll need a way to authenticate users. Someone might also pretend to be your website, so you should ensure that you have an SSL certificate and that you use a single domain for all your pages (to help that subset of customers who read URLs to see if they're in the right place). Someone might also place a deep link to one of your pages, such as logout.html or placeorder.aspx. You should be checking the Referer field before taking action. That's not a complete solution to what are called CSRF (Cross Site Request Forgery) attacks, but it's a start.
■ Tampering: Someone might tamper with the data in your back end at Acme. Someone might tamper with the data as it flows back and forth between their data center and yours. A programmer might replace the operational code on the web front end without testing it, thinking they're uploading it to staging. An angry programmer might add a coupon code "PayBobMore" that offers a 20 percent discount on all goods sold.
■ Repudiation: Any of the preceding actions might require digging into what happened. Are there system logs? Is the right information being logged effectively? Are the logs protected against tampering?
■ Information Disclosure: What happens if Acme reads your database? Can anyone connect to the database and read or write information?
■ Denial of Service: What happens if a thousand customers show up at once at the website? What if Acme goes down?
■ Elevation of Privilege: Perhaps the web front end is the only place
customers should access, but what enforces that? What prevents them from connecting directly to the business logic server, or uploading new code? If there's a firewall in place, is it correctly configured? What controls access to your database at Acme, or what happens if an employee at Acme makes a mistake, or even wants to edit your files?

The preceding possibilities aren't intended to be a complete list of how each threat might manifest against every model. You can find a more complete list in Chapter 3, "STRIDE." This shorter version will get you started though, and it is focused on what you might need to investigate based on the very simple diagram shown in Figure 1-2. Remember the musical instrument analogy. If you try to start playing the piano with Ravel's Gaspard (regarded as one of the most complex piano pieces ever written), you're going to be frustrated.

Tips for Identifying Threats
Whether you are identifying threats using Elevation of Privilege, STRIDE, or both, here are a few tips to keep in mind that can help you stay on the right track to determine what could go wrong:

■ Start with external entities: If you're not sure where to start, start with the external entities or events which drive activity. There are many other valid approaches though: You might start with the web browser, looking for spoofing, then tampering, and so on. You could also start with the business logic if perhaps your lead developer for that component is in the room. Wherever you choose to begin, you want to aspire to some level of organization. You could also go in "STRIDE order" through the diagram. Without some organization, it's hard to tell when you're done, but be careful not to add so much structure that you stifle creativity.
■ Never ignore a threat because it's not what you're looking for right now: You might come up with some threats while looking at other categories. Write them down and come back to them. For example, you might have thought about "can anyone connect to our database," which is listed under information disclosure, while you were looking for spoofing threats. If so, that's awesome! Good job! Redundancy in what you find can be tedious, but it helps you avoid missing things. If you find yourself asking whether "someone not authorized to connect to the database who reads information" constitutes spoofing or information disclosure, the answer is, who cares? Record the issue and move along to the next one. STRIDE is a tool
to guide you to threats, not to ask you to categorize what you've found; it makes a lousy taxonomy, anyway. (That is to say, there are plenty of security issues for which you can make an argument for various different categorizations. Compare and contrast it with a good taxonomy, such as the taxonomy of life. Does it have a backbone? If so, it's a vertebrate.)
■ Focus on feasible threats: Along the way, you might come up with threats like "someone might insert a back door at the chip factory," or "someone might hire our janitorial staff to plug in a hardware key logger and steal all our passwords." These are real possibilities but not very likely compared to using an exploit to attack a vulnerability for which you haven't applied the patch, or tricking someone into installing software. There's also the question of what you can do about either, which brings us to the next section.
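Going in "STRIDE order" through the diagram can be as mechanical as crossing each diagram element with each threat category to produce a worksheet. The sketch below is an illustrative assumption about how one might do that; the element names and checklist format are not from the book.

```python
# Hypothetical STRIDE-order walk: pair every diagram element with every
# STRIDE category, giving an organized checklist with a clear "done" point.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

ELEMENTS = ["web browser", "web server", "business logic", "database"]

def stride_worksheet(elements):
    """Yield (element, category) pairs, element by element, in STRIDE order."""
    for element in elements:
        for category in STRIDE:
            yield element, category

for element, category in stride_worksheet(ELEMENTS):
    print(f"[ ] {category} against {element}")
```

Walking the full grid trades a little redundancy for knowing when you're done, which, as noted above, is cheaper than missing things.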
Addressing Each Threat

You should now have a decent-sized list or lists of threats. The next step in the threat modeling process is to go through the lists and address each threat. There are four types of action you can take against each threat: mitigate it, eliminate it, transfer it, or accept it. The following list looks briefly at each of these ways to address threats, and then in the subsequent sections you will learn how to address each specific threat identified with the STRIDE list in the "What Can Go Wrong" section. For more details about each of the strategies and techniques to address these threats, see Chapters 8 and 9, "Defensive Building Blocks" and "Tradeoffs When Addressing Threats."

■ Mitigating threats is about doing things to make it harder to take advantage of a threat. Requiring passwords to control who can log in mitigates the threat of spoofing. Adding password controls that enforce complexity or expiration makes it less likely that a password will be guessed or usable if stolen.
■ Eliminating threats is almost always achieved by eliminating features. If you have a threat that someone will access the administrative function of a website by visiting the /admin/ URL, you can mitigate it with passwords or other authentication techniques, but the threat is still present. You can make it less likely to be found by using a URL like /j8e8vg21euwq/, but the threat is still present. You can eliminate it by removing the interface, handling administration through the command line. (There are still threats associated with how people log in on a command line. Moving
away from HTTP makes the threat easier to mitigate by controlling the attack surface. Both threats would be found in a complete threat model.) Incidentally, there are other ways to eliminate threats if you're a mob boss or you run a police state, but I don't advocate their use.
■ Transferring threats is about letting someone or something else handle the risk. For example, you could pass authentication threats to the operating system, or trust boundary enforcement to a firewall product. You can also transfer risk to customers, for example, by asking them to click through lots of hard-to-understand dialogs before they can do the work they need to do. That's obviously not a great solution, but sometimes people have knowledge that they can contribute to making a security tradeoff. For example, they might know that they just connected to a coffee shop
wireless network. If you believe the person has essential knowledge to contribute, you should work to help her bring it to the decision. There's more on doing that in Chapter 15, "Human Factors and Usability."
■ Accepting the risk is the final approach to addressing threats. For most organizations most of the time, searching everyone on the way in and out of the building is not worth the expense or the cost to the dignity and job satisfaction of those workers. (However, diamond mines and sometimes government agencies take a different approach.) Similarly, the cost of preventing someone from inserting a back door in the motherboard is expensive, so for each of these examples you might choose to accept the risk. And once you've accepted the risk, you shouldn't worry over it. Sometimes worry is a sign that the risk hasn't been fully accepted, or that the risk acceptance was inappropriate.
The strategies listed in the following tables are intended to serve as examples to illustrate ways to address threats. Your "go-to" approach should be to mitigate threats. Mitigation is generally the easiest and the best for your customers. (It might look like accepting risk is easier, but over time, mitigation is easier.) Mitigating threats can be hard work, and you shouldn't take these examples as complete. There are often other valid ways to address each of these threats, and sometimes trade-offs must be made in the way the threats are addressed.

Addressing Spoofing

Table 1-1 and the list that follows show targets of spoofing, mitigation strategies that address spoofing, and techniques to implement those mitigations.
Table 1-1: Addressing Spoofing Threats

■ Threat target: Spoofing a person
  ❖ Mitigation strategy: Identification and authentication (usernames and something you know/have/are). Techniques: usernames, real names, or other identifiers; passwords; tokens; biometrics; enrollment/maintenance/expiry
■ Threat target: Spoofing a "file" on disk
  ❖ Mitigation strategy: Leverage the OS. Techniques: full paths; checking ACLs; ensuring that pipes are created properly
  ❖ Mitigation strategy: Cryptographic authenticators. Technique: digital signatures or authenticators
■ Threat target: Spoofing a network address
  ❖ Mitigation strategy: Cryptographic. Techniques: DNSSEC; HTTPS/SSL; IPsec
■ Threat target: Spoofing a program in memory
  ❖ Mitigation strategy: Leverage the OS. Technique: many modern operating systems have some form of application identifier that the OS will enforce

■ When you're concerned about a person being spoofed, ensure that each
person has a unique username and some way of authenticating. The traditional way to do this is with passwords, which have all sorts of problems as well as all sorts of advantages that are hard to replicate. See Chapter 14, "Accounts and Identity," for more on passwords.
■ When accessing a file on disk, don't ask for the file with open(file). Use open(/path/to/file). If the file is sensitive, after opening, check various security elements of the file descriptor (such as fully resolved name, permissions, and owner). You want to check with the file descriptor to avoid race conditions. This applies doubly when the file is an executable, although checking after opening can be tricky. Therefore, it may help to ensure that the permissions on the executable can't be changed by an attacker. In any case, you almost never want to call exec() with ./file.
■ When you're concerned about a system or computer being
spoofed when it connects over a network, you'll want to use DNSSEC, SSL, IPsec, or a combination of those to ensure you're connecting to the right place.

Addressing Tampering

Table 1-2 and the list that follows show targets of tampering, mitigation strategies that address tampering, and techniques to implement those mitigations.

Table 1-2: Addressing Tampering Threats

■ Threat target: Tampering with a file
  ❖ Mitigation strategy: Operating system. Technique: ACLs
  ❖ Mitigation strategy: Cryptographic. Techniques: digital signatures; keyed MAC
■ Threat target: Racing to create a file (tampering with the file system)
  ❖ Mitigation strategy: Using a directory that's protected from arbitrary user tampering. Techniques: ACLs; using private directory structures. (Randomizing your file names just makes it annoying to execute the attack.)
■ Threat target: Tampering with a network packet
  ❖ Mitigation strategy: Cryptographic. Techniques: HTTPS/SSL; IPsec
  ❖ Anti-pattern: Network isolation (see note on the network isolation anti-pattern)
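The "keyed MAC" technique in Table 1-2 can be sketched with Python's standard hmac module. This is a minimal illustration; the key and message shown are assumptions, and real systems need key management, which is out of scope here.

```python
import hashlib
import hmac

def tag(key: bytes, message: bytes) -> bytes:
    """Compute a keyed MAC; only holders of the key can produce a valid tag."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, received_tag: bytes) -> bool:
    """Use a constant-time comparison to resist timing attacks."""
    return hmac.compare_digest(tag(key, message), received_tag)

key = b"example-shared-key"   # illustrative; obtain from key management
msg = b"order=42&amount=100"
t = tag(key, msg)

print(verify(key, msg, t))                 # True: unmodified message
print(verify(key, msg + b"&amount=1", t))  # False: tampering detected
```

A MAC detects tampering but does not hide the contents; pair it with encryption (as SSL/IPsec do) when confidentiality also matters.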
■ Tampering with a file: Tampering with files can be easy if the attacker has an account on the same machine, or by tampering with the network when the files are obtained from a server.
■ Tampering with memory: The threats you want to worry about are those that can occur when a process with less privileges than you, or that you don't trust, can alter memory. For example, if you're getting data from a shared memory segment, is it ACLed so only the other process can see it? For a web app that has data coming in via AJAX, make sure you validate that the data is what you expect after you pull in the right amount.
■ Tampering with network data: Preventing tampering with network data requires dealing with both spoofing and tampering. Otherwise, someone who wants to tamper can simply pretend to be the other end, using what's called a man-in-the-middle attack. The most common solution to these problems is SSL, with IP Security (IPsec) emerging as another
possibility. SSL and IPsec both address confidentiality and tampering, and can help address spoofing.
■ Tampering with networks anti-pattern: It's somewhat common for people to hope that they can isolate their network, and so not worry about tampering threats. It's also very hard to maintain isolation over time. Isolation doesn't work as well as you would hope. For example, the isolated United States SIPRNet was thoroughly infested with malware, and the operation to clean it up took 14 months (Shachtman, 2010).

NOTE: A program can't check whether it's authentic after it loads. It may be possible for something to rely on "trusted bootloaders" to
provide a chain of signatures, but the security decisions are being made external to that code. (If you're not familiar with the technology, don't worry; the key lesson is that a program cannot check its own authenticity.)

Addressing Repudiation

Addressing repudiation is generally a matter of ensuring that your system is designed to log and ensuring that those logs are preserved and protected. Some of that can be handled with simple steps such as using a reliable transport for logs. In this sense, syslog over UDP was almost always silly from a security perspective; syslog over TCP/SSL is now available and is vastly better. Table 1-3 and the list that follows show targets of repudiation, mitigation strategies that address repudiation, and techniques to implement those mitigations.

Table 1-3: Addressing Repudiation Threats

■ Threat target: No logs means you can't prove anything
  ❖ Mitigation strategy: Log. Technique: be sure to log all the security-relevant information
■ Threat target: Logs come under attack
  ❖ Mitigation strategy: Protect your logs. Techniques: send over the network; ACL
■ Threat target: Logs as a channel for attack
  ❖ Mitigation strategy: Tightly specified logs. Technique: documenting log design early in the development process

■ No logs means you can't prove anything: This is self-explanatory. For example, when a customer calls to complain that they never got their order, how will this be resolved? Maintain logs so that you can investigate what happens when someone attempts to repudiate something.
■ Logs come under attack: Attackers will do things to prevent your logs from being useful, including filling up the log to make it hard to find the attack or forcing logs to "roll over." They may also do things to set off so many alarms that the real attack is lost in a sea of troubles. Perhaps obviously, sending logs over a network exposes them to other threats that you'll need to handle.
■ Logs as a channel for attack: By design, you're collecting data from sources outside your control, and delivering that data to people and systems with security privileges. An example of such an attack might be sending mail addressed to "</html> [email protected]", causing trouble for web-based tools that don't expect inline HTML.
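One way to keep logs from becoming a channel for attack is to escape untrusted fields before they are written, so a hostile value cannot forge extra log lines or smuggle control sequences to log-processing tools. The escaping policy below is an illustrative assumption, not a prescription from the book.

```python
def escape_log_field(value: str) -> str:
    """Escape backslashes and control characters in an untrusted field,
    so it cannot inject newlines (forged entries) into the log."""
    out = []
    for ch in value:
        if ch == "\\":
            out.append("\\\\")
        elif ch == "\n":
            out.append("\\n")
        elif ch == "\r":
            out.append("\\r")
        elif ord(ch) < 0x20:
            out.append(f"\\x{ord(ch):02x}")
        else:
            out.append(ch)
    return "".join(out)

# An attacker-chosen username that tries to forge a second log line:
print(escape_log_field("bob\nFAKE: admin logged in"))
```

This pairs naturally with the "tightly specified logs" strategy: if field 2 is documented as never containing newlines, code that parses the log can rely on it.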
You can make it easier to write secure code to process your logs by clearly communicating what your logs can't contain, such as "Our logs are all plaintext, and attackers can insert all sorts of things," or "Fields 1–5 of our logs are tightly controlled by our software, fields 6–9 are easy to inject data into. Field 1 is time in GMT. Fields 2 and 3 are IP addresses (v4 or 6)..." Unless you have incredibly strict control, documenting what your logs can contain will likely miss things. (For example, can your logs contain Unicode double-wide characters?)

Addressing Information Disclosure

Table 1-4 and the list that follows show targets of information disclosure, mitigation strategies that address information disclosure, and techniques to implement those mitigations.

Table 1-4: Addressing Information Disclosure Threats

■ Threat target: Network monitoring
  ❖ Mitigation strategy: Encryption. Techniques: HTTPS/SSL; IPsec
■ Threat target: Directory or filename (for example, layoff-letters/adamshostack.docx)
  ❖ Mitigation strategy: Leverage the OS. Technique: ACLs
■ Threat target: File contents
  ❖ Mitigation strategy: Leverage the OS. Technique: ACLs
  ❖ Mitigation strategy: Cryptography. Techniques: file encryption such as PGP; disk encryption (FileVault, BitLocker)
■ Threat target: API information disclosure
  ❖ Mitigation strategy: Design. Techniques: careful design control; consider pass by reference or value
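On POSIX systems, the "leverage the OS" rows of Table 1-4 often come down to creating sensitive files with restrictive permissions from the start, rather than tightening them afterward. A minimal sketch (the filename is illustrative; Windows ACL semantics differ):

```python
import os
import stat
import tempfile

# Create the file owner-read/write only (0o600) at creation time, so
# there is no window in which other local users can open it.
path = os.path.join(tempfile.mkdtemp(), "sensitive-report.txt")
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
try:
    os.fchmod(fd, 0o600)   # belt and braces: override any permissive umask
    os.write(fd, b"sensitive contents\n")
finally:
    os.close(fd)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600
```

O_EXCL also refuses to follow a pre-existing file, which helps against the file-squatting races discussed under tampering.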
■ Network monitoring: Network monitoring takes advantage of the architecture of most networks to monitor traffic. (In particular, most networks now broadcast packets, and each listener is expected to decide if the packet matters to them.) When networks are architected differently, there are a variety of techniques to draw traffic to or through the monitoring station. If you don't address spoofing, much like tampering, an attacker can just sit in the middle and spoof each end. Mitigating network information disclosure threats requires handling both spoofing and tampering threats. If you don't address tampering, then there are all sorts of clever ways to get information out. Here again, SSL and IP Security options are your simplest choices.
■ Names reveal information: When the name of a directory or a filename itself will reveal information, then the best way to protect it is to create a
  • 128. to create a parent directory with an innocuous name and use operating system ACLs or permissions. ■ File content is sensitive: When the contents of the fi le need protection, use ACLs or cryptography. If you want to protect all the data should the machine fall into unauthorized hands, you’ll need to use cryptography. The forms of cryptography that require the person to manually enter a key or passphrase are more secure and less convenient. There’s fi le, fi lesystem, and database cryptography, depending on what you need to protect. ■ APIs reveal information: When designing an API, or otherwise passing information over a trust boundary, select carefully what information you disclose. You should assume that the information you provide will be passed on to others, so be selective about what you provide. For example, website errors that reveal the username and password to a
database are a common form of this flaw; others are discussed in Chapter 3.

Addressing Denial of Service

Table 1-5 and the list that follows show targets of denial of service, mitigation strategies that address denial of service, and techniques to implement those mitigations.

Table 1-5: Addressing Denial of Service Threats

■ Threat target: Network flooding
  ❖ Mitigation strategy: Look for exhaustible resources. Techniques: elastic resources; work to ensure attacker resource consumption is as high as or higher than yours
  ❖ Mitigation strategy: Network ACLs
■ Threat target: Program resources
  ❖ Mitigation strategy: Careful design. Techniques: elastic resource management; proof of work
  ❖ Mitigation strategy: Avoid multipliers. Technique: look for places where attackers can multiply CPU consumption on your end with minimal effort on their end; do something to require work or enable distinguishing attackers, such as having the client do crypto first, or login before large work factors (of course, that can't mean that logins are unencrypted)
■ Threat target: System resources
  ❖ Mitigation strategy: Leverage the OS. Technique: use OS settings

■ Network flooding: If you have static structures for the number of connections, what happens if those fill up? Similarly, to the extent that it's under your control, don't accept a small amount of network data from a possibly spoofed address and return a lot of data. Lastly, firewalls can provide a layer of network ACLs to control where you'll accept (or send) traffic, and can be useful in mitigating network denial-of-service attacks.
■ Look for exhaustible resources: The first set of exhaustible resources are network related, the second set are those your code manages, and the third are those the OS manages. In each case, elastic resourcing is a valuable technique. For example, in the 1990s some TCP stacks had a hardcoded limit of five half-open TCP connections. (A half-open connection is one in the process of being opened. Don't worry if that doesn't make sense,
but rather ask yourself why the code would be limited to five of them.) Today, you can often obtain elastic resourcing of various types from cloud providers.
  • 133. more work than they’re doing. For example, if he sends you a packet full of random data and you do expensive cryptographic operations on it, then your vulnerability to denial of service will be higher than if you make him do the cryptography fi rst. Of course, in an age of botnets, there are limits to how well one can reassign this work. There’s an excellent paper by Ben Laurie and Richard Clayton, “Proof of work proves not to work,” which argues against proof of work schemes (Laurie, 2004). Addressing Elevation of Privilege Table 1-6 and the list that follows show targets of elevation of privilege, mitiga- tion strategies that address elevation of privilege, and techniques to implement those mitigations. Table 1-6: Addressing Elevation of Privilege Threats
  • 134. THREAT TARGET MITIGATION STRATEGY MITIGATION TECHNIQUE Data/code confusion Use tools and architectures that separate data and code. ❖ Prepared statements or stored procedures in SQL ❖ Clear separators with canonical forms ❖ Late validation that data is what the next func- tion expects Control fl ow/ memory corrup- tion attacks Use a type-safe
  • 135. language. Writing code in a type-safe language protects against entire classes of attack. Leverage the OS for memory protection. Most modern operating systems have memory- protection facilities. www.it-ebooks.info http://www.it-ebooks.info/ Chapter 1 ■ Dive In and Threat Model! 21 c01.indd 11:33:50:AM 01/17/2014 Page 21 THREAT TARGET MITIGATION STRATEGY MITIGATION TECHNIQUE Use the sandbox. ❖ Modern operating systems support sand- boxing in various ways (AppArmor on Linux,
  • 136. AppContainer or the MOICE pattern on Windows, Sandboxlib on Mac OS). ❖ Don’t run as the “nobody” account, create a new one for each app. Postfi x and QMail are examples of the good pattern of one account per function. Command injec- tion attacks Be careful. ❖ Validate that your input is the size and form you expect. ❖ Don’t sanitize. Log and then throw it away if it’s weird. ■ Data/code confusion: Problems where data is treated as code are common. As information crosses layers, what’s tainted and what’s pure can be lost. Attacks such as XSS take advantage of HTML’s freely interweaving code and data. (That is, an .html fi le contains both code, such as Javascript, and data, such as text, to be displayed and sometimes formatting
  • 137. instructions for that text.) There are a few strategies for dealing with this. The fi rst is to look for ways in which frameworks help you keep code and data separate. For example, prepared statements in SQL tell the database what statements to expect, and where the data will be. You can also look at the data you’re passing right before you pass it, so you know what validation you might be expected to perform for the func- tion you’re calling. For example, if you’re sending data to a web page, you might ensure that it contains no <, >, #, or & characters, or whatever. In fact, the value of “whatever” is highly dependent on exactly what exists between “you” and the rendition of the web page, and what security checks it may be performing. If “you” means a web server, it may be very important to have a few < and > symbols in what you produce. If “you”
is something taking data from a database and sending it to, say, PHP, then the story is quite different. Ideally, the nature of "you" and the additional steps are clear in your diagrams.
■ Control flow/memory corruption attacks: This set of attacks generally takes advantage of weak typing and static structures in C-like languages to enable an attacker to provide code and then jump to that code. If you use a type-safe language, such as Java or C#, many of these attacks are harder to execute. Modern operating systems tend to contain memory protection and randomization features, such as Address Space Layout Randomization (ASLR). Sometimes the features are optional, and require a compiler or linker switch. In many cases, such features are almost free to use, and you should at least try all such features your OS supports. (It's not completely effortless; you may need to recompile, test, or make other such small investments.) The last set of controls to address memory corruption are sandboxes. Sandboxes are OS features that are designed to protect the OS, or the rest of the programs running as the user, from a corrupted program.

N O T E  Details about each of these features are outside the scope of this book, but searching on terms such as type safety, ASLR, and sandbox should provide a plethora of details.

■ Command injection attacks: Command injection attacks are a form of code/data confusion where an attacker supplies a control character, followed by commands. For example, in SQL injection, a single quote will often close a dynamic SQL statement; and when dealing with Unix shell scripts, the shell can interpret a semicolon as the end of input, taking anything after that as a command.

In addition to working through each STRIDE threat you encounter, a few other recurring themes will come up as you address your threats; these are covered in the following two sections.

Validate, Don't Sanitize

Know what you expect to see, how much you expect to see, and validate that that's what you're receiving. If you get something else, throw it away and return an error message. Unless your code is perfect, errors in sanitization will hurt a lot, because after you write that sanitize-input function you're going to rely on it. There have been fascinating attacks that rely on a sanitize function to get their code into shape to execute.
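The two ideas above — prepared (parameterized) statements to keep data out of the SQL code, and validating input rather than sanitizing it — can be sketched together in a few lines of Python. This is only an illustration, not a prescription: the table, the username policy (3–30 word characters), and the error handling are all invented for the example.

```python
import re
import sqlite3

# Validate: accept only the size and form we expect (an assumed policy of
# 3-30 word characters), and reject everything else outright.
USERNAME_RE = re.compile(r"^\w{3,30}$")

def lookup_user(conn, username):
    if not USERNAME_RE.fullmatch(username):
        # Don't try to "clean up" weird input -- log it and throw it away.
        raise ValueError("invalid username")
    # Parameterized query: the ? placeholder tells the database where the
    # data goes, so a quote in the input can't close the statement.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

print(lookup_user(conn, "alice"))
try:
    lookup_user(conn, "x' OR '1'='1")   # rejected by validation
except ValueError as e:
    print(e)
```

Note the defense in depth: even if the validation were missing, the parameterized query would still treat the injection attempt as plain data rather than SQL.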
Trust the Operating System

One of the themes that recurs in the preceding tables is "trust the operating system." Of course, you may want to discount that because I did much of this work while working for Microsoft, a purveyor of a variety of fine operating system software, so there might be some bias here. It's a valid point, and good for you for being skeptical. See, you're threat modeling already!

More seriously, trusting the operating system is a good idea for a number of reasons:
■ The operating system provides you with security features so you can focus on your unique value proposition.
■ The operating system runs with privileges that are probably not available to your program or your attacker.
■ If your attacker controls the operating system, you're likely in a world of hurt regardless of what your code tries to do.

With all of that "trust the operating system" advice, you might be tempted to ask why you need this book. Why not just rely on the operating system? Well, many of the building blocks just discussed are discretionary. You can use them well or you can use them poorly. It's up to you to ensure that you don't set the permissions on a file to 777, or the ACLs to allow Guest accounts to write. It's up to you to write code that runs well as a normal or even sandboxed user, and it's certainly up to you in these early days of client/server, web, distributed systems, web 2.0, cloud, or whatever comes next to ensure that you're building the right security mechanisms that these newfangled widgets don't yet offer.
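Using those discretionary features well is often a one-line decision. A minimal Python sketch of the file-permissions point (the filename and contents are made up for the example; the point is to request restrictive mode bits at creation time, rather than creating a world-writable 777 file and tightening it later):

```python
import os
import stat

# Create a secrets file readable and writable only by its owner (0o600).
# O_EXCL makes creation fail if the file already exists, so an attacker
# can't pre-create it with looser permissions.
fd = os.open("app-secrets.txt", os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "w") as f:
    f.write("api-key=example\n")

mode = stat.S_IMODE(os.stat("app-secrets.txt").st_mode)
print(oct(mode))  # 0o600 on POSIX systems
```

On Windows the mode bits map onto a different permission model, so there you would set an ACL instead; the principle of granting the minimum at creation time is the same.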
File Bugs

Now that you have a list of threats and ways you would like to mitigate them, you're through the complex, security-centered parts of the process. There are just a few more things to do, the first of which is to treat each line of the preceding tables as a bug. You want to treat these as bugs because if you ship software, you've learned to handle bugs in some way. You presumably have a way to track them, prioritize them, and ensure that you're closing them with an appropriate degree of consistency. This will mean something very different to a three-person start-up versus a medical device manufacturer, but both organizations will have a way to handle bugs. You want to tap into that procedure to ensure that threat modeling isn't just a paper exercise.

You can write the text of the bugs in a variety of ways, based on what your organization does. Examples of filing a bug might include the following:
■ Someone might use the /admin/ interface without proper authorization.
■ The admin interface lacks proper authorization controls.
■ There's no automated security testing for the /admin/ interface.

Whichever way you go, it's great if you can include the entire threat in the bug, and mark it as a security bug if your bug-tracking tool supports that. (If you're a super-agile scrum shop, use a uniquely colored Post-it for security bugs.) You'll also have to prioritize the bugs. Elevation-of-privilege bugs are almost always going to fall into the highest priority category, because when they're exploited they lead to so much damage. Denial of service often falls toward
the bottom of the stack, but you'll have to consider each bug to determine how to rank it.

Checking Your Work

Validation of your threat model is the last thing you do as part of threat modeling. There are a few tasks to be done here, and it is best to keep them aligned with the order in which you did the previous work. Therefore, the validation tasks include checking the model, checking that you've looked for each threat, and checking your tests. You probably also want to validate the model a second time as you get close to shipping or deploying.

Checking the Model

You should ensure that the final model matches what you built. If it doesn't, how can you know that you found the right, relevant threats? To do so, try to arrange a meeting during which everyone looks at the diagram, and answer the following questions:
■ Is this complete?
■ Is it accurate?
■ Does it cover all the security decisions we made?
■ Can I start the next version with this diagram without any changes?

If everyone says yes, your diagram is sufficiently up to date for the next step. If not, you'll need to update it.

Updating the Diagram

As you went through the diagram, you might have noticed that it's missing key data. If it were a real system, there might be extra interfaces that were not drawn in, or there might be additional databases. There might be details that you jumped to the whiteboard to draw in. If so, you need to update the diagram with those details. A few rules of thumb are useful as you create or update diagrams:
■ Focus on data flow, not control flow.
■ Anytime you need to qualify your answer with "sometimes" or "also," you should consider adding more detail to break out the various cases. For example, if you say, "Sometimes we connect to this web service via SSL, and sometimes we fall back to HTTP," you should draw both of those data flows (and consider whether an attacker can make you fall back like that).
■ Anytime you find yourself needing more detail to explain security-relevant behavior, draw it in.
■ Any place you argued over the design or construction of the system, draw in the agreed-on facts. This is an important way to ensure that everyone ended that discussion on the same page. It's especially important for larger
teams when not everyone is in the room for the threat model discussions. If they see a diagram that contradicts their thinking, they can either accept it or challenge the assumptions; but either way, a good clear diagram can help get everyone on the same page.
■ Don't have data sinks: You write the data for a reason. Show who uses it.
■ Data can't move itself from one data store to another: Show the process that moves it.
■ The diagram should tell a story, and support you telling stories while pointing at it.
■ Don't draw an eye chart (a diagram with so much detail that you need to squint to read the tiny print).

Diagram Details

If you're wondering how to reconcile that last rule of thumb, don't draw an eye chart, with all the details that a real software project can entail, one technique is to use a sub-diagram that shows the details of one particular area. You should look for ways to break things out that make sense for your project. For example, if you have one hyper-complex process, maybe everything in that process should be covered in one diagram, and everything outside it in another. If you have a dispatcher or queuing system, that's a good place to break things up. Your databases or your fail-over systems are also good places to split. Maybe there's a set of a few elements that really need more detail. All of these are good ways to break things out. The key thing to remember is that the diagram is intended to help ensure that you understand and can discuss the system. Recall the quote that opens this book: "All models are wrong. Some models are useful." Therefore, when you're adding additional diagrams, don't ask, "Is this the right way to do it?" Instead, ask, "Does this help me think about what might go wrong?"

Checking Each Threat

There are two main types of validation activities you should do. The first is checking that you did the right thing with each threat you found. The other is asking if you found all the threats you should find.

In terms of checking that you did the right thing with each threat you did find, the first and foremost question here is "Did I do something with each unique threat I found?" You really don't want to drop stuff on the floor. This is "turning the crank" sort of work. It's rarely glamorous or exciting until you find the thing you overlooked. You can save a lot of time by taking meeting minutes
and writing a bug number next to each one, checking that you've addressed each when you do your bug triage. The next question is "Did I do the right something with each threat?" If you've filed bugs with some sort of security tag, run a query for all the security bugs, and give each one a going-over. This can be as lightweight as reading each bug and asking yourself, "Did I do the right thing?" or you could use a short checklist, an example of which ("Validating Threats") is included at the end of this chapter in the "Checklists for Diving In and Threat Modeling" section.

Checking Your Tests

For each threat that you address, ensure you've built a good test to detect the problem. Your test can be a manual testing process or an automated test. Some of these tests will be easy, and others very tricky. For example, if you want to ensure that no static web page under /beta can be accessed without the beta cookie, you can build a quick script that retrieves all the pages from your source repository, constructs a URL for each, and tries to collect the page. You could extend the script to send a cookie with each request, and then re-request with an admin cookie. Ideally, that's easy to do in your existing web testing framework. It gets a little more complex with dynamic pages, and a lot more complex when the security risk is something such as SQL injection or secure parsing of user input. There are entire books written on those subjects, not to mention entire books on the subject of testing. The key question you should ask is something like "Are my security tests in line with the other software tests and the sorts of risks that failures expose?"

Threat Modeling on Your Own

You have now walked through your first threat model. Congratulations! Remember
though: You're not going to get to Carnegie Hall if you don't practice, practice, practice. That means it is time to do it again, this time on your own, because doing it again is the only way to get better. Pick a system you're working on and threat model it. Follow this simplified, five-step process as you go:
1. Draw a diagram.
2. Use the EoP game to find threats.
3. Address each threat in some way.
4. Check your work with the checklists at the end of this chapter.
5. Celebrate and share your work.

Right now, if you're new to threat modeling, your best bet is to do it often, applying it to the software and systems that matter to you. After threat modeling a few systems, you'll find yourself getting more comfortable with the tools and techniques. For now, the thing to do is practice. Build your first muscles to threat model with.

This brings up the question, what should you threat model next? What you're working on now is the first place to look for the next system to threat model. If it has a trust boundary of some sort, it may be a good candidate. If it's too simple to have trust boundaries, threat modeling it probably won't be very satisfying. If it has too many boundaries, it may be too big a project to chew on all at once. If you're collaborating closely on it with a few other people who you trust, that may be a good opportunity to play EoP with them. If you're working on a large team, or across organizational boundaries, or things are tense, then those people may not be good first collaborators on threat modeling. Start with what you're working on now, unless there are tangible
reasons to wait.

Checklists for Diving In and Threat Modeling

There's a lot in this chapter. As you sit down to really do the work yourself, it can be tricky to assess how you're doing. Here are some checklists that are designed to help you avoid the most common problems. Each question is designed to be read aloud and to have an affirmative answer from everyone present. After reading each question out loud, encourage questions or clarification from everyone else involved.

Diagramming
1. Can we tell a story without changing the diagram?
2. Can we tell that story without using words such as "sometimes" or "also"?
3. Can we look at the diagram and see exactly where the software will make a security decision?
4. Does the diagram show all the trust boundaries, such as where different accounts interact? Do you cover all UIDs, all application roles, and all network interfaces?
5. Does the diagram reflect the current or planned reality of the software?
6. Can we see where all the data goes and who uses it?
7. Do we see the processes that move data from one data store to another?

Threats
1. Have we looked for each of the STRIDE threats?
2. Have we looked at each element of the diagram?
3. Have we looked at each data flow in the diagram?

N O T E  Data flows are a type of element, but they are sometimes overlooked as people get started, so question 3 is a belt-and-suspenders question to add redundancy. (A belt-and-suspenders approach ensures that a gentleman's pants stay up.)

Validating Threats
1. Have we written down or filed a bug for each threat?
2. Is there a proposed/planned/implemented way to address each threat?
3. Do we have a test case per threat?
4. Has the software passed the test?

Summary

Any technical professional can learn to threat model. Threat modeling involves the intersection of two models: a model of what can go wrong (threats), applied to a model of the software you're building or deploying, which is encoded in a diagram. One model of threats is STRIDE: spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. This model of threats has been made into the Elevation of Privilege game, which adds structure and hints to the model. With a whiteboard diagram and a copy of Elevation of
Privilege, developers can threat model software that they're building, systems administrators can threat model software they're deploying or a system they're constructing, and security professionals can introduce threat modeling to those with skillsets outside of security.

It's important to address threats, and the STRIDE threats are the inverse of properties you want. There are mitigation strategies and techniques for developers and for systems administrators. Once you've created a threat model, it's important to check your work by making sure you have a good model of the software in an up-to-date diagram, and that you've checked each threat you've found.

Chapter 2: Strategies for Threat Modeling

The earlier you find problems, the easier it is to fix them. Threat modeling is all about finding problems, and therefore it should be done early in your development or design process, or in preparing to roll out an operational system. There are many ways to threat model. Some ways are very specific, like a model airplane kit that can only be used to build an F-14 fighter jet. Other methods are more versatile, like Lego building blocks that can be used to make a variety of things. Some threat modeling methods don't combine easily, in the same way that Erector set pieces and Lego set blocks don't fit together. This chapter covers the various strategies and methods that have been brought to bear on threat modeling, presents each one in depth, and sets the stage for effectively finding threats.

You'll start with very simple methods such as asking "what's your threat model?" and brainstorming about threats. Those can work for a security expert, and they may work for you. From there, you'll learn about three strategies for threat modeling: focusing on assets, focusing on attackers, and focusing on software. These strategies are more structured, and can work for people with different skillsets. A focus on software is usually the most appropriate strategy. The desire to focus on assets or attackers is natural, and often presented as an unavoidable or essential aspect of threat modeling. It would be wrong not to present each in its best light before discussing issues with those strategies. From there, you'll learn about different types of diagrams you can use to model your system or software.
N O T E  This chapter doesn't include the specific threat building blocks that discover threats, which are the subject of the next few chapters.

"What's Your Threat Model?"

The question "what's your threat model?" is a great one because in just four words, it can slice through many conundrums to determine what you are worried about. Answers are often of the form "an attacker with the laptop" or "insiders," or (unfortunately, often) "huh?" The "huh?" answer is useful because it reveals how much work would be needed to find a consistent and structured approach to defense. Consistency and structure are important because they help you invest in defenses that will stymie attackers. There's a compendium of standard answers to "what's your threat model?" in Appendix A, "Helpful Tools," but a few examples are listed here as well:
■ A thief who could steal your money
■ The company stakeholders (employees, consultants, shareholders, etc.) who access sensitive documents and are not trusted
■ An untrusted network
■ An attacker who could steal your cookie (web or otherwise)

N O T E  Throughout this book, you'll visit and revisit the same example for each of these approaches. Your main targets are the fictitious Acme Corporation's "Acme/SQL," which is a commercial database server, and Acme's operational network. Using Acme examples, you can see how the different approaches play out against the same systems.

Applying the question "what's your threat model?" to the Acme Corporation example, you might get the following answers:
■ For the Acme SQL database, the threat model would be an attacker who wants to read or change data in the database. A more subtle model might also include people who want to read the data without showing up in the logs.
■ For Acme's financial system, the answers might include someone getting a check they didn't deserve, customers who don't make a payment they owe, and/or someone reading or altering financial results before reporting.

If you don't have a clear answer to the question, "what's your threat model?" it can lead to inconsistency and wasted effort. For example, start-up Zero-Knowledge
  • 164. Because there was no clear answer, there wasn’t consistency in what security features were built. A great deal of energy went into building defenses against the most complex attacks, and these choices to defend against such attackers had performance impacts on the whole system. While preventing governments from spying on customers was a fun technical challenge and an emotionally resonant goal, both the challenge and the emotional impact made it hard to make techni- cal decisions that could have made the business more successful. Eventually, a clearer answer to “what’s your threat model?” let Zero- Knowledge Systems invest in mitigations that all addressed the same subset of possible threats. So how do you ensure you have a clear answer to this question? Often, the answers are not obvious, even to those who think regularly about security, and the question itself offers little structure for fi guring out the answers. One
  • 165. approach, often recommended is to brainstorm. In the next section, you’ll learn about a variety of approaches to brainstorming and the tradeoffs associated with those approaches. Brainstorming Your Threats Brainstorming is the most traditional way to enumerate threats. You get a set of experienced experts in a room, give them a way to take notes (white- boards or cocktail napkins are traditional) and let them go. The quality of the brainstorm is bounded by the experience of the brainstormers and the amount of time spent brainstorming. Brainstorming involves a period of idea-generation, followed by a period of analyzing and selecting the ideas. Brainstorming for threat modeling involves coming up with possible attacks of all sorts. During the idea generation phase, you should forbid criticism. You want to explore the space of possible threats,
  • 166. and an atmosphere of criticism will inhibit such idea generation. A moderator can help keep brainstorming moving. During brainstorming, it is key to have an expert on the technology being modeled in the room. Otherwise, it’s easy to make bad assumptions about how it works. However, when you have an expert who’s proud of their technology, you need to ensure that you don’t end up with a “proud parent” offended that their software baby is being called ugly. A helpful rule is that it’s the software being attacked, not the software architects. That doesn’t always suffi ce, but it’s a good start. There’s also a benefi t to bringing together a diverse grouping of experts with a broader set of experience. Brainstorming can also devolve into out-of-scope attacks. For example, if you’re designing a chat program, attacks by the memory management unit against the CPU are probably out of scope, but if you’re
designing a motherboard, these attacks may be the focus of your threat modeling. One way to handle this issue is to list a set of attacks that are out of scope, such as "the administrator is malicious" or "an attacker edits the hard drive on another system," as well as a set of attack equivalencies, like "an attacker can smash the stack and execute code," so that those issues can be acknowledged and handled in a consistent way. A variant of brainstorming is the exhortation to "think like an attacker," which is discussed in more detail in Chapter 18, "Experimental Approaches."

Some attacks you might brainstorm in threat modeling Acme's financial statements include breaking in over the Internet, getting the CFO drunk, bribing a janitor, or predicting the URL where the financials will be published. These can be a bit of a grab bag, so the next section provides somewhat more focused approaches.

Brainstorming Variants

Free-form or "normal" brainstorming, as discussed in the preceding section, can be used as a method for threat modeling, but there are more specific methods you can use to help focus your brainstorming. The following sections describe variations on classic brainstorming: scenario analyses, pre-mortems, and movie plotting.

Scenario Analysis

It may help to focus your brainstorming with scenarios. If you're using written scenarios in your overall engineering, you might start from those and ask what might go wrong, or you could use a variant of Chandler's law ("When in doubt, have a man come through a door with a gun in his hand."). You don't need to restrict yourself to a man with a gun, of course; you can use any of the attackers listed in Appendix C, "Attacker Lists."

For an example of scenario-specific brainstorming, try to threat model handing your phone to a cute person in a bar. It's an interesting exercise. The recipient could perhaps text donations to the Red Cross, text an important person to "stop bothering me," or post to Facebook that "I don't take hints well" or "I'm skeevy," not to mention possibilities of running away with the phone or dropping it in a beer. Less frivolously, your sample scenarios might be based on the product scenarios or use cases for the current development cycle, and therefore cover failover and replication, and how those services could be exploited when not properly authenticated and authorized.
  • 170. Pre-Mortem Decision-sciences expert Gary Klein has suggested another brainstorming tech- nique he calls the pre-mortem (Klein, 1999). The idea is to gather those involved www.it-ebooks.info http://www.it-ebooks.info/ Chapter 2 ■ Strategies for Threat Modeling 33 c02.indd 11:35:5:AM 01/17/2014 Page 33 in a decision and ask them to assume that it’s shortly after a project deadline, or after a key milestone, and that things have gone totally off the rails. With an “assumption of failure” in mind, the idea is to explore why the participants believe it will go off the rails. The value to calling this a pre- mortem is the fram- ing it brings. The natural optimism that accompanies a project is replaced with an explicit assumption that it has failed, giving you and other participants a
  • 171. chance to express doubts. In threat modeling, the assumption is that the product is being successfully attacked, and you now have permission to express doubts or concerns. Movie Plotting Another variant of brainstorming is movie plotting. The key difference between “normal brainstorming” and “movie plotting” is that the attack ideas are intended to be outrageous and provocative to encourage the free fl ow of ideas. Defending against these threats likely involves science-fi ction-type devices that impinge on human dignity, liberty, and privacy without actually defending anyone. Examples of great movies for movie plot threats include Ocean’s Eleven, The Italian Job, and every Bond movie that doesn’t stink. If you’d like to engage in more structured movie plotting, create three lists: fl awed protagonists, brilliant antagonists, and whiz-bang gadgetry. You can then combine them as you see fi t. Examples of movie plot threats include a foreign spy writing
  • 172. code for Acme SQL so that a fourth connection attempt lets someone in as admin, a scheming CFO stealing from the fi rm, and someone rappelling from the ceiling to avoid the pressure mats in the fl oor while hacking into the database from the console. Note that these movie plots are equally applicable to Acme and its customers. The term movie plotting was coined by Bruce Schneier, a respected security expert. Announcing his contest to elicit movie plot threats, he said: “The purpose of this contest is absurd humor, but I hope it also makes a point. Terrorism is a real threat, but we’re not any safer through security measures that require us to correctly guess what the terrorists are going to do next” (Schneier, 2006). The point doesn’t apply only to terrorism; convoluted but vividly described threats can be a threat to your threat modeling methodology. Literature Review
As a precursor to brainstorming (or any other approach to finding threats), reviewing threats to systems similar to yours is a helpful starting point in threat modeling. You can do this using search engines, or by checking the academic literature and following citations. It can be incredibly helpful to search on competitors or related products. To start, search on a competitor, appending terms such as "security," "security bug," "penetration test," "pwning," or "Black Hat," and use your creativity. You can also review common threats in this book, especially Part III, "Managing and Addressing Threats," and the appendixes. Additionally, Ross Anderson's Security Engineering is a great collection of real-world attacks and engineering lessons you can draw on,
especially if what you're building is similar to what he covers (Wiley, 2008). A literature review of threats against databases might lead to an understanding of SQL injection attacks, backup failures, and insider attacks, suggesting the need for logs. Doing a review is especially helpful for those developing their skills in threat modeling. Be aware that a lot of the threats that may come up can be super-specific. Treat them as examples of more general cases, and look for variations and related problems as you brainstorm.

Perspective on Brainstorming

Brainstorming and its variants suffer from a variety of problems. Brainstorming often produces threats that are hard or impossible to address. Brainstorming intentionally requires the removal of scoping or boundaries, and the threats are very dependent on the participants and how the session happens to progress. When experts get together, unstructured discussion often ensues. This can be fun for the
experts and it usually produces interesting results, but oftentimes, experts are in short supply. Other times, engineers get frustrated with the inconsistency of "ask two experts, get three answers." There's one other issue to consider, and that relates to exit criteria. It's difficult to know when you're done brainstorming, and whether you're done because you have done a good job or because everyone is just tired. Engineering management may demand a time estimate that they can insert into their schedule, and these are difficult to predict. The best approach to avoid this timing issue is simply to set a meeting of defined length. Unfortunately, this option doesn't provide a high degree of confidence that all interesting threats have been found. Because of the difficulty of addressing threats illuminated with a limitless brainstorming technique and the poorly defined exit criteria to a brainstorming
session, it is important to consider other approaches to threat modeling that are more prescriptive, formal, repeatable, or less dependent on the aptitudes and knowledge of the participants. Such approaches are the subject of the rest of this chapter and are also discussed in the rest of Part II, "Finding Threats."

Structured Approaches to Threat Modeling

When it's hard to answer "What's your threat model?" people often use an approach centered on models of their assets, models of attackers, or models of their software. Centering on one of those is preferable to using approaches that attempt to combine them, because these combinations tend to be confusing. Assets are the valuable things you have. The people who might
go after your assets are attackers, and the most common way for them to attack is via the software you're building or deploying. Each of these is a natural place to start thinking about threats, and each has advantages and disadvantages, which are covered in this section. There are people with very strong opinions that one of these is right (or wrong). Don't worry about "right" or "wrong," but rather "usefulness." That is, does your approach help you find problems? If it doesn't, it's wrong for you, however forcefully someone might argue its merits. These three approaches can be thought of as analogous to Lincoln Log sets, Erector sets, and Lego sets. Each has a variety of pieces, and each enables you to build things, but they may not combine in ways as arbitrary as you'd like. That is, you can't snap Lego blocks onto an Erector set model. Similarly, you can't
always snap attackers onto a software model and have something that works as a coherent whole. To understand these three approaches, it can be useful to apply them to something concrete. Figure 2-1 shows a data flow diagram of the Acme/SQL system.

[Figure 2-1: Data flow diagram of the Acme/SQL database. It shows Web Clients and SQL Clients (external entities); the Acme Front End(s), Database, DB Admin, and Log Analysis processes; the Data, Management, and Logs data stores; the DBA (human) and DB Users (human); and the trust boundaries between them.]

Looking at the diagram, and reading from left to right, you can see two types of clients accessing the front ends and the core database, which manages transactions, access control, atomicity, and so on. Here, assume that the Acme/SQL system comes with an integrated web server, and that authorized clients are given nearly raw access to the data. There could simultaneously be web
servers offering deeper business logic, access to multiple back ends, integration with payment systems, and so on. Those web servers would access Acme/SQL via the SQL protocol over a network connection. Back to the diagram, Figure 2-1 also shows a set of DB Admin tools that the DBA (the human database administrator) uses to manage the system. As shown in the diagram, there are three conceptual data stores: Data, Management (including metadata such as locks, policies, and indices), and Logs. These might be implemented in memory, as files, as custom stores on raw disk, or delegated to network storage. As you dig in, the details matter greatly, but you usually start modeling from the conceptual level, as shown. Finally, there's a log analysis package. Note that only the database core has direct access to the data and management information in this design. You should also note that most of the arrows are two-way, except
Database ➪ Logs and Logs ➪ Log Analysis. Of course, the Log Analysis process will be querying the logs, but because it's intended as a read-only interface, it is represented as a one-way arrow. Very occasionally, you might have strictly one-way flows, such as those implemented by SNMP traps or syslog. Some threat modelers prefer two one-way arrows, which can help you see threats in each direction, but also lead to busy diagrams that are hard to label or read. If your diagrams are simple, the pair of one-way arrows helps you find threats, and is therefore better than a two-way arrow. If your diagram is complex, either approach can be used. The data flow diagram is discussed in more detail later in this chapter, in the section "Data Flow Diagrams." In the next few sections, you'll see how to apply asset-centric, attacker-centric, and software-centric models to find threats against Acme/SQL.

Focusing on Assets
It seems very natural to center your approach on assets, or things of value. After all, if a thing has no value, why worry about how someone might attack it? It turns out that focusing on assets is less useful than you may hope, and is therefore not the best approach to threat modeling. However, there are a small number of people who will benefit from asset-centered threat modeling. The most likely to benefit are a team of security experts with experience structuring their thinking around assets. (Having found a way that works for them, there may be no reason to change.) Less technical people may be able to contribute to threat modeling by saying "focus on this asset." If you are in either of these groups, or work with them, congratulations! This section is for you. If you aren't one of those people, however, don't be too quick to skip ahead. It is still important
to have a good understanding of the role assets can play in threat modeling, even if they're not in a starring role. It can also help for you to understand why the approach is not as useful as it may appear, so you can have an informed discussion with those advocating it. The term asset usually refers to something of value. When people bring up assets as part of a threat modeling activity, they often mean something an attacker wants to access, control, or destroy. If everyone who touches the threat modeling process doesn't have a working agreement about what an asset is, your process will either get bogged down, or participants will just talk past each other. There are three ways the term asset is commonly used in threat modeling:
■ Things attackers want
■ Things you want to protect
■ Stepping stones to either of these

You should think of these three types of assets as families rather than categories because just as people can belong to more than one family at a time, assets can take on more than one meaning at a time. In other words, the tags that apply to assets can overlap, as shown in Figure 2-2. The most common usage of asset in discussing threat models seems to be a marriage of "things attackers want" and "things you want to protect."

[Figure 2-2: The overlapping definitions of assets, shown as a Venn diagram of "stepping stones," "things you protect," and "things attackers want."]

NOTE There are a few other ways in which the term asset is used by those who
are threat modeling—such as a synonym for computer, or a type of computer (for example, "Targeted assets: mail server, database"). For the sake of clarity, this book only uses asset with explicit reference to one or more of the three families previously defined, and you should try to do the same.

Things Attackers Want

Usually assets that attackers want are relatively tangible things such as "this database of customer medical data." Good examples of things attackers want include the following:

■ User passwords or keys
■ Social security numbers or other identifiers
■ Credit card numbers
■ Your confidential business data

Things You Want to Protect

There's also a family of assets you want to protect. Unlike the tangible things attackers want, many of these assets are intangibles. For example, your company's reputation or goodwill is something you want to protect. Competitors or activists may well attack your reputation by engaging in smear tactics. From a threat modeling perspective, your reputation is usually too diffuse to be able to technologically mitigate threats against it. Therefore, you want to protect it by protecting the things that matter to your customers. As an example of something you want to protect, consider an empty safe: you intuitively don't want someone to come along and stick their stethoscope to it. But there's nothing in it, so what's the damage? Changing the combination and letting the right folks (but only the right folks) know the new combination
requires work. Therefore you want to protect this empty safe, but it would be an unlikely target for a thief. If that same safe had one million dollars in it, it would be much more likely to pique a thief's interest. The million dollars is part of the family of things you want to protect that attackers want, too.

Stepping Stones

The final family of assets is stepping stones to other assets. For example, everything drawn in a threat model diagram is something you want to protect because it may be a stepping stone to the targets that attackers want. In some ways, the set of stepping stone assets is an attractive nuisance. For example, every computer has CPU and storage that an attacker can use. Most also have Internet connectivity, and if you're in systems management or operational security, many of the computers you worry most about will have special access to your organization's network. They're behind a firewall or have VPN access. These
are stepping stones. If they are uniquely valuable stepping stones in some way, note that. In practice, it's rarely helpful to include "all our PCs" in an asset list.

NOTE Referring back to the safe example in the previous section, the safe combination is a member of the stepping stone family. It may well be that stepping stones and things you protect are, in practice, very similar. The list of technical elements you protect that are not members of the stepping-stone family appears fairly short.

Implementing Asset-Centric Modeling

If you were to threat model with an asset-focused approach, you would make a list of your assets and then consider how an attacker could threaten each. From
there, you'd consider how to address each threat. After an asset list is created, you should connect each item on the list to particular computer systems or sets of systems. (If an asset includes something like "Amazon Web Services" or "Microsoft Azure," then you don't need to be able to point to the computer systems in question; you just need to understand where they are—eventually you'll need to identify the threats to those systems and determine how to mitigate them.) The next step is to draw the systems in question, showing the assets and other components as well as interconnections, until you can tell a story about them. You can use this model to apply either an attack set like STRIDE or an attacker-centered brainstorm to understand how those assets could be attacked.

Perspective on Asset-Centric Threat Modeling

Focusing on assets appears to be a common-sense approach to threat modeling, to the point where it seems hard to argue with. Unfortunately, much of the time, a discussion of assets does not improve threat modeling. However, the misconception is so common that it's important to examine why it doesn't help. There's no direct line from assets to threats, and no prescriptive set of steps. Essentially, effort put into enumerating assets is effort you're not spending finding or fixing threats. Sometimes, that involves a discussion of what's an asset, or which type of asset you're discussing. That discussion, at best, results in a list of things to look for in your software or operational model, so why not start by creating such a model? Once you have a list of assets, that list is not (ahem) a stepping stone to finding threats; you still need to apply some methodology or approach. Finally, assets may help you prioritize threats, but if that's your goal, it doesn't mean you should start with or focus on assets. Generally, such information comes out naturally when discussing impacts as you
prioritize and address threats. Those topics are covered in Part III, "Managing and Addressing Threats." How you answer the question "What are our assets?" should help focus your threat modeling. If it doesn't help, there is no point in asking the question or spending time answering it.

Focusing on Attackers

Focusing on attackers seems like a natural way to threat model. After all, if no one is going to attack your system, why would you bother defending it? And if you're worried because people will attack your systems, shouldn't you understand them? Unfortunately, like asset-centered threat modeling, attacker-centered threat modeling is less useful than you might anticipate. But there are also a small number of scenarios in which focusing on attackers can come in handy, and they're the same scenarios as for assets: experts, less-technical input to your process, and prioritization. And similar to the "Focusing on Assets" section, you can also learn for yourself why this approach isn't optimal, so you can discuss the possibility with those advocating this approach.

Implementing Attacker-Centric Modeling

Security experts may be able to use various types of attacker lists to find threats against a system. When doing so, it's easy to find yourself arguing about the resources or capabilities of such an archetype, and needing to flesh them out. For example, what if your terrorist is state-sponsored, and has access to government labs? These questions make the attacker-centric approach start to resemble "personas," which are often used to help think about human interface issues.
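Attacker archetypes like these can be given just enough structure to seed a brainstorm. The following sketch is illustrative only; the archetype names, fields, and prompts are hypothetical stand-ins, not the lists from Appendix C:

```python
from dataclasses import dataclass

@dataclass
class Attacker:
    """A rough attacker archetype used to seed a brainstorming session."""
    name: str
    motivation: str
    resources: str

    def prompt(self, component: str) -> str:
        # Turn the archetype into a concrete question about one component.
        return (f"As a {self.name} ({self.resources}) motivated by "
                f"{self.motivation}: what would I do to {component}?")

# Hypothetical archetypes; real attacker lists and personas are richer.
ATTACKERS = [
    Attacker("script kiddie", "bragging rights", "public exploit tools"),
    Attacker("disgruntled insider", "revenge or profit", "valid credentials"),
    Attacker("state-sponsored spy", "espionage", "government labs"),
]

def brainstorm(component: str) -> list[str]:
    return [a.prompt(component) for a in ATTACKERS]

for line in brainstorm("the Acme/SQL front end"):
    print(line)
```

Walking each prompt across the elements of a diagram gives a session some structure, though it inherits the weaknesses of attacker-centric modeling discussed in this section.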
There's a spectrum of detail in attacker models, from simple lists to data-derived personas, and examples of each are given in Appendix C, "Attacker Lists." That appendix may help security experts, and will help anyone who wants to try attacker-centric modeling and learn faster than if they have to start by creating a list. Given a list of attackers, it's possible to use the list to provide some structure to a brainstorming approach. Some security experts use attacker lists as a way to help elicit the knowledge they've accumulated as they've become experts. Attacker-driven approaches are also likely to bring up possibilities that are human-centered. For example, when thinking about what a spy would do, it may be more natural (and fun) to think about them seducing your sysadmin or corrupting a janitor, rather than think about technical attacks. Worse, it will probably be challenging to think about what those human
attacks mean for your system's security.

Where Attackers Can Help You

Talking about human threat agents can help make the threats real. That is, it's sometimes tough to understand how someone could tamper with a configuration file, or replace client software to get around security checks. Especially when dealing with management or product teams who "just want to ship,"
  • 195. associated with talking about attackers is the claim that “no one would ever do that.” Attempting to humanize a risk by adding an actor can exacerbate this, especially if you add a type of actor who someone thinks “wouldn’t be interested in us.”). You were promised an example, and the spies stole it. More seriously, carefully walking through the attacker lists and personas in Appendix C likely doesn’t help you (or the author) fi gure out what they might want to do to Acme/SQL, and so the example is left empty to avoid false hope. Perspective on Attacker-Centric Modeling Helping security experts structure and recall information is nice, but doesn’t lead to reproducible results. More importantly, attacker lists or even personas are not enough structure for most people to fi gure out what those people will do. Engineers may subconsciously project their own biases or approaches into
  • 196. what an attacker might do. Given that the attacker has his own motivations, skills, background, and perspective (and possibly organizational priorities), avoiding such projection is tricky. In my experience, this combination of issues makes attacker- centric approaches less effective than other approaches. Therefore, I recommend against using attackers as the center of your threat modeling process. Focusing on Software Good news! You’ve offi cially reached the “best” structured threat modeling approach. Congrats! Read on to learn about software-centered threat modeling, why it’s the most helpful and effective approach, and how to do it. Software-centric models are models that focus on the software being built or a system being deployed. Some software projects have documented models of various sorts, such as architecture, UML diagrams, or APIs. Other projects
  • 197. don’t bother with such documentation and instead rely on implicit models. Having large project teams draw on a whiteboard to explain how the software fi ts together can be a surprisingly useful and contentious activity. Understandings differ, especially on large projects that have been running for a while, but fi nd- ing where those understandings differ can be helpful in and of itself because it offers a focal point where threats are unlikely to be blocked. (“I thought you were validating that for a SQL call!”) The same complexity applies to any project that is larger than a few people or has been running longer than a few years. Projects accumulate complexity, www.it-ebooks.info http://www.it-ebooks.info/ 42 Part I ■ Getting Started c02.indd 11:35:5:AM 01/17/2014 Page 42 which makes many aspects of development harder, including security. Software-
  • 198. centric threat modeling can have a useful side effect of exposing this accumu- lated complexity. The security value of this common understanding can also be substantial, even before you get to looking for threats. In one project it turned out that a library on a trust boundary had a really good threat model, but unrealistic assumptions had been made about what the components behind it were doing. The work to create a comprehensive model led to an explicit list of common assumptions that could and could not be made. The comprehensive model and resultant understanding led to a substantial improvement in the security of those components. N O T E As complexity grows, so will the assumptions that are made, and such lists are never complete. They grow as experience requires and feedback loops allow. Threat Modeling Diff erent Types of Software
  • 199. The threat discovery approaches covered in Part II, can be applied to models of all sorts of software. They can be applied to software you’re building for others to download and install, as well as to software you’re building into a larger operational system. The software they can be applied to is nearly endless, and is not dependent on the business model or deployment model associated with the software. Even though software no longer comes in boxes sold on store shelves, the term boxed software is a convenient label for a category. That category is all the software whose architecture is defi nable, because there’s a clear edge to what is the software: It’s everything in the box (installer, application download, or open source repository). This edge can be contrasted with the deployed systems that organizations develop and change over time. N O T E You may be concerned that the techniques in this book focus on either
  • 200. boxed software or deployed systems, and that the one you’re concerned about isn’t covered. In the interests of space, the examples and discussion only cover both when there’s a clear diff erence and reason. That’s because the recommended ways to model software will work for both with only a few exceptions. The boundary between boxed software models and network models gets blurrier every year. One important difference is that the network models tend to include more of the infrastructural components such as routers, switches, and data circuits. Trust boundaries are often operationalized by these components, or by whatever group operates the network, the platforms, or the applications. www.it-ebooks.info http://www.it-ebooks.info/ Chapter 2 ■ Strategies for Threat Modeling 43 c02.indd 11:35:5:AM 01/17/2014 Page 43
  • 201. Data fl ow models (which you met in Chapter 1, “Dive In and Threat Model!” and which you’ll learn more about in the next section) are usually a good choice for both boxed software and operational models. Some large data center operators have provided threat models to teams, showing how the data center is laid out. The product group can then overlay its models “on top” of that to align with the appropriate security controls that they’ll get from operations. When you’re using someone else’s data center, you may have discussions about their infrastructure choices that make it easy to derive a model, or you might have to assume the worst. Perspective on Software-Centric Modeling I am fond of software-centric approaches because you should expect software developers to understand the software they’re developing. Indeed, there is nothing else you should expect them to understand better. That makes software an ideal place to start the threat-modeling tasks in which you
  • 202. ask developers to participate. Almost all software development is done with software models that are good enough for the team’s purposes. Sometimes they require work to make them good enough for effective threat modeling. In contrast, you can merely hope that developers understand the business or its assets. You may aspire to them understanding the people who will attack their product or system. But these are hopes and aspirations, rather than reasonable expectations. To the extent that your threat modeling strategy depends on these hopes and aspirations, you’re adding places where it can fail. The remainder of this chapter is about modeling your software in ways that help you fi nd threats, and as such enabling software centric-modeling. (The methods for fi nding these threats are covered in the rest of Part II.) Models of Software Making an explicit model of your software helps you look for
  • 203. threats without getting bogged down in the many details that are required to make the software function properly. Diagrams are a natural way to model software. As you learned in Chapter 1, whiteboard diagrams are an extremely effective way to start threat modeling, and they may be suffi cient for you. However, as a system hits a certain level of complexity, drawing and redrawing on white- boards becomes infeasible. At that point, you need to either simplify the system or bring in a computerized approach. In this section, you’ll learn about the various types of diagrams, how they can be adapted for use in threat modeling, and how to handle the complexities of larger systems. You’ll also learn more detail about trust boundaries, effective labeling, and how to validate your diagrams. www.it-ebooks.info http://www.it-ebooks.info/
  • 204. 44 Part I ■ Getting Started c02.indd 11:35:5:AM 01/17/2014 Page 44 Types of Diagrams There are many ways to diagram, and different diagrams will help in differ- ent circumstances. The types of diagrams you’ll encounter most frequently are probably data fl ow diagrams (DFDs). However, you may also see UML, swim lane diagrams, and state diagrams. You can think of these diagrams as Lego blocks, looking them over to see which best fi ts whatever you’re building. Each diagram type here can be used with the models of threats in Part II. The goal of all these diagrams is to communicate how the system works, so that everyone involved in threat modeling has the same understanding. If you can’t agree on how to draw how the software works, then in the process of getting to agreement, you’re highly likely to discover
  • 205. misunderstandings about the security of the system. Therefore, use the diagram type that helps you have a good conversation and develop a shared understanding. Data Flow Diagrams Data fl ow models are often ideal for threat modeling; problems tend to follow the data fl ow, not the control fl ow. Data fl ow models more commonly exist for network or architected systems than software products, but they can be created for either. Data fl ow diagrams are used so frequently they are sometimes called “threat model diagrams.” As laid out by Larry Constantine in 1967, DFDs consist of numbered elements (data stores and processes) connected by data fl ows, interacting with external entities (those outside the developer’s or the organization’s control). The data fl ows that give DFDs their name almost always fl ow two ways,
  • 206. with exceptions such as radio broadcasts or UDP data sent off into the Ethernet. Despite that, fl ows are usually represented using one-way arrows, as the threats and their impact are generally not symmetric. That is, if data fl owing to a web server is read, it might reveal passwords or other data, whereas a data fl ow from the web server might reveal your bank balance. This diagramming convention doesn’t help clarify channel security versus message security. (The channel might be something like SMTP, with messages being e- mail messages.) Swim lane diagrams may be more appropriate as a model if this channel/message distinction is important. (Swim lane diagrams are described in the eponymous subsection later in this chapter.) The main elements of a data fl ow diagram are shown in Table 2-1. www.it-ebooks.info http://www.it-ebooks.info/
  • 207. Chapter 2 ■ Strategies for Threat Modeling 45 c02.indd 11:35:5:AM 01/17/2014 Page 45 Table 2-1: Elements of a Data Flow Diagram ELEMENT APPEARANCE MEANING EXAMPLES Process Rounded rect- angle, circle, or concentric circles Any running code Code written in C, C#, Python, or PHP Data fl ow Arrow Communication between processes, or between processes and data stores Network connec- tions, HTTP, RPC, LPC Data store Two parallel lines with a label
  • 208. between them Things that store data Files, databases, the Windows Registry, shared memory segments External entity Rectangle with sharp corners People, or code outside your control Your customer, Microsoft.com Figure 2-3 shows a classic DFD based on the elements from Table 2-1; how- ever, it’s possible to make these models more usable. Figure 2-4 shows this same model with a few changes, which you can use as an example for improving your own models.
  • 209. 1 Web Clients 2 SQL Clients 3 Front End(s) External Entity Key: Process Data Store 5 DB Admin 9 Data 10 Management 11 Logs 8 Log Analysis 6 DBA (Human) 7 DB Users 4 Database data flow Figure 2-3: A classic DFD model www.it-ebooks.info http://www.it-ebooks.info/
  • 210. 46 Part I ■ Getting Started c02.indd 11:35:5:AM 01/17/2014 Page 46 Web Clients SQL Clients Acme Front End(s) External Entity Key: Process Data Store DB Admin Data Management Logs Log Analysis Original SQL Account DB Cluster DBA (Human) DB Users (Human) Database
  • 211. data flow Trust Boundary Figure 2-4: A modern DFD model (previously shown as Figure 2-1) The following list explains the changes made from classic DFDs to more modern ones: ■ The processes are rounded rectangles, which contain text more effi ciently than circles. ■ Straight lines are used, rather than curved, because straight lines are easier to follow, and you can fi t more in larger diagrams. Historically, many descriptions of data flow diagrams contained both “process” elements and “complex process” elements. A process was depicted as a circle, a complex process as two concentric circles. It isn’t entirely clear, however, when to use a normal process versus a complex one. One possible rule is that anything that has a subdiagram should be a complex
  • 212. process. That seems like a decent rule, if (ahem) a bit circular. DFDs can be used for things other than software products. For example, Figure 2-5 shows a sample operational network in a DFD. This is a typical model for a small to mid-sized corporate network, with a representative sampling www.it-ebooks.info http://www.it-ebooks.info/ Chapter 2 ■ Strategies for Threat Modeling 47 c02.indd 11:35:5:AM 01/17/2014 Page 47 of systems and departments shown. It is discussed in depth in Appendix E, “Case Studies.” Acme Corporate Network Internet Payroll HR Directory
  • 213. Sales/CRM Operations Production Desktop & Mobile E-mail & Intranet Servers Development Servers HR Mgmt Figure 2-5: An operational network model UML UML is an abbreviation for Unifi ed Modeling Language. If you use UML in your software development process, it’s likely that you can adapt UML diagrams for threat modeling, rather than redrawing them. The most impor- tant way to adapt UML for threat modeling diagrams is the addition of trust boundaries.
UML is fairly complex. For example, the Visio stencils for UML offer roughly 80 symbols, compared to six for DFDs. This complexity brings a good deal of nuance and expressiveness as people draw structure diagrams, behavior diagrams, and interaction diagrams. If anyone involved in the threat modeling isn't up on all the UML symbols, or if there's misunderstanding about what those symbols mean, then the diagram's effectiveness as a tool is greatly diminished. In theory, anyone who's confused can just ask, but that requires them to know they're confused (they might assume that the symbol for fish excludes sharks). It also requires a willingness to expose one's ignorance by asking a "simple" question. It's probably easier for a team that's invested in UML to add trust boundaries to those diagrams than to create new diagrams just for threat modeling.

Swim Lane Diagrams

Swim lane diagrams are a common way to represent flows between various participants. They're drawn using long lines, each representing participants in a protocol, with each participant getting a line. Each lane edge is labeled to identify the participant; each message is represented by a line between participants; and time is represented by flow down the diagram lanes. The diagrams end up looking a bit like swim lanes, thus the name. Messages should be labeled with their contents; or if the contents are complex, it may make more sense to have a diagram key that abstracts out some details. Computation done by the parties or state should be noted along that participant's line. Generally, participants in such protocols are entities like computers; and as such, swim lane diagrams usually have implicit trust boundaries between each participant. Cryptographer and protocol designer Carl Ellison has extended swim lanes to include the human participants as a way to structure discussion of what people are expected to know and do. He calls this extension ceremonies, which is discussed in more detail in Chapter 15, "Human Factors and Usability." A sample swim lane diagram is shown in Figure 2-6.

Figure 2-6: Swim lane diagram (showing the start of a TCP connection)
State Diagrams

State diagrams represent the various states a system can be in, and the transitions between those states. A computer system is modeled as a machine with state, memory, and rules for moving from one state to another, based on the valid messages it receives, and the data in its memory. (The computer should, of course, test the messages it receives for validity according to some rules.) Each box is labeled with a state, and the lines between them are labeled with the conditions that cause the state transition. You can use state diagrams in threat modeling by checking whether each transition is managed in accordance with the appropriate security validations.

A very simple state machine for a door is shown in Figure 2-7 (derived from Wikipedia). The door has three states: opened, closed, and locked. Each state is entered by a transition. The "deadbolt" system is much easier to draw than locks on the knob, which can be locked from either state, creating a more complex diagram and user experience. Obviously, state diagrams can become complex quickly. You could imagine a more complex state diagram that includes "ajar," a state that can result from either open or closed. (I started drawing that but had trouble deciding on labels. Obviously, doors that can be ajar are poorly specified and should not be deployed.) You don't want to make architectural decisions just to make modeling easier, but often simple models are easier to work with, and reflect better engineering.
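The door model can also be checked mechanically: encode the labeled transitions as a table and refuse anything not in it, which is the same discipline as validating each transition for security. A minimal sketch, with states and transition labels following the figure and everything else illustrative:

```python
# Sketch: the three-state door as an explicit transition table. Rejecting
# any (state, event) pair not in the table is the analogue of managing
# each transition with the appropriate security validations.
TRANSITIONS = {
    ("opened", "close door"): "closed",
    ("closed", "open door"): "opened",
    ("closed", "lock deadbolt"): "locked",
    ("locked", "unlock deadbolt"): "closed",
}

class Door:
    def __init__(self, state="closed"):
        self.state = state

    def handle(self, event):
        """Apply one event, refusing transitions the model doesn't allow."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            # An invalid message for this state; reject rather than guess.
            raise ValueError(f"invalid transition {event!r} from {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

door = Door()
door.handle("lock deadbolt")
# door.handle("open door") here would raise: a locked door can't be
# opened without first unlocking the deadbolt.
```

Note how the "knob lock" variant would show up immediately as extra rows reachable from both opened and closed, making the added complexity visible.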
Figure 2-7: A state machine diagram

Trust Boundaries

As you saw in Chapter 1, a trust boundary is anyplace where various principals come together—that is, where entities with different privileges interact.

Drawing Boundaries

After a software model has been drawn, there are two ways to add boundaries:
You can add the boundaries you know and look for more, or you can enumerate principals and look for boundaries. To start from boundaries, add any sort of enforced trust boundary you can. Boundaries between unix UIDs, Windows sessions, machines, network segments, and so on should be drawn in as boxes, and the principal inside each box should be shown with a label. To start from principals, begin from one end or the other of the privilege spectrum (often that's root/admin or anonymous Internet users), and then add boundaries each time they talk to "someone else." You can always add at least one boundary, as all computation takes place in some context. (So you might criticize Figure 2-1 for showing Web Clients and SQL Clients without an identified context.)

If you don't see where to draw trust boundaries of any sort, your diagram may be drawn so that everything is inside a single trust boundary, or you may be missing boundaries. Ask yourself two questions. First, does everything in the system have the same level of privilege and access to everything else on the system? Second, is everything your software communicates with inside that same boundary? If either of these answers is no, then you have now clarified either a missing boundary or a missing element in the diagram, or both. If both are yes, then you should draw a single trust boundary around everything, and move on to other development activities. (This state is unlikely except when every part of a development team has to create a software model. That "bottom up" approach is discussed in more detail in Chapter 7, "Processing and Managing Threats.")

A lot of writing on threat modeling claims that trust boundaries should only cross data flows. This is useful advice for the most detailed level of your model. If a trust boundary crosses over a data store (that is, a database), that might indicate that there are different tables or stored procedures with different trust levels. If a boundary crosses over a given host, it may reflect that members of, for example, the group "software installers," have different rights from the "web content updaters." If you find a trust boundary crossing an element of a diagram other than a data flow, either break that element into two (in the model, in reality, or both), or draw a subdiagram to show them separated into multiple entities. What enables good threat models is clarity about what boundaries exist and how those boundaries are to be protected. Contrariwise, a lack of clarity will inhibit the creation of good models.
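One way to make boundary-drawing concrete is to label every element with the principal it runs as; any data flow whose two endpoints have different principals crosses a trust boundary and deserves scrutiny. A minimal sketch, with element names loosely echoing the Acme/SQL example (all details here are illustrative assumptions, not the book's tooling):

```python
# Sketch: find trust-boundary crossings in a toy software model. Each
# element records the principal it runs as; a data flow whose two ends
# run as different principals crosses a boundary. These crossings are
# also where threats tend to cluster.
elements = {
    "web clients": "anonymous internet users",
    "front end": "acme service",
    "database": "acme service",
    "db admin": "acme admins",
}
flows = [
    ("web clients", "front end"),
    ("front end", "database"),
    ("db admin", "database"),
]

def crossings(elements, flows):
    """Return flows whose endpoints belong to different principals."""
    return [
        (src, dst) for src, dst in flows
        if elements[src] != elements[dst]
    ]

for src, dst in crossings(elements, flows):
    print(f"boundary crossed: {src} -> {dst}")
```

Because only data flows carry principal-to-principal interaction in this representation, the "boundaries should only cross data flows" advice falls out naturally: any other kind of crossing would mean one element has two principals, a sign it should be split.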
Using Boundaries

Threats tend to cluster around trust boundaries. This may seem obvious: The trust boundaries delineate the attack surface between principals. This leads some to expect that threats appear only between the principals on the boundary, or only matter on the trust boundaries. That expectation is sometimes incorrect. To see why, consider a web server performing some complex order processing. For example, imagine assembling a computer at Dell's online store, where thousands of parts might be added, but only a subset of those have been tested and are on offer. A model of that website might be constructed as shown in Figure 2-8.

Figure 2-8: Trust boundaries in a web server

The web server in Figure 2-8 is clearly at risk of attack from the web browser, even though it talks through a TCP/IP stack that it presumably trusts. Similarly, the sales module is at risk; plus an attacker might be able to insert random part numbers into the HTML post in which the data is checked in an order processing module. Even though there's no trust boundary between the sales module and the order processing module, and even though data might be checked at three boundaries, the threats still follow the data flows. The client is shown simply as a web browser because the client is an external entity. Of course, there are many other components around that web browser, but you can't do anything about threats to them, so why model them? Therefore, it is more accurate to say that threats tend to cluster around trust boundaries and complex parsing, but may appear anywhere that information is under the control of an attacker.

What to Include in a Diagram

So what should be in your diagram? Some rules of thumb include the following:

■ Show the events that drive the system.
■ Show the processes that are driven.
■ Determine what responses each process will generate and send.
■ Identify data sources for each request and response.
■ Identify the recipient of each response.
■ Ignore the inner workings; focus on scope.
■ Ask if something will help you think about what goes wrong, or what will help you find threats.

This list is derived from Howard and LeBlanc's Writing Secure Code, Second Edition (Microsoft Press, 2009).

Complex Diagrams

When you're building complex systems, you may end up with complex diagrams. Systems do become complex, and that complexity can make using the diagrams (or understanding the full system) difficult. One rule of thumb is "don't draw an eye chart." It is important to balance all the details that a real software project can entail with what you include in your actual model. As mentioned in Chapter 1, one technique you can use to help you do this is a subdiagram showing the details of one particular area. You should look for ways to break out highly detailed areas that make sense for your project. For example, if you have one very complex process, maybe everything inside it is one diagram, and everything outside it is another. If you have a dispatcher or queuing system, that might be a good place to break things up. Maybe your databases or the failover system is a good place to split. Maybe there are a few elements that really need more detail. All of these are good ways to break things out. One helpful approach to subdiagrams is to ensure that there are not more subdiagrams than there are processes. Another approach is to use different diagrams to show different scenarios.

Sometimes it's also useful to simplify diagrams. When two elements of the diagram are equivalent from a security perspective, you can combine them. Equivalent means inside the same trust boundary, relying on the same technology, and handling the same sort of data.
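That equivalence test is mechanical enough to sketch in code: group elements by (trust boundary, technology, data handled) and draw one element per group. Everything below is an illustrative assumption, not the book's method:

```python
# Sketch: collapse diagram elements that are equivalent for threat
# modeling purposes -- same trust boundary, same technology, same sort
# of data -- so a complex diagram can be simplified.
from itertools import groupby

elements = [
    {"name": "web server 1", "boundary": "dmz", "tech": "nginx", "data": "http"},
    {"name": "web server 2", "boundary": "dmz", "tech": "nginx", "data": "http"},
    {"name": "database", "boundary": "internal", "tech": "sql", "data": "orders"},
]

def equivalence_key(e):
    return (e["boundary"], e["tech"], e["data"])

def merged(elements):
    """Return one representative group per equivalence class."""
    out = []
    for key, group in groupby(sorted(elements, key=equivalence_key), equivalence_key):
        names = [e["name"] for e in group]
        out.append({"names": names, "boundary": key[0], "tech": key[1], "data": key[2]})
    return out

for group in merged(elements):
    print(group["names"])
```

Here the two web servers collapse into one drawn element while the database stays separate, which is exactly the simplification the equivalence rule licenses.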
The key thing to remember is that the diagram is intended to help ensure that you understand and can discuss the system. Remember the quote that opens this book: "All models are wrong, some models are useful." Therefore, when you're adding additional diagrams, don't ask "is this the right way to do it?" Instead, ask "does this help us think about what might go wrong?"

Labels in Diagrams

Labels in diagrams should be short, descriptive, and meaningful. Because you want to use these names to tell stories, start with the outsiders who are driving the system; those are nouns, such as "customer" or "vibration sensor." They communicate information via data flows, which are nouns or noun phrases, such as "books to buy" or "vibration frequency." Data flows should almost never be labeled using verbs. Even though it can be hard, you should work to find more descriptive labels than "read" or "write," which are implied by the direction of the arrows. In other words, data flows communicate their information (nouns) to processes, which are active: verbs, verb phrases, or verb/noun chains.

Many people find it helpful to label data flows with sequence numbers to help keep track of what happens in what order. It can also be helpful to number elements within a diagram to help with completeness or communication. You can number each thing (data flow 1, process 1, et cetera) or you can have a single count across the diagram, with external entity 1 talking over data flows 2 and 3 to process 4. Generally, using a single counter for everything is less confusing. You can say "number 1" rather than "data flow 1, not process 1."

Color in Diagrams

Color can add substantial amounts of information without appearing overwhelming. For example, Microsoft's Peter Torr uses green for trusted, red for untrusted, and blue for what's being modeled (Torr, 2005). Relying on color alone can be problematic. Roughly one in twelve people suffer from color blindness, the most common being red/green confusion (Heitgerd, 2008). The result is that even with a color printer, a substantial number of people are unable to easily access this critical information. Box boundaries with text labels address both problems. With box trust boundaries, there is no reason not to use color.

Entry Points

One early approach to threat modeling was the "asset/entry point" approach, which can be effective at modeling operational systems. This approach can be
partially broken down into the following steps:

1. Draw a DFD.
2. Find the points where data flows cross trust boundaries.
3. Label those intersections as "entry points."

NOTE: There were other steps and variations in the approaches, but we as a community have learned a lot since then, and a full explanation would be tedious and distracting.

In the Acme/SQL example (as shown in Figure 2-1) the entry points are the "front end(s)" and the "database admin" console process. "Database" would also be an entry point, because nominally, other software could alter data in the databases and use failures in the parsers to gain control of the system. For the financials, the entry points shown are "external reporting," "financial planning and analysis," "core finance software," "sales" and "accounts receivable."

Validating Diagrams

Validating that a diagram is a good model of your software has two main goals: ensuring accuracy and aspiring to goodness. The first is easier, as you can ask whether it reflects reality. If important components are missing, or the diagram shows things that are not being built, then you can see that it doesn't reflect reality. If important data flows are missing, or nonexistent flows are shown, then it doesn't reflect reality. If you can't tell a story about the software without editing the diagram, then it's not accurate. Of course, there's that word "important" in there, which leads to the second criterion: aspiring to goodness. What's important is what helps you find issues. Finding issues is a matter of asking questions like "does this
element have any security impact?" and "are there things that happen sometimes or in special circumstances?" Knowing the answers to these questions is a matter of experience, just like many aspects of building software. A good and experienced architect can quickly assess requirements and address them, and a good threat modeler can quickly see which elements will be important. A big part of gaining that experience is practice. The structured approaches to finding threats in Part II are designed to help you identify which elements are important.

How To Validate Diagrams

To best validate your diagrams, bring together the people who understand the system best. Someone should stand in front of the diagram and walk through the important use cases, ensuring the following:

■ They can talk through stories about the diagram.
■ They don't need to make changes to the diagram in its current form.
■ They don't need to refer to things not present in the diagram.

The following rules of thumb will be useful as you update your diagram and gain experience:

■ Anytime you find someone saying "sometimes" or "also," you should consider adding more detail to break out the various cases. For example, if you say, "Sometimes we connect to this web service via SSL, and sometimes we fall back to HTTP," you should draw both of those data flows (and consider whether an attacker can make you fall back like that).
■ Anytime you need more detail to explain security-relevant behavior, draw it in.
■ Each trust boundary box should have a label inside it.
■ Anywhere you disagreed over the design or construction of the system, draw in those details. This is an important step toward ensuring that everyone ended that discussion on the same page. It's especially important for larger teams where not everyone is in the room for the threat model discussions. If anyone sees a diagram that contradicts their thinking, they can either accept it or challenge the assumptions; but either way, a good clear diagram can help get everyone on the same page.
■ Don't have data sinks: You write the data for a reason. Show who uses it.
■ Data can't move itself from one data store to another: Show the process that moves it.
■ All ways data can arrive should be shown.
■ If there are mechanisms for controlling data flows (such as firewalls or permissions), they should be shown.
■ All processes must have at least one entry data flow and one exit data flow.
■ As discussed earlier in the chapter, don't draw an eye chart.
■ Diagrams should be visible on a printable page.

NOTE: Writing Secure Code author David LeBlanc notes that "A process without input is a miracle, while one without output is a black hole. Either you're missing something, or have mistaken a process for people, who are allowed to be black holes or miracles."

When to Validate Diagrams

For software products, there are two main times to validate diagrams: when you create them and when you're getting ready to ship a beta. There's also a third triggering event (which is less frequent), which is if you add a security boundary.
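Several of the rules of thumb above, including LeBlanc's miracles and black holes, are mechanical enough to check automatically if the model is machine-readable. A minimal sketch, under an assumed flow-list format (all names illustrative):

```python
# Sketch: check some diagram-validation rules over a toy model:
# every process needs at least one entry flow and one exit flow
# (no miracles, no black holes), and every data store that is
# written should also be read (no data sinks).
processes = ["front end", "log analysis"]
stores = ["logs"]
flows = [
    ("web clients", "front end"),
    ("front end", "logs"),
    ("logs", "log analysis"),
    ("log analysis", "alerts"),
]

def diagram_problems(processes, stores, flows):
    problems = []
    sources = {src for src, _ in flows}
    sinks = {dst for _, dst in flows}
    for p in processes:
        if p not in sinks:
            problems.append(f"{p}: process with no input (a miracle)")
        if p not in sources:
            problems.append(f"{p}: process with no output (a black hole)")
    for s in stores:
        if s in sinks and s not in sources:
            problems.append(f"{s}: data written but never used (a data sink)")
    return problems

print(diagram_problems(processes, stores, flows))
```

A check like this doesn't replace walking through the diagram with the people who know the system, but it catches the purely structural mistakes before that meeting.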
For operational software diagrams, you also validate when you create them, and then again using a sensible balance between effort and up-to-dateness. That sensible balance will vary according to the maturity of a system, its scale, how tightly the components are coupled, the cadence of rollouts, and the nature of new rollouts. Here are a few guidelines:

■ Newer systems will experience more diagram changes than mature ones.
■ Larger systems will experience more diagram changes than smaller ones.
■ Tightly coupled systems will experience more diagram changes than loosely coupled systems.
■ Systems that roll out changes quickly will likely experience fewer diagram changes per rollout.
■ Rollouts or sprints focused on refactoring or paying down technical debt will likely see more diagram changes.

In either case, create an appropriate tracking item to ensure that you recheck your diagrams at a good time. The appropriate tracking item is whatever you use to gate releases or rollouts, such as bugs, task management software, or checklists. If you have no formal way to gate releases, then you might focus on a clearly defined release process before worrying about rechecking threat models. Describing such a process is beyond the scope of this book.

Summary

There's more than one way to threat model, and some of the strategies you can employ include modeling assets, modeling attackers, or modeling software. "What's your threat model" and brainstorming are good for security experts, but they lack the structure that less experienced threat modelers need. There are more structured approaches to brainstorming, including scenario analysis, pre-mortems, movie plotting, and literature reviews, which can help bring a little structure, but they're still not great.

If your threat modeling starts from assets, the multiple overlapping definitions of the term, including things attackers want, things you're protecting, and stepping stones, can trip you up. An asset-centered approach offers no route to figure out what will go wrong with the assets. Attacker modeling is also attractive, but trying to predict how another person will attack is hard, and the approach can invite arguments that "no one would do that." Additionally, human-centered approaches may lead you to human-centered threats that can be hard to address.
Software models are focused on what software people understand. The best models are diagrams that help participants understand the software and find threats against it. There are a variety of ways you can diagram your software, and DFDs are the most frequently useful. Once you have a model of the software, you'll need a way to find threats against it, and that is the subject of Part II.

Part II: Finding Threats

At the heart of threat modeling are the threats.
There are many approaches to finding threats, and they are the subject of Part II. Each has advantages and disadvantages, and different approaches may work in different circumstances. Each of the approaches in this part is like a Lego block. You can substitute one for another in the midst of this second step in the four-step framework and expect to get good results. Knowing what aspects of security can go wrong is the unique element that makes threat modeling threat modeling, rather than some other form of modeling. The models in this part are abstractions of threats, designed to help you think about these security problems. The more specific models (such as attack libraries) will be more useful to those new to threat modeling, and are less freewheeling. As you become more experienced, the less structured approaches such as STRIDE become more useful.

In this part, you'll learn about the following approaches to finding threats:

■ Chapter 3: STRIDE covers the STRIDE mnemonic you met in Chapter 1, and its many variants.
■ Chapter 4: Attack Trees are either a way for you to think through threats against your system, or a way to help others structure their thinking about those threats. Both uses of attack trees are covered in this chapter.
■ Chapter 5: Attack Libraries are libraries constructed to track and organize threats. They can be very useful to those new to security or threat modeling.
■ Chapter 6: Privacy Tools covers a collection of tools for finding privacy threats.
Part II focuses on the second question in the four-step framework: What can go wrong? As you'll recall from Part I, before you start finding threats with any of the techniques in this part, you should first have an idea of scope: where are you looking for threats? A diagram, such as a data flow diagram discussed in Part I, can help scope the threat modeling session, and thus is an excellent input condition. As you discuss threats, however, you'll likely find imperfections in the diagram, so it isn't necessary to "perfect" your diagram before you start finding threats.

Chapter 3: STRIDE

As you learned in Chapter 1, "Dive In and Threat Model!," STRIDE is an acronym that stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. The STRIDE approach to threat modeling was invented by Loren Kohnfelder and Praerit Garg (Kohnfelder, 1999). This framework and mnemonic was designed to help people developing software identify the types of attacks that software tends to experience.

The method or methods you use to think through threats have many different labels: finding threats, threat enumeration, threat analysis, threat elicitation, threat discovery. Each connotes a slightly different flavor of approach. Do the threats exist in the software or the diagram? Then you're finding them. Do they exist in the minds of the people doing the analysis? Then you're doing analysis or elicitation. No single description stands out as always or clearly preferable, but this book generally talks about finding threats as a superset of all these ideas. Using STRIDE is more like an elicitation technique, with an expectation that you or your team understand the framework and know how to use it. If you're not familiar with STRIDE, the extensive tables and examples are designed to teach you how to use it to discover threats.

This chapter explains what STRIDE is and why it's useful, including sections covering each component of the STRIDE mnemonic. Each threat-specific section provides a deeper explanation of the threat, a detailed table of examples for that threat, and then a discussion of the examples. The tables and examples are designed to teach you how to use STRIDE to discover threats. You'll also
learn about approaches built on STRIDE: STRIDE-per-element, STRIDE-per-interaction, and DESIST. The other approach built on STRIDE, the Elevation of Privilege game, is covered in Chapters 1, "Dive In and Threat Model!" and 12, "Requirements Cookbook," and Appendix C, "Attacker Lists."

Understanding STRIDE and Why It's Useful

The STRIDE threats are the opposite of some of the properties you would like your system to have: authenticity, integrity, non-repudiation, confidentiality, availability, and authorization. Table 3-1 shows the STRIDE threats, the corresponding property that you'd like to maintain, a definition, the most typical victims, and examples.

Table 3-1: The STRIDE Threats

Spoofing
  Property violated: Authentication
  Threat definition: Pretending to be something or someone other than yourself
  Typical victims: Processes, external entities, people
  Examples: Falsely claiming to be Acme.com, winsock.dll, Barack Obama, a police officer, or the Nigerian Anti-Fraud Group

Tampering
  Property violated: Integrity
  Threat definition: Modifying something on disk, on a network, or in memory
  Typical victims: Data stores, data flows, processes
  Examples: Changing a spreadsheet, the binary of an important program, or the contents of a database on disk; modifying, adding, or removing packets over a network, either local or far across the Internet, wired or wireless; changing either the data a program is using or the running program itself

Repudiation
  Property violated: Non-repudiation
  Threat definition: Claiming that you didn't do something, or were not responsible. Repudiation can be honest or false, and the key question for system designers is, what evidence do you have?
  Typical victims: Process
  Examples: Process or system: "I didn't hit the big red button" or "I didn't order that Ferrari." Note that repudiation is somewhat the odd-threat-out here; it transcends the technical nature of the other threats to the business layer.

Information Disclosure
  Property violated: Confidentiality
  Threat definition: Providing information to someone not authorized to see it
  Typical victims: Processes, data stores, data flows
  Examples: The most obvious example is allowing access to files, e-mail, or databases, but information disclosure can also involve filenames ("Termination for John Doe.docx"), packets on a network, or the contents of program memory.

Denial of Service
  Property violated: Availability
  Threat definition: Absorbing resources needed to provide service
  Typical victims: Processes, data stores, data flows
  Examples: A program that can be tricked into using up all its memory, a file that fills up the disk, or so many network connections that real traffic can't get through

Elevation of Privilege
  Property violated: Authorization
  Threat definition: Allowing someone to do something they're not authorized to do
  Typical victims: Process
  Examples: Allowing a normal user to execute code as admin; allowing a remote person without any privileges to run code
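The table's pairing of each threat with the property it violates is compact enough to keep as data, for example to tag findings in a tracking system. A minimal sketch (the mapping follows Table 3-1; the code around it is illustrative):

```python
# Sketch: the STRIDE threats paired with the security property each one
# violates, per Table 3-1. Keys are in mnemonic order, so their first
# letters spell out STRIDE.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

# The mnemonic check: first letters of the threats, in order.
assert "".join(threat[0] for threat in STRIDE) == "STRIDE"

print(STRIDE["Tampering"])  # Integrity
```

Tagging a finding with both the threat and the violated property keeps the later conversation about mitigations anchored to the requirement at stake.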
  • 254. In Table 3-1, “typical victims” are those most likely to be victimized: For example, you can spoof a program by starting a program of the same name, or www.it-ebooks.info http://www.it-ebooks.info/ 64 Part II ■ Finding Threats c03.indd 07:51:54:AM 01/15/2014 Page 64 by putting a program with that name on disk. You can spoof an endpoint on the same machine by squatting or splicing. You can spoof users by capturing their authentication info by spoofing a site, by assuming they reuse credentials across sites, by brute forcing (online or off) or by elevating privilege on their machine. You can also tamper with the authentication database and then spoof with falsified credentials. Note that as you’re using STRIDE to look for threats, you’re simply enumerat- ing the things that might go wrong. The exact mechanisms for
  • 255. how it can go wrong are something you can develop later. (In practice, this can be easy or it can be very challenging. There might be defenses in place, and if you say, for example, “Someone could modify the management tables,” someone else can say, “No, they can’t because...”) It can be useful to record those possible attacks, because even if there is a mitigation in place, that mitigation is a testable feature, and you should ensure that you have a test case. You’ll sometimes hear STRIDE referred to as “STRIDE categories” or “the STRIDE taxonomy.” This framing is not helpful because STRIDE was not intended as, nor is it generally useful for, categorization. It is easy to find things that are hard to categorize with STRIDE. For example, earlier you learned about tampering with the authentication database and then spoofi ng. Should you record that as a tam- pering threat or a spoofi ng threat? The simple answer is that it doesn’t matter. If
  • 256. you’ve already come up with the attack, why bother putting it in a category? The goal of STRIDE is to help you find attacks. Categorizing them might help you figure out the right defenses, or it may be a waste of effort. Trying to use STRIDE to categorize threats can be frustrating, and those efforts cause some people to dismiss STRIDE, but this is a bit like throwing out the baby with the bathwater. Spoofing Threats Spoofing is pretending to be something or someone other than yourself. Table 3-1 includes the examples of claiming to be Acme.com, winsock.dll, Barack Obama, or the Nigerian Anti-Fraud Offi ce. Each of these is an example of a different subcategory of spoofing. The first example, pretending to be Acme. com (or Google.com, etc.) entails spoofing the identity of an entity across a net- work. There is no mediating authority that takes responsibility for telling you
that Acme.com is the site I mean when I write these words. This differs from the second example, as Windows includes a winsock.dll. You should be able to ask the operating system to act as a mediating authority and get you to winsock. If you have your own DLLs, then you need to ensure that you're opening them with the appropriate path (%installdir%dll); otherwise, someone might substitute one in a working directory, and get your code to do what they want. (Similar issues exist with unix and LD_PATH.) The third example, spoofing Barack Obama, is an instance of pretending to be a specific person. Contrast that with the fourth example, pretending to be the President of the United States or the Nigerian Anti-Fraud Office. In those cases, the attacker is pretending to be in a role. These spoofing threats are laid out in Table 3-2.

Table 3-2: Spoofing Threats

THREAT EXAMPLES | WHAT THE ATTACKER DOES | NOTES
Spoofing a process on the same machine | Creates a file before the real process |
 | Creating a Trojan "su" and altering the path | Renaming/linking
 | Naming your process "sshd" | Renaming
Spoofing a file | Creates a file in the local directory | This can be a library, executable, or config file.
 | Creates a link and changes it | From the attacker's perspective, the change should happen between the link being checked and the link being accessed.
 | Creates many files in the expected directory | Automation makes it easy to create 10,000 files in /tmp, to fill the space of files called /tmp/pid.NNNN, or similar.
Spoofing a machine | ARP spoofing |
 | IP spoofing |
 | DNS spoofing | Forward or reverse DNS
 | Compromise | Compromise TLD, registrar or DNS operator
 | IP redirection | At the switch or router level
Spoofing a person | Sets e-mail display name |
 | Takes over a real account |
Spoofing a role | Declares themselves to be that role | Sometimes opening a special account with a relevant name

Spoofing a Process or File on the Same Machine

If an attacker creates a file before the real process, then if your code is not careful to create a new file, the attacker may supply data that your code interprets, thinking that your code (or a previous instantiation or thread) wrote that data, and it can be trusted. Similarly, if file permissions on a pipe, local procedure call, and so on, are not managed well, then an attacker can create that endpoint, confusing everything that attempts to use it later. Spoofing a process or file on a remote machine can work either by creating spoofed files or processes on the expected machine (possibly having taken admin
rights) or by pretending to be the expected machine, covered next.

Spoofing a Machine

Attackers can spoof remote machines at a variety of levels of the network stack. These spoofing attacks can influence your code's view of the world as a client, server, or peer. They can spoof ARP requests if they're local, they can spoof IP packets to make it appear that they're coming from somewhere they are not, and they can spoof DNS packets. DNS spoofing can happen when you do a forward or reverse lookup. An attacker can spoof a DNS reply to a forward query they expect you to make. They can also adjust DNS records for machines they control such that when your code does a reverse lookup (translating IP to FQDN) their DNS server returns a name in a domain that they do not control—for example, claiming that 10.1.2.3 is update.microsoft.com. Of course, once attackers have spoofed a machine, they can either spoof or act as a man-in-the-middle for the processes on that machine. Second-order variants of this threat involve stealing machine authenticators such as cryptographic keys and abusing them as part of a spoofing attack.

Attackers can also spoof at higher layers. For example, phishing attacks involve many acts of spoofing. There's usually spoofing of e-mail from "your" bank, and spoofing of that bank's website. When someone falls for that e-mail, clicks the link and visits the bank, they then enter their credentials, sending them to that spoofed website. The attacker then engages in one last act of spoofing: They log into your bank account and transfer your money to themselves or an accomplice. (It may be one attacker, or it may be a set of attackers, contracting with one another for services rendered.)

Spoofing a Person

Major categories of spoofing people include access to the
person's account and pretending to be them through an alternate account. Phishing is a common way to get access to someone else's account. However, there's often little to prevent anyone from setting up an account and pretending to be you. For example, an attacker could set up accounts on sites like LinkedIn, Twitter, or Facebook and pretend to be you, the Adam Shostack who wrote this book, or a rich and deposed prince trying to get their money out of the country.

Tampering Threats

Tampering is modifying something, typically on disk, on a network, or in memory. This can include changing data in a spreadsheet (using either a program
such as Excel or another editor), changing a binary or configuration file on disk, or modifying a more complex data structure, such as a database on disk. On a network, packets can be added, modified, or removed. It's sometimes easier to add packets than to edit them as they fly by, and programs are remarkably bad about handling extra copies of data securely. More examples of tampering are in Table 3-3.

Table 3-3: Tampering Threats

THREAT EXAMPLES | WHAT THE ATTACKER DOES | NOTES
Tampering with a file | Modifies a file they own and on which you rely |
 | Modifies a file you own |
 | Modifies a file on a file server that you own |
 | Modifies a file on their file server | Loads of fun when you include files from remote domains
 | Modifies a file on their file server | Ever notice how much XML includes remote schemas?
 | Modifies links or redirects |
Tampering with memory | Modifies your code | Hard to defend against once the attacker is running code as the same user
 | Modifies data they've supplied to your API | Pass by value, not by reference, when crossing a trust boundary
Tampering with a network | Redirects the flow of data to their machine | Often stage 1 of tampering
 | Modifies data flowing over the network | Even easier and more fun when the network is wireless (WiFi, 3G, et cetera)
 | Enhances spoofing attacks |

Tampering with a File

Attackers can modify files wherever they have write permission. When your code has to rely on files others can write, there's a possibility that the file was written maliciously. While the most obvious form of tampering is on a local disk, there are also plenty of ways to do this when the file is remotely included, like most of the JavaScript on the Internet. The attacker can breach
your security by breaching someone else's site. They can also (because of poor privileges, spoofing, or elevation of privilege) modify files you own. Lastly, they can modify links or redirects of various sorts. Links are often left out of integrity checks. There's a somewhat subtle variant of this when there are caches between things you control (such as a server) and things you don't (such as a web browser on the other side of the Internet). For example, cache poisoning attacks insert data into web caches through poor security controls at caches (OWASP, 2009).

Tampering with Memory

Attackers can modify your code if they're running at the same privilege level. At that point, defense is tricky. If your API handles data by reference (a pattern often chosen for speed), then an attacker can modify it after you perform security checks.

Tampering with a Network

Network tampering often involves a variety of tricks to bring the data to the attacker's machine, where he forwards some data intact and some data modified. However, tricks to bring you the data are not always needed; with radio interfaces like WiFi and Bluetooth, more and more data flow through the air. Many network protocols were designed with the assumption you needed special hardware to create or read arbitrary packets. The requirement for special hardware was the defense against tampering (and often spoofing). The rise of software-defined radio (SDR) has silently invalidated the need for special hardware. It is now easy to buy an inexpensive SDR unit that can be programmed to tamper with wireless protocols.

Repudiation Threats

Repudiation is claiming you didn't do something, or were not responsible for what happened. People can repudiate honestly or deceptively.
Given the increasing knowledge often needed to understand the complex world, those honestly repudiating may really be exposing issues in your user experiences or service architectures. Repudiation threats are a bit different from other security threats, as they often appear at the business layer. (That is, above the network layer such as TCP/IP, above the application layer such as HTTP/HTML, and where the business logic of buying products would be implemented.) Repudiation threats are also associated with your logging system and process. If you don't have logs, don't retain logs, or can't analyze logs, repudiation threats are hard to dispute. There is also a class of attacks in which attackers will drop data in the logs to make log analysis tricky. For example, if you display your logs in HTML and the attacker sends </tr> or </html>, your log display needs to treat those as data, not code. More repudiation threats are shown in Table 3-4.

Table 3-4: Repudiation Threats

THREAT EXAMPLES | WHAT THE ATTACKER DOES | NOTES
Repudiating an action | Claims to have not clicked | Maybe they really did
 | Claims to have not received | Receipt can be strange; does mail being downloaded by your phone mean you've read it? Did a network proxy pre-fetch images? Did someone leave a package on the porch?
 | Claims to have been a fraud victim |
 | Uses someone else's account |
 | Uses someone else's payment instrument without authorization |
Attacking the logs | Notices you have no logs |
 | Puts attacks in the logs to confuse logs, log-reading code, or a person reading the logs |

Attacking the Logs

Again, if you don't have logs, don't retain logs, or can't analyze logs, repudiation actions are hard to dispute. So if you aren't logging, you probably need to start. If you have no log centralization or analysis capability, you probably need that as well. If you don't properly define what you will be logging, an attacker may be able to break your log analysis system. It can be challenging to work through the layers of log production and analysis to ensure reliability, but if you don't, it's easy to have attacks slip through the cracks or
inconsistencies.

Repudiating an Action

When you're discussing repudiation, it's helpful to discuss "someone" rather than "an attacker." You want to do this because those who repudiate are often not actually attackers, but people who have been failed by technology or process. Maybe they really didn't click (or didn't perceive that they clicked). Maybe the spam filter really did eat that message. Maybe UPS didn't deliver, or maybe UPS delivered by leaving the package on a porch. Maybe someone claims to have been a victim of fraud when they really were not (or maybe someone else in a household used their credit card, with or without their knowledge). Good technological systems that both authenticate and log well can make it easier to handle repudiation issues.

Information Disclosure Threats

Information disclosure is about allowing people to see information they are not authorized to see. Some information disclosure threats are shown in Table 3-5.

Table 3-5: Information Disclosure Threats

THREAT EXAMPLES | WHAT THE ATTACKER DOES | NOTES
Information disclosure against a process | Extracts secrets from error messages | Reads the error messages from username/passwords to entire database tables
 | Extracts machine secrets from error cases | Can make defense against memory corruption such as ASLR far less useful
 | Extracts business/personal secrets from error cases |
Information disclosure against data stores | Takes advantage of inappropriate or missing ACLs |
 | Takes advantage of bad database permissions |
 | Finds files protected by obscurity |
 | Finds crypto keys on disk (or in memory) |
 | Sees interesting information in filenames |
 | Reads files as they traverse the network |
 | Gets data from logs or temp files |
 | Gets data from swap or other temp storage |
 | Extracts data by obtaining device, changing OS |
Information disclosure against a data flow | Reads data on the network |
 | Redirects traffic to enable reading data on the network |
 | Learns secrets by analyzing traffic |
 | Learns who's talking to whom by watching the DNS |
 | Learns who's talking to whom by social network info disclosure |

Information Disclosure from a Process

Many instances in which a process will disclose information are those that inform further attacks. A process can do this by leaking memory addresses, extracting secrets from error messages, or extracting design details from error messages. Leaking memory addresses can help bypass ASLR and similar defenses. Leaking secrets might include database connection strings or passwords. Leaking design details might mean exposing anti-fraud rules like "your account is too new to order a diamond ring."

Information Disclosure from a Data Store

As data stores, well, store data, there's a profusion of ways they can leak it. The first set of causes are failures to properly use security mechanisms. Not setting permissions appropriately or hoping that no one will find an obscure file are common ways in which people fail to use security mechanisms. Cryptographic keys are a special case whereby information disclosure allows additional attacks. Files read from a data store over the network are often readable as they traverse the network. An additional attack, often overlooked, is data in filenames. If you have a
directory named "May 2013 layoffs," the filename itself, "Termination Letter for Alice.docx," reveals important information. There's also a group of attacks whereby a program emits information into the operating environment. Logs, temp files, swap, or other places can contain data. Usually, the OS will protect data in swap, but for things like crypto keys, you should use OS facilities for preventing those from being swapped out. Lastly, there is the class of attacks whereby data is extracted from the device using an operating system under the attacker's control. Most commonly (in 2013), these attacks affect USB keys, but they also apply to CDs, backup tapes, hard drives, or stolen laptops or servers. Hard drives are often decommissioned without full data deletion. (You can address the need to delete data from hard drives by buying a hard drive chipper or smashing machine, and since such machines are awesome, why on earth wouldn't you?)

Information Disclosure from a Data Flow

Data flows are particularly susceptible to information disclosure attacks when information is flowing over a network. However, data flows on a single machine can still be attacked, particularly when the machine is shared by cloud co-tenants or many mutually distrustful users of a compute server. Beyond the simple reading of data on the network, attackers might redirect traffic to themselves (often by spoofing some network control protocol) so they can see it when they're not on the normal path. It's also possible to obtain information even when the
network traffic itself is encrypted. There are a variety of ways to learn secrets about who's talking to whom, including watching DNS, friend activity on a site such as LinkedIn, or other forms of social network analysis.

NOTE Security mavens may be wondering if side channel attacks and covert channels are going to be mentioned. These attacks can be fun to work on (and side channels are covered a bit in Chapter 16, "Threats to Cryptosystems"), but they are not relevant until you've mitigated the issues covered here.

Denial-of-Service Threats

Denial-of-service attacks absorb a resource that is needed to provide service. Examples are described in Table 3-6.

Table 3-6: Denial-of-Service Threats

THREAT EXAMPLES | WHAT THE ATTACKER DOES | NOTES
Denial of service against a process | Absorbs memory (RAM or disk) |
 | Absorbs CPU |
 | Uses process as an amplifier |
Denial of service against a data store | Fills data store up |
 | Makes enough requests to slow down the system |
Denial of service against a data flow | Consumes network resources |
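The "makes enough requests to slow down the system" threat in Table 3-6 is commonly mitigated at a front end with rate limiting. The sketch below shows a minimal token-bucket limiter; the class and parameter names are illustrative, not something the text prescribes.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative names).

    Each request costs one token; tokens refill steadily over time, so
    short bursts are allowed but a sustained flood is rejected.
    """
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill in proportion to elapsed time, never above capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
print([bucket.allow() for _ in range(5)])  # a burst of 3 passes; the rest are dropped
```

A real deployment would keep one bucket per client identity and bound the table's memory, but the accounting is the same.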
Denial-of-service attacks can be split into those that work while the attacker is attacking (say, filling up bandwidth) and those that persist. Persistent attacks can remain in effect until a reboot (for example, while(1){fork();}), or even past a reboot (for example, filling up a disk). Denial-of-service attacks can also be divided into amplified and unamplified. Amplified attacks are those whereby small attacker effort results in a large impact. An example would take advantage of the old unix chargen service, whose purpose was to generate a semi-random character scheme for testing. An attacker could spoof a single packet from the chargen port on machine A to the chargen port on machine B. The hilarity continues until someone pulls a network cable.

Elevation of Privilege Threats

Elevation of privilege is allowing someone to do something they're not authorized to do—for example, allowing a normal user to execute code as admin, or allowing a remote person without any privileges to run code. Two important ways to elevate privileges involve corrupting a process and getting past authorization checks. Examples are shown in Table 3-7.

Table 3-7: Elevation of Privilege Threats

THREAT EXAMPLES | WHAT THE ATTACKER DOES | NOTES
Elevation of privilege against a process by corrupting the process | Send inputs that the code doesn't handle properly | These errors are very common, and are usually high impact.
 | Gains access to read or write memory inappropriately | Writing memory is (hopefully obviously) bad, but reading memory can enable further attacks.
Elevation through missed authorization checks | |
Elevation through buggy authorization checks | | Centralizing such checks makes bugs easier to manage
Elevation through data tampering | Modifies bits on disk to do things other than what the authorized user intends |
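The note that "centralizing such checks makes bugs easier to manage" can be made concrete: route every privileged entry point through one shared check, so a missing check is a visible omission rather than a silent gap. A hedged Python sketch (the role model and function names here are hypothetical, not from the text):

```python
from functools import wraps

class AuthorizationError(Exception):
    """Raised when a caller lacks the required role."""

def requires_role(role):
    """Centralized authorization check: one definition applied to every
    privileged entry point, instead of ad hoc checks scattered in each."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):
                raise AuthorizationError(
                    f"{user.get('name')} lacks role {role!r}")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_role("admin")
def drop_table(user, table):
    # The body can assume authorization has already happened.
    return f"dropped {table}"

print(drop_table({"name": "alice", "roles": ["admin"]}, "salaries"))
```

With the check centralized, fixing a bug in the authorization logic is one change, and auditing for "missed authorization checks" reduces to searching for unprotected entry points.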
Elevate Privileges by Corrupting a Process

Corrupting a process involves things like smashing the stack, exploiting data on the heap, and a whole variety of exploitation techniques. The impact of these techniques is that the attacker gains influence or control over a program's control flow. It's important to understand that these exploits are not limited to the attack surface. The first code that attacker data can reach is, of course, an important target. Generally, that code can only validate data against a limited subset of purposes. It's important to trace the data flows further to see where else elevation of privilege can take place. There's a somewhat unusual case whereby a program relies on and executes things from shared memory, which is a trivial path for elevation if everything with permissions to that shared memory is not running at the same privilege level.

Elevate Privileges through Authorization Failures

There is also a set of ways to elevate privileges through authorization failures. The simplest failure is to not check authorization on every path. More complex for an attacker is taking advantage of buggy authorization checks. Lastly, if a program relies on other programs, configuration files, or datasets being trustworthy, it's important to ensure that permissions are set so that each of those dependencies is properly secured.

Extended Example: STRIDE Threats against Acme-DB

This extended example discusses how STRIDE threats could manifest against the Acme/SQL database described in Chapter 1, "Dive In and Threat Model!" and Chapter 2, "Strategies for Threat Modeling," and shown in Figure 2-1. You'll first look at these threats by STRIDE category, and then examine the
same set according to who can address them.

Spoofing

■ A web client could attempt to log in with random credentials or stolen credentials, as could a SQL client.
■ If you assume that the SQL client is the one you wrote and allow it to make security decisions, then a spoofed (or tampered with) client could bypass security checks.
■ The web client could connect to a false (spoofed) front end, and end up disclosing credentials.
■ A program could pretend to be the database or log analysis program, and try to read data from the various data stores.
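For the threat of a client connecting to a false (spoofed) front end, the standard mitigation is authenticated TLS. As a sketch, Python's ssl module can be configured so the client refuses any server whose certificate does not validate; the two settings below are in fact the library defaults, stated explicitly here so the intent is auditable.

```python
import ssl

def spoof_resistant_context():
    """Build a TLS context that verifies the server's certificate chain
    and hostname, so a client cannot silently talk to a spoofed front end."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = True            # certificate must match the host we asked for
    ctx.verify_mode = ssl.CERT_REQUIRED  # certificate must chain to a trusted root
    return ctx

ctx = spoof_resistant_context()
print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_REQUIRED)  # True True
```

The context would then be used via ctx.wrap_socket(sock, server_hostname=host); disabling either setting reopens the machine-spoofing threats discussed earlier.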
Tampering

■ Someone could also tamper with the data they're sending, or with any of the programs or data files.
■ Someone could tamper with the web or SQL clients. (This is nominally out of scope, as you shouldn't be trusting external entities anyway.)

NOTE These threats, once you consider them, can easily be addressed with operating system permissions. More challenging is what can alter what data within the database. Operating system permissions will only help a little there; the database will need to implement an access control system of some sort.

Repudiation

■ The customers using either SQL or web clients could claim not to have done things. These threats may already be mitigated by the presence of logs and log analysis. So why bother with these threats? They remind you that you need to configure logging to be on, and that you need to log the "right things," which probably include successes and failures of authentication attempts, access attempts, and in particular, the server needs to track attempts by clients to access or change logs.

Information Disclosure

■ The most obvious information disclosure issues occur when confidential information in the database is exposed to the wrong client. This information may be either data (the contents of the salaries table) or metadata (the existence of the termination plans table). The information disclosure may be accidental (failure to set an ACL) or malicious (eavesdropping on the network). Information disclosure may also occur by the front end(s)—for example, an error message like "Can't connect to database foo with password bar!"
■ The database files (partitions, SAN attached storage) need to be protected by the operating system and by ACLs for data within the files.
■ Logs often store confidential information, and therefore need to be protected.

Denial of Service

■ The front ends could be overwhelmed by random or crafted requests, especially if there are anonymous (or free) web accounts that can craft requests designed to be slow to execute.
■ The network connections could be overwhelmed with data.
■ The database or logs could be filled up.
■ If the network between the main processes, or the processes and databases, is shared, it may become congested.
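The error-message leak noted under Information Disclosure ("Can't connect to database foo with password bar!") is usually fixed by splitting the error in two: full detail goes to the server log, while the client sees only a generic message plus a correlation ID. A small illustrative sketch (logger and function names are hypothetical):

```python
import logging
import uuid

log = logging.getLogger("acme_sql")  # hypothetical logger name

def client_safe_error(exc):
    """Log full details server-side; return only a generic message plus a
    correlation ID that support staff can use to find the log entry."""
    ref = uuid.uuid4().hex[:8]
    log.error("error ref=%s detail=%r", ref, exc)  # secrets stay in the log
    return f"Internal error (reference {ref})"

msg = client_safe_error(
    RuntimeError("Can't connect to database foo with password bar!"))
print(msg)  # generic message; no connection string or password
```

The correlation ID preserves debuggability without turning every error path into an information-disclosure channel.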
Elevation of Privilege

■ Clients, either web or SQL, could attempt to run queries they're not authorized to run.
■ If the client is enforcing security, then anyone who tampers with their client or its network stream will be able to run queries of their choice.
■ If the database is capable of running arbitrary commands, then that capability is available to the clients.
■ The log analysis program (or something pretending to be the log analysis program) may be able to run arbitrary commands or queries.

NOTE The log analysis program may be thought of as trusted, but it's drawn outside the trust boundaries. So either the thinking or the diagram (in Figure 2-1) is incorrect.

■ If the DB cluster is connected to a corporate directory service and no action is taken to restrict who can log in to the database servers (or file servers), then anyone in the corporate directory, including perhaps employees, contractors, build labs, and partners, can make changes on those systems.

NOTE The preceding lists in this extended example are intended to be illustrative; other threats may exist.

It is also possible to consider these threats according to the person or team that must address them, divided between Acme and its customers. As shown in Table 3-8, this illustrates the natural overlap of threat and mitigation, foreshadowing Part III, "Managing and Addressing Threats," on how to mitigate threats. It also starts to enumerate things that are not requirements for Acme/SQL. These non-requirements should be documented and provided to customers, as covered in Chapter 12. In this table, you're seeing more and more actionable threats. As a developer or a systems administrator, you can start to see how to handle these sorts of issues. It's tempting to start to address threats in the table itself, and a natural extension to the table would be a set of ways for each actor to address
  • 292. threats in the table itself, and a natural extension to the table would be a set of ways for each actor to address the threats that apply. www.it-ebooks.info http://www.it-ebooks.info/ Chapter 3 ■ STRIDE 77 c03.indd 07:51:54:AM 01/15/2014 Page 77 Table 3-8: Addressing Threats According to Who Handles Them THREAT INSTANCES THAT ACME MUST HANDLE INSTANCES THAT IT DEPARTMENTS MUST HANDLE Spoofi ng Web/SQL/other client brute forcing logins DBA (human) DB users Web client SQL client
  • 293. DBA (human) DB users Tampering Data Management Logs Front end(s) Database DB admin Repudiation Logs (Log analysis must be protected.) Certain actions from web and SQL clients will need careful logging. Certain actions from DBAs will need careful logging. Logs (Log analysis must be protected.) If DBAs are not fully trusted, a system in another privilege domain to log all
  • 294. commands might be required. Information disclosure Data, management, and logs must be protected. Front ends must implement access control. Only the front ends should be able to access the data. ACLs and security groups must be managed. Backups must be protected. Denial of service Front ends must be designed to minimize DoS risks. The system must be deployed with suf- fi cient resources. Continues
  • 295. www.it-ebooks.info http://www.it-ebooks.info/ 78 Part II ■ Finding Threats c03.indd 07:51:54:AM 01/15/2014 Page 78 THREAT INSTANCES THAT ACME MUST HANDLE INSTANCES THAT IT DEPARTMENTS MUST HANDLE Elevation of privilege Trusting client The DB should support prepared statements to make injection harder. No “run this command” tools should be in the default install. No default way to run commands on the server, and calls like exec()and system() must be permis- sioned and confi gurable if they exist.
  • 296. Inappropriately trusting clients that are written locally Confi gure the DB appropriately. STRIDE Variants STRIDE can be a very useful mnemonic when looking for threats, but it’s not perfect. In this section, you’ll learn about variants of STRIDE that may help address some of its weaknesses. STRIDE-per-Element STRIDE-per-element makes STRIDE more prescriptive by observing that certain threats are more prevalent with certain elements of a diagram. For example, a data store is unlikely to spoof another data store (although running code can be confused as to which data store it’s accessing.) By focusing on a set of threats against each element, this approach makes it easier to find
threats. For example, Microsoft uses Table 3-9 as a core part of its Security Development Lifecycle threat modeling training.

Table 3-9: STRIDE-per-Element

ELEMENT | S | T | R | I | D | E
External Entity | x | | x | | |
Process | x | x | x | x | x | x
Data Flow | | x | | x | x |
Data Store | | x | ? | x | x |

Applying this chart, you can focus threat analysis on how an attacker might tamper with, read data from, or prevent access to a data flow. For example, if data
  • 298. is flowing over a network such as Ethernet, it’s trivial for someone attached to that same Ethernet to read all the content, modify it, or send a flood of packets to cause a TCP timeout. You might argue that you have some form of network segmentation, and that may mitigate the threats suffi ciently for you. The ques- tion mark under repudiation indicates that logging data stores are involved in addressing repudiation, and sometimes logs will come under special attack to allow repudiation attacks. The threat is to the element listed in Table 3-9. Each element is the victim, not the perpetrator. Therefore, if you’re tampering with a data store, the threat is to the data store and the data within. If you’re spoofing in a way that affects a process, then the process is the victim. So, spoofing by tampering with the network is really a spoof of the endpoint, regardless of the technical details. In other words, the other endpoint (or endpoints) are confused
  • 299. about what’s at the other end of the connection. The chart focuses on spoofing of a process, not spoofing of the data flow. Of course, if you happen to find spoofing when looking at the data flow, obviously you should record the threat so you can address it, not worry about what sort of threat it is. STRIDE- per-element has the advantage of being prescriptive, helping you identify what to look for where without being a checklist of the form “web component: XSS, XSRF...” In skilled hands, it can be used to find new types of weaknesses in components. In less skilled hands, it can still find many common issues. STRIDE-per-element does have two weaknesses. First, similar issues tend to crop up repeatedly in a given threat model; second, the chart may not represent your issues. In fact, Table 3-9 is somewhat specifi c to Microsoft. The easiest place to see this is “information disclosure by external entity,” which is a good
description of some privacy issues. (It is by no means a complete description of privacy.) However, the table doesn't indicate that this could be a problem. That's because Microsoft has a separate set of processes for analyzing privacy problems. Those privacy processes are outside the security threat modeling space. Therefore, if you're going to adopt this approach, it's worth analyzing whether the table covers the set of issues you care about, and if it doesn't, create a version that suits your scenario. Another place you might see the specificity is that many people want to discuss spoofing of data flows. Should that be part of STRIDE-per-element? The spoofing action is a spoofing of the endpoint, but that description may help some people to look for those threats. Also note that the more "x" marks you add, the closer you come to "consider STRIDE for each element of the diagram." The editors ask if that's a good or bad thing, and it's a fine question. If you want to be comprehensive, this is helpful; if you want to focus on the most likely issues, however, it will likely be a distraction. So what are the exit criteria for STRIDE-per-element? When you have a threat per checkbox in the STRIDE-per-element table, you are doing reasonably well. If you circle around and consider threats against your mitigations (or ways to bypass them) you'll be doing pretty well.

Part II ■ Finding Threats

STRIDE-per-Interaction

STRIDE-per-element is a simplified approach to identifying threats, designed to be easily understood by the beginner. However, in reality, threats don't show up in a vacuum. They show up in the interactions of the system.
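Either chart-driven approach is mechanical at heart: STRIDE-per-element reduces to a lookup from element type to the threat letters worth considering for it. The Python sketch below shows that shape; the letter sets and element names follow the general pattern described here (external entities get S and R, processes all six, data flows and data stores T, I, and D), but they are illustrative, not a transcription of Table 3-9.

```python
# STRIDE-per-element as a lookup table: each DFD element type maps to the
# STRIDE letters worth considering for it. Illustrative content only.
PER_ELEMENT_CHART = {
    "external entity": {"S", "R"},
    "process": {"S", "T", "R", "I", "D", "E"},
    "data flow": {"T", "I", "D"},
    "data store": {"T", "I", "D"},  # consider adding "R" for stores holding logs
}

def threats_to_consider(diagram):
    """Yield (element name, threat letter) pairs to walk through one by one."""
    for name, element_type in diagram:
        for letter in sorted(PER_ELEMENT_CHART[element_type]):
            yield name, letter

# A hypothetical four-element diagram.
diagram = [
    ("browser", "external entity"),
    ("web server", "process"),
    ("commands/responses", "data flow"),
    ("database", "data store"),
]
checklist = list(threats_to_consider(diagram))
```

Walking the resulting checklist is the per-element exit criterion in miniature: one considered threat per (element, letter) pair means each checkbox in the chart has been visited.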
STRIDE-per-interaction is an approach to threat enumeration that considers tuples of (origin, destination, interaction) and enumerates threats against them. Initially, another goal of this approach was to reduce the number of things that a modeler would have to consider, but that didn't work out as planned. STRIDE-per-interaction leads to the same number of threats as STRIDE-per-element, but the threats may be easier to understand with this approach. This approach was developed by Larry Osterman and Douglas MacIver, both of Microsoft. The STRIDE-per-interaction approach is shown in Tables 3-10 and 3-11. Both reference two processes, Contoso.exe and Fabrikam.dll. Table 3-10 shows which threats apply to each interaction, and Table 3-11 shows an example of STRIDE per interaction applied to Figure 3-1. The relationships and trust boundaries used for the named elements in both tables are shown in Figure 3-1.
Figure 3-1: The system referenced in Table 3-10 (elements: browser, database, Contoso.exe, Fabrikam.dll; flows: commands/responses, widgetsCreate(widgets), write results)

Chapter 3 ■ STRIDE

In Table 3-10, the table columns are as follows:

■ A number for referencing a line (For example, "Looking at line 2, let's look for spoofing and information disclosure threats.")
■ The main element you're looking at
■ The interactions that element has
■ The STRIDE threats applicable to the interaction

Table 3-10: STRIDE-per-Interaction: Threat Applicability

#   ELEMENT                          INTERACTION                                               S T R I D E
1   Process (Contoso)                Process has outbound data flow to data store.             x x
2                                    Process sends output to another process.                  x x x x x
3                                    Process sends output to external interactor (code).      x x x x
4                                    Process sends output to external interactor (human).     x
5                                    Process has inbound data flow from data store.            x x x x
6                                    Process has inbound data flow from a process.             x x x x
7                                    Process has inbound data flow from external interactor.  x x x
8   Data Flow (commands/responses)   Crosses machine boundary                                  x x x
9   Data Store (database)            Process has outbound data flow to data store.             x x x x
10                                   Process has inbound data flow from data store.            x x x
11  External Interactor (browser)    External interactor passes input to process.              x x x
12                                   External interactor gets input from process.              x
When you have a threat per checkbox in the STRIDE-per-interaction table, you are doing reasonably well. If you circle through and consider threats against your mitigations (or ways to bypass them) you'll be doing pretty well.
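The same chart-driven enumeration works per interaction: key the lookup on (element, interaction) tuples, and treat "a threat per checkbox" as the exit criterion. A minimal Python sketch follows; the chart rows and letter sets are illustrative stand-ins for Table 3-10, not a transcription of it.

```python
# STRIDE-per-interaction keys the chart on (element, interaction) tuples
# rather than on element type alone. Rows and letters are illustrative.
PER_INTERACTION_CHART = {
    ("process", "sends output to another process"): {"S", "T", "R", "I", "E"},
    ("process", "has inbound data flow from data store"): {"T", "I", "D", "E"},
    ("data flow", "crosses machine boundary"): {"T", "I", "D"},
    ("external entity", "passes input to process"): {"S", "R"},
}

def enumerate_threats(interactions):
    """For each (element, interaction) in the model, list the letters to consider."""
    findings = []
    for element, interaction in interactions:
        for letter in sorted(PER_INTERACTION_CHART.get((element, interaction), set())):
            findings.append((element, interaction, letter))
    return findings

def meets_exit_criteria(findings, interactions):
    """Exit criterion: at least one recorded threat per charted checkbox."""
    return all(
        any(f[:2] == (element, interaction) for f in findings)
        for element, interaction in interactions
        if PER_INTERACTION_CHART.get((element, interaction))
    )

# A hypothetical model with three interactions.
model = [
    ("external entity", "passes input to process"),
    ("process", "sends output to another process"),
    ("data flow", "crosses machine boundary"),
]
found = enumerate_threats(model)
```

The same structure works for per-element enumeration; only the key changes, which is one way to see why both approaches surface the same number of threats.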
STRIDE-per-interaction is too complex to use without a reference chart handy. (In contrast, STRIDE is an easy mnemonic, and STRIDE-per-element is simple enough that the chart can be memorized or printed on a wallet card.)

DESIST

DESIST is a variant of STRIDE created by Gunnar Peterson. DESIST stands for Dispute, Elevation of privilege, Spoofing, Information disclosure, Service denial, and Tampering. (Dispute replaces repudiation with a less fancy word, and Service denial replaces Denial of Service to make the acronym work.) Starting from scratch, it might make sense to use DESIST over STRIDE, but after more than a decade of STRIDE, it would be expensive to displace at Microsoft. (Dana Epp, CEO of Scorpion Software, has pointed out that acronyms with repeated letters can be challenging, a point in STRIDE's favor.) Therefore, STRIDE-per-element, rather than DESIST-per-element, exists as the norm.
Either way, it's always useful to have mnemonics for helping people look for threats.

Exit Criteria

There are three ways to judge whether you're done finding threats with STRIDE. The easiest way is to see if you have a threat of each type in STRIDE. Slightly harder is ensuring you have one threat per element of the diagram. However, both of these criteria will be reached before you've found all threats. For more comprehensiveness, use STRIDE-per-element, and ensure you have one threat per check. Not having met these criteria will tell you that you're not done, but having met them is not a guarantee of completeness.

Summary

STRIDE is a useful mnemonic for finding threats against all sorts of technological systems. STRIDE is more useful with a repertoire of more detailed threats to draw on. The tables of threats can provide that for those who are new to security, or act as reference material for security experts (a function also served by Appendix B, "Threat Trees"). There are variants of STRIDE that attempt to add focus and attention. STRIDE-per-element is very useful for this purpose, and can be customized to your needs. STRIDE-per-interaction provides more focus, but requires a crib sheet (or perhaps software) to use. If threat modeling experts were to start over, perhaps DESIST would help us make better ... progress in finding threats.
Chapter 4: Attack Trees

As Bruce Schneier wrote in his introduction to the subject, "Attack trees provide a formal, methodical way of describing the security of systems, based on varying attacks. Basically, you represent attacks against a system in a tree structure, with the goal as the root node and different ways of achieving that goal as leaf nodes" (Schneier, 1999). In this chapter you'll learn about the attack tree building block as an alternative to STRIDE. You can use attack trees as a way to find threats, as a way to organize threats found with other building blocks, or both. You'll start with how to use an attack tree that's provided to you, and from there learn various ways you can create trees. You'll also examine several example and real attack trees and see how they fit into finding threats. The chapter closes with some additional perspective on attack trees.

Working with Attack Trees

Attack trees work well as a building block for threat enumeration in the four-step framework. They have been presented as a full approach to threat modeling (Salter, 1998), but the threat modeling community has learned a lot since then. There are three ways you can use attack trees to enumerate threats: You can use an attack tree someone else created to help you find threats. You can create
a tree to help you think through threats for a project you're working on. Or you can create trees with the intent that others will use them. Creating new trees for general use is challenging, even for security experts.

Using Attack Trees to Find Threats

If you have an attack tree that is relevant to the system you're building, you can use it to find threats. Once you've modeled your system with a DFD or other diagram, you use an attack tree to analyze it. The attack elicitation task is to iterate over each node in the tree and consider if that issue (or a variant of that issue) impacts your system. You might choose to track either the threats that apply or each interaction. If your system or trees are complex, or if process documentation is important, tracking each interaction may be helpful, but otherwise that tracking may be distracting or tedious. You can use the attack trees in this chapter or in Appendix B "Threat Trees" for this
purpose. If there's no tree that applies to your system, you can either create one, or use a different threat enumeration building block.

Creating New Attack Trees

If there are no attack trees that you can use for your system, you can create a project-specific tree. A project-specific tree is a way to organize your thinking about threats. You may end up with one or more trees, but this section assumes you're putting everything in one tree. The same approach enables you to create trees for a single project or trees for general use. The basic steps to create an attack tree are as follows:

1. Decide on a representation.
2. Create a root node.
3. Create subnodes.
4. Consider completeness.
5. Prune the tree.
6. Check the presentation.

Decide on a Representation

There are AND trees, where the state of a node depends on all of the nodes below it being true, and OR trees, where a node is true if any of its subnodes are true. You need to decide: will your tree be an AND or an OR tree? (Most will be OR trees.) Your tree can be created or presented graphically or as an outline. See the section "Representing a Tree" later in this chapter for more on the various forms of representation.

Create a Root Node

To create an attack tree, start with a root node. The root node can be the component that prompts the analysis, or an adversary's goal. Some
attack trees use the problematic state (rather than the goal) as the root. Which you should use is a matter of preference. If the root node is a component, the subnodes should be labeled with what can go wrong for the node. If the root node is an attacker goal, consider ways to achieve that goal. Each alternative way to achieve the goal should be drawn in as a subnode. The guidance in "Toward a Secure System Engineering Methodology" (Salter, 1999) is helpful to security experts; however, it doesn't shed much light on how to actually generate the trees, offer comparative advice about what a root node should be (in other words, whether it's a goal or a system component and, most important, when one is better than the other), or explain how to evaluate trees in a structured fashion that would be suitable for those who are not security experts. To be prescriptive:

■ Create a root node with an attacker goal or high-impact action.
■ Use OR trees.
■ Draw them into a grid that the eye can track linearly.

Create Subnodes

You can create subnodes by brainstorming, or you can look for a structured way to find more nodes. The relation between your nodes can be AND or OR, and you'll have to make a choice and communicate it to those who are using your tree. Some possible structures for first-level subnodes include:

■ Attacking a system:
  ■ physical access
  ■ subvert software
  ■ subvert a person
■ Attacking a system via:
  ■ People
  ■ Process
  ■ Technology
■ Attacking a product during:
  ■ Design
  ■ Production
  ■ Distribution
  ■ Usage
  ■ Discard

You can use these as a starting point, and make them more specific to your system. Iterate on the trees, adding subnodes as appropriate.

NOTE: Here the term subnode is used to include leaf (end) nodes and nodes with children, because as you create something you may not always know whether it is a leaf or whether it has more branches.

Consider Completeness

For this step, you want to determine whether your set of attack trees is complete
enough. For example, if you are using components, you might need to add additional trees for additional components. You can also look at each node and ask "is there another way that could happen?" If you're using attacker motivations, consider additional attackers or motivations. The lists of attackers in Appendix C "Attacker Lists" can be used as a basis. An attack tree can be checked for quality by iterating over the nodes, looking for additional ways to reach the goal. It may be helpful to use STRIDE, one of the attack libraries in the next chapter, or a literature review to help you check the quality.

Prune the Tree

In this step, go through each node in the tree and consider whether the action in each subnode is prevented or duplicative. (An attack that's worth putting in a tree will generally only be prevented in the context of a project.) If an attack is prevented
by some mitigation, you can mark those nodes to indicate that they don't need to be analyzed. (For example, you can use the test case ID, an "I" for impossible, put a slash through the node, or shade it gray.) Marking the nodes (rather than deleting them) helps people see that the attacks were considered. You might choose to test the assumption that a given node is impossible. See the "Test Process Integration" section in Chapter 10 "Validating That Threats Are Addressed" for more details.

Check the Presentation

Regardless of graphical form, you should aim to present each tree or subtree in no more than a page. If your tree is hard to see on a page, it may be helpful to break it into smaller trees. Each top-level subnode can be the root of a new tree, with a "context" tree that shows the overall relations. You may also be able to adjust presentation details such as font size, within the constraints
of usability. The node labels should be of the same form, focusing on active terms. Finally, draw the tree on a grid to make it easy to track. Ideally, the equivalent-level subnodes will show on a single line. That becomes more challenging as you go deeper into a tree.

Representing a Tree

Trees can be represented in two ways: as a free-form (human-viewable) model without any technical structure, or as a structured representation with variable types and/or metadata to facilitate programmatic analysis.

Human-Viewable Representations

Attack trees can be drawn graphically or shown in outline form. Graphical
representations are a bit more work to create but have more potential to focus attention. In either case, if your nodes are not all related by the same logic (AND/OR), you'll need to decide on a way to represent the relationship and communicate that decision. If your tree is being shown graphically, you'll also want to decide if you use a distinct shape for a terminal node. The labels in a node should be carefully chosen to be rich in information, especially if you're using a graphical tree. Words such as "attack" or "via" can distract from the key information. Choose "modify file" over "attack via modifying file." Words such as "weak" are more helpful when other nodes say "no." So "weak cryptography" is a good contrast to "no cryptography." As always, care should be taken to ensure that the graphics are actually information-rich and communicative. For instance, consider the three representations of a tree shown in Figure 4-1.

Figure 4-1: Three representations of a tree (the same two leaf nodes, Timing Differences and Fictitious Revenue, under an Asset/Revenue Overstatement parent, drawn as plain boxes, as a tree, and as a tree with an OR gate)

The left tree shows an example of a real tree that simply uses boxes. This representation does not clearly distinguish hierarchy, making it hard to tell which
nodes are at the same level of the tree. Compare that to the center tree, which uses a tree to show the equivalence of the leaf nodes. The rightmost tree adds the "OR gate" symbol from circuit design to show that any of the leaf nodes lead to the parent condition. Additionally, tree layout should make considered use of space. In the very small tree in Figure 4-2, note the pleasant grid that helps your eye follow the layout. In contrast, consider the layout of Figure 4-3, which feels jumbled. To focus your attention on the layout, both are shown too small to read.

Figure 4-2: A tree drawn on a grid (the fraud tree also shown as Figure 4-4)

Figure 4-3: A tree drawn without a grid (a repudiation tree with nodes such as "Repudiate message," "Repudiate transaction," "Weak signature system," "Replay attacks," "Weak logging," "Spoofing external entity," and "Tampering threats against logs")

NOTE: In Writing Secure Code 2 (Microsoft Press, 2003), Michael Howard and David LeBlanc suggest the use of dotted lines for unlikely threats, solid lines for likely threats, and circles to show mitigations, although including mitigations may make the trees too complex.
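Whatever the drawing style, the AND/OR semantics can be made explicit by encoding the tree as a small data structure, which also lets pruned (mitigated or impossible) nodes stay in the tree for the record while being excluded from analysis. A Python sketch, using labels borrowed from the repudiation tree of Figure 4-3; the class design itself is illustrative, not from the book.

```python
# A node-based encoding that makes the AND/OR relationship explicit and
# records pruning rather than deleting nodes.
class Node:
    def __init__(self, label, gate="OR", children=None, pruned=False):
        self.label = label
        self.gate = gate            # how children combine: "AND" or "OR"
        self.children = children or []
        self.pruned = pruned        # marked mitigated/impossible, kept for the record

    def feasible(self):
        """A leaf is feasible unless pruned; interior nodes combine children."""
        if self.pruned:
            return False
        if not self.children:
            return True
        results = [child.feasible() for child in self.children]
        return all(results) if self.gate == "AND" else any(results)

tree = Node("Repudiate transaction", gate="OR", children=[
    Node("No logs", pruned=True),   # mitigated: logging is deployed
    Node("Tampering threats against logs"),
    Node("Spoofing external entity", gate="AND", children=[
        Node("Weak authentication"),
        Node("Logging unauthenticated data"),
    ]),
])
```

Evaluating `tree.feasible()` answers whether any unpruned path to the goal remains, which is the analytic payoff of keeping pruned nodes marked rather than deleted.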
Outline representations are easier to create than graphical representations, but they tend to be less attention-grabbing. Ideally, an outline tree is shown on a single page, not crossing pages. The question of how to effectively represent AND/OR is not simple. Some representations leave them out; others include an indicator either before or after a line. The next three samples are modeled after the trees in "Election Operations Assessment Threat Trees" later in this chapter. As you look at them, ask yourself precisely what is needed to achieve the goal in node 1, "Attack voting equipment."

1. Attack voting equipment
   1.1 Gather knowledge
       1.1.1 From insider
       1.1.2 From components
   1.2 Gain insider access
       1.2.1 At voting system vendor
       1.2.2 By illegal insider entry
The preceding excerpt isn't clear. Should the outline be read as a need to do each of these steps, or one or the other, to achieve the goal of attacking voting equipment? Contrast that with the next tree, which is somewhat better:

1. Attack voting equipment
   1.1 Gather knowledge (and)
       1.1.1 From insider (or)
       1.1.2 From components
   1.2 Gain insider access (and)
       1.2.1 At voting system vendor (or)
       1.2.2 By illegal insider entry

This representation is useful at the end nodes: It is clearly 1.1.1 or 1.1.2. But what does the "and" on line 1.1 refer to? 1.1.1 or 1.1.2? The representation is not clear. Another possible form is shown next:
1. Attack voting equipment
   O 1.1 Gather knowledge
       T 1.1.1 From insider
       O 1.1.2 From components
   O 1.2 Gain insider access
       T 1.2.1 At voting system vendor
       T 1.2.2 By illegal insider entry

This is intended to be read as "AND Node: 1: Attack voting equipment, involves 1.1, gather knowledge either from insider or from components AND 1.2, gain insider access . . ." This can be confusing if read as meaning that the children of that node are to be ORed, rather than the node being ORed with its sibling nodes. This is much clearer in the graphical presentation. Also note that the steps are intended to be sequential. You must gather knowledge, then gain insider access, then attack
the components to pull off the attack. As you can see from the preceding examples, the question of how to use an outline representation of a tree is less simple than you might expect. If you are using someone else's tree, be sure you understand their intent. If you are creating a tree, be sure you are clear on your intent, and clear in your communication of that intent.

Structured Representations

Graphical and outline presentations of trees are useful for humans, but a tree is also a data structure, and a structured representation of a tree makes it possible to apply logic to the tree and, in turn, the system you're modeling. Several software packages enable you to create and manage complex trees. One such package allows the modeler to add costs to each node, and then assess what attacks an attacker with a given budget can execute. As your trees become more complex, such software is more likely to be worthwhile. See
Chapter 11 "Threat Modeling Tools" for a list of tree management software.

Example Attack Tree

The following simple example of an attack tree (and a useful component for other attack tree activity) models how an attacker might get into a building. The entire tree is an OR tree; any of the methods listed will achieve the goal. (This tree is derived from "An Attack Tree for the Border Gateway Protocol" [Convery, 2004].)

Goal: Access to the building

1. Go through a door
   a. When it's unlocked:
      i. Get lucky.
      ii. Obstruct the latch plate (the "Watergate Classic").
      iii. Distract the person who locks the door at night.
   b. Drill the lock.
   c. Pick the lock.
   d. Use the key.
      i. Find a key.
      ii. Steal a key.
      iii. Photograph and reproduce the key.
      iv. Social engineer a key from someone.
         1. Borrow the key.
         2. Convince someone to post a photo of their key ring.
   e. Social engineer your way in.
      i. Act like you're authorized and follow someone in.
      ii. Make friends with an authorized person.
      iii. Carry a box, a cup of coffee in each hand, etc.
2. Go through a window.
   a. Break a window.
   b. Lift the window.
3. Go through a wall.
   a. Use a sledgehammer or axe.
   b. Use a truck to go through the wall.
4. Gain access via other means.
   a. Use a fire escape.
   b. Use roof access from a helicopter (preferably black) or adjacent building.
   c. Enter another part of the building, using another tenant's access.

Real Attack Trees

A variety of real attack trees have been published. These trees may be helpful to you either directly, because they model systems like the one you're modeling, or as examples of how to build an attack tree. The three attack trees in this
section show how insiders commit financial fraud, how to attack elections, and threats against SSL. Each of these trees has the nice property of being available now, either as an extended example, as a model for you to build from, or (if you're working around fraud, elections, or SSL) to use directly in analyzing a system that matters to you. The fraud tree is designed for you to use. In contrast, the election trees were developed to help the team think through their threats and organize the possibilities.

Fraud Attack Tree

An attack tree from the Association of Certified Fraud Examiners is shown with their gracious permission in Figure 4-4, and it has a number of good qualities. First, it's derived from actual experience in finding and exposing fraud. Second,
it has a structure put together by subject matter experts, so it's not a random collection of threats. Finally, it has an associated set of mitigations, which are discussed at great length in Joseph Wells' Corporate Fraud Handbook (Wiley, 2011).

Election Operations Assessment Threat Trees

The largest publicly accessible set of threat trees was created for the Elections Assistance Commission by a team centered at the University of South Alabama. There are six high-level trees. They are useful both as an example and for you to use directly, and there are some process lessons you can learn.

NOTE: This model covers a wider scope of attacks than is typical for software threat models, but is scoped like many operational threat models.

1. Attack voting equipment.
2. Perform an insider attack.
3. Subvert the voting process.
4. Experience technical failure.
5. Attack audit.
6. Disrupt operations.
Figure 4-4: The ACFE fraud tree (root branches: Corruption, Asset Misappropriation, and Fraudulent Statements, with subtrees covering schemes such as bribery, conflicts of interest, larceny, skimming, fraudulent disbursements, and asset/revenue overstatements and understatements)

If your system is vulnerable to threats such as equipment attack, insider attack, process subversion, or disruption, these attack trees may work well to help you find threats against those systems. The team created these trees to organize their thinking around
what might go wrong. They described their process as having found a very large set of issues via literature review, brainstorming, and original research. They then broke the threats into high-level sets, and had individuals organize them into trees. An attempt to sort the sets into a tree in a facilitated group process did not work (Yanisac, 2012). The organization of trees may require a single person or a very close-knit team; you should be cautious about trying for consensus trees.

Mind Maps

Application security specialist Ivan Ristić (Ristić, 2009) conducted an interesting experiment using a mind map for what he calls an SSL threat model, as shown in Figure 4-5. This is an interesting way to present a tree. There are very few mind-map trees out there. This tree, like the election trees, shows a set of editorial decisions, and those
who use mind maps may find the following perspective on this mind map helpful:

■ The distinction between "Protocols/Implementation bugs" and "End points/Client side/secure implementation" is unclear.
■ There's "End points/Client side/secure implementation" but no "server side" counterpart to that.
■ Under "End points/server side/server config" there's a large subtree. Compare that to the client side, where there's no subtree at all.
■ Some items have an asterisk (*), but it's unclear what that means. After discussion with Ivan, it turns out that those "may not apply to everyone."
■ There's an entire set of traffic-analytic threats that allow you to see where on a secure site someone is. These issues are made worse by AJAX, but more important here, how should they fit into this mind map? Perhaps under "Protocols/specifications/scope limits"?
■ It's hard to find elements of the map, as it draws the eye in various directions, some of which don't align with the direction of reading.

Perspective on Attack Trees

Attack trees can be a useful way to convey information about threats. They can be helpful even to security experts as a way to quickly consider possible attack types. However, despite their surface appeal, it is very hard to create attack trees.
Figure 4-5: Ristić's SSL mind map (branches cover Protocols, including specifications and implementation bugs; End Points, including server-side configuration, client-side validation, and site implementation; Trust (PKI), including certificate validation bugs and CA certificate attacks; and Users, including usability weaknesses and attacks such as MITM and phishing)
I hope that we'll see experimentation and perhaps progress in the quality of advice. There are also a set of issues that can make trees hard to use, including completeness, scoping, and meaning:

■ Completeness: Without the right set of root nodes, you could miss entire attack groupings. For example, if your threat model for a safe doesn't include "pour liquid nitrogen on the metal, then hit with a hammer," then your safe is unlikely to resist this attack. Drawing a tree encourages specific questions, such as "how could I open the safe without the combination?" It may or may not bring you to the specific threat. Because there's no way of knowing how many nodes a branch should have, you may
never reach that point. A close variant of this is: how do you know that you're done? (Schneier's attack tree article alludes to these problems.)

■ Scoping: It may be unreasonable to consider what happens when the computer's main memory is rapidly cooled and removed from the motherboard. If you write commercial software for word processing, this may seem like an operating system issue. If you create commercial operating systems, it may seem like a hardware issue. The nature of attack trees means many of the issues discovered will fall under the category of "there's no way for us to fix that."

■ Meaning: There is no consistency around AND/OR, or around sequence, which means that understanding a new tree takes longer.

Summary

Attack trees fit well into the four-step framework for threat modeling. They can be a useful tool for finding threats, or a way to organize
  • 440. thinking about threats (either for your project or more broadly). To create a new attack tree to help you organize thinking, you need to decide on a representation, and then select a root node. With that root node, you can brainstorm, use STRIDE, or use a literature review to fi nd threats to add to nodes. As you iterate over the nodes, consider if the tree is complete or overly- full, aiming to ensure the right threats are in the tree. When you’re happy with the content of the tree, you should check the presentation so others can use it. Attack trees can be represented as graphical trees, as outlines, or in software. You saw a sample tree for breaking into a building, and real trees for fraud, elections, and SSL. Each can be used as presented, or as an example for you to consider how to construct trees of your own. www.it-ebooks.info http://www.it-ebooks.info/
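The summary above notes that attack trees can be represented as outlines or in software, and that AND/OR semantics vary between trees. As a rough illustration (not code from this book; the node labels and feasibility flags are invented), a tree with explicit AND/OR nodes might look like this in Python:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One node in an attack tree: a goal plus its subgoals."""
    label: str
    kind: str = "OR"          # "OR": any child suffices; "AND": all children needed
    children: List["Node"] = field(default_factory=list)
    feasible: bool = False    # leaf assessment: can the attacker do this step?

    def evaluate(self) -> bool:
        """A goal is reachable if its leaf is feasible, or its children satisfy AND/OR."""
        if not self.children:
            return self.feasible
        results = [child.evaluate() for child in self.children]
        return all(results) if self.kind == "AND" else any(results)

# An "open the safe" tree, loosely following the chapter's safe example
root = Node("Open the safe", "OR", [
    Node("Learn the combination", "OR", [
        Node("Bribe the owner", feasible=True),
        Node("Find it written down", feasible=False),
    ]),
    Node("Cut the safe open", "AND", [
        Node("Obtain cutting tools", feasible=True),
        Node("Get time alone with the safe", feasible=False),
    ]),
])
print(root.evaluate())  # True: the bribery branch satisfies the OR root
```

Making the AND/OR kind an explicit field on every node is one way to address the "meaning" problem described above: a reader of the tree does not have to guess the semantics of a branch.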
Chapter 5: Attack Libraries

Some practitioners have suggested that STRIDE is too high level, and should be replaced with a more detailed list of what can go wrong. Insofar as STRIDE is abstract, they're right. It could well be useful to have a more detailed list of common problems. A library of attacks can be a useful tool for finding threats against the system you're building. There are a number of ways to construct such a library. You could collect sets of attack tools; either proof-of-concept code or fully developed ("weaponized") exploit code can help you understand the attacks. Such a collection, where no modeling or abstraction has taken place, means that each time you pick up the library, each participant needs to spend time and energy creating a model from the attacks. Therefore, a library that provides that abstraction (and at a more detailed level than STRIDE) could well be useful. In this chapter, you'll learn about several higher-level libraries, including how they compare to checklists and literature reviews, and a bit about the costs and benefits of creating a new one.

Properties of Attack Libraries

As stated earlier, there are a number of ways to construct an attack library, so you probably won't be surprised to learn that selecting one involves trade-offs,
and that different libraries address different goals. The major decisions to be made, either implicitly or explicitly, are as follows:

■ Audience
■ Detail versus abstraction
■ Scope

Audience refers to whom the library targets. Decisions about audience dramatically influence the content and even structure of a library. For example, the "Library of Malware Traffic Patterns" is designed for authors of intrusion detection tools and network operators. Such a library doesn't need to spend much, if any, time explaining how malware works.

The question of detail versus abstraction is about how many details are included in each entry of the library. Detail versus abstraction is, in theory, simple. You pick the level of detail at which your library should deliver, and then make sure it lands there. Closely related is structure, both within entries and between them. Some libraries have very little structure, others have a great deal. Structure between entries helps organize new entries, while structure within an entry helps promote consistency between entries. However, all that structure comes at a cost. Elements that are hard to categorize are inevitable, even when the things being categorized have some form of natural order, such as all descending from the same biological origin. Just ask that egg-laying mammal, the duck-billed platypus. When there is less natural order (so to speak), categorization is even harder. You can conceptualize this as shown in Figure 5-1.

Figure 5-1: Abstraction versus detail (the spectrum runs from abstract to detailed: STRIDE, OWASP Top 10, CAPEC, checklist)

Scope is also an important characteristic of an attack library. If it isn't shown by a network trace, it probably doesn't fit the malware traffic attack library. If it doesn't impact the web, it doesn't make sense to include it in the OWASP attack libraries.

There's probably more than one sweet spot for libraries. They are a balance of listing detailed threats while still being thought provoking. The thought-provoking nature of a library is important for good threat modeling. A thought-provoking list means that some of the engineers using it will find interesting and different threats. When the list of threats reaches a certain level of granularity, it stops prompting thinking, risks being tedious to apply, and becomes more and more of a checklist.

The library should contain something to help remind people using it that
it is not a complete enumeration of what could go wrong. The precise form of that reminder will depend on the form of the library. For example, in Elevation of Privilege, it is the ace card(s), giving an extra point for a threat not in the game. Closely related to attack libraries are checklists and literature reviews, so before examining the available libraries, the following sections look at checklists and literature reviews.

Libraries and Checklists

Checklists are tremendously useful tools for preventing certain classes of problems. If a short list of problems is routinely missed for some reason, then a checklist can help you ensure they don't recur. Checklists must be concise and actionable. Many security professionals are skeptical, however, of "checklist security" as a substitute for careful consideration of threats. If you hate the very idea of checklists, you should read The Checklist Manifesto by Atul Gawande. You might be surprised by how enjoyable a read it is. But even if you take a big-tent approach to threat modeling, that doesn't mean checklists can replace the work of trained people using their judgment. A checklist helps people avoid common problems, but the modeling of threats has already been done when the checklist is created. Therefore, a checklist can help you avoid whatever set of problems the checklist creators included, but it is unlikely to help you think about security. In other words, using a checklist won't help you find any threats not on the list. It is thus narrower than threat modeling. Because checklists can still be useful as part of a larger threat modeling process, you can find a collection of them at the end of Chapter 1, "Dive In and Threat Model!" and throughout this book as appropriate.

The Elevation of Privilege game, by the way, is somewhat similar to a checklist. Two things distinguish it. The first is the use of aces to elicit new threats. The second is that by making threat modeling into a game, players are given social permission to playfully analyze a system, to step beyond the checklist, and to engage with the security questions in play. The game implicitly abandons the "stop and check in" value that a checklist provides.

Libraries and Literature Reviews

A literature review is roughly consulting the library to learn what has happened in the past. As you saw in Chapter 2, "Strategies for Threat Modeling," reviewing threats to systems similar to yours is a helpful starting point in threat modeling. If you write up the input and output of such a review, you may have the start of
an attack library that you can reuse later. It will be more like an attack library if you abstract the attacks in some way, but you may defer that to the second or third time you review the attack list.

Developing a new library requires a very large time investment, which is probably part of why there are so few of them. However, another reason might be the lack of prescriptive advice about how to do so. If you want to develop a literature review into a library, you need to consider how the various attacks are similar and how they differ. One model you can use for this is a zoo. A zoo is a grouping, whether of animals, malware, attacks, or other things, that taxonomists can use to test their ideas for categorization. To track your zoo of attacks, you can use whatever form suits you. Common choices include a wiki, or a Word or Excel document. The main criteria are ease of use and a space for each entry to contain enough concrete detail to allow an analyst to dig in. As you add items to such a zoo, consider which are similar, and how to group them. Be aware that all such categorizations have tricky cases, which sometimes require reorganization to reflect new ways of thinking about them. If your categorization technique is intended to be used by multiple independent people, and you want what's called "inter-rater consistency," then you need to work on a technique to achieve that. One such technique is to create a flowchart, with specific questions from stage to stage. Such a flowchart can help produce consistency. The work of grouping and regrouping can be a considerable and ongoing investment. If you're going to create a new library, consider spending some time first researching the history and philosophy of taxonomies. Books like Sorting Things Out: Classification and Its Consequences (Bowker, 2000) can help.
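To make the "zoo" idea concrete, here is a minimal sketch of structured attack records grouped by a tentative category. This is an invented illustration, not a format the chapter prescribes; the field names and example attacks are placeholders:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class AttackRecord:
    """One entry in the zoo: enough concrete detail for an analyst to dig in."""
    name: str
    category: str    # tentative grouping; expect reorganization over time
    summary: str
    source: str      # where the attack was seen or written up

zoo = [
    AttackRecord("SQL injection via login form", "injection",
                 "Attacker supplies SQL in a username field", "incident report"),
    AttackRecord("Session fixation", "authentication",
                 "Attacker sets a victim's session ID before login", "conference talk"),
    AttackRecord("LDAP injection", "injection",
                 "Unescaped input alters an LDAP query", "pentest finding"),
]

# Group records to see which tentative categories are filling up,
# and which may need to be split or merged during regrouping
by_category = defaultdict(list)
for rec in zoo:
    by_category[rec.category].append(rec.name)

for category, names in sorted(by_category.items()):
    print(f"{category}: {len(names)}")
```

The value of this kind of structure is simply that each new attack forces an explicit categorization decision, which is where the tricky cases (and the need for reorganization) surface.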
CAPEC

The CAPEC is MITRE's Common Attack Pattern Enumeration and Classification. As of this writing, it is a highly structured set of 476 attack patterns, organized into 15 groups:

■ Data Leakage
■ Resource Depletion
■ Injection (Injecting Control Plane content through the Data Plane)
■ Spoofing
■ Time and State Attacks
■ Abuse of Functionality
■ Probabilistic Techniques
■ Exploitation of Authentication
■ Exploitation of Privilege/Trust
■ Data Structure Attacks
■ Resource Manipulation
■ Network Reconnaissance
■ Social Engineering Attacks
■ Physical Security Attacks
■ Supply Chain Attacks

Each of these groups contains a sub-enumeration, which is available via MITRE (2013b). Each pattern includes a description of its completeness, with values ranging from "hook" to "complete." A complete entry includes the following:

■ Typical severity
■ A description, including:
■ Summary
■ Attack execution flow
■ Prerequisites
■ Method(s) of attack
■ Examples
■ Attacker skills or knowledge required
■ Resources required
■ Probing techniques
■ Indicators/warnings of attack
■ Solutions and mitigations
■ Attack motivation/consequences
■ Vector
■ Payload
■ Relevant security requirements, principles, and guidance
■ Technical context
■ A variety of bookkeeping fields (identifier, related attack patterns and vulnerabilities, change history, etc.)

An example CAPEC is shown in Figure 5-2. You can use this very structured set of information for threat modeling in a few ways. For instance, you could review a system being built against either each CAPEC entry or the 15 CAPEC categories. Reviewing against the individual entries is a large task, however; if a reviewer averages five minutes for each of the 475 entries, that's a full 40 hours of work. Another way to use this information is to train people about the breadth of threats. Using this approach, it would be possible to create a training class, probably taking a day or more.

Exit Criteria

The appropriate exit criteria for using CAPEC depend on the mode in which you're using it. If you are performing a category review, then you should have at least one issue per categories 1-11 (Data Leakage, Resource Depletion, Injection, Spoofing, Time and State, Abuse of Functionality, Probabilistic Techniques, Exploitation of Authentication, Exploitation of Privilege/Trust, Data Structure Attacks, and Resource Manipulation) and possibly one for categories 12-15 (Network Reconnaissance, Social Engineering, Physical Security, Supply Chain).

Perspective on CAPEC

Each CAPEC entry includes an assessment of its completion, which is a nice touch. CAPECs include a variety of sections, and its scope differs from STRIDE in ways that can be challenging to unravel. (This is neither a criticism of CAPEC, which existed before this book, nor a suggestion that CAPEC change.) The impressive size and scope of CAPEC may make it intimidating for people to jump in. At the same time, that specificity may make it easier to use for someone who's just getting started in security, where specificity helps to identify attacks. For those who are more experienced, the specificity and apparent completeness of CAPEC may result in less creative thinking. I personally find that CAPEC's impressive size and scope make it hard for me to wrap my head around it.

CAPEC is a classification of common attacks, whereas STRIDE is a set of security properties. This leads to an interesting contrast. CAPEC, as a set of attacks, is a richer elicitation technique. However, when it comes to addressing the CAPEC attacks, the resultant techniques are far more complex. The STRIDE defenses are simply those approaches that preserve the property. However, looking up defenses is simpler than finding the attacks. As such, CAPEC may have more promise than STRIDE for many populations of threat modelers. It would be fascinating to see efforts made to improve CAPEC's usability, perhaps with cheat sheets, mnemonics, or software tools.
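A category-level review against the exit criteria above can be tracked mechanically. The sketch below is hypothetical (the findings are placeholders); it records findings per category and flags the required categories that are still empty:

```python
# The 15 CAPEC category names, as listed in this chapter
CAPEC_CATEGORIES = [
    "Data Leakage", "Resource Depletion", "Injection", "Spoofing",
    "Time and State", "Abuse of Functionality", "Probabilistic Techniques",
    "Exploitation of Authentication", "Exploitation of Privilege/Trust",
    "Data Structure Attacks", "Resource Manipulation",
    "Network Reconnaissance", "Social Engineering", "Physical Security",
    "Supply Chain",
]

findings = {category: [] for category in CAPEC_CATEGORIES}
# Placeholder findings from an imagined review session
findings["Injection"].append("Search box reflects unescaped input")
findings["Spoofing"].append("No mutual authentication on the admin channel")

# Exit criterion from the chapter: at least one issue in categories 1-11
must_cover = CAPEC_CATEGORIES[:11]
uncovered = [category for category in must_cover if not findings[category]]
print(f"{len(uncovered)} of {len(must_cover)} required categories still uncovered")
```

This kind of tally doesn't judge the quality of the findings, of course; it only makes visible which parts of the review haven't produced anything yet.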
Figure 5-2: A sample CAPEC entry

OWASP Top Ten

OWASP, the Open Web Application Security Project, offers a Top Ten Risks list each year. In 2013, the list was as follows:

■ Injection
■ Broken Authentication and Session Management
■ Cross-Site Scripting
■ Insecure Direct Object References
■ Security Misconfiguration
■ Sensitive Data Exposure
■ Missing Function Level Access Control
■ Cross-Site Request Forgery
■ Components with Known Vulnerabilities
■ Unvalidated Redirects and Forwards

This is an interesting list from the perspective of the threat modeler. The list is a good length, and many of these attacks seem well balanced in terms of attack detail and power to provoke thought. A few (cross-site scripting and cross-site request forgery) seem overly specific with respect to threat modeling. They may be better as input into test planning. Each has backing information, including threat agents, attack vectors, security weaknesses, technical and business impacts, as well as details covering whether you are vulnerable to the attack and how you prevent it.

To the extent that what you're building is a web project, the OWASP Top Ten list is probably a good adjunct to STRIDE. OWASP updates the Top Ten list each year based on the input of its volunteer membership. Over time, the list may be more or less valuable as a threat modeling attack library. The OWASP Top Ten are incorporated into a number of OWASP-suggested methodologies for web security. Turning the Top Ten into a threat modeling methodology would likely involve creating something like a STRIDE-per-element approach (Top Ten per Element?) or looking for risks in the list at each point where a data flow has crossed a trust boundary.

Summary

By providing more specifics, attack libraries may be useful to those who are not deeply familiar with the ways attackers work. It is challenging to find generally useful sweet spots between providing lots of details and becoming tedious. It is also challenging to balance details with the threat of fooling a reader into thinking a checklist is comprehensive. Performing a literature review and capturing the details in an attack library is a good way for someone to increase their knowledge of security. There are a number of attack libraries available, including CAPEC and the OWASP Top Ten. Other libraries may also provide value depending on the technology or system on which you're working.
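The "Top Ten per Element" idea mentioned in this chapter could be sketched as crossing each data flow that crosses a trust boundary with the Top Ten list, producing a worklist of questions to consider. The data flows below are invented for illustration:

```python
OWASP_TOP_TEN_2013 = [
    "Injection", "Broken Authentication and Session Management",
    "Cross-Site Scripting", "Insecure Direct Object References",
    "Security Misconfiguration", "Sensitive Data Exposure",
    "Missing Function Level Access Control", "Cross-Site Request Forgery",
    "Components with Known Vulnerabilities", "Unvalidated Redirects and Forwards",
]

# Hypothetical data flows: (source, destination, crosses a trust boundary?)
flows = [
    ("browser", "web front end", True),
    ("web front end", "database", True),
    ("web front end", "local cache", False),
]

# Only boundary-crossing flows get the full per-flow review
worklist = [
    (f"{src} -> {dst}", risk)
    for src, dst, crosses_boundary in flows if crosses_boundary
    for risk in OWASP_TOP_TEN_2013
]
print(len(worklist))  # 2 boundary-crossing flows x 10 risks = 20 questions
```

The same tension discussed in the summary applies here: a mechanical cross-product keeps the review systematic, but at 10 questions per flow it can quickly become tedious on a large model.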
Chapter 6: Privacy Tools

Threat modeling for privacy issues is an emergent and important area. Much like security threats violate a required security property, privacy threats are where a required privacy property is violated. Defining privacy requirements is a delicate balancing act, however, for a few reasons. First, the organization offering a service may want or even need a lot of information that the people using the service don't want to provide. Second, people have very different perceptions of what privacy is, and what data is private, and those perceptions can change with time. (For example, someone leaving an abusive relationship should be newly sensitive to the value of location privacy, and perhaps consider their address private for the first time.) Lastly, most people are "privacy pragmatists" and will make value tradeoffs for personal information.

Some people take all of this ambiguity to mean that engineering for privacy is a waste. They're wrong. Others assert that concern over privacy is a waste, as consumers don't behave in ways that expose privacy concerns. That's also wrong. People often pay for privacy when they understand the threat and the mitigation. That's why advertisements for curtains, mailboxes, and other privacy-enhancing technologies often lead with the word "privacy."

Unlike the previous three chapters, each of which focused on a single type of tool, this chapter is an assemblage of tools for finding privacy threats. The approaches described in this chapter are more developed than "worry about privacy," yet they are somewhat less developed than security attack libraries such as CAPEC (discussed in Chapter 5, "Attack Libraries"). In either event, they are important enough to include. Because this is an emergent area, appropriate exit criteria are less clear, so there are no exit criteria sections here.

In this chapter, you'll learn about ways to threat model for privacy, including Solove's taxonomy of privacy harms, the IETF's "Privacy Considerations for Internet Protocols," privacy impact assessments (PIAs), the nymity slider, contextual integrity, and the LINDDUN approach, a mirror of STRIDE created to find privacy threats. It may be reasonable to treat one or more of contextual integrity, Solove's taxonomy, or (a subset of) LINDDUN as a building block that can snap into the four-stage model, either replacing or complementing the security threat discovery.

NOTE: Many of these techniques are easier to execute when threat modeling operational systems, rather than boxed software. (Will your database be used to contain medical records? Hard to say!) The IETF process is more applicable than other processes to "boxed software" designs.

Solove's Taxonomy of Privacy

In his book Understanding Privacy (Harvard University Press, 2008), George Washington University law professor Daniel Solove puts forth a taxonomy of privacy harms. These harms are analogous to threats in many ways, but also include impact. Despite Solove's clear writing, the descriptions might be most helpful to those with some background in privacy, and challenging for technologists to apply to their systems. It may be possible to use the taxonomy as
a tool, applying it to a system under development, considering whether each of the harms presented is enabled. The following list presents a version of this taxonomy derived from Solove, but with two changes. First, I have added "identifier creation," in parentheses. I believe that the creation of an identifier is a discrete harm because it enables so many of the other harms in the taxonomy. (Professor Solove and I have agreed to disagree on this issue.) Second, exposure is in brackets, because those using the other threat modeling techniques in this Part should already be handling such threats.

■ (Identifier creation)
■ Information collection: surveillance, interrogation
■ Information processing: aggregation, identification, insecurity, secondary use, exclusion
■ Information dissemination: breach of confidentiality, disclosure, increased accessibility, blackmail, appropriation, distortion, [exposure]
■ Invasion: intrusion, decisional interference

Many of the elements of this list are self-explanatory, and all are explained in depth in Solove's book. A few may benefit from a brief discussion. The harm of surveillance is twofold: first is the uncomfortable feeling of being watched, and second are the behavioral changes it may cause. Identification means the association of information with a flesh-and-blood person. Insecurity refers to the psychological state of a person made to feel insecure, rather than a technical state. The harm of secondary use of information relates to societal trust. Exclusion is the use of information provided to exclude the provider (or others) from some benefit.

Solove's taxonomy is most usable by privacy experts, in the same way that STRIDE as a mnemonic is most useful for security experts. To make use of it in threat modeling, the steps include creating a model of the data flows, paying particular attention to personal data. Finding these harms may be possible in parallel to or replacing security threat modeling. Below is advice on where and how to focus looking for these:

■ Identifier creation should be reasonably easy for a developer to identify.
■ Surveillance is where data is collected about a broad swath of people, or where that data is gathered in a way that's hard for a person to notice.
■ Interrogation risks tend to correlate around data collection points, for example, the many "* required" fields on web forms. The tendency to lie on such forms may be seen as a response to the interrogation harm.
■ Aggregation is most frequently associated with inbound data flows from external entities.
■ Identification is likely to be found in conjunction with aggregation, or where your system has in-person interaction.
■ Insecurity may associate with places where data is brought together for decision purposes.
■ Secondary use may cross trust boundaries, possibly including boundaries that your customers expect to exist.
■ Exclusion happens at decision points, and often fraud management decisions.
■ Information dissemination threats (all of them) are likely to be associated with outbound data flows; you should look for them where data crosses trust boundaries.
■ Intrusion is an in-person intrusion; if your system has no such features, you may not need to look at these.
■ Decisional interference is largely focused on ways in which information collection and processing may influence decisions, and as such it most likely plays into a requirements discussion.

Privacy Considerations for Internet Protocols

The Internet Engineering Task Force (IETF) requires consideration of security threats, and has a process to threat model focused on their organizational needs, as discussed in Chapter 17, "Bringing Threat Modeling to Your Organization." As of 2013, they sometimes require consideration of privacy
threats. An informational RFC, "Privacy Considerations for Internet Protocols," outlines a set of security-privacy threats and a set of pure privacy threats, and offers a set of mitigations and some general guidelines for protocol designers (Cooper, 2013). The combined security-privacy threats are as follows:

■ Surveillance
■ Stored data compromise
■ Mis-attribution
■ Intrusion (in the sense of unsolicited messages and denial-of-service attacks, rather than break-ins)

The privacy-specific threats are as follows:

■ Correlation
■ Identification
■ Secondary use
■ Disclosure
■ Exclusion (users are unaware of the data that others may be collecting)

Each is considered in detail in the RFC. The set of mitigations includes data minimization, anonymity, pseudonymity, identity confidentiality, user participation, and security. While somewhat specific to the design of network protocols, the document is clear, free, and likely a useful tool for those attempting to threat model privacy. The model, in terms of the abstracted threats and methods to address them, is an interesting step forward, and is designed to be helpful to protocol engineers.

Privacy Impact Assessments (PIA)

As outlined by Australian privacy expert Roger Clarke in his "An Evaluation of Privacy Impact Assessment Guidance Documents," a PIA "is a systematic process that identifies and evaluates, from the perspectives of all stakeholders, the potential effects on privacy of a project, initiative, or proposed system or scheme, and includes a search for ways to avoid or mitigate negative privacy impacts." Thus, a PIA is, in several important respects, a privacy analog to security
threat modeling. Those respects include the systematic tools for identification and evaluation of privacy issues, and the goal of not simply identifying issues, but also mitigating them. However, as usually presented, PIAs have too much integration between their steps to snap into the four-stage framework used in this book.

There are also important differences between PIAs and threat modeling. PIAs are often focused on a system as situated in a social context, and the evaluation is often of a less technical nature than security threat modeling. Clarke's evaluation criteria include things such as the status, discoverability, and applicability of the PIA guidance document; the identification of a responsible person; and the role of an oversight agency; all of which would often be considered out of scope for threat modeling. (This is not a critique, but simply a contrast.) One sample PIA guideline from the Office of the Victorian Privacy Commissioner states the following: "Your PIA Report might have a Table of Contents that looks something like this:

1. Description of the project
2. Description of the data flows
3. Analysis against 'the' Information Privacy Principles
4. Analysis against the other dimensions to privacy
5. Analysis of the privacy control environment
6. Findings and recommendations"

Note that step 2, "description of the data flows," is highly reminiscent of "data flow diagrams," while steps 3 and 4 are very similar to the "threat finding" building blocks. Therefore, this approach might be highly complementary to the four-step model of threat modeling.
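One lightweight way to work with a report outline like the Victorian Commissioner's is to track which sections have been drafted. The sketch below is purely illustrative and not part of any PIA guideline; the loyalty-card example and status values are invented:

```python
# Section names paraphrased from the sample PIA report outline
PIA_SECTIONS = [
    "Description of the project",
    "Description of the data flows",
    "Analysis against the Information Privacy Principles",
    "Analysis against the other dimensions to privacy",
    "Analysis of the privacy control environment",
    "Findings and recommendations",
]

# Hypothetical report-in-progress: section -> drafted text (None = not started)
report = {section: None for section in PIA_SECTIONS}
report["Description of the project"] = "A loyalty-card scheme for a grocery chain."
report["Description of the data flows"] = "Card swipe to POS to analytics database."

missing = [section for section in PIA_SECTIONS if report[section] is None]
print(f"{len(missing)} of {len(PIA_SECTIONS)} sections still to draft")
```

Note that the two drafted sections here correspond to the parts of a PIA that most resemble threat modeling's model-the-system step; the analysis sections are where the privacy-specific work remains.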
The appropriate privacy principles or other dimensions to consider are somewhat dependent on jurisdiction, but they can also focus on classes of intrusion, such as those offered by Solove, or a list of concerns such as informational, bodily, territorial, communications, and locational privacy. Some of these documents, such as those from the Office of the Victorian Privacy Commissioner (2009a), have extensive lists of common privacy threats that can be used to support a guided brainstorming approach, even if the documents are not legally required. Privacy impact assessments that are performed to comply with a law will often have a formal structure for assessing sufficiency.
The Nymity Slider and the Privacy Ratchet

University of Waterloo professor Ian Goldberg has defined a measurement he calls nymity, the "amount of information about the identity of the participants that is revealed [in a transaction]." Nymity is from the Latin for name, from which anonymous ("without a name") and pseudonym ("like a name") are derived. Goldberg has pointed out that you can graph nymity on a continuum (Goldberg, 2000). Figure 6-1 shows the nymity slider. On the left-hand side, there is less privacy than on the right-hand side. As Goldberg points out, it is easy to move towards more nymity, and extremely difficult to move away from it. For example, there are protocols for electronic cash that have most of the privacy-preserving properties of physical cash, but if you deliver it over a TCP connection you lose many of those properties. As such, the nymity slider can be used to examine how privacy-threatening a protocol is, and to compare the amount of nymity a system uses. To the extent that it can be designed to use less identifying information, other privacy features will be easier to achieve.

Figure 6-1: The nymity slider (a continuum from verinymity, such as government ID, credit card number, and address; through persistent pseudonyms, such as pen names; and linkable anonymity, such as prepaid phone cards; to unlinkable anonymity, such as cash payments)

When using nymity in threat modeling, the goal is to measure how much information a protocol, system, or design exposes or gathers. This enables you to compare it to other possible protocols, systems, or designs. The nymity
slider is thus an adjunct to other threat-finding building blocks, not a replacement for them.

Closely related to nymity is the idea of linkability. Linkability is the ability to bring two records together, combining the data in each into a single record or virtual record. Consider several databases, one containing movie preferences, another containing book purchases, and a third containing telephone records. If each contains an e-mail address, you can learn that [email protected] likes religious movies, that he's bought books on poison, and that several of the people he talks with are known religious extremists. Such intersections might be of interest to the FBI, and it's a good thing you can link them all together! (Unfortunately, no one bothered to include the professional database showing he's a doctor, but that's beside the point!) The key is that you've engaged in linking several datasets based on an identifier. There is a set of identifiers, including e-mail addresses, phone numbers, and government-issued ID numbers, that are often used to link data, and that can be considered strong evidence that multiple records refer to the same person. The presence of these strongly linkable data points increases linkability threats.

Linkability as a concept relates closely to Solove's concepts of identification and aggregation. Linkability can be seen as a spectrum, from strongly linkable with multiple validated identifiers, to weakly linkable based on similarities in the data ("John Doe and John E. Doe are probably the same person"). As data becomes richer, the threat of linkage increases, even if the strongly linkable data points are removed. For example, Harvard professor Latanya Sweeney has shown how data with only date of birth, gender, and zip code uniquely identifies 87
percent of the U.S. population (Sweeney, 2002). There is an emergent scientific research stream into "re-identification" or "de-anonymization," which discloses more such results on a regular basis. The release of anonymized datasets carries a real threat of re-identification, as AOL, Netflix, and others have discovered (McCullagh, 2006; Narayanan, 2008; Buley, 2010).

Contextual Integrity

Contextual integrity is a framework put forward by New York University professor Helen Nissenbaum. It is based on the insight that many privacy issues occur when information is taken from one context and brought into another. A context is a term of art with a deep grounding in discussions of the spheres, or arenas, of our lives. A context has associated roles, activities, norms, and values. Nissenbaum's approach focuses on understanding contexts and changes to those contexts. This section draws very heavily from Chapter 7 of her book Privacy in Context (Stanford Univ. Press, 2009) to explain how you might apply the framework to product development.

Start by considering what a context is. If you look at a hospital as a context, then the roles might include doctors, patients, and nurses, but also family members, administrators, and a host of other roles. Each has a reason for being in a hospital, and associated with that reason are activities that they tend to perform there, norms of behavior, and values associated with those norms and activities. Contexts are places or social areas such as restaurants, hospitals, work, the Boy Scouts, and schools (or a type of school, or even a specific school). An event can be "in a work context" even if it takes place somewhere other than your normal office. Any instance in which there is a defined or expected set of "normal" behaviors can be treated as a context.

Contexts nest and overlap. For example, normal behavior in a church in the United States is influenced by the norms within the United States, as well as the narrower context of the parishioners. Thus, what is normal at a Catholic Church in Boston or a Baptist Revival in Mississippi may be inappropriate at a Unitarian Congregation in San Francisco (or vice versa). Similarly, there are shared roles across all schools, those of student or teacher, and more specific roles as you specify an elementary school versus a university. There are specific contexts within a university, or even within the particular departments of a university.

Contextual integrity is violated when the informational norms of a context are breached. Norms, in Nissenbaum's sense, are "characterized by four key parameters: context, actors, attributes, and transmission principles." Context is roughly as just described. Actors are senders, recipients, and information subjects. Attributes refer to the nature of the information: for example, the nature or particulars of a disease from which someone is suffering. A transmission principle is "a constraint on the flow (distribution, dissemination, transmission) of information from party to party." Nissenbaum first provides two presentations
of contextual integrity, followed by an augmented contextual integrity heuristic. As the technique is new, and the "augmented" approach is not a strict superset of the initial presentation, it may help you to see both.

Contextual Integrity Decision Heuristic

Nissenbaum first presents contextual integrity as a post-incident analytic tool. The essence of this is to document the context as follows:

1. Establish the prevailing context.
2. Establish key actors.
3. Ascertain what attributes are affected.
4. Establish changes in principles of transmission.
5. Red flag.

Step 5 means "if the new practice generates changes in actors, attributes, or transmission principles, the practice is flagged as violating entrenched informational norms and constitutes a prima facie violation of contextual integrity."

You might have noticed a set of interesting potential overlaps with software development and threat modeling methodologies. In particular, actors overlap fairly strongly with personas, in Cooper's sense of personas (discussed in Appendix B, "Threat Trees"). A contextual integrity analysis probably does not require a set of personas for bad actors, as any data flow outside the intended participants (and perhaps some between them) is a violation. The information transmissions and the associated attributes are likely visible in data flow or swim lane diagrams developed for normal security threat modeling. Thus, to the extent that threat models are being enhanced from version to version, a set of change types could be used to trigger contextual integrity analysis. The extant diagram is the "prevailing context." The important change types would include the addition of new human entities or new data flows. Nissenbaum takes pains to explore the question of whether a violation of contextual integrity is a worthwhile reason to avoid the change.
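The red-flag step of this heuristic lends itself to a short sketch: model a context's norms as sets of actors, attributes, and transmission principles, then flag any change between the prevailing context and a new practice. This is an illustrative sketch only, not code from the book; the hospital example and all names in it are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    # Informational norms of a context, per Nissenbaum's parameters
    # (the context itself is implied by the instance).
    actors: frozenset               # senders, recipients, information subjects
    attributes: frozenset           # kinds of information that may flow
    transmission_principles: frozenset  # constraints on those flows

def red_flags(prevailing: Context, new_practice: Context) -> dict:
    # Step 5: any change in actors, attributes, or transmission
    # principles is a prima facie violation of contextual integrity.
    return {
        "new_actors": new_practice.actors - prevailing.actors,
        "new_attributes": new_practice.attributes - prevailing.attributes,
        "changed_principles": new_practice.transmission_principles
                              ^ prevailing.transmission_principles,
    }

# Hypothetical hospital context, and the same context after a product
# change adds an analytics partner to the data flows.
hospital = Context(
    actors=frozenset({"doctor", "nurse", "patient"}),
    attributes=frozenset({"diagnosis", "treatment"}),
    transmission_principles=frozenset({"confidential", "need-to-know"}),
)
with_analytics = Context(
    actors=frozenset({"doctor", "nurse", "patient", "analytics-partner"}),
    attributes=frozenset({"diagnosis", "treatment", "browsing-history"}),
    transmission_principles=frozenset({"need-to-know"}),
)

flags = red_flags(hospital, with_analytics)
violation = any(flags.values())  # True: the new practice is red-flagged
```

Note that this only automates the prima facie check; the evaluation steps of the fuller heuristic below remain human judgment.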
From the perspective of threat elicitation, such discussions are out of scope. Of course, they are in scope as you decide what to do with the identified privacy threats.

Augmented Contextual Integrity Heuristic

Nissenbaum also presents a longer, "augmented" heuristic, which is more prescriptive about steps, and may work better to predict privacy issues:

1. Describe the new practice in terms of information flows.
2. Identify the prevailing context.
3. Identify information subjects, senders, and recipients.
4. Identify transmission principles.
5. Locate applicable norms; identify significant changes.
6. Prima facie assessment.
7. Evaluation:
   a. Consider moral and political factors.
   b. Identify threats to autonomy and freedom.
   c. Identify effects on power structures.
   d. Identify implications for justice, fairness, equality, social hierarchy, democracy, and so on.
8. Evaluation 2:
   a. Ask how the system directly impinges on the values, goals, and ends of the context.
   b. Consider moral and ethical factors in light of the context.
9. Decide.

This is, perhaps obviously, not an afternoon's work. However, in considering how to tie this to a software engineering process, you should note that steps 1, 3, and 4 look very much like creating data flow diagrams. The context of most organizations is unlikely to change substantially, and thus descriptions of the context may be reusable, as may be the work products to support the evaluations of steps 7 and 8.

Perspective on Contextual Integrity

I very much like contextual integrity. It strikes me as providing deep insight into and explanations for a great number of privacy problems. That is, it may be possible to use it to predict privacy problems for products under design. However, that's an untested hypothesis. One area of concern is that the effort to spell out
all the aspects of a context may be quite time consuming, but without spelling out all the aspects, the privacy threats may be missed. This sort of work is challenging when you're trying to ship software, and Nissenbaum goes so far as to describe it as "tedious" (Privacy in Context, page 142). Additionally, the act of fixing a context in software or structured definitions may present risks that the fixed representation will deviate as social norms evolve. This presents a somewhat complex challenge to the idea of using contextual integrity as a threat modeling methodology within a software engineering process. The process of creating taxonomies or categories is an essential step in structuring data in a database. Software engineers do it as a matter of course as they develop software, and even those who are deeply cognizant of taxonomies often treat it as an implicit step. These taxonomies can thus restrict the evolution of a context, or worse, generate dissonance between the software-engineered version of the context and the evolving social context. I encourage security and privacy experts to grapple with these issues.

LINDDUN

LINDDUN is a mnemonic developed by Mina Deng for her PhD at the Katholieke Universiteit Leuven in Belgium (Deng, 2010). LINDDUN is an explicit mirroring
of STRIDE-per-element threat modeling. It stands for the following violations of privacy properties:

■ Linkability
■ Identifiability
■ Non-Repudiation
■ Detectability
■ Disclosure of information
■ Content Unawareness
■ Policy and consent Noncompliance

LINDDUN is presented as a complete approach to threat modeling, with a process, threats, and a requirements discovery method. It may be reasonable to use the LINDDUN threats, or a derivative, as a tool for privacy threat enumeration in the four-stage framework, snapping it either in place of or next to STRIDE security threat enumeration. However, the threats in LINDDUN use somewhat unusual terminology; therefore, the training requirements may be higher, or the learning curve steeper, than for other privacy approaches.

NOTE LINDDUN leaves your author deeply conflicted. The privacy terminology it relies on will be challenging for many readers. However, it is, in many ways, one of the most serious and thought-provoking approaches to privacy threat modeling, and those seriously interested in privacy threat modeling should take a look. As an aside, the tension between non-repudiation as a privacy threat and repudiation as a security threat is delicious.

Summary

Privacy is no less important to society than security. People will usually act to protect their privacy given an understanding of the threats and how they can address them. As such, it may help you to look for privacy threats in addition to security threats. The ways to do so are less prescriptive than ways to look for security threats.

There are many tools you can use to find privacy issues, including Solove's taxonomy of privacy harms. (A harm is a threat with its impact.) Solove's taxonomy helps you understand the harm associated with a privacy violation, and thus, perhaps, how best to prioritize it. The IETF has an approach to privacy threats for new Internet protocols. That approach may complement or substitute for Privacy Impact Assessments. PIAs and the IETF's processes are appropriate when a regulatory or protocol design context calls for their use. Both are more prescriptive than the nymity slider, a tool for assessing the amount of personal information in a system and measuring privacy invasion for comparative purposes. They are also more prescriptive than contextual integrity, an approach which attempts to tease out the social norms of privacy. If your goal is to identify when a design is likely to raise privacy concerns, however, then contextual integrity may be the most helpful. Far more closely related to STRIDE-style threat identification is LINDDUN, which considers privacy violations in the manner that STRIDE considers security violations.
Part III
Managing and Addressing Threats

Part III is all about managing threats and the activities involved in threat modeling. While threats themselves are at the heart of threat modeling, the reason you threat model is so that you can deliver more secure products, services, or technologies. This part of the book focuses on the third step in the four-step framework: what to do after you've found threats and need to do something about them; but it also covers the final step: validation.

Chapters in this part include the following:

■ Chapter 7: Processing and Managing Threats describes how to start a threat modeling project, how to iterate across threats, the tables and lists you may want to use, and some scenario-specific process elements.
■ Chapter 8: Defensive Tactics and Technologies are tools you can use to address threats, ranging from simple to complex. This chapter focuses on a STRIDE breakdown of security threats and a variety of ways to address privacy.
■ Chapter 9: Trade-Offs When Addressing Threats includes risk management strategies, how to use those strategies to select mitigations, and threat-modeling specific prioritization approaches.
■ Chapter 10: Validating That Threats Are Addressed includes how to test your threat mitigations, QA'ing threat modeling, and process aspects of addressing threats. This is the last step of the four-step approach.
■ Chapter 11: Threat Modeling Tools covers the various tools that you can use to help you threat model, ranging from the generic to the specific.

Chapter 7
Processing and Managing Threats

Finding threats against arbitrary things is fun, but when you're building something with many moving parts, you need to know where to start, and how to approach it. While Part II is about the tasks you perform and the methodologies you can use to perform them, this chapter is about the processes in which those tasks are performed. Questions of "what to do when" naturally come up as you move from the specifics of looking at a particular element of a system to looking at a complete system. To the extent that these are questions of what an individual or small team does, they are addressed in this chapter; questions about what an organization does are covered in Chapter 17, "Bringing Threat Modeling to Your Organization." Each of the approaches covered here should work with any of the "Lego blocks" covered in Part II. In this chapter, you'll learn how to get started looking for threats, including when and where to start and how to iterate through a diagram. The chapter continues with a set of tables and lists that you might use as you threat model, and ends with a set of scenario-specific guidelines, including the importance of the vendor-customer trust boundary, threat modeling new technologies, and how to threat model an API.
Starting the Threat Modeling Project

The basic approach of "draw a diagram and use the Elevation of Privilege game to find threats" is functional, but people prefer different amounts of prescriptiveness, so this section provides some additional structure that may help you get started.

When to Threat Model

You can threat model at various times during a process, with each choice having a different value. Most important, you should threat model as you get started on a project. The act of drawing trust boundaries early on can greatly help you improve your architecture. You can also threat model as you work through features. This allows you to have smaller, more focused threat modeling projects, keeping your skills sharp and reducing the chance that you'll find big problems at the end. It is also a good idea to revisit the threat model as you get ready to deliver, to ensure that you haven't made decisions which accidentally altered the reality underlying the model.

Starting at the Beginning

Threat modeling as you get started involves modeling the system you're planning or building, finding threats against the model, and filing bugs that you'll track and manage like other issues discovered throughout the development process. Some of those bugs may be test cases, some might be feature work, and some may be deployment decisions. It depends on what you're threat modeling.

Working through Features

As you develop each feature, there may be a small amount of threat modeling work to do. That work involves looking deeply at the threats to that feature (and possibly refreshing or validating your understanding of the context by checking the software model). As you start work on a feature or component, it can also be a good time to work through second- or third-order threats. These are the threats in which an attacker will try to bypass the features or design elements that you put in place to block the most immediate threats. For example, if the primary threat is a car thief breaking a window, a secondary threat is the thief jumping the ignition. You can mitigate that with a steering-wheel lock, which is thus a second-order mitigation. There's more on this concept of ordered threats in the "Digging Deeper into Mitigations" section later in this chapter, as well as more on considering planned mitigations, and how an attacker might work around them.

Threat modeling as you work through a feature has several important value propositions. One is that if you do a small threat model as you start a component or feature, the design is probably closer to mind. In other words, you'll have a
more detailed model with which to look for threats. Another is that if you find threats, they are closer to mind as you're working on that feature. Threat modeling as you work through features can also help you maintain your awareness of threats and your skills at threat modeling. (This is especially true if your project is a long one.)

Close to Delivery

Lastly, you should threat model as you get ready to ship by reexamining the model and checking your bugs. (Shipping here is inclusive of delivering, deploying, or going live.) Reexamining the model means ensuring that everyone still agrees it's a decent model of what you're building, and that it includes all the trust boundaries and data flows that cross them. Checking your bugs involves checking each bug that's tagged threat modeling (or however else you're tracking them), and ensuring it didn't slip through the cracks.

Time Management

So how long should all this take? The answer to that varies according to system size and complexity, the familiarity of the participants with the system, their skill in threat modeling, and even the culture of meetings in an organization. Some very rough rules of thumb are that you should be able to diagram and find threats against a "component," and decide if you need to do more enumeration, in a one-hour session with an experienced threat modeler to moderate or help. For the sort of system that a small start-up might build, the end-to-end threat modeling could take a few hours to a week, or possibly longer if the data the system holds is particularly sensitive. At the larger end of the spectrum, a project to diagram the data flows of a large online service has been known to require four people working for several months. That level of effort was required to help find threat variations and alternate routes through a system that had grown to serve millions of people.

Whatever you're modeling, familiarity with threat modeling helps. If you need to refer back to this book every few minutes, your progress will be slower. One of the reasons to threat model regularly is to build skill and familiarity with the tasks and techniques. Organizational culture also plays a part. Organizations that run meetings with nowhere to sit will likely create a list of threats faster
than a consensus-oriented organization that encourages exploring ideas. (Which list will be better is a separate and fascinating question.)

What to Start and (Plan to) End With

When you start looking for threats, a diagram is something between useful and essential input. Experienced modelers may be able to start without it, but will likely iterate through creating a diagram as they find threats. The diagram is likely to change as you use it to find threats; you'll discover things you missed or don't need. That's normal, and unless you run a strict waterfall approach to engineering, it's a process that evolves much like the way requirements evolve as you discover what's easy or hard to build.

Use the following two testable states to help assess when you're done:

■ You have filed bugs.
■ You have a diagram or diagrams that everyone agrees represents the system.

To be more specific, you should probably have a number of bugs that's roughly scaled to the number of things in your diagram. If you're using a data flow diagram and STRIDE, expect to have about five threats per diagram element.

NOTE Originally, this text suggested that you should have (# of processes × 6) + (# of data flows × 3) + (# of data stores × 3.5) + (# of distinct external entities × 2) threats, but that requires keeping four separate counts, and is thus more work to get approximately the same answer. You might notice that this says "STRIDE" rather than "STRIDE-per-element" or "STRIDE-per-interaction," and five turns out to match the number you get if you tally up the checkmarks in those charts. That's because those charts are derived from where the threats usually show up.

Where to Start

When you are staring at a blank whiteboard and wondering where to start, there are several commonly recommended places. Many people have recommended assets or attackers, but as you learned in Chapter 2, "Strategies for Threat Modeling," the best starting place is a diagram that covers the system as a whole, and from there start looking at the trust boundaries. For advice on how to create diagrams, see Chapter 2.

When you assemble a group of people in a room to look for threats, you should include people who know about the software, the data flows, and (if possible) threat modeling. Begin the process with the component(s) on which the participants in the room are working. You'll want to start top-down, and
then work across the system, going "breadth first," rather than delving deep into any component ("depth first").

Finding Threats Top Down

Almost any system should be modeled from the highest-level view you can build of the entire system, for some appropriate value of "the entire system." What constitutes an entire system is, of course, up for debate; just as what constitutes the entire Internet, or the entirety of (say) Amazon's website, isn't a simple question. In such cases more scoping is needed. The ideal is probably what is within an organization's control and, to the extent possible, cross-reviews with those responsible for other components.

In contrast, bottom-up threat modeling starts from features, and then attempts to derive a coherent model from those feature-level models. This doesn't work well, but advice that implies you should do this is common, so a bit of discussion may be helpful. The reason this doesn't work is that it turns out to be very challenging to bring threat models together when they are not derived from a system-level view. As such, you should start from the highest-level view you can build of the entire system.

It may help to understand the sorts of issues that can lead to a bottom-up approach. At Microsoft in the mid-2000s, there was an explosion of bottom-up threat modeling. There were three drivers for this: specific words in the Security Development Lifecycle (SDL) threat model requirement, aspects of Microsoft's approach to function teams, and the work involved in creating top-level models. The SDL required that "all new features" be threat modeled. This intersected with an approach to features whereby a particular team of developer, tester, and program manager owns a feature and collaborates to ship it. Because the team owned its feature, it was natural to ask it to add threat models to the specifications involved in producing it. As Microsoft's approach to security evolved, product security teams had diverse sets of important tasks to undertake. Creating all-up threat models was usually not near the top of the list. (Many large product teams have now done that work and found it worthwhile. Some of these diagrams require more than one poster-size sheet of paper.)

Finding Threats "Across"

Even with a top-down approach, you want to go breadth first, and there are
three different lists you can iterate "across": a list of the trust boundaries, a list of diagram elements, or a list of threats. A structure can help you look for threats, either because you and your team like structure, or because the task feels intimidating and you want to break it down. Table 7-1 shows three approaches.

Table 7-1: Lists to Iterate Across

Method: Start from what goes across trust boundaries.
Sample statement: "What can go wrong as foo comes across this trust boundary?"
Comments: This is likely to identify the highest-value threats.

Method: Iterate across diagram elements.
Sample statement: "What can go wrong with this database file?" "What can go wrong with the logs?"
Comments: Focusing on diagram elements may work well when a lot of teams are collaborating.

Method: Iterate across the threats.
Sample statement: "Where are the spoofing threats in this diagram?" "Where are the tampering threats?"
Comments: Making threats the focus of discussion may help you find related threats.
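The iteration strategies in Table 7-1 amount to nested loops: an outer loop over one of the lists (here, diagram elements, visiting boundary-crossing flows first) and an inner loop over the STRIDE threat categories. The following is a minimal sketch, not from the book; the element names are hypothetical, and real tooling would record the answers as bugs rather than just generating prompts.

```python
# The six STRIDE threat categories, used as the inner loop.
STRIDE = [
    "Spoofing", "Tampering", "Repudiation", "Information disclosure",
    "Denial of service", "Elevation of privilege",
]

# Hypothetical diagram elements; flows that cross a trust boundary
# are marked so they can be visited first, since threats cluster there.
elements = [
    {"name": "web client",       "type": "external entity"},
    {"name": "app server",       "type": "process"},
    {"name": "database file",    "type": "data store"},
    {"name": "client-to-server", "type": "data flow", "crosses_boundary": True},
]

def enumerate_questions(elements):
    """Yield one 'what can go wrong?' prompt per (element, threat) pair,
    breadth-first, starting with the boundary-crossing elements."""
    ordered = sorted(elements, key=lambda e: not e.get("crosses_boundary", False))
    for element in ordered:
        for threat in STRIDE:
            yield f"{threat} against {element['name']} ({element['type']})?"

questions = list(enumerate_questions(elements))
# 4 elements x 6 threats = 24 prompts, boundary-crossing flow first.
```

Swapping the loops, so the outer loop runs over STRIDE, gives the "iterate across the threats" method from the table; the point is only that the structure keeps the walk systematic.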
Each of these approaches can be a fine way to start, as long as you don't let them become straitjackets. If you don't have a preference, try starting from what crosses trust boundaries, as threats tend to cluster there. However you iterate, ensure that you capture each threat as it comes up, regardless of the planned approach.

Digging Deeper into Mitigations

Many times threats will be mitigated by the addition of features, which can be designed, developed, tested, and delivered much like other features. (Other times, mitigation might be a configuration change, or, at the other end of the effort scale, require redesign.) However, mitigations are not quite like other features: an attacker will rarely try to work around the bold button and find an unintended, unsupported way to bold their text. Finding threats is great, and to the extent that you plan to be attacked only by people who are exactly lazy enough to find a threat but not enthusiastic enough to try to bypass your mitigation, you don't need to worry about going deeper into the mitigations. (You may have to worry about a new job, but that, as they say, is beyond the scope of this book. I recommend Mike Murray's Forget the Parachute: Let Me Fly the Plane.) In this section, you'll learn how to go deeper into the interplay of how attackers can attempt to bypass the design choices and features you put in place to make their lives harder.

The Order of Mitigation

Going back to the example of threat modeling a home from the introduction, it's easy and fun for security experts to focus on how attackers could defeat the security system by cutting the alarm wire. If you consider the window to
be the attack surface, then threats include someone smashing through it and someone opening it. The smashing is addressed by reinforced glass, which is thus a "first-order" mitigation. The smashing threat is also addressed by an alarm, which is a second-order defense. But, oh no! Alarms can be defeated by cutting power. To address that third-level threat, the system designer can add more defenses. For example, alarm systems can include an alert if the line is ever dropped. Therefore, the defender can add a battery, or perhaps a cell phone or some other radio. (See how much fun this is?) These multiple layers, or orders, of attack and defense are shown in Table 7-2.

NOTE If you become obsessed with the window-smashing threat and forget to put a lock on the window, or you never discover that there's a door key under the mat, you have unaddressed problems, and are likely mis-investing your resources.

Table 7-2: Threat and Mitigation "Orders" or "Layers"

Order | Threat           | Mitigation
1st   | Window smashing  | Reinforced glass
2nd   | Window smashing  | Alarm
3rd   | Cut alarm wire   | Heartbeat
4th   | Fake heartbeat   | Cryptographic signal integrity

Threat modeling should usually proceed from attack surfaces, and ensure that all first-order threats are mitigated before attention is paid to the second-order threats. Even if a methodology is in place to ensure that the full range of first-order threats is addressed, a team may run out of time to follow the methodology. Therefore, you should find threats breadth-first.

Playing Chess

It's also important to think about what an attacker will do next, given your mitigations. Maybe that means following the path to faking a heartbeat on the alarm wire, but maybe the attacker will find that door key, or maybe they'll move on to another victim. Don't think of attacks and mitigations as static. There's a
dynamic interplay between them, usually driven by attackers. You might think of threats and mitigations like the black and white pieces on a chess board. The attacker can move, and when they move, their relationship to other pieces can change. As you design a mitigation, ask what an attacker could do once you deliver that mitigation. How could they work around it? (This is subtly different from asking what they will do. As Nobel Prize-winning physicist Niels Bohr said, "Prediction is very difficult, especially about the future.")

Generally, attackers look for the weakest link they can easily find. As you consider your threats and where to go into depth, start with the weakest links. This is an area where experience, including a repertoire of real scenarios, can be very helpful. If you don't have that repertoire, a literature review can help, as you saw in Chapter 2. This interplay is a place where artful threat modeling and clever redesign can make a huge difference.

Believing that an attacker will stop because you put a mitigation in place is optimistic, or perhaps naive would be a better word. What happens several moves ahead can be important. (Attackers are tricky like that.) This differs from thinking through threat ordering in that your attacker will likely move to the "next easiest" attack available. That is, an attacker doesn't need to stick to the attack you're planning for or coding against at the moment, but rather can go anywhere in your system. The attacker gets to choose where to go, so you need to defend everywhere. This isn't really fair, but no one promised you fair.

Prioritizing

You might feel that the advice in this section about the layers of mitigations and the suggestion to find threats across first is somewhat contradictory. Which should you do first: consider the chess game, or cover everything? Covering breadth first is probably wise. As you manage the bugs and select ways to mitigate threats, you can consider the chess game and deeper threat variants. However, it's important to cover both. The unfortunate reality is that attackers with enough interest in your technology will try to find places where you didn't have enough time to investigate or build defenses. Good requirements, along with their interplay with threats and mitigations, can help you create a target that is consistently
  • 545. hard to attack. Running from the Bear There’s an old joke about Alice and Bob hiking in the woods when they come across an angry bear. Alice takes off running, while Bob pauses to put on some running shoes. Alice stops and says, “What the heck are you doing?” www.it-ebooks.info http://www.it-ebooks.info/ Chapter 7 ■ Processing and Managing Threats 133 c07.indd 09:44:3:AM 01/09/2014 Page 133 Bob looks at her and replies, “I don’t need to outrun the bear, I just need to
  • 546. outrun you.” OK, it’s not a very good joke. But it is a good metaphor for bad threat model- ing. You shouldn’t assume that there’s exactly one bear out there in the woods. There are a lot of people out there actively researching new vulnerabilities, and they publish information about not only the vulnerabilities they fi nd, but also about their tools and techniques. Thus, more vulnerabilities are being found more effi ciently than in past years. Therefore, not only are there multiple bears, but they have conferences in which they discuss techniques for eating both Alice and Bob for lunch.
Worse, many of today's attacks are largely automated, and can be scaled up nearly infinitely. It's as if the bears have machine guns. Lastly, it will get far worse as the rise of social networking empowers automated social engineering attacks (just consider the possibilities when attackers start applying modern behavior-based advertising to the malware distribution business). If your iteration ends with "we just have to run faster than the next target," you may well be ending your analysis early.

Tracking with Tables and Lists

Threat modeling can lead you to generate a lot of information, and good tracking mechanisms can make a big difference. Discovering what works best for you may require a bit of experimentation. This section lays out some sample lists and sample entries in such lists. These are all intended as advice, not straitjackets. If you regularly find something that you're writing on the side, give yourself a way to track it.

Tracking Threats

The first type of table to consider is one that tracks threats. There are (at least) three major ways to organize such a table: by diagram element (see Table 7-3), by threat type (see Table 7-4), or by order of discovery (see Table 7-5). Each of these tables uses these methods to examine threats against the super-simple diagram from Chapter 1, "Dive In and Threat Model!", reprised here in Figure 7-1.

If you organize by diagram element, the column headers are Diagram Element, Threat Type, Threat, and Bug ID/Title. You can check your work by validating that the expected threats are all present in the table. For example, if you're using STRIDE-per-element, then you should have at least one tampering, information disclosure, and denial-of-service threat for each data flow. An example table for use in iterating over diagram elements is shown in Table 7-3.
Figure 7-1: The diagram considered in threat tables. (The diagram shows a web browser connecting to a web server, which connects to business logic and a database inside a corporate data center, plus offsite web storage; the data flows are numbered 1 through 7.)

Table 7-3: A Table of Threats, Indexed by Diagram Element (excerpt)

Diagram Element                       | Threat Type            | Threat                                               | Bug ID and Title
Data flow (4) web server to Biz Logic | Tampering              | Add orders without payment checks                    | 4553 "need integrity controls on channel"
Data flow (4) web server to Biz Logic | Information disclosure | Payment instruments in the clear                     | 4554 "need crypto" #PCI #p0
Data flow (4) web server to Biz Logic | Denial of service      | Can we just accept all these inside the data center? | 4555 "Check with Alice in IT if these are acceptable"

To check the completeness of Table 7-3, confirm that each element has at least one threat. If you're using STRIDE-per-element, each process should have six threats, each data flow three, each external entity two, and each data store three (or four, if the data store is a log).

You can also organize a table by threats, and use threats as what's being iterated "across." If you do that, you end up with a table like the one shown in Table 7-4.

Table 7-4: A Table of Threats, Organized by Threat (excerpt)

Threat Type | Diagram Element                       | Threat                                                    | Bug ID and Title
Tampering   | Web browser (1)                       | Attacker modifies our JavaScript order checker.           | 4556 "Add order-checking logic to the server"
Tampering   | Data flow (2) from browser to server  | Failure to use HTTPS*                                     | 4557 "Build unit tests to ensure that there are no HTTP listeners for endpoints to these data flows"
Tampering   | Web server                            | Someone who tampers with the web server can attack our customers. | 4558 "Ensure all changes to server code are pulled from source control so changes are accountable"
Tampering   | Web server                            | Web server can add items to orders.                       | 4559 "Investigate moving controls to Biz Logic, which is less accessible to attackers"

* The entry for "failure to use HTTPS" is an example that illustrates how knowledge of the scenario and mitigating techniques can lead you to jump to a fix, rather than perhaps tediously working through the threats. Be careful that when you (literally) jump to a conclusion like this, you're not missing other threats.

NOTE You may have noticed that Table 7-4 has two entries for the web server; this is totally fine. You shouldn't let having something in a space prevent you from finding more threats against that thing, or recording additional threats that you discover.

The last way to formulate a table is by order of discovery, as shown in Table 7-5. This is the easiest to fill out, as the next threat is simply added to the next line. It is, however, the hardest to validate, because the threats will be "jumbled together."

Table 7-5: Threats by Order of Discovery (excerpt)

Threat Type | Diagram Element | Threat                                          | Bug ID
Tampering   | Web browser (1) | Attacker modifies the JavaScript order checker. | 4556 "Add order-checking logic to the server"

With three variations, the obvious question is which one should you use? If you're new to threat modeling and need structure, then either of the first two forms helps you organize your approach. The third is more natural, but requires checking at the end; as you become more comfortable threat modeling, jumping around will become natural, and therefore the third type of table will become more useful.

Making Assumptions

The key reason to track assumptions as you discover them is so that you can follow up and ensure that you're not assuming your way into a problem. To do that, you should track the following:

■ The assumption
■ The impact if it's wrong
■ Who can tell you if it's wrong
■ Who's going to follow up
■ When they need to do so
■ A bug for tracking
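These fields can live in whatever bug tracker or spreadsheet you already use. As a minimal illustrative sketch (the class and field names here are ours, not from any particular tool), one assumption record might be captured like this:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One row in a table of assumptions (field names are illustrative)."""
    assumption: str          # the assumption itself
    impact_if_wrong: str     # what happens if it turns out to be false
    who_to_talk_to: str      # who can tell you if it's wrong
    who_is_following_up: str # who owns the follow-up
    by_date: str             # when they need to do so
    bug_id: int              # the tracking bug

# Example drawn from the text's sample entry
a = Assumption(
    assumption="It's OK to ignore denial of service within the data center.",
    impact_if_wrong="Unhandled vulnerabilities",
    who_to_talk_to="Alice",
    who_is_following_up="Bob",
    by_date="April 15",
    bug_id=4555,
)
```

The point is not the data structure but that each assumption carries an owner, a deadline, and a bug, so it can't silently become a hole in the model.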
Table 7-6 shows an example entry for such a table of assumptions.

Table 7-6: A Table for Recording Assumptions

Assumption                                                    | Impact if Wrong           | Who to Talk To | Who's Following Up | By Date  | Bug #
It's OK to ignore denial of service within the data center.   | Unhandled vulnerabilities | Alice          | Bob                | April 15 | 4555

External Security Notes

Many of the documented Microsoft threat-modeling approaches have a section for "external security notes." That name frames these notes with respect to the threat model. That is, they're notes for those outside the threat discovery process in some way, and they'll probably emerge or crystallize as you look for threats. Therefore, like tracking threats and assumptions, you want to track external security notes. You can be clearer by framing the notes in terms of two sets of audiences: your customers and those calling your APIs. One of the most illustrative forms of these notes appears in the IETF "RFC Security Considerations" section, so you'll get a brief tour of those here.

Notes for Customers

Security notes that are designed for your customers or the people using your system are generally of the form "we can't fix problem X." Not being able to fix problem X may be acceptable, and it's more likely to be acceptable if it's not a surprise; for example, "This product is not designed to defend against the system administrator." This sort of note is better framed as "non-requirements," and they are discussed at length in Chapter 12, "Requirements Cookbook."
Notes for API Callers

The right design for an API involves many trade-offs, including utility, usability, and security. Threat modeling that leads to security notes for your callers can serve two functions. First, those notes can help you understand the security implications of your design decisions before you finalize those decisions. Second, they can help your customers understand those implications. These notes address the question "What does someone calling your API need to do to use it in a secure way?" The notes help those callers know what threats you address (what security checks you perform), and thus you can tell them about a subset of the checks they'll need to perform. (If your security depends on not telling them about threats, then you have a problem; see the section on Kerckhoffs's Principles in Chapter 16, "Threats to Cryptosystems.")

Notes to API callers are generally one of the following types:

■ Our threat model is [description]. That is, the things you worry about are... This should hopefully be obvious to readers of this book.

■ Our code will only accept input that looks like [some description]. What you'll accept is simply that: a description of what validation you'll perform, and thus what inputs you would reject. This is, at a surface level, at odds with the Internet robustness principle of "be conservative in what you send, and liberal in what you accept"; but being liberal does not require foolishness. Ideally, this description is also matched by a set of unit tests.

■ Common mistakes in using our code include [description]. This is a set of things you know callers should or should not do, and they are two sides of the same coin:

  ■ We do not validate this property that callers might expect us to validate. In other words, what should callers check for themselves in their context, especially when they might expect you to have done something?

  ■ Common mistakes that our code doesn't or can't address. In other words, if you regularly see bug reports (or security issues) because your callers are not doing something, consider treating that as a design flaw and fixing it.

For an example of callers having trouble with an API, consider the strcpy function. According to the manual pages included with various flavors of unix, "strcpy does not validate that s2 will fit buffer s1," or "strcpy requires that s1 be of at least length (s2) + 1." These examples are carefully chosen, because as the fine manual continues, "Avoid using strcat." (You should now use SafeStr* on Windows, strL* on unix.) The manual says this because, although notes about safer use were eventually added, the function was simply too hard to use correctly, and no amount of notes to callers was going to overcome that. If your notes to those calling your API boil down to "it is impossible to use this API without jumping through error-prone hoops," then your API is going to need to change (or deliver some outstanding business value).
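The "document what you check" advice can live in the API itself. As an illustrative sketch (the function, its limits, and its checks are invented here, not taken from any real API), security notes for callers can sit in the docstring right next to the validation they describe:

```python
import json

def store_order(order_json: str) -> int:
    """Store an order and return its ID.

    Security notes for callers (illustrative):
    - We validate that order_json is well-formed JSON under 64 KB and
      that every quantity is a positive integer.
    - We do NOT validate that prices match your catalog; callers must
      check prices against their own source of truth.
    """
    if len(order_json) > 64 * 1024:
        raise ValueError("order too large")
    order = json.loads(order_json)  # raises ValueError on malformed input
    if not all(isinstance(q, int) and q > 0
               for q in order.get("quantities", [])):
        raise ValueError("quantities must be positive integers")
    return 1  # a real implementation would persist the order and return its new ID
```

Keeping the notes and the checks together makes it harder for the documentation and the behavior to drift apart, and the stated input description is easy to back with unit tests, as the bullet above suggests.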
Sometimes, however, it's appropriate to use these sorts of notes to API callers. For example: "This API validates that the SignedDataBlob you pass in is signed by a valid root CA. You will still need to ensure that the OrganizationName field matches the name in the URL, as you do not pass us the URL." That's a reasonable note, because the blob might not be associated with a URL. It might also be reasonable to have a ValidateSignatureURL() API call.

RFC Security Considerations

IETF RFCs are a form of external security notes, so it's worth looking at them as an evolved example of what such notes might contain (Rescorla, 2003). If you need a more structured form of note, this framework is a good place to start. The security considerations of a modern RFC include discussion of the following:

■ What is in scope
■ What is out of scope, and why
■ Foreseeable threats that the protocol is susceptible to
■ Residual risk to users or operators
■ Threats the protocol protects against
■ Assumptions which underlie security

Scope is reasonably obvious, and the RFC contains interesting discussion about not arbitrarily defining either foreseeable threats or residual risk as out of scope. The point about residual risk is similar to non-requirements, as covered in Chapter 12. It discusses those things that the protocol designers can't address at their level.

Scenario-Specific Elements of Threat Modeling

There are a few scenarios where the same issues with threat modeling show up again and again. These scenarios include issues with customer/vendor boundaries, threat modeling new technologies, and threat modeling an API (which is broader than just writing up external security notes). The customer/vendor trust boundary is dropped with unfortunate regularity, and how to approach an API or new technology often appears intimidating. The following sections address each scenario separately.

Customer/Vendor Trust Boundary

It is easy to assume that because someone is running your code, they trust you, and/or you can trust them. This can lead to things like Acme engineers saying, "Acme.com isn't really an external entity..." While this may be true, it may also be wrong. Your customers may have carefully audited the code they received from you. They may believe that your organization is generally trustworthy without wanting to expose their secrets to you. You holding those secrets is a different security posture. For example, if you hold backups of their cryptographic keys, they may be subject to an information disclosure threat via a subpoena or other legal demand that you can't reveal to them. Good security design involves minimizing risk by appropriate design and enforcement of the customer/vendor trust boundary.

This applies to traditional programs such as installed software packages, and it also applies to the web. Believing that a web browser is faithfully executing the code you sent it is optimistic. The other end of an HTTPS connection might not even be a browser. If it is a browser, an attacker may have modified your JavaScript, or be altering the data sent back to you via a proxy. It is important to pay attention to the trust boundary once your code has left your trust context.

New Technologies

From mobile to cloud to industrial control systems to the emergent "Internet of Things," technologists are constantly creating new and exciting technologies. Sometimes these technologies genuinely involve new threat categories. More often, the same threats manifest themselves. Models of threats that are intended to elicit or organize thinking by skilled threat modelers (such as STRIDE in its mnemonic form) can help in threat modeling these new technologies. Such models of threats enable skilled practitioners to find many of the threats that can occur even as the new technologies are being imagined. As your threat elicitation technique moves from the abstract to the detailed, changes in the details of both the technologies and the threats may inhibit your ability to apply the technique.

From a threat-modeling perspective, the most important thing that designers of new technologies can do is clearly define and communicate the trust relationships by drawing their trust boundaries. What's essential is not just identification, but also communication. For example, in the early web, the trust model was roughly as shown in Figure 7-2.

Figure 7-2: The early web threat model. (The diagram shows a web browser with its browser context and a web server with its server context, with content from origins 1 and 2 displayed in the browser.)

In that model, servers and clients both ran code, and what passed between them via HTTP was purely data in the form of HTML and images. (As a model, this is simplified; early web browsers supported more than HTTP, including gopher and FTP.) However, the boundaries were clearly drawable. As the web evolved and web developers pushed the boundary of what was possible with the browser's built-in functionality, a variety of ways for the server to run code on the client were added, including JavaScript, Java, ActiveX, and Flash. This was an active transformation of the security model, which now looks more like Figure 7-3.

Figure 7-3: The evolved web threat model. (As Figure 7-2, but with active content such as JavaScript, Java, and Flash from each origin now running inside the browser context.)

In this model, content from the server has dramatically more access to the browser, leading to two new categories of threats. One is intra-page threats, whereby code from different servers can attack other information in the browser. The other is escape threats, whereby code from the server can find a way to influence what's happening on the client computer. In and of themselves, these changes are neither good nor bad. The new technologies created a dramatic transformation of what's possible on the web for both web developers and web attackers. The transformation of the web would probably have been accomplished with more security if boundaries had been clearly identified. Thus, those using new technology will get substantial security benefits from its designers defining, communicating, and maintaining clear trust boundaries.

Threat Modeling an API

API threat models are generally very similar. Each API has a low trust side, regardless of whether it is called from a program running on the local machine or called by anonymous parties on the far side of the Internet. On a local machine, the low trust side is more often clear: The program running as a normal user is running at a low trust level compared to the kernel. (It may also be running at the same trust level as other code running with the same user ID, but with the introduction of things like AppContainer on Windows and the Mac OS sandbox, two programs running with the same UID may not be fully equivalent. Each should treat the other as being untrusted.) In situations where there's a clear "lower trust" side, that unprivileged code has little to do, except ensure that data is validated for the context in which that data will be used.
If it really is at a lower trust level, then it will have a hard time defending itself from a malicious kernel, and should generally not try. This applies only to relationships like that of a program to a kernel, where there's a defined hierarchy of privilege. Across the Internet, each side must treat the other as untrusted. The "high trust" side of the API needs to do the following seven things for security:

■ Perform all security checks inside the trust boundary. A system has a trust boundary because the other side is untrusted or less trusted. To allow the less trusted side to perform security checks for you is missing the point of having a boundary. It is often useful to test input before sending it (for example, a user might fill out a long web form, miss something, and then get back an error message). However, that's a usability feature, not a security feature. You must test inside the trust boundary. Additionally, for networked APIs/RESTful APIs/protocol endpoints, it is important to consider authentication, authorization, and revocation.

■ When reading, ensure that all data is copied in before validation for purpose (see the next bullet). The low, or untrusted, side can't be trusted to validate data. Nor can it be trusted not to change data under its control after you've checked it. There is an entire genus of security flaws called TOCTOU ("time of check, time of use") in which this pattern is violated. The data that you take from the low side needs to be copied into secured memory, validated for some purpose, and then used without further reference to the data on the low side.

■ Know what purpose the data will be put to, and validate that it matches what the rest of your system expects from it. Knowing what the data will be used for enables you to check it for that purpose; for example, ensure that an IPv4 address is four octets, or that an e-mail address matches some regular expression (an e-mail regular expression is an easily grasped example, but sending e-mail to the address is better [Celis, 2006]). If the API is a pass-through (for example, listening on a socket), then you may be restricted to validating length, length to a C-style string, or perhaps nothing at all. In that case, your documentation should be very clear that you are doing minimal or no validation, and callers should be cautious.

■ Ensure that your messages enable troubleshooting without giving away secrets. Trusted code will often know things that untrusted code should not. You need to balance the capability to debug problems against returning too much data. For example, if you have a database connection, you might want to return an error like "can't connect to server instance with username dba password dfug90845b4j," but anyone who connected then knows the DBA's password. Oops! Messages of the form "An error of type X occurred. This is instance Y in the error log Z" are helpful enough to a systems administrator, who can search for Y in Z while only disclosing the existence of the logs to the attacker. Even better are errors that include information about whom to contact. Messages that merely say "Contact your system administrator" are deeply frustrating.

■ Document what security checks you perform, and the checks you expect callers to perform for themselves. Very few APIs take unconstrained input. The HTTP interface is a web interface: It expects a verb (GET, POST, HEAD) and data. For a GET or HEAD, the data is a URL, an HTTP version, and a set of HTTP headers. Old-fashioned CGI programs knew that the web server would pass them a set of headers as environment variables and then a set of name-value pairs. The environment variables were not always unique, leading to a number of bugs. What a CGI could rely on could be documented, but the diverse set of attacks and assumptions that people could read into it was not documentable.

■ Ensure that any cryptographic function runs in constant time. All your crypto functions should run in constant time from the perspective of the low trust side. It should be obvious that cryptographic keys (except the public portion of asymmetric systems) are a critical subset of the things you should not expose to the low side. Crypto keys are usually both a stepping-stone asset and a thing you want to protect as an asset. See also Chapter 16 on threats to cryptosystems.

■ Handle the unique security demands of your API. While the preceding issues show up with great consistency, you're hopefully building a new API to deliver new value, and that API may also bring new risks that you should consider. Sometimes it's useful to use the "rogue insider" model to help ask "What could we do wrong with this?" In addition to the preceding checklist, it may be helpful to look at similar or competitive APIs and see what security changes they've made, although those changes may not be documented as such.
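A few of these requirements can be shown in one short sketch. This is an illustration in Python with invented names, not a prescription: the untrusted input is copied in before checking (avoiding TOCTOU), validated for its specific purpose (here, an IPv4 address), secrets are compared in constant time, and the error message points at a log rather than leaking anything sensitive.

```python
import hmac
import ipaddress

SHARED_SECRET = b"example-secret"  # illustrative; real keys come from key management

def handle_request(untrusted_buf: bytearray, claimed_addr: str, token: bytes) -> str:
    # 1. Copy the data in before validating (avoids TOCTOU: the low side
    #    can't change our private copy after we've checked it).
    data = bytes(untrusted_buf)

    # 2. Validate for purpose: we expect an IPv4 address, so parse it as one
    #    rather than merely checking its length.
    addr = ipaddress.IPv4Address(claimed_addr)  # raises ValueError if not IPv4

    # 3. Compare authentication tokens in constant time, so response timing
    #    doesn't leak how many leading bytes matched.
    if not hmac.compare_digest(token, SHARED_SECRET):
        # 4. The error aids troubleshooting without giving away secrets.
        raise PermissionError("auth failed (error 401; see server log for details)")

    return f"accepted {len(data)} bytes from {addr}"
```

Note that every check happens on the high trust side, after the copy; nothing here relies on the caller having validated its own input.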
  • 590. Chapter 7 ■ Processing and Managing Threats 143 c07.indd 09:44:3:AM 01/09/2014 Page 143 Summary There are a set of tools and techniques that you can use to help threat modeling fi t with other development, architecture, or technology deployments. Threat modeling tasks that use these tools happen at the start of a project, as you’re working through features, and as you’re close to delivery. Threat modeling should start with the creation of a software diagram (or updating the diagram from the previous release). It should end with a set of security bugs being fi led, so that the normal development
  • 591. process picks up and manages those bugs. When you’re creating the diagram, start with the broadest description you can, and add details as appropriate. Work top down, and as you do, at each level of the diagram(s), work across something: trust boundaries, software elements or your threat discovery technique. As you look to create mitigations, be aware that attackers may try to bypass those mitigations. You want to mitigate the most accessible (aka “fi rst order”) threats fi rst, and then mitigate attacks against your mitigations. You have to consider your threats and mitigations not as a static
  • 592. environment, but as a game where the attacker can move pieces, and possibly cheat. As you go through these analyses, you’ll want to track discoveries, includ- ing threats, assumptions, and things your customers need to know. Customers here include your customers, who need to understand what your goals and non-goals are, and API callers, who need to understand what security checks you perform, and what checks they need to perform. There are some scenario-specifi c call outs: It is important to respect the customer/vendor security boundary; new technologies can and should be threat modeled, especially with respect to all the trust
  • 593. boundaries, not just the customer/vendor one; all APIs have very similar threat models, although there may be new and interesting security properties of your new and interesting API. www.it-ebooks.info http://www.it-ebooks.info/ c07.indd 09:44:3:AM 01/09/2014 Page 144 www.it-ebooks.info http://www.it-ebooks.info/ 145 c08.indd 12:26:46:PM 01/13/2014 Page 145 So far you’ve learned to model your software using diagrams
  • 594. and learned to fi nd threats using STRIDE, attack trees, and attack libraries. The next step in the threat modeling process is to address every threat you’ve found. When it works, the fastest and easiest way to address threats is through technology-level implementations of defensive patterns or features. This chapter covers the standard tactics and technologies that you will use to mitigate threats. These are often operating system or program features that you can confi gure, activate, apply or otherwise rapidly engage to defend against one or more threats. Sometimes, they involve additional code that is widely available and designed to
  • 595. quickly plug in. (For example, tunneling connections over SSH to add security is widely supported, and some unix packages even have options to make that easier.) Because you likely found your threats via STRIDE, the bulk of this chapter is organized according to STRIDE. The main part of the chapter addresses STRIDE and privacy threats, because most pattern collections already include information about how to address the threats. Tactics and Technologies for Mitigating Threats The mitigation tactics and technologies in this chapter are organized by STRIDE because that’s most likely how you found them. This section is therefore orga-
  • 596. nized by ways to mitigate each of the STRIDE threats, each of which includes C H A P T E R 8 Defensive Tactics and Technologies www.it-ebooks.info http://www.it-ebooks.info/ 146 Part III ■ Managing and Addressing Threats c08.indd 12:26:46:PM 01/13/2014 Page 146 a brief recap of the threat, tactics that can be brought to bear against it, and the techniques for accomplishing that by people with various skills
  • 597. and respon- sibilities. For example, if you’re a developer who wants to add cryptographic authentication to address spoofi ng, the techniques you use are different from those used by a systems administrator. Each subsection ends with a list of specifi c technologies. Authentication: Mitigating Spoofi ng Spoofi ng threats against code come in a number of forms: faking the program on disk, squatting a port (IP, RPC, etc.), splicing a port, spoofi ng a remote machine, or faking the program in memory (related problems with libraries and depen- dencies are covered under tampering); but in general, only
  • 598. programs running at the same or a lower level of trust are spoofable, and you should endeavor to trust only code running at a higher level of trust, such as in the OS. There is also spoofi ng of people, of course, a big, complex subject covered in Chapter 14, “Accounts and Identity.” Mitigating spoofi ng threats often requires unusually tight integration between layers of systems. For example, a mainte- nance engineer from Acme, Inc. might want remote (or even local) access to your database. Is it enough to know that the person is an employee of Acme? Is it enough to know that he or she can initiate a connection from Acme’s domain?
  • 599. You might reasonably want to create an account on your database to allow Joe Engineer to log in to it, but how do you bind that to Acme’s employee database? When Joe leaves Acme and gets a job at Evil Geniuses for a Better Tomorrow, what causes his access to Acme’s database to go away? N O T E Authentication and authorization are related concepts, and sometimes confused. Knowing that someone really is Adam Shostack should not authorize a bank to take money from my account (there are several people of that name in the U.S.). Addressing authorization is covered in the Authorization: Mitigating Elevation of Privilege
  • 600. From here, let’s dig into the specifi c ways in which you can ensure authen- tication is done well. Tactics for Authentication You can authenticate a remote machine either with or without cryptographic trust mechanisms. Without crypto involves verifying via IP or “classic” DNS entries. All the noncryptographic methods are unreliable. Before they existed, there were attempts to make hostnames reliable, such as the double-reverse DNS lookup. At the time, this was sometimes the best tactic for authentication. www.it-ebooks.info
  • 601. http://www.it-ebooks.info/ Chapter 8 ■ Defensive Tactics and Technologies 147 c08.indd 12:26:46:PM 01/13/2014 Page 147 Today, you can do better, and there’s rarely an excuse for doing worse. (SNMP may be an excuse, and very small devices may be another). As mentioned earlier, authenticating a person is a complex subject, covered in Chapter 14. Authenticating on-system entities is somewhat operating system dependent. Whatever the underlying technical mechanisms are, at some point crypto- graphic keys are being managed to ensure that there’s a correspondence between
technical names and names people use. That validation cannot be delegated entirely to machines. You can choose to delegate it to one of the many companies that assert they validate these things. These companies often do business as “PKI” or “public key infrastructure” companies, and are often referred to as “certification authorities” or “CAs”. You should be careful about relying on that delegation for any transaction valued at more than what the company will accept for liability. (In most cases, certificate authorities limit their liability to nothing.) Why you should assign it a higher value is a question their marketing departments hope will not be asked, but the answer roughly
boils down to convenience, limited alternatives, and accepted business practice.

Developer Ways to Address Spoofing

Within an operating system, you should aim to use full and canonical path names for libraries, pipes, and so on to help mitigate spoofing. If you are relying on something being protected by the operating system, ensure that the permissions do what you expect. (In particular, unix files in /tmp are generally unreliable, and Windows historically has had similarly shared directories.) For networked systems in a single trust domain, using operating system mechanisms such
as Active Directory or LDAP makes sense. If the system spans multiple trust domains, you might use persistence or a PKI. If the domains change only rarely, it may be appropriate to manually cross-validate keys, or to use a contract to specify who owns what risks. You can also use cryptographic ways to address spoofing, and these are covered in Chapter 16, “Threats to Cryptosystems.” Essentially, you tie a key to a person, and then work to authenticate that the key is correctly associated with the person who’s connecting or authenticating.

Operational Ways to Address Spoofing

Once a system is built, a systems administrator has limited
options for improving spoofing defenses. To the extent that the system is internal, pressure can be brought to bear on system developers to improve authentication. It may also be possible to use DNSSEC, SSH, or SSL tunneling to add or improve authentication. Some network providers will filter outbound traffic to make spoofing harder. That’s helpful, but you cannot rely on it.
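The developer guidance above—use full, canonical path names and verify that permissions do what you expect—can be sketched as follows. The allowed directory and function name are hypothetical:

```python
import os

ALLOWED_DIR = "/var/lib/myapp"  # hypothetical application data directory

def open_app_file(requested_path):
    """Canonicalize before checking: realpath() resolves symlinks and
    '..' segments, so the check and the open see the same file."""
    base = os.path.realpath(ALLOWED_DIR)
    canonical = os.path.realpath(requested_path)
    # Reject anything whose canonical form escapes the allowed directory.
    if os.path.commonpath([base, canonical]) != base:
        raise PermissionError(f"{requested_path!r} escapes {ALLOWED_DIR!r}")
    return open(canonical, "rb")
```

Note that even this sketch is subject to time-of-check/time-of-use races against a hostile local user; on systems that support it, flags such as O_NOFOLLOW tighten the window further.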
Authentication Technologies

Technologies for authenticating computers (or computer accounts) include the following:

■ IPSec
■ DNSSEC
■ SSH host keys
■ Kerberos authentication
■ HTTP Digest or Basic authentication
■ “Windows authentication” (NTLM)
■ PKI systems, such as SSL or TLS with certificates

Technologies for authenticating bits (files, messages, etc.) include the following:

■ Digital signatures
■ Hashes

Methods for authenticating people can involve any of the following:

■ Something you know, such as a password
■ Something you have, such as an access card
■ Something you are, such as a biometric, including photographs
■ Someone you know who can authenticate you

Technologies for maintaining authentication across connections include the following:

■ Cookies

Maintaining authentication across connections is a common issue as you
integrate systems. The cookie pattern has flaws, but generally, it has fewer flaws than re-authenticating with passwords.

Integrity: Mitigating Tampering

Tampering threats come in several flavors, including tampering with bits on disk, bits on a network, and bits in memory. Of course, no one is limited to tampering with a single bit at a time.

Tactics for Integrity
There are three main ways to address tampering threats: relying on system defenses such as permissions, use of cryptographic mechanisms, and use of logging technology and audit activities as a deterrent. Permission mechanisms can protect things that are within their scope of control, such as files on disk, data in a database, or paths within a web server. Examples of such permissions include ACLs on Windows, unix file permissions, or .htaccess files on a web server. There are two main cryptographic primitives for integrity: hashes and signatures. A hash takes an input of some arbitrary length, and produces a fixed-length digest or hash of the input. Ideally, any change to the input completely transforms the output. If you store a protected hash of a digital object, you can later detect tampering. Actually, anyone with that hash can detect tampering, so, for example, many software projects list a hash of the software on their website. Anyone who gets the bits from any source can rely on them being the bits described on the project website, to a level of security based on the security of the hash and the operation of the web site. A signature is a cryptographic operation with a private key and a hash that does much the same thing. It has the advantage that once someone has obtained the right public
key, they can validate a lot of hashes. Hashes can also be used in binary trees of various forms, where large sets of hashes are collected together and signed. This can enable, for example, inserting data into a tree and noting the time in a way that’s hard to alter. There are also systems for using hashes and signatures to detect changes to a file system. The first was co-invented by Gene Kim, and later commercialized by Tripwire, Inc. (Kim, 1994). Logging technology is a weak third in this list. If you log how files change, you may be able to recover from integrity failures.

Implementing Integrity
If you’re implementing a permission system, you should ensure that there’s a single permissions kernel, also called a reference monitor. That reference monitor should be the one place that checks all permissions for everything. This has two main advantages. First, you have a single monitor, so there are no bugs, synchronization failures, or other issues based on which code path was called. Second, you only have to fix bugs in one place. Creating a good reference monitor is a fairly intricate bit of work. It’s hard to get right, and easy to get wrong. For example, it’s easy to run checks on references
(such as symlinks) that can change when the code finally opens the file. If you need to implement a reference monitor, perform a literature review first. If you’re implementing a cryptographic defense, see Chapter 16. If you’re implementing an auditing system, you need to ensure it is sufficiently performant that people will leave it on, that security successes and failures are both logged, and that there’s a usable way to access the logs. You also need to ensure
that the data is protected from attackers. Ideally, this involves moving it off the generating system to an isolated logging system.

Operational Assurance of Integrity

The most important element of assuring integrity is about process, not technology. Mechanisms for ensuring integrity only work to the extent that integrity failures generate operational exceptions or interruptions that are addressed by a person. All the cryptographic signatures in the world only help if someone investigates the failure, or if the user cannot or does not override the message about a failure. You can devote all your disk access operations to
running checksums, but if no one investigates the alarms, they won’t do any good. Some systems use “whitelists” of applications so only code on the whitelist runs. That reduces risk, but carries an operational cost. It may be possible to use SSH or SSL tunneling or IPSec to address network tampering issues. Systems like Tripwire, OSSEC, or L5 can help with system integrity.

Integrity Technologies

Technologies for protecting files include:

■ ACLs or permissions
■ Digital signatures
■ Hashes
■ Windows Mandatory Integrity Control (MIC) feature
■ Unix immutable bits

Technologies for protecting network traffic:

■ SSL
■ SSH
■ IPSec
■ Digital signatures

Non-Repudiation: Mitigating Repudiation

Repudiation is a somewhat different threat because it bridges the business realm, in which there are four elements to addressing it: preventing fraudulent
transactions, taking note of contested issues, investigating them, and responding to them. In an age when anyone can instantly be a publisher, assuming that you can ignore the possibility of a customer (or noncustomer) complaint or contested charge is foolish. Ensuring you can accept customer complaints and investigate them is outside the scope of this book, but the output from such a system provides a key validation that you have the right logs. Note that repudiation is sometimes a feature. As Professor Ian
Goldberg pointed out when introducing his Off-the-Record messaging protocol, signed conversations can be embarrassing, incriminating, or otherwise undesirable (Goldberg, 2008). Two features of the Off-the-Record (OTR) messaging system are that it’s secure (encrypted and authenticated) and deniable. This duality of feature or threat also comes up in the LINDDUN approach to privacy threat modeling.

Tactics for Non-Repudiation

The technical elements of addressing repudiation are fraud prevention, logs, and cryptography. Fraud prevention is sometimes considered outside the scope
of repudiation. It’s included here because managing repudiation is easier if you have fewer contested transactions. Fraud prevention can be divided into fraud by internal actors (embezzlement and the like) and external fraud. Internal fraud prevention is a complex matter; for a full treatment see The Corporate Fraud Handbook (Wells, 2011). You should have good account management practices, including ensuring that your tools work well enough that people are not tempted or forced to share passwords as part of getting their jobs done. Be sure you log and audit the data in those logs. Logs are the traditional technical core of addressing repudiation issues. What is logged depends on the transaction, but generally includes
signatures or an IP address and all related information. There are also cryptographic ways to address repudiation, which are currently mostly used between larger businesses.

Tactics for Preventing Fraud by External Parties

External fraud prevention can be seen as a matter of payment fraud prevention, and ensuring that your customers remain in control of their account. In both cases, details about the state of the art change quickly, so talk to your peers. Even the most tight-lipped companies have been willing to have very frank discussions with peers under NDA. In essence, stability is good. For example, someone who has
been buying two romance novels a month from you for a decade and is still living at the same address is likely the person who just ordered another one. If that person suddenly moves to the other side of the world, and orders technical books in Slovakian with a new credit card with a billing address in the Philippines, you might have a problem. (Then again, they might have finally found true love,
and you don’t want to upset your loyal customers.)

Tools for Preventing Fraud by External Parties

In their annual report on online fraud, CyberSource includes a survey of popular fraud detection tools and their perceived effectiveness (CyberSource, 2013). Their 2013 survey includes a set of automated tools:

■ Validation services
■ Proprietary data/customer history
■ Multi-merchant data
■ Purchase device tracing

Validation services include tracking verification numbers (aka CVN/CVV), address verification services, postal address verification,
Verified by Visa/MasterCard SecureCode, telephone number verification/reverse lookups, public records services, credit checks, and “out-of-wallet/in-wallet” verification services. Proprietary data and customer history includes customer order history, in-house “negative lists” of problematic customers, “positive lists” of VIP or reliable customers, order velocity monitoring, company-specific fraud models (these are usually built with manual, statistical, or machine learning analyses of past fraudulent orders), and customer website behavioral analysis. Multi-merchant data focuses on shared negative lists or multi-merchant
purchase velocity analyzed by the merchant. (This analysis is nominally also performed by the card processors and clearing houses, so the additional value may be transient.) Finally, purchase device tracking includes device “fingerprinting” and IP address geolocation. The CyberSource report also discusses the importance of tools to help manual review, and how a varied list is both very helpful and time consuming. Because manual review is one of the most expensive components of an anti-fraud approach to repudiation threats, it may be worth investing in tools to gather all the data into one (or at least fewer) places to improve analyst
productivity.

Implementing Non-Repudiation

The two key tools for non-repudiation are logging and digital signatures. Digital signatures are probably most useful for business-to-business systems. Log as much as you can keep for as long as you need to keep it. As the price of storage continues to fall, this advice becomes easier and easier to follow. For example, with a web transaction, you might log IP address, current geolocation of that address, and browser details. You might also consider services that
either provide information on fraud or allow you to request decision advice. To the extent that these companies specialize, and may have broader visibility into fraud, this may be a good area of security to outsource. Some of the information you log or transfer may interact with your privacy policies, and it’s important to check. There are also cryptographic digital signatures. Digital signature should be distinguished from electronic signature, which is a term of art
under U.S. law referring to a variety of mechanisms with which to produce a signature, some as minimalistic as “press 1 to agree to these terms and conditions.” In contrast, a digital signature is a mathematical transformation that demonstrates irrefutably that someone in possession of a mathematical key took an action to cause a signature to be made. The strength of “irrefutable” here depends on the strength of the math, and the tricky bits are possession of the key and what human intent (if any) may have lain behind the signature.

Operational Assurance of Non-Repudiation

When a customer or partner attempts to repudiate a transaction,
someone needs to investigate it. If repudiation attempts are frequent, you may need dedicated people, and those people might require specialized tools.

Non-Repudiation Technologies

Technologies you can use to address repudiation include:

■ Logging
■ Log analysis tools
■ Secured log storage
■ Digital signatures
■ Secure time stamps
■ Trusted third parties
■ Hash trees
■ The tools mentioned in “Tools for Preventing Fraud by External Parties,” above

Confidentiality: Mitigating Information Disclosure

Information disclosure can happen with information at rest (in storage) or in motion (over a network). The information disclosed can range from the content of communication to the existence of an entity with which someone is communicating.

Tactics for Confidentiality
Much like with integrity, there are two main ways to prevent information disclosure: Within the confines of a system, you can use ACLs, and outside of it you must use cryptography. If what must be protected is the content of the communication, then traditional cryptography will be sufficient. If you need to hide who is communicating with whom and how often, you’ll need a system that protects that data, such as a cryptographic mix or onion network. If you must hide the fact that communication is taking place at all, steganography will be required.

Implementing Confidentiality
If your system can act as a reference monitor and control all access to the data, you can use a permissions system. Otherwise, you’ll need to encrypt either the data or its “container.” The data might be a file on disk, a record in a database, or an e-mail message as it transits over the network. The container might be a file system, database, or network channel, such as all e-mail between two systems, or all packets between a web client and a web server. In each cryptographic case, you have to consider who needs access to the keys for encrypting and decrypting the data. For file encryption, that might be as simple as asking the operating system to securely store the key for the user so
that the user can get to it later. Also, note that encrypted data is not integrity controlled. The details can be complex and tricky, but consider a database of salaries, where the cells are encrypted. You don’t need to know the CEO’s salary to know that replacing your salary with it is likely a good thing (for you); and if there’s no integrity control, replacing the encrypted value of your salary with the CEO’s salary will do just fine. An important subset of information disclosure cases related to the storage of passwords or backup authentication mechanisms is considered in depth in Chapter 14.
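The salary example can be made concrete with a toy sketch. The XOR “cipher” below stands in for any encryption that lacks integrity protection; all names and values are hypothetical, and this construction must never be used for real data. Binding a MAC to the row name is one way to detect the swap:

```python
import hashlib, hmac, os

KEY = os.urandom(32)  # toy shared secret; real systems need real key management

def cell_encrypt(key, value):
    # Toy XOR "cipher" standing in for any unauthenticated encryption.
    stream = hashlib.sha256(key).digest()
    data = value.ljust(32).encode()
    return bytes(p ^ s for p, s in zip(data, stream))

def cell_decrypt(key, ct):
    stream = hashlib.sha256(key).digest()
    return bytes(c ^ s for c, s in zip(ct, stream)).decode().rstrip()

def cell_mac(key, row, ct):
    # Binding the MAC to the row name means a cross-row swap won't verify.
    return hmac.new(key, row.encode() + ct, hashlib.sha256).digest()

db = {"ceo": cell_encrypt(KEY, "950000"), "me": cell_encrypt(KEY, "52000")}
tags = {row: cell_mac(KEY, row, ct) for row, ct in db.items()}

db["me"] = db["ceo"]  # the attack: copy ciphertext, no key required
swapped = cell_decrypt(KEY, db["me"])  # now decrypts to the CEO's salary
tamper_detected = not hmac.compare_digest(tags["me"], cell_mac(KEY, "me", db["me"]))
```

Without the MAC, the swapped cell decrypts cleanly; with it, verification fails and the tampering is visible.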
Operational Assurance of Confidentiality

It may be possible to add ACLs to an already developed system, or to use chroot or similar sandboxes to restrict what it can access. On Windows, the addition of a SID to a program and an inherited deny ACL for that SID may help (or it may break things). It is usually possible to add a disk or file encryption layer to protect information at rest from disclosure. Disk crypto will work “by default” with all the usual caveats about how keys are managed. It works for adversarial custody of the machine, but not if the password is written down or otherwise stored with the machine. With regard to a network, it may be
possible to use SSH or SSL tunneling or IPSec to address network information disclosure issues.

Confidentiality Technologies

Technologies for confidentiality include:

■ Protecting files:
  ■ ACLs/permissions
  ■ Encryption
  ■ Appropriate key management
■ Protecting network data:
  ■ Encryption
  ■ Appropriate key management
■ Protecting communication headers or the fact of communication:
  ■ Mix networks
  ■ Onion routing
  ■ Steganography

NOTE In the preceding lists, “appropriate key management” is not quite a technology, but is so important that it’s included.

Availability: Mitigating Denial of Service

Denial-of-service attacks work by exhausting some resource. Traditionally,
those resources are CPU, memory (both RAM and hard drive space can be exhausted), and bandwidth. Denial-of-service attacks can also exhaust human availability. Consider trying to call the reservations line of a very exclusive restaurant—the French Laundry in Napa Valley books all its tables within 5 minutes of the phone being open every day (for a day 30 days in the future). The resource under contention is the phone lines, and in particular the people answering them.

Tactics for Availability

There are two forms of denial-of-service attacks: brute force and clever. Using
the restaurant example, brute force involves bringing 100 people to a restaurant that can seat only 25. Clever attacks bring 20 people, each of whom makes an ever-escalating list of requests and changes, and runs the staff ragged. In the online world, brute force attacks on networks are somewhat common under the name DDoS (Distributed Denial of Service). They can also be carried out against CPU (for example, while(1) fork()) or disk. It’s simple to construct a small zip
file that will expand to whatever limit might be in place: the maximum size of a file or space on the file system. Recall that a zip file is structured to describe the contents of the real file as simply as possible, such as 65,535 0s. That three-byte description will expand to 64K, for a magnification effect of over 21,000—which is awfully cool if you’re an attacker. Clever denial-of-service attacks involve a small amount of work by an attacker that causes you to do a lot of work. For example, connecting to an SSL v2 server, the client sends a client master key challenge, which is a random key encrypted
such that the server does (relatively) expensive public key operations to decrypt it. The client does very little work compared to the server. This can be partially addressed in a variety of ways, most notably the Photuris key management protocol. The core of such protocols is proof that the client has done more work than the server, and the body of approaches is called proof of work. However, in a world of abundant bots and volunteers to run DDoS software for political causes, Ben Laurie and Richard Clayton have shown reasonably conclusively that “Proof-of-Work Proves Not to Work” (in a paper of that name [Laurie]). A second important strategy for defending against denial-of-service attacks is to ensure your attacker can receive data from you. For example, defenses against SYN flooding attacks now take this form. In a SYN flood attack, a host receives a lot of connection attempts (TCP SYNchronize) and it needs to keep track of each one to set up new connections. By sending a slew of those, operating systems in the 1990s could be run out of memory in the fixed-size buffers allocated to track SYNs, and no new connections could be established. Modern TCP stacks calculate certain parts of their response to a SYN packet using some cryptography. They maintain no state for incoming packets, and use the
cryptographic tools to validate that new connections are real (Rescorla, 2003).

Implementing Availability

If you’re implementing a system, consider what resources an attacker might consume, and look for ways to limit those resources on a per-user basis. Understand that there are limits to what you can achieve when dealing with systems on the other side of a trust boundary, and some of the response needs to be operational. Ensure that the operators have such mechanisms.

Operational Assurance of Availability

Addressing brute force denial-of-service attacks is simple: Acquire more resources
such that they don’t run out, or apply limits so that one bad apple can’t spoil things for others. For example, multi-user operating systems implement quota systems, and business ISPs may be able to filter traffic coming from certain sources. Addressing clever attacks is generally in the realm of implementation, not operations.

Availability Technologies
Technologies for protecting availability include:

■ ACLs
■ Filters
■ Quotas (rate limiting, thresholding, throttling)
■ High-availability design
■ Extra bandwidth (rate limiting, throttling)
■ Cloud services

Authorization: Mitigating Elevation of Privilege

Elevation of privilege threats are one category of unauthorized use, and the only one addressed in this section. The overall question of designing authorization systems fills other books.
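The quotas and rate limiting listed among the availability technologies above—limiting resources per user so one bad apple can’t spoil things for others—can be sketched as a token bucket. Capacity and refill rate are illustrative:

```python
import time

class TokenBucket:
    """Per-user rate limiter: each request spends a token, and tokens
    refill at a fixed rate, so a flood from one user exhausts only that
    user's bucket rather than the shared resource."""
    def __init__(self, capacity=10, refill_per_sec=1.0):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill according to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

In practice one bucket is kept per user or per source address, and requests that find an empty bucket are rejected or delayed rather than serviced.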
Tactics for Authorization

As discussed in the section “Implementing Integrity,” having a reference monitor that can control access between objects is a precursor to avoiding several forms of a problem, including elevation of privilege. Limiting the attack surface makes the problem more tractable. For example, limiting the number of setuid programs limits the opportunity for a local user to become root. (Technically, programs can be setuid to something other than root, but generally those other accounts are also privileged.) Each program should do a small number of things, and carefully manage its input, including user input, environment,
and so on. Each should be sandboxed to the extent that the system supports it. Ensure that you have layers of defense, such that an anonymous Internet user can’t elevate to administrator with a single bug. You can do this by having the code that listens on the network run as a limited user. An attacker who exploits a bug will not have complete run of the system. (If they’re a normal user, they may well have easy access to many elevation paths, so lock down the account.) The permission system needs to be comprehensible, both to administrators trying to check things and to people trying to set things. A permission system
that’s hard to use often results in people incorrectly setting permissions, (technically) enabling actions that policy and intent mean to forbid.

Implementing Authorization

Having limited the attack surface, you’ll need to very carefully manage the input you accept at each point on the attack surface. Ensure that you know what you want to accept and how you’re going to use that input. Reject anything that
doesn’t match, rather than trying to make a complete list of bad characters. Also, if you get a non-match, reject it, rather than try to clean it up.

Operational Assurance of Authorization

Operational details, such as “we need to expose this to the Internet,” can often lead to those deploying technology wanting to improve their defensive stance. This usually involves adding what can be referred to as defense in depth or layered defense. There are several ways to do this. First, run as a normal or limited user, not as administrator/root. While technically that’s not a mitigation against an elevation-of-privilege threat but a harbinger of such, it’s in line with the “principle of least privilege.” Each program
should run as its own limited user. When unix made “nobody” the default account for services, the nobody account ended up with tremendous levels of authorization. Second, apply all the sandboxing you can.

Authorization Technologies

Technologies for improving authorization include:

■ ACLs
■ Group or role membership
■ Role-based access control
■ Claims-based access control
■ Windows privileges (runas)
■ Unix sudo
■ Chroot, AppArmor, or other unix sandboxes
■ The “MOICE” Windows sandbox pattern
■ Input validation for a defined purpose

NOTE MOICE is the “Microsoft Office Isolated Conversion Environment.” The name comes from the problem that led to the pattern being invented, but the approach can now be considered a pattern for sandboxing on Windows. For more on MOICE, see (LeBlanc, 2007).
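The input-validation advice from “Implementing Authorization” above—define what you accept and reject everything else, without trying to clean up near misses—can be sketched as follows. The username rule is hypothetical:

```python
import re

# Hypothetical rule: this endpoint accepts only usernames of 3-16
# lowercase letters or digits. Everything else is rejected outright.
USERNAME_RE = re.compile(r"[a-z0-9]{3,16}")

def validate_username(raw):
    """Allowlist check: reject on non-match rather than enumerating bad
    characters or attempting to repair the input."""
    if USERNAME_RE.fullmatch(raw) is None:
        raise ValueError("input rejected")
    return raw
```

Note the shape of the check: it names what is acceptable, and anything outside that definition fails closed, rather than a denylist of characters the developer happened to think of.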
NOTE Many Windows privileges are functionally equivalent to administrator, and may not be as helpful as you desire. See (Margosis, 2006) for more details.

Tactic and Technology Traps

There are two places where it’s easy to get pulled into wasting time when working through these technologies and tactics. The first distraction is risk management. The tactics and technologies in this chapter aren’t the only ways to address threats, but they are the best place to start. When you can use them, they will be easier to implement and work better than more complex or nuanced risk management approaches. For example, if you can address a
threat by changing a network endpoint to a local endpoint, there’s no point to engaging in the more time-consuming risk management approaches covered in the next chapter. The second distraction is trying to categorize threats. If you found a threat via brainstorming or just the free flow of ideas, don’t let the organization of this chapter fool you into thinking you should try to categorize that threat. Instead, focus on finding the best way to address it. (Teams can spend longer in debate around categorization than it would take to implement the fix they identified—changing permissions on a file.)
Addressing Threats with Patterns

In his book A Pattern Language, architect Christopher Alexander and his colleagues introduced the concept of architectural patterns (Alexander, 1977). A pattern is a way of expressing how experts capture ways of solving recurring problems. Patterns have since been adapted to software. There are well-understood development patterns, such as the three-tier enterprise app. Security patterns seem like a natural way to group and communicate about tactics and technologies to address security problems into something larger. You can create and distribute patterns in a variety of ways, and this section discusses some of them. However, in practice, these patterns have not been
popular. The reasons for this are not clear, and those investing in using patterns to address security problems would likely benefit from studying the factors that have limited their popularity. Some of those factors might include engineers not knowing when to reach for such a text, or the presentation of security patterns as a distinct subset, apart from other patterns. At least one web patterns book (Van Duyne, 2007) includes a chapter on security patterns. Embedding security patterns where non-specialists are likely to find them seems like a good pattern.
Standard Deployments

In many larger organizations, an operations group will have a standard way to deploy systems, or possibly several standard ways, depending on the data’s sensitivity. In these cases, the operations group can document what sorts of threats their standard deployment mitigates, and provide that document as part of their “on-boarding” process. For example, a standard data center at
an organization might include defenses against DDoS, or state that “network information disclosure is an accepted risk for risk categories 1–3.”

Addressing CAPEC Threats

CAPEC (MITRE’s Common Attack Pattern Enumeration and Classification) is primarily a collection of attack patterns, but most CAPEC threat patterns include defenses. This chapter has primarily organized threats according to STRIDE. If you are using CAPEC, each CAPEC pattern includes advice about how to address it in its “