Heuristics and biases in cyber security dilemmas
Heather Rosoff • Jinshu Cui • Richard S. John
Published online: 28 September 2013
© Springer Science+Business Media New York 2013
Abstract Cyber security often depends on decisions
made by human operators, who are commonly considered a
major cause of security failures. We conducted 2 behav-
ioral experiments to explore whether and how cyber
security decision-making responses depend on gain–loss
framing and salience of a primed recall prior experience. In
Experiment I, we employed a 2 × 2 factorial design,
manipulating the frame (gain vs. loss) and the presence
versus absence of a prior near-miss experience. Results
suggest that the experience of a near-miss significantly
increased respondents’ endorsement of safer response
options under a gain frame. Overall, female respondents
were more likely to select a risk averse (safe) response
compared with males. Experiment II followed the same
general paradigm, framing all consequences in a loss frame
and manipulating recall to include one of three possible
prior experiences: false alarm, near-miss, or a hit involving
a loss of data. Results indicate that the manipulated prior
hit experience significantly increased the likelihood of
respondents’ endorsement of a safer response relative to
the manipulated prior near-miss experience. Conversely,
the manipulated prior false-alarm experience significantly
decreased respondents’ likelihood of endorsing a safer
response relative to the manipulated prior near-miss
experience. These results also showed a main effect for age
and were moderated by respondent’s income level.
Keywords Cyber security · Framing effect · Near-miss · Decision making
1 Introduction
Individual users regularly make decisions that affect the
security of their personal devices connected to the internet
and, in turn, to the security of the cybersphere. For
example, they must decide whether to install software to
protect from viruses and hackers, download files from
unknown sources, or submit personal identification infor-
mation for web site access or online purchases. Such
decisions involve actions that could result in various neg-
ative consequences (loss of data, reduced computer per-
formance or destruction of a computer’s hard drive).
Conversely, other alternative actions are available that
could protect individuals from negative outcomes, but also
could limit the efficiency and ease of use of the personal
device.
Aytes and Connolly (2004) propose a decision model of
computer-related behavior that suggests individuals make a
rational choice to either engage in safe or unsafe cyber
behavior. In their model, individual behavior is driven by
perceptions of the usefulness of safe and unsafe behaviors
and the consequences of each. More specifically, the model
captures how information sources, the user’s base knowl-
edge of cyber security, the user’s relevant perceptions (e.g.,
interpretations of the applicability of the knowledge), and
the user’s risk attitude influence individual cyber decision
making.
H. Rosoff (corresponding author)
Sol Price School of Public Policy, University of Southern
California, Los Angeles, CA, USA
e-mail: [email protected]
H. Rosoff · J. Cui · R. S. John
Center for Risk and Economic Analysis of Terrorism Events
(CREATE), University of Southern California, Los Angeles,
CA, USA
J. Cui · R. S. John
Department of Psychology, University of Southern California,
Los Angeles, CA, USA
Environ Syst Decis (2013) 33:517–529
DOI 10.1007/s10669-013-9473-2
This paper reports on two behavioral experiments, using
over 500 respondents, designed to explore whether and
how recommended cyber security decision-making
responses depend on gain–loss framing and salience of
prior cyber dilemma experiences. More specifically, we
explored whether priming individuals to recall a prior
cyber-related experience influenced their decision to select
either a safe versus risky option in responding to a hypo-
thetical cyber dilemma. We hypothesized that recall of a hit
experience involving negative consequences would
increase feelings of vulnerability, even more so than a
near-miss, and lead to the endorsement of a risk averse
option. This result has been reported in the disaster liter-
ature, which has shown that individual decision making
depends on prior experiences, including hits, near-misses
(events where a hazardous or fatal outcome could have occurred, but did not), and false alarms (Barnes et al. 2007;
Dillon et al. 2011; Siegrist and Gutscher 2008). Further-
more, damage from past disasters has been shown to sig-
nificantly influence individual perceptions of future risk
and to motivate more protective and mitigation-related
behavior (Kunreuther and Pauly 2004; Siegrist and Gut-
scher 2008; Slovic et al. 2005).
We anticipated that the effect of prior near-miss expe-
riences would depend on the interpretation of the prior
near-miss event by the respondent. This expectation was
based on near-miss research that has shown that future-
intended mitigation behavior depends greatly on the per-
ception of the near-miss event outcome. Tinsley et al.
(2012) describe two near-miss types—a resilient and a vulnerable near-miss. A resilient near-miss is an event in which the potential disaster did not occur. In these situations, individuals were found to
underestimate the danger of subsequent events and were
more likely to engage in risky behavior by choosing not to
take protective action. A vulnerable near-miss occurs when
a disaster almost happened. New information is incorpo-
rated into the assessment that counters the basic ‘‘near-
miss’’ definition and results in the individual being more
inclined to engage in risk averse behavior (the opposite
behavior related to a resilient near-miss interpretation). In
the cyber context, we expected that respondents who fail to
recognize a prior near-miss as a cyber threat would be more
likely to recommend the risky course of action. However, if
respondents view a recalled near-miss as evidence of vul-
nerability, then they would be more inclined to endorse the
safer option.
In the case of a recalled prior false-alarm experience,
one hypothesis known as the ‘‘cry-wolf effect’’ (Breznitz
2013) suggests that predictions of disasters that do not
materialize affect beliefs about the uncertainty associated
with future events. In this context, false alarms are believed
to create complacency and reduce willingness to respond to
future warnings, resulting in a greater likelihood of
engaging in risky behavior (Barnes et al. 2007; Donner
et al. 2012; Dow and Cutter 1998; Simmons and Sutter
2009). In contrast, there is research showing that the public
may have a higher tolerance for false alarms than antici-
pated. This is because of the increased credibility given to
the event due to the frequency with which it is discussed,
both through media sources and informal discussion, thus,
suggesting that false alarms might increase individuals’
willingness to be risk averse (Dow and Cutter 1998). We
anticipated that recall of prior false alarms would likely
make respondents feel less vulnerable and more willing to
prefer the risky option, compared with the near-miss and
hit conditions.
In our research, we also anticipated that there would be
some influence of framing on individual cyber decision
making under risk. Prospect theory and related empirical
research suggest that decision making under risk depends
on whether potential outcomes are perceived as a gain or as
a loss in relation to a reference point (Kahneman and
Tversky 1979; Tversky and Kahneman 1986). A common
finding in the literature on individual preferences in deci-
sion making shows that people tend to avoid risk under
gain frames, but seek risk when outcomes are framed as a
loss.
Prospect theory is discussed in the security literature,
but empirical studies in cyber security contexts are limited
(Acquisti and Grossklags 2007; Garg and Camp 2013;
Helander and Khalid 2000; Shankar et al. 2002; Verendel
2008). Among the security studies that have been con-
ducted, the results are mixed. The work by Schroeder and
colleagues on computer information security presented at
the 2006 Information Resources Management Association
International Conference found that decision makers were
risk averse in the gain frame, yet they showed no risk
preference in the loss frame. Similarly, in a 1999 presen-
tation about online shopping behavior by Helander and Du
at the International Conference on TQM and Human Fac-
tors, perceived risk of credit card fraud and the potential for
price inflation did not negatively affect purchase intention
(loss frame), while perceived value of a product was found
to positively affect purchase intention. We anticipated that
gain-framed messages in cyber dilemmas would increase
endorsement of protective responses and loss-framed
messages would have no effect on the endorsement of
protective options.
We also explored how subject variables affect the
strength and/or the direction of the relationship between the
manipulated variables, prior experience and gain–loss
framing, and the dependent variable, endorsement of safe
or unsafe options in response to cyber dilemmas. For
example, one possibility is that the relationship between
prior experience and risk averse behavior is greater for
individuals with higher self-reported victimization given
their increased exposure to cyber dilemma consequences.
Another possibility is that the relationship between the gain
frame and protective behavior would be less for younger
individuals because they are more familiar and comfortable
with the nuances of internet security. We anticipated that
there would be some difference in the patterns of response
as a function of sex, age, income, education, job domain,
and self-reported victimization.
The next section of this article describes the methods,
results, and a brief discussion for Experiment I, and Sect. 3
describes the methods, results, and a brief discussion for
Experiment II. The paper closes with a discussion of
findings across both experiments and how these results
suggest approaches to enhance and improve cyber security
by taking into account user decision making.
2 Experiment I
We conducted an experiment of risky cyber dilemmas with
two manipulated variables, gain–loss framing and primed
recall of a prior personal near-miss experience, to evaluate
individual cyber user decision making. The cyber dilem-
mas were developed to capture commonly confronted risky
cyber choices faced by individual users. In addition, in
Experiment I, the dependent variable focused on the advice
the respondent would provide to their best friend so as to
encourage more normative thinking about what might be
the correct response to the cyber dilemma. As such, each
cyber scenario described a risky choice dilemma faced by
the respondent’s ‘‘best friend,’’ and the respondent was
asked to recommend either a safe but inconvenient course
of action (e.g., recommend not downloading the music file
from an unknown source), or a risky but more convenient
option (e.g., recommend downloading the music file from
an unknown source).
2.1 Method
2.1.1 Design overview
In Experiment I, four cyber dilemmas were developed to
evaluate respondents’ risky choice behavior using a 2
(recalled personal near-miss experience or no recall control
condition) by 2 (gain versus loss-framed message) mixed
model factorial design with two dichotomous subject
variables: sex and self-reported victimization. Each par-
ticipant received all four dilemmas in a constant order.
Within this order, each of the four treatment conditions was
paired with each of the four dilemmas and counterbalanced
such that each of the dilemmas was randomly assigned to
each of the four treatment conditions.
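As an illustration of this counterbalancing scheme, the assignment of the four treatment conditions to the four dilemmas can be generated by rotating over a 4 × 4 Latin square. The sketch below is ours, not the authors' materials; the scenario labels, condition labels, and function name are illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): counterbalancing the four
# treatment conditions across the four scenario dilemmas with a cyclic
# (Latin-square) rotation, so that across respondents every dilemma appears
# in every condition equally often while scenario order stays constant.
from itertools import product

scenarios = ["music_file", "usb_drive", "facebook_app", "rare_book"]
conditions = list(product(["gain", "loss"], ["near_miss", "control"]))  # 4 cells

def assign_conditions(respondent_index):
    """Respondent i sees scenario j under condition (i + j) mod 4."""
    offset = respondent_index % 4
    return {scenarios[j]: conditions[(offset + j) % 4] for j in range(4)}

for i in range(4):  # the four counterbalancing groups
    print(i, assign_conditions(i))
```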
After each cyber dilemma, respondents were asked to
respond on a 6-point scale (1 = strongly disagree to
6 = strongly agree) whether they would advise their ‘‘best
friend’’ to proceed in taking a risky course of action.
Responses of 1–3 indicated endorsement of the safe but
inconvenient option, while responses of 4–6 indicated
endorsement of the risky but expedient option. Following
the four cyber dilemmas, respondents were given four
attention check questions to determine whether they were
reading the cyber scenarios carefully. In addition, basic
demographic information was collected as well as infor-
mation on each respondent’s personal experience and self-
reported victimization, if any, with the topics of the cyber
dilemmas.
2.1.2 Scenarios and manipulations
The four cyber dilemma scenarios involved the threat of a
computer virus resulting from the download of a music file,
the use of an unknown USB drive device, the download of
a Facebook application, and the risk of financial fraud from
an online purchase. Gain–loss framing and primed recall of
a prior personal experience were manipulated independent
variables. The framing messages were used to describe the
potential outcome of the risky cyber choice. The gain-
framed messages endorsed the safe, more protective rec-
ommendation. For example, for the download of a music
file scenario, the gain frame was worded as ‘‘If she presses
‘do not proceed,’ she may avoid the risk of acquiring a
virus that will cause serious damage to her computer.’’
Conversely, the loss-framed messages endorsed the risky
option/choice. For the download of a music file scenario,
the loss frame was worded as ‘‘If she presses ‘proceed,’ she
may risk acquiring a virus that will cause serious damage to
her computer.’’ The experimental design also included a
manipulation of primed recall of a prior personal experi-
ence. Respondents either recalled a near-miss experience of
their own before advising their friend, or did not (a control
condition). In each near-miss experience, the respondent’s
dilemma was similar to the situation faced by their best
friend and the consequences of the threat were benign. A
complete description of the four scenarios, including the
near-miss and gain–loss framing manipulations, is pro-
vided in Table 1.
2.1.3 Subjects
The experiment was conducted using the University of
Southern California’s Psychology Subject Pool. Students
participated for course credit. Of the 365 students who
participated in the experiment, 99 were omitted for not
answering all 4 of the attention check questions correctly,
resulting in a sample of 266 respondents. Most, 203 (76 %)
Table 1 Summary of four scenarios and manipulations
(Experiment I)
Scenario 1: Music File | Scenario 2: USB | Scenario 3: Facebook | Scenario 4: Rare Book

Scenario: Your best friend has contacted you
for advice. She wants to open a
music file linking to an early
release of her favorite band’s
new album. When she clicks on
the link, a window pops up
indicating that she needs to turn
off her firewall program in order
to access the file
Your best friend has contacted you
for advice. Her computer keeps
crashing because it is overloaded
with programs, documents and
media files. She consults a
computer technician who
advises her to purchase a 1
terabyte USB drive (data storage
device) to free up space on her
computer. She does her research
and narrows down the selection
to two choices
Your best friend has contacted you
for advice. She has opened her
Facebook page to find an app
request for a game that her
friends have been really excited
about. In order to download the
app, access to some of her
personal information is required
including her User ID and other
information from her profile
Your best friend has contacted you
for advice. She is going to buy a
rare book from an unknown
online store. The book is highly
desirable, expensive and only
available from this online store’s
website. By deciding to purchase
the book online with her credit
card, there is a risk that her
personal information will be
exploited which can generate
unauthorized credit card
charges. Her credit card charges
$50 for the investigation and
retrieval of funds expended
when resolving fraudulent credit
card issues
Gain framing: If she presses "do not proceed,"
she may avoid the risk of
acquiring a virus that will cause
serious damage to her computer
The first USB drive when used on
a computer other than your own
has a 10 % chance of becoming
infected with a virus that will
delete all the files and programs
on the drive. The second drive is
double the price, but has less
than a 5 % chance of becoming
infected with a virus when used
on a computer other than your
own
If she chooses not to agree to the
terms of the app, she is
protecting her private
information from being made
available to the developer of the
app
If she decides not to buy the book,
she may save up to $50 and the
time spent talking with the credit
card company
Loss framing: If she presses "proceed," she may
risk acquiring a virus that will
cause serious damage to her
computer
The first USB drive when used on
a computer other than her own
has a 5 % chance of becoming
infected with a virus that will
delete all the files and programs
on the drive. The second drive is
half the price but has more than
a 5 % chance of becoming
infected with a virus when used
on a computer other than her
own
If she chooses to agree to the terms
of the app, she risks the chance
of her private information being
made available to the developer
of the app
If she decides to buy the book, she
may lose up to $50 and the time
spent talking with the credit card
company
Near-miss experience: As you consider how to advise
your friend, you recall that you
were confronted by a similar
situation in the past. You
attempted to open a link to a
music file and a window popped
up saying that you need to turn
off your firewall program in
order to access the file. You
pressed ‘‘proceed’’ and your
computer immediately crashed.
Fortunately, after restarting your
computer everything was
functioning normally again
As you consider how to advise
your friend, you recall that your
USB drive recently was infected
with a virus after being plugged
into a computer at work. You
contacted a computer technician
to see if there was any way to
repair the drive. The technician
was able to recover all the files
and told you that you were really
lucky because normally such
drives cannot be restored
As you consider how to advise
your friend, you recall that you
once agreed to share some of
your personal information in
order to download an app on
Facebook. The developers of the
app made your User ID publicly
available and because of this you
started to receive messages from
strangers on your profile page.
You were very upset about the
invasion of your privacy.
Fortunately, you discovered that
you could change the privacy
settings of your profile so that
only your friends could access
your page
As you consider how to advise
your friend, you recall that you
once purchased a rare book from
an unknown online store. You
were expecting the book to
arrive 1 week later. About
2 weeks later, you had yet to
receive the book. You were very
concerned that you had done
business with a fake online store.
You contacted the store’s
customer service who
fortunately tracked down the
book’s location and had it
shipped with overnight delivery.
Question: Below please indicate your level
of agreement with the statement
‘‘You will advise your best
friend to press ‘‘proceed’’ and
risk acquiring a virus that will
cause serious damage to her
computer’’
Below please indicate your level
of agreement with the statement
‘‘You will advise your best
friend to buy the first USB drive
that has a 10 % chance of
becoming infected with a
virus.’’/’’You will advise your
best friend to buy the second
USB drive that has a greater than
5 % chance of becoming
infected with a virus’’
Below please indicate your level
of agreement with the statement
‘‘You will advise your best
friend to download the app and
risk having her private
information made available to
the app developer’’
Below please indicate your level
of agreement with the statement:
‘‘You will advise your best
friend to purchase the book
online and risk having her
personal information exploited’’
of the respondents were female. Respondents ranged in age from 18 to 41 years (95th percentile: 22 years).
Table 2 shows a summary of personal experience and
self-reported victimization associated with each of the four
cyber dilemmas. All respondents reported prior personal experience with at least one of the four cyber dilemmas. Twenty-four percent of respondents further reported being a victim of one or more of the four cyber dilemmas. We coded whether
the respondent had ever been victimized by one of the four
scenarios as a variable of self-reported victimization.
2.2 Results
Raw responses (1–6) were centered around the midpoint
(3.5) such that negative responses indicate endorsement of
the safe option, and positive responses indicate endorse-
ment of the risky option. Mean endorsement responses for
each of the four treatment conditions are displayed in
Fig. 1. The negative means in all four conditions indicate that subjects were more likely to endorse risk averse actions than the risky alternative (see Footnote 1).
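A minimal sketch of this centering step, assuming a NumPy array of raw 1–6 ratings (the variable names are ours, not the authors' code):

```python
# Centering the 1-6 agreement ratings on the scale midpoint so that negative
# values indicate endorsement of the safe option and positive values the risky one.
import numpy as np

raw = np.array([1, 2, 3, 4, 5, 6, 2, 2, 5])   # example ratings for one condition
centered = raw - 3.5                           # range becomes -2.5 .. +2.5
print(centered.mean())                         # negative mean => safer advice on average
```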
In addition, a 2 (recalled personal near-miss experience
or no recall control condition) by 2 (gain vs. loss-framed
message) by 2 (sex) by 2 (self-reported victimization)
4-way factorial ANOVA was used to evaluate respondents' endorsement of risky versus safe options in cyber dilem-
endorsement of risky versus safe options in cyber dilem-
mas. Analyses were specified to only include main effects
and 2-way interactions with the manipulated variables.
Preliminary data screening was conducted, and q–q plots
indicated that the dependent variable is approximately
normally distributed.
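The model specification described above (all main effects plus the 2-way interactions involving the two manipulated variables) could be written roughly as follows. This is a hedged sketch using statsmodels with assumed column names, not the authors' analysis script.

```python
# Hedged sketch of the 2 x 2 x 2 x 2 between-subjects ANOVA described in the text.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("experiment1.csv")  # hypothetical file: one centered score per respondent

# Main effects for all four factors, plus 2-way interactions involving the
# two manipulated variables (frame, near_miss), as specified in the text.
model = smf.ols(
    "endorsement ~ C(frame) + C(near_miss) + C(sex) + C(victimization)"
    " + C(frame):C(near_miss) + C(frame):C(sex) + C(frame):C(victimization)"
    " + C(near_miss):C(sex) + C(near_miss):C(victimization)",
    data=df,
).fit()
print(anova_lm(model, typ=2))
```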
Results indicated that the near-miss manipulation was
significant, F (1, 260) = 7.42, p = .01, η² = .03.
Respondents who received a description of a recalled near-
miss experience preferred the safe but inconvenient option
to the risky, more expedient option. No main effect was
found for the gain–loss framing manipulation, suggesting
that respondents were indifferent between safe versus risky
decision options when the outcomes were described as
gains or losses from a reference point. There also was a
significant interaction between the framing and near-miss
manipulations: F (1, 260) = 4.01, p = .05, η² = .02. As seen in Fig. 1, the effect of the near-miss manipulation was much larger under the gain frame than under the loss frame.
Basic demographic data also was collected to assess
whether individual differences moderated the effect of the
two manipulations. A significant main effect was found for
sex: F (1, 260) = 3.81, p = .05, η² = .01. Cohen's d for the sex difference (the standardized mean difference; see the formula sketched below) was 0.33 for gain framing without near-miss, 0.09 for gain framing with near-miss, 0.18 for loss framing without near-miss, and 0.19 for loss framing with near-miss. Female
respondents were more likely to avoid risks and choose the
safe option. No significant main effect was found for self-
reported victimization. Also, none of the interactions were
significant; sex and framing, sex and near-miss experience,
victimization and framing, and victimization and near-
miss.
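For reference, Cohen's d here is the standardized mean difference between the two sex groups within each cell, computed with the usual pooled standard deviation (a standard textbook formula, not reproduced from the paper):

```latex
d = \frac{\bar{x}_{1}-\bar{x}_{2}}{s_{p}},
\qquad
s_{p} = \sqrt{\frac{(n_{1}-1)s_{1}^{2}+(n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}}
```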
Table 2 Summary of experience and victimization

Scenario (N = 266): Personal experience / Previous victimization
Music file download: 205 (77 %) / 40 (15 %)
USB drive: 110 (41 %) / 12 (4.5 %)
Facebook app download: 253 (a) (95 %) / 3 (1 %)
Online purchase: 259 (97 %) / 18 (7 %)
Overall (at least once): 265 (b) (100 %) / 64 (24 %)

(a) An app is downloaded from Facebook at least once a week
(b) There is one missing value
Footnote 1: Since the four scenarios are in a constant order, a second analysis was run that ignored the manipulated factors and included scenario/order as a repeated factor. A one-way repeated measures ANOVA found a significant scenario/order effect: F (3, 265) = 30.42, p < .001, η² = .10. Over time, respondents were more likely to endorse the risky option. Because the nature of the dilemma scenario and order are confounded, it is impossible to determine whether the significant main effect indicates an order effect, a scenario effect, or a combination of both. The counterbalanced design distributed all 4 combinations of framing and prior experience recall evenly across the four scenario dilemmas. Order and/or scenario effects are independent of the manipulated factors, and thus are included in the error term in the ANOVA.
[Fig. 1 Mean endorsement of risky versus safe responses to cyber threats by gain–loss frame and prior near-miss (vertical axis: mean endorsement of safe vs. risky advice; conditions: control vs. near-miss, under gain and loss framing)]
2.3 Discussion
The results of Experiment I suggest that respondents’ cyber
security recommendations to their best friend were signif-
icantly influenced by the personal experience recall
manipulation. More specifically, respondents who recalled
a near-miss experience were more likely to advise their
best friend to avoid making the risky cyber choice com-
pared with their no recall counterpart. This finding is
consistent with Tinsley et al.'s (2012) definition of a vul-
nerable near-miss—an ‘‘almost happened’’ event that
makes individuals feel vulnerable and, in turn, leads to a
greater likelihood of endorsing the safer option.
Respondents who recalled a near-miss experience were
even more likely to advise their best friend to take the safer
course of action if they also received the gain message.
Comparatively, the loss frame had a negligible effect on
the primed recall prior experience manipulation. That is,
respondents who received the loss frame were as likely to
recommend the risk averse course of action to their best
friend regardless of whether their prior experience was a
near-miss or not. This finding suggests that people will be
more risk averse when they are exposed either to a recall of
a prior near-miss and/or a loss frame. The combination of
no prior recall of a near-miss and a gain frame did produce
less risk averse responses. This suggests a highly interac-
tive, synergistic effect, in which the frame and the near-
miss recall substitute for each other.
In addition, sex and prior victimization were found to
have no moderating effect on the relationship between cyber
dilemma responses and the two manipulated variables.
Cyber dilemma decision making was found to significantly
vary by respondents’ sex, but not by self-reported victim-
ization. The results suggest that females make more pro-
tective decisions when faced with risky cyber dilemmas
compared with males. This pattern has been replicated in
cyber research in an experiment of online shopping services
where males demonstrated a greater tendency to engage in
risky behavior online (Milne et al. 2009). Disaster risk per-
ception studies also have shown that risks tend to be judged
higher by females (Flynn et al. 1994; Bateman and Edwards
2002; Kung and Chen 2012; Bourque et al. 2012) and that
females tend to have a stronger desire to take preventative
and preparedness measures compared with males (Ho et al.
2008; Cameron and Shah 2012).
3 Experiment II
The primary purpose of Experiment II was to expand the
primed recall prior experience manipulation to compare
three prior cyber experiences: a near-miss, a false alarm,
and a hit involving a loss of data. The prior cyber experi-
ence recall prime for Experiment II involved experiences
of a good friend, rather than the respondents’ past experi-
ences (used in Experiment I). We also posed all questions
using a loss frame to enhance the ecological validity of the
cyber dilemmas posed, the consequences of which are
naturally perceived as losses from a status quo. The
dependent variable was also changed for Experiment II.
Each respondent was asked to report whether they would
select the safe or risky option in response to their own
cyber dilemma, as opposed to providing advice to their best
friend involved in a risky cyber dilemma as in Experiment
I. One interpretation of the finding from Experiment I that
respondents generally favored the safe option was that they
were possibly more risk averse in advising a friend com-
pared to how they would respond to their own cyber
dilemma. By posing the dilemma in the first person, we
sought to characterize how respondents would be likely to
respond when facing a cyber dilemma. The cyber dilemmas
were also described in a more concrete fashion for
Experiment II, including a ‘‘screenshot’’ of the dilemma
facing the respondent.
3.1 Method
3.1.1 Design overview
In Experiment II, three cyber dilemmas were constructed to
evaluate respondents’ risky choice behavior using one
manipulated variable, recall of a friend’s false alarm, near-
miss or hit experience. In addition, six individual differ-
ence variables were included in the design: sex, age,
income, education, job domain, and self-reported victim-
ization. Each participant received all three dilemmas in a
constant order. Each of the three primed recall prior cyber
experiences was paired with one of the three scenarios in a
counterbalanced design such that each of the cyber
dilemmas appeared in each of the three treatment condi-
tions with equal frequency.
After each cyber dilemma, respondents were asked to
respond on a 6-point scale (1 = strongly disagree to
6 = strongly agree) regarding their intention to ignore the
warning and proceed with the riskier course of action.
Following all three cyber dilemmas, respondents were
given three attention check questions related to the nature
of each dilemma. Respondents also were asked to provide
basic demographic information and answer a series of
questions about their experience with computers and cyber
dilemmas, such as their experience with purchasing from a
fraudulent online store, being locked out from an online
account, or having unauthorized withdrawals made from
their online banking account.
3.1.2 Scenarios and manipulations
The three cyber dilemma scenarios involved the threat of
causing serious damage to the respondents’ computer as a
result of downloading a music file, installing a plug-in for
an online game, and downloading a media player to legally
stream videos. The scenarios were written to share the
same possible negative outcome—the computer’s operat-
ing system crashes, resulting in an unusable computer until
repaired. Establishing uniformity of consequences across
the three scenarios reduced potential unexplained variance
across the three levels of the manipulated variable.
Experiment II also included screenshots of ‘‘pop-up’’
window images similar to those that would appear on the
computer display when the cyber dilemma is presented.
These images were intended to make the scenarios more
concrete and enhance the realism of the cyber dilemma
scenarios.
Primed recall of a friend’s prior cyber experience was
the only manipulated variable in this experiment.
Respondents either recalled their friend’s near-miss, false
alarm or hit experience before deciding whether to select
the safe or risky option in response to the described cyber
dilemma. All potential outcomes were presented in a loss
frame, with wording held constant except for details spe-
cific to the scenario under consideration. For example, the
wording of the loss frame for the hit outcome of the
download a music file scenario was ‘‘She pressed ‘allow
access’ and her computer immediately crashed. She ended
up having to wipe the computer’s hard drive clean and to
reinstall the operating system.’’ The only modification
made for the installation of the plug-in scenario was
switching the words ‘‘allow access’’ to ‘‘run.’’ A complete
description of the scenarios, including the primed recall of
the friend’s prior experiences, is provided in Table 3.
3.1.3 Subjects
Three hundred and seventy-six US residents were recruited
through Amazon Mechanical Turk (AMT) to participate in
the experiment. Researchers have assessed the representa-
tiveness of AMT samples compared with convenience
samples found locally and found AMT samples to be
representative (Buhrmester et al. 2011; Mason and Suri
2012; Paolacci et al. 2010) and ‘‘significantly more diverse
than typical American college samples’’ (Buhrmester et al.
2011). Each respondent earned $1 for completion of the
experiment. After removing respondents who did not
answer all three of the attention check questions correctly
or completed the experiment in less than 7 min, the sample
consisted of 247 respondents. Five additional respondents
skipped questions, resulting in a final sample size of
N = 242. Table 4 includes a summary of sample
characteristics, including sex, age, income, education, job
domain, and self-reported victimization. Self-reported
victimization is defined in terms of experiences with four
types of negative cyber events: (1) getting a virus on an
electronic device, (2) purchasing from a fraudulent online
store, (3) being locked out from an online account, or (4)
having unauthorized withdrawals made from their online
banking account. Respondents also responded to a number
of experience questions that are summarized in Table 5 as
additional detail about the study sample.
3.2 Results
A mixed model ANOVA with one within-subject factor (primed recall of a prior experience) and six individual difference variables as between-subject factors was used. This model included only the seven main effects and the six 2-way interactions involving the manipulated within-subject variable and each of the six between-subject variables. Preliminary data screening was done; q–q plots showed the scores on the repeated measures variable, prior salient experience, to have an approximately normal distribution (see Footnote 2).
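One way to approximate this analysis is a mixed-effects model with a random intercept per respondent. The sketch below (statsmodels, assumed long-format column names) is an illustrative stand-in for the mixed-model ANOVA reported here, not the authors' code.

```python
# Hedged illustration: random intercept per respondent handles the repeated
# measures; the within-subject factor (prior experience recall) interacts with
# income, and the remaining subject variables enter as main effects.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("experiment2_long.csv")  # hypothetical: one row per respondent x scenario

model = smf.mixedlm(
    "endorsement ~ C(prior_experience) * C(income) + C(age_group) + C(sex)"
    " + C(education) + C(job_domain) + C(victimization)",
    data=long_df,
    groups=long_df["respondent_id"],  # random intercept per respondent
)
result = model.fit()
print(result.summary())
```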
Results show that the primed recall prior experience
manipulation had a significant effect on how respondents
intended to respond to the cyber dilemmas, F (1, 231) = 31.60, p < .001, η² = .12. Moreover, post hoc comparisons using the least significant difference (LSD) test indicate that the mean score for the false-alarm condition (M = 3.65, SD = 0.11) was significantly different from the near-miss condition (M = 2.97, SD = 0.11) with p < .01, and the hit condition (M = 2.34, SD = 0.11) significantly differed from the near-miss and false-alarm conditions with p < .01. This suggests that respondents
who received a description of a friend’s near-miss experi-
ence recall preferred the safer, risk averse option compared
with respondents who were primed to recall a friend’s prior
false-alarm experience. Respondents were found to be even
more likely to select the safe option when they were primed
to recall a friend’s prior hit experience. As displayed in
Fig. 2, the positive means for the false-alarm condition
indicate that respondents were more likely to engage in
risky behavior compared with the negative means for the
near-miss and hit conditions.
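LSD post hoc comparisons are simply unadjusted pairwise tests between condition means. A simplified between-groups illustration (ignoring the repeated-measures error term the authors would have used, and with hypothetical column names) might look like this:

```python
# Unadjusted pairwise t-tests between the three prior-experience conditions;
# LSD applies no multiplicity correction. Illustrative data layout only.
from itertools import combinations
from scipy import stats
import pandas as pd

long_df = pd.read_csv("experiment2_long.csv")  # hypothetical long-format file

for a, b in combinations(["false_alarm", "near_miss", "hit"], 2):
    x = long_df.loc[long_df.prior_experience == a, "endorsement"]
    y = long_df.loc[long_df.prior_experience == b, "endorsement"]
    t, p = stats.ttest_ind(x, y)
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.3f}")
```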
The analysis also included both main effects and inter-
action terms for six different subject variables, including
Footnote 2: As in Experiment I, a one-way repeated measures ANOVA shows there is a significant scenario/order effect: F (2, 265) = 4.47, p = .035, η² = .02. Over time and/or scenario, respondents were more likely to endorse the risky option. However, as in Experiment I, it is difficult to determine whether the main effect is for the scenarios or the order effect. The study design we used overcame this limitation by using a counterbalanced design.
Table 3 Summary of three scenarios and manipulations (Experiment II)

Scenario 1: Music File | Scenario 2: Plug-in Install | Scenario 3: Unknown Network

Scenario
Music File: You want to download a music file linking to an early release of your favorite band's new album. When you click on the link, the following window pops up: If you press "allow access," you may risk causing serious damage to your computer
Plug-in Install: You are interested in playing an online game that requires a plug-in to run. Before installing the plug-in, the following window pops up: If you click "run," you may risk installing a plug-in that could seriously damage your computer
Unknown Network: You have downloaded a media player to legally stream videos from your computer. When you open the player, the following window pops up: If you press "yes," you may risk using a media player that could seriously damage your computer

Your experience
Music File: You recall that your friend told you she was confronted by a similar situation in the past. She was attempting to open a music file and a window popped up saying the program was blocked by a firewall
Plug-in Install: You recall that your friend told you she once downloaded a plug-in to play an online game and was warned prior to installation that the publisher could not be verified
Unknown Network: You recall that your friend told you she once installed a media player and received a warning about allowing an unknown publisher to make changes to her computer

False alarm
Music File: She pressed "allow access" and successfully downloaded the music file without any damage occurring to her computer
Plug-in Install: She clicked "run," and successfully played the game without causing any damage to her computer
Unknown Network: She pressed "allow" and successfully used the player to watch videos without any damage occurring to her computer

Near-miss
Music File: She pressed "allow access" and her computer immediately flashed a blue screen and automatically rebooted before she had time to read anything. Fortunately, following the reboot her computer was operating normally
Plug-in Install: She clicked "run" and her computer immediately flashed a blue screen and automatically rebooted before she had time to read anything. Fortunately, following the reboot her computer was operating normally
Unknown Network: She pressed "allow" and her computer immediately flashed a blue screen and automatically rebooted before she had time to read anything. Fortunately, following the reboot her computer was operating normally

Hit
Music File: She pressed "allow access" and her computer immediately crashed. She ended up having to wipe the computer's hard drive clean and to reinstall the operating system
Plug-in Install: She clicked "run" and her computer immediately crashed. She ended up having to wipe the computer's hard drive clean and to reinstall the operating system
Unknown Network: She pressed "allow" and her computer immediately crashed. She ended up having to wipe the computer clean and to reinstall the operating system

Question
Music File: Below please indicate your level of agreement with the statement "You will press 'allow access' and risk installing a file that could seriously damage your computer"
Plug-in Install: Below please indicate your level of agreement with the statement "You will click 'run' and risk installing a plug-in that could seriously damage your computer"
Unknown Network: Below please indicate your level of agreement with the statement: "You will press 'allow' and risk using a media player that could seriously damage your computer"
sex, age, level of education, income level, job domain, and
self-reported victimization. For the purpose of analysis, age
was collapsed into three levels: 18–29, 30–39, and 40 years
and older; education level was collapsed into three cate-
gories: high school and 2-year college, 4-year college, and
master’s degree or higher; and annual income level was
collapsed into three categories: below $30,000/year,
$30,000–$59,999/year, and $60,000/year and more.
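A small illustration of this recoding step, assuming a pandas data frame with hypothetical file and column names (not the authors' data):

```python
# Illustrative recoding of age, income, and education into the collapsed
# analysis levels described above.
import pandas as pd

df = pd.read_csv("experiment2_demographics.csv")  # hypothetical file

df["age_group"] = pd.cut(df["age"], bins=[17, 29, 39, 200],
                         labels=["18-29", "30-39", "40+"])
df["income_group"] = pd.cut(df["income"], bins=[-1, 29999, 59999, 10**9],
                            labels=["below $30K", "$30K-$59,999", "$60K and more"])
edu_map = {"High school": "HS or 2-year college",
           "2-year college": "HS or 2-year college",
           "4-year college": "4-year college",
           "Master's degree": "Master's or higher",
           "Professional degree": "Master's or higher"}
df["education_group"] = df["education"].map(edu_map)
print(df[["age_group", "income_group", "education_group"]].head())
```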
The results of the ANOVA indicated there was a sig-
nificant main effect for age: F (2, 231) = 4.9, p = .01, η² = .04, and no significant main effects for sex,
education, income, job domain, and self-reported victim-
ization. Figure 2 suggests that younger respondents com-
pared with older respondents were more likely to choose
the riskier option in cyber dilemmas across all 3 levels of
the primed prior recall experience manipulation.
Results also showed a significant interaction effect
between income and the primed prior recall experience
manipulation: F (2, 231) = 3.40, p = .01, η² = .03. Fig-
ure 3 indicates that respondents with higher income levels
(greater than $60 K per year) were less sensitive to the
primed recall of a friend’s experience. There was no
Table 4 Demographic information for AMT respondents (N = 242)

Sex: Male 108 (44.6 %); Female 134 (55.4 %)
Highest level of education: High school 65 (26.9 %); 2-year college 38 (15.7 %); 4-year college 102 (42.1 %); Master's degree 30 (12.4 %); Professional (e.g., M.D., Ph.D., J.D.) degree 7 (2.9 %)
Personal gross annual income range: Below $20,000/year 66 (27.3 %); $20,000–$29,999/year 31 (12.8 %); $30,000–$39,999/year 35 (14.5 %); $40,000–$49,999/year 28 (11.6 %); $50,000–$59,999/year 15 (6.2 %); $60,000–$69,999/year 23 (9.5 %); $70,000–$79,999/year 13 (5.4 %); $80,000–$89,999/year 10 (4.1 %); $90,000/year or more 21 (8.7 %)
Does your work relate to technology? I use computers normally but my work has nothing to do with technology 172 (71.1 %); My work is about technology 70 (28.9 %)
Victim of getting a virus on an electronic device: Yes 165 (68.2 %); No 77 (31.8 %)
Victim of purchasing from a fake online store: Yes 15 (6.2 %); No 221 (91.3 %); I don't shop online 6 (2.5 %)
Victim of failure to log into an online account: Yes 85 (35.1 %); No 157 (64.9 %)
Victim of unauthorized withdrawals from an online banking account: Yes 44 (18.2 %); No 198 (81.8 %)
Overall self-reported victimization: None 46 (19.0 %); One type 104 (43.0 %); Two or more types 92 (38.0 %)
Age (years): Range 18–75; Percentiles: 25th 27, 50th 33, 75th 44
significant interaction effect between the manipulation and the other five individual difference variables, including sex: F (1, 231) < 1, age: F (2, 231) = 1.84, p = .12, η² = .02, education: F (2, 231) < 1, job domain: F (1, 231) = 2.01, p = .14, η² = .01, and self-reported victimization: F (2, 231) = 2.03, p = .09, η² = .02.
3.3 Discussion
Responses to risky cyber dilemmas in Experiment II were
significantly predicted by the primed recall of a friend’s
prior cyber experience. Consistent with our hypotheses, the
more negative the consequence associated with the prior
cyber experience, the more likely the respondents were to
choose the safer course of action. In particular, respondents
who were primed to recall a prior near-miss or hit event
interpreted the experience as a sign of vulnerability com-
pared with the recall of a prior false alarm and, in turn,
were more likely to promote more conservative (safe)
endorsements of actions. In the case of false alarms, our
findings suggest that respondents were more likely to
endorse the risky alternative.
[Fig. 2 Mean endorsement of risky versus safe responses to cyber threats by primed recall of friend's prior experience (false-alarm, near-miss, hit) and age group (18–29, 30–39, 40 years and older)]
[Fig. 3 Mean endorsement of risky versus safe responses to cyber threats by primed recall of friend's prior experience and income level]
Table 5 Cyber-related responses for AMT respondents

Personal computer: PC 213 (88.0 %); Mac 28 (11.6 %); Do not have a personal computer 1 (0.4 %)
Smartphone: iOS 67 (27.6 %); Android 95 (39.3 %); Do not have a smartphone 80 (33.1 %)
Protection software: Yes 211 (87.2 %); No 31 (12.8 %)
Have you ever downloaded free music, an e-book, a movie, or a television show from an unfamiliar website found through a Google search? Yes 135 (55.8 %); No 107 (44.2 %)
How often do you access your social networking accounts (Facebook, Twitter, Myspace, MSN, Match.com, etc.)? Every day 150 (62.0 %); Once a week 35 (14.5 %); Once a month 8 (3.3 %); 2–3 times a month 10 (4.1 %); Every couple months 14 (5.8 %); Once a year 4 (1.7 %); Never 21 (8.7 %)
Have you ever clicked on an advertisement and a window popped up saying something along the lines of "Congratulations, you are eligible to win an iPad!"? Yes 122 (50.4 %); No 120 (49.6 %)
Have you ever clicked on a link in a suspicious email (e.g., an email in a different language, with an absurd subject)? Yes 32 (13.2 %); No 210 (86.8 %)
In addition, endorsement of safe versus risky resolutions
to the cyber dilemmas varied by respondents’ age,
regardless of the primed recall of a friend’s prior experi-
ence. Middle-aged and older respondents were more likely
to endorse the safe choice option compared with younger
respondents. Research on age differences is inconsistent in
the domain of cyber security related to privacy (Hoofnagle
et al. 2010), risk of data loss from a cyber threat (Howe
et al. 2012—"The psychology of security for the home computer user" in Proceedings of the 2012 IEEE Symposium on Security and Privacy) or fear of a cyber threat
(Alshalan 2006). Our findings suggest that younger indi-
viduals’ extensive use and dependence on computers for
daily activities may result in the association of a greater
cost with being risk averse in response to cyber dilemmas.
Younger individuals’ familiarity with computers likely
makes it easier for them to determine whether a cyber
dilemma is a real threat or a computer’s standard warning
message. In the same vein, their familiarity with computers
may also lead to a greater awareness of a major cyber
dilemma being a small probability event, the consequences
of which are likely to be repairable. Ultimately, younger
individuals do not perceive the unsafe option as overly
risky compared with the safe option.
Respondents’ income was also found to moderate the
effect of the primed recall of a friend’s prior experience
on respondents’ endorsement of safe versus risky
options. Of the three income levels, the wealthiest
respondents were the least sensitive to variations in the
primed recall of a friend’s prior cyber experience. In
the literature on cyber security, only a significant main
effect for income is reported. In a 2001 presentation by
Tyler, Zhang, Southern and Joiner at the IACIS Con-
ference, the research team reported findings suggesting
that higher income individuals have a lower probability
of considering e-commerce to be safe and therefore
avoid e-commerce transactions. Similarly, in a study by
Downs et al. (2008), respondents from more affluent
areas were reported to update their anti-virus program
more frequently than respondents from poorer areas,
further validating the tendency toward risk averse cyber
behavior for higher income individuals. Our finding
suggests that wealthier respondents were not as
impacted compared with the low and medium income
respondents by the primed prior recall experience
manipulation because they can afford to be riskier.
Their wealth allows them to have access to enhanced
baseline security measures. This creates a sense that
they are exempt from risks that apply to others and for
this reason, do not need to pay much attention to the
primed prior recall experiences and consequences.
Interestingly, there were no significant main effects or
interactions for the remaining four individual difference
variables, including sex, education, work domain, or
previous cyber victimization. The absence of main effects
for five of the six individual difference variables suggests
that respondents’ cyber dilemma decisions are determined
more by recall of prior cyber-related experiences, and not
by background of the decision maker, with the sole
exception of respondent age. The absence of interaction
effects for five of the six individual difference variables
suggests that the effect of primed recall of a prior expe-
rience is robust; respondent income was the sole moder-
ator identified.
4 Conclusion
Experiments I and II were designed to explore how com-
puter users’ responses to common cyber dilemmas are
influenced by framing and salience of prior cyber experi-
ences. Despite using two different dependent variables, the
advice the respondent would give to a friend (Experiment
I), and how the respondents themselves would respond to
cyber dilemmas (Experiment II), the extent to which the
two different questions elicit more or less risk averse
responses was found to be similar. The results indicate that
for prior near-miss experiences (the one manipulation
condition included in both experiments), the mean
responses were 2.39 and 2.97 for Experiments I and II,
respectively. This finding suggests that whether the respondent was making a personal recommendation or providing advice to a friend, the recalled experience manipulation significantly influenced the respondent's endorsement of the safer cyber option. Simi-
larly, in prior cyber research, Aytes and Connolly (2004)
found that students were more attuned to cyber risks and
likely to take action against them when the primary source
of information was their own or friends’ experiences with
security problems.
The one inconsistent finding between the two experi-
ments is the effect of respondent sex on risky cyber choice
behavior. In Experiment I, females were found to be more
risk averse than males, while in Experiment II, sex was
found to be unrelated to whether respondents endorsed a
risky or safe option. Previous studies are also inconsistent
with respect to the role of sex in predicting cyber-related
behavior and decision making. At the 2012 Annual Con-
ference of the Society for Industrial and Organizational
Psychology, Byrne et al. report that women provided
slightly higher scores of behavioral intentions to click on a
risky cyber link, while Milne et al. (2009) found that males
had a greater tendency to engage in risky behaviors online.
In the context of security compliance, Downs et al. (2008)
report that males were more involved in computer security
management, such as updating their anti-virus software and
using pop-up blockers, while Herath and Rao (2009) found
women to have higher security procedure compliance
intentions, but were less likely to act on them.
One explanation for our inconsistent results related to
sex may be differences in the two populations sampled:
college students in Experiment I and a more diverse, AMT
sample in Experiment II. College samples tend to be more
sex stereotyped, such that risk tends to be judged lower by
men than by women, and females tend to have a stronger
desire to take preventative and preparedness measures
(Harris et al. 2006). This tends to be attributed to their lack of real-world experience, as evidenced by the fact that only a small percentage of the sample (24 %) had previously experienced a cyber dilemma. By these assumptions, males
would be expected to be more risk seeking than females in
Experiment I. Conversely, the AMT sample consists of
older adults with more diverse backgrounds, as evidenced
in Table 5, which tends to blur the line between traditional
male and female stereotypes. In addition, 80 % of the AMT
sample had previously experienced a cyber dilemma, fur-
ther suggesting that shared experiences of males and
females could lead to the lack of sex differences found in
Experiment II.
Overall, these two experiments indicate that recall of prior
cyber experiences and framing strongly influence individual
decision making in response to cyber dilemmas. It is useful to know how prior experience and framing jointly influence responses to cyber dilemmas. The implications of
our findings are that salience of prior negative experiences
certainly attenuates risky cyber behavior. We found that this
attenuation is greater for gain-framed decisions, and for low-
and middle-income respondents. Responses to cyber
dilemmas were determined more by proximal variables, such
as recall of prior experiences and framing, and were largely
robust to individual difference variables, with only a couple
of exceptions.
Given that safety in the cyber context is an abstract
concept, it would be worthwhile to further explore how
framing influences cyber dilemma decision making.
Additionally, this research design could be used to evaluate
differences across cyber dilemma contexts to examine the
robustness of the relationships identified in our research.
Such further research is warranted to better understand how
individual users respond to cyber dilemmas. This infor-
mation would be useful to cyber security policymakers
faced with the task of designing better security systems,
including computer displays and warning messages rele-
vant to cyber dilemmas.
Acknowledgments This research was supported by the U.S. Department of Homeland Security (DHS) through the National Center for Risk and Economic Analysis of Terrorism Events. However, any opinions, findings, conclusions, and recommendations in this article are those of the authors and do not necessarily reflect the views of DHS. We would like to thank Society for Risk Analysis (SRA) conference attendees for their feedback on this work at a session at the 2012 SRA Annual Meeting in San Francisco. We would also like to thank the blind reviewers for their time and comments, as they were extremely valuable in developing this paper.
References
Acquisti A, Grossklags J (2007) What can behavioral economics teach us about privacy. In: Acquisti A, Gritzalis S, Lambrinoudakis C, Vimercati S (eds) Digital privacy: theory, technologies and practices. Auerbach Publications, Florida, pp 363–377
Alshalan A (2006) Cyber-crime fear and victimization: an analysis of a national survey. Dissertation, Mississippi State University
Aytes K, Connolly T (2004) Computer security and risky computing practices: a rational choice perspective. J Organ End User Comput 16:22–40
Barnes LR, Gruntfest EC, Hayden MH, Schultz DM, Benight C (2007) False alarms and close calls: a conceptual model of warning accuracy. Weather Forecast 22:1140–1147
Bateman JM, Edwards B (2002) Gender and evacuation: a closer look at why women are more likely to evacuate for hurricanes. Nat Hazard Rev 3:107–117
Bourque LB, Regan R, Kelley MM, Wood MM, Kano M, Mileti DS (2012) An examination of the effect of perceived risk on preparedness behavior. Environ Behav 45:615–649
Breznitz S (2013) Cry wolf: the psychology of false alarms. Psychology Press, Florida
Buhrmester M, Kwang T, Gosling SD (2011) Amazon's Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect Psychol Sci 6:3–5
Cameron L, Shah M (2012) Risk-taking behavior in the wake of natural disasters. IZA Discussion Paper No. 6756. http://ssrn.com/abstract=2157898
Dillon RL, Tinsley CH, Cronin M (2011) Why near-miss events can decrease an individual's protective response to hurricanes. Risk Anal 31:440–449
Donner WR, Rodriguez H, Diaz W (2012) Tornado warnings in three southern states: a qualitative analysis of public response patterns. J Homel Secur Emerg Manage 9:1547–7355
Dow K, Cutter SL (1998) Crying wolf: repeat responses to hurricane evacuation orders. Coast Manage 26:237–252
Downs DM, Ademaj I, Schuck AM (2008) Internet security: who is leaving the 'virtual door' open and why? First Monday 14. doi:10.5210/fm.v14i1.2251
Flynn J, Slovic P, Mertz CK (1994) Gender, race, and perception of environmental health risks. Risk Anal 14:1101–1108
Garg V, Camp J (2013) Heuristics and biases: implications for security design. IEEE Technol Soc Mag 32:73–79
Harris C, Jenkins M, Glaser D (2006) Gender differences in risk assessment: why do women take fewer risks than men? Judgm Decis Mak 1:48–63
Helander MG, Khalid HM (2000) Modeling the customer in electronic commerce. Appl Ergon 31:609–619
Herath T, Rao HR (2009) Encouraging information security behaviors in organizations: role of penalties, pressures and perceived effectiveness. Decis Support Syst 47:154–165
Ho MC, Shaw D, Lin S, Chiu YC (2008) How do disaster characteristics influence risk perception? Risk Anal 28:635–643
Hoofnagle C, King J, Li S, Turow J (2010) How different are young adults from older adults when it comes to information privacy attitudes and policies? April 14, 2010. http://ssrn.com/abstract=1589864
Kahneman D, Tversky A (1979) Prospect theory: an analysis of decision under risk. Econom J Econom Soc 47:263–291
Kung YW, Chen SH (2012) Perception of earthquake risk in Taiwan: effects of gender and past earthquake experience. Risk Anal 32:1535–1546
Kunreuther H, Pauly M (2004) Neglecting disaster: why don't people insure against large losses? J Risk Uncertain 28:5–21
Mason W, Suri S (2012) Conducting behavioral research on Amazon's Mechanical Turk. Behav Res Methods 44:1–23
Milne GR, Labrecque LI, Cromer C (2009) Toward an understanding of the online consumer's risky behavior and protection practices. J Consum Aff 43:449–473
Paolacci G, Chandler J, Ipeirotis P (2010) Running experiments on Amazon Mechanical Turk. Judgm Decis Mak 5:411–419
Shankar V, Urban GL, Sultan F (2002) Online trust: a stakeholder perspective, concepts, implications, and future directions. J Strateg Inf Syst 11:325–344
Siegrist M, Gutscher H (2008) Natural hazards and motivation for mitigation behavior: people cannot predict the affect evoked by a severe flood. Risk Anal 28:771–778
Simmons KM, Sutter D (2009) False alarms, tornado warnings, and tornado casualties. Weather Clim Soc 1:38–53
Slovic P, Peters E, Finucane ML, MacGregor DG (2005) Affect, risk, and decision making. Health Psychol 24:S35–S40
Tinsley CH, Dillon RL, Cronin MA (2012) How near-miss events amplify or attenuate risky decision making. Manage Sci 58:1596–1613
Tversky A, Kahneman D (1986) Rational choice and the framing of decisions. J Bus 59:S251–S278
Verendel V (2008) A prospect theory approach to security. Technical Report No. 08-20. Department of Computer Science and Engineering, Chalmers University of Technology/Goteborg University, Sweden. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.154.9098&rep=rep1&type=pdf
Leveraging behavioral science to mitigate cyber security risk

Shari Lawrence Pfleeger · Deanna D. Caputo

Institute for Information Infrastructure Protection, Dartmouth College, 4519 Davenport St., NW, Washington, DC 20016, USA (Pfleeger); MITRE Corporation, 7515 Colshire Drive, McLean, VA 22102-7539, USA (Caputo)

Computers & Security 31 (2012) 597–611
Received 16 August 2011; received in revised form 21 November 2011; accepted 22 December 2011
doi:10.1016/j.cose.2011.12.010

Keywords: Cyber security · Cognitive load · Bias · Heuristics · Risk communication · Health models
Abstract
Most efforts to improve cyber security focus primarily on
incorporating new technological
approaches in products and processes. However, a key element
of improvement involves
acknowledging the importance of human behavior when
designing, building and using
cyber security technology. In this survey paper, we describe
why incorporating an under-
standing of human behavior into cyber security products and
processes can lead to more
effective technology. We present two examples: the first
demonstrates how leveraging
behavioral science leads to clear improvements, and the other
illustrates how behavioral
science offers the potential for significant increases in the
effectiveness of cyber security.
Based on feedback collected from practitioners in preliminary
interviews, we narrow our
focus to two important behavioral aspects: cognitive load and
bias. Next, we identify
proven and potential behavioral science findings that have cyber
security relevance, not
only related to cognitive load and bias but also to heuristics and
behavioral science models.
We conclude by suggesting several next steps for incorporating
behavioral science findings
in our technological design, development and use.
© 2012 Published by Elsevier Ltd.
1. Introduction
“Only amateurs attack machines; professionals target
people.” (Schneier, 2000)
What is the best way to deal with cyber attacks? Cyber
security promises protection and prevention, using both
innovative technology and an understanding of the human
user. Which aspects of human behavior offer the most
promise in making cyber security processes and products
more effective? What role should education and training play?
How can we encourage good security practices without
unnecessarily interrupting or annoying users? How can we
create a cyber environment that provides users with all of the
functionality they need without compromising enterprise or
national security? We investigate the answers to these ques-
tions by examining the behavioral science literature to iden-
tify behavioral science theories and research findings that
have the potential to improve cyber security and reduce risk.
In this paper, we report on our initial findings, describe several
behavioral science areas that offer particularly useful appli-
cations to security, and describe how to use them in a general
risk-reduction process.
The remainder of this paper is organized in five sections.
Section 2 describes some of the problems that a technology-
alone solution cannot address. Section 3 explains how we
used a set of scenarios to elicit suggestions about the behav-
iors of most concern to technology designers and users.
Sections 4 and 5 highlight several areas of behavioral science
with demonstrated and potential relevance to security tech-
nology. Finally, Section 6 suggests possible next steps toward
inclusion of behavioral science in security technology’s
design, construction and use.
3 See the First Interdisciplinary Workshop on Security and Human Behavior, described at http://www.schneier.com/blog/archives/2008/06/security_and_ and http://www.cl.cam.ac.uk/~rja14/shb08.html.
4 See workshop papers at http://www.informatik.uni-trier.de/~ley/db/conf/itrust/itrust2006.html.
5 The National Science Foundation program is interested in the connections between social science and cyber security. It has announced a new program that encourages computer scientists and social scientists to work together (Secure and Trustworthy Cyberspace, described at http://www.nsf.gov/pubs/2012/nsf12503/nsf12503.htm?WT.mc_id=USNSF_25&WT.mc_ev=click).
2. Why technology alone is not enough
The media frequently express the private sector’s concern
about liability for cyber attacks and its eagerness to minimize
risk. The public sector has similar concerns, because aspects
of everyday life (such as operation and defense of critical
infrastructure, protection of national security information,
and operation of financial markets) involve both government
regulation and private sector administration.2 The govern-
ment’s concern is warranted: the Consumer’s Union found
that government was the source of one-fifth of the publicly-
reported data breaches between 2005 and mid-2008
(Consumer’s Union, 2008). The changing nature of both tech-
nology and the threat environment makes the risks to infor-
mation and infrastructure difficult to anticipate and quantify.
Problems of appropriate response to cyber incidents are
exacerbated when security technology is perceived as an
obstacle to the user. The user may be overwhelmed by diffi-
culties in security implementation, or may mistrust, misinter-
pret or override the security. A recent study of users at Virginia
Tech illustrates the problem (Virginia Tech, 2011). Bellanger
et al. examined user attitudes and the “resistance behavior” of
individuals faced with a mandatory password change. The
researchers found that, even when passwords were changed as
required, the changes were intentionally delayed and the
request perceived as being an unnecessary interruption.
“People are conscious that a password breach can have severe
consequences, but it does not affect their attitude toward the
security policy implementation.” Moreover, “the more tech-
nical competence respondents have, the less they favor the
policy enhancement. … In a voluntary implementation, that
competence may be a vector of pride and accomplishment. In
a mandatory context, the individual may feel her competence
challenged, triggering a negative attitude toward the process.”
In the past, solutions to these problems have ranged from
strict, technology-based control of computer-based human
behavior (often with inconsistent or sometimes rigid
enforcement) to comprehensive education and training of
system developers and users. Neither extreme has been
particularly successful, but recent studies suggest that
a blending of the two can lead to effective results. For
example, the U.K. Office for Standards in Education, Chil-
dren’s Services and Skills (Ofsted) evaluated the safety of
online behavior at 35 representative schools across the U.K.
“Where the provision for e-safety was outstanding, the
schools had managed rather than locked down systems. In the
best practice seen, pupils were helped, from a very early age,
to assess the risk of accessing sites and therefore gradually to
acquire skills which would help them adopt safe practices
even when they were not supervised.” (Ofsted, 2010) In other words, the most successful security behaviors were exhibited in schools where students were taught appropriate behaviors and then trusted to behave responsibly. The Ofsted report likens the approach to teaching children how to cross the road safely, rather than relying on adults to accompany the children across the road each time.

2 See, for example, http://www.cbsnews.com/video/watch/?id=5578986n&tag=related;photovideo.
This approach is at the core of our research. Our over-
arching hypothesis is that, if humans using computer systems
are given the tools and information they need, taught the
meaning of responsible use, and then trusted to behave
appropriately with respect to cyber security, desired outcomes
may be obtained without security’s being perceived as onerous
or burdensome. By both understanding the role of human
behavior and leveraging behavioral science findings, the
designers, developers and maintainers of information infra-
structure can address real and perceived obstacles to produc-
tivity and provide more effective security. These behavioral
changes take time, so plans for initiating change should
include sufficient time to propose the change, implement it,
and have it become part of the culture or common practice.
Other evidence (Predd et al., 2008; Pfleeger et al., 2010) is
beginning to emerge that points to the importance of under-
standing human behaviors when developing and providing
cyber security.3 There is particular interest in using trust to
mitigate risk, especially online. For example, the European
Union funded a several-year, multi-disciplinary project on
online trust (iTrust),4 documenting the many ways that trust
can be created and broken. Now, frameworks are being
developed for analyzing the degree to which trust is built and
maintained in computer applications (Riegelsberger et al.,
2005). More broadly, a rich and relevant behavioral science
literature addresses critical security problems, such as
employee deviance, employee compliance, effective decision-
making, and the degree to which emotions (Lerner and
Tiedens, 2006) or stressful conditions (Klein and Salas, 2001)
can lead to riskier choices by decision-makers.5 At the same
time, there is much evidence that technological advances can
have unintended consequences that reduce trust or increase
risk (Tenner, 1991). For these reasons, we conclude that it is
important to include the human element when designing,
building and using critical systems.
To understand how to design and build systems that
encourage users to act responsibly when using them, we iden-
tified two types of behavioral science findings: those that have
already been shown to demonstrate a welcome effect on cyber
security implementation and use, and those with potential to
have such an effect. In the first case, we documented the rele-
vant findings, so that practitioners and researchers can
determine which approaches are most applicable to their
environment. In the second case, we are designing a series of
studies to test promising behavioral science results in a cyber
security setting, with the goal of determining which
results (with associated strategies for reducing or mitigating the
behavioral problems they reflect) are the most effective.
However, applying behavioral science findings to cyber
security problems is an enormous undertaking. To maximize
the likely effectiveness of outcomes, we used a set of inter-
views to elicit practitioners’ opinions about behaviors of
concern, so that we could focus on those perceived as most
significant. We describe the interviews and results in Section
3. These findings suggest hypotheses about the role of
behavior in addressing cyber security issues.
3. Identifying behavioral aspects of security
Designers and developers of security technology can leverage
what is known about people and their perceptions to provide
more effective security. A former Israeli airport security chief
said,
“I say technology should support people. And it should be
skilled people at the center of our security concept rather
than the other way around” (Amos, 2010).
To implement this kind of human-centered security,
technologists must understand the behavioral sciences as
they design, develop and use technology. However, trans-
lating behavioral results to a technological environment can
be a difficult process. For example, system designers must
address the human elements obscured by computer media-
tion. Consumers making a purchase online trust that the
merchant represented by the website is not simply taking
their money, but also is fulfilling its obligation to provide
goods in return. The consumer infers the human involvement
of the online merchant behind the scenes. Thus, at some level,
the buyer and seller are humans enacting a transaction
enabled by a system designed, developed and maintained by
humans. There may be neither actual human contact nor
direct knowledge of the other human actors involved, but the
transaction process reflects its human counterpart.
Preventing or mitigating adverse cyber security incidents
requires action at many stages: designing the technology
being incorporated in the infrastructure; implementing,
testing and maintaining the technology; and using the tech-
nology to provide essential products and services. Behavioral
science has addressed notions of cyber security in these
activities for many years. Indeed, Sasse and Flechais (2005)
note that secure systems are socio-technical systems in
which we should use an understanding of behavioral science
to “prevent users from being the ‘weakest link.’” For example,
some behavioral scientists have investigated how trust
mechanisms affect cyber security. Others have reported
findings related to the design and use of cyber systems, but
the relevance and degree of effect have not yet been tested.
Some of the linkage between behavioral science and security
is specific to certain kinds of systems. For example, Castelfranchi
and Falcone (1998, 2002) analyze trust in multi-agent systems
from a behavioral perspective. They view trust as having several
components, including beliefs that must be held to develop trust
(the social context, as described by Riegelsberger et al. (2003)) and
relationships to previous experience (the temporal context of the
Riegelsberger-Sasse-McCarthy framework). They use psycho-
logical factors to model trust in multi-agent systems. In addition
to social and temporal concerns, we add expectations of fulfill-
ment, where someone trusting someone or something else
expects something in return (Baier, 1986). This behavioral
research sheds light on the nature of a user’s expectation and on
perceived trustworthiness of technology-mediated interactions
and has important implications related to the design of protec-
tive systems and processes.
Sasse and Flechais (2005) view security from three distinct
perspectives: product, process and panorama:
• Product. This perspective includes the effect of the security controls, such as the policies and mechanisms, on stake-
holders (e.g., designers, developers, users). The controls
involve requirements affecting physical and mental work-
load, behavior, and cost (human and financial). Users trust
the product to maintain security while getting the primary
task done.
• Process. This aspect addresses how security decisions are
made, especially in early stages of requirements-gathering
and design. The process should allow the security mecha-
nisms to be “an integral part of the design and development
of the system, rather than being ‘added on’” (Sasse and
Flechais, 2005). Because “mechanisms that are not
employed in practice, or that are used incorrectly, provide
little or no protection,” designers must consider the impli-
cations of each mechanism on workload, behavior and
workflow (Sasse and Flechais, 2005). From this perspective,
the stakeholders must trust the process to enable them to
make appropriate and effective decisions, particularly about
their primary tasks.
• Panorama. This aspect describes the context in which the
security operates. Because security is usually not the
primary task, users are likely to “look for shortcuts and
workarounds, especially when users do not understand why
their behavior compromises security. … A positive security culture, based on a shared understanding of the importance of security … is the key to achieving desired behavior” (Sasse
and Flechais, 2005). From this perspective, the user views
security mechanisms as essential even when they seem
intrusive, limiting, or counterproductive.
3.1. Scenario creation
Because the infrastructure types and threats are vast, we used
interview results to narrow our investigation to those behav-
ioral science areas with demonstrated or likely potential to
enhance an actor’s confidence in using any information
infrastructure. To guide our interviews, we worked with two
dozen U.S. government and industry employees familiar with
information infrastructure protection issues to define three
threat scenarios relevant to protecting the information infra-
structure. The methodology and resulting analyses were
conducted by the paper’s first author and involved five steps:
• Choosing topics. We chose three security topics to discuss,
based on recent events. The combination of the three was
intended to represent an (admittedly incomplete but) signif-
icant number of typical concerns, the discussion of which
would reveal underlying areas ripe for improvement.
• Creating a representative, realistic scenario for each topic.
Using
our knowledge of recent cyber incidents and attacks, we
created a plausible attack scenario for each topic, por-
traying a cyber security problem for which a solution would
be welcomed by industry and government.
• Identifying people with decision making authority about
cyber
security products and usage to interview about the scenarios.
We
identified people from industry and government who were
willing to participate in interviews.
• Conducting interviews. Our discussions focused on two
questions: Are these scenarios realistic, and how could the
cyber security in each situation be improved?
• Analyzing the results and their implications. We analyzed the
results of these interviews and their implications for our
research.
3.1.1. Scenario 1: improving security awareness among
builders of information infrastructure
Security is rarely the primary task of those who use the
information infrastructure. Typically, users seek information,
analyze relationships, produce documents, and perform tasks
that help them understand situations and take action. Simi-
larly, system developers often focus on these primary tasks
before incorporating security into an architecture or design.
Moreover, system developers often implement security
requirements by choosing security mechanisms that are easy
to build and test or that meet some other technical system
objective (e.g., reliability). Developers rarely take into account
the usability of the mechanism or the additional cognitive
load it places on the user. Scenario 1 describes ways to
improve security awareness among system builders so that
security is more likely to be useful and effective.
Suppose software engineers are designing and building
a system to support the creation and transmission of sensitive
documents among members of an organization. Many aspects
of document creation and transmission are well known, but
security mechanisms for evaluating sensitivity, labeling
documents appropriately and transmitting documents
securely have presented difficulties for many years. In our
scenario, software engineers are tasked to design a system
that solicits information from document creators, modifiers
and readers, so that a trust designation can be assigned to
each document. Security issues include understanding the
types of trust-related information needed, determining the
role of a changing threat environment, and defining the
frequency at which the trust information should be refreshed
and re-evaluated (particularly in light of cyber security inci-
dents that may occur during the life of the document). In
addition, the software engineers must implement some type
of summary trust designation that will have meaning to
document creators, modifiers and readers alike.
This trust designation, different from the classification of
document sensitivity, represents the degree to which both the
content and provider (or modifier) can be trusted and for how
long. For example, a document about a nation’s emerging
military capability may be highly classified (that is, highly
sensitive), regardless of whether the information provider is
highly trusted (because, for example, he has repeatedly
provided highly useful information in the past) or not (because,
for example, he frequently provides incorrect or misleading
information).
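As a rough sketch of what such a designation might look like in practice, the following Python fragment models a per-document trust record kept separately from the sensitivity classification; the field names, numeric scales, and refresh rule are illustrative assumptions on our part, not details drawn from the scenario.

    # Hypothetical per-document trust designation, distinct from the
    # sensitivity classification; scales and fields are assumptions.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class TrustDesignation:
        content_trust: float          # 0.0 (untrusted) to 1.0 (fully trusted)
        provider_trust: float         # trust in the creator/modifier's track record
        classification: str           # sensitivity label, independent of trust
        assessed_at: datetime         # when the designation was last evaluated
        refresh_interval: timedelta   # how often it must be re-evaluated

        def needs_refresh(self, now: datetime) -> bool:
            # Re-evaluate periodically, e.g. as the threat environment changes.
            return now - self.assessed_at >= self.refresh_interval

    # A highly sensitive document from a provider with a poor track record.
    doc = TrustDesignation(content_trust=0.8, provider_trust=0.3,
                           classification="highly sensitive",
                           assessed_at=datetime(2012, 1, 1),
                           refresh_interval=timedelta(days=30))
    print(doc.needs_refresh(datetime(2012, 2, 15)))  # True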
There are two important aspects of the software engineers’
security awareness. First, they must be able to select security
mechanisms for implementing the trust designation that
allow them to balance security with performance and
usability requirements. This balancing entails appreciating
and accommodating the role of security in the larger context
of the system’s intended purpose and multiple uses. Second,
the users must be able to trust that the appropriate security
mechanism is chosen. Trust means that the mechanism itself
must be appropriate to the task. For example, the Biba Integ-
rity Model (Biba, 1977), a system of computer security policies
expressed as access control rules, is designed to ensure data
integrity. The model defines a hierarchy of integrity levels,
and then prevents participants from corrupting data of an
integrity level higher than the subject, or from being corrupted
by data from a level lower than the subject. The Biba model
was developed to extend the Bell and La Padula (1973) model,
which addresses only data confidentiality. Thus, under-
standing and choice of policies and mechanisms are impor-
tant aspects in which we trust software engineers to exercise
discretion. In addition, software engineers must be able to
trust the provenance, correctness and conformance to
expectations of the security mechanisms. Here, “provenance”
means not only the applicability of the mechanisms and
algorithms but also the source of architectural or imple-
mentation modules. With the availability of open source
modules and product line architectures (see, for example,
Clements and Northrup, 2001), it is likely that some parts of
some security mechanisms will have been built for a different
purpose, often by a different team of engineers. Builders and
modifiers of the current system must know to what degree to
trust someone else’s modules.
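For concreteness, the following minimal Python sketch expresses the two strict-integrity checks described above, often summarized as "no read down, no write up"; the integrity levels and function names are illustrative assumptions rather than details from Biba (1977).

    # Illustrative Biba strict-integrity checks; the level names are assumptions.
    LEVELS = {"untrusted": 0, "user": 1, "system": 2}

    def biba_can_read(subject_level: str, object_level: str) -> bool:
        # "No read down": a subject may only read data at or above its own integrity level.
        return LEVELS[object_level] >= LEVELS[subject_level]

    def biba_can_write(subject_level: str, object_level: str) -> bool:
        # "No write up": a subject may only write data at or below its own integrity level.
        return LEVELS[object_level] <= LEVELS[subject_level]

    print(biba_can_read("system", "untrusted"))   # False: high-integrity subject must not rely on low-integrity data
    print(biba_can_write("untrusted", "system"))  # False: low-integrity subject must not corrupt high-integrity data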
3.1.2. Scenario 2: enhancing situational awareness during
a “cyber event”
Situational awareness is the degree to which a person or
system knows about a threat in the environment. When an
emergency is unfolding, the people and systems involved in
watching it unfold must determine what has already
happened, what is currently happening, and what is likely to
happen in the future; then, they make recommendations for
reaction based on their situational awareness. The people or
systems perceiving the situation have varying degrees of trust
in the information they gather and in the providers of that
information. When a cyber event is unfolding, information can
come from primary sources (such as sensors in process control
systems or measurements of network activity) and secondary
sources (such as human or automated interpreters of trends).
Consider analysts using a computer system that monitors
the network of power systems around the United States. The
system itself interacts with a network of systems, each of
which collects and analyzes data about power generation and
distribution stations and their access points. The analysts
notice a series of network failures around the country: first,
a power station in California fails, then one in Missouri, and so
on during the first few hours of the event.6 The analysts must
determine not only what is really unfolding but also how to
respond appropriately. Security and human behavior are
involved in many ways. First, the analyst must know whether
to trust the information being reported to her monitoring
system. For example, is the analyst viewing a failure in the
access point or in the monitoring system? Next, the analyst
must be able to know when and whether she has enough
information to make a decision about which reactions are
appropriate. This decision must be made in the context of an
evolving situation, where some evidence at first considered
trustworthy is eventually determined not to be (and vice versa).
Finally, the analyst must analyze the data being reported, form
hypotheses about possible causes, and then determine which
interpretation of the data to use. For instance, is the sequence
of failures the result of incorrect data transmission, a cyber
attack, random system failures, or simply the various power
companies’ having purchased some of their software from the
same vendor (whose system is now failing)? Choosing the
wrong interpretation can have serious consequences.

6 Indeed, at this stage it may not be clear that the event is actually a cyber event. A similar event with similar characteristics occurred on August 14, 2003, in the United States. See http://www.cnn.com/2003/US/08/14/power.outage/index.html.
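One way an analyst's tooling might help test competing explanations such as the "same vendor" hypothesis above is to look for attributes shared across the observed failures. The Python sketch below is a toy illustration; the event fields and threshold are hypothetical and not drawn from any real monitoring system.

    # Toy illustration: flag attributes shared by most of the observed failures.
    from collections import Counter

    failures = [
        {"station": "CA-12", "vendor": "Acme SCADA", "region": "CA"},
        {"station": "MO-03", "vendor": "Acme SCADA", "region": "MO"},
        {"station": "TX-07", "vendor": "Acme SCADA", "region": "TX"},
    ]

    def shared_attributes(events, keys=("vendor", "region"), threshold=0.8):
        """Return attribute values common to at least `threshold` of the events."""
        hits = {}
        for key in keys:
            value, count = Counter(e[key] for e in events).most_common(1)[0]
            if count / len(events) >= threshold:
                hits[key] = value
        return hits

    print(shared_attributes(failures))  # {'vendor': 'Acme SCADA'}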
3.1.3. Scenario 3: supporting decisions about trustworthiness
of network transactions
On Christmas Day, 2009, a Nigerian student flying from
Amsterdam to Detroit attempted to detonate a bomb to
destroy the plane. Fortunately, the bomb did little damage,
and passengers prevented the student from completing his
intended task. However, in analyzing why the student was not
detected by a variety of airport security screens, it was
determined that important information was never presented
to the appropriate decision-makers (Baker and Hulse, 2009).
This situation forms the core of Scenario 3, where a system
queries an interconnected set of databases to find information
about a person or situation.
In this scenario, an analyst uses an interface to a collection
of data repositories, each of which contains information about
crime and terrorism. When the analyst receives a warning
about a particular person of interest, she must query the
repositories to determine what is known about that person.
There are many security issues related to this scenario. First,
the analyst must determine the degree to which she can trust
that all of the relevant information resides in at least one of
the connected repositories. After the Christmas bombing
attempt, it was revealed that the U.K. had denied a visa
request by the student, but information about the denial was
not available to the Transportation Security Administration
when decisions were made about whether to subject the
student to extra security screening. Spira (2010) points out
that the problem is not the number of databases; it is the lack
of ability to search the entire “federation” of databases.
Next, even if the relevant items are found, the most
important ones must be visible at the appropriate time. Libicki
and Pfleeger (2004) have documented the difficulties in “collecting the dots” before an analyst can take the next step
to connect them. If a “dot” is not as visible as it should be, it
can be overlooked or given insufficient attention during
subsequent analysis. Moreover, Spira (2010) highlights the
need for viewing the information in its appropriate context.
Third, the analyst must also determine the degree to which
each piece of relevant information can be trusted. That is, not
only must she know the accuracy and timeliness of each data
item, but she also must determine whether the data source
itself can be trusted. There are several aspects to this latter
degree of trust, such as knowing how frequently the data
source provides the information (that is, whether it is old
news), knowing whether the data source is trustworthy
enough, and whether circumstances may change the source’s
trustworthiness. For example, Predd et al. (2008) and Pfleeger
et al. (2010) point out the varying types of people with legiti-
mate access to systems taking unwelcome action. A trust-
worthy insider may become a threat because of a pending
layoff or personal problem, inattention or confusion, or her
attempt to overcome a system weakness. So the trustworthi-
ness of information and sources must be re-evaluated
repeatedly and perhaps even forecast based on predictions
about a changing environment.
Finally, the analyst must also determine the degree to
which the analysis is correct. Any analysis involves assump-
tions about variables and their importance, as well as the
relationships among dependent and independent variables.
Many times, it is a faulty assumption that leads to failure,
rather than faulty data.
3.2. Analysis of results
The three scenarios were intriguing to our interviewees, and all
agreed that they were realistic, relevant and important.
However, having the interviewees scrutinize the scenarios
revealed fewer behavioral insights than we had hoped. In each
case, the interviewee viewed each scenario from his or her
particular perspective, highlighting only a small portion of the
scenario to confirm an opinion he or she held. For example, one
of the interviewees used Scenario 3 to emphasize the need for
information sharing; another interviewee said that privacy is
a key concern, especially in situations like Scenario 2 where
significant monitoring must be balanced with protecting privacy.
Nevertheless, many of the interviewees had good sugges-
tions for shaping the way forward. For instance, one said that
there is much to be learned from command and control
algorithms, where military actors have learned to deal with
risk perception, uncertainty, incomplete information, and the
need to make an important decision under extreme pressures.
There is rich literature addressing decision-making under
pressure, from Ellsberg (1964) through Klein (Klein, 1998,
2009). In particular, Klein’s models of adaptive decision-
making may be applicable (Klein and Calderwood, 1991;
Klein and Salas, 2001). While the scenario methodology was
not a structured idea generation approach, to the extent
possible, we endeavored to be unbiased in our interpretation
of interviewee responses. We were not trying to gather
support for preconceived ideas and were genuinely trying to
explore new ideas where behavioral science could be lever-
aged to address security issues.
There were several messages that emerged from the
interviews:
• Security is intertwined with the way humans behave when
trying to meet a goal or perform a task. The separation of
primary task from secondary, as well as its impact on user
behavior, was first clearly expressed in Smith et al. (1997)
and elaborated in the security realm by Sasse et al. (2002).
Our interviews reconfirmed that, in most instances, security
is secondary to a user’s primary task (e.g., finding a piece of
information, processing a transaction, making a decision).
When security interferes, the person may ignore or even
subvert the security, since the person is rewarded for the
primary task. In some sense, the person trusts the system to
take care of security concerns. That perspective can lead to
at least two unwelcome events. First, when confronted with
uncertainty about the security of a course of action, the
person trusts that the system has assured the safety of the
action (for example, when a user opens an attachment
assuming that the system has checked it for viruses, or, as in
Scenario 3, the users assumed the bomber was not a secu-
rity risk because his name was not revealed by the security
system). Second, when, in the past, security features have
prevented or slowed task completion, a user subverts the
security because he or she may no longer trust the system to
enable effective task completion in the future. Thus,
understanding the behavioral science (rather than the
security itself) can offer new ways to design, build and use
systems whose security is understood and respected by the
user.
• Interviewees noted in all scenarios how limitations on
memory or analysis capability interfered with an analyst’s
ability to perform. One interviewee noted the abundance of
information being generated by automated systems, and
the increasing likelihood that important events would go
unnoticed (Burke, 2010). In the behavioral sciences, the term
cognitive load refers to the amount of stress placed on
working memory. First addressed by Miller (1956), who
claimed that a person’s “working memory” could deal with
at most five to nine pieces of information at once, the notion
was extended by Chase and Simon (1973) to address
memory overload during problem-solving. Several empir-
ical results (see, for example, Scandura, 1971) suggest that
individuals vary in their ability to process a given amount of
information.
• Inattentional blindness is a particular aspect of cognitive load
that played a role in each scenario. First acknowledged by
Mack and Rock (1998) and studied extensively by Simons
and his colleagues (see, for example, Simons and Chabris,
1999 and Simons and Jensen, 2009), inattentional blind-
ness refers to a person’s inability to notice unexpected
events when concentrating on a primary task. For example,
inattentional blindness may cause an analyst in Scenario 2
to miss seeing a pattern in the failure of power plants (e.g.,
that all failing power plants were in areas experiencing
severe drought), or to lead an analyst in Scenario 3 to
overlook a warning from the bomber’s father because
attention was restricted to the bomber himself.
• There is significant bias in the way each interviewee thinks
about security. This bias reflects the interviewee’s
experience, goals and expertise, evidencing itself in the way
that two people view the same situation in very different
ways. For example, interviewees with jobs that focus
primarily on privacy thought of the scenarios as protecting
data from outsiders but did not consider inadvertent
corruption. By understanding biases, security designers and
developers can anticipate likely perceptions and account for
them when designing approaches to encourage good secu-
rity behavior.
• There is a significant element of risk in each scenario, and
decision-makers have a difficult time both understanding
the nature of the risk (expressed as a combination of like-
lihood and impact) and balancing multiple perceptions of
the risk to make the best decision in the time available.
There is a considerable literature on risk perception and risk
communication, with important papers included in the
compilations by Mayo and Hollander (1991) and Slovic
(2000). By applying behavioral science findings to system
design, development and use, users can be made more
aware of the likely impact of their security-related
decisions.
The interviews revealed how practitioners (i.e., users and
developers) do and do not involve security-related concerns in
their decision-making process. Several points became clear to
us as a result of these discussions:
• Practitioners do not have a common understanding of
security.
• Practitioners do not have a heightened awareness of how
security can affect all of their job functions and roles. For
example, people feel comfortable revealing small amounts
of information in each situation but do not realize how
easily the information can aggregate into a full picture that
becomes a security concern.
• Practitioners have limited experience in dissecting a situa-
tion to identify necessary security relationships.
• The combination of narrow focus with a large (and often
growing) quantity of information continues to
cause failure to “connect the dots.” Finding a pattern or
connection among only a few dots within a large set
of data is akin to the problem of identifying a constella-
tion in a star-filled nighttime sky. Some people can find
the Big Dipper easily, when others see only too many
stars. Our interviews made clear that practitioners
need training and assistance in identifying important
aspects of a situation and in knowing how and when to
focus.
Based on the outcomes from our scenario discussions, we
narrowed our focus to cognitive load and bias as organizing
principles for an investigation of relevant behavioral science
theory and research findings that offer promise of more
secure systems. We also sought information about people’s
heuristics and models that might be useful in helping us
convey cyber security information and implement relevant
results. In the next two sections, we examine both those
behavioral science findings that have already been demon-
strated to have bearing on cyber security and those with the
potential to do so.
4. Areas of behavioral science with
demonstrated relevance
We begin this section by examining several key behavioral
science findings that have been demonstrated as relevant to
cyber security in general and information infrastructure
protection in particular. Then, in the next section we look at
behavioral science research that has potential to improve
cyber security. In addition, we include descriptions of
heuristics and health-related models that may assist
designers in building good security into products and
processes. In each case, we document the possible implica-
tions of each.
4.1. Findings with demonstrable relevance to cyber
security
Behavioral science findings improve product, process and
panorama in these examples.
4.1.1. Recognition easier than recollection
The behavioral science literature demonstrates that recogni-
tion is significantly easier than recall. After Rock and
Engelstein (1959) showed people a single meaningless shape,
the participants’ ability to recall it declined rapidly, but they
could recognize it almost perfectly a month later. In other
words, asking participants to recall a shape without being
shown examples was far less successful than displaying
a collection of shapes and asking them to identify which one
had been shown to them initially. Over the next two decades,
many large scale empirical studies reinforced this finding. For
example, Standing (1973) showed participants a set of
complex pictures; the number of pictures in each set ranged
from 10 to 10,000. The participants could recognize subsets of
them with 95 percent accuracy.
Dhamija and Perrig (2000) studied how well people
remember images compared with passwords, and found that
people can more reliably recognize their chosen image than
remember a selected password. This result is being applied to
user-to-computer authentication; either the user selects an
image as an authentication picture, or selects a one-time
password based on a shape or configuration. Similarly,
Zviran and Haga (1990) showed that even text-based chal-
lenge-response mechanisms and associative passwords are
an improvement over unaided password recall.
Commercial products are using these results. Lamandé
(2010) reports that the GrIDSure authentication system
(http://www.gridsure.com) has been integrated into Micro-
soft’s Unified Access Gateway (UAG) platform. This system
allows a user to authenticate herself with a one-time passcode
based on a pattern of squares chosen from a grid. When the
user wishes access, she is presented with a grid containing
randomly-assigned numbers; she then enters as her passcode
the numbers that correspond to her chosen pattern. Because
the displayed grid numbers change each time the grid is pre-
sented, the pattern enables the entered passcode to be a one-
time code. Many researchers (see, for example, Sasse, 2007;
Bond, 2008; Biddle et al., 2009) have examined aspects of
GrIDSure’s security and usability.
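To make the mechanism concrete, here is a minimal Python sketch of a grid-pattern one-time passcode along the lines described above; the 5 x 5 grid, digit alphabet, and function names are our own assumptions and do not describe the GrIDSure product itself.

    # Sketch of a pattern-based one-time passcode; parameters are assumptions.
    import secrets

    GRID_SIZE = 5

    def new_challenge():
        # Return a fresh grid of random digits, one per cell, for this login attempt.
        return [[secrets.randbelow(10) for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]

    def expected_passcode(grid, pattern):
        # The one-time code is the digits at the user's secret pattern positions.
        return "".join(str(grid[r][c]) for (r, c) in pattern)

    def verify(grid, pattern, entered):
        # Constant-time comparison of the entered code against the expected one.
        return secrets.compare_digest(expected_passcode(grid, pattern), entered)

    pattern = [(0, 0), (1, 2), (2, 2), (3, 4)]  # the user's enrolled pattern of squares
    grid = new_challenge()                      # displayed to the user at login
    code = expected_passcode(grid, pattern)     # what the user would type
    print(verify(grid, pattern, code))          # True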
Other commercial products use images called Passfaces.
Introduced over ten years ago (Brostoff and Sasse, 2000) and
evaluated repeatedly (Everitt et al., 2009), Passfaces offer an
option that addresses the drawbacks of products like GrID-
Sure. However, the Consumer’s Union study (2008) and others
document the degree to which the average user manages
multiple passwords, sometimes dozens! This security-in-
the-large leads to problems that are also shared with image
recognition: interference.
4.1.2. Interference
Frequent changes to a memorized item interfere with
remembering the new version of the item. That is, the newest
version of the item competes with the previous ones. The
frequency of change is important; for example, Underwood
(1957) discovered that, in studies in which participants were
required to memorize only a few prior lists, their level of
forgetting was much less than in studies where the partici-
pants were required to memorize many prior lists. Wixted
(2004) points out that even dissimilar things can interfere
with something a subject is trying to memorize: “… recently
formed memories that have not yet had a chance to consoli-
date are vulnerable to the interfering force of mental activity
and memory formation (even if the interfering activity is not
similar to the previously learned material).”
In empirical studies applying these findings to password
memorability, Sasse et al. (2002) showed that login failures
increased sharply as required password changes became
more frequent. In addition, Brostoff and Sasse (2003) showed
that allowing more login attempts led to more successful login
sessions; they suggest that forgiving systems result in better
compliance than very restrictive ones.
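As a small illustration of the design choice these findings point to, the sketch below contrasts a restrictive lockout policy with a more forgiving one that delays rather than locks; the thresholds and back-off values are arbitrary assumptions and are not taken from the cited studies.

    # Contrast of restrictive vs. forgiving login policies; values are assumptions.
    from dataclasses import dataclass

    @dataclass
    class LoginPolicy:
        max_free_attempts: int    # failed attempts tolerated before any friction
        lockout: bool             # hard-lock the account after the limit...
        backoff_seconds: int = 0  # ...or merely delay the next attempt

        def next_action(self, failed_attempts: int) -> str:
            if failed_attempts < self.max_free_attempts:
                return "allow"
            return "lock account" if self.lockout else f"delay {self.backoff_seconds}s, then allow"

    restrictive = LoginPolicy(max_free_attempts=3, lockout=True)
    forgiving = LoginPolicy(max_free_attempts=10, lockout=False, backoff_seconds=30)

    print(restrictive.next_action(4))  # lock account
    print(forgiving.next_action(4))    # allow
    print(forgiving.next_action(12))   # delay 30s, then allow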
Everitt et al. (2009) and Chiasson et al. (2009) have exam-
ined the use of multiple graphical passwords. They found that
users with multiple graphical passwords made fewer errors
when recalling them, did not create passwords that were
directly related to account names, and did not use similar
passwords across multiple accounts. Moreover, even after two
weeks, recall success rates remained good with graphical
passwords and were better than those with text passwords.
Thus, there seemed to be less interference with graphical
objects than with textual ones.
Recent studies have addressed additional concerns about
recall and interference. For example, Jhawar et al. (2011)
suggest that good design can overcome these issues, and
that graphical recall can form the basis for effective security
practices.
4.1.3. Other studies at the intersection
In addition to the findings cited above, most of which are
drawn from basic cognitive psychology literature, there are
many examples of applied studies from other disciplines
where behavioral scientists studied cyber-related problems
directly. For example,
• Sociology. Cheshire and Cook (2004) applied experimental
sociological research results to four different categories of
computer-mediated interaction. They offer guidance to
computer scientists about how to build trust in online
networks. For example, they suggest treating computer-
mediated interaction as an architectural problem, using the
nature of the mediation to shape desired behavior. They
distinguish between random and fixed partners in a trans-
action, and suggest appropriate mechanisms for interaction
based on this characterization (see Fig. 1).

[Fig. 1 – Example architectural recommendations (Cheshire and Cook, 2004): a two-by-two layout of interaction frequency (iterated vs. one-shot) against partner continuity (random vs. fixed), with example settings including online communities, online auctions, chat groups, massively multiplayer online games, peer-to-peer digital goods exchange, online “pickup” games, solicitation by email, and email attachments from unknown individuals.]
• Economics. Economists study the role of reputation in
establishing trust, and this literature is frequently refer-
enced in work at the intersection of economics and cyber
security. For example, many of the papers at the Workshops
on the Economics of Information Security leveraged
economic results from reputation research. Yamagishi and
Matsuda (2003) propose the use of experience-based infor-
mation about reputation to address the problem of lemons:
disappointment in expectation. They show that disap-
pointment is substantially reduced when online traders can
freely change their identities and cancel their reputations.
• Psychology and economics. There is an interaction between
actual costs and perceived costs when people interact,
particularly online. Research in this area spans both
psychology (the perception) and economics (the real costs).
Datta and Chatterjee (2008) have applied some of this
research to the transference of trust in electronic markets.
They show that the transference is complete only if agency
costs from intermediation lie within consumer thresholds.
These examples convince us that mining the behavioral
science literature more thoroughly will lead to an empirical
basis for improvements in the quality and effectiveness of
cyber security defense. This section has provided examples of
the direct application of behavioral science research to prob-
lems in cyber security. In the next section, we consider other
areas where leveraging behavioral science may reap signifi-
cant benefits in protecting the information infrastructure.
5. Areas of behavioral science with potential
relevance
There is a significant amount of behavioral science research
on methods or concepts that influence a person’s or group’s
perceptions, attitudes, and behaviors. Many findings may
have bearing on the design, construction and use of infor-
mation infrastructure protection, but the relevance and
degree of effect have not yet been tested empirically.
In this section, we identify a variety of well-studied
behavioral science findings from psychology, behavioral
medicine, and other disciplines where techniques have been
demonstrated to affect behavior related to cognition and bias.
We also describe several heuristics and health-related models
that have potential for improving cyber security. However,
unlike the findings in Section 4, these findings have not been
evaluated specifically in terms of changing cyber security-
related behavior. In this section, we introduce each behav-
ioral science finding, discuss a sampling of research results,
and describe the possible implications for cyber security.
5.1. Cognition
Cognition refers to the way people process and learn infor-
mation. There are several findings from research on human
cognition that may be relevant to cyber security.
5.1.1. Identifiable victim effect
The identifiable victim effect refers to the tendency of indi-
viduals to offer greater aid when a specific, identifiable person
(the victim) is observed under hardship, when compared to
a large, vaguely-defined group with the same need. For
example, many people are more willing to help a homeless
person living near the office than the several hundred
homeless living in their city. (Example: K. Jenni and G. Loe-
wenstein, “Explaining the ‘Identifiable Victim Effect’,” Journal
of Risk and Uncertainty, 14, 1997, pp. 235–257.) Implications:
Users may choose stronger security when possible negative
outcomes are tangible and personal, rather than abstract.
5.1.2. Elaboration likelihood model
The Elaboration Likelihood Model describes how attitudes are
formed and persist. It is based on the notion that there are
two main routes to attitude change: the central route and the
peripheral route. Central processes are logical, conscious,
and require a great deal of thought. Therefore, central route
processes to decision-making are only used when people are
motivated and able to pay attention. The result of central
route processing is often a permanent change in attitude, as
people adopt and elaborate on the arguments being made by
others. By contrast, when people take the peripheral route,
they do not pay attention to persuasive arguments; rather,
they are swayed by surface characteristics such as the
popularity of the speaker. In this case, attitude change is
more likely to be only temporary. Research has focused on how
to get people to use the central route instead of the peripheral
route. (Example: R.E. Petty and J.T. Cacioppo, Attitudes and
Persuasion: Classic and Contemporary Approaches. Dubuque,
IA:
W. C. Brown, 1981. R.E. Petty and J.T. Cacioppo,
Communication
and Persuasion: Central and Peripheral Routes to Attitude
Change,
New York: Springer-Verlag, 1986.) Implications: One of the
best ways to motivate users to take the central route when
receiving a cyber security message is to make the message
personally relevant. Fear can also be effective in making
users pay attention, but only if levels of fear are moderate and
a solution to the fear-inducing situation is also offered;
strong fear leads to fight-or-flight (physical) reactions. The
central route leads to consideration of arguments for and
against, and the final choice is carefully considered. This
distinction can be particularly important in security aware-
ness training.
5.1.3. Cognitive dissonance
Cognitive dissonance is the feeling of discomfort that comes
from holding two conflicting thoughts in the mind at the
same time. A person often feels strong dissonance when she
believes something about herself (e.g., “I am a good person”)
and then does something counter to it (e.g., “I did something
bad”). The discomfort often feels like tension between the
two opposing thoughts. Cognitive dissonance is a very
powerful motivator that can lead people to change in one of
three ways: change behavior, justify behavior by changing
the conflicting attitude, or justify behavior by adding new
attitudes. Dissonance is most powerful when it is about
self-image (e.g., feelings of foolishness, immorality, etc.).
(Examples: L. Festinger, A Theory of Cognitive Dissonance,
Stanford, CA: Stanford University Press, 1957; L. Festinger
and J.M. Carlsmith, “Cognitive Consequences of Forced
Compliance,” Journal of Abnormal and Social Psychology, 58,
1959, pp. 203–211.) Implications: Cognitive dissonance is
central to many forms of persuasion to change beliefs,
values, attitudes and behaviors. To get users to change their
cyber behavior, we can first change their attitudes about
cyber security. For example, a system could emphasize
a user’s sense of foolishness concerning the cyber risks he is
taking, enabling dissonant tension to be injected suddenly
or allowed to build up over time. Then, the system can offer
the user ways to relieve the tension by changing his
behavior.
5.1.4. Social cognitive theory
Social Cognitive Theory is a theory about learning based on
two key notions: (1) people learn by watching what others do,
and (2) human thought processes are central to understanding
personality. This theory asserts that some of an individual’s
knowledge acquisition can be directly related to observing
others within the context of social interactions, experiences,
and outside media influences. (Examples: A. Bandura, "Organizational Application of Social Cognitive Theory," Australian Journal of Management, 13(2), 1988, pp. 275–302; A. Bandura, "Human Agency in Social Cognitive Theory," American Psychologist, 44, 1989, pp. 1175–1184.) Implications: By taking
into account gender, age, and ethnicity, a cyber awareness
campaign could reduce cyber risk by using social cognitive
theory to enable users to identify with a recognizable peer and
have a greater sense of self-efficacy. The users would then be
likely to imitate the peer’s actions in order to learn appro-
priate, secure behavior.
5.1.5. Bystander effect
The bystander effect is a psychological phenomenon in which
someone is less likely to intervene in an emergency situation
when other people are present and able to help than when he
or she is alone. (Example: J.M. Darley and B. Latané, "Bystander Intervention in Emergencies: Diffusion of Responsibility," Journal of Personality and Social Psychology, 8, 1968, pp. 377–383.) Implications: During a cyber event, users
may not feel compelled to increase situational awareness or
take necessary security measures because they will expect
others around them to do so. Thus, systems can be designed
with mechanisms to counter this effect, encouraging users to
take action when necessary.
5.2. Bias
Bias describes a person’s tendency to view something from
a particular perspective. This perspective prevents the person
from being objective and impartial. The following findings
about bias may be useful in designing, building and using
information infrastructure.
5.2.1. Status quo bias
Status quo bias describes the tendency of people to not change
an established behavior without a compelling incentive to do
so. (Example: W. Samuelson and R. Zeckhauser, “Status Quo
Bias in Decision Making,” Journal of Risk and Uncertainty, 1,
1988, pp. 7–59.) Implications: Users will need compelling
incentives to change their established cyber security behavior.
For example, information infrastructure can be designed to
provide incentives for people to suspect documents sent from
unknown sources. Similarly, the infrastructure can provide
designers, developers and users with feedback about their
reputations (e.g., “Sixty-three percent of your attachments are
never opened by the recipient.”) or the repercussions of their
actions (e.g., “It was your design defect that enabled this
breach”) to reduce status quo bias.
5.2.2. Framing effects
Scientists usually expect people to make rational choices
based on the information available to them. Expected utility
theory is based on the notion that people choose options that
provide the most benefit (i.e., the most utility to them) based
on the information available to them. However, there is
a growing literature providing evidence that when people
must choose among alternatives involving risk, where the
probabilities of outcomes are known, they behave contrary to
the predictions of expected utility theory. This area of study,
called prospect theory, is descriptive rather than normative;
prospect theorists report on how people actually make choices
when confronted with information about each alternative.
One of the earliest findings in prospect theory (Tversky and
Kahneman, 1981) demonstrated that the framing of a message
can affect decision making. Framing refers to the context in
which someone interprets information, reacts to events, and
makes decisions. For example, the efficacy of a drug can be
framed in terms of number of lives saved or number of lives
lost; studies have shown that equivalent data framed in oppo-
site ways (gain vs. loss) lead to dramatically different decisions
about whether and how to use the same drug. The context or
framing of a problem can be accomplished by manipulating the
decision options or by referring to qualities of the decision-
makers, such as their norms, habits and temperament.
(Examples: D. Kahneman and A. Tversky, "Prospect Theory: An Analysis of Decisions Under Risk," Econometrica, 47, 1979, pp. 313–327; A. Tversky and D. Kahneman, "The Framing of Decisions and the Psychology of Choice," Science, 211, 1981, pp. 453–458.) Implications: User choices about cyber security may
be influenced by framing them as gains rather than losses, or by
appealing to particular user characteristics. Possible applica-
tions include classifying anomalous data from an intrusion
detection system log, presenting the interface to a firewall as
admitting (good) traffic vs. blocking (bad) traffic, or describing
a data mining activity as exposing malicious behavior.
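As a simple illustration of this idea, the Python fragment below (a hypothetical sketch of ours, not an interface described in the paper; function names and message wording are assumptions) renders the same firewall statistics under a gain frame and a loss frame.

# Hypothetical sketch: only the framing idea comes from the text above.
def gain_frame(total: int, allowed: int) -> str:
    """Describe the firewall's activity in terms of what was admitted (gain)."""
    return f"The firewall admitted {allowed} of {total} connections from trusted sources."


def loss_frame(total: int, allowed: int) -> str:
    """Describe the same activity in terms of what was blocked (loss)."""
    blocked = total - allowed
    return (f"The firewall blocked {blocked} of {total} connections that could "
            f"have exposed your data.")


if __name__ == "__main__":
    # Identical data, two frames; prospect theory suggests users may respond differently.
    print(gain_frame(total=1000, allowed=940))
    print(loss_frame(total=1000, allowed=940))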
5.2.3. Optimism bias
Given the minuscule chances of winning the lottery, it is
amazing that people buy lottery tickets. Many people believe
they will do better than most others engaged in the same
activity, so they buy tickets despite evidence to the contrary.
This optimism bias shows itself in many ways, such as over-
estimating the likelihood of positive events and under-
estimating the likelihood of negative events. (Examples: N. D. Weinstein, "Unrealistic Optimism About Future Life Events," Journal of Personality and Social Psychology, 39(5), November 1980, pp. 806–820; D. Dunning, C. Heath and J. M. Suls, "Flawed Self-Assessment: Implications for Health, Education, and the Workplace," Psychological Science in the Public Interest 5(3), 2004, pp. 69–106.) Implications: Because they underestimate the
risk, users may think they are immune to cyber attacks, even
when others have been shown to be susceptible. For example,
optimism bias may enable spear phishing (messages seeming
to come from a trusted source, trying to gain unauthorized
access to data at a particular organization). Optimism bias
may also induce people to ignore preventive care measures,
such as patching, because they think they are unlikely to be
affected. To counter optimism bias, systems can be designed
to convey risk impact and likelihood in ways that relate to
people’s real experiences.
5.2.4. Control bias
Control bias refers to the tendency of people to believe they
can control or influence outcomes that they clearly cannot;
this phenomenon is sometimes called the illusion of control.
(Example: E. J. Langer, “The Illusion of Control,” Journal of
Personality and Social Psychology 32(2), 1975, pp. 311–328.)
Implications: Users may be less likely to use protective
measures (such as virus scanning, clearing cache, checking for
secure sites before entering credit card information, or paying
attention to spear phishing) when they feel they have control
over the security risks.
5.2.5. Confirmation bias
Once someone takes a position on an issue, she is more
likely to notice or give credence to evidence that supports
that position than to evidence that discredits it. This
confirmation bias (i.e., looking for evidence to confirm
a position) results in situations where people are not as
open to new ideas as they think they are. They often rein-
force their existing attitudes by selectively collecting new
evidence, interpreting evidence in a biased way, or selec-
tively recalling information from memory. For example, an
analyst finding a perceived pattern in a series of failures
will tend to cease looking for other explanations and
instead seek confirming evidence for his hypothesis.
(Example: M. Lewicka, “Confirmation Bias: Cognitive Error
or Adaptive Strategy of Action Control?” in M. Kofta, G.
Weary and G. Sedek, Personal Control in Action: Cognitive and
Motivational Mechanisms. New York: Springer, 1998, pp. 233–255.) Implications: Users may have initial impressions
about how protected (or not) the information infrastructure
is that they are using. To overcome their confirmation bias,
the system must provide users with an arsenal of evidence
to encourage them to change their current beliefs or to
mitigate their over-confidence.
5.2.6. Endowment effect
The endowment effect describes the fact that people usually
place a higher value on objects they own than objects they do
not own. A related effect is that people react more strongly to
loss than to gain; that is, they will take stronger action to keep
from losing something than to gain something. (Example: R. Thaler, "Toward a Positive Theory of Consumer Choice," Journal of Economic Behavior and Organization, 1, 1980, pp. 39–60.) Implications: Users may pay more (both figuratively and literally) for security when it lets them keep something they already
have, rather than gain something new. This effect, coupled
with a framing effect, may have particular impact on privacy.
When an action is expressed as a loss of privacy (rather than
a gain in capability), people may react to it negatively.
5.3. Heuristics
In psychology, a heuristic is a simple rule inherent in human
nature or learned in order to reduce cognitive load. Thus, we
find them appealing for addressing the cognitive load issues
described earlier. The heuristics’ rules are used to explain how
people make judgments, decide issues, and solve problems;
heuristics are particularly helpful in explaining how people
deal with complex problems or incomplete information.
When heuristics fail, they can lead to systematic errors or
cognitive biases.
5.3.1. Affect heuristic
The affect heuristic enables someone to make a decision
based on an affect (i.e., a feeling) rather than on rational
deliberation. If someone has a good feeling about a situation,
he may perceive that it has low risk; likewise, a bad feeling can
lead to a higher risk perception. (Example: M. Finucane, E. Peters and D. G. MacGregor, "The Affect Heuristic," in T. Gilovich, D. Griffin and D. Kahneman, Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, 2002, pp. 397–420.) Implications: If users perceive little risk,
the system may need a design that creates a more critical
affect toward computer security that will encourage them to
take protective measures. The system should also reward the
system administrator who looks closely at a system audit log
because something just doesn’t “feel” right.
5.3.2. Availability heuristic
The availability heuristic refers to the relationship between
ease of recall and probability. In other words, because of the
availability heuristic, someone will predict an event’s proba-
bility or frequency in a population based on the ease with
which instances of an event come to mind. The more recent,
emotional, or vivid an event is, the more likely it will come to
mind. (Example: A. Tversky and D. Kahneman, "Availability: A Heuristic for Judging Frequency and Probability," Cognitive Psychology 5, 1973, pp. 207–232.) Implications: Users will be
more persuaded to act responsibly if the system is designed to
use vivid, personal events as examples, rather than statistics
and facts. Moreover, if the system reports recent cyber events,
it may be more effective in encouraging users to take
measures to prevent future adverse events. Users’ choices
may also be heavily biased by the first thing that comes to
mind. Therefore, frequent security exercises may encourage
more desirable security behavior. On the other hand, a system
that has gone for some time without a major cyber incident
may lull the administrators into a false sense of security
because of the low frequency of events. The administrators
may then become lax in applying security updates because of
the long run of incident-free operation.
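A minimal sketch of this design idea follows (our illustration, with an assumed incident data model and wording, not a system described in the paper): it surfaces the most recent, concrete incidents in a login banner so that risk stays easy to recall, and it reminds users explicitly when the recent record is quiet.

# Hypothetical sketch: the incident data and message wording are assumptions.
from datetime import date, timedelta
from typing import List, Tuple

# (date, short, vivid description) -- example data only
INCIDENTS: List[Tuple[date, str]] = [
    (date(2012, 1, 9), "A colleague's account was locked after a phishing email was opened."),
    (date(2011, 11, 2), "A lost USB stick exposed a project plan."),
]


def login_banner(today: date, window_days: int = 180) -> str:
    """Build a banner from incidents within the recall window, most recent first."""
    cutoff = today - timedelta(days=window_days)
    recent = sorted((i for i in INCIDENTS if i[0] >= cutoff), reverse=True)
    if not recent:
        # No recent events: say so explicitly, so the quiet period does not
        # lull users into a false sense of security.
        return "No incidents reported recently; stay alert, since attacks often follow quiet periods."
    lines = [f"{d.isoformat()}: {desc}" for d, desc in recent]
    return "Recent security incidents:\n" + "\n".join(lines)


if __name__ == "__main__":
    print(login_banner(date(2012, 2, 1)))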
5.4. Health-related behavioral models
In cyber security, we frame many issues using health-related
metaphors because they are, in many ways, analogous. For
example, we speak of viruses and infections when describing
attacks. Similarly, we discuss increasing immunity to intru-
sions, or to increasing resilience after a successful attack. For
this reason, we believe that security design strategies can
leverage the significant research into health-related behav-
ioral models. We discuss several candidate models here.
5.4.1. Health belief model
The Health Belief Model was developed in the 1950s by the U.S. Public Health Service, after the failure of a free tuberculosis screening program, to explain and predict health behaviors. It focuses on attitudes and beliefs. Six
constructs describe an individual’s core beliefs based on their
perceptions of: susceptibility, severity, benefits, barriers, cues
to action, and self-efficacy of performing a given health
behavior. The perceived benefits must outweigh the barriers or
costs. (Example: I. Rosenstock, "Historical Origins of the Health Belief Model," Health Education Monographs, 2(4), 1974.) Implications: The health and security education models are similar.
If the Health Belief Model translates to cyber security aware-
ness, a user will take protective security actions if he feels that
a negative condition can be avoided (e.g., computer viruses can
be avoided), has a positive expectation that by taking a rec-
ommended action he will avoid a negative condition (e.g.,
doing a virus scan will prevent a viral infection), and believes
that he can successfully perform the recommended action
(e.g., is confident that he knows how to install virus protection
files). The model suggests success only if the benefits (e.g.,
keeping himself, his organization, and the nation safe)
outweigh the costs (e.g., download time, loss of work).
5.4.2. Extended parallel process model
The Extended Parallel Process Model (EPPM) is an extension of
the Health Belief Model that attempts to improve message
efficacy by using threats. Based on Leventhal’s danger control/
fear control framework, EPPM, which has multiple compo-
nents, explains why many fear appeals fail, incorporates fear
as a key variable, and describes the relationship between fear
and efficacy. Leventhal defines the danger control process as an individual seeking to reduce the risk by taking direct action and making adaptive changes, whereas the fear control process focuses on maladaptive changes to the perception of the susceptibility and severity of the risk. The EPPM provides
guidance about how to construct effective fear-appeal
messages: As long as efficacy perceptions are stronger than
threat perceptions, the user will go into danger control mode
(accepting the message and taking recommended action to
prevent danger from happening). (Examples: K. Witte, "Putting the Fear Back into Fear Appeals: The Extended Parallel Process Model," Communication Monographs, 59, 1992, pp. 329–349; H. Leventhal, "Findings and Theory in the Study of Fear Communications," in L. Berkowitz, ed., Advances in Experimental Social Psychology, Vol. 5, New York: Academic Press, 1970, pp. 119–186.) Implications: When used appropriately,
threats and fear can be useful in encouraging users to comply
with security. However, the messages cannot be too strong,
and users must believe that they are able to comply success-
fully with the security advice. This model may explain how to
encourage users to apply security and performance patches,
use and maintain anti-virus tools, and avoid risky online
behavior.
5.4.3. Illness representations
The health care community has a great deal of experience
with representing the nature and severity of illness to
patients, so that patients can make informed decisions about
treatment choices and health. In particular, there are lessons
to be learned from the way fear messages are used in rela-
tively acute situations to encourage people to take health-
promoting actions such as wearing seat belts or giving up
smoking. Health researchers (Leventhal et al., 1980) have
found that different types of information are needed to
influence both attitudes and reactions to a perceived threat to
health and well-being, and that the behavior changes last only
for short periods of time. In extending their initial model, the
researchers sought adaptations and coping efforts for those
patients experiencing chronic illness. The resulting illness
representations integrate the coping mechanisms with
existing schemata (i.e., the normative guidelines that people
hold), enabling patients to make sense of their symptoms and
guiding any coping actions. The illness representations have
five components: identity, timeline, consequences, control/
cure, and illness coherence. (Examples: H. Leventhal, D. Meyer and D.R. Nerenz, "The Common Sense Representation of Illness Danger," in S. Rachman, ed., Contributions to Medical Psychology, New York: Pergamon Press, 1980, pp. 17–30; H. Leventhal, I. Brissette and E.A. Leventhal, "The Common-sense Model of Self-Regulation of Health and Illness," in L.D. Cameron and H. Leventhal, eds., The Self-Regulation of Health and Illness Behaviour, London: Routledge, 2003, pp. 42–65.)
Implications: In a well-designed system, users concerned
about whether to trust a site, person, or document can obtain
new information about their security posture and evaluate
their attempts to deal (e.g., moderate, cure or cope) with its
effects. Then, the users form new representations based upon
their experiences. These representations are likely to be
cumulative, with security information being adopted, dis-
carded or adapted as necessary. Thus, the representations are
likely to be linked to the selection of coping procedures, action
plans and outcomes. These results could be of significance for
developing incident response strategies.
5.4.4. Theory of reasoned action/theory of planned behavior
The Theory of Reasoned Action and the Theory of Planned
Behavior are based on two notions: (1) people are reasonable
and make good use of information when deciding among
behaviors, and (2) people consider the implications of their
behavior. Behavior is directed toward goals or outcomes, and
people freely choose those behaviors that will move them
toward those goals. They can also choose not to act if they
think acting will move them away from their goals. The
theories take into account four concepts: behavioral intention,
attitude, social norms, and perceived behavioral control.
Intention to behave has a direct influence on actual behavior
as a function of attitude and subjective norms. Attitude is
a function both of the personal consequences expected from
behaving and the affective value placed on those conse-
quences. (Example: I. Ajzen, "From Intentions to Actions: A Theory of Planned Behavior," in J. Kuhl and J. Beckmann, eds., Action Control: From Cognition to Behavior. Berlin, Heidelberg, New York: Springer-Verlag, 1985.) Implications: To encourage
users to change their security behavior, the system must
create messages that affect users’ intentions; in turn, the
intentions are changed by influencing users’ attitudes through
identification of social norms and behavioral control. The
users must perceive that they can control the successful
completion of their tasks securely and safely.
5.4.5. Stages of change model
The Stages of Change Model assesses a person’s readiness to
initiate a new behavior, providing strategies or processes of
change to guide her through the stages of change to action and
maintenance. Change is a process involving progression
through six stages: precontemplation, contemplation
(thoughts), preparation (thoughts and action), action (actual
behavior change), maintenance, and termination. Therefore,
interventions to change behaviors must match and affect the
appropriate stage. To progress through the early stages, people
apply cognitive, affective, and evaluative processes. As people
move toward maintenance or termination, they rely more on
commitments and conditioning. (Examples: J.O. Prochaska, J.C. Norcross and C.C. DiClemente, Changing for Good: The Revolutionary Program That Explains the Six Stages of Change and Teaches You How to Free Yourself From Bad Habits. New York: W. Morrow, 1994; J.O. Prochaska and C.C. DiClemente, "The Transtheoretical Approach," in J.C. Norcross and M.R. Goldfried, eds., Handbook of Psychotherapy Integration, 2nd ed., New York: Oxford University Press, 2005, pp. 147–171.) Implications: To change security-related behaviors, it is necessary
first to assess the users’ stage before developing processes to
elicit behavior change. For example, getting software devel-
opers to implement security in the code development life
cycle, and especially throughout the life cycle, is notoriously
difficult. Currently, much effort is directed at moving devel-
opers directly to stage four (action), without appropriate
attention to the importance of the earlier stages.
5.4.6. Precaution-adoption process theory
Theories that try to explain behavior by examining the
perceived costs and benefits of behavior change work only if
the person has enough knowledge or experience to have
formed a belief. The Precaution-Adoption Process Model seeks
to understand and explain behavior by looking at seven
consecutive stages: unaware; unengaged; deciding about
acting; decided not to act; decided to act; acting; and mainte-
nance. People should respond better to interventions that are
matched to the stage they are in. (Examples: N.D. Weinstein, "The Precaution Adoption Process," Health Psychology, 7(4), 1988, pp. 355–386; N.D. Weinstein and P.M. Sandman, "A Model of the Precaution Adoption Process: Evidence From Home Radon Testing," Health Psychology, 11(3), 1992, pp. 170–180.)
Implications: Security actions may be related to the seven
stages. It may be necessary to assess a user’s stage before
developing a process to elicit the desired behavior change.
6. Applying behavioral science findings: the
way forward
We have presented some early results that show why this
multi-disciplinary approach is likely to yield useful insights. In
this final section, we describe next steps for determining the
best ways to blend behavioral science with computer science
to yield improved cyber security. The recommended steps
involve encouraging multi-disciplinary workshops, perform-
ing empirical studies across disciplines, and building an
accessible repository of multi-disciplinary findings.
6.1. Workshops bridging communities
Multi-disciplinary work can be challenging for many reasons.
First, as noted by participants in a National Academy of
Science workshop (2010), there are inconsistent terminolo-
gies and definitions across disciplines. Particularly for words
like “trust” or “risk,” two different disciplines can use the
same word but with very different meanings and assump-
tions. Second, there are few incentives to publish findings
across disciplines, so many researchers work in distinct and
separate areas that do not customarily share information. For
this reason, we recommend the establishment of workshops
that bridge communities so that each community’s knowl-
edge can benefit the others’.
In July 2010, the Institute for Information Infrastructure
Protection (I3P) held a two-day workshop to bring together
members of the behavioral science community and the cyber
security community, examine how to move successfully-
evaluated findings into practice, and establish groups of
researchers willing to empirically evaluate promising findings
and assess their applicability to cyber security. The workshop
created an opportunity for the formation of groups of
researchers and practitioners eager to evaluate and adopt
more effective ways of integrating behavioral science with
cyber security. That is, the workshop is the first step in what
we hope will be a continuing partnership between computer
science and behavioral science that will improve the effec-
tiveness of cyber security.
The output of the workshop included:
� Identification of existing findings that can enhance cyber
security in the near term.
� Identification of potential behavioral science findings that
could be applied but necessitate empirical evaluations of
their effects on cyber security.
� Identification of cyber security areas and problems where
application of concepts from behavioral science could have
a positive impact.
� Establishment of an initial repository of information about
behavioral science and cyber security.
As a result of this workshop, several spear phishing studies were conducted in university and industrial settings. In addition, an incentives study was designed for future administration to demonstrate empirically what kinds of incentives (e.g., money, convenient parking spots, or public recognition) would most motivate users to practice good cyber hygiene. A
second workshop was held in October 2011 to report on the
studies’ findings and to organize further studies.
Workshops of this kind can not only act as catalysts for the
initiation of new research but can also encourage continued
interaction and cooperation across disciplines. Similar efforts
are being encouraged in several areas of cyber security,
particularly in usable security (Pfleeger, 2011).
6.2. Empirical evaluation across disciplines
We hope to expand the body of knowledge on the interactions
between human behavior and cyber security via investiga-
tions that will produce both innovative experimental designs
and data that can form the basis of experimental replication
and tailoring of applications to particular situations. However,
there are challenges to performing this type of research,
especially when resources are constrained. For example, it is
not usually possible to build the same system twice (one as
control, one as treatment) and compare the results, so good
experimental design is crucial in producing strong, credible
results with sufficient levels of external validity.
Empirical evaluation of the effects of change on cyber secu-
rity involves many things, including identifying variables,
controlling for bias and interaction effects, and determining the
degree to which results can be generalized. These are funda-
mental principles of the empirical method but are often not
understood or not applied appropriately. We hope to produce
more comprehensive guidelines for experimental design, aimed at assisting cyber security practitioners and behavioral scientists
in designing evaluations that will produce the most meaningful
results. These guidelines will highlight several issues:
� The need to design a study so that confounding variables
and bias are reduced as much as possible.
� The need to state the experimental hypothesis and identify
dependent and independent variables.
� The need to identify the research participants and deter-
mine which population is under scrutiny.
� The need for clear and complete sampling procedures, so
that the sample represents the identified population.
� The need to describe experimental conditions in enough
detail so that the reader can understand the study and also
replicate it.
� The need to do an effective post-experiment debriefing,
especially for studies where the actual intent of the study is
not revealed until the study is completed.
There are several examples of good experimental design for
studies at the intersection of behavioral science and cyber
security. For instance, many lessons were learned in an
experiment focused on insider threat (Caputo et al., 2009). In
this study, the researchers encountered several challenges in
selecting the best sample and following strict empirical proce-
dures. They documented the importance of pilot testing their
experimental design before engaging their targeted partici-
pants. In particular, it was difficult to get corporate participants
to perform the experimental tasks with the same motivation
that the average users have when doing their regular jobs.
Therefore, the researchers used pilot testing to determine what
would motivate participants. Then, the motivation was built
into the study design. Although this study used corporate
employees, real networks, and plausible tasks to make the
research environment as realistic as possible, generating data
sets in any controlled situation reduced the researchers’ ability
to generalize the findings to complex situations.
There are many studies that can benefit from better data
collection and better study design. Pfleeger et al. (2006) suggest
a roadmap for improved data collection and analysis of cyber
security information. In addition, Cook and Pfleeger (2010)
describe how to build improvements on existing data sets
and findings.
6.3. Repository of findings
We are building a repository of relevant findings, including
data sets where available, to serve at least two purposes. First,
it will provide the basis for decision-making about when and
how to include behavioral considerations in the specification,
design, construction and use of cyber security products and
processes. Second, it will enable researchers and practitioners
to replicate studies in their own settings, to confirm or refute
earlier findings and to tailor methods to particular needs and
constraints. Such information will lay the groundwork for
evidence-based cyber security.
This paper reports on the findings of our initial foray into
the blending of behavioral science and cyber security. In
recent years, there has been much talk about inviting both
disciplines to collaborate, but little work has been done to
open discussion broadly to both communities. Our workshops
took bold and broad steps, and it is hoped that the activities
reported here, built on the shoulders of work performed in
both communities over the past two decades, will encourage
others to join us in thinking more expansively about cyber
security problems and possible solutions. In particular, we
encourage others engaged in research across disciplines to
contact us, so that we can establish virtual and actual links
that move us toward understanding and implementation of
improved cyber security.
Acknowledgments
This work was sponsored by grants from the Institute for
Information Infrastructure Protection at Dartmouth College,
under award number 2006-CS-001-000001 from the US
Department of Homeland Security, National Cyber Security
Directorate.
References

Amos Deborah. Challenge: airport screening without discrimination. Morning Edition, National Public Radio. Available at: http://www.npr.org/templates/story/story.php?storyId=122556071; January 14, 2010.
Baier Annette. Trust and antitrust. Ethics 1986;96(2):231–60.
Baker Peter, Hulse Carl. U.S. had early signals of terror plot, Obama says. New York Times; December 30, 2009. p. 1.
Bell David E, La Padula Leonard J. Secure computing systems: mathematical foundations. MITRE Technical Report MTR-2547. Bedford, MA: The MITRE Corporation; 1973.
Biba Kenneth J. Integrity considerations for secure computer systems. MITRE Technical Report MTR-3153. Bedford, MA: The MITRE Corporation; April 1977.
Biddle Robert, Sonia Chiasson, van Oorschot PC. Graphical passwords: learning from the first generation. Technical Report 09-09. Ottawa, Canada: School of Computer Science, Carleton University; 2009.
Bond Michael. Comments on GrIDSure authentication. Available at: http://www.cl.cam.ac.uk/~mkb23/research/GridsureComments.pdf; 28 March 2008.
Brostoff Sacha, Sasse M Angela. Are passfaces more usable than passwords? A field trial investigation. In: McDonald S, Waem Y, Cockton G, editors. People and computers XIV – usability or else, Proceedings of HCI 2000. Sunderland, UK: Springer; 2000. p. 405–24.
Brostoff Sacha, Sasse M Angela. Ten strikes and you're out: increasing the number of login attempts can improve password usability. In: Proceedings of CHI 2003 Workshop on Human-Computer Interaction and Security Systems; 2003. Ft. Lauderdale, FL.
Burke Cody. Intelligence gathering meets information overload. Basex TechWatch. Available at: http://www.basexblog.com/2010/01/14/intelligence-gathering-meets-io/; 14 January 2010.
Caputo Deanna, Maloof Marcus, Stephens Gregory. Detecting insider theft of trade secrets. IEEE Security and Privacy November–December 2009;7(6):14–21.
Castelfranchi Cristiano, Falcone Rino. Principles of trust for MAS: cognitive anatomy, social importance, and quantification. In: Proceedings of the Third International Conference on Multi Agent Systems; 1998.
Castelfranchi Cristiano, Falcone Rino. Social trust: a cognitive approach. In: Castelfranchi Cristano, Tan Yao-Hua, editors. Trust and deception in virtual societies. Amsterdam: Kluwer Academic Publishers; 2002.
Chase WG, Simon HA. Perception in chess. Cognitive Psychology 1973;4(1):55–81.
Cheshire Coye, Cook Karen. The emergence of trust networks under uncertainty: implications for internet interactions. Analyse & Kritik 2004;26:220–40.
Chiasson Sonia, Alain Forget, Elizabeth Stobert, van Oorschot Paul C, Biddle Robert. Multiple password interference in text passwords and click-based graphical passwords. ACM Computer and Communications Security (CCS); November 2009:500–11.
Clements Paul, Northrup Linda. Software product lines: practices and patterns. Reading, MA: Addison-Wesley; 2001.
Consumer's Union. ID leaks: a surprising source is your government at work. Consumer Reports. Available at: http://www.consumerreports.org/cro/money/credit-loan/identity-theft/government-id-leaks/overview/government-id-leaks-ov.htm; September 2008.
Cook Ian P, Pfleeger Shari Lawrence. Security decision support challenges in data collection and use. IEEE Security and Privacy May–June 2010;8(3):28–35.
Datta Pratim, Chatterjee Sutirtha. The economics and psychology of consumer trust in intermediaries in electronic markets: the EM-trust framework. European Journal of Information Systems February 2008;17(1):12–28.
Dhamija Rachna, Perrig Adrian. Déjà Vu: a user study using images for authentication. In: Proceedings of the 9th USENIX Security Symposium; August 2000. Denver, CO.
Ellsberg Daniel J. Risk, ambiguity and decision. RAND Report D-12995. Santa Monica, CA: RAND Corporation; 1964.
Everitt Katherine, Bragin Tanya, Fogarty James, Kohno Tadayoshi. A comprehensive study of frequency, interference, and training of multiple graphical passwords. In: ACM Conference on Human Factors in Computing Systems (CHI); April 2009.
Jhawar Ravi, Inglesant Philip, Sasse Martina Angela, Courtois Nicolas. Make mine a quadruple: strengthening the security of graphical one-time PIN authentication. In: Proceedings of the Fifth International Conference on Network and Systems Security; September 6–8, 2011. Milan, Italy.
Klein Gary A. Sources of power: how people make decisions. Cambridge, MA: MIT Press; 1998.
Klein Gary A. Streetlights and shadows: searching for the keys to adaptive decision making. Cambridge, MA: MIT Press; 2009.
Klein GA, Calderwood R. Decision models: some lessons from the field. IEEE Transactions on Systems, Man and Cybernetics September/October 1991;21(5):1018–26.
Klein Gary A, Salas Eduardo, editors. Linking expertise and naturalistic decision making. Erlbaum; 2001.
Lamandé Emmanuelle. GrIDSure authenticates Microsoft's latest remote application platform. Global Security Mag. Available at: http://www.globalsecuritymag.com/GrIDsure-authenticates-Microsoft-s,20100427,17307.html; 27 April 2010.
Leventhal H, Meyer D, Nerenz DR. The Common Sense Representation of Illness Danger. In: Rachman S, editor. Contributions to Medical Psychology. New York: Pergamon Press; 1980. p. 17–30.
Lerner JS, Tiedens LZ. Portrait of the angry decision maker: how appraisal tendencies shape anger's influence on cognition. Journal of Behavioral Decision Making 2006;19:115–37 (Special Issue on Emotion and Decision Making).
Libicki Martin C, Pfleeger Shari Lawrence. Collecting the dots: problem formulation and solution elements. RAND Occasional Paper OP-103-RC. Santa Monica, CA: RAND Corporation; 2004.
Mack A, Rock I. Inattentional blindness. Cambridge, MA: MIT Press; 1998.
Mayo Deborah, Hollander Rachelle, editors. Acceptable evidence: science and values in risk management. Oxford University Press; 1991.
Miller George A. The magic number seven plus or minus two: some limits on our capacity to process information. Psychological Review 1956;63:81–97.
National Academy of Science. Toward better usability, security and privacy of information technology. Report of a Workshop. Washington, DC: National Academies Press; 2010.
Ofsted (U.K. Office for Standards in Education, Children's Services and Skills). The safe use of new technologies. Report 090231. Manchester, UK: Ofsted; February 2010.
Pfleeger Shari Lawrence. Draft report on the NIST workshop. Available at: http://www.thei3p.org/docs/publications/436.pdf; March 2011.
Pfleeger Shari Lawrence, Predd Joel, Hunker Jeffrey, Bulford Carla. Insiders behaving badly: addressing bad actors and their actions. IEEE Transactions on Information Forensics and Security March 2010;5(2).
Pfleeger Shari Lawrence, Rue Rachel, Horwitz Jay, Balakrishnan Aruna. Investing in cyber security: the path to good practice. Cutter IT Journal January 2006;19(1):11–8.
Predd Joel, Pfleeger Shari Lawrence, Hunker Jeffrey, Bulford Carla. Insiders behaving badly. IEEE Security and Privacy July/August 2008;6(4):66–70.
Riegelsberger Jens, Sasse M Angela, McCarthy John D. The researcher's dilemma: evaluating trust in computer-mediated communication. International Journal of Human-Computer Studies 2003;58(6):759–81.
Riegelsberger Jens, Sasse M Angela, McCarthy John D. The mechanics of trust: a framework for research and design. International Journal of Human-Computer Studies 2005;62(3):381–422.
Rock I, Engelstein P. A study of memory for visual form. American Journal of Psychology 1959;72:221–9.
Sasse M Angela. GrIDsure usability trials. Available at: http://www.gridsure.com/uploads/UCL%20Report%20Summary%20.pdf; 2007.
Sasse M Angela, Brostoff Sacha, Weirich Dirk. Transforming the 'weakest link': a human-computer interaction approach to usable and effective security. In: Temple R, Regnault J, editors. Internet and wireless security. London: IEE Press; 2002. p. 243–58.
Sasse M Angela, Flechais Ivan. Usable security: why do we need it? How do we get it? In: Cranor Lorrie Faith, Garfinkel Simson, editors. Security and usability. Sebastopol, CA: O'Reilly Publishing; 2005. p. 13–30.
Scandura JM. Deterministic theorizing in structural learning: three levels of empiricism. Journal of Structural Learning 1971;3:21–53.
Schneier Bruce. Semantic attacks: the third wave of network attacks. In: Crypto-gram newsletter. Available at: http://www.schneier.com/crypto-gram-0010.html; October 15, 2000.
Simons Daniel J, Chabris CF. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception 1999;28:1059–74.
Simons Daniel J, Jensen Melinda S. The effects of individual differences and task difficulty on inattentional blindness. Psychonomic Bulletin & Review 2009;16(2):398–403.
Slovic Paul, editor. The perception of risk. London: Earthscan Ltd.; 2000.
Smith Walter, Hill Becky, Long John, Whitefield Andy. A design-oriented framework for modelling the planning and control of multiple task work in secretarial office administration. Behaviour and Information Technology 1997;16(3):161–83.
Spira Jonathan B. The Christmas day terrorism plot: how information overload prevailed and counterterrorism knowledge sharing failed. Basex TechWatch. Available at: http://www.basexblog.com/category/analysts/jonathan-b-spira/; 4 January 2010.
Standing L. Learning 10,000 pictures. Quarterly Journal of Experimental Psychology 1973;27:207–22.
Tenner Edward. Why things bite back: technology and the revenge of unintended consequences. Vintage Press; 1991.
Tversky A, Kahneman D. The Framing of Decisions and the Psychology of Choice. Science 1981;211:453–8.
Underwood BJ. Interference and forgetting. Psychological Review 1957;64:49–60.
Virginia Tech. When users resist: how to change management and user resistance to password security. Pamplin, Fall 2011. Available at: http://www.magazine.pamplin.vt.edu/fall11/passwordsecurity.html.
Wixted John T. The psychology and neuroscience of forgetting. Annual Review of Psychology 2004;55:235–69.
Yamagishi T, Matsuda M. The role of reputation in open and closed societies: an experimental study of online trading. Center for the Study of Cultural and Ecological Foundations of Mind; 2003. Working Paper Series 8.
Zviran Moshe, Haga William J. Cognitive passwords: the key to easy access control. Computers and Security 1990;8(9):723–36.
Shari Lawrence Pfleeger is the Research Director for the Institute for Information Infrastructure Protection (I3P), a consortium of universities, national laboratories and non-profits dedicated to improving IT security, reliability and dependability. Pfleeger earned a PhD in information technology and engineering from George Mason University.

Deanna Caputo, a lead behavioral psychologist at the MITRE Corporation, investigates questions addressing the intersection of social science and computer science, such as insider threat and effective ways to change behavior. She holds a bachelor's degree in psychology from Santa Clara University and a PhD in social and personality psychology from Cornell University.
Homework 2
Due a week after the first class at 11:59 pm
Read the assigned articles in D2L. Answer the questions below.
The answers must demonstrate that you have substantively
engaged with the material and you haven't simply googled the
question and copy/pasted the answer.
1. How should people make decisions, according to economists?
2. Are you more likely to be killed by a bear or a bee? Why did
you answer that way? Why might someone answer the other
way?
3. Scenario: Bob, who just got a new laptop, is working on an
important project that requires him to use it. If he has a limited
amount of time to consider whether to install a virus scanner or
not, what will be the first cost or benefit to come to mind? What
decision is he likely to make based on that thought?
4. Pick one of the heuristics we discussed in class and come up
with another example of how it might apply in cybersecurity.
5. Alice is responsible for all cybersecurity decisions in her
organization. How should she allocate her attention, and how is
she likely to allocate her attention? Consider some examples: a
vulnerability that affects a system that does not connect to the
Internet, phishing attempts on her users, and Advanced
Persistent Threats from nation state actors. Which should she
prioritize, and which is she likely to prioritize?
6. How can an end-user tell whether their account is secure?
What factors might lead a person to believe that an account was
secure when it wasn’t?
This document is licensed with a Creative Commons
Attribution 4.0 International License ©2017
More Related Content

PDF
Twin Behavioral Security Chung Galletta 2014 IFIP Roode Revised.pdf
PPT
Measuring Risk - What Doesn’t Work and What Does
DOCX
ISSC451 Cybercrime.docx
PPT
6-Eiser-White-II (1).pptjvlblkblkblklkllkb
DOCX
Running head ANNOTATED BIBLIOGRAPHYANNOTATED BIBLIOGRAPHY2.docx
PDF
Favourite Film Essay
PDF
Let me guess covid will be in all top risk studies this year
PDF
Law and Order: helping hospital and doctors recognize and manage risk
Twin Behavioral Security Chung Galletta 2014 IFIP Roode Revised.pdf
Measuring Risk - What Doesn’t Work and What Does
ISSC451 Cybercrime.docx
6-Eiser-White-II (1).pptjvlblkblkblklkllkb
Running head ANNOTATED BIBLIOGRAPHYANNOTATED BIBLIOGRAPHY2.docx
Favourite Film Essay
Let me guess covid will be in all top risk studies this year
Law and Order: helping hospital and doctors recognize and manage risk

Similar to RATIO ANALYSIS RATIO ANALYSIS Note Please change the column names.docx (19)

PDF
Data Protection Impact Assessment for Cloud Users
PDF
DOES DIGITAL NATIVE STATUS IMPACT END-USER ANTIVIRUS USAGE?
DOCX
The Journal of forensic PsychiaTry & Psychology, 2017Vol. 28.docx
PDF
cybersecurity-insurance-report 2024 doc.pdf
PDF
Risking Other People’s Money: Experimental Evidence on Bonus Schemes, Competi...
PDF
Database Security Is Vital For Any And Every Organization
DOC
RCDM Dissertation 2 - Laurence Horton (129040266)
PDF
Comments to FTC on Mobile Data Privacy
PDF
Existential Risk Prevention as Global Priority
DOCX
BOS 3651, Total Environmental Health and Safety Managemen.docx
PDF
CHI abstract camera ready
PPTX
White Paper Presentation
DOCX
Sonal Pirani Discussion  When the end-users are involved in t.docx
DOCX
Healthcares Vulnerability to Ransomware AttacksResearch questio
PPTX
A Pluralistic Understanding of Sustainable Engineering Science
PDF
Understanding Construction Workers’ Risk Decisions Using Cognitive Continuum ...
PDF
APPLYING THE HEALTH BELIEF MODEL TO CARDIAC IMPLANTED MEDICAL DEVICE PATIENTS
PDF
CONCEPTUALIZING AI RISK
DOCX
Discussion Questions The difficulty in predicting the future is .docx
Data Protection Impact Assessment for Cloud Users
DOES DIGITAL NATIVE STATUS IMPACT END-USER ANTIVIRUS USAGE?
The Journal of forensic PsychiaTry & Psychology, 2017Vol. 28.docx
cybersecurity-insurance-report 2024 doc.pdf
Risking Other People’s Money: Experimental Evidence on Bonus Schemes, Competi...
Database Security Is Vital For Any And Every Organization
RCDM Dissertation 2 - Laurence Horton (129040266)
Comments to FTC on Mobile Data Privacy
Existential Risk Prevention as Global Priority
BOS 3651, Total Environmental Health and Safety Managemen.docx
CHI abstract camera ready
White Paper Presentation
Sonal Pirani Discussion  When the end-users are involved in t.docx
Healthcares Vulnerability to Ransomware AttacksResearch questio
A Pluralistic Understanding of Sustainable Engineering Science
Understanding Construction Workers’ Risk Decisions Using Cognitive Continuum ...
APPLYING THE HEALTH BELIEF MODEL TO CARDIAC IMPLANTED MEDICAL DEVICE PATIENTS
CONCEPTUALIZING AI RISK
Discussion Questions The difficulty in predicting the future is .docx

More from audeleypearl (20)

DOCX
Mr. Bush, a 45-year-old middle school teacher arrives at the emergen.docx
DOCX
Movie Project Presentation Movie TroyInclude Architecture i.docx
DOCX
Motivation and Retention Discuss the specific strategies you pl.docx
DOCX
Mother of the Year In recognition of superlative paren.docx
DOCX
Mrs. G, a 55 year old Hispanic female, presents to the office for he.docx
DOCX
Mr. Rivera is a 72-year-old patient with end stage COPD who is in th.docx
DOCX
Mr. B, a 40-year-old avid long-distance runner previously in goo.docx
DOCX
Moving members of the organization through the change process ca.docx
DOCX
Mr. Friend is acrime analystwith the SantaCruz, Califo.docx
DOCX
Mr. E is a pleasant, 70-year-old, black, maleSource Self, rel.docx
DOCX
Motor Milestones occur in a predictable developmental progression in.docx
DOCX
  • 4. computer-related behavior that suggests individuals make a rational choice to either engage in safe or unsafe cyber behavior. In their model, individual behavior is driven by perceptions of the usefulness of safe and unsafe behaviors and the consequences of each. More specifically, the model captures how information sources, the user’s base knowledge of cyber security, the user’s relevant perceptions (e.g., interpretations of the applicability of the knowledge), and the user’s risk attitude influence individual cyber decision making. H. Rosoff (corresponding author), Sol Price School of Public Policy, University of Southern California, Los Angeles, CA, USA
  • 5. e-mail: [email protected] H. Rosoff · J. Cui · R. S. John Center for Risk and Economic Analysis of Terrorism Events (CREATE), University of Southern California, Los Angeles, CA, USA J. Cui · R. S. John Department of Psychology, University of Southern California, Los Angeles, CA, USA Environ Syst Decis (2013) 33:517–529 DOI 10.1007/s10669-013-9473-2 This paper reports on two behavioral experiments, using over 500 respondents, designed to explore whether and how recommended cyber security decision-making responses depend on gain–loss framing and salience of prior cyber dilemma experiences. More specifically, we explored whether priming individuals to recall a prior cyber-related experience influenced their decision to select either a safe versus risky option in responding to a hypo-
  • 6. thetical cyber dilemma. We hypothesized that recall of a hit experience involving negative consequences would increase feelings of vulnerability, even more so than a near-miss, and lead to the endorsement of a risk averse option. This result has been reported in the disaster liter- ature, which has shown that individual decision making depends on prior experiences, including hits, near-misses (events where a hazardous or fatal outcome could have occurred, but do not), and false alarms (Barnes et al. 2007; Dillon et al. 2011; Siegrist and Gutscher 2008). Further- more, damage from past disasters has been shown to sig- nificantly influence individual perceptions of future risk and to motivate more protective and mitigation-related behavior (Kunreuther and Pauly 2004; Siegrist and Gut- scher 2008; Slovic et al. 2005). We anticipated that the effect of prior near-miss expe- riences would depend on the interpretation of the prior near-miss event by the respondent. This expectation was
  • 7. based on near-miss research that has shown that future-intended mitigation behavior depends greatly on the perception of the near-miss event outcome. Tinsley et al. (2012) describe two near-miss types—a resilient and a vulnerable near-miss. A resilient near-miss is seen as an event that did not occur. In these situations, individuals were found to underestimate the danger of subsequent events and were more likely to engage in risky behavior by choosing not to take protective action. A vulnerable near-miss occurs when a disaster almost happened. New information is incorporated into the assessment that counters the basic “near-miss” definition and results in the individual being more inclined to engage in risk averse behavior (the opposite of the behavior related to a resilient near-miss interpretation). In the cyber context, we expected that respondents who fail to recognize a prior near-miss as a cyber threat would be more likely to recommend the risky course of action. However, if respondents view a recalled near-miss as evidence of vul-
  • 8. nerability, then they would be more inclined to endorse the safer option. In the case of a recalled prior false-alarm experience, one hypothesis known as the ‘‘cry-wolf effect’’ (Breznitz 2013) suggests that predictions of disasters that do not materialize affect beliefs about the uncertainty associated with future events. In this context, false alarms are believed to create complacency and reduce willingness to respond to future warnings, resulting in a greater likelihood of engaging in risky behavior (Barnes et al. 2007; Donner et al. 2012; Dow and Cutter 1998; Simmons and Sutter 2009). In contrast, there is research showing that the public may have a higher tolerance for false alarms than antici- pated. This is because of the increased credibility given to the event due to the frequency with which it is discussed, both through media sources and informal discussion, thus, suggesting that false alarms might increase individuals’ willingness to be risk averse (Dow and Cutter 1998). We
  • 9. anticipated that recall of prior false alarms would likely make respondents feel less vulnerable and more willing to prefer the risky option, compared with the near-miss and hit conditions. In our research, we also anticipated that there would be some influence of framing on individual cyber decision making under risk. Prospect theory and related empirical research suggest that decision making under risk depends on whether potential outcomes are perceived as a gain or as a loss in relation to a reference point (Kahneman and Tversky 1979; Tversky and Kahneman 1986). A common finding in the literature on individual preferences in deci- sion making shows that people tend to avoid risk under gain frames, but seek risk when outcomes are framed as a loss. Prospect theory is discussed in the security literature, but empirical studies in cyber security contexts are limited (Acquisti and Grossklags 2007; Garg and Camp 2013;
  • 10. Helander and Khalid 2000; Shankar et al. 2002; Verendel 2008). Among the security studies that have been con- ducted, the results are mixed. The work by Schroeder and colleagues on computer information security presented at the 2006 Information Resources Management Association International Conference found that decision makers were risk averse in the gain frame, yet they showed no risk preference in the loss frame. Similarly, in a 1999 presen- tation about online shopping behavior by Helander and Du at the International Conference on TQM and Human Fac- tors, perceived risk of credit card fraud and the potential for price inflation did not negatively affect purchase intention (loss frame), while perceived value of a product was found to positively affect purchase intention. We anticipated that gain-framed messages in cyber dilemmas would increase endorsement of protective responses and loss-framed messages would have no effect on the endorsement of protective options.
  • 11. We also explored how subject variables affect the strength and/or the direction of the relationship between the manipulated variables, prior experience and gain–loss framing, and the dependent variable, endorsement of safe or unsafe options in response to cyber dilemmas. For example, one possibility is that the relationship between prior experience and risk averse behavior is greater for individuals with higher self-reported victimization given 518 Environ Syst Decis (2013) 33:517–529 123 their increased exposure to cyber dilemma consequences. Another possibility is that the relationship between the gain frame and protective behavior would be less for younger individuals because they are more familiar and comfortable with the nuances of internet security. We anticipated that there would be some difference in the patterns of response as a function of sex, age, income, education, job domain,
  • 12. and self-reported victimization. The next section of this article describes the methods, results, and a brief discussion for Experiment I, and Sect. 3 describes the methods, results, and a brief discussion for Experiment II. The paper closes with a discussion of findings across both experiments and how these results suggest approaches to enhance and improve cyber security by taking into account user decision making. 2 Experiment I We conducted an experiment of risky cyber dilemmas with two manipulated variables, gain–loss framing and primed recall of a prior personal near-miss experience, to evaluate individual cyber user decision making. The cyber dilem- mas were developed to capture commonly confronted risky cyber choices faced by individual users. In addition, in Experiment I, the dependent variable focused on the advice the respondent would provide to their best friend so as to encourage more normative thinking about what might be
  • 13. the correct response to the cyber dilemma. As such, each cyber scenario described a risky choice dilemma faced by the respondent’s ‘‘best friend,’’ and the respondent was asked to recommend either a safe but inconvenient course of action (e.g., recommend not downloading the music file from an unknown source), or a risky but more convenient option (e.g., recommend downloading the music file from an unknown source). 2.1 Method 2.1.1 Design overview In Experiment I, four cyber dilemmas were developed to evaluate respondents’ risky choice behavior using a 2 (recalled personal near-miss experience or no recall control condition) by 2 (gain versus loss-framed message) mixed model factorial design with two dichotomous subject variables: sex and self-reported victimization. Each par- ticipant received all four dilemmas in a constant order. Within this order, each of the four treatment conditions was
  • 14. paired with each of the four dilemmas and counterbalanced such that each of the dilemmas was randomly assigned to each of the four treatment conditions. After each cyber dilemma, respondents were asked to respond on a 6-point scale (1 = strongly disagree to 6 = strongly agree) whether they would advise their ‘‘best friend’’ to proceed in taking a risky course of action. Responses of 1–3 indicated endorsement of the safe but inconvenient option, while responses of 4–6 indicated endorsement of the risky but expedient option. Following the four cyber dilemmas, respondents were given four attention check questions to determine whether they were reading the cyber scenarios carefully. In addition, basic demographic information was collected as well as infor- mation on each respondent’s personal experience and self- reported victimization, if any, with the topics of the cyber dilemmas. 2.1.2 Scenarios and manipulations
  • 15. The four cyber dilemma scenarios involved the threat of a computer virus resulting from the download of a music file, the use of an unknown USB drive device, the download of a Facebook application, and the risk of financial fraud from an online purchase. Gain–loss framing and primed recall of a prior personal experience were manipulated independent variables. The framing messages were used to describe the potential outcome of the risky cyber choice. The gain- framed messages endorsed the safe, more protective rec- ommendation. For example, for the download of a music file scenario, the gain frame was worded as ‘‘If she presses ‘do not proceed,’ she may avoid the risk of acquiring a virus that will cause serious damage to her computer.’’ Conversely, the loss-framed messages endorsed the risky option/choice. For the download of a music file scenario, the loss frame was worded as ‘‘If she presses ‘proceed,’ she may risk acquiring a virus that will cause serious damage to her computer.’’ The experimental design also included a
  • 16. manipulation of primed recall of a prior personal experi- ence. Respondents either recalled a near-miss experience of their own before advising their friend, or did not (a control condition). In each near-miss experience, the respondent’s dilemma was similar to the situation faced by their best friend and the consequences of the threat were benign. A complete description of the four scenarios, including the near-miss and gain–loss framing manipulations, is pro- vided in Table 1. 2.1.3 Subjects The experiment was conducted using the University of Southern California’s Psychology Subject Pool. Students participated for course credit. Of the 365 students who participated in the experiment, 99 were omitted for not answering all 4 of the attention check questions correctly, resulting in a sample of 266 respondents. Most, 203 (76 %) Environ Syst Decis (2013) 33:517–529 519 123
  • 17. Table 1 Summary of four scenarios and manipulations (Experiment I) Scenario 1: Music File Scenario 2: USB Scenario 3: Facebook Scenario 4: Rare Book Scenario Your best friend has contacted you for advice. She wants to open a music file linking to an early release of her favorite band’s new album. When she clicks on the link, a window pops up indicating that she needs to turn off her firewall program in order to access the file Your best friend has contacted you for advice. Her computer keeps crashing because it is overloaded with programs, documents and media files. She consults a
  • 18. computer technician who advises her to purchase a 1 terabyte USB drive (data storage device) to free up space on her computer. She does her research and narrows down the selection to two choices Your best friend has contacted you for advice. She has opened her Facebook page to find an app request for a game that her friends have been really excited about. In order to download the app, access to some of her personal information is required including her User ID and other information from her profile Your best friend has contacted you
  • 19. for advice. She is going to buy a rare book from an unknown online store. The book is highly desirable, expensive and only available from this online store’s website. By deciding to purchase the book online with her credit card, there is a risk that her personal information will be exploited which can generate unauthorized credit card charges. Her credit card charges $50 for the investigation and retrieval of funds expended when resolving fraudulent credit card issues Gain framing
  • 20. If she presses ‘‘do not proceed,’’ she may avoid the risk of acquiring a virus that will cause serious damage to her computer The first USB drive when used on a computer other than your own has a 10 % chance of becoming infected with a virus that will delete all the files and programs on the drive. The second drive is double the price, but has less than a 5 % chance of becoming infected with a virus when used on a computer other than your own If she chooses not to agree to the terms of the app, she is protecting her private
  • 21. information from being made available to the developer of the app If she decides not to buy the book, she may save up to $50 and the time spent talking with the credit card company Loss framing If she presses ‘‘proceed,’’ she may risk acquiring a virus that will cause serious damage to her computer The first USB drive when used on a computer other than her own has a 5 % chance of becoming infected with a virus that will delete all the files and programs
  • 22. on the drive. The second drive is half the price but has more than a 5 % chance of becoming infected with a virus when used on a computer other than her own If she chooses to agree to the terms of the app, she risks the chance of her private information being made available to the developer of the app If she decides to buy the book, she may lose up to $50 and the time spent talking with the credit card company Near-miss experience As you consider how to advise
  • 23. your friend, you recall that you were confronted by a similar situation in the past. You attempted to open a link to a music file and a window popped up saying that you need to turn off your firewall program in order to access the file. You pressed ‘‘proceed’’ and your computer immediately crashed. Fortunately, after restarting your computer everything was functioning normally again As you consider how to advise your friend, you recall that your USB drive recently was infected with a virus after being plugged into a computer at work. You
  • 24. contacted a computer technician to see if there was any way to repair the drive. The technician was able to recover all the files and told you that you were really lucky because normally such drives cannot be restored As you consider how to advise your friend, you recall that you once agreed to share some of your personal information in order to download an app on Facebook. The developers of the app made your User ID publicly available and because of this you started to receive messages from strangers on your profile page. You were very upset about the
  • 25. invasion of your privacy. Fortunately, you discovered that you could change the privacy settings of your profile so that only your friends could access your page As you consider how to advise your friend, you recall that you once purchased a rare book from an unknown online store. You were expecting the book to arrive 1 week later. About 2 weeks later, you had yet to receive the book. You were very concerned that you had done business with a fake online store. You contacted the store’s customer service who
  • 26. fortunately tracked down the book’s location and had it shipped with overnight delivery. Question Below please indicate your level of agreement with the statement ‘‘You will advise your best friend to press ‘‘proceed’’ and risk acquiring a virus that will cause serious damage to her computer’’ Below please indicate your level of agreement with the statement ‘‘You will advise your best friend to buy the first USB drive that has a 10 % chance of becoming infected with a virus.’’/’’You will advise your best friend to buy the second
  • 27. USB drive that has a greater than 5 % chance of becoming infected with a virus’’ Below please indicate your level of agreement with the statement ‘‘You will advise your best friend to download the app and risk having her private information made available to the app developer’’ Below please indicate your level of agreement with the statement: ‘‘You will advise your best friend to purchase the book online and risk having her personal information exploited’’ 520 Environ Syst Decis (2013) 33:517–529 123
  • 28. of the respondents, were female. Respondents ranged in age from 18 to 41 years (the 95th percentile is 22 years old). Table 2 shows a summary of personal experience and self-reported victimization associated with each of the four cyber dilemmas. All respondents reported having personally experienced at least one of the four cyber dilemmas. Twenty-four percent of respondents further reported being a victim of one or more of the four cyber dilemmas. We coded whether the respondent had ever been victimized by one of the four scenarios as a variable of self-reported victimization. 2.2 Results Raw responses (1–6) were centered around the midpoint (3.5) such that negative responses indicate endorsement of the safe option, and positive responses indicate endorsement of the risky option. Mean endorsement responses for each of the four treatment conditions are displayed in Fig. 1. The negative means in all four conditions indicate
  • 29. that subjects were more likely to endorse risk averse actions compared with the risky alternative. 1 In addition, a 2 (recalled personal near-miss experience or no recall control condition) by 2 (gain vs. loss-framed message) by 2 (sex) by 2 (self-reported victimization) 4-way factorial ANOVA was used to evaluate respondents’ endorsement of risky versus safe options in cyber dilemmas. Analyses were specified to only include main effects and 2-way interactions with the manipulated variables. Preliminary data screening was conducted, and q–q plots indicated that the dependent variable is approximately normally distributed. Results indicated that the near-miss manipulation was significant, F (1, 260) = 7.42, p = .01, η² = .03. Respondents who received a description of a recalled near-miss experience preferred the safe but inconvenient option to the risky, more expedient option. No main effect was found for the gain–loss framing manipulation, suggesting
  • 30. that respondents were indifferent between safe versus risky decision options when the outcomes were described as gains or losses from a reference point. There also was a significant interaction between the framing and near-miss manipulations: F (1, 260) = 4.01, p = .05, η² = .02. As seen in Fig. 1, the effect of the near-miss manipulation was much larger under the gain frame compared with the loss frame. Basic demographic data also was collected to assess whether individual differences moderated the effect of the two manipulations. A significant main effect was found for sex: F (1, 260) = 3.81, p = .05, η² = .01; Cohen’s d for sex is 0.33 for gain framing without near-miss, 0.09 for gain framing with near-miss, 0.18 for loss framing without near-miss, and 0.19 for loss framing with near-miss. Female respondents were more likely to avoid risks and choose the safe option. No significant main effect was found for self-reported victimization. Also, none of the interactions were significant: sex and framing, sex and near-miss experience, victimization and framing, and victimization and near-
  • 31. miss. Table 2 Summary of experience and victimization Scenario (N = 266) Personal experience Previous victimization Music file download 205 (77 %) 40 (15 %) USB drive 110 (41 %) 12 (4.5 %) Facebook App download 253 a (95 %) 3 (1 %) Online purchase 259 (97 %) 18 (7 %) Overall (at least once) 265 b (100 %) 64 (24 %) a An app is downloaded from Facebook at least once a week
  • 32. b There is one missing value 1 Since the four scenarios are in a constant order, a second analysis was run that ignored the manipulated factors and included scenario/order as a repeated factor. A one-way repeated measure ANOVA found a significant scenario/order effect: F (3, 265) = 30.42, p < .001, η² = .10. Over time, respondents were more likely to endorse the risky option. Because the nature of the dilemma scenario and order are confounded, it is impossible to determine whether the significant main effect indicates an order effect, a scenario effect, or a combination of both. The counterbalanced design distributed all 4 combinations of framing and prior experience recall evenly across the four scenario dilemmas. Order and/or scenario effects are independent of the manipulated factors, and thus are included in the error term.
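As a rough illustration of the analysis described above, the centering of the raw ratings and a factorial ANOVA on the centered scores could be set up as follows. This is a sketch, not the authors’ code: the input file, the column names, and the purely between-subjects treatment of the factors are assumptions made for illustration.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per respondent x dilemma; all column names are hypothetical.
df = pd.read_csv("experiment1_responses.csv")

# Center the raw 1-6 agreement ratings at the scale midpoint (3.5):
# negative scores indicate endorsement of the safe option,
# positive scores endorsement of the risky option.
df["endorsement"] = df["response"] - 3.5

# Factorial ANOVA with the two manipulations, their interaction, and the two
# subject variables (a simplification of the 4-way mixed-model ANOVA reported
# in the text, which also included the other 2-way interactions).
model = smf.ols("endorsement ~ C(frame) * C(near_miss) + C(sex) + C(victimized)",
                data=df).fit()
print(sm.stats.anova_lm(model, typ=2))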
  • 35. Fig. 1 Mean endorsement of risky versus safe responses to cyber threats by gain–loss frame and prior near-miss 2.3 Discussion The results of Experiment I suggest that respondents’ cyber security recommendations to their best friend were significantly influenced by the personal experience recall manipulation. More specifically, respondents who recalled a near-miss experience were more likely to advise their best friend to avoid making the risky cyber choice compared with their no-recall counterparts. This finding is consistent with Tinsley et al.’s (2012) definition of a vulnerable near-miss—an “almost happened” event that
  • 36. makes individuals feel vulnerable and, in turn, leads to a greater likelihood of endorsing the safer option. Respondents who recalled a near-miss experience were even more likely to advise their best friend to take the safer course of action if they also received the gain message. Comparatively, the loss frame had a negligible effect on the primed recall prior experience manipulation. That is, respondents who received the loss frame were as likely to recommend the risk averse course of action to their best friend regardless of whether their prior experience was a near-miss or not. This finding suggests that people will be more risk averse when they are exposed either to a recall of a prior near-miss and/or a loss frame. The combination of no prior recall of a near-miss and a gain frame did produce less risk averse responses. This suggests a highly interac- tive, synergistic effect, in which the frame and the near- miss recall substitute for each other. In addition, sex and prior victimization were found to
  • 37. have no moderating effect on the relationship between cyber dilemma responses and the two manipulated variables. Cyber dilemma decision making was found to significantly vary by respondents’ sex, but not by self-reported victim- ization. The results suggest that females make more pro- tective decisions when faced with risky cyber dilemmas compared with males. This pattern has been replicated in cyber research in an experiment of online shopping services where males demonstrated a greater tendency to engage in risky behavior online (Milne et al. 2009). Disaster risk per- ception studies also have shown that risks tend to be judged higher by females (Flynn et al. 1994; Bateman and Edwards 2002; Kung and Chen 2012; Bourque et al. 2012) and that females tend to have a stronger desire to take preventative and preparedness measures compared with males (Ho et al. 2008; Cameron and Shah 2012). 3 Experiment II The primary purpose of Experiment II was to expand the
  • 38. primed recall prior experience manipulation to compare three prior cyber experiences: a near-miss, a false alarm, and a hit involving a loss of data. The prior cyber experi- ence recall prime for Experiment II involved experiences of a good friend, rather than the respondents’ past experi- ences (used in Experiment I). We also posed all questions using a loss frame to enhance the ecological validity of the cyber dilemmas posed, the consequences of which are naturally perceived as losses from a status quo. The dependent variable was also changed for Experiment II. Each respondent was asked to report whether they would select the safe or risky option in response to their own cyber dilemma, as opposed to providing advice to their best friend involved in a risky cyber dilemma as in Experiment I. One interpretation of the finding from Experiment I that respondents generally favored the safe option was that they were possibly more risk averse in advising a friend com- pared to how they would respond to their own cyber
  • 39. dilemma. By posing the dilemma in the first person, we sought to characterize how respondents would be likely to respond when facing a cyber dilemma. The cyber dilemmas were also described in a more concrete fashion for Experiment II, including a ‘‘screenshot’’ of the dilemma facing the respondent. 3.1 Method 3.1.1 Design overview In Experiment II, three cyber dilemmas were constructed to evaluate respondents’ risky choice behavior using one manipulated variable, recall of a friend’s false alarm, near- miss or hit experience. In addition, six individual differ- ence variables were included in the design: sex, age, income, education, job domain, and self-reported victim- ization. Each participant received all three dilemmas in a constant order. Each of the three primed recall prior cyber experiences was paired with one of the three scenarios in a counterbalanced design such that each of the cyber
  • 40. dilemmas appeared in each of the three treatment condi- tions with equal frequency. After each cyber dilemma, respondents were asked to respond on a 6-point scale (1 = strongly disagree to 6 = strongly agree) regarding their intention to ignore the warning and proceed with the riskier course of action. Following all three cyber dilemmas, respondents were given three attention check questions related to the nature of each dilemma. Respondents also were asked to provide basic demographic information and answer a series of questions about their experience with computers and cyber dilemmas, such as their experience with purchasing from a fraudulent online store, being locked out from an online account, or having unauthorized withdrawals made from their online banking account. 522 Environ Syst Decis (2013) 33:517–529 123
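One simple way to realize the counterbalancing described above is a Latin-square rotation of the three recall conditions across the three scenarios. The sketch below illustrates that idea under assumed labels; the paper does not specify the exact assignment procedure used.

# Scenarios are presented in a constant order; the recall condition attached
# to each scenario rotates across participants (hypothetical labels).
conditions = ["false_alarm", "near_miss", "hit"]
scenarios = ["music_file", "game_plug_in", "media_player"]

def assign_conditions(participant_index):
    """Map each scenario to a recall condition for one participant."""
    offset = participant_index % len(conditions)
    return {scenario: conditions[(i + offset) % len(conditions)]
            for i, scenario in enumerate(scenarios)}

# The first three participants jointly cover every scenario-condition pairing once.
for p in range(3):
    print(p, assign_conditions(p))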
  • 41. 3.1.2 Scenarios and manipulations The three cyber dilemma scenarios involved the threat of causing serious damage to the respondents’ computer as a result of downloading a music file, installing a plug-in for an online game, and downloading a media player to legally stream videos. The scenarios were written to share the same possible negative outcome—the computer’s operat- ing system crashes, resulting in an unusable computer until repaired. Establishing uniformity of consequences across the three scenarios reduced potential unexplained variance across the three levels of the manipulated variable. Experiment II also included screenshots of ‘‘pop-up’’ window images similar to those that would appear on the computer display when the cyber dilemma is presented. These images were intended to make the scenarios more concrete and enhance the realism of the cyber dilemma scenarios. Primed recall of a friend’s prior cyber experience was
  • 42. the only manipulated variable in this experiment. Respondents either recalled their friend’s near-miss, false alarm or hit experience before deciding whether to select the safe or risky option in response to the described cyber dilemma. All potential outcomes were presented in a loss frame, with wording held constant except for details spe- cific to the scenario under consideration. For example, the wording of the loss frame for the hit outcome of the download a music file scenario was ‘‘She pressed ‘allow access’ and her computer immediately crashed. She ended up having to wipe the computer’s hard drive clean and to reinstall the operating system.’’ The only modification made for the installation of the plug-in scenario was switching the words ‘‘allow access’’ to ‘‘run.’’ A complete description of the scenarios, including the primed recall of the friend’s prior experiences, is provided in Table 3. 3.1.3 Subjects Three hundred and seventy-six US residents were recruited
  • 43. through Amazon Mechanical Turk (AMT) to participate in the experiment. Researchers have assessed the representa- tiveness of AMT samples compared with convenience samples found locally and found AMT samples to be representative (Buhrmester et al. 2011; Mason and Suri 2012; Paolacci et al. 2010) and ‘‘significantly more diverse than typical American college samples’’ (Buhrmester et al. 2011). Each respondent earned $1 for completion of the experiment. After removing respondents who did not answer all three of the attention check questions correctly or completed the experiment in less than 7 min, the sample consisted of 247 respondents. Five additional respondents skipped questions, resulting in a final sample size of N = 242. Table 4 includes a summary of sample characteristics, including sex, age, income, education, job domain, and self-reported victimization. Self-reported victimization is defined in terms of experiences with four types of negative cyber events: (1) getting a virus on an
  • 44. electronic device, (2) purchasing from a fraudulent online store, (3) being locked out from an online account, or (4) having unauthorized withdrawals made from their online banking account. Respondents also responded to a number of experience questions that are summarized in Table 5 as additional detail about the study sample. 3.2 Results A mixed model ANOVA with one within-subject factor (primed recall of a prior experience) and six individual difference variables as between-subject factors was used. This model included only the seven main effects and the six 2-way interactions involving the manipulated within-subject variable and each of the six between-subject variables. Preliminary data screening was done; q–q plots showed the scores on the repeated measures variable, prior salient experience, to have an approximately normal distribution. 2
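One rough way to approximate the analysis just described is a linear mixed model in which a random intercept for each respondent stands in for the repeated-measures structure, with fixed effects for the manipulation, the six subject variables, and their two-way interactions with the manipulation. The sketch below is an approximation under assumed column names, not the authors’ procedure.

import pandas as pd
import statsmodels.formula.api as smf

# One row per respondent x scenario; all column names are hypothetical.
df = pd.read_csv("experiment2_responses.csv")

formula = ("response ~ C(prior_experience) * (C(sex) + C(age_group) + "
           "C(income_group) + C(education) + C(job_domain) + C(victimization))")

# Random intercept per respondent approximates the within-subject design.
model = smf.mixedlm(formula, data=df, groups=df["respondent_id"]).fit()
print(model.summary())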
  • 45. Results show that the primed recall prior experience manipulation had a significant effect on how respondents intended to respond to the cyber dilemmas, F (1, 231) = 31.60, p < .001, η² = .12. Moreover, post hoc comparisons using the least significant difference (LSD) test indicate that the mean score for the false-alarm condition (M = 3.65, SD = 0.11) was significantly different from the near-miss condition (M = 2.97, SD = 0.11) with p < .01, and the hit condition (M = 2.34, SD = 0.11) significantly differed from the near-miss and false-alarm conditions with p < .01. This suggests that respondents who received a description of a friend’s near-miss experience recall preferred the safer, risk averse option compared with respondents who were primed to recall a friend’s prior false-alarm experience. Respondents were found to be even more likely to select the safe option when they were primed to recall a friend’s prior hit experience. As displayed in Fig. 2, the positive means for the false-alarm condition indicate that respondents were more likely to engage in risky behavior compared with the negative means for the
  • 46. near-miss and hit conditions. The analysis also included both main effects and interaction terms for six different subject variables, including 2 As in Exp I, a one-way repeated measure ANOVA shows there is a significant scenario/order effect: F (2, 265) = 4.47, p = .035, η² = .02. Over time and/or scenario, respondents were more likely to endorse the risky option. However, as in Experiment I, it is difficult to determine whether the main effect is for the scenarios or the order effect. The study design we used overcame this limitation by using a counterbalanced design. [Table 3, summarizing the three Experiment II scenarios, recall manipulations, and questions, is not reproduced in this extract.]
  • 137. sex, age, level of education, income level, job domain, and self-reported victimization. For the purpose of analysis, age was collapsed into three levels: 18–29, 30–39, and 40 years and older; education level was collapsed into three categories: high school and 2-year college, 4-year college, and master’s degree or higher; and annual income level was collapsed into three categories: below $30,000/year,
  • 138. $30,000–$59,999/year, and $60,000/year and more. The results of the ANOVA indicated there was a significant main effect for age: F (2, 231) = 4.9, p = .01, η² = .04, and no significant main effects for sex, education, income, job domain, and self-reported victimization. Figure 2 suggests that younger respondents compared with older respondents were more likely to choose the riskier option in cyber dilemmas across all 3 levels of the primed prior recall experience manipulation. Results also showed a significant interaction effect between income and the primed prior recall experience manipulation: F (2, 231) = 3.40, p = .01, η² = .03. Figure 3 indicates that respondents with higher income levels (greater than $60 K per year) were less sensitive to the primed recall of a friend’s experience. There was no Table 4 Demographic information for AMT respondents Demographic variable (N = 242) Variable response category Number and percentage of sample Sex Male 108 (44.6 %)
  • 139. Female 134 (55.4 %) Highest level of education High school 65 (26.9 %) 2-year college 38 (15.7 %) 4-year college 102 (42.1 %) Master’s degree 30 (12.4 %) Professional (e.g., M.D., Ph.D., J.D.) degree 7 (2.9 %) Personal gross annual income range Below $20,000/year 66 (27.3 %) $20,000–$29,999/year 31 (12.8 %) $30,000–$39,999/year 35 (14.5 %) $40,000–$49,999/year 28 (11.6 %) $50,000–$59,999/year 15 (6.2 %) $60,000–$69,999/year 23 (9.5 %) $70,000–$79,999/year 13 (5.4 %) $80,000–$89,999/year 10 (4.1 %) $90,000/year or more 21 (8.7 %)
  • 140. Does your work relate to technology? I use computers normally but my work has nothing to do with technology. 172 (71.1 %) My work is about technology 70 (28.9 %) Victim of getting a virus on an electronic device Yes 165 (68.2 %) No 77 (31.8 %) Victim of purchasing from a fake online store Yes 15 (6.2 %) No 221 (91.3 %) I don’t shop online 6 (2.5 %) Victim of failure to log into an online account Yes 85 (35.1 %) No 157 (64.9 %) Victim of unauthorized withdrawals from an online banking account Yes 44 (18.2 %) No 198 (81.8 %) Overall self-reported victimization None 46 (19.0 %) One type 104 (43.0 %) Two or more types 92 (38.0 %)
  • 141. Age (years) Range 18–75 Percentiles 25th 27 50th 33 75th 44 significant interaction effect between the manipulation and the other five individual difference variables, including sex: F (1, 231) < 1, age: F (2, 231) = 1.84, p = .12, η² = .02, education: F (2, 231) < 1, job domain, F (1, 231) = 2.01, p = .14, η² = .01, and self-reported victimization, F (2, 231) = 2.03, p = .09, η² = .02. 3.3 Discussion Responses to risky cyber dilemmas in Experiment II were significantly predicted by the primed recall of a friend’s prior cyber experience. Consistent with our hypotheses, the more negative the consequence associated with the prior cyber experience, the more likely the respondents were to
  • 142. choose the safer course of action. In particular, respondents who were primed to recall a prior near-miss or hit event interpreted the experience as a sign of vulnerability compared with the recall of a prior false alarm and, in turn, were more likely to promote more conservative (safe) endorsements of actions. In the case of false alarms, our findings suggest that respondents were more likely to endorse the risky alternative.
  • 144. Fig. 2 Mean endorsement of risky versus safe responses to cyber threats by primed recall of friend’s prior experience and age
  • 146. Fig. 3 Mean endorsement of risky versus safe responses to cyber threats by primed recall of friend’s prior experience and income level Table 5 Cyber-related responses for AMT respondents Questions Response category Number and percentage of sample Personal computer PC 213 (88.0 %) Mac 28 (11.6 %) Do not have a personal computer 1 (0.4 %) Smartphone iOS 67 (27.6 %) Android 95 (39.3 %) Do not have a
  • 147. smartphone 80 (33.1 %) Protection software Yes 211 (87.2 %) No 31 (12.8 %) Have you ever downloaded free music, an e-book, a movie, or a television show from an unfamiliar website found through a Google search? Yes 135 (55.8 %) No 107 (44.2 %) How often do you access your social networking accounts (Facebook, Twitter, Myspace, MSN, Match.com, etc.)? Every day 150 (62.0 %) Once a week 35 (14.5 %) Once a month 8 (3.3 %)
  • 148. 2-3 times a month 10 (4.1 %) Every couple months 14 (5.8 %) Once a year 4 (1.7 %) Never 21 (8.7 %) Have you ever clicked on an advertisement and a window popped up saying something along the lines of ‘‘Congratulations, you are eligible to win an iPad!’’? Yes 122 (50.4 %) No 120 (49.6 %) Have you ever clicked on a link in a suspicious email (e.g., an
  • 149. email in a different language, with an absurd subject)? Yes 32 (13.2 %) No 210 (86.8 %) 526 Environ Syst Decis (2013) 33:517–529 123 In addition, endorsement of safe versus risky resolutions to the cyber dilemmas varied by respondents’ age, regardless of the primed recall of a friend’s prior experi- ence. Middle-aged and older respondents were more likely to endorse the safe choice option compared with younger respondents. Research on age differences is inconsistent in the domain of cyber security related to privacy (Hoofnagle et al. 2010), risk of data loss from a cyber threat (Howe et al. 2012—‘‘The psychology of security for the home computer user’’ in Proceedings of 2012 IEEE Symposium on the Security and Privacy) or fear of a cyber threat
  • 150. (Alshalan 2006). Our findings suggest that younger indi- viduals’ extensive use and dependence on computers for daily activities may result in the association of a greater cost with being risk averse in response to cyber dilemmas. Younger individuals’ familiarity with computers likely makes it easier for them to determine whether a cyber dilemma is a real threat or a computer’s standard warning message. In the same vein, their familiarity with computers may also lead to a greater awareness of a major cyber dilemma being a small probability event, the consequences of which are likely to be repairable. Ultimately, younger individuals do not perceive the unsafe option as overly risky compared with the safe option. Respondents’ income was also found to moderate the effect of the primed recall of a friend’s prior experience on respondents’ endorsement of safe versus risky options. Of the three income levels, the wealthiest respondents were the least sensitive to variations in the
  • 151. primed recall of a friend’s prior cyber experience. In the literature on cyber security, only a significant main effect for income is reported. In a 2001 presentation by Tyler, Zhang, Southern and Joiner at the IACIS Con- ference, the research team reported findings suggesting that higher income individuals have a lower probability of considering e-commerce to be safe and therefore avoid e-commerce transactions. Similarly, in a study by Downs et al. (2008), respondents from more affluent areas were reported to update their anti-virus program more frequently than respondents from poorer areas, further validating the tendency toward risk averse cyber behavior for higher income individuals. Our finding suggests that wealthier respondents were not as impacted compared with the low and medium income respondents by the primed prior recall experience manipulation because they can afford to be riskier. Their wealth allows them to have access to enhanced
  • 152. baseline security measures. This creates a sense that they are exempt from risks that apply to others and for this reason, do not need to pay much attention to the primed prior recall experiences and consequences. Interestingly, there were no significant main effects or interactions for the remaining four individual difference variables, including sex, education, work domain, or previous cyber victimization. The absence of main effects for five of the six individual difference variables suggests that respondents’ cyber dilemma decisions are determined more by recall of prior cyber-related experiences, and not by background of the decision maker, with the sole exception of respondent age. The absence of interaction effects for five of the six individual difference variables suggests that the effect of primed recall of a prior expe- rience is robust; respondent income was the sole moder- ator identified. 4 Conclusion
  • 153. Experiments I and II were designed to explore how com- puter users’ responses to common cyber dilemmas are influenced by framing and salience of prior cyber experi- ences. Despite using two different dependent variables, the advice the respondent would give to a friend (Experiment I), and how the respondents themselves would respond to cyber dilemmas (Experiment II), the extent to which the two different questions elicit more or less risk averse responses was found to be similar. The results indicate that for prior near-miss experiences (the one manipulation condition included in both experiments), the mean responses were 2.39 and 2.97 for Experiments I and II, respectively. This finding suggests that whether the respondent was making a personal recommendation or providing advice to a friend; the recalled experience manipulation was found to significantly influence the respondent’s endorsement of the safer cyber option. Simi- larly, in prior cyber research, Aytes and Connolly (2004)
  • 154. found that students were more attuned to cyber risks and likely to take action against them when the primary source of information was their own or friends’ experiences with security problems. The one inconsistent finding between the two experi- ments is the effect of respondent sex on risky cyber choice behavior. In Experiment I, females were found to be more risk averse than males, while in Experiment II, sex was found to be unrelated to whether respondents endorsed a risky or safe option. Previous studies are also inconsistent with respect to the role of sex in predicting cyber-related behavior and decision making. At the 2012 Annual Con- ference of the Society for Industrial and Organizational Psychology, Byrne et al. report that women provided slightly higher scores of behavioral intentions to click on a risky cyber link, while Milne et al. (2009) found that males had a greater tendency to engage in risky behaviors online. In the context of security compliance, Downs et al. (2008)
  • 155. report that males were more involved in computer security management, such as updating their anti-virus software and using pop-up blockers, while Herath and Rao (2009) found that women had higher security procedure compliance intentions but were less likely to act on them. One explanation for our inconsistent results related to sex may be differences in the two populations sampled: college students in Experiment I and a more diverse AMT sample in Experiment II. College samples tend to be more sex stereotyped, such that risk tends to be judged lower by men than by women, and females tend to have a stronger desire to take preventative and preparedness measures (Harris et al. 2006). This tends to be attributed to their lack of real-world experience, as evidenced by the fact that only a small percentage of the sample (24 %) had previously experienced a cyber dilemma.
  • 156. enced a cyber dilemma. By these assumptions, males would be expected to be more risk seeking than females in Experiment I. Conversely, the AMT sample consists of older adults with more diverse backgrounds, as evidenced in Table 5, which tends to blur the line between traditional male and female stereotypes. In addition, 80 % of the AMT sample had previously experienced a cyber dilemma, fur- ther suggesting that shared experiences of males and females could lead to the lack of sex differences found in Experiment II. Overall, these two experiments indicate that recall of prior cyber experiences and framing strongly influence individual decision making in response to cyber dilemmas. It is useful to know about how prior experience and framing jointly influence responses to cyber dilemmas. The implications of our findings are that salience of prior negative experiences certainly attenuates risky cyber behavior. We found that this attenuation is greater for gain-framed decisions, and for low-
  • 157. and middle-income respondents. Responses to cyber dilemmas were determined more by proximal variables, such as recall of prior experiences and framing, and were largely robust to individual difference variables, with only a couple of exceptions. Given that safety in the cyber context is an abstract concept, it would be worthwhile to further explore how framing influences cyber dilemma decision making. Additionally, this research design could be used to evaluate differences across cyber dilemma contexts to examine the robustness of the relationships identified in our research. Such further research is warranted to better understand how individual users respond to cyber dilemmas. This infor- mation would be useful to cyber security policymakers faced with the task of designing better security systems, including computer displays and warning messages rele- vant to cyber dilemmas. Acknowledgments This research was supported by the U.S.
  • 158. Department of Homeland Security (DHS) through the National Center for Risk and Economic Analysis of Terrorism Events. However, any opinions, findings, conclusions, and recommendations in this article are those of the authors and do not necessarily reflect the views of DHS. We would like to thank Society for Risk Analysis (SRA) conference attendees for their feedback on this work at a session at the 2012 SRA Annual Meeting in San Francisco. We would also thank the blind reviewers for their time and comments, as they were extremely valuable in developing this paper. References Acquisti A, Grossklags J (2007) What can behavioral economics teach us about privacy. In: Acquisti A, Gritzalis S, Lambrino- udakis C, Vimercati S (eds) Digital privacy: theory, technologies and practices. Auerbach Publications, Florida, pp 363–377 Alshalan A (2006) Cyber-crime fear and victimization: an
  • 159. analysis of a national survey. Dissertation, Mississippi State University Aytes K, Connolly T (2004) Computer security and risky computing practices: a rational choice perspective. J Organ End User Comput 16:22–40 Barnes LR, Gruntfest EC, Hayden MH, Schultz DM, Benight C (2007) False alarms and close calls: a conceptual model of warning accuracy. Weather Forecast 22:1140–1147 Bateman JM, Edwards B (2002) Gender and evacuation: a closer look at why women are more likely to evacuate for hurricanes. Nat Hazard Rev 3:107–117 Bourque LB, Regan R, Kelley MM, Wood MM, Kano M, Mileti DS (2012) An examination of the effect of perceived risk on preparedness behavior. Environ Behav 45:615–649 Breznitz S (2013) Cry wolf: the psychology of false alarms. Psychology Press, Florida Buhrmester M, Kwang T, Gosling SD (2011) Amazon’s
  • 160. Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect Psychol Sci 6:3–5 Cameron L, Shah M (2012) Risk-taking behavior in the wake of natural disasters. IZA Discussion Paper No. 6756. http://ssrn. com/abstract=2157898 Dillon RL, Tinsley CH, Cronin M (2011) Why near-miss events can decrease an individual’s protective response to hurricanes. Risk Anal 31:440–449 Donner WR, Rodriguez H, Diaz W (2012) Tornado warnings in three southern states: a qualitative analysis of public response patterns. J Homel Secur Emerg Manage 9:1547–7355 Dow K, Cutter SL (1998) Crying wolf: repeat responses to hurricane evacuation orders. Coast Manage 26:237–252 Downs DM, Ademaj I, Schuck AM (2008) Internet security: who is leaving the ‘virtual door’ open and why? First Monday 14.
  • 161. doi:10.5210%2Ffm.v14i1.2251 Flynn J, Slovic P, Mertz CK (1994) Gender, race, and perception of environmental health risks. Risk Anal 14:1101–1108 Garg V, Camp J (2013) Heuristics and biases: implications for security design. IEEE Technol Soc Mag 32:73–79 Harris C, Jenkins M, Glaser D (2006) Gender differences in risk assessment: why do women take fewer risks than men? Judgm Decis Mak 1:48–63 Helander MG, Khalid HM (2000) Modeling the customer in electronic commerce. Appl Ergon 31:609–619 Herath T, Rao HR (2009) Encouraging information security behaviors in organizations: role of penalties, pressures and perceived effectiveness. Decis Support Syst 47:154–165 Ho MC, Shaw D, Lin S, Chiu YC (2008) How do disaster characteristics influence risk perception? Risk Anal 28:635–643 Hoofnagle C, King J, Li S, Turow J (2010) How different are young
  • 162. adults from older adults when it comes to information privacy attitudes and policies? April 14, 2010. http://ssrn.com/abstract= 1589864 528 Environ Syst Decis (2013) 33:517–529 123 http://ssrn.com/abstract=2157898 http://ssrn.com/abstract=2157898 http://dx.doi.org/10.5210%2Ffm.v14i1.2251 http://ssrn.com/abstract=1589864 http://ssrn.com/abstract=1589864 Kahneman D, Tversky A (1979) Prospect theory: an analysis of decision under risk. Econom J Econom Soc 47:263–291 Kung YW, Chen SH (2012) Perception of earthquake risk in Taiwan: effects of gender and past earthquake experience. Risk Anal 32:1535–1546 Kunreuther H, Pauly M (2004) Neglecting disaster: why don’t people insure against large losses? J Risk Uncertain 28:5–21 Mason W, Suri S (2012) Conducting behavioral research on Amazon’s Mechanical Turk. Behav Res Methods 44:1–23
  • 163. Milne GR, Labrecque LI, Cromer C (2009) Toward an understanding of the online consumer’s risky behavior and protection practices. J Consum Aff 43:449–473 Paolacci G, Chandler J, Ipeirotis P (2010) Running experiments on Amazon Mechanical Turk. Judgm Decis Mak 5:411–419 Shankar V, Urban GL, Sultan F (2002) Online trust: a stakeholder perspective, concepts, implications, and future directions. J Stra- teg Inf Syst 11:325–344 Siegrist M, Gutscher H (2008) Natural hazards and motivation for mitigation behavior: people cannot predict the affect evoked by a severe flood. Risk Anal 28:771–778 Simmons KM, Sutter D (2009) False alarms, tornado warnings, and tornado casualties. Weather Clim Soc 1:38–53 Slovic P, Peters E, Finucane ML, MacGregor DG (2005) Affect,
  • 164. risk, and decision making. Health Psychol 24:S35–S40 Tinsley CH, Dillon RL, Cronin MA (2012) How near-miss events amplify or attenuate risky decision making. Manage Sci 58:1596–1613 Tversky A, Kahneman D (1986) Rational choice and the framing of decisions. J Bus 59:S251–S278 Verendel V (2008) A prospect theory approach to security. Technical Report No. 08-20. Department of Computer Science and Engineering, Chalmers University of Technology/Goteborg University, Sweden. http://citeseerx.ist.psu.edu/viewdoc/download?doi= 10.1.1.154.9098&rep=rep1&type=pdf
Leveraging behavioral science to mitigate cyber security risk
Shari Lawrence Pfleeger (Institute for Information Infrastructure Protection, Dartmouth College, 4519 Davenport St., NW, Washington, DC 20016, USA) and Deanna D. Caputo (MITRE Corporation, 7515 Colshire Drive, McLean, VA 22102-7539, USA)
Computers & Security 31 (2012) 597–611. doi:10.1016/j.cose.2011.12.010
Article history: Received 16 August 2011; Received in revised form 21 November 2011; Accepted 22 December 2011
Keywords: Cyber security; Cognitive load; Bias; Heuristics; Risk communication; Health models

Abstract: Most efforts to improve cyber security focus primarily on incorporating new technological approaches in products and processes. However, a key element of improvement involves acknowledging the importance of human behavior when designing, building and using cyber security technology. In this survey paper, we describe why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology. We present two examples: the first demonstrates how leveraging behavioral science leads to clear improvements, and the other illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security. Based on feedback collected from practitioners in preliminary interviews, we narrow our focus to two important behavioral aspects: cognitive load and bias. Next, we identify proven and potential behavioral science findings that have cyber security relevance, not only related to cognitive load and bias but also to heuristics and behavioral science models. We conclude by suggesting several next steps for incorporating behavioral science findings in our technological design, development and use. © 2012 Published by Elsevier Ltd.

1. Introduction

"Only amateurs attack machines; professionals target people." (Schneier, 2000)

What is the best way to deal with cyber attacks? Cyber security promises protection and prevention, using both innovative technology and an understanding of the human user. Which aspects of human behavior offer the most promise in making cyber security processes and products more effective? What role should education and training play? How can we encourage good security practices without unnecessarily interrupting or annoying users? How can we create a cyber environment that provides users with all of the functionality they need without compromising enterprise or national security? We investigate the answers to these questions by examining the behavioral science literature to identify behavioral science theories and research findings that have the potential to improve cyber security and reduce risk. In this paper, we report on our initial findings, describe several behavioral science areas that offer particularly useful applications to security, and describe how to use them in a general risk-reduction process.

The remainder of this paper is organized in five sections. Section 2 describes some of the problems that a technology-alone solution cannot address. Section 3 explains how we used a set of scenarios to elicit suggestions about the behaviors of most concern to technology designers and users. Sections 4 and 5 highlight several areas of behavioral science with demonstrated and potential relevance to security technology. Finally, Section 6 suggests possible next steps toward inclusion of behavioral science in security technology's design, construction and use.

3 See the First Interdisciplinary Workshop on Security and Human Behavior, described at http://www.schneier.com/blog/archives/2008/06/security_and_ and http://www.cl.cam.ac.uk/~rja14/shb08.html.
4 See workshop papers at http://www.informatik.uni-trier.de/~ley/db/conf/itrust/itrust2006.html.
5 The National Science Foundation program is interested in the connections between social science and cyber security. It has announced a new program that encourages computer scientists and social scientists to work together (Secure and Trustworthy Cyberspace, described at http://www.nsf.gov/pubs/2012/nsf12503/nsf12503.htm).

2. Why technology alone is not enough
The media frequently express the private sector's concern about liability for cyber attacks and its eagerness to minimize risk. The public sector has similar concerns, because aspects of everyday life (such as operation and defense of critical infrastructure, protection of national security information, and operation of financial markets) involve both government regulation and private sector administration.2 The government's concern is warranted: the Consumer's Union found that government was the source of one-fifth of the publicly-reported data breaches between 2005 and mid-2008 (Consumer's Union, 2008). The changing nature of both technology and the threat environment makes the risks to information and infrastructure difficult to anticipate and quantify.

2 See, for example, http://www.cbsnews.com/video/watch/?id=5578986n&tag=related;photovideo.

Problems of appropriate response to cyber incidents are exacerbated when security technology is perceived as an obstacle to the user. The user may be overwhelmed by difficulties in security implementation, or may mistrust, misinterpret or override the security. A recent study of users at Virginia Tech illustrates the problem (Virginia Tech, 2011). Bellanger et al. examined user attitudes and the "resistance behavior" of individuals faced with a mandatory password change. The researchers found that, even when passwords were changed as required, the changes were intentionally delayed and the request perceived as being an unnecessary interruption. "People are conscious that a password breach can have severe consequences, but it does not affect their attitude toward the security policy implementation." Moreover, "the more technical competence respondents have, the less they favor the policy enhancement. ... In a voluntary implementation, that competence may be a vector of pride and accomplishment. In a mandatory context, the individual may feel her competence challenged, triggering a negative attitude toward the process."

In the past, solutions to these problems have ranged from strict, technology-based control of computer-based human behavior (often with inconsistent or sometimes rigid enforcement) to comprehensive education and training of system developers and users. Neither extreme has been particularly successful, but recent studies suggest that a blending of the two can lead to effective results. For example, the U.K. Office for Standards in Education, Children's Services and Skills (Ofsted) evaluated the safety of online behavior at 35 representative schools across the U.K. "Where the provision for e-safety was outstanding, the schools had managed rather than locked down systems. In the best practice seen, pupils were helped, from a very early age, to assess the risk of accessing sites and therefore gradually to acquire skills which would help them adopt safe practices even when they were not supervised." (Ofsted, 2010) In other words, the most successful security behaviors were exhibited in schools where students were taught appropriate behaviors and then trusted to behave responsibly. The Ofsted report likens the approach to teaching children how to cross the road safely, rather than relying on adults to accompany the children across the road each time.

This approach is at the core of our research. Our overarching hypothesis is that, if humans using computer systems are given the tools and information they need, taught the meaning of responsible use, and then trusted to behave appropriately with respect to cyber security, desired outcomes may be obtained without security's being perceived as onerous or burdensome. By both understanding the role of human behavior and leveraging behavioral science findings, the designers, developers and maintainers of information infrastructure can address real and perceived obstacles to productivity and provide more effective security. These behavioral changes take time, so plans for initiating change should include sufficient time to propose the change, implement it, and have it become part of the culture or common practice.

Other evidence (Predd et al., 2008; Pfleeger et al., 2010) is beginning to emerge that points to the importance of understanding human behaviors when developing and providing cyber security.3 There is particular interest in using trust to mitigate risk, especially online. For example, the European Union funded a several-year, multi-disciplinary project on online trust (iTrust),4 documenting the many ways that trust can be created and broken. Now, frameworks are being developed for analyzing the degree to which trust is built and maintained in computer applications (Riegelsberger et al., 2005). More broadly, a rich and relevant behavioral science literature addresses critical security problems, such as employee deviance, employee compliance, effective decision-making, and the degree to which emotions (Lerner and Tiedens, 2006) or stressful conditions (Klein and Salas, 2001) can lead to riskier choices by decision-makers.5 At the same time, there is much evidence that technological advances can have unintended consequences that reduce trust or increase risk (Tenner, 1991). For these reasons, we conclude that it is important to include the human element when designing, building and using critical systems.

To understand how to design and build systems that encourage users to act responsibly when using them, we identified two types of behavioral science findings: those that have already been shown to demonstrate a welcome effect on cyber security implementation and use, and those with potential to have such an effect. In the first case, we documented the relevant findings, so that practitioners and researchers can determine which approaches are most applicable to their environment. In the second case, we are designing a series of studies to test promising behavioral science results in a cyber security setting, with the goal of determining which results (with associated strategies for reducing or mitigating the behavioral problems they reflect) are the most effective.

However, applying behavioral science findings to cyber security problems is an enormous undertaking. To maximize the likely effectiveness of outcomes, we used a set of interviews to elicit practitioners' opinions about behaviors of concern, so that we could focus on those perceived as most significant. We describe the interviews and results in Section 3. These findings suggest hypotheses about the role of behavior in addressing cyber security issues.

3. Identifying behavioral aspects of security

Designers and developers of security technology can leverage what is known about people and their perceptions to provide more effective security. A former Israeli airport security chief said, "I say technology should support people. And it should be skilled people at the center of our security concept rather than the other way around" (Amos, 2010).

To implement this kind of human-centered security, technologists must understand the behavioral sciences as they design, develop and use technology. However, translating behavioral results to a technological environment can be a difficult process. For example, system designers must address the human elements obscured by computer mediation.
Consumers making a purchase online trust that the merchant represented by the website is not simply taking their money, but also is fulfilling its obligation to provide goods in return. The consumer infers the human involvement of the online merchant behind the scenes. Thus, at some level, the buyer and seller are humans enacting a transaction enabled by a system designed, developed and maintained by humans. There may be neither actual human contact nor direct knowledge of the other human actors involved, but the transaction process reflects its human counterpart.

Preventing or mitigating adverse cyber security incidents requires action at many stages: designing the technology being incorporated in the infrastructure; implementing, testing and maintaining the technology; and using the technology to provide essential products and services. Behavioral science has addressed notions of cyber security in these activities for many years. Indeed, Sasse and Flechais (2005) note that secure systems are socio-technical systems in which we should use an understanding of behavioral science to "prevent users from being the 'weakest link.'" For example, some behavioral scientists have investigated how trust mechanisms affect cyber security. Others have reported findings related to the design and use of cyber systems, but the relevance and degree of effect have not yet been tested.

Some of the linkage between behavioral science and security is specific to certain kinds of systems. For example, Castelfranchi and Falcone (1998, 2002) analyze trust in multi-agent systems from a behavioral perspective. They view trust as having several components, including beliefs that must be held to develop trust (the social context, as described by Riegelsberger et al. (2003)) and relationships to previous experience (the temporal context of the Riegelsberger-Sasse-McCarthy framework). They use psychological factors to model trust in multi-agent systems. In addition to social and temporal concerns, we add expectations of fulfillment, where someone trusting someone or something else expects something in return (Baier, 1986). This behavioral research sheds light on the nature of a user's expectation and on perceived trustworthiness of technology-mediated interactions and has important implications related to the design of protective systems and processes.

Sasse and Flechais (2005) view security from three distinct perspectives: product, process and panorama:

• Product. This perspective includes the effect of the security controls, such as the policies and mechanisms, on stakeholders (e.g., designers, developers, users). The controls involve requirements affecting physical and mental workload, behavior, and cost (human and financial). Users trust the product to maintain security while getting the primary task done.
• Process. This aspect addresses how security decisions are made, especially in early stages of requirements-gathering and design. The process should allow the security mechanisms to be "an integral part of the design and development of the system, rather than being 'added on'" (Sasse and Flechais, 2005). Because "mechanisms that are not employed in practice, or that are used incorrectly, provide little or no protection," designers must consider the implications of each mechanism on workload, behavior and workflow (Sasse and Flechais, 2005). From this perspective, the stakeholders must trust the process to enable them to make appropriate and effective decisions, particularly about their primary tasks.
• Panorama. This aspect describes the context in which the security operates. Because security is usually not the primary task, users are likely to "look for shortcuts and workarounds, especially when users do not understand why their behavior compromises security. ... A positive security culture, based on a shared understanding of the importance of security ... is the key to achieving desired behavior" (Sasse and Flechais, 2005). From this perspective, the user views security mechanisms as essential even when they seem intrusive, limiting, or counterproductive.

3.1. Scenario creation

Because the infrastructure types and threats are vast, we used interview results to narrow our investigation to those behavioral science areas with demonstrated or likely potential to enhance an actor's confidence in using any information infrastructure. To guide our interviews, we worked with two dozen U.S. government and industry employees familiar with information infrastructure protection issues to define three threat scenarios relevant to protecting the information infrastructure. The methodology and resulting analyses were conducted by the paper's first author and involved five steps:

• Choosing topics. We chose three security topics to discuss, based on recent events. The combination of the three was intended to represent a (admittedly incomplete but) significant number of typical concerns, the discussion of which would reveal underlying areas ripe for improvement.
• Creating a representative, realistic scenario for each topic. Using our knowledge of recent cyber incidents and attacks, we created an attack scenario for each plausible topic, portraying a cyber security problem for which a solution would be welcomed by industry and government.
• Identifying people with decision making authority about cyber security products and usage to interview about the scenarios. We identified people from industry and government who were willing to participate in interviews.
• Conducting interviews. Our discussions focused on two questions: Are these scenarios realistic, and how could the cyber security in each situation be improved?
• Analyzing the results and their implications. We analyzed the results of these interviews and their implications for our research.

3.1.1. Scenario 1: improving security awareness among builders of information infrastructure

Security is rarely the primary task of those who use the information infrastructure. Typically, users seek information, analyze relationships, produce documents, and perform tasks that help them understand situations and take action. Similarly, system developers often focus on these primary tasks before incorporating security into an architecture or design. Moreover, system developers often implement security requirements by choosing security mechanisms that are easy to build and test or that meet some other technical system objective (e.g., reliability). Developers rarely take into account the usability of the mechanism or the additional cognitive load it places on the user. Scenario 1 describes ways to improve security awareness among system builders so that security is more likely to be useful and effective.

Suppose software engineers are designing and building a system to support the creation and transmission of sensitive documents among members of an organization. Many aspects of document creation and transmission are well known, but security mechanisms for evaluating sensitivity, labeling documents appropriately and transmitting documents securely have presented difficulties for many years. In our scenario, software engineers are tasked to design a system that solicits information from document creators, modifiers and readers, so that a trust designation can be assigned to each document. Security issues include understanding the types of trust-related information needed, determining the role of a changing threat environment, and defining the frequency at which the trust information should be refreshed and re-evaluated (particularly in light of cyber security incidents that may occur during the life of the document). In addition, the software engineers must implement some type of summary trust designation that will have meaning to document creators, modifiers and readers alike. This trust designation, different from the classification of document sensitivity, represents the degree to which both the content and provider (or modifier) can be trusted and for how long. For example, a document about a nation's emerging military capability may be highly classified (that is, highly sensitive), regardless of whether the information provider is highly trusted (because, for example, he has repeatedly provided highly useful information in the past) or not (because, for example, he frequently provides incorrect or misleading information).
  • 187. which addresses only data confidentiality. Thus, under- standing and choice of policies and mechanisms are impor- tant aspects in which we trust software engineers to exercise discretion. In addition, software engineers must be able to trust the provenance, correctness and conformance to expectations of the security mechanisms. Here, “provenance” means not only the applicability of the mechanisms and algorithms but also the source of architectural or imple- mentation modules. With the availability of open source modules and product line architectures (see, for example, Clements and Northrup, 2001), it is likely that some parts of some security mechanisms will have been built for a different purpose, often by a different team of engineers. Builders and modifiers of the current system must know to what degree to trust someone else’s modules. 3.1.2. Scenario 2: enhancing situational awareness during a “cyber event” Situational awareness is the degree to which a person or system knows about a threat in the environment. When an
  • 188. emergency is unfolding, the people and systems involved in watching it unfold must determine what has already happened, what is currently happening, and what is likely to happen in the future; then, they make recommendations for reaction based on their situational awareness. The people or systems perceiving the situation have varying degrees of trust in the information they gather and in the providers of that information. When a cyber event is unfolding, information can come from primary sources (such as sensors in process control systems or measurements of network activity) and secondary sources (such as human or automated interpreters of trends). Consider analysts using a computer system that monitors the network of power systems around the United States. The system itself interacts with a network of systems, each of which collects and analyzes data about power generation and distribution stations and their access points. The analysts http://dx.doi.org/10.1016/j.cose.2011.12.010 http://dx.doi.org/10.1016/j.cose.2011.12.010
  • 189. c o m p u t e r s & s e c u r i t y 3 1 ( 2 0 1 2 ) 5 9 7 e6 1 1 601 notice a series of network failures around the country: first, a power station in California fails, then one in Missouri, and so on during the first few hours of the event.6 The analysts must determine not only what is really unfolding but also how to respond appropriately. Security and human behavior are involved in many ways. First, the analyst must know whether to trust the information being reported to her monitoring system. For example, is the analyst viewing a failure in the access point or in the monitoring system? Next, the analyst must be able to know when and whether she has enough information to make a decision about which reactions are appropriate. This decision must be made in the context of an evolving situation, where some evidence at first considered trustworthy is eventually determined not to be (and vice versa). Finally, the analyst must analyze the data being reported, form hypotheses about possible causes, and then determine which interpretation of the data to use. For instance, is the sequence of failures the result of incorrect data transmission, a cyber
  • 190. attack, random system failures, or simply the various power companies’ having purchased some of their software from the same vendor (whose system is now failing)? Choosing the wrong interpretation can have serious consequences. 3.1.3. Scenario 3: supporting decisions about trustworthiness of network transactions On Christmas Day, 2009, a Nigerian student flying from Amsterdam to Detroit attempted to detonate a bomb to destroy the plane. Fortunately, the bomb did little damage, and passengers prevented the student from completing his intended task. However, in analyzing why the student was not detected by a variety of airport security screens, it was determined that important information was never presented to the appropriate decision-makers (Baker and Hulse, 2009). This situation forms the core of Scenario 3, where a system queries an interconnected set of databases to find information about a person or situation. In this scenario, an analyst uses an interface to a collection of data repositories, each of which contains information about
  • 191. crime and terrorism. When the analyst receives a warning about a particular person of interest, she must query the repositories to determine what is known about that person. There are many security issues related to this scenario. First, the analyst must determine the degree to which she can trust that all of the relevant information resides in at least one of the connected repositories. After the Christmas bombing attempt, it was revealed that the U.K. had denied a visa request by the student, but information about the denial was not available to the Transportation Security Administration when decisions were made about whether to subject the student to extra security screening. Spira (2010) points out that the problem is not the number of databases; it is the lack of ability to search the entire “federation” of databases. Next, even if the relevant items are found, the most important ones must be visible at the appropriate time. Libicki and Pfleeger (2004) have documented the difficulties in 6 Indeed, at this stage it may not be clear that the event is actually a cyber event. A similar event with similar characteris-
  • 192. tics occurred on August 14, 2003, in the United States. See http:// www.cnn.com/2003/US/08/14/power.outage/index.html. “collecting the dots” before an analyst can take the next step to connect them. If a “dot” is not as visible as it should be, it can be overlooked or given insufficient attention during subsequent analysis. Moreover, Spira (2010) highlights the need for viewing the information in its appropriate context. Third, the analyst must also determine the degree to which each piece of relevant information can be trusted. That is, not only must she know the accuracy and timeliness of each data item, but she also must determine whether the data source itself can be trusted. There are several aspects to this latter degree of trust, such as knowing how frequently the data source provides the information (that is, whether it is old news), knowing whether the data source is trustworthy enough, and whether circumstances may change the source’s trustworthiness. For example, Predd et al. (2008) and Pfleeger et al. (2010) point out the varying types of people with legiti- mate access to systems taking unwelcome action. A trust-
  • 193. worthy insider may become a threat because of a pending layoff or personal problem, inattention or confusion, or her attempt to overcome a system weakness. So the trustworthi- ness of information and sources must be re-evaluated repeatedly and perhaps even forecast based on predictions about a changing environment. Finally, the analyst must also determine the degree to which the analysis is correct. Any analysis involves assump- tions about variables and their importance, as well as the relationships among dependent and independent variables. Many times, it is a faulty assumption that leads to failure, rather than faulty data. 3.2. Analysis of results The three scenarios were intriguing to our interviewees, and all agreed that they were realistic, relevant and important. However, having the interviewees scrutinize the scenarios revealed fewer behavioral insights than we had hoped. In each case, the interviewee viewed each scenario from his or her
  • 194. particular perspective, highlighting only a small portion of the scenario to confirm an opinion he or she held. For example, one of the interviewees used Scenario 3 to emphasize the need for information sharing; another interviewee said that privacy is a key concern, especially in situations like Scenario 2 where significantmonitoring mustbe balanced with protecting privacy. Nevertheless, many of the interviewees had good sugges- tions for shaping the way forward. For instance, one said that there is much to be learned from command and control algorithms, where military actors have learned to deal with risk perception, uncertainty, incomplete information, and the need to make an important decision under extreme pressures. There is rich literature addressing decision-making under pressure, from Ellsberg (1964) through Klein (Klein, 1998, 2009). In particular, Klein’s models of adaptive decision- making may be applicable (Klein and Calderwood, 1991; Klein and Salas, 2001). While the scenario methodology was not a structured idea generation approach, to the extent
  • 195. possible, we endeavored to be unbiased in our interpretation of interviewee responses. We were not trying to gather support for preconceived ideas and were genuinely trying to explore new ideas where behavioral science could be lever- aged to address security issues. http://www.cnn.com/2003/US/08/14/power.outage/index.html http://www.cnn.com/2003/US/08/14/power.outage/index.html http://dx.doi.org/10.1016/j.cose.2011.12.010 http://dx.doi.org/10.1016/j.cose.2011.12.010 c o m p u t e r s & s e c u r i t y 3 1 ( 2 0 1 2 ) 5 9 7 e6 1 1602 There were several messages that emerged from the interviews: � Security is intertwined with the way humans behave when trying to meet a goal or perform a task. The separation of primary task from secondary, as well as its impact on user behavior, was first clearly expressed in Smith et al. (1997) and elaborated in the security realm by Sasse et al. (2002). Our interviews reconfirmed that, in most instances, security is secondary to a user’s primary task (e.g., finding a piece of information, processing a transaction, making a decision).
  • 196. When security interferes, the person may ignore or even subvert the security, since the person is rewarded for the primary task. In some sense, the person trusts the system to take care of security concerns. That perspective can lead to at least two unwelcome events. First, when confronted with uncertainty about the security of a course of action, the person trusts that the system has assured the safety of the action (for example, when a user opens an attachment assuming that the system has checked it for viruses, or, as in Scenario 3, the users assumed the bomber was not a secu- rity risk because his name was not revealed by the security system). Second, when, in the past, security features have prevented or slowed task completion, a user subverts the security because he or she may no longer trust the system to enable effective task completion in the future. Thus, understanding the behavioral science (rather than the security itself) can offer new ways to design, build and use systems whose security is understood and respected by the
  • 197. user. � Interviewees noted in all scenarios how limitations on memory or analysis capability interfered with an analyst’s ability to perform. One interviewee noted the abundance of information being generated by automated systems, and the increasing likelihood that important events would go unnoticed (Burke, 2010). In the behavioral sciences, the term cognitive load refers to the amount of stress placed on working memory. First addressed by Miller (1956), who claimed that a person’s “working memory” could deal with at most five to nine pieces of information at once, the notion was extended by Chase and Simon (1973) to address memory overload during problem-solving. Several empir- ical results (see, for example, Scandura, 1971) suggest that individuals vary in their ability to process a given amount of information. � Inattentional blindness is a particular aspect of cognitive load that played a role in each scenario. First acknowledged by Mack and Rock (1998) and studied extensively by Simons
  • 198. and his colleagues (see, for example, Simons and Chabris, 1999 and Simons and Jensen, 2009), inattentional blind- ness refers to a person’s inability to notice unexpected events when concentrating on a primary task. For example, inattentional blindness may cause an analyst in Scenario 2 to miss seeing a pattern in the failure of power plants (e.g., that all failing power plants were in areas experiencing severe drought), or to lead an analyst in Scenario 3 to overlook a warning from the bomber’s father because attention was restricted to the bomber himself. � There is significant bias in the way each interviewee thinks about security. This bias reflects the interviewee’s experience, goals and expertise, evidencing itself in the way that two people view the same situation in very different ways. For example, interviewees with jobs that focus primarily on privacy thought of the scenarios as protecting data from outsiders but did not consider inadvertent corruption. By understanding biases, security designers and developers can anticipate likely perceptions and account for
  • 199. them when designing approaches to encourage good secu- rity behavior. � There is a significant element of risk in each scenario, and decision-makers have a difficult time both understanding the nature of the risk (expressed as a combination of like- lihood and impact) and balancing multiple perceptions of the risk to make the best decision in the time available. There is a considerable literature on risk perception and risk communication, with important papers included in the compilations by Mayo and Hollander (1991) and Slovic (2000). By applying behavioral science findings to system design, development and use, users can be made more aware of the likely impact of their security-related decisions. The interviews revealed how practitioners (i.e., users and developers) do and do not involve security-related concerns in their decision-making process. Several points became clear to us as a result of these discussions:
  • 200. � Practitioners do not have a common understanding of security. � Practitioners do not have a heightened awareness of how security can affect all of their job functions and roles. For example, people feel comfortable revealing small amounts of information in each situation but do not realize how easily the information can aggregate into a full picture that becomes a security concern. � Practitioners have limited experience in dissecting a situa- tion to identify necessary security relationships. � The combination of narrow focus with a large (and often growing) quantity of information continues to cause failure to “connect the dots.” Finding a pattern or connection among only a few dots within a large set of data is akin to the problem of identifying a constella- tion in a star-filled nighttime sky. Some people can find the Big Dipper easily, when others see only too many stars. Our interviews made clear that practitioners need training and assistance in identifying important aspects of a situation and in knowing how and when to
  • 201. focus. Based on the outcomes from our scenario discussions, we narrowed our focus to cognitive load and bias as organizing principles for an investigation of relevant behavioral science theory and research findings that offer promise of more secure systems. We also sought information about people’s heuristics and models that might be useful in helping us convey cyber security information and implement relevant results. In the next two sections, we examine both those behavioral science findings that have already been demon- strated to have bearing on cyber security and those with the potential to do so. http://dx.doi.org/10.1016/j.cose.2011.12.010 http://dx.doi.org/10.1016/j.cose.2011.12.010 c o m p u t e r s & s e c u r i t y 3 1 ( 2 0 1 2 ) 5 9 7 e6 1 1 603 4. Areas of behavioral science with demonstrated relevance We begin this section by examining several key behavioral science findings that have been demonstrated as relevant to
  • 202. cyber security in general and information infrastructure protection in particular. Then, in the next section we look at behavioral science research that has potential to improve cyber security. In addition, we include descriptions of heuristics and health-related models that may assist designers in building good security into products and processes. In each case, we document the possible implica- tions of each. 4.1. Findings with demonstrable relevance to cyber security Behavioral science findings improve product, process and panorama in these examples. 4.1.1. Recognition easier than recollection The behavioral science literature demonstrates that recogni- tion is significantly easier than recall. After Rock and Engelstein (1959) showed people a single meaningless shape, the participants’ ability to recall it declined rapidly, but they could recognize it almost perfectly a month later. In other words, asking participants to recall a shape without being shown examples was far less successful than displaying
  • 203. a collection of shapes and asking them to identify which one had been shown to them initially. Over the next two decades, many large scale empirical studies reinforced this finding. For example, Standing (1973) showed participants a set of complex pictures; the number of pictures in each set ranged from 10 to 10,000. The participants could recognize subsets of them with 95 percent accuracy. Dhamija and Perrig (2000) studied how well people remember images compared with passwords, and found that people can more reliably recognize their chosen image than remember a selected password. This result is being applied to user-to-computer authentication; either the user selects an image as an authentication picture, or selects a one-time password based on a shape or configuration. Similarly, Zviran and Haga (1990) showed that even text-based chal- lenge-response mechanisms and associative passwords are an improvement over unaided password recall. Commercial products are using these results. Lamandé
  • 204. (2010) reports that the GrIDSure authentication system (http://www.gridsure.com) has been integrated into Micro- soft’s Unified Access Gateway (UAG) platform. This system allows a user to authenticate herself with a one-time passcode based on a pattern of squares chosen from a grid. When the user wishes access, she is presented with a grid containing randomly-assigned numbers; she then enters as her passcode the numbers that correspond to her chosen pattern. Because the displayed grid numbers change each time the grid is pre- sented, the pattern enables the entered passcode to be a one- time code. Many researchers (see, for example, Sasse, 2007; Bond, 2008; Biddle et al., 2009) have examined aspects of GrIDSure’s security and usability. Other commercial products use images called Passfaces. Introduced over ten years ago (Brostoff and Sasse, 2000) and evaluated repeatedly (Everitt et al., 2009), Passfaces offer an option that addresses the drawbacks of products like GrID- Sure. However, the Consumer’s Union study (2008) and others
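The pattern-on-a-grid idea translates naturally into a small amount of code. The sketch below is a generic illustration of that idea under simple assumptions (a 5 x 5 grid of digits and a four-cell pattern); it is not GrIDSure's actual implementation or API.

```python
import secrets

GRID_SIZE = 5  # a fresh 5 x 5 grid of single digits is generated per login attempt

def new_challenge_grid():
    # Each cell gets a fresh random digit, so the same memorized pattern
    # yields a different passcode every time the grid is presented.
    return [[secrets.randbelow(10) for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]

def passcode_for(grid, pattern):
    # The user's secret is the pattern of (row, column) cells chosen at enrollment;
    # the one-time passcode is whatever digits currently appear in those cells.
    return "".join(str(grid[r][c]) for r, c in pattern)

# Enrollment: the user memorizes a pattern of four cells.
pattern = [(0, 0), (1, 2), (3, 3), (4, 1)]

# Login: the server displays a fresh grid and compares against the digits typed in.
grid = new_challenge_grid()
expected = passcode_for(grid, pattern)
submitted = expected  # in practice, entered by the user reading the displayed grid
print("authenticated" if submitted == expected else "rejected")
```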
  • 205. document the degree to which the average user manages multiple passwordsdsometimes dozens! This security-in- the-large leads to problems that are also shared with image recognition: interference. 4.1.2. Interference Frequent changes to a memorized item interfere with remembering the new version of the item. That is, the newest version of the item competes with the previous ones. The frequency of change is important; for example, Underwood (1957) discovered that, in studies in which participants were required to memorize only a few prior lists, their level of forgetting was much less than in studies where the partici- pants were required to memorize many prior lists. Wixted (2004) points out that even dissimilar things can interfere with something a subject is trying to memorize: “.recently formed memories that have not yet had a chance to consoli- date are vulnerable to the interfering force of mental activity and memory formation (even if the interfering activity is not similar to the previously learned material).”
  • 206. In empirical studies applying these findings to password memorability, Sasse et al. (2002) showed that login failures increased sharply as required password changes became more frequent. In addition, Brostoff and Sasse (2003) showed that allowing more login attempts led to more successful login sessions; they suggest that forgiving systems result in better compliance than very restrictive ones. Everitt et al. (2009) and Chiasson et al. (2009) have exam- ined the use of multiple graphical passwords. They found that users with multiple graphical passwords made fewer errors when recalling them, did not create passwords that were directly related to account names, and did not use similar passwords across multiple accounts. Moreover, even after two weeks, recall success rates remained good with graphical passwords and were better than those with text passwords. Thus, there seemed to be less interference with graphical objects than with textual ones. Recent studies have addressed additional concerns about
  • 207. recall and interference. For example, Jhawar et al. (2011) suggest that good design can overcome these issues, and that graphical recall can form the basis for effective security practices. 4.1.3. Other studies at the intersection In addition to the findings cited above, most of which are drawn from basic cognitive psychology literature, there are many examples of applied studies from other disciplines where behavioral scientists studied cyber-related problems directly. For example, � Sociology. Cheshire and Cook (2004) applied experimental sociological research results to four different categories of computer-mediated interaction. They offer guidance to computer scientists about how to build trust in online networks. For example, they suggest treating computer- http://www.gridsure.com http://dx.doi.org/10.1016/j.cose.2011.12.010 http://dx.doi.org/10.1016/j.cose.2011.12.010 c o m p u t e r s & s e c u r i t y 3 1 ( 2 0 1 2 ) 5 9 7 e6 1 1604 mediated interaction as an architectural problem, using the
  • 208. nature of the mediation to shape desired behavior. They distinguish between random and fixed partners in a trans- action, and suggest appropriate mechanisms for interaction based on this characterization (see Fig. 1). � Economics. Economists study the role of reputation in establishing trust, and this literature is frequently refer- enced in work at the intersection of economics and cyber security. For example, many of the papers at the Workshops on the Economics of Information Security leveraged economic results from reputation research. Yamagishi and Matsuda (2003) propose the use of experience-based infor- mation about reputation to address the problem of lemons: disappointment in expectation. They show that disap- pointment is substantially reduced when online traders can freely change their identities and cancel their reputations. � Psychology and economics. There is an interaction between actual costs and perceived costs when people interact, particularly online. Research in this area spans both psychology (the perception) and economics (the real costs).
  • 209. Datta and Chatterjee (2008) have applied some of this research to the transference of trust in electronic markets. They show that the transference is complete only if agency costs from intermediation lie within consumer thresholds. These examples convince us that mining the behavioral science literature more thoroughly will lead to an empirical basis for improvements in the quality and effectiveness of cyber security defense. This section has provided examples of the direct application of behavioral science research to prob- lems in cyber security. In the next section, we consider other areas where leveraging behavioral science may reap signifi- cant benefits in protecting the information infrastructure. 5. Areas of behavioral science with potential relevance There is a significant amount of behavioral science research on methods or concepts that influence a person’s or group’s perceptions, attitudes, and behaviors. Many findings may have bearing on the design, construction and use of infor- mation infrastructure protection, but the relevance and
  • 210. degree of effect have not yet been tested empirically. In this section, we identify a variety of well-studied behavioral science findings from psychology, behavioral •Online communities •Online auctions •Chat groups •Massively multiplayer online games •Peer-to-peer digital goods exchange •Online “pickup”games (none)•Solicitation by email •Email attachments from unknown individuals Frequency Iterated Interaction One-shot Interaction Continuity Random Partner Fixed Partner Fig. 1 e Example architectural recommendations (Cheshire
  • 211. and Cook, 2004). medicine, and other disciplines where techniques have been demonstrated to affect behavior related to cognition and bias. We also describe several heuristics and health-related models that have potential for improving cyber security. However, unlike the findings in Section 4, these findings have not been evaluated specifically in terms of changing cyber security- related behavior. In this section, we introduce each behav- ioral science finding, discuss a sampling of research results, and describe the possible implications for cyber security. 5.1. Cognition Cognition refers to the way people process and learn infor- mation. There are several findings from research on human cognition that may be relevant to cyber security. 5.1.1. Identifiable victim effect The identifiable victim effect refers to the tendency of indi- viduals to offer greater aid when a specific, identifiable person (the victim) is observed under hardship, when compared to a large, vaguely-defined group with the same need. For
  • 212. example, many people are more willing to help a homeless person living near the office than the several hundred homeless living in their city. (Example: K. Jenni and G. Loe- wenstein, “Explaining the ‘Identifiable Victim Effect’,” Journal of Risk and Uncertainty, 14, 1997, pp. 235e257.) Implications: Users may choose stronger security when possible negative outcomes are tangible and personal, rather than abstract. 5.1.2. Elaboration likelihood model The Elaboration Likelihood Model describes how attitudes are formed and persist. It is based on the notion that there are two main routes to attitude change: the central route and the peripheral route. Central processes are logical, conscious, and require a great deal of thought. Therefore, central route processes to decision-making are only used when people are motivated and able to pay attention. The result of central route processing is often a permanent change in attitude, as people adopt and elaborate on the arguments being made by others. By contrast, when people take the peripheral route,
  • 213. they do not pay attention to persuasive arguments; rather, they are swayed by surface characteristics such as the popularity of the speaker. In this case, attitude change is more like to be only temporary. Research has focused on how to get people to use the central route instead of the peripheral route. (Example: R.E. Petty and J.T. Cacioppo, Attitudes and Persuasion: Classic and Contemporary Approaches. Dubuque, IA: W. C. Brown, 1981. R.E. Petty and J.T. Cacioppo, Communication and Persuasion: Central and Peripheral Routes to Attitude Change, New York: Springer-Verlag, 1986.) Implications: One of the best ways to motivate users to take the central route when receiving a cyber security message is to make the message personally relevant. Fear can also be effective in making users pay attention, but only if levels of fear are moderate and a solution to the fear-inducing situation is also offered; strong fear leads to fight-or-flight (physical) reactions. The central route leads to consideration of arguments for and
  • 214. against, and the final choice is carefully considered. This distinction can be particularly important in security aware- ness training. http://dx.doi.org/10.1016/j.cose.2011.12.010 http://dx.doi.org/10.1016/j.cose.2011.12.010 c o m p u t e r s & s e c u r i t y 3 1 ( 2 0 1 2 ) 5 9 7 e6 1 1 605 5.1.3. Cognitive dissonance Cognitive dissonance is the feeling of discomfort that comes from holding two conflicting thoughts in the mind at the same time. A person often feels strong dissonance when she believes something about herself (e.g., “I am a good person”) and then does something counter to it (e.g., “I did something bad”). The discomfort often feels like tension between the two opposing thoughts. Cognitive dissonance is a very powerful motivator that can lead people to change in one of three ways: change behavior, justify behavior by changing the conflicting attitude, or justify behavior by adding new attitudes. Dissonance is most powerful when it is about self-image (e.g., feelings of foolishness, immorality, etc.).
  • 215. (Examples: L. Festinger, A Theory of Cognitive Dissonance, Stanford, CA: Stanford University Press, 1957; L. Festinger and J.M. Carlsmith, “Cognitive Consequences of Forced Compliance,” Journal of Abnormal and Social Psychology, 58, 1959, pp. 203e211.) Implications: Cognitive dissonance is central to many forms of persuasion to change beliefs, values, attitudes and behaviors. To get users to change their cyber behavior, we can first change their attitudes about cyber security. For example, a system could emphasize a user’s sense of foolishness concerning the cyber risks he is taking, enabling dissonant tension to be injected suddenly or allowed to build up over time. Then, the system can offer the user ways to relieve the tension by changing his behavior. 5.1.4. Social cognitive theory Social Cognitive Theory is a theory about learning based on two key notions: (1) people learn by watching what others do, and (2) human thought processes are central to understanding
  • 216. personality. This theory asserts that some of an individual’s knowledge acquisition can be directly related to observing others within the context of social interactions, experiences, and outside media influences. (Examples: A. Bandura, “Orga- nizational Application of Social Cognitive Theory,” Australian Journal of Management, 13(2), 1988, pp. 275e302; A. Bandura, “Human Agency in Social Cognitive Theory,” American Psychologist, 44, 1989, pp. 1175e1184.) Implications: By taking into account gender, age, and ethnicity, a cyber awareness campaign could reduce cyber risk by using social cognitive theory to enable users to identify with a recognizable peer and have a greater sense of self-efficacy. The users would then be likely to imitate the peer’s actions in order to learn appro- priate, secure behavior. 5.1.5. Bystander effect The bystander effect is a psychological phenomenon in which someone is less likely to intervene in an emergency situation when other people are present and able to help than when he or she is alone. (Example: J.M. Darley and B. Latané,
  • 217. “Bystander Intervention in Emergencies: Diffusion of Responsibility,” Journal of Personality and Social Psychology, 8, 1968, pp. 377e383.) Implications: During a cyber event, users may not feel compelled to increase situational awareness or take necessary security measures because they will expect others around them to do so. Thus, systems can be designed with mechanisms to counter this effect, encouraging users to take action when necessary. 5.2. Bias Bias describes a person’s tendency to view something from a particular perspective. This perspective prevents the person from being objective and impartial. The following findings about bias may be useful in designing, building and using information infrastructure. 5.2.1. Status quo bias Status quo bias describes the tendency of people to not change an established behavior without a compelling incentive to do so. (Example: W. Samuelson and R. Zeckhauser, “Status Quo
  • 218. Bias in Decision Making,” Journal of Risk and Uncertainty, 1, 1988, pp. 7e59.) Implications: Users will need compelling incentives to change their established cyber security behavior. For example, information infrastructure can be designed to provide incentives for people to suspect documents sent from unknown sources. Similarly, the infrastructure can provide designers, developers and users with feedback about their reputations (e.g., “Sixty-three percent of your attachments are never opened by the recipient.”) or the repercussions of their actions (e.g., “It was your design defect that enabled this breach”) to reduce status quo bias. 5.2.2. Framing effects Scientists usually expect people to make rational choices based on the information available to them. Expected utility theory is based on the notion that people choose options that provide the most benefit (i.e., the most utility to them) based on the information available to them. However, there is a growing literature providing evidence that when people must choose among alternatives involving risk, where the
  • 219. probabilities of outcomes are known, they behave contrary to the predictions of expected utility theory. This area of study, called prospect theory, is descriptive rather than predictive; prospect theorists report on how people actually make choices when confronted with information about each alternative. One of the earliest findings in prospect theory (Tversky and Kahneman, 1981) demonstrated that the framing of a message can affect decision making. Framing refers to the context in which someone interprets information, reacts to events, and makes decisions. For example, the efficacy of a drug can be framed in terms of number of lives saved or number of lives lost; studies have shown that equivalent data framed in opposite ways (gain vs. loss) lead to dramatically different decisions about whether and how to use the same drug. The context or framing of a problem can be accomplished by manipulating the decision options or by referring to qualities of the decision-makers, such as their norms, habits and temperament. (Examples: D. Kahneman and A. Tversky, “Prospect Theory: An
  • 220. Analysis of Decisions Under Risk,” Econometrica, 47, 1979, pp. 313–327; A. Tversky and D. Kahneman, “The Framing of Decisions and the Psychology of Choice,” Science, 211, 1981, pp. 453–458.) Implications: User choices about cyber security may be influenced by framing them as gains rather than losses, or by appealing to particular user characteristics. Possible applications include classifying anomalous data from an intrusion detection system log, presenting the interface to a firewall as admitting (good) traffic vs. blocking (bad) traffic, or describing a data mining activity as exposing malicious behavior. 5.2.3. Optimism bias Given the minuscule chances of winning the lottery, it is amazing that people buy lottery tickets. Many people believe they will do better than most others engaged in the same activity, so they buy tickets despite evidence to the contrary. This optimism bias shows itself in many ways, such as
  • 221. overestimating the likelihood of positive events and underestimating the likelihood of negative events. (Examples: N. D. Weinstein, “Unrealistic Optimism About Future Life Events,” Journal of Personality and Social Psychology, 39(5), November 1980, pp. 806–820; D. Dunning, C. Heath and J. M. Suls, “Flawed Self-Assessment: Implications for Health, Education, and the Workplace,” Psychological Science in the Public Interest 5(3), 2004, pp. 69–106.) Implications: Because they underestimate the risk, users may think they are immune to cyber attacks, even when others have been shown to be susceptible. For example, optimism bias may enable spear phishing (messages seeming to come from a trusted source, trying to gain unauthorized access to data at a particular organization). Optimism bias may also induce people to ignore preventive care measures, such as patching, because they think they are unlikely to be affected. To counter optimism bias, systems can be designed to convey risk impact and likelihood in ways that relate to
  • 222. people’s real experiences. 5.2.4. Control bias Control bias refers to the tendency of people to believe they can control or influence outcomes that they clearly cannot; this phenomenon is sometimes called the illusion of control. (Example: E. J. Langer, “The Illusion of Control,” Journal of Personality and Social Psychology 32(2), 1975, pp. 311–328.) Implications: Users may be less likely to use protective measures (such as virus scanning, clearing cache, checking for secure sites before entering credit card information, or paying attention to spear phishing) when they feel they have control over the security risks. 5.2.5. Confirmation bias Once someone takes a position on an issue, she is more likely to notice or give credence to evidence that supports that position than to evidence that discredits it. This confirmation bias (i.e., looking for evidence to confirm a position) results in situations where people are not as open to new ideas as they think they are. They often
  • 223. reinforce their existing attitudes by selectively collecting new evidence, interpreting evidence in a biased way, or selectively recalling information from memory. For example, an analyst finding a perceived pattern in a series of failures will tend to cease looking for other explanations and instead seek confirming evidence for his hypothesis. (Example: M. Lewicka, “Confirmation Bias: Cognitive Error or Adaptive Strategy of Action Control?” in M. Kofta, G. Weary and G. Sedek, Personal Control in Action: Cognitive and Motivational Mechanisms. New York: Springer, 1998, pp. 233–255.) Implications: Users may have initial impressions about how protected (or not) the information infrastructure is that they are using. To overcome their confirmation bias, the system must provide users with an arsenal of evidence to encourage them to change their current beliefs or to mitigate their over-confidence. 5.2.6. Endowment effect The endowment effect describes the fact that people usually place a higher value on objects they own than objects they do
  • 224. not own. A related effect is that people react more strongly to loss than to gain; that is, they will take stronger action to keep from losing something than to gain something. (Example: R. Thaler, “Toward a Positive Theory of Consumer Choice,” Journal of Economic Behavior and Organization, 1, 1980, pp. 39–60.) Implications: Users may pay more (both figuratively and literally) for security when it lets them keep something they already have, rather than gain something new. This effect, coupled with a framing effect, may have particular impact on privacy. When an action is expressed as a loss of privacy (rather than a gain in capability), people may react to it negatively. 5.3. Heuristics In psychology, a heuristic is a simple rule inherent in human nature or learned in order to reduce cognitive load. Thus, we find heuristics appealing for addressing the cognitive load issues described earlier. These rules are used to explain how people make judgments, decide issues, and solve problems; heuristics are particularly helpful in explaining how people
  • 225. deal with complex problems or incomplete information. When heuristics fail, they can lead to systematic errors or cognitive biases. 5.3.1. Affect heuristic The affect heuristic enables someone to make a decision based on an affect (i.e., a feeling) rather than on rational deliberation. If someone has a good feeling about a situation, he may perceive that it has low risk; likewise, a bad feeling can lead to a higher risk perception. (Example: M. Finucane, E. Peters and D. G. MacGregor, “The Affect Heuristic,” in T. Gilovich, D. Griffin and D. Kahneman, Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, 2002, pp. 397–420.) Implications: If users perceive little risk, the system may need a design that creates a more critical affect toward computer security that will encourage them to take protective measures. The system should also reward the system administrator who looks closely at a system audit log because something just doesn’t “feel” right.
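The design implications of framing (Section 5.2.2) and affect (Section 5.3.1) are easier to see with a concrete illustration. Below is a minimal, purely illustrative sketch in Python of how the same underlying security statistic could be presented either as a neutral summary or as a loss-framed, more affect-laden alert; the function names, message wording, and numbers are hypothetical assumptions made for illustration and are not taken from the paper or any particular product.

```python
# Illustrative only: two framings of the same weekly mail-filter statistic.
# A designer could test empirically which framing prompts more protective behavior.

def neutral_summary(blocked: int, total: int) -> str:
    """Low-affect, gain-style framing: emphasizes what was delivered safely."""
    return (f"Your mail filter delivered {total - blocked} of {total} "
            f"messages normally this week.")

def loss_framed_alert(blocked: int, total: int) -> str:
    """Higher-affect, loss-style framing: emphasizes what was attempted against the user."""
    pct = (100.0 * blocked / total) if total else 0.0
    return (f"{blocked} messages ({pct:.0f}% of your mail) this week attempted to "
            f"steal credentials or deliver malware; a single click could have "
            f"exposed your files.")

if __name__ == "__main__":
    print(neutral_summary(blocked=12, total=480))
    print(loss_framed_alert(blocked=12, total=480))
```

Which framing actually produces better security behavior for a given user population is an empirical question of the kind discussed in Section 6.2.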
  • 226. 5.3.2. Availability heuristic The availability heuristic refers to the relationship between ease of recall and probability. In other words, because of the availability heuristic, someone will predict an event’s probability or frequency in a population based on the ease with which instances of an event come to mind. The more recent, emotional, or vivid an event is, the more likely it will come to mind. (Example: A. Tversky and D. Kahneman, “Availability: A Heuristic for Judging Frequency and Probability,” Cognitive Psychology 5, 1973, pp. 207–232.) Implications: Users will be more persuaded to act responsibly if the system is designed to use vivid, personal events as examples, rather than statistics and facts. Moreover, if the system reports recent cyber events, it may be more effective in encouraging users to take measures to prevent future adverse events. Users’ choices may also be heavily biased by the first thing that comes to
  • 227. mind. Therefore, frequent security exercises may encourage more desirable security behavior. On the other hand, a system that has gone for some time without a major cyber incident may lull the administrators into a false sense of security because of the low frequency of events. The administrators may then become lax in applying security updates because of the long run of incident-free operation. 5.4. Health-related behavioral models In cyber security, we frame many issues using health-related metaphors because they are, in many ways, analogous. For example, we speak of viruses and infections when describing attacks. Similarly, we discuss increasing immunity to intrusions, or increasing resilience after a successful attack. For this reason, we believe that security design strategies can leverage the significant research into health-related behavioral models. We discuss several candidate models here. 5.4.1. Health belief model The Health Belief Model, developed in the 1950s after the
  • 228. failure of a free tuberculosis screening program, helped the U.S. Public Health Service by attempting to explain and predict health behaviors. It focused on attitudes and beliefs. Six constructs describe an individual’s core beliefs based on their perceptions of: susceptibility, severity, benefits, barriers, cues to action, and self-efficacy of performing a given health behavior. The perceived benefits must outweigh the barriers or costs. (Example: I. Rosenstock, “Historical Origins of the Health Belief Model,” Health Education Monographs, 2(4), 1974.) Implications: The health and security education models are similar. If the Health Belief Model translates to cyber security awareness, a user will take protective security actions if he feels that a negative condition can be avoided (e.g., computer viruses can be avoided), has a positive expectation that by taking a recommended action he will avoid a negative condition (e.g., doing a virus scan will prevent a viral infection), and believes that he can successfully perform the recommended action
  • 229. (e.g., is confident that he knows how to install virus protection files). The model suggests success only if the benefits (e.g., keeping himself, his organization, and the nation safe) outweigh the costs (e.g., download time, loss of work). 5.4.2. Extended parallel process model The Extended Parallel Process Model (EPPM) is an extension of the Health Belief Model that attempts to improve message efficacy by using threats. Based on Leventhal’s danger control/fear control framework, EPPM, which has multiple components, explains why many fear appeals fail, incorporates fear as a key variable, and describes the relationship between fear and efficacy. Leventhal defines the danger control process as an individual seeking to reduce the risk presented by taking direct action and making adaptive changes, whereas the fear control process focuses on maladaptive changes to the perception, susceptibility and severity of the risk. The EPPM provides guidance about how to construct effective fear-appeal messages: As long as efficacy perceptions are stronger than threat perceptions, the user will go into danger control mode
  • 230. (accepting the message and taking recommended action to prevent danger from happening). (Examples: K. Witte, “Putting the Fear Back into Fear Appeals: The Extended Parallel Process Model,” Communication Monographs, 59, 1992, pp. 329–349; H. Leventhal, “Findings and Theory in the Study of Fear Communications,” in L. Berkowitz, ed., Advances in Experimental Social Psychology, Vol. 5, New York: Academic Press, 1970, pp. 119–186.) Implications: When used appropriately, threats and fear can be useful in encouraging users to comply with security. However, the messages cannot be too strong, and users must believe that they are able to comply successfully with the security advice. This model may explain how to encourage users to apply security and performance patches, use and maintain anti-virus tools, and avoid risky online behavior. 5.4.3. Illness representations The health care community has a great deal of experience with representing the nature and severity of illness to
  • 231. patients, so that patients can make informed decisions about treatment choices and health. In particular, there are lessons to be learned from the way fear messages are used in relatively acute situations to encourage people to take health-promoting actions such as wearing seat belts or giving up smoking. Health researchers (Leventhal et al., 1980) have found that different types of information are needed to influence both attitudes and reactions to a perceived threat to health and well-being, and that the behavior changes last only for short periods of time. In extending their initial model, the researchers sought adaptations and coping efforts for those patients experiencing chronic illness. The resulting illness representations integrate the coping mechanisms with existing schemata (i.e., the normative guidelines that people hold), enabling patients to make sense of their symptoms and guiding any coping actions. The illness representations have five components: identity, timeline, consequences, control/cure, and illness coherence. (Examples: H. Leventhal, D. Meyer
  • 232. and D.R. Nerenz, “The Common Sense Representation of Illness Danger,” in S. Rachman, ed., Contributions to Medical Psychology, New York: Pergamon Press, 1980, pp. 17–30; H. Leventhal, I. Brissette and E.A. Leventhal, “The Common-sense Model of Self-Regulation of Health and Illness,” in L.D. Cameron and H. Leventhal, eds., The Self-Regulation of Health and Illness Behaviour, London: Routledge, 2003, pp. 42–65.) Implications: In a well-designed system, users concerned about whether to trust a site, person, or document can obtain new information about their security posture and evaluate their attempts to deal (e.g., moderate, cure or cope) with its effects. Then, the users form new representations based upon their experiences. These representations are likely to be cumulative, with security information being adopted, discarded or adapted as necessary. Thus, the representations are likely to be linked to the selection of coping procedures, action plans and outcomes. These results could be of significance for developing incident response strategies.
  • 233. 5.4.4. Theory of reasoned action/theory of planned behavior The Theory of Reasoned Action and the Theory of Planned Behavior are based on two notions: (1) people are reasonable and make good use of information when deciding among behaviors, and (2) people consider the implications of their behavior. Behavior is directed toward goals or outcomes, and people freely choose those behaviors that will move them toward those goals. They can also choose not to act if they think acting will move them away from their goals. The theories take into account four concepts: behavioral intention, attitude, social norms, and perceived behavioral control. Intention to behave has a direct influence on actual behavior as a function of attitude and subjective norms. Attitude is a function both of the personal consequences expected from behaving and the affective value placed on those consequences. (Example: I. Ajzen, “From Intentions to Actions: A
  • 234. Theory of Planned Behavior,” in J. Kuhl and J. Beckmann, eds., Action Control: From Cognition to Behavior. Berlin, Heidelberg, New York: Springer-Verlag, 1985.) Implications: To encourage users to change their security behavior, the system must create messages that affect users’ intentions; in turn, the intentions are changed by influencing users’ attitudes through identification of social norms and behavioral control. The users must perceive that they can control the successful completion of their tasks securely and safely. 5.4.5. Stages of change model The Stages of Change Model assesses a person’s readiness to initiate a new behavior, providing strategies or processes of change to guide her through the stages of change to action and maintenance. Change is a process involving progression through six stages: precontemplation, contemplation (thoughts), preparation (thoughts and action), action (actual behavior change), maintenance, and termination. Therefore, interventions to change behaviors must match and affect the
  • 235. appropriate stage. To progress through the early stages, people apply cognitive, affective, and evaluative processes. As people move toward maintenance or termination, they rely more on commitments and conditioning. (Examples: J.O. Prochaska, J.C. Norcross and C.C. DiClemente, Changing for Good: The Revolutionary Program That Explains the Six Stages of Change and Teaches You How to Free Yourself From Bad Habits. New York: W. Morrow, 1994; J.O. Prochaska and C.C. DiClemente, “The Transtheoretical Approach,” in J.C. Norcross and M.R. Goldfried, eds., Handbook of Psychotherapy Integration, 2nd ed., New York: Oxford University Press, 2005, pp. 147–171.) Implications: To change security-related behaviors, it is necessary first to assess the users’ stage before developing processes to elicit behavior change. For example, getting software developers to implement security in the code development life cycle, and especially throughout the life cycle, is notoriously difficult. Currently, much effort is directed at moving
  • 236. developers directly to stage four (action), without appropriate attention to the importance of the earlier stages. 5.4.6. Precaution-adoption process theory Theories that try to explain behavior by examining the perceived costs and benefits of behavior change work only if the person has enough knowledge or experience to have formed a belief. The Precaution-Adoption Process Model seeks to understand and explain behavior by looking at seven consecutive stages: unaware; unengaged; deciding about acting; decided not to act; decided to act; acting; and maintenance. People should respond better to interventions that are matched to the stage they are in. (Examples: N.D. Weinstein, “The Precaution Adoption Process,” Health Psychology, 7(4), 1988, pp. 355–386; N.D. Weinstein and P.M. Sandman, “A Model of the Precaution Adoption Process: Evidence From Home Radon Testing,” Health Psychology, 11(3), 1992, pp. 170–180.) Implications: Security actions may be related to the seven stages. It may be necessary to assess a user’s stage before
  • 237. developing a process to elicit the desired behavior change. 6. Applying behavioral science findings: the way forward We have presented some early results that show why this multi-disciplinary approach is likely to yield useful insights. In this final section, we describe next steps for determining the best ways to blend behavioral science with computer science to yield improved cyber security. The recommended steps involve encouraging multi-disciplinary workshops, performing empirical studies across disciplines, and building an accessible repository of multi-disciplinary findings. 6.1. Workshops bridging communities Multi-disciplinary work can be challenging for many reasons. First, as noted by participants in a National Academy of Science workshop (2010), there are inconsistent terminologies and definitions across disciplines. Particularly for words like “trust” or “risk,” two different disciplines can use the same word but with very different meanings and assumptions. Second, there are few incentives to publish findings
  • 238. across disciplines, so many researchers work in distinct and separate areas that do not customarily share information. For this reason, we recommend the establishment of workshops that bridge communities so that each community’s knowledge can benefit the others’. In July 2010, the Institute for Information Infrastructure Protection (I3P) held a two-day workshop to bring together members of the behavioral science community and the cyber security community, examine how to move successfully evaluated findings into practice, and establish groups of researchers willing to empirically evaluate promising findings and assess their applicability to cyber security. The workshop created an opportunity for the formation of groups of researchers and practitioners eager to evaluate and adopt more effective ways of integrating behavioral science with cyber security. That is, the workshop is the first step in what we hope will be a continuing partnership between computer science and behavioral science that will improve the effectiveness of cyber security.
  • 239. The output of the workshop included:
  - Identification of existing findings that can enhance cyber security in the near term.
  - Identification of potential behavioral science findings that could be applied but necessitate empirical evaluations of their effects on cyber security.
  - Identification of cyber security areas and problems where application of concepts from behavioral science could have a positive impact.
  - Establishment of an initial repository of information about behavioral science and cyber security.
  As a result of this workshop, several spear phishing studies were conducted in university and industrial settings, and an incentives study, to empirically demonstrate what kinds of incentives (i.e., money, convenient parking spots, public recognition, etc.) would most motivate users to have good cyber hygiene, was designed for future administration.
  • 240. A second workshop was held in October 2011 to report on the studies’ findings and to organize further studies. Workshops of this kind can not only act as catalysts for the initiation of new research but can also encourage continued interaction and cooperation across disciplines. Similar efforts are being encouraged in several areas of cyber security, particularly in usable security (Pfleeger, 2011). 6.2. Empirical evaluation across disciplines We hope to expand the body of knowledge on the interactions between human behavior and cyber security via investigations that will produce both innovative experimental designs and data that can form the basis of experimental replication and tailoring of applications to particular situations. However, there are challenges to performing this type of research, especially when resources are constrained. For example, it is not usually possible to build the same system twice (one as control, one as treatment) and compare the results, so good experimental design is crucial in producing strong, credible
  • 241. results with sufficient levels of external validity. Empirical evaluation of the effects of change on cyber security involves many things, including identifying variables, controlling for bias and interaction effects, and determining the degree to which results can be generalized. These are fundamental principles of the empirical method but are often not understood or not applied appropriately. We hope to produce more comprehensive guidelines for experimental design, aimed at assisting cybersecurity practitioners and behavioral scientists in designing evaluations that will produce the most meaningful results. These guidelines will highlight several issues:
  - The need to design a study so that confounding variables and bias are reduced as much as possible.
  - The need to state the experimental hypothesis and identify dependent and independent variables.
  - The need to identify the research participants and determine which population is under scrutiny.
  - The need for clear and complete sampling procedures, so that the sample represents the identified population.
  - The need to describe experimental conditions in enough detail so that the reader can understand the study and also replicate it.
  - The need to do an effective post-experiment debriefing, especially for studies where the actual intent of the study is not revealed until the study is completed.
  • 242. There are several examples of good experimental design for studies at the intersection of behavioral science and cyber security. For instance, many lessons were learned in an experiment focused on insider threat (Caputo et al., 2009). In this study, the researchers encountered several challenges in selecting the best sample and following strict empirical procedures. They documented the importance of pilot testing their experimental design before engaging their targeted participants. In particular, it was difficult to get corporate participants to perform the experimental tasks with the same motivation that the average users have when doing their regular jobs. Therefore, the researchers used pilot testing to determine what would motivate participants. Then, the motivation was built into the study design. Although this study used corporate
  • 243. employees, real networks, and plausible tasks to make the research environment as realistic as possible, generating data sets in any controlled situation reduced the researchers’ ability to generalize the findings to complex situations. There are many studies that can benefit from better data collection and better study design. Pfleeger et al. (2006) suggest a roadmap for improved data collection and analysis of cyber security information. In addition, Cook and Pfleeger (2010) describe how to build improvements on existing data sets and findings. 6.3. Repository of findings We are building a repository of relevant findings, including data sets where available, to serve at least two purposes. First, it will provide the basis for decision-making about when and how to include behavioral considerations in the specification, design, construction and use of cyber security products and processes. Second, it will enable researchers and practitioners to replicate studies in their own settings, to confirm or refute
  • 244. earlier findings and to tailor methods to particular needs and constraints. Such information will lay the groundwork for evidence-based cyber security. This paper reports on the findings of our initial foray into the blending of behavioral science and cyber security. In recent years, there has been much talk about inviting both disciplines to collaborate, but little work has been done to open discussion broadly to both communities. Our workshops took bold and broad steps, and it is hoped that the activities reported here, built on the shoulders of work performed in both communities over the past two decades, will encourage others to join us in thinking more expansively about cyber security problems and possible solutions. In particular, we encourage others engaged in research across disciplines to contact us, so that we can establish virtual and actual links that move us toward understanding and implementation of improved cyber security. Acknowledgments
  • 245. This work was sponsored by grants from the Institute for Information Infrastructure Protection at Dartmouth College, under award number 2006-CS-001-000001 from the US Department of Homeland Security, National Cyber Security Directorate.
  References
  Amos Deborah. Challenge: airport screening without discrimination. Morning Edition, National Public Radio. Available at: http://www.npr.org/templates/story/story.php?storyId=122556071; January 14, 2010.
  Baier Annette. Trust and antitrust. Ethics 1986;96(2):231–60.
  Baker Peter, Hulse Carl. U.S. had early signals of terror plot, Obama says. New York Times December 2009;30:1.
  Bell David E, La Padula Leonard J. Secure computing systems: mathematical foundations. MITRE Technical Report MTR-2547. Bedford, MA: The MITRE Corporation; 1973.
  Biba Kenneth J. Integrity considerations for secure computer systems. MITRE Technical Report MTR-3153. Bedford, MA: The MITRE Corporation; April 1977.
  • 246. Biddle Robert, Sonia Chiasson, van Oorschot PC. Graphical passwords: learning from the first generation. Technical Report 09-09. Ottawa, Canada: School of Computer Science, Carleton University; 2009.
  Bond Michael. Comments on GrIDSure authentication. Available at: http://www.cl.cam.ac.uk/~mkb23/research/GridsureComments.pdf; 28 March 2008.
  Brostoff Sacha, Sasse M Angela. Are passfaces more usable than passwords? A field trial investigation. In: McDonald S, Waern Y, Cockton G, editors. People and computers XIV — usability or else, Proceedings of HCI 2000. Sunderland, UK: Springer; 2000. p. 405–24.
  Brostoff Sacha, Sasse M Angela. Ten strikes and you’re out: increasing the number of login attempts can improve password usability. In: Proceedings of CHI 2003 Workshop on Human-Computer Interaction and Security Systems; 2003. Ft. Lauderdale, FL.
  Burke Cody. Intelligence gathering meets information overload. Basex TechWatch. Available at: http://www.basexblog.com/2010/01/14/intelligence-gathering-meets-io/; 14 January 2010.
  Caputo Deanna, Maloof Marcus, Stephens Gregory. Detecting insider theft of trade secrets. IEEE Security and Privacy November–December 2009;7(6):14–21.
  Castelfranchi Cristiano, Falcone Rino. Principles of trust for MAS: cognitive anatomy, social importance, and quantification. In: Proceedings of the Third International Conference on Multi Agent Systems; 1998.
  • 247. Castelfranchi Cristiano, Falcone Rino. Social trust: a cognitive approach. In: Castelfranchi Cristiano, Tan Yao-Hua, editors. Trust and deception in virtual societies. Amsterdam: Kluwer Academic Publishers; 2002.
  Chase WG, Simon HA. Perception in chess. Cognitive Psychology 1973;4(1):55–81.
  Cheshire Coye, Cook Karen. The emergence of trust networks under uncertainty: implications for internet interactions. Analyse & Kritik 2004;26:220–40.
  Chiasson Sonia, Alain Forget, Elizabeth Stobert, van Oorschot Paul C, Biddle Robert. Multiple password interference in text passwords and click-based graphical passwords. ACM Computer and Communications Security (CCS); November 2009:500–11.
  Clements Paul, Northrup Linda. Software product lines: practices and patterns. Reading, MA: Addison-Wesley; 2001.
  Consumer’s Union. ID leaks: a surprising source is your government at work. Consumer Reports. Available at: http://www.consumerreports.org/cro/money/credit-loan/identity-theft/government-id-leaks/overview/government-id-leaks-ov.htm; September 2008.
  Cook Ian P, Pfleeger Shari Lawrence. Security decision support challenges in data collection and use. IEEE Security and Privacy May–June 2010;8(3):28–35.
  • 248. Datta Pratim, Chatterjee Sutirtha. The economics and psychology of consumer trust in intermediaries in electronic markets: the EM-trust framework. European Journal of Information Systems February 2008;17(1):12–28.
  Dhamija Rachna, Perrig Adrian. Déjà Vu: a user study using images for authentication. In: Proceedings of the 9th USENIX Security Symposium; August 2000. Denver, CO.
  Ellsberg Daniel J. Risk, ambiguity and decision. RAND Report D-12995. Santa Monica, CA: RAND Corporation; 1964.
  Everitt Katherine, Bragin Tanya, Fogarty James, Kohno Tadayoshi. A comprehensive study of frequency, interference, and training of multiple graphical passwords. In: ACM Conference on Human Factors in Computing Systems (CHI); April 2009.
  Jhawar Ravi, Inglesant Philip, Sasse Martina Angela, Courtois Nicolas. Make mine a quadruple: strengthening the security of graphical one-time PIN authentication. In: Proceedings of the Fifth International Conference on Network and Systems Security; September 6–8, 2011. Milan, Italy.
  Klein Gary A. Sources of power: how people make decisions. Cambridge, MA: MIT Press; 1998.
  Klein Gary A. Streetlights and shadows: searching for the keys to adaptive decision making. Cambridge, MA: MIT Press; 2009.
  Klein GA, Calderwood R. Decision models: some lessons from the field. IEEE Transactions on Systems, Man and Cybernetics September/October 1991;21(5):1018–26.
  • 249. Klein Gary A, Salas Eduardo, editors. Linking expertise and naturalistic decision making. Erlbaum; 2001.
  Lamandé Emmanuelle. GrIDSure authenticates Microsoft’s latest remote application platform. Global Security Mag. Available at: http://www.globalsecuritymag.com/GrIDsure-authenticates-Microsoft-s,20100427,17307.html; 27 April 2010.
  Leventhal H, Meyer D, Nerenz DR. The Common Sense Representation of Illness Danger. In: Rachman S, editor. Contributions to Medical Psychology. New York: Pergamon Press; 1980. p. 17–30.
  Lerner JS, Tiedens LZ. Portrait of the angry decision maker: how appraisal tendencies shape anger’s influence on cognition. Journal of Behavioral Decision Making 2006;19:115–37 (Special Issue on Emotion and Decision Making).
  Libicki Martin C, Pfleeger Shari Lawrence. Collecting the dots: problem formulation and solution elements. RAND Occasional Paper OP-103-RC. Santa Monica, CA: RAND Corporation; 2004.
  Mack A, Rock I. Inattentional blindness. Cambridge, MA: MIT Press; 1998.
  Mayo Deborah, Hollander Rachelle, editors. Acceptable evidence: science and values in risk management. Oxford University Press; 1991.
  Miller George A. The magic number seven plus or minus two: some limits on our capacity to process information. Psychological Review 1956;63:81–97.
  • 250. National Academy of Science. Toward better usability, security and privacy of information technology. Report of a Workshop. Washington, DC: National Academies Press; 2010.
  Ofsted (U.K. Office for Standards in Education, Children’s Services and Skills). The safe use of new technologies. Report 090231. Manchester, UK: Ofsted; February 2010.
  Pfleeger Shari Lawrence. Draft report on the NIST workshop. Available at: http://www.thei3p.org/docs/publications/436.pdf; March 2011.
  Pfleeger Shari Lawrence, Predd Joel, Hunker Jeffrey, Bulford Carla. Insiders behaving badly: addressing bad actors and their actions. IEEE Transactions on Information Forensics and Security March 2010;5(2).
  Pfleeger Shari Lawrence, Rue Rachel, Horwitz Jay, Balakrishnan Aruna. Investing in cyber security: the path to good practice. Cutter IT Journal January 2006;19(1):11–8.
  Predd Joel, Pfleeger Shari Lawrence, Hunker Jeffrey, Bulford Carla. Insiders behaving badly. IEEE Security and Privacy July/August 2008;6(4):66–70.
  Riegelsberger Jens, Sasse M Angela, McCarthy John D. The researcher’s dilemma: evaluating trust in computer-mediated communication. International Journal of Human-Computer Studies 2003;58(6):759–81.
  • 251. Riegelsberger Jens, Sasse M Angela, McCarthy John D. The mechanics of trust: a framework for research and design. International Journal of Human-Computer Studies 2005;62(3):381–422.
  • 252. Rock I, Engelstein P. A study of memory for visual form. American Journal of Psychology 1959;72:221–9.
  Sasse M Angela. GrIDsure usability trials. Available at: http://www.gridsure.com/uploads/UCL%20Report%20Summary%20.pdf; 2007.
  Sasse M Angela, Brostoff Sacha, Weirich Dirk. Transforming the ‘weakest link’: a human-computer interaction approach to usable and effective security. In: Temple R, Regnault J, editors. Internet and wireless security. London: IEE Press; 2002. p. 243–58.
  Sasse M Angela, Flechais Ivan. Usable security: why do we need it? How do we get it? In: Cranor Lorrie Faith, Garfinkel Simson, editors. Security and usability. Sebastopol, CA: O’Reilly Publishing; 2005. p. 13–30.
  Scandura JM. Deterministic theorizing in structural learning: three levels of empiricism. Journal of Structural Learning 1971;3:21–53.
  Schneier Bruce. Semantic attacks: the third wave of network attacks. In: Crypto-gram newsletter. Available at: http://www.schneier.com/crypto-gram-0010.html; October 15, 2000.
  • 253. Simons Daniel J, Chabris CF. Gorillas in our midst: sustained inattentional blindness for dynamic events. Perception 1999;28:1059–74.
  Simons Daniel J, Jensen Melinda S. The effects of individual differences and task difficulty on inattentional blindness. Psychonomic Bulletin & Review 2009;16(2):398–403.
  Slovic Paul, editor. The perception of risk. London: Earthscan Ltd.; 2000.
  Smith Walter, Hill Becky, Long John, Whitefield Andy. A design-oriented framework for modelling the planning and control of multiple task work in secretarial office administration. Behaviour and Information Technology 1997;16(3):161–83.
  Spira Jonathan B. The Christmas day terrorism plot: how information overload prevailed and counterterrorism knowledge sharing failed. Basex TechWatch. Available at: http://www.basexblog.com/category/analysts/jonathan-b-spira/; 4 January 2010.
  Standing L. Learning 10,000 pictures. Quarterly Journal of Experimental Psychology 1973;27:207–22.
  Tenner Edward. Why things bite back: technology and the revenge of unintended consequences. Vintage Press; 1991.
  Tversky A, Kahneman D. The Framing of Decisions and the Psychology of Choice. Science 1981;211:453–8.
  Underwood BJ. Interference and forgetting. Psychological Review 1957;64:49–60.
  • 254. Virginia Tech. When users resist: how to change management and user resistance to password security. Pamplin, Fall 2011. Available at: http://www.magazine.pamplin.vt.edu/fall11/passwordsecurity.html.
  Wixted John T. The psychology and neuroscience of forgetting. Annual Review of Psychology 2004;55:235–69.
  Yamagishi T, Matsuda M. The role of reputation in open and closed societies: an experimental study of online trading. Center for the Study of Cultural and Ecological Foundations of Mind; 2003. Working Paper Series 8.
  Zviran Moshe, Haga William J. Cognitive passwords: the key to easy access control. Computers and Security 1990;8(9):723–36.
  Shari Lawrence Pfleeger is the Research Director for the Institute for Information Infrastructure Protection (I3P), a consortium of universities, national laboratories and non-profits dedicated to improving IT security, reliability and dependability. Pfleeger earned a PhD in information technology and engineering from George Mason University.
  Deanna Caputo, a lead behavioral psychologist at the MITRE Corporation, investigates questions addressing the intersection of social science and computer science, such as insider threat and effective ways to change behavior. She holds a bachelor’s degree in psychology from Santa Clara University and a PhD in social and personality psychology from Cornell University.
  • 256. Homework 2
  Due a week after the first class at 11:59 pm.
  Read the assigned articles in D2L. Answer the questions below. The answers must demonstrate that you have substantively engaged with the material and you haven’t simply googled the question and copy/pasted the answer.
  1. How should people make decisions, according to economists?
  2. Are you more likely to be killed by a bear or a bee? Why did you answer that way? Why might someone answer the other way?
  3. Scenario: Bob, who just got a new laptop, is working on an important project that requires him to use it. If he has a limited amount of time to consider whether to install a virus scanner or not, what will be the first cost or benefit to come to mind? What decision is he likely to make based on that thought?
  4. Pick one of the heuristics we discussed in class and come up with another example of how it might apply in cybersecurity.
  5. Alice is responsible for all cybersecurity decisions in her organization. How should she allocate her attention, and how is she likely to allocate her attention? Consider some examples: a vulnerability that affects a system that does not connect to the Internet, phishing attempts on her users, and Advanced Persistent Threats from nation-state actors. Which should she prioritize, and which is she likely to prioritize?
  6. How can an end-user tell whether their account is secure? What factors might lead a person to believe that an account was secure when it wasn’t?
  • 257. This document is licensed under a Creative Commons Attribution 4.0 International License. © 2017