Trusting Smart Speakers: Understanding the Different Levels of
Trust between Technologies
Alec Wells alec.wells@yorksj.ac.uk
Department of Computer Science
York St John University
York, YO31 8JY, United Kingdom
Aminu Bello Usman a.usman@yorksj.ac.uk
Department of Computer Science
York St John University
York, YO31 8JY, United Kingdom
Justin McKeown j.mckeown@yorksj.ac.uk
Department of Computer Science
York St John University
York, YO31 8JY, United Kingdom
Abstract
The growing usage of smart speakers raises many privacy and trust concerns compared to other
technologies such as smart phones and computers. In this study, a proxy measure of trust is
used to gauge users' opinions of three different technologies in an empirical study, and to
understand which technology people are most likely to trust. The collected data were
analyzed using the Kruskal-Wallis H test to determine the statistical differences between
users' trust levels in the three technologies: smart speaker, computer and smart phone. The
findings of the study revealed that despite the wide acceptance, ease of use and reputation of
smart speakers, people find it difficult to trust smart speakers with their sensitive information via
Direct Voice Input (DVI) and would prefer to use the keyboard or touchscreen offered by
computers and smart phones. Findings from this study can inform future work on users' trust in
technology based on perceived ease of use, reputation, perceived credibility and risk of using
technologies via DVI.
Keywords: Direct Voice Input, Risk, Security, Technology and Trust.
1. INTRODUCTION
As new technologies constantly emerge, it is important to understand how users perceive
them, especially when considering possible security concerns. For several years, studies in
information security and practice have posited that many security incidents and breaches are
caused by human factors rather than technical failures [1]. Thus, human factors on the part of an
individual, or an employee in an organization, can be an attack vector that can be exploited. The
assessment of human factors as an attack vector is a complex, multi-component and multi-level
problem involving characteristics of hardware, software, the user interface, and how humans
issue instructions to technologies [2]. The use of Direct Voice Input (DVI) to issue instructions to
technologies is growing rapidly and appears to be part of the future of human-computer
interaction [3]. DVI provides what may be a more 'natural' mode of control for communication
between human and technology, one that is more convenient and faster than conventional input
means such as keyboards and touch screens [4]. Although DVI enables novel and convenient
forms of interaction with technologies, there are concerns about trust, security and privacy when
using DVI [3].
At the heart of human-centered security assessments, trust (a soft security property) [5] is
considered an important social control security mechanism that can be used to evaluate the
security aptitude of human agents. The trust mechanism supplements traditional hard technical
security mechanisms (e.g., authentication and access control), thus enabling a wider view of
social control mechanisms alongside existing technical security approaches within an overall
cyber security system. Trust is seen as a dynamic concept that can constantly change in
humans due to its many different facets and dimensions. Studies on trust in e-commerce have
focused on users' responses to interface design and complexity rather than the deeper,
emotion-charged dynamics of trust [6], whereas other studies on trust and digital systems have
focused on evaluating trust based on usability, perceived privacy, and content requirements [7].
In relation to technologies, trust can be considered as the degree to which a user believes in the
veracity or effectiveness of a technology to function as expected, whether based on the
credibility or reputability of the technology or simply on the user's experience and perceptions
[8]. Thus, in this context, trust can be viewed as the confidence a user has that a device will
behave in an expected manner [9].
Technology is increasingly becoming a part of human lives and is considered an extension of
human functions [8]. As a society, we treat digital tools and algorithms with extensive trust. The
extent to which we trust technologies with sensitive data and our lives will soon be a security
concern, if it is not one already. In other words, we constantly share our sensitive data, delegate
responsibilities to technologies and, apparently, we trust them. This is particularly the case in the
modern world, where new technologies are frequently developed and quickly adopted in homes
and workplaces. For example, smart speakers have already been adopted in 13% of United
States households and about 10% of United Kingdom households, and adoption was projected
to grow to 55% in the United States and 48% in the United Kingdom by 2022 [10]. As people
increasingly adopt and rely on these technologies, the issues of security, trust and associated
risk become more urgent [11]. However, the details of the security models and algorithms (e.g.,
the encryption or the cloud security architecture) used in these technologies are not always
transparent or easily understood, especially for a non-technical user. Thus, the usable security
of those devices, in the eyes of a non-technical user, can come down to a very simple binary
condition: trust or no trust.
This study is interested in understanding whether users have varying degrees of trust depending
on the technology's input method (DVI, keyboard or touch screen), with the aim of improving
understanding of the security concerns that arise when adopting DVI in technologies. It is also
important to understand whether users are more likely to give away sensitive information
because of the trust they have in a technology and the input method used to access it. The
study considers three different technologies (smart speakers, smart phones and computers) and
compares the participants' opinions about trust in each of them. This should help identify
whether there are any security concerns with adopting voice-based technologies such as smart
speakers.
The contributions of the paper can be summarized as follows.
• Discuss the current state of trust and technology in relation to smart speakers, smart
phones and computers.
• Investigate whether there are any differences in how people perceive the technologies and
whether they would be more likely to trust them with sensitive information.
• Evaluate whether people have more or less trust in certain technologies depending on the
input method.
The paper is structured as follows: Section 2 reviews related work on the concept of trust in a
social context as well as trust and technology, with emphasis on the three technologies (smart
speakers, smart phones and computers). Section 3 presents the trust model. Section 4 discusses
the research methodology and describes the participants involved in the study. Section 5
presents the results of the study and discussion. In Section 6, a conclusion is presented along
with potential future work.
2. RELATED WORK
In the modern world, as more of our lives become dependent on digital technologies, keeping
that part of our lives secure and private is an ever-growing concern. This is true for average
users who want to keep private or sensitive information away from prying eyes and from those
with malicious intent. This growing awareness of the need to keep data secure has only become
more apparent as new information and scandals come to light. One recent example is the
Facebook-Cambridge Analytica data scandal, a huge breach of security for many users, in which
their data were collected and used to target political adverts for maximum impact on those
specific users [12]. One possible explanation for why user data and privacy can easily be
breached, leaked or compromised lies in how we use and trust those technologies. As new
technologies are invented, so are new ways of interacting with them, such as DVI, and this in
turn raises new challenges for how we can trust and use the technologies. For example,
previous studies such as [13] investigated user perceptions of smart home devices, perceptions
of smart home privacy risks and the actions required to regulate IoT devices and users' data.
The study found that users often prioritized convenience and connectedness over privacy
concerns, and that users' assumptions about privacy protections were mostly contingent on their
trust in IoT device manufacturers. In the same vein, [14] investigated the relative effects of DVI
on head position in the CH-146 Griffon helicopter, in comparison with a manual input system, for
flying and non-flying pilots. The study suggests that users find DVI much easier and more
reliable to use than manual inputs. A broader study of user perceptions and behavior in
technology (the 'privacy paradox') [15] similarly suggested that users found IoT devices
significantly less private than non-IoT products, though conversely it also found that users
considered them less usable and familiar. Similarly, [16] investigated users' confidence in
smartphones with regard to privacy and found that many participants were apprehensive about
performing financially sensitive tasks on their phones due to fears of theft and accidental clicking,
as well as misconceptions about network communications and a general mistrust of smart phone
applications.
2.1. Trust in Social Context
The concept of trust is usually applied in the context of social relationships between social
agents; it can be defined as a social construct rooted in the relationships between social actors
(groups or individuals). Trust can also be viewed as a subjective, unidirectional relation between
social agents: an assessment by one agent of whether another agent or group will perform a
particular action with a certain level of probability [17]. Simply put, trust is attributed to
relationships between people; it is subjective and dynamic, and it can evolve with time,
experience and the environment. Uslaner describes trust as "the chicken soup of social life"
[18]: it often works mysteriously, and while we tend to develop trust only with people we know,
the benefits of trust mostly come from trusting strangers. Consider, for example, a service
provider and its customers, where a customer does not know the service provider, yet
customers trust the provider with their personal and sometimes banking details. Uslaner also
states that countries with more trusting people (those who volunteer, are more tolerant of others
or give to charity) have better functioning governments, more open markets and less corruption
[19]. Alternatively, studies such as 'Trust building via risk taking: A cross-societal experiment'
suggest that trust can also be built through risk taking. The study observed Japanese and US
participants and found that American participants took more risks, indicating they were more
inclined toward risk taking and trust building [20].
2.2. Trust and Technology
Arguably, one similarity between trust in the context of social relations and trust between humans
and technology is that humans can develop trust in technology they have found comfortable to
use; a technology can make them feel safe due to its functionality, reputation or perceived
credibility. However, trust in technologies takes time to establish: users mostly trust technology
they have found to be reliable over time, and only the individual has actual perceptions of how
much they trust the technology. Trust is similarly described in the context of human-robot
interaction: humans must trust that a robotic teammate will protect the interests and welfare of
every other individual on the team. For users to gain the advantages and benefits of robotic
teammates, they must be willing to trust and accept robot-produced information and follow its
suggestions [21]. Much as with trust in social agents and companies, for one to benefit from
robots, trust must first be established. In addition, it is interesting to note that levels of trust
change over the trials of an experiment, based upon the reliability of the automation. This was
observed in pilots who constantly used automation and were found to trust automation more
often than students did [22]. Hence, initial views can differ from those observed later. When
measuring trust in human-robot collaboration, studies utilize a performance model and draw on
different research areas, such as the psychology of team performance, unmanned systems,
mixed-initiative systems and war-fighting behavior, which is adapted to identify how much
humans trust robots' decisions [23]. However, it is important to consider that trust between
humans and autonomous agents can differ from trust between humans, because autonomous
agents are not human, and trust in autonomous machines is largely about trusting that the
machine will perform as intended [24]. Studies such as [25] also observe that for technologies
to be adopted by new users, the technologies must first meet the initial level of trust users
require before users can gain their benefits. Initial trust is observed to be gained the same way
overall trust is gained, through social influence, and initial trust and overall trust are positively
correlated with one another [26].
3. TRUST MODEL
In this study, the model of Corritore et al. [27] is used to develop the proposed questions and the
basis of the approach, due to the model's continued relevance in representing how trust is
formed [27]. The model also supports the literature above on how trust in technologies can be
formed from a user's perception of certain factors. The model can be seen in Figure 1, with three
perceptual factors that impact trust: perception of credibility, ease of use and risk. As shown in
Figure 1, the model identifies two categories of factors that contribute to building an individual's
trust in a particular technology: perceived factors and external factors. The external factors of a
technology affect the perceived factors a user has about it. Examples of external factors include
the experience a user has with the technology, the device's portability and the control the user
has when interacting with the technology. The external factors comprise the physical and
psychological factors surrounding a specific technology. Perceived factors fall into the following
three categories: ease of use, credibility and potential risk, described as follows.
• Perception of ease of use reflects the degree to which a person believes that using a
technology would be free of effort [28]. Ease of use can be separated into two categories:
how easy the technology is to learn and how easy it is to use.
• Perception of credibility comprises the following dimensions: believability, integrity,
reputability, vulnerability, advantage and hostility [29].
• Perception of risk can be viewed as how users perceive risk when the security of their
devices for protecting their personal information is not verified [30].
As illustrated in the model of Figure 1, the relationships between the model's elements run from
external factors to perceived factors, from perceived factors to other perceived factors, and from
perceived factors to trust. From this, it can be inferred that the perceived ease of use of a
technology, the user's perception of the technology's credibility and the associated perceived
risk together contribute to determining how far users will trust a device with sensitive data.
Based on Figure 1, it can further be deduced that the level of trust a user has in a technology is
directly linked to how credible the device is, and to the potential risk of using it, in the eyes of
the user. Based on these factors, a proxy measure of trust can be utilised to evaluate trust via
questions derived from the model presented in Figure 1.
FIGURE 1: Trust Model.
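To make the proxy measure concrete, the sketch below shows one way questionnaire responses
could be collapsed into a per-participant trust score. This is a minimal illustration under stated
assumptions, not the instrument used in the study: the item keys, the equal weighting of items
and the reverse-scoring rule are all hypothetical.

    # Minimal sketch of a per-participant proxy trust score from 1-5 Likert
    # answers. Item keys, equal weighting and the reverse-scoring rule are
    # illustrative assumptions, not the authors' instrument.

    # Items where agreement signals distrust (the negatively worded items);
    # these are reverse-scored so that higher always means more trust.
    NEGATIVE_ITEMS = {"vulnerable", "cautious", "risky"}

    def proxy_trust(responses: dict[str, int]) -> float:
        """Collapse 1-5 Likert answers into a single 1-5 proxy trust score."""
        scores = [(6 - v) if item in NEGATIVE_ITEMS else v
                  for item, v in responses.items()]
        return sum(scores) / len(scores)

    # One hypothetical participant's answers to the thirteen items.
    answers = {"believable": 4, "integrity": 4, "reputable": 5, "respected": 4,
               "expected": 3, "predictable": 3, "easy_to_learn": 5,
               "easy_to_use": 5, "vulnerable": 2, "cautious": 3, "risky": 2,
               "no_advantage": 4, "no_malicious_use": 3}
    print(round(proxy_trust(answers), 2))  # -> 3.92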
4. METHODOLOGY
For the experiment, a between-groups design was used rather than a repeated measures
design. This means that participants were separated into three different groups, with each group
testing only one technology; a repeated measures design would have involved each participant
using all three technologies. This was done primarily to improve the accuracy of the results, as
participants would not be influenced by the answers they had given when using the other
technologies. The dependent variable of this study is the measurement of trust, obtained via the
questionnaire participants filled in after using the technology; the independent variable is the
technology participants interacted with. Three different technologies were used: an Amazon
Echo Dot smart speaker, a Sony Xperia smart phone running Android, and a standard university
computer running the Windows 10 operating system, each used by a separate group.
Participants were first briefed that they would individually fill out a sign-up sheet emulating a
sign-up process for a website, using the technology they were assigned, and would have to give
personal information such as their name and email address and set a password. They were
then given a consent form and took part in the study. Both the computer and smart phone
sign-up sheets were designed to be as plain as possible, on a white background with only
answer boxes and labels, so as not to influence participants' perceptions with design features
such as logos, which could affect how credible some users deem the technology to be. For the
smart speaker, a chatbot from Bot Libre was utilized to ask for the same fields as the
computer/smart phone version, meaning the smart speaker itself was not actually asking the
questions, though it appeared to be. This is known as the 'Wizard of Oz' technique, and it was
done to provide exactly the same questions across all three groups.
Eighteen participants took part in this study, of whom 11 were male and 7 female. Participants'
ages ranged from 20 to 60, and all participants were randomly assigned one of the three
technologies (smart speaker, smart phone or computer). After using their assigned technology
to give sensitive information via the sign-up sheet, participants were asked to fill out a
questionnaire whose questions are based on the trust model of Figure 1, to obtain a proxy
measure of trust as described in the following section.
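As a small reproducibility aid, the sketch below illustrates the kind of balanced random
assignment described above; the seed and participant IDs are hypothetical, not from the study.

    import random

    # Illustrative balanced assignment of 18 participants to the three
    # technology groups (6 per group); seed and IDs are hypothetical.
    TECHNOLOGIES = ["smart speaker", "computer", "smart phone"]
    participants = [f"P{i:02d}" for i in range(1, 19)]

    rng = random.Random(2020)  # fixed seed so the split can be reproduced
    rng.shuffle(participants)
    groups = {tech: sorted(participants[i * 6:(i + 1) * 6])
              for i, tech in enumerate(TECHNOLOGIES)}

    for tech, members in groups.items():
        print(f"{tech}: {', '.join(members)}")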
5. RESULTS ANALYSIS AND DISCUSSION
5.1 Result Analysis
To analyze the results, the data (the users' responses to the survey) were examined using a
rank-based nonparametric test, the Kruskal-Wallis H test, to determine whether there was a
statistically significant difference in users' trust levels across the three technology groups
(smart speaker, computer and smart phone).
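As an illustration of this step, a test of this kind can be run with SciPy's kruskal function; the
response vectors below are placeholders, not the study's raw data.

    from scipy.stats import kruskal

    # Hypothetical 1-5 Likert responses to a single questionnaire item,
    # one vector per technology group (6 participants each).
    speaker     = [3, 4, 3, 2, 4, 3]
    computer    = [4, 5, 4, 4, 5, 4]
    smart_phone = [3, 3, 2, 4, 3, 3]

    # kruskal returns the H statistic and its p-value; the difference is
    # significant at alpha = 0.05 if p < 0.05.
    h_stat, p_value = kruskal(speaker, computer, smart_phone)
    print(f"H = {h_stat:.3f}, p = {p_value:.3f}")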
The results of the analysis can be seen in Table 1, which shows the mean rank, standard
deviation, Kruskal-Wallis H value, asymptotic significance (p-value) and number of participants
for each question participants were asked. Each question was rated on a scale of 1 to 5, with 1
being 'strongly disagree' and 5 'strongly agree'. As mentioned earlier, 18 participants took part
in the experiment and were divided by technology, so each technology had responses from 6
different participants. The Kruskal-Wallis H statistic is approximately chi-squared distributed,
which is used to determine the p-value and verify whether the differences are statistically
significant. The mean rank indicates which group gave higher scores. A higher score is better
for all questions except those asking whether participants felt vulnerable or cautious or thought
it risky to use the technology, in which case a lower score is better.
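For reference, the H value reported in Table 1 is the standard Kruskal-Wallis statistic computed
from the group rank sums (stated here without the tie correction that statistical packages
typically apply):

    H = [12 / (N(N + 1))] * Σ_{i=1..k} (R_i² / n_i) − 3(N + 1)

where N = 18 is the total number of responses, k = 3 the number of groups, n_i = 6 the size of
group i and R_i the sum of ranks in group i. Under the null hypothesis, H approximately follows a
chi-squared distribution with k − 1 = 2 degrees of freedom, which is how the p-values below are
obtained.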
The results show that when asked whether "The technology's information required is
believable?", there was no statistically significant difference between the technologies, χ²(2) =
2.807, p = 0.246. In terms of mean rank, the computer was highest with 12.08, the speaker
second with 8.58 and the smart phone lowest at 7.83.
Question                                                      Speaker   Computer   Smart phone   Std. Dev.   K-W H   Asymp. Sig.
The technology's information required is believable?            8.58      12.08        7.83        0.6183    2.807      0.246
The technology has integrity?                                  10.00      10.17        8.33        1.0226    0.506      0.776
The technology is reputable?                                   10.33       8.83        9.33        1.1100    0.273      0.873
The technology is respected?                                    8.33      10.08       10.08        0.9376    0.505      0.777
The technology was what I expected?                             8.83      10.83        8.83        0.6077    0.711      0.701
The technology was predictable?                                 7.00      12.00        9.50        1.0416    4.229      0.121
Learning to use the technology was easy?                        7.75      12.50        8.25        0.6077    4.192      0.123
I found the technology easy to use?                             7.58      12.50        8.92        0.6978    3.490      0.175
I felt vulnerable using the technology?                        13.33       8.50        6.67        1.5424    5.395      0.067
I feel like I must be cautious using the technology?            9.33       9.42        9.75        1.4642    0.022      0.989
It is risky to use the technology?                             10.58       7.58       10.33        1.3921    1.236      0.539
I believe the technology won't take advantage of me?            6.08      10.67       11.75        1.3492    3.983      0.137
I believe the technology will not use my data maliciously?      5.83      10.67       12.00        1.3394    4.684      0.096

TABLE 1: Result Summary. Entries under Speaker, Computer and Smart phone are mean ranks
(n = 6 participants per technology); the standard deviation, Kruskal-Wallis H and asymptotic
significance are reported once per question.
The results show that when asked whether "The technology has integrity?", there was no
statistically significant difference between the technologies, χ²(2) = 0.506, p = 0.776. In terms of
mean rank, the computer was highest with 10.17, the speaker second with 10.00 and the smart
phone lowest at 8.33.

The results show that when asked whether "The technology is reputable?", there was no
statistically significant difference between the technologies, χ²(2) = 0.273, p = 0.873. In terms of
mean rank, the speaker was highest with 10.33, the smart phone second with 9.33 and the
computer lowest at 8.83.

The results show that when asked whether "The technology is respected?", there was no
statistically significant difference between the technologies, χ²(2) = 0.505, p = 0.777. In terms of
mean rank, the computer and smart phone were tied for the highest at 10.08 and the speaker
was lowest at 8.33.

The results show that when asked whether "The technology was what I expected?", there was
no statistically significant difference between the technologies, χ²(2) = 0.711, p = 0.701. In terms
of mean rank, the computer was highest with 10.83, with the smart phone and speaker tied for
the lowest at 8.83.

The results show that when asked whether "The technology was predictable?", there was no
statistically significant difference between the technologies, χ²(2) = 4.229, p = 0.121. In terms of
mean rank, the computer was highest with 12.00, the smart phone second with 9.50 and the
speaker lowest at 7.00.

The results show that when asked whether "Learning to use the technology was easy?", there
was no statistically significant difference between the technologies, χ²(2) = 4.192, p = 0.123. In
terms of mean rank, the computer was highest with 12.50, the smart phone second with 8.25
and the speaker lowest at 7.75.

The results show that when asked whether "I found the technology easy to use?", there was no
statistically significant difference between the technologies, χ²(2) = 3.490, p = 0.175. In terms of
mean rank, the computer was highest with 12.50, the smart phone second with 8.92 and the
speaker lowest at 7.58.

The results show that when asked whether "I felt vulnerable using the technology?", there was
no statistically significant difference between the technologies, χ²(2) = 5.395, p = 0.067. In terms
of mean rank, the speaker was highest with 13.33, the computer second with 8.50 and the smart
phone lowest at 6.67.

The results show that when asked whether "I feel like I must be cautious using the technology?",
there was no statistically significant difference between the technologies, χ²(2) = 0.022, p =
0.989. In terms of mean rank, the smart phone was highest with 9.75, the computer second with
9.42 and the speaker lowest at 9.33.

The results show that when asked whether "It is risky to use the technology?", there was no
statistically significant difference between the technologies, χ²(2) = 1.236, p = 0.539. In terms of
mean rank, the speaker was highest with 10.58, the smart phone second with 10.33 and the
computer lowest at 7.58.

The results show that when asked whether "I believe the technology won't take advantage of
me?", there was no statistically significant difference between the technologies, χ²(2) = 3.983,
p = 0.137. In terms of mean rank, the smart phone was highest with 11.75, the computer second
with 10.67 and the speaker lowest at 6.08.

The results show that when asked whether "I believe the technology will not use my data
maliciously?", there was no statistically significant difference between the technologies, χ²(2) =
4.684, p = 0.096. In terms of mean rank, the smart phone was highest with 12.00, the computer
second with 10.67 and the speaker lowest at 5.83.
6. CONCLUSION
In this study, the model of Corritore et al. (2003) [27] is used to formulate a proxy measure of
trust in three different technologies: smart speakers, smart phones and computers. The purpose
was to understand which technology most people were likely to trust. The Kruskal-Wallis H test
was used to examine the collected data and determine whether there was a statistically
significant difference in the level of trust users have in the three technologies. The study found
that, when deciding which technology to trust with their sensitive data, participants preferred to
use a keyboard on a computer or a touch screen on a smart phone over using DVI. Despite this,
many participants considered DVI extremely easy to use, and they also felt it was one of the
most reputable and widely accepted of the three technologies. In terms of practical implications,
the study suggests that those developing voice-based technologies should consider that people
may have security concerns about the devices. On the other hand, the long-term effects are not
yet known: smart speakers and other voice-based technologies (such as voice-based assistants
in smart phones) are still relatively new, and in a few years perceptions of voice technologies
may have changed. Possible future research directions from this study include a long-term
study observing how users' levels of trust change and adapt over a much longer period.
Alternatively, with DVI technologies being considered as a means of authentication for banking
and healthcare, another direction is to understand how levels of trust in voice-based technology
affect how users perceive and use voice-biometric authentication, in comparison with traditional
means of authentication.
7. REFERENCES
[1] Schneier, B. (2000). Secrets and Lies: Digital Security in a Networked World. John Wiley &
Sons.
[2] Henshel, D., Cains, M. G., Hoffman, B., & Kelley, T. (2015). Trust as a human factor in
holistic cyber security risk assessment. Procedia Manufacturing, 3, 1117-1124.
[3] Newman, N. (2018). The future of voice and the implications for news.
[4] Cohen, P. R., & Oviatt, S. L. (1995). The role of voice input for human-machine
communication. Proceedings of the National Academy of Sciences, 92(22), 9921-9927.
[5] Ratnasingham, P. (1998). The importance of trust in electronic commerce. Internet Research,
313-321.
[6] Hardré, P. L. (2016). When, how, and why do we trust technology too much? In Emotions,
Technology, and Behaviors (pp. 85-106). Academic Press.
[7] Sasse, M. A. (2005). Usability and trust in information systems. Edward Elgar.
[8] Kiran, A. H., & Verbeek, P. P. (2010). Trusting our selves to technology. Knowledge,
Technology & Policy, 23(3-4), 409-427.
[9] Sherchan, W., Nepal, S., & Paris, C. (2013). A survey of trust in social networks. ACM
Computing Surveys (CSUR), 45(4), 1-33.
[10] Perez, S. "Voice-enabled smart speakers to reach 55% of U.S. households by 2022, says
report". Internet: https://techcrunch.com/2017/11/08/voice-enabled-smart-speakers-to-reach-55-of-u-s-households-by-2022-says-report/,
Nov. 8, 2017 [accessed Nov. 29, 2019].
[11] Coeckelbergh, M. (2012). Can we trust robots? Ethics and Information Technology, 14(1),
53-60.
[12] Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles
harvested for Cambridge Analytica in major data breach. The Guardian, 17, 22.
[13] Zheng, S., Apthorpe, N., Chetty, M., & Feamster, N. (2018). User perceptions of smart home
IoT privacy. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-20.
[14] Lessard, L. (2001). The Effects of operator interface on head position and workload: direct
voice input versus manual input for the CH-146 Griffon helicopter (Doctoral dissertation,
Carleton University).
[15] Williams, M., Nurse, J. R., & Creese, S. (2017, August). Privacy is the boring bit: user
perceptions and behaviour in the Internet-of-Things. In 2017 15th Annual Conference on
Privacy, Security and Trust (PST) (pp. 181-18109). IEEE.
[16] Chin, E., Felt, A. P., Sekar, V., & Wagner, D. (2012, July). Measuring user confidence in
smartphone security and privacy. In Proceedings of the eighth symposium on usable privacy
and security (pp. 1-16).
[17] Usman, A. B., & Gutierrez, J. (2019). DATM: a dynamic attribute trust model for efficient
collaborative routing. Annals of Operations Research, 277(2), 293-310.
[18] Uslaner, E. (1999). The Moral Foundations of Trust. University of Maryland, College Park,
MD.
[19] Uslaner, E. M. (2008). Trust as a moral value. The handbook of social capital, 101-121.
[20] Cook, K. S., Yamagishi, T., Cheshire, C., Cooper, R., Matsuda, M., & Mashima, R. (2005).
Trust building via risk taking: A cross-societal experiment. Social psychology quarterly,
68(2), 121-142.
[21] Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., &
Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot
interaction. Human factors, 53(5), 517-527.
[22] Gold, C., Körber, M., Hohenberger, C., Lechner, D., & Bengler, K. (2015). Trust in
automation–Before and after the experience of take-over scenarios in a highly automated
vehicle. Procedia Manufacturing, 3, 3025-3032.
[23] Freedy, A., DeVisser, E., Weltman, G., & Coeyman, N. (2007, May). Measurement of trust in
human-robot collaboration. In 2007 International Symposium on Collaborative Technologies
and Systems (pp. 106-114). IEEE.
[24] Atkinson, D. J., & Clark, M. H. (2013, March). Autonomous agents and human interpersonal
trust: Can we engineer a human-machine social interface for trust? In 2013 AAAI Spring
Symposium Series.
[25] Gefen, D. (2004). What makes an ERP implementation relationship worthwhile: Linking trust
mechanisms and ERP usefulness. Journal of Management Information Systems, 21(1), 263-
288.
[26] Grabner-Kräuter, S., & Kaluscha, E. A. (2003). Empirical research in on-line trust: a review
and critical assessment. International journal of human-computer studies, 58(6), 783-812.
[27] Corritore, C. L., Kracher, B., & Wiedenbeck, S. (2003). On-line trust: concepts, evolving
themes, a model. International journal of human-computer studies, 58(6), 737-758.
[28] He, Y., Chen, Q., & Kitkuakul, S. (2018). Regulatory focus and technology acceptance:
Perceived ease of use and usefulness as efficacy. Cogent Business & Management, 5(1),
1459006.
[29] Tseng, S., & Fogg, B. J. (1999). Credibility and computing technology. Communications of
the ACM, 42(5), 39-44.
[30] Schnall, R., Higgins, T., Brown, W., Carballo-Dieguez, A., & Bakken, S. (2015). Trust,
perceived risk, perceived ease of use and perceived usefulness as factors related to
mHealth technology use. Studies in health technology and informatics, 216, 467.

More Related Content

PDF
The Impact of Customer Knowledge on the Security of E-Banking
PDF
Mobile Device Users’ Susceptibility To Phishing Attacks
DOCX
A survey study of title security and privacy in mobile systems
PDF
Safely Scaling Virtual Private Network for a Major Telecom Company during A P...
PDF
Efficient Data Security for Mobile Instant Messenger
PDF
Efficient Data Security for Mobile Instant Messenger
PDF
A Bring Your Own Device Risk Assessment Model
PDF
A Smart Receptionist Implementing Facial Recognition and Voice Interaction
The Impact of Customer Knowledge on the Security of E-Banking
Mobile Device Users’ Susceptibility To Phishing Attacks
A survey study of title security and privacy in mobile systems
Safely Scaling Virtual Private Network for a Major Telecom Company during A P...
Efficient Data Security for Mobile Instant Messenger
Efficient Data Security for Mobile Instant Messenger
A Bring Your Own Device Risk Assessment Model
A Smart Receptionist Implementing Facial Recognition and Voice Interaction

What's hot (20)

PDF
Steam++ An Extensible End-to-end Framework for Developing IoT Data Processing...
PDF
ENCRYPTION BASED WATERMARKING TECHNIQUE FOR SECURITY OF MEDICAL IMAGE
PDF
A survey on security and privacy issues in IoV
PDF
FEATURE EXTRACTION METHODS FOR IRIS RECOGNITION SYSTEM: A SURVEY
PDF
Mobile and SIM data - quantifying the risk - 2011
PDF
Exploring Secure Computing for the Internet of Things, Internet of Everything...
PDF
DOES DIGITAL NATIVE STATUS IMPACT END-USER ANTIVIRUS USAGE?
PDF
AN EFFICIENT SEMANTIC DATA ALIGNMENT BASED FCM TO INFER USER SEARCH GOALS USI...
PDF
Cyber security rule of use internet safely
PDF
Battlefield Cyberspace: Exploitation of Hyperconnectivity and Internet of Things
PPTX
MOBILE DEVICES: THE CASE FOR CYBER SECURITY HARDENED SYSTEMS AND METHODS TO ...
PDF
PDF
User privacy and data trustworthiness in mobile crowd sensing
PDF
Design and Development of Secure Electronic Voting System Using Radio Frequen...
PDF
October 2021: Top 10 Read Articles in Network Security and Its Applications
PDF
Security techniques for intelligent spam sensing and anomaly detection in onl...
PDF
Automatic Detection of Social Engineering Attacks Using Dialog
PDF
Computer Forensic: A Reactive Strategy for Fighting Computer Crime
PDF
Cyber security: challenges for society- literature review
DOCX
Cybersecurity Business Risk, Literature Review
Steam++ An Extensible End-to-end Framework for Developing IoT Data Processing...
ENCRYPTION BASED WATERMARKING TECHNIQUE FOR SECURITY OF MEDICAL IMAGE
A survey on security and privacy issues in IoV
FEATURE EXTRACTION METHODS FOR IRIS RECOGNITION SYSTEM: A SURVEY
Mobile and SIM data - quantifying the risk - 2011
Exploring Secure Computing for the Internet of Things, Internet of Everything...
DOES DIGITAL NATIVE STATUS IMPACT END-USER ANTIVIRUS USAGE?
AN EFFICIENT SEMANTIC DATA ALIGNMENT BASED FCM TO INFER USER SEARCH GOALS USI...
Cyber security rule of use internet safely
Battlefield Cyberspace: Exploitation of Hyperconnectivity and Internet of Things
MOBILE DEVICES: THE CASE FOR CYBER SECURITY HARDENED SYSTEMS AND METHODS TO ...
User privacy and data trustworthiness in mobile crowd sensing
Design and Development of Secure Electronic Voting System Using Radio Frequen...
October 2021: Top 10 Read Articles in Network Security and Its Applications
Security techniques for intelligent spam sensing and anomaly detection in onl...
Automatic Detection of Social Engineering Attacks Using Dialog
Computer Forensic: A Reactive Strategy for Fighting Computer Crime
Cyber security: challenges for society- literature review
Cybersecurity Business Risk, Literature Review
Ad

Similar to Trusting Smart Speakers: Understanding the Different Levels of Trust between Technologies (20)

PDF
Paper id 25201417
PDF
Enhancing cryptographic protection, authentication, and authorization in cell...
DOCX
Proceedings on Privacy Enhancing Technologies ; 2016 (3)96–11
PDF
AN EFFECTIVE METHOD FOR INFORMATION SECURITY AWARENESS RAISING INITIATIVES
PDF
Inria - Cybersecurity: current challenges and Inria’s research directions
PDF
A Systematic Literature Review On The Cyber Security
PDF
L010517180
PDF
Malware threat analysis techniques and approaches for IoT applications: a review
PDF
E017242431
PDF
The Cyberspace and Intensification of Privacy Invasion
PDF
SoK_Cryptographic_Confidentiality_of_Data_on_Mobil.pdf
PDF
Security and Privacy of Big Data in Mobile Devices
DOCX
271 Information Governance for Mobile Devices .docx
PDF
Survey On Mobile User’s Data Privacy Threats And Defence Mechanism
PDF
Disruptive Security Technologies With Mobile Code And Peertopeer Networks R R...
PDF
Secure Modern Healthcare System Based on Internet of Things and Secret Sharin...
PDF
A Novel Security Approach for Communication using IOT
PPT
DRC PMC IOTgghhhhhhhhhhhhhhhhhhhhhhhhbhh
PDF
ANDROID & FIREBASE BASED ANTI THEFT MOBILE APPLICATION
PDF
An Empirical Study on Information Security
Paper id 25201417
Enhancing cryptographic protection, authentication, and authorization in cell...
Proceedings on Privacy Enhancing Technologies ; 2016 (3)96–11
AN EFFECTIVE METHOD FOR INFORMATION SECURITY AWARENESS RAISING INITIATIVES
Inria - Cybersecurity: current challenges and Inria’s research directions
A Systematic Literature Review On The Cyber Security
L010517180
Malware threat analysis techniques and approaches for IoT applications: a review
E017242431
The Cyberspace and Intensification of Privacy Invasion
SoK_Cryptographic_Confidentiality_of_Data_on_Mobil.pdf
Security and Privacy of Big Data in Mobile Devices
271 Information Governance for Mobile Devices .docx
Survey On Mobile User’s Data Privacy Threats And Defence Mechanism
Disruptive Security Technologies With Mobile Code And Peertopeer Networks R R...
Secure Modern Healthcare System Based on Internet of Things and Secret Sharin...
A Novel Security Approach for Communication using IOT
DRC PMC IOTgghhhhhhhhhhhhhhhhhhhhhhhhbhh
ANDROID & FIREBASE BASED ANTI THEFT MOBILE APPLICATION
An Empirical Study on Information Security
Ad

Recently uploaded (20)

PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
Encapsulation_ Review paper, used for researhc scholars
PPTX
sap open course for s4hana steps from ECC to s4
PDF
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
PPT
Teaching material agriculture food technology
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PPTX
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
PPTX
Big Data Technologies - Introduction.pptx
PDF
Approach and Philosophy of On baking technology
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PDF
KodekX | Application Modernization Development
PDF
Encapsulation theory and applications.pdf
PDF
Network Security Unit 5.pdf for BCA BBA.
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
Spectral efficient network and resource selection model in 5G networks
PPTX
Programs and apps: productivity, graphics, security and other tools
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
20250228 LYD VKU AI Blended-Learning.pptx
Encapsulation_ Review paper, used for researhc scholars
sap open course for s4hana steps from ECC to s4
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
Teaching material agriculture food technology
Mobile App Security Testing_ A Comprehensive Guide.pdf
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
Big Data Technologies - Introduction.pptx
Approach and Philosophy of On baking technology
Agricultural_Statistics_at_a_Glance_2022_0.pdf
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Dropbox Q2 2025 Financial Results & Investor Presentation
KodekX | Application Modernization Development
Encapsulation theory and applications.pdf
Network Security Unit 5.pdf for BCA BBA.
Building Integrated photovoltaic BIPV_UPV.pdf
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Spectral efficient network and resource selection model in 5G networks
Programs and apps: productivity, graphics, security and other tools
“AI and Expert System Decision Support & Business Intelligence Systems”

Trusting Smart Speakers: Understanding the Different Levels of Trust between Technologies

  • 1. Alec Wells, Aminu Bello Usman & Justin McKeown International Journal of Computer Science and Security (IJCSS), Volume (14) : Issue (2) : 2020 72 Trusting Smart Speakers: Understanding the Different Levels of Trust between Technologies Alec Wells alec.wells@yorksj.ac.uk Department of Computer Science York St John University York, YO31 8JY, United Kingdom Aminu Bello Usman a.usman@yorksj.ac.uk Department of Computer Science York St John University York, YO31 8JY, United Kingdom Justin McKeown j.mckeown@yorksj.ac.uk Department of Computer Science York St John University York, YO31 8JY, United Kingdom Abstract The growing usage of smart speakers raises many privacy and trust concerns compared to other technologies such as smart phones and computers. In this study, a proxy measure of trust is used to gauge users’ opinions on three different technologies based on an empirical study, and to understand which technology most people are most likely to trust. The collected data were analyzed using the Kruskal-Wallis H test to determine the statistical differences between the users’ trust level of the three technologies: smart speaker, computer and smart phone. The findings of the study revealed that despite the wide acceptance, ease of use and reputation of smart speakers, people find it difficult to trust smart speakers with their sensitive information via the Direct Voice Input (DVI) and would prefer to use a keyboard or touchscreen offered by computers and smart phones. Findings from this study can inform future work on users’ trust in technology based on perceived ease of use, reputation, perceived credibility and risk of using technologies via DVI. Keywords: Direct Voice Input, Risk, Security, Technology and Trust. 1. INTRODUCTION As new technologies are constantly emerging, it is important to understand how the user perceives such technologies, especially when considering possible security concerns. For several years, studies in information security and practice have posed, that many security incidents and breaches are caused by human factors, rather than technical failures [1]. Thus, human factors on the part of individual or an employee in an organization can be an attack vector that can be exploited. The assessment of human factors as an attack vector is a complex multi-component and multi-level problem involving characteristics of hardware, software, user interface, and how humans’ issue instructions to technologies [2]. The use of DVI to issue instructions to technologies is growing rapidly and it appeared to be part of the future of human-computer interaction [3]. DVI has provided what may be a more 'natural' mode of control for communication between human and technology that is more convenient and faster compared with the conventional input means; such as the keyboards and touch screens [4]. Although, DVI enables novel and convenient forms of interaction with technologies, there are concerns about the issues of trust, security and privacy on using DVI [3].
  • 2. Alec Wells, Aminu Bello Usman & Justin McKeown International Journal of Computer Science and Security (IJCSS), Volume (14) : Issue (2) : 2020 73 At the heart of human-centered security assessments, trust (soft security property) [5] is considered an important social control security mechanism which can be used to evaluate the security aptitude of human agents. The trust mechanism supplements traditional technical hard security mechanisms (e.g., authentication and access control), thus enables a wider view of social control mechanisms in addition to existing technical-based security approaches towards an overall cyber security system. Trust is seen as a dynamic concept that can constantly change in humans due to its many different facets and dimensions. Studies on trust in e-commerce have focused on users’ responses to interface design and the complexity rather than the deeper, emotion-charged dynamic of trust [6] whereas other studies on trust and digital system focused on evaluating trust based on usability, perceived privacy, and content requirements [7]. In relation to technologies, trust can be considered as the degree to which a user believes in the veracity or effectiveness of a technology to function expectedly either based on the credibility, reputability of the technology, or simply based on the users’ experience and perceptions [8]. Thus, in this context, trust can be viewed as the confidence a user can have about the device to behave in an expected manner [9]. Technology is increasingly becoming a part of human lives and is considered an extension of human functions [8]. As a society today, we treat digital technology tools and algorithms with extensive trust. The extent and the degree of how we trust technologies with sensitive data and our lives will soon be a security concern, if it is not already a concern. In other words, we constantly share our sensitive data, delegate responsibilities to technologies and apparently, we trust them. This seems the case particularly in the modern world where new technologies are often being developed and quickly adopted by homes and workplaces. For example, the adoption of smart speakers is already in 13% of United States households and in the United Kingdom, about 10% of households have already adopted Smart Speakers. This was also projected to grow to 55% in the United States and 48% in the United Kingdom by 2022 [10]. Subsequently, based on the increase of people adoption and reliance on these technologies, it can be understood that the more we adopt these technologies, the more urgent becomes the issue of security, trust and associated risk [11]. However, the details of security models and algorithms (e.g. the encryption or the security cloud architecture) used in these technologies may not always be black and white, or a concept that can easily be understood, especially for a non-technical user. Thus, the usable security measure of those devices in the eyes of a non-technical user can come down to – a very simple binary condition; trust or no trust. The study is interested in understanding whether users have varying degrees of trust depending on the technology’s input method, DVI, keyboard or touch screen, with the hope to improve understanding of security concerns when adopting DVI in technologies. It is also important to understand whether the users are more likely to give away sensitive information because of the trust they have with a technology and the input method used to access the technology. 
The study considers three different technologies; smart speakers, smart phones and computers and compares the participants’ opinions about trust between each of the technologies. This will hopefully identify if there are any security concerns with adopting voice-based technologies such as smart speakers. The contribution of the paper can be summarized as follows.  Discuss the current state of trust and technology in relation to smart speakers, smart phones and computers.  Investigate if there are any differences in how people perceive the technologies and if they would be more likely to trust them with sensitive information.  Evaluate if people have more or less trust with certain technologies depending on the input method. The paper is structured as follows: Section 2 looks at the related work, with the concept of trust in social context as well as trust and technology with emphasis on the three technologies (smart speakers, smart phones and computers). Section 3 presents the trust model. Section 4 discusses the research methodology and the description of the precipitants involved in the study. Section 5
  • 3. Alec Wells, Aminu Bello Usman & Justin McKeown International Journal of Computer Science and Security (IJCSS), Volume (14) : Issue (2) : 2020 74 presents the results of the study and discussion. In Section 6, a conclusion is presented along with potential future work. 2. RELATED WORK Within the modern world, as more of our lives become dependent on digital technologies, it is an ever-growing concern for people to keep that part of our lives secure and private. This can be true from the perspective of average users who want to keep private or sensitive information away from prying eyes, or from those with malicious intent. This growing awareness for keeping data secure has only become more apparent as new information and scandals come to light. One of the recent examples of this might come from the Facebook-Cambridge Analytica data scandal. This was a huge breach in security for many users, in which their data were collected and used to influence political adverts to have the most impact on those specific users [12]. One possible explanation why user data and privacy can easily be breached; leak or be compromised is due to how we use and trust those technologies. As new technologies are invented, so are new ways of interacting with technologies, like DVI, and this subsequently results in rendering new challenges about how we can trust and use the technologies. For example, previous studies such as [13] investigated the user perceptions of smart home devices, perceptions of smart home privacy risks and action required to regulate IoT devices and users data. The study found that users often prioritized convenience and connectedness over privacy concerns, and mostly users’ assumptions about privacy protections are contingent on their trust of IoT device manufacturers. In the same vein, [14] investigated the relative effects of Direct Voice Input (DVI) on head position in the CH-146 Griffon Helicopter in comparison with the manual input system for flying and non- flying pilots. The study suggest that users find DVI inputs to be much easier and reliable to use compared to manual inputs. A broader perspective on the study of user perceptions and behavior in the technology (Privacy Paradox) [15] equally suggested that users found IoT devices significantly less private than non-IoT products, though conversely also found users considered them less usable and familiar. Similarly, [16] investigated users’ confidence in smartphones, in regards to privacy and have found that many participants were apprehensive about running financially sensitive tasks on their phones due to fears of theft, accidental clicking as well as misconceptions about network communications and a general mistrust of smart phone applications. 2.1. Trust in Social Context The concept of trust is usually applied in the context of social relationships between social agents, which can be defined as a social construct with natural attributes to relationship between social actors (a group or individual). Trust can also be viewed as being subjective and a unidirectional relation between social agents and how social agent assess another agent or groups to perform a particular action with a certain level of probability [17]. Simply, trust can be attributed to relationships between people and attributes, trust is subjective, dynamic, and it can evolve with time, experience and the environment. Uslaner et al. 
describes trust as “the chicken soup of social life” [18] – it works mysteriously, often, and we develop trust with only people we know, yet the benefits of trust mostly come from when we trust strangers. For example, a service provider and customers, where a customer does not know the service provider, yet the customers can trust the service provider with their personal and sometimes banking details. Uslaner also states that countries with more trusting people, those who would volunteer, are more tolerant of others or give to charity; have better functioning governments, more open markets and less corruption [19]. Alternatively, studies such as ‘Trust building via risk taking: A cross-societal experiment’ suggests that trust can also built through risk taking. The study observed Japanese and US participants to discover that American participants took more risks, indicating they were more inclined to risk taking and trust building [20]. 2.2 Trust and Technology Arguably, one similarity between trust in the context of social relations and trust between humans and technology, is that humans can develop trust with technology that they have found to be comfortable using, a technology can make them feel safe either due to its functionalities,
  • 4. Alec Wells, Aminu Bello Usman & Justin McKeown International Journal of Computer Science and Security (IJCSS), Volume (14) : Issue (2) : 2020 75 reputation or its perceived credibility. However, trust with technologies takes time to establish and mostly, users trust technology that they have found to be more reliable over time and only the individual has actual perceptions about how much they trust the technology. Trust is similarly described in the context of human-robot interaction as humans must trust that a robotic teammate will protect the interests and welfare of every other individual on the team. For users to gain the advantages and benefits of robotic teammates, they must be willing to trust and accept robot- produced information and follow their suggestions [21]. Much like in trust with social agents and companies, for oneself to benefit from robots, trust must first be established. In addition, throughout trials in an experiment it is interesting to note that levels of trust change over time, based upon the reliability of the automation. This was observed in pilots that constantly used automation, who were found to trust automation more often than students [22]. Hence, initial views can be different to those observed later. When measuring trust in human-robot collaboration, studies utilize a performance model and observe different research areas such as psychology behind team performance, unmanned systems, mixed initiative systems and war fighting behavior which is adapted when identifying how much humans trust robots’ decisions [23]. However, it is important to consider that that trust between humans and autonomous agents can be different between trust with humans and other such things due to the fact autonomous agents are not human and trust with autonomous machines largely is about trusting that the machine will perform as intended [24]. However, it is often observed by studies such as [25] that for technologies to be adopted by new users, they must first overcome the initial level of trust users require of the technology, before they can gain the benefits of them. Initial trust is observed as being gained the same way overall trust is gained, through social influence, as it is observed that initial trust and overall trust have a positive correlation to one another [26]. 3. TRUST MODEL In this study, Corritore et al. model [27] is used to develop the proposed questions and the basis of the approach, due to the models continued relevance in representing how trust is formed [27]. The model also supports the literature above, about how trust can be formed with technologies from a user’s perception of factors. The model can be seen in Figure. 1 with three perceptual factors that impact on trust, namely: perception of credibility, ease of use and risk. As shown in Figure. 1, the model identifies two categories that can contribute to building an individual trust about a particular technology; perceived factors and external factors. The external factors of a technology affect the perceived factors a user has about a technology. Some examples of external factors include the experience a user has with the technology, the devices portability and the control the user has in interacting with said technology. The external factors comprise the physical and psychological factors surrounding a specific technology. Perceived factors fall into the following three categories: easiness of use, credibility, and potential risk, described as follows. 
• Perception of ease of use reflects the degree to which a person believes that using a technology would be free of effort [28]. Ease of use can be separated into two categories: how easy the technology is to learn and how easy it is to use.
• Perception of credibility comprises the following dimensions: believability, integrity, reputability, vulnerability, advantage and hostility [29].
• Perception of risk can be viewed as how users perceive risk when the security of a device holding their personal information has not been verified [30].

As illustrated in the model of Figure 1, the relationships between the model's elements run from external factors to perceived factors, from perceived factors to other perceived factors, and from perceived factors to trust. From this, it can be inferred that the perceived ease of use of a technology, the user's perception of its credibility and the associated risk together contribute to determining how far users will trust a device with sensitive data; these constitute the model's perceived factors. Based on Figure 1, it can be deduced that the level of trust a user places in a technology is directly linked to how credible the device is, and to the potential risk of using it, in the eyes of the user. Based on these factors, a proxy measure of trust can be utilised to evaluate trust via questions derived from the presented model.
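To make the proxy measure concrete, here is a minimal sketch (the item names, values and equal-weight aggregation are illustrative assumptions, not the authors' actual instrument) of how 1-5 Likert responses to credibility, ease-of-use and risk items could be collapsed into a single score, with risk items reverse-coded so that higher always means more trust:

```python
# Minimal sketch of a proxy trust score; item names and equal weighting
# are illustrative assumptions, not the paper's actual instrument.

# 1-5 Likert responses from one hypothetical participant.
responses = {
    "believable": 4, "integrity": 4, "reputable": 5,   # credibility items
    "easy_to_learn": 5, "easy_to_use": 4,              # ease-of-use items
    "vulnerable": 2, "cautious": 3, "risky": 2,        # risk items
}

# Risk items are reverse-coded (6 - x) so that a higher value always
# indicates greater trust.
REVERSE_CODED = {"vulnerable", "cautious", "risky"}

def proxy_trust(resp):
    """Average all items after reverse-coding the risk items."""
    scores = [6 - v if k in REVERSE_CODED else v for k, v in resp.items()]
    return sum(scores) / len(scores)

print("Proxy trust score: %.2f / 5" % proxy_trust(responses))
```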
FIGURE 1: Trust Model.

4. METHODOLOGY
For the experiment, a between-groups testing method was used rather than a repeated-measures method: participants were separated into 3 groups, with each group testing only 1 technology, whereas a repeated-measures design would have had every participant use all 3 technologies. This was done primarily to improve the accuracy of the results, as participants would not be influenced by the answers they had given when using the other technologies. The dependent variable of this study is the measure of trust, obtained via the questionnaire participants filled in after using the technology; the independent variable is the technology participants interacted with. Three different technologies were used: an Amazon Echo Dot smart speaker, a Sony Xperia smart phone running Android, and a standard university computer running the Windows 10 operating system. Each technology was used by a separate group. Participants were first briefed that they would individually fill out a sign-up sheet emulating the sign-up process for a website, using the technology they were assigned, and would have to give personal information such as their name and email and set a password. They were then given a consent form and took part in the study. Both the computer and smart phone sign-up sheets were designed to be as plain as possible, with only answer boxes and labels on a white background, so as not to influence participants' perceptions through design features such as logos, which could affect how credible some users deem the technology to be. For the smart speaker, a chatbot from Bot Libre was utilized to ask for the same fields as the computer/smart phone version, meaning the smart speaker itself was not actually asking the questions, though it appeared to be. This is known as the 'wizard of oz' technique, and it ensured the exact same questions were asked across all 3 groups. 18 participants took part in this study, of whom 11 were male and 7 were female. Participants' ages ranged between 20 and 60, and all participants were randomly assigned one of the 3 technologies (smart speaker, smart phone or computer). After using their technology to give sensitive information via the sign-up sheet, participants were asked to fill out a questionnaire whose questions are based on the trust model of Figure 1, to obtain a proxy measure of trust as described in the following section.
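As a small illustration of the between-groups setup (participant IDs are hypothetical; the paper does not describe its randomization procedure), the following sketch performs a balanced random assignment of 18 participants to the 3 technology groups, 6 per group:

```python
import random

TECHNOLOGIES = ["smart speaker", "computer", "smart phone"]
participants = ["P%02d" % i for i in range(1, 19)]  # hypothetical IDs

random.shuffle(participants)  # randomize participant order
# Deal the shuffled list into 3 equal groups of 6.
groups = {tech: participants[i * 6:(i + 1) * 6]
          for i, tech in enumerate(TECHNOLOGIES)}

for tech, members in groups.items():
    print(tech, "->", members)
```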
5. RESULTS ANALYSIS AND DISCUSSION
5.1 Result Analysis
To analyze the results, the data (the users' responses to the survey) were examined using a rank-based nonparametric test, the Kruskal-Wallis H test, to determine whether there were statistically significant differences in users' trust levels across the three technologies (smart speaker, computer and smart phone). The results of the analysis are summarized in Table 1, which shows the mean rank, standard deviation, Kruskal-Wallis score and asymptotic significance for each question participants were asked. Each question was rated on a scale of 1-5, with 1 being 'strongly disagree' and 5 being 'strongly agree'. As mentioned earlier, 18 participants took part in the experiment and were divided by technology, so each technology received responses from 6 participants. The Kruskal-Wallis H statistic is referred to a chi-squared distribution (here with 2 degrees of freedom), from which the p value is obtained to determine whether the result is statistically significant. The mean rank indicates which group scored higher. A higher score is better for all questions except those asking whether participants felt vulnerable or cautious, or whether using the technology was risky, for which a lower score is better.

The results show that when asked whether "The technology's information required is believable?" there was no statistically significant difference between the technologies, χ²(2) = 2.807, p = 0.246. In terms of mean rank, computer was highest at 12.08, speaker second at 8.58 and smart phone lowest at 7.83.

| Question | Speaker | Computer | Smart phone | Std. dev. | Kruskal-Wallis | Asymp. sig. |
|---|---|---|---|---|---|---|
| The technology's information required is believable? | 8.58 | 12.08 | 7.83 | 0.6183 | 2.807 | 0.246 |
| The technology has integrity? | 10.00 | 10.17 | 8.33 | 1.0226 | 0.506 | 0.776 |
| The technology is reputable? | 10.33 | 8.83 | 9.33 | 1.1100 | 0.273 | 0.873 |
| The technology is respected? | 8.33 | 10.08 | 10.08 | 0.9376 | 0.505 | 0.777 |
| The technology was what I expected? | 8.83 | 10.83 | 8.83 | 0.6077 | 0.711 | 0.701 |
| The technology was predictable? | 7.00 | 12.00 | 9.50 | 1.0416 | 4.229 | 0.121 |
| Learning to use the technology was easy? | 7.75 | 12.50 | 8.25 | 0.6077 | 4.192 | 0.123 |
| I found the technology easy to use? | 7.58 | 12.50 | 8.92 | 0.6978 | 3.490 | 0.175 |
| I felt vulnerable using the technology? | 13.33 | 8.50 | 6.67 | 1.5424 | 5.395 | 0.067 |
| I feel like I must be cautious using the technology? | 9.33 | 9.42 | 9.75 | 1.4642 | 0.022 | 0.989 |
| It is risky to use the technology? | 10.58 | 7.58 | 10.33 | 1.3921 | 1.236 | 0.539 |
| I believe the technology won't take advantage of me? | 6.08 | 10.67 | 11.75 | 1.3492 | 3.983 | 0.137 |
| I believe the technology will not use my data maliciously? | 5.83 | 10.67 | 12.00 | 1.3394 | 4.684 | 0.096 |

TABLE 1: Result Summary. Columns give the mean rank per technology (n = 6 participants per group), the standard deviation, the Kruskal-Wallis H statistic and the asymptotic significance for each questionnaire item.
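Before walking through the per-item results, here is a sketch of how such a test is run in practice using SciPy's Kruskal-Wallis implementation. The raw 1-5 ratings below are invented for illustration, since the paper reports only mean ranks, not raw responses:

```python
from scipy.stats import kruskal

# Hypothetical 1-5 Likert ratings for one questionnaire item, six per
# group; the paper reports only mean ranks, so these values are made up.
speaker     = [3, 4, 2, 3, 4, 3]
computer    = [4, 5, 4, 4, 3, 5]
smart_phone = [3, 3, 4, 2, 3, 4]

h_stat, p_value = kruskal(speaker, computer, smart_phone)
print("H(2) = %.3f, p = %.3f" % (h_stat, p_value))
# With k = 3 groups, H is referred to a chi-squared distribution with
# k - 1 = 2 degrees of freedom; p < 0.05 would indicate a statistically
# significant difference between the technologies.
```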
The results show that when asked whether "The technology has integrity?" there was no statistically significant difference between the technologies, χ²(2) = 0.506, p = 0.776. In terms of mean rank, computer was highest at 10.17, speaker second at 10.00 and smart phone lowest at 8.33.

When asked whether "The technology is reputable?" there was no statistically significant difference, χ²(2) = 0.273, p = 0.873. In terms of mean rank, speaker was highest at 10.33, smart phone second at 9.33 and computer lowest at 8.83.

When asked whether "The technology is respected?" there was no statistically significant difference, χ²(2) = 0.505, p = 0.777. In terms of mean rank, computer and smart phone were tied highest at 10.08 and speaker was lowest at 8.33.

When asked whether "The technology was what I expected?" there was no statistically significant difference, χ²(2) = 0.711, p = 0.701. In terms of mean rank, computer was highest at 10.83, with smart phone and speaker tied lowest at 8.83.

When asked whether "The technology was predictable?" there was no statistically significant difference, χ²(2) = 4.229, p = 0.121. In terms of mean rank, computer was highest at 12.00, smart phone second at 9.50 and speaker lowest at 7.00.

When asked whether "Learning to use the technology was easy?" there was no statistically significant difference, χ²(2) = 4.192, p = 0.123. In terms of mean rank, computer was highest at 12.50, smart phone second at 8.25 and speaker lowest at 7.75.

When asked whether "I found the technology easy to use?" there was no statistically significant difference, χ²(2) = 3.490, p = 0.175. In terms of mean rank, computer was highest at 12.50, smart phone second at 8.92 and speaker lowest at 7.58.

When asked whether "I felt vulnerable using the technology?" there was no statistically significant difference, χ²(2) = 5.395, p = 0.067. In terms of mean rank, speaker was highest at 13.33, computer second at 8.50 and smart phone lowest at 6.67.
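As a quick sanity check on this reporting convention, the snippet below recovers the p value just reported for the vulnerability item (χ²(2) = 5.395) from the chi-squared approximation:

```python
from scipy.stats import chi2

# H = 5.395 with df = 2, as reported in Table 1 for the item
# "I felt vulnerable using the technology?".
p = chi2.sf(5.395, df=2)  # survival function = 1 - CDF
print("p = %.3f" % p)     # prints p = 0.067, matching the table
```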
When asked whether "I feel like I must be cautious using the technology?" there was no statistically significant difference, χ²(2) = 0.022, p = 0.989. In terms of mean rank, smart phone was highest at 9.75, computer second at 9.42 and speaker lowest at 9.33.

When asked whether "It is risky to use the technology?" there was no statistically significant difference, χ²(2) = 1.236, p = 0.539. In terms of mean rank, speaker was highest at 10.58, smart phone second at 10.33 and computer lowest at 7.58.

When asked whether "I believe the technology won't take advantage of me?" there was no statistically significant difference, χ²(2) = 3.983, p = 0.137. In terms of mean rank, smart phone was highest at 11.75, computer second at 10.67 and speaker lowest at 6.08.

When asked whether "I believe the technology will not use my data maliciously?" there was no statistically significant difference, χ²(2) = 4.684, p = 0.096. In terms of mean rank, smart phone was highest at 12.00, computer second at 10.67 and speaker lowest at 5.83.

6. CONCLUSION
In this study, Corritore et al.'s (2003) model [27] was used to formulate a proxy measure of trust for three different technologies: smart speakers, smart phones, and computers. The purpose was to understand which technology most people were likely to trust. The Kruskal-Wallis H test was used to examine the collected data and determine whether there was a statistically significant difference in the level of trust users place in the three technologies. The study found that, when deciding which technology to trust with their sensitive data, participants preferred to use a keyboard on a computer or a touch screen on a smart phone over using DVI. Despite this, many participants considered DVI extremely easy to use, and they also felt it was one of the most reputable and widely accepted of the 3 technologies. In terms of practical implications, those developing voice-based technologies should consider that people may have security concerns about such devices. On the other hand, the long-term effects are not yet known: smart speakers and other voice-based technologies (such as voice-based assistants in smart phones) are still relatively new, and perceptions of voice technologies may change within a few years. Possible future research directions include a long-term study observing how users' levels of trust change and adapt over a much longer period. Alternatively, with DVI technologies being considered as a means of authentication for banking and healthcare, another direction would be to understand how levels of trust in voice-based technology affect how users perceive and use voice-biometric authentication, in comparison with traditional means of authentication.

7. REFERENCES
[1] Schneier, B. (2000). Secrets and Lies: Digital Security in a Networked World. John Wiley & Sons.
[2] Henshel, D., Cains, M. G., Hoffman, B., & Kelley, T. (2015). Trust as a human factor in holistic cyber security risk assessment. Procedia Manufacturing, 3, 1117-1124.
[3] Newman, N. (2018). The future of voice and the implications for news.
[4] Cohen, P. R., & Oviatt, S. L. (1995). The role of voice input for human-machine communication. Proceedings of the National Academy of Sciences, 92(22), 9921-9927.
[5] Ratnasingham, P. (1998). The importance of trust in electronic commerce. Internet Research, 313-321.
[6] Hardré, P. L. (2016). When, how, and why do we trust technology too much? In Emotions, Technology, and Behaviors (pp. 85-106). Academic Press.
[7] Sasse, M. A. (2005). Usability and trust in information systems. Edward Elgar.
[8] Kiran, A. H., & Verbeek, P. P. (2010). Trusting our selves to technology. Knowledge, Technology & Policy, 23(3-4), 409-427.
[9] Sherchan, W., Nepal, S., & Paris, C. (2013). A survey of trust in social networks. ACM Computing Surveys (CSUR), 45(4), 1-33.
[10] S. Perez. "Voice-enabled smart speakers to reach 55% of U.S. households by 2022, says report". Internet: https://techcrunch.com/2017/11/08/voice-enabled-smart-speakers-to-reach-55-of-u-s-households-by-2022-says-report/, Nov. 8, 2017 [Nov. 29, 2019].
[11] Coeckelbergh, M. (2012). Can we trust robots? Ethics and Information Technology, 14(1), 53-60.
[12] Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. The Guardian, 17, 22.
[13] Zheng, S., Apthorpe, N., Chetty, M., & Feamster, N. (2018). User perceptions of smart home IoT privacy. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-20.
[14] Lessard, L. (2001). The effects of operator interface on head position and workload: direct voice input versus manual input for the CH-146 Griffon helicopter (Doctoral dissertation, Carleton University).
[15] Williams, M., Nurse, J. R., & Creese, S. (2017, August). Privacy is the boring bit: user perceptions and behaviour in the Internet-of-Things. In 2017 15th Annual Conference on Privacy, Security and Trust (PST) (pp. 181-18109). IEEE.
[16] Chin, E., Felt, A. P., Sekar, V., & Wagner, D. (2012, July). Measuring user confidence in smartphone security and privacy. In Proceedings of the Eighth Symposium on Usable Privacy and Security (pp. 1-16).
[17] Usman, A. B., & Gutierrez, J. (2019). DATM: a dynamic attribute trust model for efficient collaborative routing. Annals of Operations Research, 277(2), 293-310.
[18] Uslaner, E. (1999). The Moral Foundations of Trust. University of Maryland, College Park, MD.
[19] Uslaner, E. M. (2008). Trust as a moral value. The Handbook of Social Capital, 101-121.
[20] Cook, K. S., Yamagishi, T., Cheshire, C., Cooper, R., Matsuda, M., & Mashima, R. (2005). Trust building via risk taking: A cross-societal experiment. Social Psychology Quarterly, 68(2), 121-142.
[21] Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y., De Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517-527.
[22] Gold, C., Körber, M., Hohenberger, C., Lechner, D., & Bengler, K. (2015). Trust in automation: Before and after the experience of take-over scenarios in a highly automated vehicle. Procedia Manufacturing, 3, 3025-3032.
[23] Freedy, A., DeVisser, E., Weltman, G., & Coeyman, N. (2007, May). Measurement of trust in human-robot collaboration. In 2007 International Symposium on Collaborative Technologies and Systems (pp. 106-114). IEEE.
[24] Atkinson, D. J., & Clark, M. H. (2013, March). Autonomous agents and human interpersonal trust: Can we engineer a human-machine social interface for trust? In 2013 AAAI Spring Symposium Series.
[25] Gefen, D. (2004). What makes an ERP implementation relationship worthwhile: Linking trust mechanisms and ERP usefulness. Journal of Management Information Systems, 21(1), 263-288.
[26] Grabner-Kräuter, S., & Kaluscha, E. A. (2003). Empirical research in on-line trust: a review and critical assessment. International Journal of Human-Computer Studies, 58(6), 783-812.
[27] Corritore, C. L., Kracher, B., & Wiedenbeck, S. (2003). On-line trust: concepts, evolving themes, a model. International Journal of Human-Computer Studies, 58(6), 737-758.
[28] He, Y., Chen, Q., & Kitkuakul, S. (2018). Regulatory focus and technology acceptance: Perceived ease of use and usefulness as efficacy. Cogent Business & Management, 5(1), 1459006.
[29] Tseng, S., & Fogg, B. J. (1999). Credibility and computing technology. Communications of the ACM, 42(5), 39-44.
[30] Schnall, R., Higgins, T., Brown, W., Carballo-Dieguez, A., & Bakken, S. (2015). Trust, perceived risk, perceived ease of use and perceived usefulness as factors related to mHealth technology use. Studies in Health Technology and Informatics, 216, 467.