Human–Robot Interaction:
Control, Analysis, and Design

Edited by
Dan Zhang and Bin Wei
This book first published 2020
Cambridge Scholars Publishing
Lady Stephenson Library, Newcastle upon Tyne, NE6 2PA, UK
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
Copyright © 2020 by Dan Zhang, Bin Wei and contributors
All rights for this book reserved. No part of this book may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording or otherwise, without
the prior permission of the copyright owner.
ISBN (10): 1-5275-5740-5
ISBN (13): 978-1-5275-5740-6
TABLE OF CONTENTS
Preface....................................................................................................... vi
Chapter 1 .................................................................................................... 1
Trust and the Discrepancy between Expectations and Actual Capabilities
of Social Robots
Bertram Malle, Kerstin Fischer, James Young, AJung Moon,
Emily Collins
Chapter 2 .................................................................................................. 24
Talking to Robots at Depth
Robert Codd-Downey, Andrew Speers, Michael Jenkin
Chapter 3 .................................................................................................. 45
Towards the Ideal Haptic Device: Review of Actuation Techniques
for Human-Machine Interfaces
Maciej Lacki, Carlos Rossa
Chapter 4 .................................................................................................. 75
New Research Avenues in Human-Robot Interaction
Frauke Zeller
Chapter 5 .................................................................................................. 93
Interpreting Bioelectrical Signals for Control of Wearable Mechatronic
Devices
Tyler Desplenter, Jacob Tryon, Emma Farago, Taylor Stanbury,
Ana Luisa Trejos
Chapter 6 ................................................................................................ 147
Human-Robot Interaction Strategy in Robotic-assisted Balance
Rehabilitation Training
Jiancheng Ji, Shuai Guo, Jeff Xi, Jin Liu
Chapter 7 ................................................................................................ 171
Development of a Wearable Exoskeleton Suit for Paraplegic Patients
Bing Chen, Bin Zi, Ling Qin, Wei-Hsin Liao
PREFACE
Robotics has been used in industry and other fields for decades; however,
human-robot interaction is still at an early stage. This book, Human–Robot
Interaction: Control, Analysis, and Design, focuses on the topics of
human-robot interaction, its applications, and current challenges.
We would like to thank all the authors for their contributions to the book.
We are also grateful to the publisher for supporting this project. We hope
the readers find this book informative and useful.
This book consists of 7 chapters. Chapter 1 takes trust to be a set of
expectations about the robot’s capabilities and explores the risks of
discrepancies between a person’s expectations and the robot’s actual
capabilities. The major sources of these discrepancies and ways to mitigate
their detrimental effects are examined. Chapter 2 concentrates primarily
on diver-to-robot communication; communication from the robot to the
diver, especially when gesture-based approaches are used, remains an open
issue. Chapter 3 reviews recent advancements in the field of passive and
hybrid haptic actuation. The authors highlight the design considerations and
trade-offs associated with these actuation methods and provide guidelines
on how their use can help with development of the ultimate haptic device.
Chapter 4 introduces an extended HRI research model, which is adapted
from communication and mass communication studies, and focuses on the
social dimension of social robots. Chapter 5 highlights some existing
methods for interpreting EEG and EMG signals that are useful for the
control of wearable mechatronic devices. These methods are focused on
modelling motion for the purpose of controlling wearable mechatronic
devices that target musculoskeletal rehabilitation of the upper limb. Chapter
6 discusses a training method for patient balance rehabilitation based on
human-robot interaction. Chapter 7 develops a wearable exoskeleton suit
that involves human-robot interaction to help individuals with mobility
disorders caused by stroke, spinal cord injury, or other related diseases.
Finally, the editors would like to acknowledge all the friends and
colleagues who have contributed to this book.
Dan Zhang, Toronto, Ontario, Canada
Bin Wei, Sault Ste Marie, Ontario, Canada
February 25, 2020
CHAPTER 1
TRUST AND THE DISCREPANCY BETWEEN
EXPECTATIONS AND ACTUAL CAPABILITIES
OF SOCIAL ROBOTS
BERTRAM F. MALLE, KERSTIN FISCHER,
JAMES E. YOUNG, AJUNG MOON,
EMILY COLLINS
Corresponding author:
Bertram F. Malle, Professor
Department of Cognitive, Linguistic, and Psychological Sciences
Brown University
190 Thayer St.
Providence, RI 02912, USA
bfmalle@brown.edu
+1 (401) 863-6820
Kerstin Fischer, Professor (WSR)
Department of Design and Communication
University of Southern Denmark
Alsion 2
DK-6400 Sonderborg, Denmark
kerstin@sdu.dk
Phone: +45-6550-1220

James E. Young, Associate Professor
Department of Computer Science
University of Manitoba
Winnipeg, Manitoba R3T 2N2, Canada
Email: young@cs.umanitoba.ca
Phone: (lab) +1-204-474-6791

AJung Moon, Assistant Professor
Department of Electrical and Computer Engineering
McGill University
3480 University Street
Montreal, Quebec H3A 0E9, Canada
ajung.moon@mcgill.ca
Phone: +1-514-398-1694

Emily C. Collins, Research Associate
Department of Computer Science
University of Liverpool
Ashton Street
Liverpool L69 3BX, UK
E.C.Collins@liverpool.ac.uk
Phone: +44 (0)151 795 4271
Abstract
From collaborators in factories to companions in homes, social robots
hold the promise to intuitively and efficiently assist and work alongside
people. However, human trust in robotic systems is crucial if these robots
are to be adopted and used in homes and workplaces. In this chapter we take trust
to be a set of expectations about the robot’s capabilities and explore the
risks of discrepancies between a person’s expectations and the robot’s
actual capabilities. We examine major sources of these discrepancies and
ways to mitigate their detrimental effects. No simple recipe exists to help
build justified trust in human-robot interaction. Rather, we must try to
understand humans’ expectations and harmonize them with robot design
over time.
Introduction
As robots continue to be developed for a range of contexts where they
work with people, including factories, museums, airports, hospitals, and
homes, the field of Human-Robot Interaction explores how well people
will work with these machines, and what kinds of challenges will arise in
their interaction patterns. Social robotics focuses on the social and
relational aspects of Human-Robot Interaction, investigating how people
respond to robots cognitively and emotionally, how they use their basic
interpersonal skills when interacting with robots, and how robots
themselves can be designed to facilitate successful human-machine
interactions.
Trust is a topic that currently receives much attention in human-robot
interaction research. If people do not trust robots, they will not collaborate
with them or accept their advice, let alone purchase them and delegate to
them the important tasks they have been designed for. Building trust is
therefore highly desirable from the perspective of robot developers. A
closer look at trust in human-robot interaction, however, reveals that the
concept of trust itself is multidimensional. For instance, one could trust
another human (or perhaps robot) that they will carry out a particular task
reliably and without errors, and that they are competent to carry out the
task. But in some contexts, people trust another agent to be honest in their
communication, sincere in their promises, and to value another person’s,
or the larger community’s interests. In short, people may trust agents
based on evidence of reliability, competence, sincerity, or ethical integrity
[1], [2] (the authors have provided a measure of these multiple dimensions
of trust and invite readers to use that measure for their human-robot
interaction studies: http://bit.ly/MDMT_Scale). What unites trust along
all these dimensions is that it is an
expectation—expecting that the other is reliable, competent, sincere, or
ethical. Expectations, of course, can be disappointed. When the other was
not as reliable, capable, or sincere as one thought, one’s trust was
misplaced. Our goal in this chapter is to explore some of the ways in
which people’s expectations of robots may be raised too high and
therefore be vulnerable to disappointment.
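To make this multidimensional view concrete, consider the following minimal sketch (our illustration in Python, not the measure provided by [1], [2]; the dimension names and numeric levels are hypothetical). It represents trust as dimension-wise expectations, with a discrepancy arising wherever delivered behavior falls short of what was expected:

from dataclasses import dataclass

# Trust as a set of dimension-wise expectations (reliability, competence,
# sincerity, ethical integrity). All values here are illustrative only.
DIMENSIONS = ("reliable", "competent", "sincere", "ethical")

@dataclass
class TrustProfile:
    expected: dict  # dimension -> expected level in [0, 1]

    def discrepancies(self, observed: dict) -> dict:
        """Positive values mean the agent fell short of what was trusted."""
        gaps = {d: self.expected[d] - observed.get(d, 0.0) for d in DIMENSIONS}
        return {d: g for d, g in gaps.items() if g > 0}

user_trust = TrustProfile(expected={"reliable": 0.9, "competent": 0.75,
                                    "sincere": 0.7, "ethical": 0.9})
# The robot proves reliable, sincere, and ethical, but less competent:
print(user_trust.discrepancies({"reliable": 0.9, "competent": 0.25,
                                "sincere": 0.7, "ethical": 0.9}))
# -> {'competent': 0.5}: trust was misplaced on the competence dimension

Misplaced trust, on this view, is simply a nonempty set of such gaps.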
To avert disappointed expectations, at least two paths of action are
available. One is to rapidly expand robots’ capacities, which is what most
designers and engineers strive for. But progress has been slow [3], and the
social and communicative skills of artificial agents are still far from what
seems desirable [4], [5]. Another path is to ensure that people trust a robot
to be just as reliable, capable, and ethical as it really is; that is, to
ensure that people understand the robot’s actual abilities and limitations.
This path focuses on one aspect of transparency: providing human users
with information about the capabilities of a system. Such transparency, we
argue, is a precondition for justified trust in any autonomous machine, and
social robots in particular [6], [7].
In this chapter, we describe some of the sources of discrepancies
between people’s expectations and robots’ real capabilities. We argue that
these discrepancies are often caused by superficial properties of robots that elicit
feelings of trust in humans without validly indicating the underlying
property the person trusts in. We therefore need to understand the complex
human responses triggered by the morphology and behaviour of
autonomous machines, and we need to build a systematic understanding of
the effects that specific design choices have on people’s cognitive,
emotional, and relational reactions to robots. In the second part of the
chapter we lay out a number of ways to combat these discrepancies.
Discrepancies Between Human Expectations and Actual
Robot Capabilities
In robot design and human-robot interaction research, the tendency to
build ever more social cues into robots (from facial expressions to
emotional tone of voice) is undeniable. Intuitively, this makes sense since
robots that exhibit social cues are assumed to facilitate social interaction
by leveraging people’s existing social skill sets and experience, and they
would fit seamlessly into social spaces without constantly being in the way
[8]. However, in humans, the display of social cues is indicative of certain
underlying mental properties, such as thoughts, emotions, intentions, or
abilities. The problem is that robots can exhibit these same cues, through
careful design or specific technologies, even though they do not have the
same, or even similar, underlying properties.
For example, in human interaction, following another person’s gaze is
an invitation to joint attention [9]; and in communication, joint attention
signals the listener’s understanding of the speaker’s communicative
intention. Robots using such gaze cues [10] are similarly interpreted as
indicating joint attention and of understanding a speaker’s instructions
[11], [12]. However, robots can produce these behaviors naïvely using
simple algorithms, without having any concept of joint attention or any
actual understanding of the speaker’s communication. Thus, when a robot
displays these social cues, they are not symptoms of the expected
underlying processes, and a person observing this robot may erroneously
attribute a range of (often human-like) properties to the robot [13].
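As a concrete illustration of how cheaply such a cue can be produced, consider the following sketch (hypothetical code, not drawn from any system cited above): the robot generates a convincing "gaze" simply by servoing its head toward the largest detected face, while nothing in the loop represents joint attention, intent, or understanding.

from dataclasses import dataclass

@dataclass
class Face:
    center_x: float
    center_y: float
    width: float
    height: float

def select_target(faces):
    """Pick the largest detected face, a crude proxy for the nearest person."""
    return max(faces, key=lambda f: f.width * f.height, default=None)

def gaze_command(faces, image_w=640, image_h=480, gain=0.001):
    """Return (pan, tilt) offsets that turn the head toward the chosen face.
    Observers read the resulting motion as attention, yet no concept of
    joint attention or communicative intent exists anywhere in this loop."""
    target = select_target(faces)
    if target is None:
        return 0.0, 0.0  # idle: face straight ahead
    pan = -gain * (target.center_x - image_w / 2)
    tilt = -gain * (target.center_y - image_h / 2)
    return pan, tilt

# A person crossing the camera image "captures the robot's gaze":
print(gaze_command([Face(500, 240, 80, 80)]))  # head pans toward the face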
Erroneous assumptions about other people are not always harmful.
Higher expectations than initially warranted can aid human development
(as when caregivers “scaffold” the infant’s budding abilities [14]), can
generate learning success [15], and can foster prosocial behaviors [16].
But such processes are, at least currently, wholly absent with robots.
Overestimating a robot’s capacities poses manifest risks to users,
developers, and the public at large. When users entrust a robot with tasks
that the robot ends up not being equipped to do, people may be
disappointed and frustrated when they discover the robot’s limited actual
capabilities [17]; and there may be distress or harm if they discover these
limitations too late. Likewise, developers who consistently oversell their
products will be faced with increasing numbers of disappointed, frustrated,
or distressed users who no longer use the product, write terrible public
reviews (quite a significant impact factor for consumer technology), or
even sue the manufacturer. Finally, the public at large could be deprived of
genuine benefits if a few oversold robotic products cause serious harm,
destroy consumer trust, and lead to stifling regulation.
Broadly speaking, discrepancies between expectations and reality have
been well documented and explored under the umbrella of “expectancy
violation,” from the domains of perception [18] to human interaction [19].
In human-robot interaction research, such violations have been studied, for
example, by comparing expectations from media to interactions with a real
robot [20] or by quantifying updated capability estimates after interacting
with a robot [21]. Our discussion builds on this line of inquiry, but we do
not focus on cases when an expectancy violation has occurred, which
assumes that the person has become aware of the discrepancy (and is
likely to lose trust in the robot). Instead, we focus on sources of such
discrepancies and avenues for making a person aware of the robot’s
limitations before they encounter a violation (and thus before a loss of
trust).
Sources of Discrepancies
There are multiple sources of discrepancies between the perceived and
actual capacities of a robot. Obvious sources are the entertainment
industry and public media, which frequently exaggerate technical realities
of robotic systems. We discuss here more psychological processes, from
misleading and deceptive design and presentation to automatic inferences
from a robot’s superficial behavior to deep underlying capabilities.
Misleading design
Equipping a robot with outward social cues that have no corresponding
abilities is, at best, misleading. Such a strategy violates German designer
Dieter Rams’ concept of honest design, which is the commitment to design
that “does not make a product more innovative, powerful or valuable than
it really is” [22]; see also [23], [24]. Honest design is a commitment to
transparency—enabling the user to “see through” the outward appearance
and to accurately infer the robot’s capacities. In the HRI laboratory,
researchers often violate this commitment to transparency when they use
Wizard-of-Oz (WoZ) methods to make participants believe that they are
interacting with an autonomous, capable robot. Though such misperceptions
are rarely harmful, they do contribute to false beliefs and overly high
expectations about robots outside the laboratory. Moreover, thorough
debriefing at the end of such experiments, which would help reset people’s
generalizations about technical realities, is not always provided [25].
Deception
When a mismatch between apparent and real capacities is specifically
intended—for example, to sell the robot or impress the media—it arguably
turns into deception and even exploitation [26]. And people are
undoubtedly vulnerable to such exploitation. A recent study suggested that
people were willing to unlock the door to a university dormitory building
for a verbally communicating robot that had the seeming authority of a
food delivery agent. Deception is not always objectionable; in some
instances it is used for the benefit of the end user [27], [28], such as in
calming individuals with dementia [29] or encouraging children on the
autism spectrum to form social bonds [30]. However, these instances must
involve careful management of the risks involved in the deception—risks
for the individual user, the surrounding social community, and the
precedent it sets for other, perhaps less justified cases of deception.
Impact of norms
At times, people are well aware that they are interacting with a
machine in human-like ways because they are engaging with the robot in a
joint pretense [31] or because it is the normatively correct way to behave.
For example, if a robot greets a person, the appropriate response is to
reciprocate the greeting; if the speaker asks a question, the appropriate
response is to answer the question. Robots may not recognize the
underlying social norm and they may not be insulted if the user violates
the norm, but the user, and the surrounding community (e.g., children who
are learning these norms), benefit from the fact that both parties uphold
relevant social practices and thus a cooperative, respectful social order
[32]. The more specific the roles that robots are assigned (e.g., nurse
assistant, parking lot attendant), the more these norms and practices will
influence people’s behavior toward the robot [33]. If robots are equipped
with the norms that apply to their roles (which is a significant challenge
[34]), this may improve interaction quality and user satisfaction. Further,
robots can actively leverage norms to shape how people interact with them,
though perhaps even in manipulative fashion [35]. Norm-appropriate behavior
is also inherently trust-building: because norms are commitments to act,
and expectations that others will act, in ways that benefit the other (thus
invoking the dimension of ethical trust [36]), norm violations become all
the more powerful in threatening trust.
Expanded inferences
Whereas attributions of norm competence to a robot are well grounded
in the robot’s actual behavior, a robot that displays seemingly natural
communicative skills can compel people to infer (and genuinely assume to
be present) many other abilities that the robot is unlikely to have
[37]. In particular, seeing that a robot has some higher-level abilities,
people are likely to assume that it will also possess more basic abilities
that in humans would be a prerequisite for the higher-level ability. For
instance, a robot may greet someone with “Hi, how are you?” but be
unable itself to answer the same question when the greeting is reciprocated,
and it may not even have any speech understanding capabilities at all.
Furthermore, a robot’s syntactically correct sentences do not mean it has a
full-blown semantics or grasps anything about conversational dynamics
[38]. Likewise, seeing that a robot has one skill, we must expect people to
assume that it also has other skills that in humans are highly correlated
with the first. For example, a robot may be able to entertain or even tutor a
child but be unable to recognize when the child is choking on a toy. People
find it hard to imagine that a being can have selected, isolated abilities that
do not build upon each other [39].
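The greeting example can be made concrete with a small sketch (again hypothetical, not a deployed system): the robot produces a fluent opener from a canned script while possessing no speech recognition at all, so it cannot answer the very question it just asked.

import random

# A purely scripted "conversational" robot: the higher-level display
# (fluent speech) has none of the lower-level support (understanding)
# that people infer from it.
GREETINGS = ["Hi, how are you?", "Good morning! Nice to see you."]

def on_person_detected() -> str:
    # Natural-sounding speech, triggered by mere proximity.
    return random.choice(GREETINGS)

def on_speech_heard(audio: bytes):
    # No speech recognition, semantics, or dialogue model: whatever the
    # person replies is simply discarded.
    return None

print(on_person_detected())                    # e.g., "Hi, how are you?"
print(on_speech_heard(b"I'm fine, and you?"))  # None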
Though it is desirable that, say, a manufacturer provides explicit and
understandable documentation of a system’s safety and performance
parameters [40], [41], making explicit what a robot can and cannot do will
often fail. That is because some displayed behaviors set off a cascade of
inferences that people have evolved and practiced countless times with
human beings [32]. As a result, people’s spontaneous reactions to robots in
social contexts and their explicit beliefs about what mental capacities robots possess
can come apart [42], [43].
Automatic inferences
Some inferences or emotional responses are automatic, at least upon
initial encounters with artificial agents. Previous research has shown that
people treat computers and related technology (including robots) in some
ways just like human beings (e.g., applying politeness and reciprocity),
and often do so mindlessly [44]. The field of human-robot interaction has
since identified numerous instances in which people show basic social-
cognitive responses when responding to humanlike robots—for example,
by following the “gaze” of a robot [45] or by taking its visual perspective
[46]. Beyond such largely automatic reactions, a robot’s humanlike
appearance seems to invite a wide array of inferences about the robot’s
intelligence, autonomy, or mental capacities more generally [47]–[49]. But
even if these appearance-to-mind inferences are automatic, they are not
simplistic; they do not merely translate some degree of humanlikeness into
a proportional degree of “having a mind.” People represent both
humanlike appearance and mental capacities along multiple dimensions
[50]–[52], and specific dimensions of humanlike appearance trigger
people’s inferences for specific dimensions of mind. For example, features
of the Body Manipulator dimension (e.g., torso, arms, fingers) elicit
inferences about capacities of reality interaction, which include perception,
learning, acting, and communicating. By contrast, facial and surface features
(e.g., eyelashes, skin, apparel) elicit inferences about affective capacities,
including feelings and basic emotions, as well as moral capacities,
including telling right from wrong and upholding moral values [53].
Variations
We should note, however, that people’s responses to robots are neither
constant nor universal. They vary within a person: responses manifest
sometimes as cognitive, emotional, or social-relational reactions, can be in
the foreground or background at different moments in time, and change
with extended interactions with the robot [8], [32]. They also show substantial
interpersonal variation, as a function of levels of expertise [54], personal
style [55], and psychosocial predispositions such as loneliness [56].
Status quo
The fact remains, however, that people are vulnerable to the impact of
a robot’s behavior and appearance [57]. We must expect that, in real life as
in the laboratory, people will be willing to disclose negative personal
information to humanoid agents [58], [59], trust and rely on them [60],
empathize with them [61], [62], give in to a robot’s obedience-like
pressure to continue tedious work [63] or perform erroneous tasks [64].
Further, in comparison to a mechanical robot, people are more prone to
take advice from a humanoid robot [65], trust and rely on it more [60],
and are more likely to comply with its requests [66]. None of these
behaviors are inherently faulty; but currently they are unjustified, because
they are generated by superficial cues rather than by an underlying reality
[57]. At present, mechanical and humanoid robots alike have no more
knowledge to share than Wikipedia, are no more trustworthy at keeping
secrets than one’s iPhone, and have no more needs or suffering than a
cartoon character. They may in the future, but until that future, we have to
ask how we can prevent people from having unrealistic expectations of
robots, especially humanlike ones.
How to Combat Discrepancies
We have seen that discrepancies between perceived and actual
capacities exist at multiple levels and are fed from numerous sources. How
can people recover from these mismatches or avoid them in the first place?
In this section, we provide potential paths for both short- and long-term
solutions to the problem of expectation discrepancy when dealing with
social robots.
Waiting for the future
An easy solution may be to simply wait for the robots of the future to
make true the promises of the present. However, that would mean an
extended time of misperceived reality, and numerous opportunities for
misplaced trust, disappointment, and non-use. It is unclear whether
recovery from such prolonged negative experiences is possible. Another
strategy to overcome said discrepancies may be to encourage users to
acquire minimally necessary technical knowledge to better evaluate
artificial agents, perhaps encouraging children to program machines and
thus see their mechanical and electronic insides. However, given the
widespread disparities in access to quality education in most of the world’s
countries, the technical-knowledge path would leave poorer people misled,
deceived, and more exploitable than ever before. Moreover, whereas the
knowledge strategy would combat some of the sources we discussed (e.g.,
deception, expanded inferences), it would leave automatic inferences
intact, as they are likely grounded in biologically or culturally evolved
response patterns.
Experiencing the cold truth
Another strategy might be to practically force people to experience the
mechanical and lifeless nature of machines—such as by asking people to
inspect the skinless plastic insides of an animal robot like Paro or by
unscrewing a robot’s head and handing it to the person. It is, however, not
clear that this will provide more clarity for human-robot interactions. A
study of the effects of demonstrating the mechanistic nature of robots to
children in fact showed that the children still interacted with the robot in
the same social ways as children to whom the robotic side of robots had
not been pointed out [67]. Furthermore, if people have already formed
emotional attachments, such acts will be seen as cruel and distasteful
rather than having any corrective effect on discrepant perceptions.
Revealing real capacities
Perhaps most obvious would be truth in advertising. Robot designers
and manufacturers, organizations and companies that deploy robots in
hotel lobbies, hospitals, or school yards would signal to users what the
robot can and cannot do. But there are numerous obstacles to designers
and manufacturers offering responsible and modest explanations of the
machine’s real capacities. They are under pressure to produce within the
constraints of their contracts; they are beholden to funders; they need to
satisfy the curiosity of journalists and policy makers, who are also keen to
present positive images of developing technologies.
Further, even if designers or manufacturers adequately reveal the
machine’s limited capabilities, human users may resist such information.
If the information is in a manual, people won’t read it. If it is offered
during purchase, training, or first encounters, it may still be ineffective.
That is because the abovementioned human tendency to perceive agency
and mind in machines that have the tell-tale signs of self-propelled motion,
eyes, and verbal communication is difficult to overcome. Given the
eliciting power of these cues, it is questionable (though empirically testable)
whether explicit information can ever counteract a user’s inappropriate
mental model of the machine.
Legibility and explainability
An alternative approach is to make the robot itself “legible”—
something that a growing group of scholars is concerned with [68]. But
whereas a robot’s intentions and goals can be made legible—e.g., in a
projection of the robot’s intended motion path or in the motion itself—
capabilities and other dispositions are not easily expressed in this way. At
the same time, the robot can correct unrealistic expectations by indicating
some of its limits of capability in failed actions [69] or, even more
informative, in explicit statements that it is unable or forbidden to act a
certain way [70].
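A minimal sketch of such capability transparency (our illustration; the command names and table entries are hypothetical) might have the robot consult declared capability and permission tables before acting and, on failure, state whether it is unable or not permitted to comply, correcting the user's expectations before a trust violation occurs:

# Before acting, the robot checks declared capability and permission
# tables; on failure it states why it will not act (cf. [69], [70]).
CAPABILITIES = {"fetch_water": True, "climb_stairs": False,
                "open_front_door": True}
PERMISSIONS = {"fetch_water": True, "open_front_door": False}

def respond(command: str) -> str:
    readable = command.replace("_", " ")
    if not CAPABILITIES.get(command, False):
        return f"I am unable to {readable}."          # limit of capability
    if not PERMISSIONS.get(command, False):
        return f"I am not permitted to {readable}."   # normative limit
    return f"Okay, I will {readable}."

print(respond("climb_stairs"))     # I am unable to climb stairs.
print(respond("open_front_door"))  # I am not permitted to open front door.
print(respond("fetch_water"))      # Okay, I will fetch water.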
A step further would be to design the robot in such a way that it can
explicate its own actions, reasoning, and capabilities. But whereas giving
users access to the robot’s ongoing decision making and perhaps offering
insightful and human-tailored explanations of its performed actions may
be desirable [71], “explaining” one’s capacities is highly unusual. Most of
this kind of communication among humans is done indirectly, by
providing information about, say, one’s occupation [72] or acquaintance
with a place [73]. Understanding such indirect speech requires access to
shared perceptions, background knowledge, and acquired common ground
that humans typically do not have with robots. Moreover, a robot’s
attempts to communicate its knowledge, skills, and limitations can also
disrupt an ongoing activity or even backfire if talk about capabilities
makes users suspect that there is a problem with the interaction [32]. There
is, however, a context in which talk about capabilities is natural—
educational settings. Here, one agent learns new knowledge, skills, and
abilities, often from another agent, and both might comment freely on the
learner’s capabilities already in place, others still developing, and yet
others clearly absent. If we consider a robot an ever-learning agent, then
perhaps talk about capabilities and limitations can be rather natural.
One potential drawback of robots that explain themselves must be
mentioned. Such robots would appear extremely sophisticated, and one
might then worry which other capacities people will infer from this
explanatory prowess. Detailed insights into reasoning may invite
inferences of deeper self-awareness, even wisdom, and user-tailored
explanations may invite inferences of caring and understanding of the
user’s needs. But perhaps by the time full-blown explainability can really
be implemented, some of these other capacities will be as well; then the
discrepancies would all lift at once.
Managing expectations
But until that time, we are better off with a strategy of managing
expectations and ensuring performance that matches these expectations
and lets trust build upon solid evidence. Managing expectations will rely
on some of the legibility and explainability strategies just mentioned along
with attempts to explicitly set expectations low, which may be easily
exceeded to positive effect [74]. However, such explicit strategies would
be unlikely to keep automatic inferences in check. For example, in one
study, Zhao et al. (submitted) showed that people take a highly humanlike
robot’s visual perspective even when they are told it is a wax figure. The
power of the mere humanlike appearance was enough to trigger the basic
social-cognitive act of perspective taking.
Thus, we also need something we might call restrained design—
attempts to avoid overpromising signals in behavior, communication, and
appearance, as well as limiting the robot’s roles so that people form
limited, role- and context-adequate expectations. As a special case of such
an approach we describe here the possible benefit of an incremental robot
design strategy—the commitment to advance robot capacities in small
steps, each of which is well grounded in user studies and reliability testing.
Incremental Design
Why would designing and implementing small changes in a robot
prevent discrepancies between a person’s understanding of the robot’s
capacities and its actual capacities? Well-designed small changes may be
barely noticeable and, unless in a known, significant dimension (e.g.,
having eyes after never having had eyes), will limit the number of new
inferences that would be elicited by them. Further, even when noticed, the
user may be able to more easily adapt to a small change, and integrate it
into their existing knowledge and understanding of the robot, without
having to alter their entire mental model of the robot.
Consider the iRobot Roomba robotic vacuum cleaner. The Roomba has
a well-defined, functional role in households as a cleaning appliance. From
its first iteration, any discrepancy between people’s perceptions of the
robot’s capacities and its actual capacities was likely related to the robot’s
cleaning abilities, and could be quickly resolved by using the robot in
practice. As new models hit the market, Roomba’s functional capacities
improved only incrementally—for example, beep-sequence error codes
were replaced by pre-recorded verbal announcements, or random-walk
cleaning modes were replaced by rudimentary mapping technology. In
these cases, the human users have to accommodate only minor novel
elements in their mental models, each changing only very few parameters.
Consider, by contrast, Softbank’s Pepper robot. From the original
version, Pepper was equipped with a humanoid form including arms and
hands that appeared to gesture, and a head with eyes and an actuated neck,
such that it appeared to look at and follow people. Further, marketing
material emphasized the robot’s emotional capacities, using such terms as
“perception modules” and an “emotional engine.” We can expect that
these features encourage people to infer complex capacities in this robot,
even beyond perception and emotion. Observing the robot seemingly gaze
at us and follow a person’s movements suggests attention and interest; the
promise of emotional capacities suggests sympathy and understanding.
However, beyond pre-coded sentences intended to be cute or funny, the
robot currently has no internal programmed emotional model at all. As a
result, we expect there to be large discrepancies between a person’s
elicited expectations and the robot’s actual abilities. Assumptions of deep
understanding in conversation and willingness toward risky personal
disclosure may then be followed by frustration or disappointment.
The discrepancy in Pepper’s case stems in part from the gap between the
expectations that the designers invite the human to form and the actual
reality of Pepper’s abilities. Compared with other technologies people may
be familiar with, a highly humanoid appearance, human-like social
signaling behaviors, and purported emotional abilities trigger a leap in the
inferences people make from “robots can't do much” to “they can do a lot.”
But that leap is not matched by Pepper’s actual capabilities. As a result,
encountering Pepper creates a large discrepancy that will be quite difficult
to overcome. A more incremental approach would curtail the humanoid
form and focus on the robot’s gaze-following abilities, without claims of
emotional processing. If the gaze following behavior actually supports
successful person recognition and communication turn taking, then a more
humanoid form may be warranted. And only if actual emotion recognition
and the functional equivalent of emotional states in the robot are achieved
would Pepper’s “emotion engine” be promoted.
Incremental approaches have been implemented in other technological
fields. For example, commercial car products have in recent years
increasingly included small technical changes that point toward eventual
autonomous driving abilities, such as cruise control, active automatic
braking systems, lane departure detection and correction, and the like.
More advanced cars, such as Tesla’s Model S, have an “auto-pilot” mode
that takes a further step toward autonomous driving in currently highly
constrained circumstances. The system still frequently reminds the user to
keep their hands on the steering wheel and to take over when those
constrained circumstances no longer hold (e.g., no painted lane
information). However, the success of this shared autonomy situation
depends on how a product is marketed. Other recent cars may include a
great deal of autonomy in their onboard computing system but are not
marketed as autonomous or self-driving; instead, their systems are called
“Traffic Jam Assist” or “Super Cruise.” Such labeling decisions limit what
human users expect of the car and therefore what they entrust it to do. A recent study
confirms that labeling matters: people overestimate the capacities of Tesla
cars more than those of comparable brands [75]. And perhaps unsurprisingly, the
few highly-publicized accidents with Teslas are typically the result of vast
overestimation of what the car can do [76], [77].
Within self-driving vehicle research and development, a category
system is in place to express the gradually increasing levels of autonomy
of the system in question. In this space, however, the incremental approach
may still take steps that are too big. In the case of vehicle control, people's
adjustment to continuously increasing autonomy is not itself continuous
but takes a qualitative leap. People either drive themselves, assisted up to a
point, or they let someone else (or something else) drive; they become
passengers. In regular cars, actual passengers give up control, take naps,
read books, chat on the phone, and would not be ready to instantly take the
wheel when the main driver requests it. Once people take on the
unengaged passenger role with increasingly (but not yet fully) autonomous
vehicles, the situation will result in over-trust (the human will take naps,
read books, etc.). And if there remains a small chance that the car needs
the driver’s attention but the driver has slipped into the passenger role, the
situation could prove catastrophic. The human would not be able to take
the wheel quickly enough when the car requests it because it takes time for
a human to shift attention, observe their surroundings, develop situational
awareness, make a plan, and act [78]. Thus, even an incremental approach
would not be able to avert the human’s jump to believing the car can
handle virtually all situations, when in fact the car cannot.
Aside from incremental strategies, the more general restrained design
approach must ultimately be evidence-based design. Decisions about form
and function must be informed by evidence about which of the robot’s
signals elicit which expectations in the human. Such insights are still rather
sparse and often highly specific to certain robots. It therefore takes a
serious research agenda to address this challenge, with a full arsenal of
scientific approaches: carefully controlled experiments to establish causal
relations between robot characteristics and a person’s expectations;
examination of the stability of these response patterns by comparing young
children and adults as well as people from different cultures; and
longitudinal studies to establish how those responses will change or
stabilize in the wake of interacting with robots over time. We close our
analysis by discussing the strengths and challenges that come with
longitudinal studies.
Longitudinal Research
Longitudinal studies would be the ideal data source to elucidate the
source of and remedy for discrepancies between perceived and actual
robot capacities. That is because, first, they can distinguish between initial
reactions to robots and more enduring response patterns. We have learned
from human-human social perception research that initial responses, even
if they change over time, can strongly influence the range of possible long-
term responses; in particular, initial negative responses tend to improve
more slowly than positive initial reactions deteriorate [79]. In human-robot
encounters, some responses may be automatic and have a lasting impact,
whereas others may initially be automatic but could be changeable over
time. Furthermore, some responses may reflect an initial lack of
understanding of the encountered novel agent, and with time a search for
meaning may improve this understanding [80]. Longitudinal studies can
also track how expectations clash with new observations and how trust
fluctuates as a result.
High-quality longitudinal research is undoubtedly difficult to conduct
because of cost, time and management commitments, participant attrition,
ethical concerns of privacy and unforeseen impacts on daily living, and the
high rate of mechanical robot failures. A somewhat more modest goal
might be to study short-term temporal dynamics that will advance
knowledge but also provide a launching pad for genuine longitudinal
research. For the question of recovery from expectation-reality
discrepancies we can focus on a few feasible but informative paradigms.
A first paradigm is to measure people’s responses to a robot with or
without information about the true capacities of the robot. In comparison
to spontaneous inferences about the robot’s capacities, would people
adjust their inferences when given credible information? One could
compare the differential effectiveness of (a) inoculation (providing the
ground-truth information before the encounter with the robot) and (b)
correction (providing it after the encounter). In human persuasion
research, inoculation is successful when the persuasive attempt operates at
an explicit, rational level [81]. By analogy, the comparison of inoculation
and post-hoc correction in the human-robot perception case may help
clarify which human responses to robots lie at the more explicit and which
at the more implicit level.
A second paradigm is to present the robot twice during a single
experimental session, separated by some time delay or unrelated other
activities. What happens to people’s representations formed in the first
encounter that are either confirmed or disconfirmed in the second
encounter? If the initial reactions are mere novelty effects, they would
subside independent of the new information; if they are deeply entrenched,
they would remain even after disconfirmation; and if they are
systematically responsive to evidence, they would stay the same under
confirmation and change under disconfirmation [82]. In addition, different
response dimensions may behave differently. Beliefs about the robot’s
reliability and competence may change more rapidly whereas beliefs about
its benevolence may be more stable.
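These three patterns can be written down as simple update rules; the sketch below (our illustrative formalization, not a model proposed in this chapter; the numeric values are arbitrary) shows how a confirming versus a disconfirming second encounter would separate them:

def novelty_effect(belief, evidence):
    """Initial reaction subsides regardless of the new information."""
    return belief * 0.5

def entrenched(belief, evidence):
    """Deeply entrenched: the belief survives even disconfirmation."""
    return belief

def evidence_responsive(belief, evidence, rate=0.8):
    """Stays the same under confirmation, shifts under disconfirmation."""
    return belief + rate * (evidence - belief)

prior = 0.9                      # e.g., perceived competence after encounter 1
confirming, disconfirming = 0.9, 0.2
for rule in (novelty_effect, entrenched, evidence_responsive):
    print(f"{rule.__name__:20s}",
          round(rule(prior, confirming), 2),
          round(rule(prior, disconfirming), 2))
# novelty_effect:      0.45 0.45  (subsides either way)
# entrenched:          0.9  0.9   (unchanged either way)
# evidence_responsive: 0.9  0.34  (changes only under disconfirmation)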
In a third paradigm, repeated-encounter but short-term experiments
could bring participants back to the laboratory more than once. Such
studies could distinguish people’s adjustments to specific robots (if they
encounter the same robot again) from adjustments of their general beliefs
about robots (if they encounter a different, but comparable robot again).
From stereotype research, we have learned that people often maintain
general beliefs about a social category even when acquiring stereotype-
disconfirming information about specific individuals [83]. Likewise,
people may update their beliefs about a specific robot they encounter
repeatedly without changing their beliefs about robots in general [82].
Conclusion
Trust is one agent’s expectation about the other’s actions. Trust is
broken when the other does not act as one expected—is not as reliable or
competent as one expected, or is dishonest or unethical. In all these cases,
a discrepancy emerges between what one agent expected and the other
agent delivered. Human-robot interactions, we suggest, often exemplify
such cases: people expect more of their robots than the robots can deliver.
Such discrepancies have many sources, from misleading and deceptive
information to the seemingly innocuous but powerful presence of deep-
seated social signals. This range of sources demands a range of remedies,
and we explored several of them, from patience to legibility, from
incremental design to longitudinal research. Because of people’s complex
responses to artificial agents, there is no optimal recipe for minimizing
discrepancies and maximizing trust. We can only advance our
understanding of those complex human responses to robots, use this
understanding to guide robot design, and monitor how improved design
and human adaptation, over time, foster more calibrated and trust-building
human-robot interactions.
References
[1] D. Ullman and B. F. Malle, “What does it mean to trust a robot? Steps
toward a multidimensional measure of trust,” in Companion of the 2018
ACM/IEEE International Conference on Human-Robot Interaction, New
York, NY, USA: ACM, 2018, pp. 263–264.
[2] D. Ullman and B. F. Malle, “Measuring gains and losses in human-robot
trust: Evidence for differentiable components of trust.,” in Companion to
the 2019 ACM/IEEE International Conference on Human-Robot
Interaction, HRI ’19., New York, NY: ACM, 2019, pp. 618–619.
[3] L. Lewis and S. Shrikanth, “Japan lays bare the limitations of robots in
unpredictable work,” Financial Times, 25-Apr-2019. [Online]. Available:
https://www.ft.com/content/beece6b8-4b1a-11e9-bde6-79eaea5acb64.
[Accessed: 04-Jan-2020].
[4] R. K. Moore, “Spoken language processing: Where do we go from here?,” in
Your Virtual Butler: The Making-of, R. Trappl, Ed. Berlin, Heidelberg:
Springer Berlin Heidelberg, 2013, pp. 119–133.
[5] M.-A. Williams, “Robot social intelligence,” in Social Robotics, S. S. Ge,
O. Khatib, J.-J. Cabibihan, R. Simmons, and M.-A. Williams, Eds. Springer
Berlin Heidelberg, 2012, pp. 45–55.
[6] K. Fischer, H. M. Weigelin, and L. Bodenhagen, “Increasing trust in
human–robot medical interactions: effects of transparency and
adaptability,” Paladyn, Journal of Behavioral Robotics, vol. 9, no. 1, pp.
95–109, 2018, doi: 10.1515/pjbr-2018-0007.
[7] T. L. Sanders, T. Wixon, K. E. Schafer, J. Y. Chen, and P. Hancock, “The
influence of modality and transparency on trust in human-robot
interaction,” presented at the Cognitive Methods in Situation Awareness
and Decision Support (CogSIMA), 2014 IEEE International Inter-
Disciplinary Conference on, 2014, pp. 156–159.
[8] K. Fischer, “Tracking anthromorphizing behavior in human-robot
interaction,” Manuscript submitted for publication, 2020.
[9] N. Eilan, C. Hoerl, T. McCormack, and J. Roessler, Eds., Joint attention:
Communication and other minds. New York, NY: Oxford University Press,
2005.
[10] B. Mutlu, T. Kanda, J. Forlizzi, J. Hodgins, and H. Ishiguro,
“Conversational gaze mechanisms for humanlike robots,” ACM Trans.
Interact. Intell. Syst., vol. 1, no. 2, pp. 12:1–12:33, Jan. 2012, doi:
10.1145/2070719.2070725.
[11] K. Fischer, K. Lohan, J. Saunders, C. Nehaniv, B. Wrede, and K. Rohlfing,
“The impact of the contingency of robot feedback on HRI,” in 2013
International Conference on Collaboration Technologies and Systems
(CTS), IEEE, 2013, pp. 210–217.
[12] K. Fischer, K. Foth, K. Rohlfing, and B. Wrede, “Mindful tutors – linguistic
choice and action demonstration in speech to infants and to a simulated
robot,” Interaction Studies, vol. 12, no. 1, pp. 134–161, 2011.
[13] M. Kwon, M. F. Jung, and R. A. Knepper, “Human expectations of social
robots,” in Proceedings of the Eleventh ACM/IEEE International
Conference on Human Robot Interaction, HRI’16, Piscataway, NJ, 2016,
pp. 463–464.
[14] R. Mermelshtine, “Parent–child learning interactions: A review of the
literature on scaffolding,” British Journal of Educational Psychology, vol.
87, no. 2, pp. 241–254, Jun. 2017, doi: 10.1111/bjep.12147.
[15] L. Jussim, S. L. Robustelli, and T. R. Cain, “Teacher expectations and self-
fulfilling prophecies,” in Handbook of motivation at school., K. R. Wenzel
and A. Wigfield, Eds. New York, NY: Routledge/Taylor & Francis Group,
2009, pp. 349–380.
[16] R. E. Kraut, “Effects of social labeling on giving to charity,” Journal of
Experimental Social Psychology, vol. 9, no. 6, pp. 551–562, Nov. 1973, doi:
10.1016/0022-1031(73)90037-1.
[17] M. de Graaf, S. B. Allouch, and J. van Dijk, “Why do they refuse to use my
robot?: Reasons for non-use derived from a long-term home study,” in
Proceedings of the 2017 ACM/IEEE International Conference on Human-
Robot Interaction, 2017, pp. 224–233.
[18] M. A. Bobes, M. Valdessosa, and E. Olivares, “An ERP study of
expectancy violation in face perception,” Brain and Cognition, vol. 26, no.
1, pp. 1–22, Sep. 1994, doi: 10.1006/brcg.1994.1039.
[19] J. K. Burgoon, D. A. Newton, J. B. Walther, and E. J. Baesler, “Non-verbal
expectancy violations,” Journal of Nonverbal Behavior, vol. 55, no. 1, pp.
58–79, 1989.
[20] U. Bruckenberger, A. Weiss, N. Mirnig, E. Strasser, S. Stadler, and M.
Tscheligi, “The good, the bad, the weird: Audience evaluation of a ‘real’
robot in relation to science fiction and mass media,” ICSR 2013, vol. 8239
LNAI, pp. 301–310, 2013, doi: 10.1007/978-3-319-02675-6_30.
[21] T. Komatsu, R. Kurosawa, and S. Yamada, “How does the difference
between users’ expectations and perceptions about a robotic agent affect
their behavior?,” Int J of Soc Robotics, vol. 4, no. 2, pp. 109–116, Apr.
2012, doi: 10.1007/s12369-011-0122-y.
[22] Vitsoe, “The power of good design,” 2018. [Online]. Available:
https://www.vitsoe.com/us/about/good-design. [Accessed: 22-Oct-2018].
[23] G. Donelli, “Good design is honest,” 13-Mar-2015. [Online]. Available:
https://blog.astropad.com/good-design-is-honest/. [Accessed: 22-Oct-2018].
[24] C. de Jong, Ed., Ten principles for good design: Dieter Rams. New York,
NY: Prestel Publishing, 2017.
[25] D. J. Rea, D. Geiskkovitch, and J. E. Young, “Wizard of awwws: Exploring
psychological impact on the researchers in social HRI experiments,” in
Proceedings of the Companion of the 2017 ACM/IEEE International
Conference on Human-Robot Interaction, New York, NY, USA:
Association for Computing Machinery, 2017, pp. 21–29.
[26] W. C. Redding, “Ethics and the study of organizational communication:
When will we wake up?,” in Responsible Communication: Ethical Issues in
Business, Industry, and the Professions, J. A. Jaksa and M. S. Pritchard,
Eds. Cresskill, NJ: Hampton Press, 1996, pp. 17–40.
[27] E. C. Collins, “Vulnerable users: deceptive robotics,” Connection Science,
vol. 29, no. 3, pp. 223–229, Jul. 2017, doi:
10.1080/09540091.2016.1274959.
[28] A. Matthias, “Robot lies in health care: When is deception morally
permissible?,” Kennedy Inst Ethics J, vol. 25, no. 2, pp. 169–192, Jun.
2015, doi: 10.1353/ken.2015.0007.
[29] K. Wada, T. Shibata, T. Saito, and K. Tanie, “Effects of robot-assisted
activity for elderly people and nurses at a day service center,” Proceedings
of the IEEE, vol. 92, no. 11, pp. 1780–1788, Nov. 2004, doi:
10.1109/JPROC.2004.835378.
[30] E. Karakosta, K. Dautenhahn, D. S. Syrdal, L. J. Wood, and B. Robins,
“Using the humanoid robot Kaspar in a Greek school environment to
support children with Autism Spectrum Condition,” Paladyn, Journal of
Behavioral Robotics, vol. 10, no. 1, pp. 298–317, Jan. 2019, doi:
10.1515/pjbr-2019-0021.
[31] H. H. Clark, “How do real people communicate with virtual partners?,”
presented at the Proceedings of AAAI-99 Fall Symposium, Psychological
Models of Communication in Collaborative Systems, November 5-7th,
1999, North Falmouth, MA., 1999.
[32] K. Fischer, Designing speech for a recipient. partner modeling, alignment
and feedback in so-called “simplified registers.” Amsterdam: John
Benjamins, 2016.
[33] J. Goetz, S. Kiesler, and A. Powers, “Matching robot appearance and
behavior to tasks to improve human-robot cooperation,” in The 12th IEEE
International Workshop on Robot and Human Interactive Communication,
vol. 19, New York, NY: Association for Computing Machinery, 2003, pp.
55–60.
[34] B. F. Malle, P. Bello, and M. Scheutz, “Requirements for an artificial agent
with norm competence,” in Proceedings of 2nd ACM conference on AI and
Ethics (AIES’19), New York, NY: ACM, 2019.
[35] E. Sanoubari, S. H. Seo, D. Garcha, J. E. Young, and V. Loureiro-
Rodríguez, “Good robot design or Machiavellian? An in-the-wild robot
leveraging minimal knowledge of passersby’s culture,” in Proceedings of
the 14th ACM/IEEE International Conference on Human-Robot Interaction
(HRI), New York, NY, USA: Association for Computing Machinery, 2019,
pp. 382–391.
[36] B. F. Malle and D. Ullman, “A multi-dimensional conception and measure
of human-robot trust,” in Trust in human-robot interaction: Research and
applications, C. S. Nam and J. B. Lyons, Eds. Elsevier, 2020.
[37] K. Fischer and R. Moratz, “From communicative strategies to cognitive
modelling,” presented at the First International Workshop on `Epigenetic
Robotics’, September 17-18, 2001, Lund, Sweden, 2001.
[38] S. Payr, “Towards human-robot interaction ethics.,” in A Construction Manual
for Robots’ Ethical Systems: Requirements, Methods, Implementations, R.
Trappl, Ed. Cham, Switzerland: Springer International, 2015, pp. 31–62.
[39] K. Fischer, What computer talk is and isn’t: Human-computer conversation
as intercultural communication. Saarbrücken: AQ-Verlag, 2006.
[40] K. R. Fleischmann and W. A. Wallace, “A covenant with transparency:
Opening the black box of models,” Communications of the ACM - Adaptive
complex enterprises, vol. 48, no. 5, pp. 93–97, 2005, doi:
10.1145/1060710.1060715.
[41] M. Hind et al., “Increasing trust in AI services through supplier’s
declarations of conformity,” ArXiv e-prints, Aug. 2018.
[42] S. R. Fussell, S. Kiesler, L. D. Setlock, and V. Yew, “How people
anthropomorphize robots,” in Proceedings of the 3rd ACM/IEEE
International Conference on Human Robot Interaction, HRI ’08, New
York, NY, USA: Association for Computing Machinery, 2008, pp. 145–
152.
[43] J. Złotowski, H. Sumioka, S. Nishio, D. F. Glas, C. Bartneck, and H.
Ishiguro, “Appearance of a robot affects the impact of its behaviour on
perceived trustworthiness and empathy,” Paladyn, Journal of Behavioral
Robotics, vol. 7, no. 1, 2016, doi: 10.1515/pjbr-2016-0005.
[44] C. Nass and Y. Moon, “Machines and mindlessness: Social responses to
computers,” Journal of Social Issues, vol. 56, no. 1, pp. 81–103, Jan. 2000,
doi: 10.1111/0022-4537.00153.
[45] H. Admoni and B. Scassellati, “Social eye gaze in human-robot
interaction,” Journal of Human-Robot Interaction, vol. 6, no. 1, pp. 25–63,
May 2017, doi: 10.5898/JHRI.6.1.Admoni.
[46] X. Zhao, C. Cusimano, and B. F. Malle, “Do people spontaneously take a
robot’s visual perspective?,” in Proceedings of the Eleventh ACM/IEEE
International Conference on Human Robot Interaction, HRI’16,
Piscataway, NJ: IEEE Press, 2016, pp. 335–342.
[47] C. Bartneck, T. Kanda, O. Mubin, and A. Al Mahmud, “Does the design of
a robot influence its animacy and perceived intelligence?,” International
Journal of Social Robotics, vol. 1, no. 2, pp. 195–204, Feb. 2009, doi:
10.1007/s12369-009-0013-7.
[48] E. Broadbent et al., “Robots with display screens: A robot with a more
humanlike face display is perceived to have more mind and a better
personality,” PLoS ONE, vol. 8, no. 8, p. e72589, Aug. 2013, doi:
10.1371/journal.pone.0072589.
[49] F. Eyssel, D. Kuchenbrandt, S. Bobinger, L. de Ruiter, and F. Hegel, “‘If
you sound like me, you must be more human’: On the interplay of robot and
user features on human-robot acceptance and anthropomorphism,” in
Proceedings of the 7th ACM/IEEE International Conference on Human-
Robot Interaction, HRI’12, New York, NY: Association for Computing
Machinery, 2012, pp. 125–126.
[50] B. F. Malle, “How many dimensions of mind perception really are there?,”
in Proceedings of the 41st Annual Meeting of the Cognitive Science Society,
E. K. Goel, C. M. Seifert, and C. Freksa, Eds. Montreal, Canada: Cognitive
Science Society, 2019, pp. 2268–2274.
[51] E. Phillips, D. Ullman, M. de Graaf, and B. F. Malle, “What does a robot
look like?: A multi-site examination of user expectations about robot
appearance,” in Proceedings of the Human Factors and Ergonomics
Society Annual Meeting, 2017.
[52] E. Phillips, X. Zhao, D. Ullman, and B. F. Malle, “What is human-like?
Decomposing robots’ human-like appearance using the Anthropomorphic
roBOT (ABOT) database,” in Proceedings of the 2018 ACM/IEEE
International Conference on Human-Robot Interaction, New York, NY,
USA: ACM, 2018, pp. 105–113.
[53] X. Zhao, E. Phillips, and B. F. Malle, “How people infer a humanlike mind
from a robot body,” PsyArXiv, preprint, Nov. 2019.
[54] K. Fischer, “Interpersonal variation in understanding robots as social
actors,” in Proceedings of HRI’11, March 6-9, 2011, Lausanne,
Switzerland, 2011, pp. 53–60.
[55] S. Payr, “Virtual butlers and real people: Styles and practices in long-term
use of a companion,” in Your virtual butler: The making-of, R. Trappl, Ed.
Berlin, Heidelberg: Springer, 2013, pp. 134–178.
[56] K. M. Lee, Y. Jung, J. Kim, and S. R. Kim, “Are physically embodied
social agents better than disembodied social agents?: The effects of physical
embodiment, tactile interaction, and people’s loneliness in human–robot
interaction,” International Journal of Human-Computer Studies, vol. 64, no.
10, pp. 962–973, Oct. 2006, doi: 10.1016/j.ijhcs.2006.05.002.
[57] K. S. Haring, K. Watanabe, M. Velonaki, C. C. Tossell, and V. Finomore,
“FFAB—The form function attribution bias in human–robot interaction,”
IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4,
pp. 843–851, Dec. 2018, doi: 10.1109/TCDS.2018.2851569.
[58] G. M. Lucas, J. Gratch, A. King, and L.-P. Morency, “It’s only a computer:
Virtual humans increase willingness to disclose,” Computers in Human
Behavior, vol. 37, pp. 94–100, Aug. 2014, doi: 10.1016/j.chb.2014.04.043.
[59] T. Uchida, H. Takahashi, M. Ban, J. Shimaya, Y. Yoshikawa, and H.
Ishiguro, “A robot counseling system — What kinds of topics do we prefer
to disclose to robots?,” in 2017 26th IEEE International Symposium on
Robot and Human Interactive Communication (RO-MAN), 2017, pp. 207–
212.
[60] R. Pak, N. Fink, M. Price, B. Bass, and L. Sturre, “Decision support aids
with anthropomorphic characteristics influence trust and performance in
younger and older adults,” Ergonomics, vol. 55, no. 9, pp. 1059–1072, Sep.
2012, doi: 10.1080/00140139.2012.691554.
[61] L. D. Riek, T. Rabinowitch, B. Chakrabarti, and P. Robinson, “Empathizing
with robots: Fellow feeling along the anthropomorphic spectrum,” in 2009
3rd International Conference on Affective Computing and Intelligent
Interaction and Workshops, 2009, pp. 1–6.
[62] S. H. Seo, D. Geiskkovitch, M. Nakane, C. King, and J. E. Young, “Poor
thing! Would you feel sorry for a simulated robot? A comparison of
empathy toward a physical and a simulated robot,” in Proceedings of the
Tenth Annual ACM/IEEE International Conference on Human-Robot
Interaction, New York, NY, USA: Association for Computing Machinery,
2015, pp. 125–132.
[63] D. Y. Geiskkovitch, D. Cormier, S. H. Seo, and J. E. Young, “Please
continue, we need more data: An exploration of obedience to robots,”
Journal of Human-Robot Interaction, vol. 5, no. 1, pp. 82–99, 2016, doi:
10.5898/JHRI.5.1.Geiskkovitch.
[64] M. Salem, G. Lakatos, F. Amirabdollahian, and K. Dautenhahn, “Would
you trust a (faulty) robot? Effects of error, task type and personality on
human-robot cooperation and trust,” in Proceedings of the Tenth Annual
ACM/IEEE International Conference on Human-Robot Interaction, HRI
’15, New York: ACM, 2015, pp. 141–148.
[65] A. Powers and S. Kiesler, “The advisor robot: Tracing people’s mental
model from a robot’s physical attributes,” in Proceedings of the 1st ACM
SIGCHI/SIGART Conference on Human-robot Interaction, New York, NY,
USA: ACM, 2006, pp. 218–225.
[66] V. Chidambaram, Y.-H. Chiang, and B. Mutlu, “Designing persuasive
robots: How robots might persuade people using vocal and nonverbal cues,”
in Proceedings of the Seventh Annual ACM/IEEE International Conference
on Human-Robot Interaction (HRI ’12), New York, NY, USA: Association
for Computing Machinery, 2012, pp. 293–300.
[67] S. Turkle, C. Breazeal, O. Dasté, and B. Scassellati, “First encounters with
Kismet and Cog: Children respond to relational artifacts,” in Digital media:
Transformations in human communication, P. Messaris and L. Humphreys,
Eds. New York, NY: Peter Lang, 2006, pp. 313–330.
[68] C. Lichtenthäler and A. Kirsch, Legibility of robot behavior: A literature
review. https://hal.archives-ouvertes.fr/hal-01306977, 2016.
[69] M. Kwon, S. H. Huang, and A. D. Dragan, “Expressing robot incapability,”
in Proceedings of the 2018 ACM/IEEE International Conference on
Human-Robot Interaction (HRI ’18), New York, NY, USA: Association for
Computing Machinery, 2018, pp. 87–95.
[70] G. Briggs and M. Scheutz, “‘Sorry, I can’t do that:’ Developing
mechanisms to appropriately reject directives in human-robot interactions,”
in Proceedings of the 2015 AAAI Fall Symposium on AI and HRI, 2015.
[71] M. de Graaf and B. F. Malle, “How people explain action (and autonomous
intelligent systems should too),” in 2017 AAAI Fall Symposium Series
Technical Reports, Palo Alto, CA: AAAI Press, 2017, pp. 19–26.
[72] H. H. Clark, “Communal lexicons,” in Context in Language Learning and
Language Understanding, v, 198 vols., K. Malmkjær and J. Williams, Eds.
Cambridge University Press, 1998, pp. 63–87.
[73] E. A. Schegloff, “Notes on a conversational practise: formulating place,” in
Studies in Social Interaction, D. Sudnow, Ed. New York: Free Press, 1972,
pp. 75–119.
[74] S. Paepcke and L. Takayama, “Judging a bot by its cover: An experiment on
expectation setting for personal robots,” in 2010 5th ACM/IEEE
International Conference on Human-Robot Interaction (HRI), New York,
NY: Association for Computing Machinery, 2010, pp. 45–52.
[75] E. R. Teoh, “What’s in a name? Drivers’ perceptions of the use of five SAE
Level 2 driving automation systems,” Journal of Safety Research, 2020.
[76] J. Bhuiyan, “A federal agency says an overreliance on Tesla’s Autopilot
contributed to a fatal crash,” Vox, 12-Sep-2017. [Online]. Available:
https://www.vox.com/2017/9/12/16294510/fatal-tesla-crash-self-driving-
elon-musk-autopilot. [Accessed: 05-Jan-2020].
[77] F. Lambert, “Tesla driver was eating and drinking during publicized
Autopilot crash, NTSB reports,” Electrek, 03-Sep-2019. [Online].
Available: https://electrek.co/2019/09/03/tesla-driver-autopilot-crash-eating
-ntsb-report/. [Accessed: 05-Jan-2020].
[78] M. A. Regan, C. Hallett, and C. P. Gordon, “Driver distraction and driver
inattention: Definition, relationship and taxonomy,” Accident Analysis
and Prevention, vol. 43, no. 5, pp. 1771–1781, Sep. 2011, doi:
10.1016/j.aap.2011.04.008.
[79] M. Rothbart and B. Park, “On the confirmability and disconfirmability of
trait concepts,” Journal of Personality and Social Psychology, vol. 50, pp.
131–142, 1986.
[80] C. V. Smedegaard, “Reframing the role of novelty within social HRI:
From noise to information,” in 2019 14th ACM/IEEE International
Conference on Human-Robot Interaction (HRI), 2019, pp. 411–420, doi:
10.1109/HRI.2019.8673219.
[81] W. J. McGuire, “Inducing resistance to persuasion: Some contemporary
approaches,” in Advances in Experimental Social Psychology, vol. 1, L.
Berkowitz, Ed. Academic Press, 1964, pp. 191–229.
[82] M. J. Ferguson, M. Kwon, T. Mann, and R. A. Knepper, “The formation
and updating of implicit impressions of robots.,” presented at the The
Annual Meeting of the Society for Experimental Social Psychology,
Toronto, Canada, 2019.
[83] M. Rothbart and M. Taylor, “Category labels and social reality: Do we view
social categories as natural kinds?,” in Language, interaction and social
cognition, Thousand Oaks, CA, US: Sage Publications, Inc, 1992, pp. 11–
36.
CHAPTER 2
TALKING TO ROBOTS AT DEPTH
ROBERT CODD-DOWNEY, ANDREW SPEERS,
MICHAEL JENKIN
Electrical Engineering and Computer Science
Lassonde School of Engineering
York University, Canada
Effective human-robot interaction can be complex at the best of times
and under the best of situations, but the problem becomes even more
complex underwater. Here both the robot and the human operator must be
shielded from the effects of water. Furthermore, the nature of water itself
complicates both the available technologies and the way in which they can
be used to support communication. Small-scale robots working in close
proximity to divers underwater are further constrained in their
communication choices by power, mass and safety concerns, yet it is in this
domain that effective human-robot interaction is perhaps most critical.
Failure in this scenario can result in vehicle loss as well as vehicle operation
that could pose a threat to local operators. Here we describe a range of
approaches that have been used successfully to provide this essential
communication. Tethered and tetherless approaches are reviewed along
with design considerations for human input and display/interaction devices
that can be controlled by divers operating at depth.
Introduction
Effective human-robot communication is essential everywhere, but
perhaps nowhere is that more the case than when a human is communicating
with a robot that is operating underwater. Consider the scenarios shown in
Figure 2.1. Here two different robots are shown operating in close proximity
to an underwater operator. Effective operation of the robot requires a mechanism
for the operator (a diver) to communicate instructions to the robot and to
have the robot communicate acknowledgment of those instructions and
provide other information to the diver. Failure in this communication can
lead to mission failure, injury to the diver and damage to, and even loss of,
the vehicle. The development of effective communication strategies for
Unmanned Underwater Vehicles (UUVs) is critical. Unfortunately, not only
does the underwater environment require effective communication between
a robot and its operator(s), it also places substantive constraints on the ways
in which this communication can take place. The water column restricts
many common terrestrial communication approaches and even systems that
might be appropriate for underwater use tend to offer only low data
bandwidth and require high power consumption. Communication underwater
is further complicated by the limitations that the underwater environment
places on the ways in which the human can utilize a given technology to
communicate with the robot and for the robot to communicate with the human
user. For example, recreational SCUBA equipment typically requires the
diver to hold a SCUBA regulator in their mouth, eliminating voice-based
command options. Normal touch input devices (e.g., keyboards, mice, etc.)
are difficult to make work underwater. Although such devices can be made
waterproof, the pressure of water at depth renders many touch-sensitive
devices ineffective, as the surrounding water pressure is mistaken for user
input. Finally, the refractive nature of transparent housings for display can
complicate the readability of displays designed for humans to view, whether
located on the robot itself or on some diver-carried display panel.

(a) AQUA [13] (b) Milton [11]

Figure 2.1: Divers operating in close proximity with robots underwater. Divers
require an effective means to communicate with a robot when operating at
depth. A range of potential solutions exist, but any such solution must take into
account the realities of the operating medium and the cognitive load placed on
the diver. (a) shows AQUA [13], a six-legged amphibious robot, being
operated in a pool. (b) shows Milton [11], a more traditional thruster-based
Unmanned Underwater Vehicle (UUV), being operated in the open ocean. In
both cases the robots are shown operating with divers in close proximity.
A further issue in robot-diver communication involves cognitive task
loading. Tasks that may appear simple at the surface can be difficult for a
diver to perform at depth. Cognitive loading as a consequence of diving is
well documented [30]. The effects on the cognitive abilities of divers
utilizing various gas mixtures, including oxygen-enriched air [2], have also
been documented. Task loading is a known risk factor in SCUBA diving
[36], and alerting divers to this risk is a component of recreational and
commercial dive training. Given these constraints, developing effective
interaction mechanisms for divers and robots operating at depth is a
complex and challenging task.
Some realities of working underwater
Communication between a human operator and a robot typically relies
on some medium such as a physical tether (e.g., wire), electromagnetic
waves (e.g., radio, light), or acoustic energy (e.g., sound) for communication.
The same is true underwater; however, the physical properties of the
communications medium (e.g., water versus air) place complex restrictions
on such options.
Water is denser than air. The density of water varies with temperature,
but for normal operational conditions a density of 1 g/cm³ is a reasonable
approximation. Air has a density of approximately 0.001225 g/cm³. This
difference will cause objects (such as communication cables) that would
normally fall to the floor in terrestrial operation to be buoyant or sink
depending on their density. Furthermore, the high density of water will cause
cables or tethers to introduce considerable drag on the vehicle even when
suspended in the water column. Buoyant cables will be subject to surface
wave and wind action while cables that are denser than water will encounter
the normal drag problems associated with terrestrial cables. Depending on
the location of the UUV and the operator a cable may be partially buoyant
and partially sunk.
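To make the buoyancy trade-off concrete, the net force per metre of
tether follows directly from the density difference between the cable and
the surrounding water. The short Python sketch below is a minimal
illustration of this calculation; the cable diameter and densities are
hypothetical example values, not figures for any tether described in this
chapter.

import math

def net_buoyant_force_per_meter(diameter_m, cable_density_kg_m3,
                                water_density_kg_m3=1000.0, g=9.81):
    # Net upward force (N) on one metre of cylindrical cable.
    # Positive: the cable floats; negative: it sinks.
    volume_per_meter = math.pi * (diameter_m / 2.0) ** 2 * 1.0
    return (water_density_kg_m3 - cable_density_kg_m3) * volume_per_meter * g

# A hypothetical 10 mm tether with a foamed jacket (900 kg/m^3) floats:
print(net_buoyant_force_per_meter(0.010, 900.0))   # ~ +0.08 N per metre
# The same diameter with a dense jacket (1400 kg/m^3) sinks:
print(net_buoyant_force_per_meter(0.010, 1400.0))  # ~ -0.31 N per metre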
Terrestrial wireless communication between an operator and a robot is
typically straightforward. Standard communication technologies based on
radio, including WiFi and Bluetooth, are pervasive, and
standard technologies exist to support the development of communication
protocols based on these and other infrastructures. Water, unfortunately, is
not an effective medium for radio wave-based communication. Radio waves
are attenuated by water, and by salt water in particular [25]. This typically
limits their use to very short distances [43]. Different radio frequencies are
attenuated differently by water. Through an appropriate choice of
frequencies it is possible to use radio to communicate over short distances
underwater (see [4]), but given the constraints with such technologies there
is considerable interest in the development of other underwater
communication technologies. See [21] for a review.
The poor transmission of electromagnetic energy through water is also
found with visible light. However, the effects are not as significant as
for the rest of the electromagnetic spectrum. The transmission
of light through water is impacted both by the water’s turbidity as well as
the nature of the light being transmitted. The absorption of light through one
meter of sea water can run from a low of around 2% for the blue-green portion
of the visible light spectrum in transparent ocean water to over 74%
for the red portion in coastal sea water [20]. For a given level of turbidity,
the red light band is absorbed much more quickly than the blue-green band.
Thus, under natural sunlight, objects at depth that naturally appear red end
up appearing more blue-green than they do at the surface. From a
communications point of view, such colour loss is often not critical as over
short distances (5m or less) the increased absorption of the red band over
the blue-green band will not have a significant impact on computer displays.
High levels of turbidity, on the other hand, can easily obscure displays
associated with the diver-operator and the robot itself.
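Treating these per-metre figures as a fixed fraction of light lost in each
metre of water, the surviving fraction over a path falls off geometrically,
which makes the contrast between the red and blue-green bands easy to
quantify. The following Python sketch uses the example figures quoted
above (2% and 74% loss per metre); actual attenuation varies with
wavelength and turbidity.

def transmitted_fraction(loss_per_meter, distance_m):
    # Fraction of light remaining after distance_m metres of water,
    # assuming a constant fractional loss per metre.
    return (1.0 - loss_per_meter) ** distance_m

# Blue-green light in clear ocean water (~2% lost per metre):
print(transmitted_fraction(0.02, 5))  # ~0.90: most of the light survives 5 m
# Red light in coastal water (~74% lost per metre):
print(transmitted_fraction(0.74, 5))  # ~0.001: red is effectively gone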
(a) Shore-based robot control (b) Tether management at the surface
Figure 2.2: Tethered communication. Here surface-based operators shown in
(a) communicate with a submerged device through the tether shown in (b). Note
that the operators do not have direct view of the device, nor do they have direct
view of any divers who might be accompanying the device at depth. For this
deployment also observe the large number of cable handlers required to service
the cable as it travels through the surf zone.
Chapter 2
28
One final complication with light underwater for human-robot
communication is the nature of light refraction. Displays mounted within
air-tight housings are typically viewed through flat clear ports. Light
travelling from some display through the air within the device, through the
port and into the water passes through three different materials, and the light
is refracted at each boundary according to Snell’s Law. This refraction will
introduce a distortion in displays, and for each boundary a critical viewing
angle exists beyond which the port will act as a reflector rather than allowing
a view of the display within. The net effect of this is to require displays to
be viewed straight on as much as possible.
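The critical angle at which a flat port stops transmitting and starts
reflecting follows directly from Snell’s Law at the boundary with the
lower refractive index. The sketch below computes both the refracted
angle and the critical angle for representative indices (acrylic ~1.49,
water ~1.33); these are typical textbook values, assumed here for
illustration rather than measured for any particular housing.

import math

def refracted_angle_deg(n1, n2, incident_deg):
    # Refracted angle (degrees), or None past the critical angle.
    s = n1 / n2 * math.sin(math.radians(incident_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection: the port acts as a mirror
    return math.degrees(math.asin(s))

def critical_angle_deg(n1, n2):
    # Critical angle (degrees) going from a denser medium n1 into n2 < n1.
    return math.degrees(math.asin(n2 / n1))

# Light leaving an acrylic port (n ~ 1.49) into water (n ~ 1.33):
print(critical_angle_deg(1.49, 1.33))         # ~63 degrees
print(refracted_angle_deg(1.49, 1.33, 40.0))  # transmitted: ~46 degrees
print(refracted_angle_deg(1.49, 1.33, 70.0))  # None: viewed too obliquely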
Finally, sound-based communications technology is common underwater
(see [29]). Such systems can communicate extremely long distances.
Unfortunately, such systems can be quite bulky and have considerable power
requirements, reducing their potential for applications involving small-scale
devices such as those shown in Figure 2.1.
(a) Android-based underwater tablet (b) PC-based underwater tablet [41]
Figure 2.3: Tethered communication underwater. Tether-based communication
can also be accomplished completely underwater. Here the operator utilizes
a properly protected interaction device tethered to the vehicle. (a) shows a
mobile operator following the robot with a small blue optical tether. (b)
shows a larger underwater display and interaction device being operated
by a diver. Such devices provide considerably less flexibility than the
computer monitor and keyboard input shown in Figure 2.2(b), but allow for
direct line-of-sight to the vehicle, aiding in the operator’s situational
awareness.
Using a physical tether
Given the complexities of the underwater environment perhaps the most
straightforward mechanism for structuring human-robot communication for
autonomous underwater vehicles is from a surface controller or an
underwater operator via a physical tether to the UUV (see [26, 22, 1, 37, 38]
for examples). While such an approach can provide for excellent
communication between the operator and the device as well as providing a
conduit for power and vehicle recovery if necessary, a tether, and in
particular a surface-based tether, also presents several problems. An above-
surface operator is typically located in some safe, dry location (as shown in
Figure 2.2(a)). Here the operator has no direct view of the autonomous
vehicle. Furthermore it is typically the case that the operator’s only “view”
of the operational environment is via sensors mounted on-board the
platform. As a consequence the operator tends to have very poor situational
awareness.
The tether being managed in Figure 2.2(b) provides both power and data
to the robotic sensor operating at the other end of the tether. This particular
tether is buoyant, which has implications for the control of the sensor
package as well as the nature of the drag on the vehicle which is impacted
by surface wave action. When using small robots, such as AQUA and
Milton, the tether can be fragile and require special care in handling. When
working in the field, the operator’s controlling computer, being in close
proximity to water, may be unintentionally exposed to environmental
contaminants such as water from ocean spray or rain. Reducing this risk
requires that the operator be placed at a safe distance from water and thus
from UUV operation. This implies longer cables between the robot and
semi-dry operator locations, increasing cable management issues and
handler communication concerns.
The actual UUV operator is, of course, not the only human involved in
controlling an underwater vehicle. Although a tether provides a number of
advantages, at the end of the day it is a tether that must be properly managed.
Different deployments necessitate different tether management strategies,
but personnel must be deployed in order to deal with problems that arise in
standard operation of the tethered vehicle. Figure 2.2 illustrates the
complexity of this problem for the shore deployment of an underwater
sensor package. A number of personnel are engaged in the task and the
ability of the various personnel to communicate among each other
effectively is key to successful UUV deployment. This problem becomes
even more acute underwater where personnel (divers) must be deployed to
manage the tether between the UUV and the operator. The problems with
tethered operations become even more severe at depth. Communication
with submerged divers can be problematic. Divers are limited in their ability
to assess the state of the robot, relying instead on confirmation from the
operator passed through surface-based cable wranglers.
An alternative to having the UUV teleoperated from the surface involves
placing the operator in close proximity to the UUV and then operating the
vehicle from underwater. This is shown in Figure 2.3. Teleoperation at
depth allows the operator to interact directly with the robot. This enables a
number of different operational modes not possible with a ship- or shore-
based operator. For example, a diver operating at a relatively safe depth (say
80’) can teleoperate a vehicle operating 30’-40’ deeper, without exposing
the diver to the increased dangers associated with the lower dive profile. An
underwater tether can also be used to enable a diver to remain outside of
potentially dangerous environments while the robot operates within them.
For example, a robot could be sent to investigate the inside of a wreck, while
allowing the diver to remain outside. Unfortunately, the nature of the
underwater environment limits the kinds of interaction that the operator can
engage in, and the design of the remote interaction device and the cognitive
load it imposes are also serious issues. The remote interaction device used by the diver-operator
needs to be as neutrally buoyant as possible so as to minimize the effect of
the device on the diver-operator’s ability to maneuver underwater.

(a) Li-Fi Modems (b) Li-Fi Modems underwater

Figure 2.4: Li-Fi modems for operation underwater. (a) shows the
modems in their housings with an array of LEDs and photodiodes for
light generation and capture. (b) shows the same modems deployed
underwater. As Manchester coding is used for encoding the message in
the light, the source light does not appear to flicker but rather appears
as a dim, constant light source.

The device shown in Figure 2.3(a), for example, has a very small form factor,
in part to reduce the effect of the buoyancy of the device on the diver. The
device shown in Figure 2.3(b) is negatively buoyant, which makes operating
the robot from the seabed more comfortable than operating the robot from
the middle of the water column.
Given the complexities of tether-based operation technologies, there is
a desire for communication approaches that can replace the physical tether
with some form of wireless technology that is suitable for underwater
operation. The requirement that a diver operates in close proximity to the
robot and the small form factor of the robot limit some of the potential
technologies that might be deployed. Sound-based technologies require a
power budget that is unlikely to be available on a device that could be
carried by the diver or a small form-factor vehicle. Sound itself might
pose health risks to the diver and to marine life at certain decibel levels.
RF-based technology requires considerable power to operate over the
distances that would be required. Given these constraints visible light-based
communication is an appropriate choice. Modern LED-based lighting
systems utilize very little power, and by limiting the power of any light
sources used we can ensure that the light is safe for any diver.
Encoded light-based communication
Given that light travels long distances underwater, at least outside of the
red portion of the visible light spectrum, visible light would seem to be an
appropriate medium for underwater communication. Underwater wireless
optical communication (UWOC) can be based either on LASER-based light
sources or on the use of regular light, typically generated through an array of
low-power LEDs or a single high-powered LED. See [28] for a recent review
of both approaches.

(a) Static marker (b) Dynamic marker

Figure 2.5: Visual fiducial markers. Both static (a) and dynamic (b)
fiducial markers can be used to communicate command information
through a visual channel.

Regardless of the technology used to generate the light,
the basic approach is to encode the message through modulating the light
source and then observing this modulation at the receiving end of the
communication. Given the frequency-dependency of light absorption in
water, typically light sources in the white or blue-green spectrum are used.
Light also works as a conduit upon which communication can be built
terrestrially. Light-Fidelity communication (Li-Fi) aims to use visible light
as the communication medium for digital communication. (See [17] for a
review of the technology.) Although still in its infancy, Li-Fi has shown
substantive promise. There have, however, been few large-field tests of the
technology. Beyond the terrestrial domain there have also been a number of
efforts to deploy Li-Fi technology underwater. For example, the
transmission properties of different light sources for Li-Fi have been studied
underwater, leading to the observation that LED-based communication has
advantages while underwater when line of sight cannot be guaranteed [24].
At a systems level, a long distance (100m) light-based communication
system has been demonstrated that utilizes optics to concentrate the emitter
and a single photon avalanche diode to enhance detection [42].
The IEEE 802.15.7 standard for visible light communication (VLC) [32]
utilizes on-off keying (OOK) to encode the format of the data stream from
the transmitter to the receiver. The basic idea here is that by turning a light
on and off at the transmitter using a message-driven encoding the receiver
can decode this sequence into the transmitted message. A popular approach
for this OOK process is Manchester encoding [40], which is a recommended
OOK approach in the IEEE VLC standard. Essentially this approach
modulates the data stream using a clock signal. One downside of this
mechanism is its relatively high overhead in terms of the communication
signal, consuming 100% more bandwidth than a raw encoding scheme.
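To make the scheme concrete, a Manchester encoder maps each data bit
to a pair of light states, which is exactly where the doubled bandwidth
comes from. The Python sketch below illustrates the coding idea only;
it is not the firmware of any modem described in this chapter, and the
bit-to-symbol convention used (0 as low-high, 1 as high-low) is one
common choice.

def manchester_encode(data: bytes) -> list:
    # Encode bytes as a list of on/off light states (1 = LED on).
    # Each data bit becomes two channel symbols: twice the raw bandwidth.
    symbols = []
    for byte in data:
        for i in range(7, -1, -1):  # most significant bit first
            bit = (byte >> i) & 1
            symbols += [1, 0] if bit else [0, 1]
    return symbols

def manchester_decode(symbols: list) -> bytes:
    # Invert the encoding: every symbol pair yields one data bit.
    bits = [1 if (symbols[i], symbols[i + 1]) == (1, 0) else 0
            for i in range(0, len(symbols), 2)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

assert manchester_decode(manchester_encode(b"dive")) == b"dive"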
Figure 2.4 shows the experimental Li-Fi underwater modem described
in [8, 9]. A key problem in the deployment of Li-Fi underwater is the
construction of an appropriate light emission/collection device that can
operate underwater and that is more or less agnostic to misalignment errors
between the emitter and receiver. The Light-Byte modems shown in Figure
2.4 utilize a ring of emitter/receivers that provide a 360° range of light
emission/detection in a reasonably wide vertical band. All processing of the
incoming and outgoing light signals is performed within the underwater
housings themselves, allowing the units to appear as USB modems to the
external computers or robots.
One problem with deploying Li-Fi-based systems outdoors is that the
technology must compete with other light sources present in the
environment. In essence, the receiver must be able to extract the encoded
light message from the ambient light. Utilizing a brighter emitter source can
help, but it is difficult to compete with the Sun, especially on cloudless
days. Underwater this means that the performance of Li-Fi modems is actually
worse near the surface and improves markedly with depth.
Short-pass light filters may be a reliable mechanism to overcome this
limitation.
Visual target-based communication
Fiducial markers are two-dimensional binary tags that convey
information to the observer. Technologies such as ARTags [15], April-Tags
[27] and Fourier Tags [34] can be used to determine the pose of an observer
with respect to the tag. Such tags can also be used to communicate messages
along with pose information. The amount of information can be very limited
or can run to a large number of bytes. For example, QR codes [12] can
encode a large amount of custom binary data. In this type of communication
a two-dimensional visual target is presented to the robot which captures the
image using an on-board camera and processes the image stream to obtain
the intended message. One benefit of target-based communication for
human-to-robot communication underwater is the small amount of
processing power required to localize and recognize the
target within an image. A collection of unique visual
targets allows for the development of a simple command language, where
each tag corresponds to a different command. Even a simple set of visual
command targets can provide effective vehicle control given a controlled
operational environment. Sequences of tags such as those found in
RoboChat [14] can be strung together to describe sophisticated tasks. Figure
2.5 illustrates this process in action for both static pre-printed targets as well
as dynamically generated targets on an underwater display.
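One way to see how a small tag vocabulary becomes a command
language is to treat each recognized tag ID as a token and fold a
sequence of tokens into a plan for the vehicle. The Python sketch below
is a hypothetical illustration of that idea; the tag IDs, command names
and terminator convention are invented for the example and do not
reproduce the RoboChat [14] grammar.

# Hypothetical mapping from fiducial tag IDs to command tokens.
TAG_COMMANDS = {0: "STOP", 1: "FORWARD", 2: "TURN_LEFT",
                3: "TURN_RIGHT", 4: "SURFACE", 5: "EXECUTE"}

def fold_tag_sequence(tag_ids):
    # Queue command tokens until the EXECUTE tag is seen, then hand
    # the completed plan to the vehicle. Unknown IDs are ignored,
    # which guards against spurious detections.
    queue = []
    for tag_id in tag_ids:
        token = TAG_COMMANDS.get(tag_id)
        if token is None:
            continue
        if token == "EXECUTE":
            plan, queue = queue, []
            yield plan
        else:
            queue.append(token)

# Tags shown to the camera: forward, turn left, forward, then execute.
for plan in fold_tag_sequence([1, 2, 1, 5]):
    print(plan)  # ['FORWARD', 'TURN_LEFT', 'FORWARD']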
The use of static fiducial target-based communication is effective but
cumbersome, as operators need to carry a library of tags. Finding the
specific tag needed for the next part of a command sequence is arduous and
it is possible to accidentally show the wrong card to the robot while
searching for the correct tag. The use of custom or dynamic tags has also
been explored [41]. This technique utilizes an underwater tablet-like device
that allows the user to select and present a series of control commands to
the robot (Figure 2.5(b)). The robot captures these images and, once they are
verified, a compact sequence of tags encoding the command sequence is generated
and can be shown to the robot. This approach can reduce the complexity of
carrying a large collection of tags but requires the development of a suitable
underwater display and interaction box.
Figure 2.6 illustrates the normal process of dynamic marker identification
along with some potential pitfalls associated with the approach. Figure
2.6(a) illustrates the desired outcome. This figure is from the output of the
target identification process. The view of the target and its housing has been
overlaid by a red rectangle, illustrating the localization of the target, and
the target identity is printed over the target itself. The process of actually
capturing this target can be frustrated in a number of ways. The first is due
to refraction effects (Figure 2.6(b)). Here the oblique viewing angle of the
target past the critical viewing angle causes the surface of the display to act
as a mirror and reflect light rather than pass light through the port. Figure
2.6(c) shows a further failure mode where the robot or some other participant
– here the robot operator – is reflected in the port. Notwithstanding these types
of errors, reasonably high recognition rates (57%) with no false positives at
30 frames per second are reported (resulting in more than 17 measurements
of the target being made each second) [39].

(a) Marker identification (b) Refraction error (c) Reflection error

Figure 2.6: Complexities of dynamic marker identification. Although the basic
task of marker identification and recognition can be straightforward
underwater, the nature of the protective housing introduces a range of
complexities. (b) shows refraction-based error where the incidence angle is
sufficiently large that the marker housing acts as a mirror, obscuring the view of
the target. (c) shows reflection in the port of the underwater housing. Even
though the internal target is visible, a clear view of the target is obscured by the
reflection of the camera (here held by a diver).
Interaction hardware and software
Many of the technologies used to communicate messages to the robot
from an underwater diver-operator require some mechanism for input and
display. Terrestrial robot control can exploit standard input devices
including keyboards, joysticks, and phone/tablet interfaces to communicate
with a robot. Underwater, the choices are more limited. A critical requirement
is that the input device be protected from water and pressure and be operable
by a diver working at depth. In general this means constructing some
waterproof enclosure that provides both access to any display and input
devices. It is also possible to use the entire waterproof enclosure as a
joystick by augmenting the device with an appropriate tilt sensor and to use
the entire device as a pointing device through the use of a compass or IMU.
Although the waterproof container can be machined to be relatively light
when empty, it is critical that this container be (approximately) neutrally
buoyant when deployed underwater. If it is not neutrally buoyant then the
diver-operator will have to compensate for this when operating the device,
which complicates the diver-operator’s task. In order for the entire tablet to
be neutrally buoyant, it must weigh the same as the water it
displaces. In essence this limits the usability of large volume underwater
housings. It is certainly possible to build such housings, but the large volume
of the housing will require that the housing be weighted through either the
inclusion of very heavy components or through the addition of external
mass. The resulting device may be “weightless” underwater, but will be
difficult to move when underwater and to deploy from the surface.
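The neutral-buoyancy constraint above reduces to a simple mass budget:
the finished housing, electronics and ballast together must mass the same
as the water the housing displaces. The sketch below works through that
budget; the housing volumes and dry mass are hypothetical example
values.

WATER_DENSITY = 1000.0  # kg/m^3 for fresh water; salt water is ~1025

def ballast_needed_kg(housing_volume_m3, dry_mass_kg,
                      water_density=WATER_DENSITY):
    # Mass to add (positive) or shed (negative) for neutral buoyancy.
    return housing_volume_m3 * water_density - dry_mass_kg

# A hypothetical 6-litre tablet housing massing 3.5 kg when packed:
print(ballast_needed_kg(0.006, 3.5))  # 2.5 kg of ballast must be added
# The same electronics in a 20-litre housing would need 16.5 kg of
# ballast: "weightless" in the water, but unwieldy to handle and deploy.
print(ballast_needed_kg(0.020, 3.5))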
Beyond mass and buoyancy there are a number of other issues in terms
of the design of any housing that must be taken into account. The surface
area of the device is a concern. The housing acts as a reaction surface
underwater, acts as a drag when the diver is swimming, and acts as a sail in
strong current or swells. Each penetration into the underwater housing is a
potential failure point, so although more inputs may be desirable, each one
increases the risk of flood-failure to the device. Deploying a robot and its
support team typically means entry through the surf or off of a boat and thus
switches must be positioned so as to minimize the potential for accidental
operation during deployment. Cognitive loading of the diver-operator is an
issue, and although it is possible to provide large numbers of controls, it can
be difficult for the operator to make effective use of them. Operators will
view displays through their dive goggles, the water column and a transparent
port on the interaction device. Detailed displays may be difficult to view,
especially in turbid water conditions.
Figure 2.7 shows a lightweight interaction device for tethered control of
an underwater vehicle [38]. The control device is housed in a custom-
fabricated housing, as shown in Figure 2.7(b). An Android Nexus 7 tablet
provides both a display and on-board computation to process switch and
other inputs and to condition the data for transmission to the robot.
The availability of small form factor displays and interaction devices
with general purpose computing, such as Android tablets, provides a
number of options for input and display. Sensors within the tablet itself,
including a compass and IMUs, can be exploited for robot control; such
devices have internal batteries for power, typically support WiFi, Bluetooth and
USB communication, and provide a rich software library for control. One
disadvantage of such devices is that they do not support standard robot
control software middlewares such as ROS [31]. Being able to have the
interaction device communicate via ROS considerably simplifies
communication between the interaction device and the robot itself. Even for
devices that are not physically wired to the robot, using ROS as a common
communication framework between the interaction device and the robot has
benefits. ROS provides an effective logging mechanism, and there exist
visualization tools that can be used post deployment to help in
understanding any issues that may have arisen during deployment.
(a) Interaction (b) Housing

Figure 2.7: Devices for underwater interaction with a robot must be
designed to be operated at depth. This requires displays and interaction
mechanisms that are appropriate for divers. (a) shows a diver
operating AQUA. The tablet acts as a joystick and provides simple
interaction mechanisms controlled by two three-state switches. (b)
shows the diver’s view of the sensor.
Within ROS, overall robot control is modeled as a collection of
asynchronous processes that communicate by message passing. Although a
very limited level of support does exist for ROS on commodity computer
tablets, environments such as Android or iOS are not fully supported ROS
environments. In order to avoid any potential inconsistencies between these
environments and supported ROS environments, one option is not to build
the software structures in ROS directly, but rather to exploit the RosBridge
mechanism. RosBridge provides a mechanism within which ROS
messages are exposed to an external agent and within which an external
agent can inject ROS messages into the ROS environment. This injection
process uses the standard WebSocket protocol. The process of developing
specific interaction devices for robot-operator control can be simplified by
automating much of the specialized software required to map interaction
devices to ROS commands for interaction. Software toolkits that can be
used to semi-automatically generate display and interaction tools using the
RosBridge communication structure have previously been developed for
Android [37] and iOS [7] mobile platforms.
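Because the RosBridge injection path is just JSON over a WebSocket,
even a device with no native ROS support can publish into the ROS
graph. The sketch below assumes a rosbridge server listening on its
default port (9090) and the third-party websocket-client Python package;
the host name, topic and message values are illustrative.

import json
from websocket import create_connection  # pip install websocket-client

ws = create_connection("ws://robot.local:9090")

# Tell rosbridge that Twist messages will be published on /cmd_vel.
ws.send(json.dumps({"op": "advertise", "topic": "/cmd_vel",
                    "type": "geometry_msgs/Twist"}))

# Inject one ROS message: swim forward slowly while yawing right.
ws.send(json.dumps({"op": "publish", "topic": "/cmd_vel",
                    "msg": {"linear": {"x": 0.2, "y": 0.0, "z": 0.0},
                            "angular": {"x": 0.0, "y": 0.0, "z": -0.1}}}))
ws.close()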
Gesture-based communication
Rather than augmenting the diver with some equipment that assists in
diver-robot communication, it is possible to deploy novel gesture-based
communication languages (e.g., [19, 18, 3, 5, 6]). Although such approaches
can be effective, they require divers to learn a novel language for human-
robot communication while retaining their existing gesture-based language
for diver-diver communication. In addition to the increased cognitive load
on divers, such an approach also has the potential for accidental
miscommunication among divers and between divers and robots, given the
common symbols used in the two gesture languages. Rather than developing
a novel gesture-based language for diver-robot communication, another
alternative is to leverage existing diver-diver gesture-based communication.
Divers have developed a set of effective strategies for implicit and explicit
communication with other divers. A standard set of hand gestures (signals)
has been developed, with special commands for specific tasks and
environments. National and international recreational diving organizations
such as PADI teach these languages and help to establish and maintain a
standard set of gesture symbols and grammar. These gestures – which
include actions such as pointing with the index finger and obvious motions
of the hand while it is held in some configuration – are strung together in a
simple language. For example, to indicate that there is something wrong
with one’s ears, one would indicate “unwell” by holding the hand flat and
rotating the wrist and then pointing at the affected ear. One observation
about these signals is that their semantics are dependent on their location
relative to the body. Thus a critical step in having a robot understand normal
diver gestures involves identifying the relative position of the hands to the
body and the configuration and motion of the hands. (See Figure 2.8.) Diver-
diver gesture-based communication relies on both an explicit and implicit
communication strategy. For example, given the lack of a straightforward
mechanism to name locations underwater, many commands conveying
coordinated motion are phrased as “you follow me” or “we swim together”
and rely on the implicit communication of aspects of the diver’s state.
Implicit understanding of a diver’s state requires some mechanism to
track and monitor the diver as they move. Autonomous following of an
underwater diver has been explored in the past. Perhaps the simplest diver-
following technique involves augmenting the diver in some manner to
simplify diver tracking. For example, [35] adopts the use of an atypically
coloured ball and simple computer vision techniques to track the diver. This
requires the diver to hold onto an inflated ball of some recognizable colour,
which will affect their buoyancy control negatively. Other methods (e.g.,
[33, 18]) track flipper oscillations of a certain colour in the frequency
domain to determine the location within the scene to track. Another
approach is to simply recognize and localize the diver as part of a more general
diver communication mechanism.

(a) Diver pointing at their ear (b) Tracked hand position

Figure 2.8: Divers utilize a standard set of gestures to communicate with other
divers underwater. (a) To indicate that there is an issue with their ear – for
example, some problem with pressure equalization – the diver would indicate
that something is wrong and then point at their ear. Understanding this
communication involves tracking the diver’s hand relative to their head. (b)
Results of this tracking plotted in head-centric coordinates.
A range of different technologies exist that could be used to identify and
track diver body parts in order to localize the diver and to understand the
actions being performed. Both traditional image processing techniques as
well as data-driven (e.g., neural network-based) approaches could be
utilized. Previous work in this area includes an approach based on transfer
learning using a pre-trained Convolutional Neural Network (CNN) to
identify parts of divers, which are then tracked through time [10]. Deploying
a CNN requires an appropriate training dataset and an appropriate CNN
that can be applied to the problem. This is addressed in part through the
SCUBANet dataset [10]. The SCUBANet dataset contains underwater
images of divers taken from both freshwater and saltwater environments.
The freshwater portion of the dataset was collected in Lake Seneca in King
City, Ontario, Canada. The saltwater portion of the dataset was collected
just off the west coast of Barbados. The SCUBANet dataset was collected
using the Milton robot [11] and consists of over 120,000 images of divers.
CNNs require that the dataset be properly labelled. SCUBANet’s dataset
was labelled using a crowd-sourcing tool, and as of 2019 over 3200 image
annotations had been performed.
Work performed in our lab utilizes a transfer-learning approach to
recognize diver parts. Transfer learning involves taking some pretrained
CNN from a related task, using a larger dataset for the initial training, then
training the final level (or levels) using a smaller task-specific dataset. For
this task, a CNN trained on the COCO dataset for object detection,
segmentation and captioning [23] was used, with the final level being
retrained on the SCUBANet dataset [10]. This process is typically much
faster than training a new CNN from scratch. Performance of the resulting
networks can be very good. For example, [10] reports that the retrained
Faster R-CNN Inception v2 architecture demonstrated an average recognition
rate for divers, heads and hands of 71.6% mean average precision at 0.5
intersection over union.
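A minimal sketch of this transfer-learning recipe, using torchvision’s
detection API: load a detector pretrained on COCO, freeze the backbone,
and replace the final box-prediction head so that only the new head is
trained on the smaller task-specific dataset. A Faster R-CNN with a
ResNet-50 backbone stands in for the Inception v2 variant used in [10],
and the three foreground classes mirror SCUBANet’s diver, head and
hand labels.

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on COCO [23].
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Freeze the pretrained backbone; only the new head will be trained.
for param in model.backbone.parameters():
    param.requires_grad = False

# Replace the box predictor: background plus diver, head and hand.
num_classes = 4
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Train only the parameters that still require gradients on the
# task-specific dataset (e.g., SCUBANet-style annotated frames).
trainable = [p for p in model.parameters() if p.requires_grad]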
Figure 2.8 shows the first steps in diver-robot communication using
diver gestures. Individual frames from a video of the diver are captured by
the robot and processed to identify and localize body parts important for
both implicit and explicit communication. Here the diver’s head and hand
positions are tracked while the diver points to their ear. When plotted in a
head-centric frame of reference the motion of the diver’s hand as it is raised
to and then lowered from the side of the diver’s head is clear (Figure 2.8(b)).
Ongoing work is investigating different techniques for labelling the specific
hand/finger gestures being used during these motions.
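Plotting the hand in a head-centric frame, as in Figure 2.8(b), amounts to
expressing each hand detection relative to the tracked head. The sketch
below illustrates one simple way to do this when both parts are detected
as bounding boxes in the same image; the box format, pixel coordinates
and scaling choice are invented for the example.

def box_center(box):
    # Center (x, y) of a box given as (x_min, y_min, x_max, y_max).
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def hand_in_head_frame(head_box, hand_box):
    # Hand position relative to the head, scaled by head width so the
    # measure is roughly invariant to the diver's distance from camera.
    hx, hy = box_center(head_box)
    px, py = box_center(hand_box)
    head_width = head_box[2] - head_box[0]
    return ((px - hx) / head_width, (py - hy) / head_width)

# Hypothetical per-frame detections as the diver raises a hand to the ear:
head = (300, 100, 360, 170)
hands = [(310, 400, 360, 450), (330, 250, 380, 300), (350, 120, 400, 170)]
track = [hand_in_head_frame(head, hand) for hand in hands]
print(track)  # the vertical offset shrinks as the hand nears the head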
Summary
Communication between humans and robots is transitioning from
keyboard inputs performed by machine specialists to interactions with the
general public and with specialists who have been trained for particular
tasks. Many of these tasks have developed multi-modal communication
structures that are designed to meet the specifics of the task at hand. As
robots move out of the lab and into application domains it is critical that the
communications strategies used are appropriate for the task at hand. This is
true for many terrestrial tasks but is especially true underwater. Here the
environment places constraints on the technology available for the human
to talk to the robot and for the robot to talk to the human. Although it is
certainly possible for the human to remain warm and dry above the surface
of the water and to communicate either wirelessly (e.g., through sound) or
through a physical tether, the lack of a direct view of the operation being
undertaken reduces considerably the situational awareness of the operator.
Placing the operator in the water with the robot creates its own set of
problems. Underwater tethers can certainly be used, but this then requires
careful tether management, because a tangled tether underwater is a threat
to both the operator and the robot. Wireless communication must be safe for
the diver and not place undue power requirements on the diver, thus limiting
the use of RF and sound-based technologies. Visible light-based technologies
would seem appropriate, although here again it is critical not to place undue
constraints on the diver-operator. Carried interaction devices must work
well at depth, neither upsetting the diver’s buoyancy nor placing undue load
on the diver’s cognitive abilities.
Perhaps the most desirable diver-to-robot communication strategy is to
exploit the normal diver-to-diver gesture-based communication conventions.
Recent work suggests that such an approach has promise and ongoing
research is exploring how best to understand complex statements and
commands based on gesture underwater. Table 2.1 provides a summary of
the various technologies presented in this chapter.
This chapter has concentrated on diver-to-robot communication.
Communication from the robot to the diver, especially when gesture-based
approaches that do not augment the diver are used, is also an
issue. Given that robots are often augmented with lights and displays,
these devices can clearly be leveraged for robot-to-diver communication.
But other options are possible. For example, it is possible for the robot to
choose specific motions to encode simple yes/no responses to queries and
even more complex motion sequences can be exploited to communicate
more sophisticated messages to the diver [16].
Physical tether
Advantages: High data bandwidth, reasonably low cost, bi-directional,
support for standard communication protocols.
Disadvantages: Tether management, tether drag.

Acoustic
Advantages: Good range, not impacted by turbidity.
Disadvantages: Power requirements, potential damage to divers and
marine life, low bandwidth.

Static visual target
Advantages: Inexpensive, easy to deploy.
Disadvantages: Restrictive communication set, potential for accidental
target viewing by the robot, low bandwidth.

Dynamic visual target
Advantages: Large symbol set, easy to structure the display so as to
avoid accidental target display to the robot.
Disadvantages: Requirement of a display device, complexity of viewing
the target through the display port (bandwidth can be improved by
ganging together multiple symbols).

UWOC
Advantages: Low power, relatively high bandwidth.
Disadvantages: Short range, impacted by turbidity.

Specialized gesture language
Advantages: Can be tuned to the task at hand, can be easily learned, can
be designed to be easily recognized by the robot.
Disadvantages: Increased cognitive loading on the diver, possibility of
confusion between the symbol set used in diver-to-diver communication
and diver-to-robot communication, low bandwidth.

Diver gesture language
Advantages: Well known by divers.
Disadvantages: Complex gestures to recognize, low bandwidth.

Table 2.1: The communication strategies described in this chapter
along with their advantages and disadvantages.
Bibliography
[1] T. Aoki, T. Maruashima, Y. Asao, T. Nakae, and M. Yamaguchi,
“Development of high-speed data transmission equipment for the full-depth
remotely operated vehicle – KAIKO,” in OCEANS, vol. 1, Halifax, Canada,
1997, pp. 87–92.
[2] A.-K. Brebeck, A. Deussen, H. Schmitz-Peiffer, U. Range, C. Balestra, and
S. C. J. D. Schipke, “Effects of oxygen-enriched air on cognitive performance
during scuba-diving – an open-water study,” Research in Sports Med., vol.
24, pp. 1–12, 2017.
[3] A. G. Chavez, C. A. Mueller, T. Doernbach, D. Chiarella, and A. Birk,
“Robust gesture-based communication for underwater human-robot
interaction in the context of search and rescue diver missions,” in IROS
Workshop on Human-Aiding Robotics, Madrid, Spain, 2018, held in
conjunction with IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS).
[4] X. Che, I. Wells, G. Dickers, P. Kear, and X. Gong, “Re-evaluation of RF
electromagnetic communication in underwater sensor networks,” IEEE
Communications Magazine, vol. 48, pp. 143–151, 2011.
[5] D. Chiarella, M. Bibuli, G. Bruzzone, M. Caccia, A. Ranieri, E. Zereik, L.
Marconi, and P. Cutugno, “Gesture-based language for diver-robot
underwater interaction,” in OCEANS, Genoa, Italy, 2015.
[6] D. Chiarella, “A novel gesture-based language for underwater human-robot
interaction,” J. of Marine Science and Engineering, vol. 6, 2018.
[7] R. Codd-Downey and M. Jenkin, “RCON: dynamic mobile interfaces for
command and control of ROS-enabled robots,” in International Conference
on Informatics in Control, Automation and Robotics (ICINCO), Colmar, France,
2015.
[8] R. Codd-Downey, “LightByte: Communicating wirelessly with an
underwater robot using light,” in International Conference on Informatics in
Control, Automation and Robotics (ICINCO), Porto, Portugal, 2018.
[9] R. Codd-Downey, “Wireless teleoperation of an underwater robot using li-
fi,” in Proc. IEEE International Conference on Information and Automation
(ICIA), Wuyishan, China, 2018.
[10] R. Codd-Downey, “Finding divers with SCUBANet,” in IEEE International
Conference on Robotics and Automation (ICRA), Montreal, Canada, 2019.
[11] R. Codd-Downey, M. Jenkin, and K. Allison, “Milton: An open hardware
underwater autonomous vehicle,” in Proc. IEEE International Conference on
Information and Automation (ICIA), Macau, China, 2017.
[12] CSA: Standards Council of Canada, “Information Technology - Automated
Identification and Capture Techniques - QR Code 2005 Bar Code Symbol
Specification,” 2016, Canadian Government Report.
[13] G. Dudek, P. Giguere, C. Prahacs, S. Saunderson, J. Sattar, L.-A. Torres-
Mendez, M. Jenkin, A. German, A. Hogue, A. Ripsman, J. Zacher, E. Milios,
H. Liu, and P. Zhang, “AQUA: An amphibious autonomous robot,” IEEE
Computer, vol. Jan., pp. 46–53, 2007.
[14] G. Dudek, J. Sattar, and A. Xu, “A visual language for robot control and
programming: A human-interface study,” in IEEE International Conference
on Robotics and Automation (ICRA), Rome, Italy, 2007, pp. 2507–2513.
[15] M. Fiala, “Artag, a fiducial marker system using digital techniques,” in IEEE
Computer Society Conference on Computer Vision and Pattern Recognition
(CVPR), vol. 2, San Diego, CA, 2005, pp. 590–596.
[16] M. Fulton, C. Edge, and J. Sattar, “Robot communication via motion: Closing
the underwater human-robot interaction loop,” in International Conference
on Robotics and Automation (ICRA), Montreal, Canada, 2019, pp. 4660–
4666.
[17] H. Haas, L. Yin, Y. Wang, and C. Chen, “What is LIFI?” J. of Lightwave
Technology, vol. 34, pp. 1533–1544, 2016.
[18] M. J. Islam, “Understanding human motion and gestures for underwater
human-robot collaboration,” J. of Field Robotics, vol. 36, 2018.
[19] M. J. Islam, M. Ho, and J. Sattar, “Dynamic reconfiguration of mission
parameters in underwater human-robot collaboration,” in IEEE International
Conference on Robotics and Automation (ICRA), Brisbane, Australia, 2018,
pp. 1–8.
[20] N. G. Jerlov, Optical Oceanography. Amsterdam, The Netherlands: Elsevier,
1968.
[21] H. Kaushal and G. Kaddoum, “Underwater optical wireless communication,”
IEEE Access, vol. 4, pp. 1518–1547, 2016.
[22] P. Lee, B. Jeon, S. Hong, Y. Lim, C. Lee, J. Park, and C. Lee, “System design
of an ROV with manipulators and adaptive control of it,” in 2000 International
Symposium on Underwater Technology, Tokyo, Japan, 2000, pp. 431–436.
[23] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P.
Perona, D. Ramanan, P. Dollar, and C. L. Zitnick, “Microsoft COCO:
common objects in context,” CoRR, vol. abs/1405.0312, 2014.
[24] P. Medhekar, S. Mungekar, V. Marathe, and V. Meharwade, “Visible light
underwater communication using different light sources,” International
Journal of Modern Trends in Engineering and Research, vol. 3, pp. 635–638,
2016.
[25] R. K. Moore, “Radio communication in the sea,” Spectrum, vol. 4, pp. 42–
51, 1967.
[26] M. Nokin, “ROV 6000 – objectives and description,” in OCEANS, vol. 2,
Brest, France, 1994, pp. 505–509.
[27] E. Olson, “AprilTag: a robust and flexible visual fiducial system,” in IEEE
International Conference on Robotics and Automation (ICRA), Shanghai,
China, 2011.
[28] H. Oubei, C. Shen, A. Kammoun, E. Zedini, K. Park, X. Sun, G. Liu, C. H.
Kang, T. K. Ng, M. S. Alouini, and B. S. Ooi, “Light based underwater
wireless communications,” Japanese J. of Applied Physics, vol. 57, 2018.
[29] D. Pompili and I. F. Akyildiz, “Overview of networking protocols for
underwater wireless communications,” IEEE Commun. Mag., vol. Jan., pp.
97–102, 2009.
[30] S. F. Pourhashemi, H. Sahraei, G. H. Meftahi, B. Hatef, and B. Gholipour,
“The effect of 20 minutes SCUBA diving on cognitive function of
professional SCUBA divers,” Asian J. of Sports Med., vol. 7, 2016.
[31] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R.
Wheeler, and A. Y. Ng, “ROS: an open-source robot operating system,” in
Open-Source Software workshop at the International Conference on Robotics
and Automation (ICRA), Kobe, Japan, 2009.
[32] S. Rajagopal, R. D. Roberts, and S. K. Lim, “IEEE 802.15.7 visible light
communication: modulation schemes and dimming support,” IEEE
Communications Magazine, vol. 50, pp. 72–82, 2012.
[33] J. Sattar and G. Dudek, “Where is your dive buddy: tracking humans
underwater using spatio-temporal features,” in IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS), San Diego, CA,
2007, pp. 3654–3659.
[34] J. Sattar, E. E. Bourque, P. Giguere, and G. Dudek, “Fourier tags: Smoothly
degradable fiducial markers for use in human-robot interaction,” in Canadian
Conference on Computer and Robot Vision (CRV), Montreal, Canada, 2007,
pp. 165–174.
[35] J. Sattar, P. Giguere, G. Dudek, and C. Prahacs, “A visual servoing system
for an aquatic swimming robot,” in IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), Edmonton, Canada, 2005, pp. 1483–
1488.
[36] R. W. Smith, “Application of a medical model to psychopathology in diving,”
in 6th International Conference on Underwater Education, San Diego, CA,
1975, pp. 377–385.
[37] A. Speers, P. Forooshani, M. Dicke, and M. Jenkin, “Lightweight tablet
devices for command and control of ROS-enabled robots,” in International
Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, 2013.
[38] A. Speers and M. Jenkin, “Diver-based control of a tethered unmanned
underwater vehicle,” in International Conference on Informatics in Control,
Automation and Robotics (ICINCO), Reykjavik, Iceland, 2013.
[39] A. Speers, A. Topol, J. Zacher, R. Codd-Downey, B. Verzijlenberg, and M.
Jenkin, “Monitoring underwater sensors with an amphibious robot,” in
Canadian Conference on Computer and Robot Vision (CRV), St. John’s,
Canada, 2011.
[40] A. S. Tanenbaum and D. J. Wetherall, Computer Networks. Pearson, 2011.
[41] B. Verzijlenberg and M. Jenkin, “Swimming with robots: Human robot
communication at depth,” in IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS), Taipei, Taiwan, 2010, pp. 4023–
4028.
[42] C. Wang, H.-Y. Yu, and H.-J. Zhu, “A long distance underwater visible light
communication system with single photon avalanche diode,” IEEE Photonics
J., vol. 8, pp. 1–11, 2016.
[43] A. Zoksimovski, C. M. Rappaport, D. Sexton, and M. Stojanovic,
“Underwater electromagnetic communications using conduction – channel
characterization,” in Proc. of the 7th ACM International Conference on
Underwater Networks and Systems, Los Angeles, CA, 2012.
  • 1. Humanrobot Interaction Control Analysis And Design Dan Zhang download https://ebookbell.com/product/humanrobot-interaction-control- analysis-and-design-dan-zhang-47277782 Explore and download more ebooks at ebookbell.com
  • 2. Here are some recommended products that we believe you will be interested in. You can click the link to download. Humanrobot Interaction Control Using Reinforcement Learning 1st Edition Yu https://ebookbell.com/product/humanrobot-interaction-control-using- reinforcement-learning-1st-edition-yu-34951208 Humanrobot Interaction In Social Robotics Takayuki Kanda Hiroshi Ishiguro https://ebookbell.com/product/humanrobot-interaction-in-social- robotics-takayuki-kanda-hiroshi-ishiguro-4421776 Humanrobot Interaction Strategies For Walkerassisted Locomotion 1st Edition Carlos A Cifuentes https://ebookbell.com/product/humanrobot-interaction-strategies-for- walkerassisted-locomotion-1st-edition-carlos-a-cifuentes-5484898 Humanrobot Interaction Safety Standardization And Benchmarking Paolo Barattini https://ebookbell.com/product/humanrobot-interaction-safety- standardization-and-benchmarking-paolo-barattini-10429176
  • 3. Humanrobot Interaction An Introduction Christoph Bartneck Tony Belpaeme https://ebookbell.com/product/humanrobot-interaction-an-introduction- christoph-bartneck-tony-belpaeme-11043886 Humanrobot Interaction Evaluation Methods And Their Standardization 1st Ed Cline Jost https://ebookbell.com/product/humanrobot-interaction-evaluation- methods-and-their-standardization-1st-ed-cline-jost-11857944 Humanrobot Interaction A Special Double Issue Of Humancomputer Interaction 1st Edition Sara Kiesler Editor Pamela Hinds Editor https://ebookbell.com/product/humanrobot-interaction-a-special-double- issue-of-humancomputer-interaction-1st-edition-sara-kiesler-editor- pamela-hinds-editor-12194388 Basic Humanrobot Interaction David O Johnson https://ebookbell.com/product/basic-humanrobot-interaction-david-o- johnson-56802974 Emotional Design In Humanrobot Interaction Theory Methods And Applications Hande Ayanolu Emlia Duarte https://ebookbell.com/product/emotional-design-in-humanrobot- interaction-theory-methods-and-applications-hande-ayanolu-emlia- duarte-56931176
  • 7. Human–Robot Interaction: Control, Analysis, and Design Edited by Dan Zhang and Bin Wei
  • 8. Human–Robot Interaction: Control, Analysis, and Design Edited by Dan Zhang and Bin Wei This book first published 2020 Cambridge Scholars Publishing Lady Stephenson Library, Newcastle upon Tyne, NE6 2PA, UK British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library Copyright © 2020 by Dan Zhang, Bin Wei and contributors All rights for this book reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the copyright owner. ISBN (10): 1-5275-5740-5 ISBN (13): 978-1-5275-5740-6
  • 9. TABLE OF CONTENTS Preface....................................................................................................... vi Chapter 1 .................................................................................................... 1 Trust and the Discrepancy between Expectations and Actual Capabilities of Social Robots Bertram Malle, Kerstin Fischer, James Young, AJung Moon, Emily Collins Chapter 2 .................................................................................................. 24 Talking to Robots at Depth Robert Codd-Downey, Andrew Speers, Michael Jenkin Chapter 3 .................................................................................................. 45 Towards the Ideal Haptic Device: Review of Actuation Techniques for Human-Machine Interfaces Maciej Lacki, Carlos Rossa Chapter 4 .................................................................................................. 75 New Research Avenues in Human-Robot Interaction Frauke Zeller Chapter 5 .................................................................................................. 93 Interpreting Bioelectrical Signals for Control of Wearable Mechatronic Devices Tyler Desplenter, Jacob Tryon, Emma Farago, Taylor Stanbury, Ana Luisa Trejos Chapter 6 ................................................................................................ 147 Human-Robot Interaction Strategy in Robotic-assisted Balance Rehabilitation Training Jiancheng Ji, Shuai Guo, Jeff Xi, Jin Liu Chapter 7 ................................................................................................ 171 Development of a Wearable Exoskeleton Suit for Paraplegic Parents Bing Chen, Bin Zi, Ling Qin, Wei-Hsin Liao
  • 10. PREFACE Robotics have been used in industry and other fields for the past decade, however human-robot interaction is at its early stage. This book, Human – Robot Interaction: Control, Analysis, and Design, will focus on the topics of human-robot interaction, its applications and current challenges. We would like to thank all the authors for their contributions to the book. We are also grateful to the publisher for supporting this project. We hope the readers find this book informative and useful. This book consists of 7 chapters. Chapter 1 takes trust to be a set of expectations about the robot’s capabilities and explores the risks of discrepancies between a person’s expectations and the robot’s actual capabilities. The major sources of these discrepancies and ways to mitigate their detrimental effects are examined. Chapter 2 has concentrated primarily on diver to robot communication. Communication from the robot to the diver, especially when approaches such as gesture-based are used, is also an issue. Chapter 3 reviews recent advancements in the field of passive and hybrid haptic actuation. The authors highlight the design considerations and trade-offs associated with these actuation methods and provide guidelines on how their use can help with development of the ultimate haptic device. Chapter 4 introduces an extended HRI research model, which is adapted from communication and mass communication studies, and focuses on the social dimension of social robots. Chapter 5 highlights some existing methods for interpreting EEG and EMG signals that are useful for the control of wearable mechatronic devices. These methods are focused on modelling motion for the purpose of controlling wearable mechatronic devices that target musculoskeletal rehabilitation of the upper limb. Chapter 6 discusses a training method for patient balance rehabilitation based on human-robot interaction. Chapter 7 develops a wearable exoskeleton suit that involves human-robot interaction to help the individuals with mobility disorders caused by a stroke, spinal cord injury or other related diseases. Finally, the editors would like to acknowledge all the friends and colleagues who have contributed to this book. Dan Zhang, Toronto, Ontario, Canada Bin Wei, Sault Ste Marie, Ontario, Canada February 25, 2020
  • 11. CHAPTER 1 TRUST AND THE DISCREPANCY BETWEEN EXPECTATIONS AND ACTUAL CAPABILITIES OF SOCIAL ROBOTS BERTRAM F. MALLE, KERSTIN FISCHER, JAMES E. YOUNG, AJUNG MOON, EMILY COLLINS Corresponding author: Bertram F. Malle, Professor Department of Cognitive, Linguistic, and Psychological Sciences Brown University 190 Thayer St. Providence, RI 02912, USA bfmalle@brown.edu +1 (401) 863-6820 Kerstin Fischer, Professor (WSR) Department of Design and Communication University of Southern Denmark Alsion 2 DK-6400 Sonderborg, Denmark kerstin@sdu.dk Phone: +45-6550-1220 James E. Young, Associate Professor Department of Computer Science University of Manitoba Winnipeg, Manitoba R3T 2N2, Canada Email: young@cs.umanitoba.ca Phone: (lab) +1-204-474-6791 AJung Moon, Assistant Professor Department of Electrical and Computer Engineering McGill University 3480 University Street Montreal, Quebec H3A 0E9, Canada ajung.moon@mcgill.ca Phone: +1-514-398-1694 Emily C. Collins, Research Associate Department of Computer Science University of Liverpool Ashton Street Liverpool L69 3BX, UK E.C.Collins@liverpool.ac.uk Phone: +44 (0)151 795 4271
  • 12. Chapter 1 2 Abstract From collaborators in factories to companions in homes, social robots hold the promise to intuitively and efficiently assist and work alongside people. However, human trust in robotic systems is crucial if these robots are to be adopted and used in home and work. In this chapter we take trust to be a set of expectations about the robot’s capabilities and explore the risks of discrepancies between a person’s expectations and the robot’s actual capabilities. We examine major sources of these discrepancies and ways to mitigate their detrimental effects. No simple recipe exists to help build justified trust in human-robot interaction. Rather, we must try to understand humans’ expectations and harmonize them with robot design over time. Introduction As robots continue to be developed for a range of contexts where they work with people, including factories, museums, airports, hospitals, and homes, the field of Human-Robot Interaction explores how well people will work with these machines, and what kinds of challenges will arise in their interaction patterns. Social robotics focuses on the social and relational aspects of Human-Robot Interaction, investigating how people respond to robots cognitively and emotionally, how they use their basic interpersonal skills when interacting with robots, and how robots themselves can be designed to facilitate successful human-machine interactions. Trust is a topic that currently receives much attention in human-robot interaction research. If people do not trust robots, they will not collaborate with them or accept their advice, let alone purchase them and delegate to them the important tasks they have been designed for. Building trust is therefore highly desirable from the perspective of robot developers. A closer look at trust in human-robot interaction, however, reveals that the concept of trust itself is multidimensional. For instance, one could trust another human (or perhaps robot) that they will carry out a particular task reliably and without errors, and that they are competent to carry out the task. But in some contexts, people trust another agent to be honest in their communication, sincere in their promises, and to value another person’s, or the larger community’s interests. In short, people may trust agents based on evidence of reliability, competence, sincerity, or ethical integrity
  • 13. Trust and Discrepancy 3 [1], [2]1 . What unites trust along all these dimensions is that it is an expectation—expecting that the other is reliable, competent, sincere, or ethical. Expectations, of course, can be disappointed. When the other was not as reliable, capable, or sincere as one thought, one’s trust was misplaced. Our goal in this chapter is to explore some of the ways in which people’s expectations of robots may be raised too high and therefore be vulnerable to disappointment. To avert disappointed expectations, at least two paths of action are available. One is to rapidly expand robots’ capacities, which is what most designers and engineers strive for. But progress has been slow [3], and the social and communicative skills of artificial agents are still far from what seems desirable [4], [5]. Another path is to ensure that people trust a robot to be just as reliable, capable, and ethical as it really is able to; that is, to ensure that people understand the robot’s actual abilities and limitations. This path focuses on one aspect of transparency: providing human users with information about the capabilities of a system. Such transparency, we argue, is a precondition for justified trust in any autonomous machine, and social robots in particular [6], [7]. In this chapter, we describe some of the sources of discrepancies between people’s expectations and robots’ real capabilities. We argue the discrepancies are often caused by superficial properties of robots that elicit feelings of trust in humans without validly indicating the underlying property the person trusts in. We therefore need to understand the complex human responses triggered by the morphology and behaviour of autonomous machines, and we need to build a systematic understanding of the effects that specific design choices have on people’s cognitive, emotional, and relational reactions to robots. In the second part of the chapter we lay out a number of ways to combat these discrepancies. Discrepancies Between Human Expectations and Actual Robot Capabilities In robot design and human-robot interaction research, the tendency to build ever more social cues into robots (from facial expressions to emotional tone of voice) is undeniable. Intuitively, this makes sense since robots that exhibit social cues are assumed to facilitate social interaction by leveraging people’s existing social skill sets and experience, and they 1 The authors have provided a measure of these multiple dimensions of trust and invite readers to use that measure for their human-robot interaction studies: http://bit.ly/MDMT_Scale
  • 14. Chapter 1 4 would fit seamlessly into social spaces without constantly being in the way [8]. However, in humans, the display of social cues is indicative of certain underlying mental properties, such as thoughts, emotions, intentions, or abilities. The problem is that robots can exhibit these same cues, through careful design or specific technologies, even though they do not have the same, or even similar, underlying properties. For example, in human interaction, following another person’s gaze is an invitation to joint attention [9]; and in communication, joint attention signals the listener’s understanding of the speaker’s communicative intention. Robots using such gaze cues [10] are similarly interpreted as indicating joint attention and of understanding a speaker’s instructions [11], [12]. However, robots can produce these behaviors naïvely using simple algorithms, without having any concept of joint attention or any actual understanding of the speaker’s communication. Thus, when a robot displays these social cues, they are not symptoms of the expected underlying processes, and a person observing this robot may erroneously attribute a range of (often human-like) properties to the robot [13]. Erroneous assumptions about other people are not always harmful. Higher expectations than initially warranted can aid human development (when caregivers “scaffold” the infant’s budding abilities; [14], can generate learning success [15], and can foster prosocial behaviors [16]. But such processes are, at least currently, wholly absent with robots. Overestimating a robot’s capacities poses manifest risks to users, developers, and the public at large. When users entrust a robot with tasks that the robot ends up not being equipped to do, people may be disappointed and frustrated when they discover the robot’s limited actual capabilities [17]; and there may be distress or harm if they discover these limitations too late. Likewise, developers who consistently oversell their products will be faced with increasing numbers of disappointed, frustrated, or distressed users who no longer use the product, write terrible public reviews (quite a significant impact factor for consumer technology), or even sue the manufacturer. Finally, the public at large could be deprived of genuine benefits if a few oversold robotic products cause serious harm, destroy consumer trust, and lead to stifling regulation. Broadly speaking, discrepancies between expectations and reality have been well documented and explored under the umbrella of “expectancy violation,” from the domains of perception [18] to human interaction [19]. In human-robot interaction research, such violations have been studied, for example, by comparing expectations from media to interactions with a real robot [20] or by quantifying updated capability estimates after interacting with a robot [21]. Our discussion builds on this line of inquiry, but we do
  • 15. Trust and Discrepancy 5 not focus on cases when an expectancy violation has occurred, which assumes that the person has become aware of the discrepancy (and is likely to lose trust in the robot). Instead, we focus on sources of such discrepancies and avenues for making a person aware of the robot’s limitations before they encounter a violation (and thus before a loss of trust). Sources of Discrepancies There are multiple sources of discrepancies between the perceived and actual capacities of a robot. Obvious sources are the entertainment industry and public media, which frequently exaggerate technical realities of robotic systems. We discuss here more psychological processes, from misleading and deceptive design and presentation to automatic inferences from a robot’s superficial behavior to deep underlying capabilities. Misleading design Equipping a robot with outward social cues that have no corresponding abilities is, at best, misleading. Such a strategy violates German designer Dieter Rams’ concept of honest design, which is the commitment to design that “does not make a product more innovative, powerful or valuable than it really is” [22]; see also [23], [24]. Honest design is a commitment to transparency—enabling the user to “see through” the outward appearance and to accurately infer the robot’s capacities. In the HRI laboratory, researchers often violate this commitment to transparency when they use Wizard-of-Oz (WoZ) methods to make participants believe that they are interacting with an autonomous, capable robot. Though such misperceptions are rarely harmful, they do contribute to false beliefs and overly high expectations about robots outside the laboratory. Moreover, thorough debriefing at the end of such experiments is not always provided [25], which would reset people’s generalizations about technical realities. Deception When a mismatch between apparent and real capacities is specifically intended—for example, to sell the robot or impress the media—it arguably turns into deception and even exploitation [26]. And people are undoubtedly vulnerable to such exploitation. A recent study suggested that people were willing to unlock the door to a university dormitory building for a verbally communicating robot that had the seeming authority of a
  • 16. Chapter 1 6 food delivery agent. Deception is not always objectionable; in some instances it is used for the benefit of the end user [27], [28], such as in calming individuals with dementia [29] or encouraging children on the autism spectrum to form social bonds [30]. However, these instances must involve careful management of the risks involved in the deception—risks for the individual user, the surrounding social community, and the precedent it sets for other, perhaps less justified cases of deception. Impact of norms At times, people are well aware that they are interacting with a machine in human-like ways because they are engaging with the robot in a joint pretense [31] or because it is the normatively correct way to behave. For example, if a robot greets a person, the appropriate response is to reciprocate the greeting; if the speaker asks a question, the appropriate response is to answer the question. Robots may not recognize the underlying social norm and they may not be insulted if the user violates the norm, but the user, and the surrounding community (e.g., children who are learning these norms), benefit from the fact that both parties uphold relevant social practices and thus a cooperative, respectful social order [32]. The more specific the roles that robots are assigned (e.g., nurse assistant, parking lot attendant), the more these norms and practices will influence people’s behavior toward the robot [33]. If robots are equipped with the norms that apply to their roles (which is a significant challenge; [34], this may improve interaction quality and user satisfaction. Further, robots can actively leverage norms to shape how people interact with it, but perhaps even in manipulative fashion [35]. Norm-appropriate behavior is also inherently trust-building, because norms are commitments to act, and expectations that others will act, in ways that benefit the other (thus invoking the dimension of ethical trust; [36], norm violations become all the more powerful in threatening trust. Expanded inferences Whereas attributions of norm competence to a robot are well grounded in the robot’s actual behavior, a robot that displays seemingly natural communicative skills can compel people to infer (and genuinely assume to be present) many other abilities that the robot probably is unlikely to have [37]. In particular, seeing that a robot has some higher-level abilities, people are likely to assume that it will also possess more basic abilities that in humans would be a prerequisite for the higher-level ability. For
instance, a robot may greet someone with "Hi, how are you?" but be unable itself to answer the same question when the greeting is reciprocated, and it may not even have any speech understanding capabilities at all. Furthermore, a robot's syntactically correct sentences do not mean it has a full-blown semantics or grasps anything about conversational dynamics [38]. Likewise, seeing that a robot has one skill, we must expect people to assume that it also has other skills that in humans are highly correlated with the first. For example, a robot may be able to entertain or even tutor a child but be unable to recognize when the child is choking on a toy. People find it hard to imagine that a being can have selected, isolated abilities that do not build upon each other [39]. Though it is desirable that, say, a manufacturer provides explicit and understandable documentation of a system's safety and performance parameters [40], [41], making explicit what a robot can and cannot do will often fail. That is because some displayed behaviors set off a cascade of inferences that people have evolved and practiced countless times with human beings [32]. As a result, people's spontaneous reactions to robots in social contexts and their explicit beliefs about what mental capacities robots possess can come apart [42], [43].

Automatic inferences

Some inferences or emotional responses are automatic, at least upon initial encounters with artificial agents. Previous research has shown that people treat computers and related technology (including robots) in some ways just like human beings (e.g., applying politeness and reciprocity), and often do so mindlessly [44]. The field of human-robot interaction has since identified numerous instances in which people show basic social-cognitive responses when responding to humanlike robots—for example, by following the "gaze" of a robot [45] or by taking its visual perspective [46]. Beyond such largely automatic reactions, a robot's humanlike appearance seems to invite a wide array of inferences about the robot's intelligence, autonomy, or mental capacities more generally [47]–[49]. But even if these appearance-to-mind inferences are automatic, they are not simplistic; they do not merely translate some degree of humanlikeness into a proportional degree of "having a mind." People represent both humanlike appearance and mental capacities along multiple dimensions [50]–[52], and specific dimensions of humanlike appearance trigger people's inferences for specific dimensions of mind. For example, features of the Body Manipulator dimension (e.g., torso, arms, fingers) elicit inferences about capacities of reality interaction, which include perception,
learning, acting, and communicating. By contrast, facial and surface features (e.g., eyelashes, skin, apparel) elicit inferences about affective capacities, including feelings and basic emotions, as well as moral capacities, including telling right from wrong and upholding moral values [53].

Variations

We should note, however, that people's responses to robots are neither constant nor universal. They vary within a person: they manifest sometimes as cognitive, emotional, or social-relational reactions; they can be in the foreground or background at different moments in time; and they change with extended interactions with the robot [8], [32]. They also show substantial interpersonal variation, as a function of levels of expertise [54], personal style [55], and psychosocial predispositions such as loneliness [56].

Status quo

The fact remains, however, that people are vulnerable to the impact of a robot's behavior and appearance [57]. We must expect that, in real life as in the laboratory, people will be willing to disclose negative personal information to humanoid agents [58], [59], trust and rely on them [60], empathize with them [61], [62], and give in to a robot's obedience-like pressure to continue tedious work [63] or perform erroneous tasks [64]. Further, in comparison to a mechanical robot, people are more prone to take advice from a humanoid robot [65], trust and rely on it more [60], and comply with its requests [66]. None of these behaviors are inherently faulty, but currently they are unjustified, because they are generated by superficial cues rather than by an underlying reality [57]. At present, robots, whether mechanical or humanoid, have no more knowledge to share than Wikipedia, are no more trustworthy to keep secrets than one's iPhone, and have no more needs or suffering than a cartoon character. They may in the future, but until that future, we have to ask how we can prevent people from having unrealistic expectations of robots, especially humanlike ones.

How to Combat Discrepancies

We have seen that discrepancies between perceived and actual capacities exist at multiple levels and are fed from numerous sources. How can people recover from these mismatches or avoid them in the first place? In this section, we provide potential paths for both short- and long-term
solutions to the problem of expectation discrepancy when dealing with social robots.

Waiting for the future

An easy solution may be to simply wait for the robots of the future to make true the promises of the present. However, that would mean an extended time of misperceived reality, and numerous opportunities for misplaced trust, disappointment, and non-use. It is unclear whether recovery from such prolonged negative experiences is possible. Another strategy to overcome said discrepancies may be to encourage users to acquire the minimally necessary technical knowledge to better evaluate artificial agents, perhaps encouraging children to program machines and thus see their mechanical and electronic insides. However, given the widespread disparities in access to quality education in most of the world's countries, the technical-knowledge path would leave poorer people misled, deceived, and more exploitable than ever before. Moreover, whereas the knowledge strategy would combat some of the sources we discussed (e.g., deception, expanded inferences), it would leave automatic inferences intact, as they are likely grounded in biologically or culturally evolved response patterns.

Experiencing the cold truth

Another strategy might be to practically force people to experience the mechanical and lifeless nature of machines—such as by asking people to inspect the skinless plastic insides of an animal robot like Paro or by unscrewing a robot's head and handing it to the person. It is, however, not clear that this will provide more clarity for human-robot interactions. A study of the effects of demonstrating the mechanistic nature of robots to children in fact showed that the children still interacted with the robot in the same social ways as children to whom the robotic side of robots had not been pointed out [67]. Furthermore, if people have already formed emotional attachments, such acts will be seen as cruel and distasteful, rather than having any corrective effect on discrepant perceptions.

Revealing real capacities

Perhaps most obvious would be truth in advertising. Robot designers and manufacturers, organizations and companies that deploy robots in hotel lobbies, hospitals, or school yards would signal to users what the
robot can and cannot do. But there are numerous obstacles to designers and manufacturers offering responsible and modest explanations of the machine's real capacities. They are under pressure to produce within the constraints of their contracts; they are beholden to funders; they need to satisfy the curiosity of journalists and policy makers, who are also keen to present positive images of developing technologies. Further, even if designers or manufacturers adequately reveal the machine's limited capabilities, human users may resist such information. If the information is in a manual, people won't read it. If it is offered during purchase, training, or first encounters, it may still be ineffective. That is because the abovementioned human tendency to perceive agency and mind in machines that have the tell-tale signs of self-propelled motion, eyes, and verbal communication is difficult to overcome. Given the eliciting power of these cues, it is questionable (though empirically testable) whether explicit information can ever counteract a user's inappropriate mental model of the machine.

Legibility and explainability

An alternative approach is to make the robot itself "legible"—something that a growing group of scholars is concerned with [68]. But whereas a robot's intentions and goals can be made legible—e.g., in a projection of the robot's intended motion path or in the motion itself—capabilities and other dispositions are not easily expressed in this way. At the same time, the robot can correct unrealistic expectations by indicating some of its limits of capability in failed actions [69] or, even more informative, in explicit statements that it is unable or forbidden to act a certain way [70]. A step further would be to design the robot in such a way that it can explicate its own actions, reasoning, and capabilities. But whereas giving users access to the robot's ongoing decision making and perhaps offering insightful and human-tailored explanations of its performed actions may be desirable [71], "explaining" one's capacities is highly unusual. Most of this kind of communication among humans is done indirectly, by providing information about, say, one's occupation [72] or acquaintance with a place [73]. Understanding such indirect speech requires access to shared perceptions, background knowledge, and acquired common ground that humans typically do not have with robots. Moreover, a robot's attempts to communicate its knowledge, skills, and limitations can also disrupt an ongoing activity or even backfire if talk about capabilities makes users suspect that there is a problem with the interaction [32]. There
is, however, a context in which talk about capabilities is natural—educational settings. Here, one agent learns new knowledge, skills, and abilities, often from another agent, and both might comment freely on the learner's capabilities already in place, others still developing, and yet others clearly absent. If we consider a robot an ever-learning agent, then perhaps talk about capabilities and limitations can be rather natural. One potential drawback of robots that explain themselves must be mentioned. Such robots would appear extremely sophisticated, and one might then worry about which other capacities people will infer from this explanatory prowess. Detailed insights into reasoning may invite inferences of deeper self-awareness, even wisdom, and user-tailored explanations may invite inferences of caring and understanding of the user's needs. But perhaps by the time full-blown explainability can really be implemented, some of these other capacities will be realized too; then the discrepancies would all lift at once.

Managing expectations

But until that time, we are better off with a strategy of managing expectations and ensuring performance that matches these expectations and lets trust build upon solid evidence. Managing expectations will rely on some of the legibility and explainability strategies just mentioned, along with attempts to explicitly set expectations low, which may then be easily exceeded to positive effect [74]. However, such explicit strategies would be unlikely to keep automatic inferences in check. For example, in one study, Zhao et al. (submitted) showed that people take a highly humanlike robot's visual perspective even when they are told it is a wax figure. The power of the mere humanlike appearance was enough to trigger the basic social-cognitive act of perspective taking. Thus, we also need something we might call restrained design—attempts to avoid overpromising signals in behavior, communication, and appearance, as well as limiting the robot's roles so that people form limited, role- and context-adequate expectations. As a special case of such an approach we describe here the possible benefit of an incremental robot design strategy—the commitment to advance robot capacities in small steps, each of which is well grounded in user studies and reliability testing.

Incremental Design

Why would designing and implementing small changes in a robot prevent discrepancies between a person's understanding of the robot's
capacities and its actual capacities? Well-designed small changes may be barely noticeable and, unless in a known, significant dimension (e.g., having eyes after never having had eyes), will limit the number of new inferences that they elicit. Further, even when a change is noticed, the user may be able to more easily adapt to it and integrate it into their existing knowledge and understanding of the robot, without having to alter their entire mental model of the robot. Consider the iRobot Roomba robotic vacuum cleaner. The Roomba has a well-defined, functional role in households as a cleaning appliance. From its first iteration, any discrepancy between people's perceptions of the robot's capacities and its actual capacities was likely related to the robot's cleaning abilities and could be quickly resolved by using the robot in practice. As new models hit the market, Roomba's functional capacities improved only incrementally—for example, beep-sequence error codes were replaced by pre-recorded verbal announcements, and random-walk cleaning modes were replaced by rudimentary mapping technology. In these cases, the human users have to accommodate only minor novel elements in their mental models, each changing only a few parameters. Consider, by contrast, SoftBank's Pepper robot. From the original version, Pepper was equipped with a humanoid form including arms and hands that appeared to gesture, and a head with eyes and an actuated neck, such that it appeared to look at and follow people. Further, marketing material emphasized the robot's emotional capacities, using such terms as "perception modules" and an "emotional engine." We can expect that these features encourage people to infer complex capacities in this robot, even beyond perception and emotion. Observing the robot seemingly gaze at us and follow our movements suggests attention and interest; the promise of emotional capacities suggests sympathy and understanding. However, beyond pre-coded sentences intended to be cute or funny, the robot currently has no internal programmed emotional model at all. As a result, we expect there to be large discrepancies between a person's elicited expectations and the robot's actual abilities. Assumptions of deep understanding in conversation and willingness toward risky personal disclosure may then be followed by frustration or disappointment. The discrepancy in Pepper's case stems in part from the jump in expectations that the designers invite the human to make, set against the actual reality of Pepper's abilities. Compared with other technologies people may be familiar with, a highly humanoid appearance, human-like social signaling behaviors, and purported emotional abilities trigger a leap in the inference people make from "robots can't do much" to "they can do a lot." But that leap is not matched by Pepper's actual capabilities. As a result,
encountering Pepper creates a large discrepancy that will be quite difficult to overcome. A more incremental approach would curtail the humanoid form and focus on the robot's gaze-following abilities, without claims of emotional processing. If the gaze-following behavior actually supports successful person recognition and communicative turn-taking, then a more humanoid form may be warranted. And only if actual emotion recognition and the functional equivalent of emotional states in the robot are achieved would Pepper's "emotion engine" be promoted. Incremental approaches have been implemented in other technological fields. For example, commercial cars have in recent years increasingly included small technical changes that point toward eventual autonomous driving abilities, such as cruise control, active automatic braking systems, lane violation detection and correction, and the like. More advanced cars, such as Tesla's Model S, have an "auto-pilot" mode that takes a further step toward autonomous driving in currently highly constrained circumstances. The system still frequently reminds the user to keep their hands on the steering wheel and to take over when those constrained circumstances no longer hold (e.g., no painted lane information). However, the success of this shared autonomy situation depends on how a product is marketed. Other recent cars may include a great deal of autonomy in their onboard computing systems but are not marketed as autonomous or self-driving; instead, their systems carry names such as "Traffic Jam Assist" or "Super Cruise." Such labeling decisions limit what human users expect of the car and therefore what they entrust it to do. A recent study confirms that labeling matters: people overestimate Tesla cars' capacities more than those of other comparable brands [75]. And perhaps unsurprisingly, the few highly publicized accidents with Teslas are typically the result of vast overestimation of what the car can do [76], [77]. Within self-driving vehicle research and development, a category system is in place to express the gradually increasing levels of autonomy of the system in question. In this space, however, the incremental approach may still take steps that are too big. In the case of vehicle control, people's adjustment to continuously increasing autonomy is not itself continuous but takes a qualitative leap. People either drive themselves, assisted up to a point, or they let someone else (or something else) drive; they become passengers. In regular cars, actual passengers give up control, take naps, read books, chat on the phone, and would not be ready to instantly take the wheel when the main driver requests it. Once people take on the unengaged passenger role with increasingly (but not yet fully) autonomous vehicles, the situation will result in over-trust (the human will take naps, read books, etc.). And if there remains a small chance that the car needs
the driver's attention but the driver has slipped into the passenger role, the situation could prove catastrophic. The human would not be able to take the wheel quickly enough when the car requests it, because it takes time for a human to shift attention, observe their surroundings, develop situational awareness, make a plan, and act [78]. Thus, even an incremental approach would not be able to avert the human's jump to believing the car can handle virtually all situations, when in fact the car cannot. Aside from incremental strategies, the more general restrained design approach must ultimately be evidence-based design. Decisions about form and function must be informed by evidence about which of the robot's signals elicit which expectations in the human. Such insights are still rather sparse and often highly specific to certain robots. It therefore takes a serious research agenda to address this challenge, with a full arsenal of scientific approaches: carefully controlled experiments to establish causal relations between robot characteristics and a person's expectations; examination of the stability of these response patterns by comparing young children and adults as well as people from different cultures; and longitudinal studies to establish how those responses will change or stabilize in the wake of interacting with robots over time. We close our analysis by discussing the strengths and challenges that come with longitudinal studies.

Longitudinal Research

Longitudinal studies would be the ideal data source to elucidate the source of and remedy for discrepancies between perceived and actual robot capacities. That is because, first, they can distinguish between initial reactions to robots and more enduring response patterns. We have learned from human-human social perception research that initial responses, even if they change over time, can strongly influence the range of possible long-term responses; in particular, initial negative responses tend to improve more slowly than positive initial reactions deteriorate [79]. In human-robot encounters, some responses may be automatic and have a lasting impact, whereas others may initially be automatic but could be changeable over time. Furthermore, some responses may reflect an initial lack of understanding of the encountered novel agent, and with time a search for meaning may improve this understanding [80]. Longitudinal studies can also track how expectations clash with new observations and how trust fluctuates as a result. High-quality longitudinal research is undoubtedly difficult to conduct because of cost, time and management commitments, participant attrition,
ethical concerns of privacy and unforeseen impacts on daily living, and the high rate of mechanical robot failures. A somewhat more modest goal might be to study short-term temporal dynamics that will advance knowledge but also provide a launching pad for genuine longitudinal research. For the question of recovery from expectation-reality discrepancies, we can focus on a few feasible but informative paradigms. A first paradigm is to measure people's responses to a robot with or without information about the true capacities of the robot. In comparison to spontaneous inferences about the robot's capacities, would people adjust their inferences when given credible information? One could compare the differential effectiveness of (a) inoculation (providing the ground-truth information before the encounter with the robot) and (b) correction (providing it after the encounter). In human persuasion research, inoculation is successful when the persuasive attempt operates at an explicit, rational level [81]. By analogy, the comparison of inoculation and post-hoc correction in the human-robot perception case may help clarify which human responses to robots lie at the more explicit and which at the more implicit level. A second paradigm is to present the robot twice during a single experimental session, separated by some time delay or unrelated activities. What happens to people's representations formed in the first encounter that are either confirmed or disconfirmed in the second encounter? If the initial reactions are mere novelty effects, they would subside independent of the new information; if they are deeply entrenched, they would remain even after disconfirmation; and if they are systematically responsive to evidence, they would stay the same under confirmation and change under disconfirmation [82]. In addition, different response dimensions may behave differently. Beliefs about the robot's reliability and competence may change more rapidly, whereas beliefs about its benevolence may be more stable. In a third paradigm, repeated-encounter but short-term experiments could bring participants back to the laboratory more than once. Such studies could distinguish people's adjustments to specific robots (if they encounter the same robot again) from adjustments of their general beliefs about robots (if they encounter a different, but comparable, robot again). From stereotype research, we have learned that people often maintain general beliefs about a social category even when acquiring stereotype-disconfirming information about specific individuals [83]. Likewise, people may update their beliefs about a specific robot they encounter repeatedly without changing their beliefs about robots in general [82].
Conclusion

Trust is one agent's expectation about the other's actions. Trust is broken when the other does not act as one expected—is not as reliable or competent as one expected, or is dishonest or unethical. In all these cases, a discrepancy emerges between what one agent expected and what the other agent delivered. Human-robot interactions, we suggest, often exemplify such cases: people expect more of their robots than the robots can deliver. Such discrepancies have many sources, from misleading and deceptive information to the seemingly innocuous but powerful presence of deep-seated social signals. This range of sources demands a range of remedies, and we explored several of them, from patience to legibility, from incremental design to longitudinal research. Because of people's complex responses to artificial agents, there is no optimal recipe for minimizing discrepancies and maximizing trust. We can only advance our understanding of those complex human responses to robots, use this understanding to guide robot design, and monitor how improved design and human adaptation, over time, foster more calibrated and trust-building human-robot interactions.

References

[1] D. Ullman and B. F. Malle, "What does it mean to trust a robot? Steps toward a multidimensional measure of trust," in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: ACM, 2018, pp. 263–264.
[2] D. Ullman and B. F. Malle, "Measuring gains and losses in human-robot trust: Evidence for differentiable components of trust," in Companion to the 2019 ACM/IEEE International Conference on Human-Robot Interaction, HRI '19, New York, NY: ACM, 2019, pp. 618–619.
[3] L. Lewis and S. Shrikanth, "Japan lays bare the limitations of robots in unpredictable work," Financial Times, 25-Apr-2019. [Online]. Available: https://www.ft.com/content/beece6b8-4b1a-11e9-bde6-79eaea5acb64. [Accessed: 04-Jan-2020].
[4] R. K. Moore, "Spoken language processing: Where do we go from here?," in Your Virtual Butler: The Making-of, R. Trappl, Ed. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013, pp. 119–133.
[5] M.-A. Williams, "Robot social intelligence," in Social Robotics, S. S. Ge, O. Khatib, J.-J. Cabibihan, R. Simmons, and M.-A. Williams, Eds. Springer Berlin Heidelberg, 2012, pp. 45–55.
[6] K. Fischer, H. M. Weigelin, and L. Bodenhagen, "Increasing trust in human–robot medical interactions: effects of transparency and
adaptability," Paladyn, Journal of Behavioral Robotics, vol. 9, no. 1, pp. 95–109, 2018, doi: 10.1515/pjbr-2018-0007.
[7] T. L. Sanders, T. Wixon, K. E. Schafer, J. Y. Chen, and P. Hancock, "The influence of modality and transparency on trust in human-robot interaction," presented at the Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014 IEEE International Inter-Disciplinary Conference on, 2014, pp. 156–159.
[8] K. Fischer, "Tracking anthropomorphizing behavior in human-robot interaction," Manuscript submitted for publication, 2020.
[9] N. Eilan, C. Hoerl, T. McCormack, and J. Roessler, Eds., Joint attention: Communication and other minds. New York, NY: Oxford University Press, 2005.
[10] B. Mutlu, T. Kanda, J. Forlizzi, J. Hodgins, and H. Ishiguro, "Conversational gaze mechanisms for humanlike robots," ACM Trans. Interact. Intell. Syst., vol. 1, no. 2, pp. 12:1–12:33, Jan. 2012, doi: 10.1145/2070719.2070725.
[11] K. Fischer, K. Lohan, J. Saunders, C. Nehaniv, B. Wrede, and K. Rohlfing, "The impact of the contingency of robot feedback on HRI," in 2013 International Conference on Collaboration Technologies and Systems (CTS), IEEE, 2013, pp. 210–217.
[12] K. Fischer, K. Foth, K. Rohlfing, and B. Wrede, "Mindful tutors – linguistic choice and action demonstration in speech to infants and to a simulated robot," Interaction Studies, vol. 12, no. 1, pp. 134–161, 2011.
[13] M. Kwon, M. F. Jung, and R. A. Knepper, "Human expectations of social robots," in Proceedings of the Eleventh ACM/IEEE International Conference on Human Robot Interaction, HRI'16, Piscataway, NJ, 2016, pp. 463–464.
[14] R. Mermelshtine, "Parent–child learning interactions: A review of the literature on scaffolding," British Journal of Educational Psychology, vol. 87, no. 2, pp. 241–254, Jun. 2017, doi: 10.1111/bjep.12147.
[15] L. Jussim, S. L. Robustelli, and T. R. Cain, "Teacher expectations and self-fulfilling prophecies," in Handbook of motivation at school, K. R. Wenzel and A. Wigfield, Eds. New York, NY: Routledge/Taylor & Francis Group, 2009, pp. 349–380.
[16] R. E. Kraut, "Effects of social labeling on giving to charity," Journal of Experimental Social Psychology, vol. 9, no. 6, pp. 551–562, Nov. 1973, doi: 10.1016/0022-1031(73)90037-1.
[17] M. de Graaf, S. B. Allouch, and J. van Dijk, "Why do they refuse to use my robot?: Reasons for non-use derived from a long-term home study," in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 2017, pp. 224–233.
[18] M. A. Bobes, M. Valdés-Sosa, and E. Olivares, "An ERP study of expectancy violation in face perception," Brain and Cognition, vol. 26, no. 1, pp. 1–22, Sep. 1994, doi: 10.1006/brcg.1994.1039.
[19] J. K. Burgoon, D. A. Newton, J. B. Walther, and E. J. Baesler, "Non-verbal expectancy violations," Journal of Nonverbal Behavior, vol. 55, no. 1, pp. 58–79, 1989.
[20] U. Bruckenberger, A. Weiss, N. Mirnig, E. Strasser, S. Stadler, and M. Tscheligi, "The good, the bad, the weird: Audience evaluation of a 'real' robot in relation to science fiction and mass media," ICSR 2013, vol. 8239 LNAI, pp. 301–310, 2013, doi: 10.1007/978-3-319-02675-6_30.
[21] T. Komatsu, R. Kurosawa, and S. Yamada, "How does the difference between users' expectations and perceptions about a robotic agent affect their behavior?," Int J of Soc Robotics, vol. 4, no. 2, pp. 109–116, Apr. 2012, doi: 10.1007/s12369-011-0122-y.
[22] Vitsoe, "The power of good design," 2018. [Online]. Available: https://www.vitsoe.com/us/about/good-design. [Accessed: 22-Oct-2018].
[23] G. Donelli, "Good design is honest," 13-Mar-2015. [Online]. Available: https://blog.astropad.com/good-design-is-honest/. [Accessed: 22-Oct-2018].
[24] C. de Jong, Ed., Ten principles for good design: Dieter Rams. New York, NY: Prestel Publishing, 2017.
[25] D. J. Rea, D. Geiskkovitch, and J. E. Young, "Wizard of awwws: Exploring psychological impact on the researchers in social HRI experiments," in Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: Association for Computing Machinery, 2017, pp. 21–29.
[26] W. C. Redding, "Ethics and the study of organizational communication: When will we wake up?," in Responsible Communication: Ethical Issues in Business, Industry, and the Professions, J. A. Jaksa and M. S. Pritchard, Eds. Cresskill, NJ: Hampton Press, 1996, pp. 17–40.
[27] E. C. Collins, "Vulnerable users: deceptive robotics," Connection Science, vol. 29, no. 3, pp. 223–229, Jul. 2017, doi: 10.1080/09540091.2016.1274959.
[28] A. Matthias, "Robot lies in health care: When is deception morally permissible?," Kennedy Inst Ethics J, vol. 25, no. 2, pp. 169–192, Jun. 2015, doi: 10.1353/ken.2015.0007.
[29] K. Wada, T. Shibata, T. Saito, and K. Tanie, "Effects of robot-assisted activity for elderly people and nurses at a day service center," Proceedings of the IEEE, vol. 92, no. 11, pp. 1780–1788, Nov. 2004, doi: 10.1109/JPROC.2004.835378.
[30] E. Karakosta, K. Dautenhahn, D. S. Syrdal, L. J. Wood, and B. Robins, "Using the humanoid robot Kaspar in a Greek school environment to support children with Autism Spectrum Condition," Paladyn, Journal of Behavioral Robotics, vol. 10, no. 1, pp. 298–317, Jan. 2019, doi: 10.1515/pjbr-2019-0021.
[31] H. H. Clark, "How do real people communicate with virtual partners?," presented at the AAAI-99 Fall Symposium on Psychological Models of Communication in Collaborative Systems, North Falmouth, MA, November 5–7, 1999.
[32] K. Fischer, Designing speech for a recipient: Partner modeling, alignment and feedback in so-called "simplified registers." Amsterdam: John Benjamins, 2016.
[33] J. Goetz, S. Kiesler, and A. Powers, "Matching robot appearance and behavior to tasks to improve human-robot cooperation," in The 12th IEEE International Workshop on Robot and Human Interactive Communication, vol. 19, New York, NY: Association for Computing Machinery, 2003, pp. 55–60.
[34] B. F. Malle, P. Bello, and M. Scheutz, "Requirements for an artificial agent with norm competence," in Proceedings of 2nd ACM conference on AI and Ethics (AIES'19), New York, NY: ACM, 2019.
[35] E. Sanoubari, S. H. Seo, D. Garcha, J. E. Young, and V. Loureiro-Rodríguez, "Good robot design or Machiavellian? An in-the-wild robot leveraging minimal knowledge of passersby's culture," in Proceedings of the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), New York, NY, USA: Association for Computing Machinery, 2019, pp. 382–391.
[36] B. F. Malle and D. Ullman, "A multi-dimensional conception and measure of human-robot trust," in Trust in human-robot interaction: Research and applications, C. S. Nam and J. B. Lyons, Eds. Elsevier, 2020.
[37] K. Fischer and R. Moratz, "From communicative strategies to cognitive modelling," presented at the First International Workshop on 'Epigenetic Robotics', September 17-18, 2001, Lund, Sweden, 2001.
[38] S. Payr, "Towards human-robot interaction ethics," in A Construction Manual for Robots' Ethical Systems: Requirements, Methods, Implementations, R. Trappl, Ed. Cham, Switzerland: Springer International, 2015, pp. 31–62.
[39] K. Fischer, What computer talk is and isn't: Human-computer conversation as intercultural communication. Saarbrücken: AQ-Verlag, 2006.
[40] K. R. Fleischmann and W. A. Wallace, "A covenant with transparency: Opening the black box of models," Communications of the ACM - Adaptive complex enterprises, vol. 48, no. 5, pp. 93–97, 2005, doi: 10.1145/1060710.1060715.
[41] M. Hind et al., "Increasing trust in AI services through supplier's declarations of conformity," ArXiv e-prints, Aug. 2018.
[42] S. R. Fussell, S. Kiesler, L. D. Setlock, and V. Yew, "How people anthropomorphize robots," in Proceedings of the 3rd ACM/IEEE International Conference on Human Robot Interaction, HRI '08, New York, NY, USA: Association for Computing Machinery, 2008, pp. 145–152.
[43] J. Złotowski, H. Sumioka, S. Nishio, D. F. Glas, C. Bartneck, and H. Ishiguro, "Appearance of a robot affects the impact of its behaviour on perceived trustworthiness and empathy," Paladyn, Journal of Behavioral Robotics, vol. 7, no. 1, 2016, doi: 10.1515/pjbr-2016-0005.
[44] C. Nass and Y. Moon, "Machines and mindlessness: Social responses to computers," Journal of Social Issues, vol. 56, no. 1, pp. 81–103, Jan. 2000, doi: 10.1111/0022-4537.00153.
[45] H. Admoni and B. Scassellati, "Social eye gaze in human-robot interaction," Journal of Human-Robot Interaction, vol. 6, no. 1, pp. 25–63, May 2017, doi: 10.5898/JHRI.6.1.Admoni.
[46] X. Zhao, C. Cusimano, and B. F. Malle, "Do people spontaneously take a robot's visual perspective?," in Proceedings of the Eleventh ACM/IEEE International Conference on Human Robot Interaction, HRI'16, Piscataway, NJ: IEEE Press, 2016, pp. 335–342.
[47] C. Bartneck, T. Kanda, O. Mubin, and A. Al Mahmud, "Does the design of a robot influence its animacy and perceived intelligence?," International Journal of Social Robotics, vol. 1, no. 2, pp. 195–204, Feb. 2009, doi: 10.1007/s12369-009-0013-7.
[48] E. Broadbent et al., "Robots with display screens: A robot with a more humanlike face display is perceived to have more mind and a better personality," PLoS ONE, vol. 8, no. 8, p. e72589, Aug. 2013, doi: 10.1371/journal.pone.0072589.
[49] F. Eyssel, D. Kuchenbrandt, S. Bobinger, L. de Ruiter, and F. Hegel, "'If you sound like me, you must be more human': On the interplay of robot and user features on human-robot acceptance and anthropomorphism," in Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction, HRI'12, New York, NY: Association for Computing Machinery, 2012, pp. 125–126.
[50] B. F. Malle, "How many dimensions of mind perception really are there?," in Proceedings of the 41st Annual Meeting of the Cognitive Science Society, A. K. Goel, C. M. Seifert, and C. Freksa, Eds. Montreal, Canada: Cognitive Science Society, 2019, pp. 2268–2274.
[51] E. Phillips, D. Ullman, M. de Graaf, and B. F. Malle, "What does a robot look like?: A multi-site examination of user expectations about robot appearance," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2017.
[52] E. Phillips, X. Zhao, D. Ullman, and B. F. Malle, "What is human-like? Decomposing robots' human-like appearance using the Anthropomorphic roBOT (ABOT) database," in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: ACM, 2018, pp. 105–113.
[53] X. Zhao, E. Phillips, and B. F. Malle, "How people infer a humanlike mind from a robot body," PsyArXiv, preprint, Nov. 2019.
[54] K. Fischer, "Interpersonal variation in understanding robots as social actors," in Proceedings of HRI'11, March 6-9, 2011, Lausanne, Switzerland, 2011, pp. 53–60.
[55] S. Payr, "Virtual butlers and real people: Styles and practices in long-term use of a companion," in Your virtual butler: The making-of, R. Trappl, Ed. Berlin, Heidelberg: Springer, 2013, pp. 134–178.
[56] K. M. Lee, Y. Jung, J. Kim, and S. R. Kim, "Are physically embodied social agents better than disembodied social agents?: The effects of physical embodiment, tactile interaction, and people's loneliness in human–robot
interaction," International Journal of Human-Computer Studies, vol. 64, no. 10, pp. 962–973, Oct. 2006, doi: 10.1016/j.ijhcs.2006.05.002.
[57] K. S. Haring, K. Watanabe, M. Velonaki, C. C. Tossell, and V. Finomore, "FFAB—The form function attribution bias in human–robot interaction," IEEE Transactions on Cognitive and Developmental Systems, vol. 10, no. 4, pp. 843–851, Dec. 2018, doi: 10.1109/TCDS.2018.2851569.
[58] G. M. Lucas, J. Gratch, A. King, and L.-P. Morency, "It's only a computer: Virtual humans increase willingness to disclose," Computers in Human Behavior, vol. 37, pp. 94–100, Aug. 2014, doi: 10.1016/j.chb.2014.04.043.
[59] T. Uchida, H. Takahashi, M. Ban, J. Shimaya, Y. Yoshikawa, and H. Ishiguro, "A robot counseling system — What kinds of topics do we prefer to disclose to robots?," in 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017, pp. 207–212.
[60] R. Pak, N. Fink, M. Price, B. Bass, and L. Sturre, "Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults," Ergonomics, vol. 55, no. 9, pp. 1059–1072, Sep. 2012, doi: 10.1080/00140139.2012.691554.
[61] L. D. Riek, T. Rabinowitch, B. Chakrabarti, and P. Robinson, "Empathizing with robots: Fellow feeling along the anthropomorphic spectrum," in 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009, pp. 1–6.
[62] S. H. Seo, D. Geiskkovitch, M. Nakane, C. King, and J. E. Young, "Poor thing! Would you feel sorry for a simulated robot? A comparison of empathy toward a physical and a simulated robot," in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA: Association for Computing Machinery, 2015, pp. 125–132.
[63] D. Y. Geiskkovitch, D. Cormier, S. H. Seo, and J. E. Young, "Please continue, we need more data: An exploration of obedience to robots," Journal of Human-Robot Interaction, vol. 5, no. 1, pp. 82–99, 2016, doi: 10.5898/JHRI.5.1.Geiskkovitch.
[64] M. Salem, G. Lakatos, F. Amirabdollahian, and K. Dautenhahn, "Would you trust a (faulty) robot? Effects of error, task type and personality on human-robot cooperation and trust," in Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI '15, New York: ACM, 2015, pp. 141–148.
[65] A. Powers and S. Kiesler, "The advisor robot: Tracing people's mental model from a robot's physical attributes," in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-robot Interaction, New York, NY, USA: ACM, 2006, pp. 218–225.
[66] V. Chidambaram, Y.-H. Chiang, and B. Mutlu, "Designing persuasive robots: How robots might persuade people using vocal and nonverbal cues," in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '12), New York, NY, USA: Association for Computing Machinery, 2012, pp. 293–300.
[67] S. Turkle, C. Breazeal, O. Dasté, and B. Scassellati, "First encounters with Kismet and Cog: Children respond to relational artifacts," in Digital media: Transformations in human communication, P. Messaris and L. Humphreys, Eds. New York, NY: Peter Lang, 2006, pp. 313–330.
[68] C. Lichtenthäler and A. Kirsch, Legibility of robot behavior: A literature review. https://hal.archives-ouvertes.fr/hal-01306977, 2016.
[69] M. Kwon, S. H. Huang, and A. D. Dragan, "Expressing robot incapability," in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (HRI '18), New York, NY, USA: Association for Computing Machinery, 2018, pp. 87–95.
[70] G. Briggs and M. Scheutz, "'Sorry, I can't do that:' Developing mechanisms to appropriately reject directives in human-robot interactions," in Proceedings of the 2015 AAAI Fall Symposium on AI and HRI, 2015.
[71] M. de Graaf and B. F. Malle, "How people explain action (and autonomous intelligent systems should too)," in 2017 AAAI Fall Symposium Series Technical Reports, Palo Alto, CA: AAAI Press, 2017, pp. 19–26.
[72] H. H. Clark, "Communal lexicons," in Context in Language Learning and Language Understanding, K. Malmkjær and J. Williams, Eds. Cambridge University Press, 1998, pp. 63–87.
[73] E. A. Schegloff, "Notes on a conversational practice: formulating place," in Studies in Social Interaction, D. Sudnow, Ed. New York: Free Press, 1972, pp. 75–119.
[74] S. Paepcke and L. Takayama, "Judging a bot by its cover: An experiment on expectation setting for personal robots," in 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), New York, NY: Association for Computing Machinery, 2010, pp. 45–52.
[75] E. R. Teoh, "What's in a name? Drivers' perceptions of the use of five SAE Level 2 driving automation systems," Journal of Safety Research, 2020.
[76] J. Bhuiyan, "A federal agency says an overreliance on Tesla's Autopilot contributed to a fatal crash," Vox, 12-Sep-2017. [Online]. Available: https://www.vox.com/2017/9/12/16294510/fatal-tesla-crash-self-driving-elon-musk-autopilot. [Accessed: 05-Jan-2020].
[77] F. Lambert, "Tesla driver was eating and drinking during publicized Autopilot crash, NTSB reports," Electrek, 03-Sep-2019. [Online]. Available: https://electrek.co/2019/09/03/tesla-driver-autopilot-crash-eating-ntsb-report/. [Accessed: 05-Jan-2020].
[78] M. A. Regan, C. Hallett, and C. P. Gordon, "Driver distraction and driver inattention: Definition, relationship and taxonomy," Accident Analysis and Prevention, vol. 43, no. 5, pp. 1771–1781, Sep. 2011, doi: 10.1016/j.aap.2011.04.008.
[79] M. Rothbart and B. Park, "On the confirmability and disconfirmability of trait concepts," Journal of Personality and Social Psychology, vol. 50, pp. 131–142, 1986.
[80] C. V. Smedegaard, "Reframing the Role of Novelty within Social HRI: from Noise to Information," 2019 14th ACM/IEEE International
Conference on Human-Robot Interaction (HRI), pp. 411–420, 2019, doi: 10.1109/HRI.2019.8673219.
[81] W. J. McGuire, "Inducing resistance to persuasion: Some contemporary approaches," in Advances in Experimental Social Psychology, vol. 1, L. Berkowitz, Ed. Academic Press, 1964, pp. 191–229.
[82] M. J. Ferguson, M. Kwon, T. Mann, and R. A. Knepper, "The formation and updating of implicit impressions of robots," presented at the Annual Meeting of the Society for Experimental Social Psychology, Toronto, Canada, 2019.
[83] M. Rothbart and M. Taylor, "Category labels and social reality: Do we view social categories as natural kinds?," in Language, interaction and social cognition, Thousand Oaks, CA, US: Sage Publications, Inc, 1992, pp. 11–36.
CHAPTER 2

TALKING TO ROBOTS AT DEPTH

ROBERT CODD-DOWNEY, ANDREW SPEERS, MICHAEL JENKIN
Electrical Engineering and Computer Science
Lassonde School of Engineering
York University, Canada

Effective human-robot interaction can be complex at the best of times and in the best of situations, but the problem becomes even more complex underwater. Here both the robot and the human operator must be shielded from the effects of water. Furthermore, the nature of water itself complicates both the available technologies and the way in which they can be used to support communication. Small-scale robots working in close proximity to divers underwater are further constrained in their communication choices by power, mass, and safety concerns, yet it is in this domain that effective human-robot interaction is perhaps most critical. Failure in this scenario can result in vehicle loss as well as vehicle operation that could pose a threat to local operators. Here we describe a range of approaches that have been used successfully to provide this essential communication. Tethered and tetherless approaches are reviewed along with design considerations for human input and display/interaction devices that can be controlled by divers operating at depth.

Introduction

Effective human-robot communication is essential everywhere, but perhaps nowhere is that more the case than when a human is communicating with a robot that is operating underwater. Consider the scenarios shown in Figure 2.1. Here two different robots are shown operating in close proximity to an underwater operator. Effective operation of the robot requires a mechanism
for the operator (a diver) to communicate instructions to the robot and to have the robot communicate acknowledgment of those instructions and provide other information to the diver. Failure in this communication can lead to mission failure, injury to the diver, and damage to, and even loss of, the vehicle.

Figure 2.1: Divers operating in close proximity with robots underwater. (a) AQUA [13]. (b) Milton [11]. Divers require an effective means to communicate with a robot when operating at depth. A range of potential solutions exist, but any such solution must take into account the realities of the operating medium and the cognitive load placed on the diver. (a) shows AQUA [13], a six-legged amphibious hexapod, being operated in a pool. (b) shows Milton [11], a more traditional thruster-based Unmanned Underwater Vehicle (UUV), being operated in the open ocean. In both cases the robots are shown operating with divers in close proximity.

The development of effective communication strategies for Unmanned Underwater Vehicles (UUVs) is critical. Unfortunately, not only does the underwater environment require effective communication between a robot and its operator(s), it also places substantive constraints on the ways in which this communication can take place. The water column restricts many common terrestrial communication approaches, and even systems that might be appropriate for underwater use tend to offer only low data bandwidth and require high power consumption. Communication underwater is further complicated by the limitations that the underwater environment places on the ways in which the human can utilize a given technology to communicate with the robot and the robot can communicate with the human user. For example, recreational SCUBA equipment typically requires the diver to hold a SCUBA regulator in their mouth, eliminating voice-based command options. Normal touch input devices (e.g., keyboards, mice, etc.) are difficult to make work underwater. Although such devices can be made waterproof, the pressure of water at depth renders many touch-sensitive devices ineffective, as the surrounding water pressure is mistaken for user input. Finally, the refractive nature of transparent housings for displays can
complicate the readability of displays designed for humans to view, whether located on the robot itself or on some diver-carried display panel. A further issue in robot-diver communication involves cognitive task loading. Tasks that may appear simple at the surface can be difficult for a diver to perform at depth. Cognitive loading as a consequence of diving is well documented [30]. The effects on the cognitive abilities of divers utilizing various gas mixtures, including oxygen-enriched air [2], have also been documented. Task loading is a known risk factor in SCUBA diving [36], and alerting divers to this risk is a component of recreational and commercial dive training. Given these constraints, developing effective interaction mechanisms for divers and robots operating at depth is a complex and challenging task.

Some realities of working underwater

Communication between a human operator and a robot typically relies on some medium, such as a physical tether (e.g., wire), electromagnetic waves (e.g., radio, light), or acoustic energy (e.g., sound). The same is true underwater; however, the physical properties of the communication medium (water versus air) place complex restrictions on the available options. Water is denser than air. The density of water varies with temperature, but for normal operational conditions a density of 1 g/cm³ is a reasonable approximation. Air has a density of approximately 0.001225 g/cm³. This difference will cause objects (such as communication cables) that would normally fall to the floor in terrestrial operation to float or sink depending on their density. Furthermore, the high density of water will cause cables or tethers to introduce considerable drag on the vehicle even when suspended in the water column. Buoyant cables will be subject to surface wave and wind action, while cables that are denser than water will encounter the normal drag problems associated with terrestrial cables. Depending on the location of the UUV and the operator, a cable may be partially buoyant and partially sunk. Terrestrial wireless communication between an operator and a robot is typically straightforward. Standard communication technologies based on radio, including WiFi and Bluetooth, are pervasive, and standard technologies exist to support the development of communication protocols based on these and other infrastructures. Water, unfortunately, is not an effective medium for radio wave-based communication. Radio waves are attenuated by water, and by salt water in particular [25]. This typically limits their use to very short distances [43]. Different radio frequencies are
attenuated differently by water. Through an appropriate choice of frequencies it is possible to use radio to communicate over short distances underwater (see [4]), but given the constraints of such technologies there is considerable interest in the development of other underwater communication technologies. See [21] for a review. The poor transmission of electromagnetic energy through water is also found with visible light. However, the effects are not as significant as for the rest of the electromagnetic spectrum. The transmission of light through water is impacted both by the water's turbidity and by the nature of the light being transmitted. The absorption of light through one meter of sea water can run from a low of around 2% for the blue-green portion of the visible light spectrum in transparent ocean water to over 74% for the red portion in coastal sea water [20]. For a given level of turbidity, the red light band is absorbed much more quickly than the blue-green band. Thus, under natural sunlight, objects at depth that naturally appear red end up appearing more blue-green than they do at the surface. From a communications point of view, such colour loss is often not critical, as over short distances (5 m or less) the increased absorption of the red band over the blue-green band will not have a significant impact on computer displays. High levels of turbidity, on the other hand, can easily obscure displays associated with the diver-operator and the robot itself.

Figure 2.2: Tethered communication. (a) Shore-based robot control. (b) Tether management at the surface. Here surface-based operators shown in (a) communicate with a submerged device through the tether shown in (b). Note that the operators do not have a direct view of the device, nor do they have a direct view of any divers who might be accompanying the device at depth. For this deployment, also observe the large number of cable handlers required to service the cable as it travels through the surf zone.
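To make the scale of these attenuation effects concrete, the following minimal sketch computes two quantities: the skin depth of a radio signal in seawater (the depth over which the field decays by a factor of 1/e, roughly 8.7 dB) and the fraction of light surviving a given path using the per-meter absorption figures quoted above. The seawater conductivity of 4 S/m and the Python framing are illustrative assumptions on our part, not values taken from the chapter.

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space (H/m)
SIGMA_SEAWATER = 4.0       # assumed conductivity of seawater (S/m)

def rf_skin_depth(freq_hz, sigma=SIGMA_SEAWATER):
    """Depth (m) at which an RF field in a good conductor decays to 1/e,
    using the standard approximation delta = 1/sqrt(pi * f * mu * sigma)."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * sigma)

def optical_transmission(per_meter_absorption, distance_m):
    """Fraction of light surviving distance_m meters when each meter
    absorbs per_meter_absorption of the incident light."""
    return (1.0 - per_meter_absorption) ** distance_m

for freq in (1e5, 1e7, 2.4e9):  # 100 kHz, 10 MHz, and 2.4 GHz (WiFi)
    print(f"{freq:>12.0f} Hz: skin depth = {rf_skin_depth(freq) * 100:.2f} cm")

# Per-meter absorption figures quoted in the text:
# ~2% for blue-green light in clear ocean water, ~74% for red light in coastal water.
for label, a in (("blue-green, clear ocean", 0.02), ("red, coastal", 0.74)):
    print(f"{label}: {optical_transmission(a, 5.0) * 100:.2f}% of light remains after 5 m")
```

Under these assumptions the skin depth at WiFi frequencies comes out at a few millimeters, which is why off-the-shelf radio links are effectively useless underwater, while blue-green light still retains roughly 90% of its intensity over a 5 m path in clear water.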
One final complication with light underwater for human-robot communication is the nature of light refraction. Displays mounted within air-tight housings are typically viewed through flat clear ports. Light travelling from some display through the air within the device, through the port, and into the water passes through three different materials, and the light is refracted at each boundary according to Snell's law. This refraction will introduce a distortion in displays, and for each boundary a critical viewing angle exists beyond which the port will act as a reflector rather than allowing a view of the display within. The net effect of this is to require displays to be viewed straight on as much as possible. Finally, sound-based communications technology is common underwater (see [29]). Such systems can communicate over extremely long distances. Unfortunately, such systems can be quite bulky and have considerable power requirements, reducing their potential application to small-scale devices such as those shown in Figure 2.1.

Figure 2.3: Tethered communication underwater. (a) Android-based underwater tablet [41]. (b) PC-based underwater tablet. Tether-based communication can also be accomplished completely underwater. Here the operator utilizes a properly protected interaction device tethered to the vehicle. (a) shows a mobile operator following the robot with a small blue optical tether. (b) shows a larger underwater display and interaction device being operated by a diver. Such devices provide considerably less flexibility than the computer monitor and keyboard input shown in Figure 2.2(b) but allow for direct line-of-sight to the vehicle, aiding in the operator's situational awareness.
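Returning to the refraction issue above, the critical viewing angle for a flat port can be worked out directly from Snell's law. The sketch below chains the law across the water/port/air boundaries; the refractive indices (1.33 for water, 1.49 for an acrylic port, 1.00 for air) are typical assumed values rather than figures from the chapter. Because the port faces are parallel, the port index cancels out and the cutoff depends only on the water and air indices.

```python
import math

N_WATER, N_PORT, N_AIR = 1.33, 1.49, 1.00  # assumed refractive indices

def refracted_angle(n1, theta1_deg, n2):
    """Angle in medium 2 from Snell's law n1*sin(t1) = n2*sin(t2);
    returns None if the ray is totally internally reflected."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s)) if s <= 1.0 else None

# A diver's sight line in water, through the port, into the air-filled housing.
# With parallel port faces the cutoff reduces to arcsin(N_AIR / N_WATER).
cutoff = math.degrees(math.asin(N_AIR / N_WATER))
print(f"maximum viewing angle from the port normal: {cutoff:.1f} deg")

for theta_w in (0.0, 30.0, 45.0, 50.0):
    theta_p = refracted_angle(N_WATER, theta_w, N_PORT)
    theta_a = None if theta_p is None else refracted_angle(N_PORT, theta_p, N_AIR)
    result = "blocked (total internal reflection)" if theta_a is None else f"{theta_a:.1f} deg in air"
    print(f"{theta_w:4.0f} deg in water -> {result}")
```

Under these assumed indices the cutoff comes out at roughly 49 degrees from the port normal, which is the quantitative version of the advice above: displays behind flat ports must be viewed nearly straight on.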
Using a physical tether

Given the complexities of the underwater environment, perhaps the most straightforward mechanism for structuring human-robot communication for autonomous underwater vehicles is from a surface controller or an underwater operator via a physical tether to the UUV (see [26, 22, 1, 37, 38] for examples). While such an approach can provide for excellent communication between the operator and the device, as well as providing a conduit for power and vehicle recovery if necessary, a tether, and in particular a surface-based tether, also presents several problems. An above-surface operator is typically located in some safe, dry location (as shown in Figure 2.2(a)). Here the operator has no direct view of the autonomous vehicle. Furthermore, it is typically the case that the operator's only "view" of the operational environment is via sensors mounted on board the platform. As a consequence, the operator tends to have very poor situational awareness. The tether being managed in Figure 2.2(b) provides both power and data to the robotic sensor operating at the other end of the tether. This particular tether is buoyant, which has implications for the control of the sensor package as well as for the nature of the drag on the vehicle, which is impacted by surface wave action. When using small robots, such as AQUA and Milton, the tether can be fragile and require special care in handling. When working in the field, the operator's controlling computer, being in close proximity to water, may be unintentionally exposed to environmental contaminants such as water from ocean spray or rain. Reducing this risk requires that the operator be placed at a safe distance from water and thus from UUV operation. This implies longer cables between the robot and semi-dry operator locations, increasing cable management issues and handler communication concerns. The actual UUV operator is, of course, not the only human involved in controlling an underwater vehicle. Although a tether provides a number of advantages, at the end of the day it is a tether that must be properly managed. Different deployments necessitate different tether management strategies, but personnel must be deployed in order to deal with problems that arise in standard operation of the tethered vehicle. Figure 2.2 illustrates the complexity of this problem for the shore deployment of an underwater sensor package. A number of personnel are engaged in the task, and the ability of the various personnel to communicate with each other effectively is key to successful UUV deployment. This problem becomes even more acute underwater, where personnel (divers) must be deployed to manage the tether between the UUV and the operator. The problems with
An alternative to having the UUV teleoperated from the surface involves placing the operator in close proximity to the UUV and then operating the vehicle from underwater. This is shown in Figure 2.3. Teleoperation at depth allows the operator to interact directly with the robot. This enables a number of different operational modes not possible with a ship- or shore-based operator. For example, a diver operating at a relatively safe depth (say 80') can teleoperate a vehicle operating 30'-40' deeper, without exposing the diver to the increased dangers associated with the lower dive profile. An underwater tether can also be used to enable a diver to remain outside of potentially dangerous environments while the robot operates within them. For example, a robot could be sent to investigate the inside of a wreck while the diver remains outside.

Unfortunately the nature of the underwater environment limits the kinds of interaction that the operator can engage in; the design of the remote interaction device, and the cognitive load it places on the operator, are also serious issues. The remote interaction device used by the diver-operator needs to be as close to neutrally buoyant as possible so as to minimize the effect of the device on the diver-operator's ability to maneuver underwater. The device shown in Figure 2.3(a), for example, has a very small form factor in part to reduce the effect of the buoyancy of the device on the diver. The device shown in Figure 2.3(b) is negatively buoyant, which makes operating the robot from the seabed more comfortable than operating it from the middle of the water column.

Figure 2.4: Li-Fi modems for operation underwater. (a) Li-Fi modems. (b) Li-Fi modems underwater. (a) shows the modems in their housings with an array of LEDs and photodiodes for light generation and capture. (b) shows the same modems deployed underwater. As Manchester coding is used for encoding the message in the light, the source light does not appear to flicker but rather appears as a dim constant light source.
Given the complexities of tether-based operation, there is a desire for communication approaches that can replace the physical tether with some form of wireless technology suitable for underwater operation. The requirement that a diver operate in close proximity to the robot, and the small form factor of the robot, limit some of the potential technologies that might be deployed. Sound-based technologies require a power budget that is unlikely to be available on a device that could be carried by the diver or on a small form-factor vehicle. Sound itself might pose health risks to the diver and other marine life at certain decibel levels. RF-based technology requires considerable power to operate over the distances that would be required. Given these constraints, visible light-based communication is an appropriate choice. Modern LED-based lighting systems utilize very little power, and by limiting the power of any light sources used we can ensure that the light is safe for any diver.

Figure 2.5: Visual fiducial markers. (a) Static marker. (b) Dynamic marker. Both static (a) and dynamic (b) fiducial markers can be used to communicate command information through a visual channel.

Encoded light-based communication

Given that light travels long distances underwater, at least outside of the red portion of the visible light spectrum, visible light would seem to be an appropriate medium for underwater communication. Underwater wireless optical communication (UWOC) can either be based on LASER-based light sources or on the use of regular light, typically generated through an array of low-power LEDs or a single high-powered LED.
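The wavelength dependence noted above can be made concrete with the Beer-Lambert law. The attenuation coefficients below are illustrative values for clear ocean water, roughly in the spirit of Jerlov's measurements [20], not data from any particular site:

```python
import math

# Illustrative diffuse attenuation coefficients (1/m) for clear ocean
# water; real values vary substantially with water type (cf. [20]).
ATTENUATION = {"blue (450nm)": 0.02, "green (550nm)": 0.07, "red (650nm)": 0.35}

def surviving_fraction(c, distance_m):
    """Beer-Lambert law: fraction of light remaining after a distance."""
    return math.exp(-c * distance_m)

for colour, c in ATTENUATION.items():
    print(f"{colour}: {surviving_fraction(c, 10.0):.1%} remains after 10 m")
```

Even with these rough numbers, red light is essentially gone after a few tens of metres while blue-green light survives, which motivates the choice of white or blue-green sources below.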
See [28] for a recent review of both approaches. Regardless of the technology used to generate the light, the basic approach is to encode the message by modulating the light source and then observing this modulation at the receiving end of the communication. Given the frequency dependency of light absorption in water, light sources in the white or blue-green spectrum are typically used.

Light also works as a conduit upon which communication can be built terrestrially. Light-Fidelity communication (Li-Fi) aims to use visible light as the communication medium for digital communication. (See [17] for a review of the technology.) Although still in its infancy, Li-Fi has shown substantive promise. There have, however, been few large-field tests of the technology. Beyond the terrestrial domain there have also been a number of efforts to deploy Li-Fi technology underwater. For example, the transmission properties of different light sources for Li-Fi have been studied underwater, leading to the observation that LED-based communication has advantages underwater when line of sight cannot be guaranteed [24]. At a systems level, a long-distance (100m) light-based communication system has been demonstrated that utilizes optics to concentrate the emitter and a single photon avalanche diode to enhance detection [42].

The IEEE 802.15.7 standard for visible light communication (VLC) [32] utilizes on-off keying (OOK) to encode the data stream from the transmitter to the receiver. The basic idea is that by turning a light on and off at the transmitter using a message-driven encoding, the receiver can decode this sequence into the transmitted message. A popular approach for this OOK process is Manchester encoding [40], which is a recommended OOK approach in the IEEE VLC standard. Essentially this approach modulates the data stream using a clock signal. One downside of this mechanism is its relatively high overhead, consuming 100% more bandwidth than a raw encoding scheme.

Figure 2.4 shows the experimental Li-Fi underwater modem described in [8, 9]. A key problem in the deployment of Li-Fi underwater is the construction of an appropriate light emission/collection device that can operate underwater and that is more or less agnostic to misalignment errors between the emitter and receiver. The Light-Byte modems shown in Figure 2.4 utilize a ring of emitter/receivers that provide a 360° range of light emission/detection in a reasonably wide vertical band. All processing of the incoming and outgoing light signals is performed within the underwater housings themselves, allowing the units to appear as USB modems to the external computers or robots.
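A minimal sketch of the Manchester-coded OOK described above follows. It assumes the IEEE 802.3 convention (0 becomes high-low, 1 becomes low-high); the opposite convention is equally valid, and a real modem would add framing and clock recovery on top of this:

```python
def manchester_encode(bits):
    """IEEE 802.3 convention: 0 -> high-low, 1 -> low-high. Each data
    bit becomes two on/off symbols, which is why Manchester coding
    consumes twice the bandwidth of a raw encoding."""
    out = []
    for b in bits:
        out.extend((1, 0) if b == 0 else (0, 1))
    return out

def manchester_decode(symbols):
    """Invert the encoding; an invalid pair suggests lost clock sync."""
    bits = []
    for hi, lo in zip(symbols[0::2], symbols[1::2]):
        if (hi, lo) == (1, 0):
            bits.append(0)
        elif (hi, lo) == (0, 1):
            bits.append(1)
        else:
            raise ValueError("invalid Manchester symbol pair")
    return bits

msg = [1, 0, 1, 1, 0]
line = manchester_encode(msg)          # 10 symbols for 5 bits
assert manchester_decode(line) == msg
print(line)  # [0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```

Because every bit contains a mid-bit transition, the emitter is on roughly half the time regardless of the data, which is why the sources in Figure 2.4 appear as dim constant lights rather than visible flicker.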
One problem with deploying Li-Fi-based systems outdoors is that the technology must compete with other light sources present in the environment. In essence, the receiver must be able to extract the encoded light message from the ambient light. Utilizing a brighter emitter source can help, but it is difficult to compete with the Sun, especially on cloudless days. Underwater this means that the performance of Li-Fi modems is actually worse near the surface, and improves markedly with depth. Short-pass light filters may be a reliable mechanism to overcome this limitation.

Visual target-based communication

Fiducial markers are two-dimensional binary tags that convey information to the observer. Technologies such as ARTags [15], AprilTags [27] and Fourier Tags [34] can be used to determine the pose of an observer with respect to the tag. Such tags can also be used to communicate messages along with pose information. The amount of information can be very limited or can encode a large number of bytes. For example, QR codes [12] can encode a large amount of custom binary data. In this type of communication a two-dimensional visual target is presented to the robot, which captures the image using an on-board camera and processes the image stream to obtain the intended message. One benefit of target-based communication for human-to-robot communication is that in an underwater environment the small amount of processing power required to localize and recognize the target within an image can be very beneficial. A collection of unique visual targets allows for the development of a simple command language, where each tag corresponds to a different command. Even a simple set of visual command targets can provide effective vehicle control given a controlled operational environment. Sequences of tags such as those found in RoboChat [14] can be strung together to describe sophisticated tasks. Figure 2.5 illustrates this process in action for both static pre-printed targets and dynamically generated targets on an underwater display.

The use of static fiducial target-based communication is effective but cumbersome, as operators need to carry a library of tags. Finding the specific tag needed for the next part of a command sequence is arduous, and it is possible to accidentally show the wrong card to the robot while searching for the correct tag. The use of custom or dynamic tags has also been explored [41]. This technique utilizes an underwater tablet-like device that allows the user to select and present a series of control commands to the robot (Figure 2.5(b)). The robot captures these images and, once verified, a compact sequence of tags that encodes the command sequence is generated and can be shown to the robot. This approach reduces the complexity of carrying a large collection of tags, but requires the development of a suitable underwater display and interaction box.
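A minimal sketch of such a tag-to-command channel follows, using OpenCV's ArUco module as a stand-in for the ARTag/AprilTag/Fourier Tag systems cited above. The ID-to-command table is invented for illustration (the real RoboChat vocabulary differs), and the functional detectMarkers interface shown is the pre-4.7 OpenCV API:

```python
import cv2

# Hypothetical tag-ID -> command table, invented for illustration.
COMMANDS = {0: "STOP", 1: "FOLLOW_ME", 2: "SURFACE", 3: "RECORD_VIDEO"}

# ArUco as a stand-in for ARTag/AprilTag; API details vary by version.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def commands_in_frame(frame_bgr):
    """Detect fiducial markers in one camera frame and map their IDs
    to commands; unrecognized IDs are ignored."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return []
    return [COMMANDS[int(i)] for (i,) in ids if int(i) in COMMANDS]
```

In a RoboChat-style deployment, detected commands would be accumulated across frames and only executed once the full, verified sequence has been observed.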
Figure 2.6 illustrates the normal process of dynamic marker identification, along with some potential pitfalls associated with the approach. Figure 2.6(a) illustrates the desired outcome. This figure is from the output of the target identification process. The view of the target and its housing has been overlaid with a red rectangle, illustrating the localization of the target, and the target identity is printed over the target itself. The process of actually capturing this target can be frustrated in a number of ways. The first is due to refraction effects (Figure 2.6(b)). Here the oblique viewing angle of the target, past the critical viewing angle, causes the surface of the display to act as a mirror and reflect light rather than pass light through the port. Figure 2.6(c) shows a further failure mode, where the robot or some other participant – here the robot operator – is reflected in the port. Notwithstanding these types of errors, reasonably high recognition rates (57%) with no false positives at 30 frames per second are reported (resulting in more than 17 measurements of the target being made each second) [39].

Figure 2.6: Complexities of dynamic marker identification. (a) Marker identification. (b) Refraction error. (c) Reflection error. Although the basic task of marker identification and recognition can be straightforward underwater, the nature of the protective housing introduces a range of complexities. (b) shows refraction-based error, where the incidence angle is sufficiently large that the marker housing acts as a mirror obscuring the view of the target. (c) shows reflection in the port of the underwater housing. Even though the internal target is visible, a clear view of the target is obscured by the reflection of the camera (here held by a diver).
Interaction hardware and software

Many of the technologies used to communicate messages to the robot from an underwater diver-operator require some mechanism for input and display. Terrestrial robot control can exploit standard input devices, including keyboards, joysticks, and phone/tablet interfaces, to communicate with a robot. Underwater the choices are more limited. A critical requirement is that the input device be protected from water and pressure and be operable by a diver working at depth. In general this means constructing a waterproof enclosure that provides access to the display and to any input devices. It is also possible to use the entire waterproof enclosure as a joystick by augmenting the device with an appropriate tilt sensor, and to use the entire device as a pointing device through the use of a compass or IMU.

Although the waterproof container can be machined to be relatively light when empty, it is critical that this container be (approximately) neutrally buoyant when deployed underwater. If it is not neutrally buoyant, then the diver-operator will have to compensate for this when operating the device, which complicates the diver-operator's task. In order for the entire tablet to be neutrally buoyant it must weigh the same as the water it displaces. In essence this limits the usability of large-volume underwater housings. It is certainly possible to build such housings, but the large volume of the housing will require that the housing be weighted, through either the inclusion of very heavy components or the addition of external mass. The resulting device may be "weightless" underwater, but will be difficult to move when underwater and to deploy from the surface.

Beyond mass and buoyancy there are a number of other issues in the design of any housing that must be taken into account. The surface area of the device is a concern. The housing acts as a reaction surface underwater, acts as a drag when the diver is swimming, and acts as a sail in strong current or swells. Each penetration into the underwater housing is a potential failure point, so although more inputs may be desirable, each one increases the risk of flood-failure of the device. Deploying a robot and its support team typically means entry through the surf or off of a boat, and thus switches must be positioned so as to minimize the potential for accidental operation during deployment. Cognitive loading of the diver-operator is an issue, and although it is possible to provide large numbers of controls, it can be difficult for the operator to make effective use of them. Operators will view displays through their dive goggles, the water column, and a transparent port on the interaction device. Detailed displays may be difficult to view, especially in turbid water conditions.
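Returning to the neutral-buoyancy condition stated above, the required ballast follows directly from the weight of the displaced water. A quick worked sketch, with housing numbers invented for illustration:

```python
WATER_DENSITY = 1025.0  # kg/m^3, typical seawater (fresh water ~1000)

def ballast_needed(device_mass_kg, displaced_volume_m3, rho=WATER_DENSITY):
    """Mass to add (positive) or shed (negative) so the housing weighs
    the same as the water it displaces, i.e. is neutrally buoyant."""
    return rho * displaced_volume_m3 - device_mass_kg

# A hypothetical 12-litre housing weighing 9.5 kg in air:
print(ballast_needed(9.5, 0.012))  # ~2.8 kg of ballast to reach neutral
```

This is why large-volume housings are problematic: every extra litre of displaced volume demands roughly another kilogram of internal or external mass before the device stops floating away.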
Figure 2.7 shows a lightweight interaction device for tethered control of an underwater vehicle [38]. The control device is housed in a custom-fabricated housing, as shown in Figure 2.7(b). An Android Nexus 7 tablet provides both a display and on-board computation to process switch and other inputs, and to condition the data for transmission to the robot. The availability of small form-factor displays and interaction devices with general-purpose computing, such as Android tablets, provides a number of options for input and display. Sensors within the tablet itself, including a compass and IMUs, can be exploited for robot control; such tablets have internal batteries for power, typically support WiFi, Bluetooth and USB communication, and provide a rich software library for control. One disadvantage of such devices is that they do not support standard robot control software middlewares such as ROS [31]. Being able to have the interaction device communicate via ROS considerably simplifies communication between the interaction device and the robot itself. Even for devices that are not physically wired to the robot, using ROS as a common communication framework between the interaction device and the robot has benefits. ROS provides an effective logging mechanism, and there exist visualization tools that can be used post-deployment to help in understanding any issues that may have arisen during deployment.

Figure 2.7: Devices for underwater interaction with a robot must be designed to be operated at depth. (a) Interaction. (b) Housing. This requires displays and interaction mechanisms that are appropriate for divers. (a) shows a diver operating AQUA. The tablet acts as a joystick and provides simple interaction mechanisms controlled by two three-state switches. (b) shows the diver's view of the sensor.
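As the caption above notes, the whole tablet can act as a joystick. A minimal sketch of such a tilt-to-command mapping follows; the dead band, full-scale angle, and surge/yaw command format are all invented for illustration, and a real deployment would tune them for the diver and vehicle:

```python
def tilt_to_command(pitch_deg, roll_deg, dead_band=10.0, full_scale=45.0):
    """Map housing tilt to normalized surge/yaw commands in [-1, 1].
    The dead band keeps small, unintentional tilts (e.g., while the
    diver swims) from moving the robot; both thresholds are assumed."""
    def scale(angle):
        if abs(angle) < dead_band:
            return 0.0
        s = (abs(angle) - dead_band) / (full_scale - dead_band)
        return min(1.0, s) * (1.0 if angle > 0 else -1.0)
    return {"surge": scale(pitch_deg), "yaw": scale(roll_deg)}

print(tilt_to_command(25.0, -5.0))  # {'surge': ~0.43, 'yaw': 0.0}
```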
Within ROS, overall robot control is modeled as a collection of asynchronous processes that communicate by message passing. Although a very limited level of support does exist for ROS on commodity computer tablets, environments such as Android or iOS are not fully supported ROS environments. In order to avoid any potential inconsistencies between these environments and supported ROS environments, one option is to not build the software structures in ROS directly, but rather to exploit the RosBridge mechanism instead. RosBridge provides a mechanism by which ROS messages are exposed to an external agent, and by which an external agent can inject ROS messages into the ROS environment. This injection process uses the standard WebSocket protocol. The process of developing specific interaction devices for robot-operator control can be simplified by automating much of the specialized software required to map interaction devices to ROS commands. Software toolkits that can be used to semi-automatically generate display and interaction tools using the RosBridge communication structure have previously been developed for Android [37] and iOS [7] mobile platforms.
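A minimal sketch of this injection path, using the roslibpy client library on the non-ROS side; the host address, topic name, and velocity values below are placeholders rather than any particular vehicle's interface:

```python
import roslibpy

# Connect to the robot's rosbridge server over a WebSocket
# (host/port are placeholders for the vehicle's address).
client = roslibpy.Ros(host='192.168.0.42', port=9090)
client.run()

# Publish a velocity command as a standard geometry_msgs/Twist;
# rosbridge turns this JSON payload into a native ROS message.
cmd_vel = roslibpy.Topic(client, '/aqua/cmd_vel', 'geometry_msgs/Twist')
cmd_vel.publish(roslibpy.Message({
    'linear': {'x': 0.2, 'y': 0.0, 'z': 0.0},
    'angular': {'x': 0.0, 'y': 0.0, 'z': 0.1},
}))

client.terminate()
```

Because everything crosses the bridge as JSON over a WebSocket, the same pattern works from a browser, a tablet application, or any language with a WebSocket client, which is what makes the semi-automatic interface generation described above practical.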
Gesture-based communication

Rather than augmenting the diver with equipment that assists in diver-robot communication, it is possible to deploy novel gesture-based communication languages (e.g., [19, 18, 3, 5, 6]). Although such approaches can be effective, they require divers to learn a novel language for human-robot communication while retaining their existing gesture-based language for diver-diver communication. In addition to the increased cognitive load on divers, such an approach also has the potential for accidental miscommunication among divers and between divers and robots, given the common symbols used in the two gesture languages.

Rather than developing a novel gesture-based language for human-robot communication, another alternative is to leverage existing diver-diver gesture-based communication. Divers have developed a set of effective strategies for implicit and explicit communication with other divers. A standard set of hand gestures (signals) has been developed, with special commands for specific tasks and environments. National and international recreational diving organizations such as PADI teach these languages and help to establish and maintain a standard set of gesture symbols and grammar. These gestures – which include actions such as pointing with the index finger and obvious motions of the hand while it is held in some configuration – are strung together in a simple language. For example, to indicate that there is something wrong with one's ears, one would indicate "unwell" by holding the hand flat and rotating the wrist, and then point at the affected ear. One observation about these signals is that their semantics depend on their location relative to the body. Thus a critical step in having a robot understand normal diver gestures involves identifying the position of the hands relative to the body, and the configuration and motion of the hands. (See Figure 2.8.)

Diver-diver gesture-based communication relies on both an explicit and an implicit communication strategy. For example, given the lack of a straightforward mechanism to name locations underwater, many commands conveying coordinated motion are phrased as "you follow me" or "we swim together" and rely on the implicit communication of aspects of the diver's state. Implicit understanding of a diver's state requires some mechanism to track and monitor the diver as they move. Autonomous following of an underwater diver has been explored in the past. Perhaps the simplest diver-following technique involves augmenting the diver in some manner to simplify diver tracking. For example, [35] adopts the use of an atypically coloured ball and simple computer vision techniques to track the diver. This requires the diver to hold onto an inflated ball of some recognizable colour, which negatively affects their buoyancy control. Other methods (e.g., [33, 18]) track flipper oscillations of a certain colour in the frequency domain to determine the location within the scene to track.

Figure 2.8: Divers utilize a standard set of gestures to communicate with other divers underwater. (a) Diver pointing at their ear. (b) Tracked hand position. (a) To indicate that there is an issue with their ear – for example some problem with pressure equalization – the diver would indicate that something is wrong and then point at their ear. Understanding this communication involves tracking the diver's hand relative to their head. (b) Results of this tracking plotted in head-centric coordinates.
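The frequency-domain flipper-tracking idea of [33, 18] mentioned above can be sketched as follows. The assumed 0.5-2.5 Hz kick band, the 15 fps frame rate, and the reduction of an image region to a single intensity time series are illustrative simplifications, not those systems' actual implementations:

```python
import numpy as np

def kick_energy(intensity_series, fps=15.0, band=(0.5, 2.5)):
    """Fraction of non-DC spectral energy inside an assumed flipper-kick
    band; an image region containing a kicking diver should score high
    while background regions score low."""
    x = np.asarray(intensity_series, dtype=float)
    x = x - x.mean()                      # drop the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return spectrum[in_band].sum() / total if total > 0 else 0.0

# A synthetic 1.5 Hz "kick" signal scores near 1.0; noise scores low.
t = np.arange(0, 8, 1 / 15.0)
print(kick_energy(np.sin(2 * np.pi * 1.5 * t)))
```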
Another approach is to recognize and localize the diver as part of a more general diver communication mechanism. A range of different technologies exist that could be used to identify and track diver body parts in order to localize the diver and to understand the actions being performed. Both traditional image processing techniques and data-driven (e.g., neural network-based) approaches could be utilized. Previous work in this area includes an approach based on transfer learning, using a pre-trained Convolutional Neural Network (CNN) to identify parts of divers, which are then tracked through time [10]. Deploying a CNN requires an appropriately trained dataset and an appropriate CNN that can be applied to the problem. This is addressed in part through the SCUBANet dataset [10]. The SCUBANet dataset contains underwater images of divers taken from both freshwater and saltwater environments. The freshwater portion of the dataset was collected in Lake Seneca in King City, Ontario, Canada. The saltwater portion was collected just off the west coast of Barbados. The SCUBANet dataset was collected using the Milton robot [11] and consists of over 120,000 images of divers. CNNs require that the dataset be properly labelled. SCUBANet's dataset was labelled using a crowd-sourcing tool, and as of 2019 over 3200 image annotations had been performed.

Work performed in our lab utilizes a transfer-learning approach to recognize diver parts. Transfer learning involves taking a pretrained CNN from a related task that used a larger dataset for its initial training, and then retraining the final level (or levels) using a smaller task-specific dataset. For this task, a CNN trained on the COCO dataset for object detection, segmentation and captioning [23] was used, with the final level being retrained on the SCUBANet dataset [10]. This process is typically much faster than training a new CNN from scratch. Performance of the resulting networks can be very good. For example, [10] reports that the retrained Faster R-CNN Inception v2 architecture demonstrated an average recognition rate for divers, heads and hands of 71.6% mean average precision at 0.5 intersection over union.

Figure 2.8 shows the first steps in diver-robot communication using diver gestures. Individual frames from a video of the diver are captured by the robot and processed to identify and localize body parts important for both implicit and explicit communication. Here the diver's head and hand positions are tracked while the diver points to their ear. When plotted in a head-centric frame of reference, the motion of the diver's hand as it is raised to and then lowered from the side of the diver's head is clear (Figure 2.8(b)). Ongoing work is investigating different techniques for labelling the specific hand/finger gestures being used during these motions.
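As a rough illustration of the transfer-learning recipe described above (not the actual SCUBANet pipeline, which used a TensorFlow Faster R-CNN Inception v2 model), a torchvision-based sketch might look like the following; the class list is assumed from the body-part labels mentioned in the text:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pretrained on COCO [23] and retrain only the
# final box predictor, in the spirit of [10]. torchvision's ResNet-50
# FPN variant stands in here for the Inception v2 model used there.
NUM_CLASSES = 4  # background + diver, head, hand

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # torchvision >= 0.13
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Freeze the pretrained backbone so only the new head is trained.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad],
    lr=0.005, momentum=0.9)

# Training then iterates over (image, {"boxes", "labels"}) pairs from a
# SCUBANet-style dataset: losses = model(images, targets); sum and step.
```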
Summary

Communication between humans and robots is transitioning from keyboard inputs performed by machine specialists to interactions with the general public and with specialists who have been trained for particular tasks. Many of these tasks have developed multi-modal communication structures that are designed to meet the specifics of the task at hand. As robots move out of the lab and into application domains, it is critical that the communication strategies used are appropriate for the task. This is true for many terrestrial tasks, but it is especially true underwater. Here the environment places constraints on the technology available for the human to talk to the robot and for the robot to talk to the human. Although it is certainly possible for the human to remain warm and dry above the surface of the water and to communicate either wirelessly (e.g., through sound) or through a physical tether, the lack of a direct view of the operation being undertaken considerably reduces the situational awareness of the operator. Placing the operator in the water with the robot creates its own set of problems. Underwater tethers can certainly be used, but this requires careful tether management, because a tangled tether underwater is a threat to both the operator and the robot. Wireless communication must be safe for the diver and must not place undue power requirements on the diver, thus limiting the use of RF- and sound-based technologies. Visible light-based technologies would seem appropriate, although here again it is critical not to place undue constraints on the diver-operator. Carried interaction devices must work well at depth, neither upsetting the diver's buoyancy nor placing undue load on the diver's cognitive abilities. Perhaps the most desirable diver-to-robot communication strategy is to exploit normal diver-to-diver gesture-based communication. Recent work suggests that such an approach has promise, and ongoing research is exploring how best to understand complex statements and commands based on gesture underwater. Table 2.1 provides a summary of the various technologies presented in this chapter.

This chapter has concentrated on diver-to-robot communication. Communication from the robot to the diver, especially when approaches such as gestures are used that do not augment the diver, is also an issue. Given that robots are often augmented with lights and displays, these devices can clearly be leveraged for robot-to-diver communication. But other options are possible. For example, it is possible for the robot to choose specific motions to encode simple yes/no responses to queries, and even more complex motion sequences can be exploited to communicate more sophisticated messages to the diver [16].
Physical tether
  Advantages: High data bandwidth, reasonably low cost, bi-directional, support for standard communication protocols.
  Disadvantages: Tether management, tether drag.

Acoustic
  Advantages: Good range, not impacted by turbidity.
  Disadvantages: Power requirements, potential damage to divers and marine life, low bandwidth.

Static visual target
  Advantages: Inexpensive, easy to deploy.
  Disadvantages: Restrictive communication set, potential for accidental target viewing by the robot, low bandwidth.

Dynamic visual target
  Advantages: Large symbol set, easy to structure the display so as to avoid accidental target display to the robot.
  Disadvantages: Requirement of a display device, complexity of viewing the target through the display port; bandwidth can be improved by ganging together multiple symbols.

UWOC
  Advantages: Low power, relatively high bandwidth.
  Disadvantages: Short range, impacted by turbidity.

Specialized gesture language
  Advantages: Can be tuned to the task at hand, can be easily learned, can be designed to be easily recognized by the robot.
  Disadvantages: Increased cognitive loading on the diver, possibility of confusion between the symbol sets used in diver-to-diver and diver-to-robot communication, low bandwidth.

Diver gesture language
  Advantages: Well known by divers.
  Disadvantages: Complex gestures to recognize, low bandwidth.

Table 2.1: The communication strategies described in this chapter, along with their advantages and disadvantages.

Bibliography

[1] T. Aoki, T. Maruashima, Y. Asao, T. Nakae, and M. Yamaguchi, "Development of high-speed data transmission equipment for the full-depth remotely operated vehicle – KAIKO," in OCEANS, vol. 1, Halifax, Canada, 1997, pp. 87–92.
[2] A.-K. Brebeck, A. Deussen, H. Schmitz-Peiffer, U. Range, C. Balestra, and S. C. J. D. Schipke, "Effects of oxygen-enriched air on cognitive performance during scuba-diving – an open-water study," Research in Sports Med., vol. 24, pp. 1–12, 2017.
[3] A. G. Chavez, C. A. Mueller, T. Doernbach, D. Chiarella, and A. Birk, "Robust gesture-based communication for underwater human-robot interaction in the context of search and rescue diver missions," in IROS Workshop on Human-Aiding Robotics, Madrid, Spain, 2018, held in conjunction with the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
[4] X. Che, I. Wells, G. Dickers, P. Kear, and X. Gong, "Re-evaluation of RF electromagnetic communication in underwater sensor networks," IEEE Communications Magazine, vol. 48, pp. 143–151, 2011.
[5] D. Chiarella, M. Bibuli, G. Bruzzone, M. Caccia, A. Ranieri, E. Zereik, L. Marconi, and P. Cutugno, "Gesture-based language for diver-robot underwater interaction," in OCEANS, Genoa, Italy, 2015.
[6] D. Chiarella, "A novel gesture-based language for underwater human-robot interaction," J. of Marine Science and Engineering, vol. 6, 2018.
[7] R. Codd-Downey and M. Jenkin, "RCON: Dynamic mobile interfaces for command and control of ROS-enabled robots," in International Conference on Informatics in Control, Automation and Robotics (ICINCO), Colmar, France, 2015.
[8] R. Codd-Downey, "LightByte: Communicating wirelessly with an underwater robot using light," in International Conference on Informatics in Control, Automation and Robotics (ICINCO), Porto, Portugal, 2018.
[9] R. Codd-Downey, "Wireless teleoperation of an underwater robot using Li-Fi," in Proc. IEEE International Conference on Information and Automation (ICIA), Wuyishan, China, 2018.
[10] R. Codd-Downey, "Finding divers with SCUBANet," in IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, 2019.
[11] R. Codd-Downey, M. Jenkin, and K. Allison, "Milton: An open hardware underwater autonomous vehicle," in Proc. IEEE International Conference on Information and Automation (ICIA), Macau, China, 2017.
[12] CSA: Standards Council of Canada, "Information Technology – Automated Identification and Capture Techniques – QR Code 2005 Bar Code Symbol Specification," 2016, Canadian Government Report.
[13] G. Dudek, P. Giguere, C. Prahacs, S. Saunderson, J. Sattar, L.-A. Torres-Mendez, M. Jenkin, A. German, A. Hogue, A. Ripsman, J. Zacher, E. Milios, H. Liu, and P. Zhang, "AQUA: An amphibious autonomous robot," IEEE Computer, vol. Jan., pp. 46–53, 2007.
[14] G. Dudek, J. Sattar, and A. Xu, "A visual language for robot control and programming: A human-interface study," in IEEE International Conference on Robotics and Automation (ICRA), Rome, Italy, 2007, pp. 2507–2513.
[15] M. Fiala, "ARTag, a fiducial marker system using digital techniques," in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2, San Diego, CA, 2005, pp. 590–596.
[16] M. Fulton, C. Edge, and J. Sattar, "Robot communication via motion: Closing the underwater human-robot interaction loop," in IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, 2019, pp. 4660–4666.
[17] H. Haas, L. Yin, Y. Wang, and C. Chen, "What is LiFi?" J. of Lightwave Technology, vol. 34, pp. 1533–1544, 2016.
[18] M. J. Islam, "Understanding human motion and gestures for underwater human-robot collaboration," J. of Field Robotics, vol. 36, 2018.
[19] M. J. Islam, M. Ho, and J. Sattar, "Dynamic reconfiguration of mission parameters in underwater human-robot collaboration," in IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 2018, pp. 1–8.
[20] N. G. Jerlov, Optical Oceanography. Amsterdam, The Netherlands: Elsevier, 1968.
[21] H. Kaushal and G. Kaddoum, "Underwater optical wireless communication," IEEE Access, vol. 4, pp. 1518–1547, 2016.
[22] P. Lee, B. Jeon, S. Hong, Y. Lim, C. Lee, J. Park, and C. Lee, "System design of an ROV with manipulators and adaptive control of it," in 2000 International Symposium on Underwater Technology, Tokyo, Japan, 2000, pp. 431–436.
[23] T. Lin, M. Maire, S. J. Belongie, L. D. Bourdev, R. B. Girshick, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick, "Microsoft COCO: Common objects in context," CoRR, vol. abs/1405.0312, 2014.
[24] P. Medhekar, S. Mungekar, V. Marathe, and V. Meharwade, "Visible light underwater communication using different light sources," International Journal of Modern Trends in Engineering and Research, vol. 3, pp. 635–638, 2016.
[25] R. K. Moore, "Radio communication in the sea," IEEE Spectrum, vol. 4, pp. 42–51, 1967.
[26] M. Nokin, "ROV 6000 – objectives and description," in OCEANS, vol. 2, Brest, France, 1994, pp. 505–509.
[27] E. Olson, "AprilTag: A robust and flexible visual fiducial system," in IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 2011.
[28] H. Oubei, C. Shen, A. Kammoun, E. Zedini, K. Park, X. Sun, G. Liu, C. H. Kang, T. K. Ng, M. S. Alouini, and B. S. Ooi, "Light based underwater wireless communications," Japanese J. of Applied Physics, vol. 57, 2018.
[29] D. Pompili and I. F. Akyildiz, "Overview of networking protocols for underwater wireless communications," IEEE Commun. Mag., vol. Jan., pp. 97–102, 2009.
[30] S. F. Pourhashemi, H. Sahraei, G. H. Meftahi, B. Hatef, and B. Gholipour, "The effect of 20 minutes SCUBA diving on cognitive function of professional SCUBA divers," Asian J. of Sports Med., vol. 7, 2016.
[31] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Y. Ng, "ROS: An open-source robot operating system," in Open-Source Software Workshop at the International Conference on Robotics and Automation (ICRA), Kobe, Japan, 2009.
[32] S. Rajagopal, R. D. Roberts, and S. K. Lim, "IEEE 802.15.7 visible light communication: Modulation schemes and dimming support," IEEE Communications Magazine, vol. 50, pp. 72–82, 2012.
[33] J. Sattar and G. Dudek, "Where is your dive buddy: Tracking humans underwater using spatio-temporal features," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Diego, CA, 2007, pp. 3654–3659.
[34] J. Sattar, E. E. Bourque, P. Giguere, and G. Dudek, "Fourier tags: Smoothly degradable fiducial markers for use in human-robot interaction," in Canadian Conference on Computer and Robot Vision (CRV), Montreal, Canada, 2007, pp. 165–174.
[35] J. Sattar, P. Giguere, G. Dudek, and C. Prahacs, "A visual servoing system for an aquatic swimming robot," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Edmonton, Canada, 2005, pp. 1483–1488.
[36] R. W. Smith, "Application of a medical model to psychopathology in diving," in 6th International Conference on Underwater Education, San Diego, CA, 1975, pp. 377–385.
[37] A. Speers, P. Forooshani, M. Dicke, and M. Jenkin, "Lightweight tablet devices for command and control of ROS-enabled robots," in International Conference on Advanced Robotics (ICAR), Montevideo, Uruguay, 2013.
[38] A. Speers and M. Jenkin, "Diver-based control of a tethered unmanned underwater vehicle," in International Conference on Informatics in Control, Automation and Robotics (ICINCO), Reykjavik, Iceland, 2013.
[39] A. Speers, A. Topol, J. Zacher, R. Codd-Downey, B. Verzijlenberg, and M. Jenkin, "Monitoring underwater sensors with an amphibious robot," in Canadian Conference on Computer and Robot Vision (CRV), St. John's, Canada, 2011.
[40] A. S. Tanenbaum and D. J. Wetherall, Computer Networks. Pearson, 2011.
[41] B. Verzijlenberg and M. Jenkin, "Swimming with robots: Human robot communication at depth," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 2010, pp. 4023–4028.
[42] C. Wang, H.-Y. Yu, and H.-J. Zhu, "A long distance underwater visible light communication system with single photon avalanche diode," IEEE Photonics J., vol. 8, pp. 1–11, 2016.
[43] A. Zoksimovski, C. M. Rappaport, D. Sexton, and M. Stojanovic, "Underwater electromagnetic communications using conduction – channel characterization," in Proc. of the 7th ACM International Conference on Underwater Networks and Systems, Los Angeles, CA, 2012.
  • 55. Random documents with unrelated content Scribd suggests to you:
  • 56. I'll take all the blame. Let me hide on the ferry-boat, and I won't show myself until you've got miles away. That might do, said Bob, smiling. Perhaps it isn't exactly square, but with such a man as your uncle we must make use of his own methods. You will take me, then? asked Sam, eagerly. By this time they had reached the boat. Clip, said Bob, go with Sam and hide him somewhere on the boat, but don't tell me where he is concealed. Then, if old Wolverton comes after him I can say truly that I don't know where he is. All right, Massa Bob, said Clip, showing his teeth. When the contents of the boat had been transferred to the larger craft, Bob rowed back, leaving Clip and Sam together. The boat was roofed over, as already stated. Besides the bins there was a corner in which some bedding had been placed for the accommodation of the young voyagers. But it seemed difficult to find a suitable hiding- place for Sam. Where can you put me? asked the young runaway, with a troubled look. Clip looked about him, rolling his eyes in perplexity. At length his face brightened, for an idea had come to him. In one corner was an empty barrel. Some stores had been brought aboard in it, and it had been suffered to remain, with the idea that it might possibly prove of use. The particular use to which it was to be put certainly never occurred to Bob or Clip. Get in there, Sam! said Clip. Old Mass' Wolverton won't look for you in there. But I shall be seen.
  • 57. You wait and I'll show you how we'll manage; only get in! Thus adjured, Sam got into the barrel, and with some difficulty crouched so that his head was lower than the top of the barrel. Now I'll show you, said Clip. He took a white cloth—it was apiece of sail-cloth—and spread over the top of the barrel. Now old Mass' Wolverton will have sharp eyes to see you, said Clip, triumphantly. That may do, said Sam. But it isn't necessary to put it on now. It will be time if my uncle makes his appearance. I'll keep out of sight in the center of the boat. Meanwhile Bob had gone to the house to bid good-bye to his mother. I feel anxious about your going off on such a long trip, Robert, said Mrs. Burton. You forget that I am almost a man, mother. It is time for me to assume some responsibility. But you are only a boy, after all, Robert. Think, if anything should happen to you, what would become of me? My dear mother, you may depend on my taking excellent care of myself. I don't see what risk or danger there can be in going to St. Louis. It isn't a long trip. I shall be back in less than a fortnight if all goes well. It will seem a very long fortnight to me, Robert. I have no doubt you will miss me, mother, but you forget I have Clip to look after me.
  • 58. Clip is only a poor colored boy, but I am sure he will prove faithful to you, said Mrs. Burton, seriously. Even the humble are sometimes of great service. I am glad he is going with you. Bob did not mention that Sam Wolverton would also be his companion, as he foresaw that the agent would not unlikely question his mother on that point. Bob returned to the boat, and was just about to cast off, when Wolverton was seen on the bank, waving his hat and shouting frantically. I guess, Massa Sam, you'd better get into the barrel, said Clip with a grin.
  • 59. CHAPTER XXI. HOW WOLVERTON WAS FOOLED. What do you want, Mr. Wolverton? asked Bob, coolly, as he stood at one end of the boat and surveyed the excited agent. Come ashore, or I'll have you arrested, shouted the irate Wolverton. You are very kind, Mr. Wolverton; but I am in considerable of a hurry, and have not time to comply with your request. You'd better come ashore, if you know what's best for yourself. Please state your business! If it is anything to my advantage, I may come; but I am just ready to start for St. Louis. Is my nephew Sam on your boat? I don't see him. Why should he be on board? I suspect him of running away, the ungrateful young rascal? I thought he might be scheming to go down the river with you. Clip, said Bob, gravely, has Sam Wolverton engaged passage with us? Not as I knows on, Massa Bob. If he should, charge him fifteen dollars. Yes, Massa Bob, answered Clip, with a grin. If you wish your nephew to go to St. Louis on my boat, Mr. Wolverton, said Bob, with ceremonious politeness, I will take him,
  • 60. being a friend, for fifteen dollars, excursion ticket. You can't complain of that. But I don't want him to go, roared Wolverton. I tell you he has run away. That's very strange, considering how kindly and liberally you have always treated him. Wolverton eyed Bob suspiciously, for he knew well enough that the remark was ironical. None of your gammon, young man! he said, crabbedly. Send Sam ashore. Really, Mr. Wolverton, you must be joking. What have I got to do with Sam? I don't believe a word you say. I mean to search your boat. You had better do it at once, then, for it is time for me to start. But how am I to get aboard, asked the agent, perplexed. You might swim, suggested Bob, or wade. The water is shallow— not higher than your neck, anywhere. That is nonsense. Steer your boat to shore, that I may board her. It can't be done, Mr. Wolverton. We can only drift down with the current. Then how am I to get aboard? That is your lookout. Just then Mr. Wolverton espied the flat-bottomed boat which Bob proposed to take with him. He had attached it by a line to the stern of the ferry-boat. Row over and take me across.
  • 61. I can't spare the time. Wolverton was about to give vent to his wrath at this refusal, when he observed a boat approaching, rowed by a German boy named Otto Brandes. Come here, boy, and row me out to yonder boat, he said. Otto paused in his rowing, and, understanding the man with whom he was dealing, he asked, quietly: How much will you pay me, Mr. Wolverton? Five cents to take me over and back, answered the agent, with some hesitation. Otto laughed. I don't work for any such wages, he said. I'll give you ten; but be quick about it. Give me a quarter and I'll do it. Do you think I am made of money? said Wolverton, in anger. That is an outrageous extortion. All right! Then hire somebody else, said Otto, coolly. After a fruitless effort to beat down the price, Wolverton sulkily agreed to the terms, and Otto rowed to the bank. Now, row with all your might, said the agent, as he seated himself in one end of the boat. Your fare, please, said Otto. I'll pay you when the trip is over, said Wolverton. It's a poor paymaster that pays in advance. Then you'd better get out of the boat. Railroad and boat tickets are always paid in advance.
  • 62. I'll give you ten cents now, and the balance when I land. It won't do, Mr. Wolverton. I don't care much about the job anyway; I'm in a hurry to get home. Otto lived about half a mile further down the creek. Much against his will, the agent was obliged to deposit the passage- money in the boy's hand before he would consent to take up the oars and commence rowing. That rascal Sam is putting me to all this expense, he said to himself. I'll take my pay out of his skin once I get hold of him. Clip went up to the barrel in which Sam was concealed. Ol' Wolverton is comin', Massa Sam, he said. Don't you make no noise, and we'll fool de ol' man. In spite of this assurance, poor Sam trembled in his narrow place of concealment. He knew that he would fare badly if his uncle got hold of him. How's he coming? he asked in a stifled voice. Otto Brandes is rowin' him. He's in Otto's boat. It's mean of Otto! No; he don't know what de ol' man is after. It took scarcely two minutes for Wolverton to reach the ferry-boat. He mounted it with fire in his eye. Now, where is Sam? he demanded in a peremptory tone. You can search for him, Mr. Wolverton, said Bob, coolly. You seem to know more about where he is than I do. Wolverton began to peer here and there, looking into bins of wheat and all sorts of improbable places.
  • 63. Clip took a broom and began to sweep energetically. Bob could not explain this sudden fit of industry till he saw Clip slyly slip the broom between Wolverton's legs as he was hurrying along, thereby upsetting the unfortunate agent, who tumbled sprawling on the deck. Why, you black imp! he exclaimed, furiously, as he picked himself up, what made you do that? Couldn't help it, Massa Wolverton! I 'clare to gracious I couldn't! said Clip, rolling his eyes in a most wonderful manner. Are you hurt, Massa Wolverton? I most broke my knee! growled Wolverton, as he rose and limped towards the other end of the boat. I may be laid up for a week. It was de ol' broom did it, said Clip, innocently. Never see such a broom! Bob had hard work to keep a straight face, as he heard Clip's odd accusation against the unoffending broom. This accident seemed to dampen Wolverton's enthusiasm, and the pain in his knee increasing made him desirous of getting home as soon as possible. Besides, he began to suspect that he was on a wrong scent, as he had thus far found no traces of his runaway nephew. He never once noticed the barrel, over which the piece of sail-cloth had been thrown so carelessly. Well, did you find Sam? asked Bob, composedly. No! snapped Wolverton. I seed him jest before you came, Massa Wolverton, said Clip. Where? asked the agent, eagerly. Runnin' along the bank. In what direction?
  • 64. Clip pointed up the creek. Why didn't you tell me that before? You didn't ask me, Massa Wolverton. Take me ashore quick! said Wolverton to Otto. Hurry up, Massa Wolverton, and mebbe you'll catch him! Wolverton was already in the boat, and Otto was rowing him to the shore. Clip went to the barrel and released the prisoner. De ol' man's gone, Sam! he said. I'm glad of it, Clip. I'm almost suffocated. Golly, didn't we fool him! and Clip lay down on his back on deck, and gave way to an explosion of mirth. A minute later the rope was drawn in, and the ferry-boat started on its adventurous career down the creek.
  • 65. CHAPTER XXII. THE FIRST DAY. Bob was accustomed to rowing, but navigation with the ferry-boat presented a new and interesting problem which he was eager to solve. A steering apparatus had been rigged up at the stern, which was found strong enough for the purpose required. Bob took his place at the helm in starting, and managed for the first hour to regulate the direction of his craft. By that time they came to a place where the creek widened considerably, and the boat showed a disposition to whirl round in an eddy. This difficulty, however, was overcome by practice, and Bob began to acquire confidence in himself as a navigator. But it was evident that he could not remain at the helm all day. Come here, Clip, he said; I want you to rest me in steering. Clip took his place, but his first attempts proved discouraging. He was inclined to steer in just the reverse direction, and twice came near running the boat ashore. What are you about, Clip? demanded Bob, in excitement. Don't you see you are running the boat ashore? I done just like you, Massa Bob, protested Clip. De boat acts contrary; never see such an ol' boat. It is you that are contrary, Clip. You don't do as I tell you. I 'clar to gracious I did, Massa Bob. I can't never learn to steer. In fact, Clip, who was naturally lazy, found it very irksome to stand at the helm, and much preferred going here and there on the boat and surveying the scenery on either bank. He hoped that his
  • 66. incompetence would save him from the task. But his dream was rudely disturbed. If you can't take your turn in steering, Clip, said Bob, you won't be of any use to me. I shall have to send you home, and get along with Sam's assistance. Oh, don't send me home, Massa Bob! exclaimed Clip, in alarm. I'll try—'deed I will. I'll try you a little longer, Clip, said Bob; but you must not blame me for sending you back, if it is necessary. No better argument could have been used to insure satisfactory work from Clip, who was naturally careless, and inclined to shirk work. Nevertheless, Bob felt glad that he had another assistant in Sam Wolverton, who proved to possess all the qualities which Clip lacked. When it was one o'clock, Clip began to show signs of distress. I'm pow'ful hungry, Massa Bob, he said, in a pleading tone. So am I, Clip, returned Bob, with a smile. I will see if I can't do something to relieve you. He had brought from home a basket of sandwiches and a gallon of milk. To these the boys did ample justice, displaying even more appetite than usual. This was not surprising, for they had worked hard, and this in the open air. Sam, said Bob, I can't hope to supply you with all the delicacies you would get at home, but I hope you'll make it do with our humble fare. Sam smiled. All the delicacies on Uncle Aaron's table wouldn't spoil anybody's digestion. I like my dinner to-day better than any I've eaten for a
  • 67. long time. I don't know what uncle and aunt would say if they could see me here. De ol' man would be wild, said Clip, with a guffaw. I expect he would, Clip. He isn't fond of me, but he doesn't want to lose me. He will have to do his own chores now, for I don't believe he can get a boy to work for him. About six o'clock in the afternoon, having arrived opposite a town which I will call Rushford, Bob decided to tie up for the night. He and Clip went on shore, leaving Sam in charge of the boat. He did not dare to leave it unguarded, for the cargo, according to his estimate, was worth not far from three thousand dollars. He took the opportunity to enter a restaurant, where he bought Clip and himself cups of coffee, and ordered a fresh supply of sandwiches made up, which he arranged to have delivered at the boat early the next morning. I don't mean that we shall starve, Clip, he said. Clip showed his teeth. Dat coffee's awful good, Massa Bob, he said. Yes, but we can't make it on board the boat. I shall have to depend on getting it at the villages on the way. How far are we from home, Massa Bob? Well thought of, Clip. I will inquire. He asked the keeper of the restaurant the distance to Carver. I don't know, but I think my waiter comes from that neighborhood. Sam, how far away is Carver? Forty miles, answered Sam promptly.
  • 68. I thought it had been more. We have been eight hours coming on the river. That is because the river (they had left the creek fifteen miles up) was winding in its course. On the whole, however, Bob decided that it was very fair progress for the first day, and that only about two-thirds of the time. Rushford was a town of fifteen hundred inhabitants, and presented as busy an appearance as a town three times the size in the East. Clip, who was fond of variety, was reluctant to return to the boat, but Bob said: We must relieve Sam, and give him a chance to come ashore and get some coffee. You come with him, and show him the restaurant. This arrangement suited Clip, who liked as much variety and excitement as possible. On returning to the boat Bob was somewhat surprised to find his young lieutenant in conversation with an old lady dressed in antediluvian costume. She had a sharp face, with an eager, birdlike look, and seemed to be preferring a request. Here's the captain; you can ask him, said Sam, who seemed much relieved by the return of Bob. Is he the captain? asked the old lady. Why, he's nothin' but a boy! He's all the captain we have, answered Sam. Be you in charge of this boat? asked the old lady. Yes, ma'am. What can I do for you? I want to go down to St. Louis, said the old lady, and I thought maybe you might find room for me.
  • 69. But, ma'am, why don't you take passage on a river steamer? They charge too much, said the old lady. I hain't got much money, and I s'pose you wouldn't charge me much. Are you any acquainted in St. Louis? No, ma'am. I thought maybe you might know my darter's husband. He keeps a grocery store down near the river. His name is Jeremiah Pratt, and my darter's name is Melinda Ann. I want to give 'em a surprise. I never met the gentleman. When do you start? To-morrow morning about half-past seven o'clock. Can't you put it off till eight? I've got to pack my trunk over night, and I've got to eat a bit of breakfast to stay my stummik. How much do you charge? I'd be willing to pay you seventy-five cents. How much do the steamboats charge? asked Bob. I think it's six dollars, or it may be seven. That's too much for a poor woman like me. I think you will have to pay it, madam, for we have no accommodation for passengers on our boat. Oh, I ain't a mite particular. You can put me anywhere. I suppose you wouldn't be willing to get into a grain bin? Oh, now you're jokin'. Where do you sleep yourself? On a mattress on the floor; that wouldn't be suitable for a lady like you. Besides, we have no separate rooms. Then you can't take me, no way? asked the old lady, disappointed. I am afraid not, madam.
  • 70. You're real disobligin'. I don't see how I am to get to St. Louis. I am sorry I can't help you. The old woman hobbled off in evident anger. Bob heard afterwards that she was a woman of ample means, fully able to afford steamboat fare, but so miserly that she grudged paying it. Now, Sam, said Bob, Clip will show you the way to a restaurant where you can get a hot cup of coffee and a plate of meat, if you desire it. While the boys were gone, Bob received a visitor.
  • 71. CHAPTER XXIII. A SUSPICIOUS CHARACTER. Fifteen minutes after Sam and Clip had left him Bob's attention was drawn to a man of somewhat flashy appearance, who, while leaning against a tree on the bank, seemed to be eying him and the boat with attention. He wore a Prince Albert coat which was no longer fit to appear in good society, a damaged hat, and a loud neck-tie. His eyes were roving from one point to another, as if he felt a great deal of interest in Bob or the boat. Our hero was not favorably impressed with this man's appearance. I wonder what he sees that interests him so much? he thought. I say, young man, is this here boat yours? he asked. Yes, answered Bob, coldly. What have you got on board? Bob felt under no obligation to answer, but reflecting that there was no good excuse for refusing, he said, briefly: Wheat. Humph! How much have you got? This clearly was none of the questioner's business, and Bob replied by another question: Do you want to buy? I don't know, said the stranger. What do you ask? I can't say till I get to St. Louis. How much do you calc'late to get?
  • 72. Two dollars and a quarter, answered Bob, naming a price beyond his expectations. Ain't that a high figger? Perhaps so. Come, young feller, you don't seem social. Can't you invite me aboard? I don't think you would be paid for coming, said Bob, more and more unfavorably impressed. Oh, I don't mind. My time ain't valuable. I guess I'll come. The stranger stepped across the gang-plank, which Bob had laid from the boat to the shore, and entered without an invitation. Bob was tempted to order him off, but the intruder appeared much stronger than himself; and while he was alone it seemed politic to submit to the disagreeable necessity of entertaining his unwelcome visitor. The latter walked from end to end of the boat, examining for himself without asking permission, or appearing to feel the need of any. He opened the bins and counted them, while Bob looked on uneasily. I say, young feller, you've got a smart lot of wheat here. Yes, said Bob, briefly. Got a thousand bushels, I reckon? Perhaps so. And you expect to get two dollars and a quarter a bushel? Perhaps I shall have to take less. At any rate, you must have two thousand dollars' worth on board. You can judge for yourself.
  • 73. I say, that's a pile of money—for a boy. The wheat doesn't belong to me. Who owns it, then. My mother. What's your mother's name? I have answered all the questions I am going to, said Bob, indignantly. Don't get riled, youngster. It ain't no secret, is it? I don't care about answering all the questions a stranger chooses to put to me. I say, young chap, you're gettin' on your high horse. What is your object in putting all these questions? What is my object? That is what I asked. The fact is, youngster, I've got a ranch round here myself, and I've got about five hundred bushels of wheat I want to market. Naturally I'm interested. See? Bob did not believe a word of this. Where is your ranch? he asked. About two miles back of the town, answered the stranger, glibly. That lie was an easy one. I'm thinkin' some of runnin' down to the city to see if I can't sell my wheat in a lump to some merchant. Mebbe I could strike a bargain with you to carry me down. Bob had even more objection to the new passenger than to the old lady, and he answered stiffly:
"I have no accommodations for passengers."

"Oh, I can bunk anywhere—can lie on deck, on one of the bins. I'm used to roughin' it."

"You'd better take passage by the next steamer. This is a freight boat."

"There ain't anybody but you aboard, is there?"

"Yes; I have two companions."

The stranger seemed surprised and incredulous.

"Where are they?" he asked.

"Gone into the village."

The visitor seemed thoughtful. He supposed the two companions were full-grown men, and this would not tally with his plans. This illusion, however, was soon dissipated, for Sam and Clip at this point crossed the gang-plank and came aboard.

"Are them your two companions?" asked the stranger, appearing relieved.

"Yes."

Sam and Clip eyed him curiously, expecting Bob to explain who he was, but our hero was only anxious to get rid of him.

"Then you can't accommodate me?" asked the man.

"No, sir; but if you'll give me your name and address, I can perhaps sell your crop for you, and leave you to deliver it."

"Never mind, young feller! I reckon I'll go to the city myself next week."

"Just as you like, sir."
He re-crossed the plank, and when he reached the shore took up his post again beside the tree, and resumed his scrutiny of the boat.

"What does that man want?" asked Sam.

"I don't know. He asked me to give him passage to St. Louis."

"You might make money by carrying passengers," suggested Sam.

"I wouldn't carry a man like him at any price," said Bob. "I haven't any faith in his honesty or respectability, though he tells me that he owns a ranch two miles back of the town. He came on the boat to spy out what he could steal, in my opinion."

"How many days do you think we shall need for the trip, Bob?" asked Sam.

"It may take us a week; but it depends on the current, and whether we meet with any obstructions. Are you in a hurry to get back to your uncle?"

"No," said Sam, his face wearing a troubled look. "The fact is, Bob, I don't mean to go back at all."

"You mean dat, Massa Sam?" asked Clip, his eyes expanding in his excitement.

"Yes, I mean it. If I go back I shall have to return to my uncle, and you know what kind of a reception I shall get. He will treat me worse than ever."

"I am sure, Sam, my mother will be willing to let you live with us."

"I should like nothing better, but my uncle would come and take me away."

"Would he have the right?"

"I think he would. He has always told me that my poor father left me to his charge."
"Do you think he left any property?"

"Yes; I feel sure he did; for on his deathbed he called me to him, and said: 'I leave you something, Sam; I wish it were more; but, at any rate, you are not a pauper.'"

"Did you ever mention this to your uncle, Sam?"

"Yes."

"What did he say?"

"It seemed to make him very angry. He said that my father was delirious or he would never have said such absurd things. But I know he was in his right mind. He was never more calm and sensible than when he told me about the property."

"I am afraid, Sam, your uncle has swindled you out of your inheritance."

"I think so, too, but I can't prove anything, and it won't do to say anything, for it makes him furious."

"What does your aunt say?"

"Oh, she sides with Uncle Aaron; she always does that."

"Then I can't say I advise you to return to Carver, although Clip and I are sure to miss you."

"'Deed I shall, Massa Sam," said Clip.

"I think I can pick up a living somehow in St. Louis. I would rather black boots than go back to Uncle Aaron."

"I am sure you can. Perhaps some gentleman will feel an interest in you, and take you into his service."

"I want to tell you, Bob, that Uncle Aaron hates you, and will try to injure you. You will need to be careful."
"That's no news, Sam. He has shown his dislike for me in many ways; but I am not afraid of him," the boy added, proudly.

At nine o'clock the boys went to bed. They were all tired, and all slept well. It was not till seven o'clock that Bob awoke. His two companions were asleep. He roused them, and they prepared for the second day's trip.
CHAPTER XXIV.
CLIP MAKES A LITTLE MONEY FOR HIMSELF.

About noon the next day, while Clip was at the helm, there was a sudden jolt that jarred the boat from stem to stern, if I may so speak of a double-ender ferry-boat. Bob and Sam, who had been occupied with re-arranging some of the cargo, rushed up to the colored pilot.

"What on earth is the matter, Clip?" asked Bob.

"'Clare to gracious, I dunno, Massa Bob," asseverated Clip.

Bob didn't need to repeat the question. Clip had steered in shore, and the boat had run against a tree of large size which had fallen over into the river, extending a distance of a hundred feet into the stream. Of course the boat came to a standstill.

"What made you do this, Clip?" said Bob, sternly.

"Didn't do it, Massa Bob. Ol' boat run into the tree himself."

"That won't do, Clip. If you had steered right, there would have been no trouble."

"I steered just as you told me to, Massa Bob."

"No, you didn't. You should have kept the boat at least a hundred and fifty feet from the shore."

"Didn't I, Massa Bob?" asked Clip, innocently.

"No. Don't you see we are not more than fifty feet away now?"

"I didn't get out and measure, Massa Bob," said Clip, with a grin.
"Now, own up, Clip, were you not looking at something on the bank, so that you didn't notice where you were steering?"

"Who told you, Massa Bob?" asked Clip, wondering.

"I know it must be so. Do you know you have got us into trouble? How am I going to get the boat back into the stream?"

Clip scratched his head hopelessly. The problem was too intricate for him to solve.

"I think, Clip, I shall have to leave you over at the next place we come to. You are more bother than you are worth."

"Oh, don't, Massa Bob. I won't do so again. 'Deed I won't."

Bob didn't relent for some time. He felt that it was necessary to impress Clip with the heinousness of his conduct. At length he agreed to give him one more chance. He had to secure the services of two stout backwoodsmen to remove the tree, and this occasioned a delay of at least two hours. Finally the boat got started again, and for the remainder of the day there was no trouble.

Towards the close of the afternoon they reached a place which we will call Riverton. It was a smart Western village of about two thousand inhabitants. Bob and Sam went on shore to get some supper, leaving Clip in charge.

"Now, Clip, you must keep your eyes open, and take good care of everything while we are gone," said Bob.

"All right, Massa Bob."

About ten minutes after the boys went away Clip was sitting on a barrel whistling a plantation melody, when a slender, florid-complexioned young man stepped aboard.

"Good-evening, sir," he said, removing his hat.
"Evenin'," answered Clip, with a grin. He was flattered by being addressed as sir.

"Are you in charge of this boat?"

"Yes; while Massa Bob and Sam are gone ashore."

"Are they boys like yourself?"

"Yes, sir."

"Are you three all that are on board—I mean all that man the boat?"

"Yes, massa."

"Where are you bound?"

"To St. Louis."

"Do you think they would take me as passenger?"

Clip shook his head.

"They won't take no passengers," he answered. "An ol' woman wanted to go as passenger, and another man" (Clip was unconscious of the bull), "but Massa Bob he said no."

"Suppose I make a bargain with you," said the man, insinuatingly.

"What you mean, massa?" asked Clip, rolling his eyes in wonderment.

"Can't you hide me somewhere without their knowing I am on board?"

"What for I do dat?" asked Clip.

"I'll make it worth your while."

"What's dat?"

"I'll give you five dollars."
"For my own self?"

"Yes; for yourself."

"And I won't have to give it to Massa Bob?"

"No; you can spend it for yourself."

"But Massa Bob would find out to-morrer."

"If he finds out to-morrow I shan't mind."

"And you won't take back the money?"

"No; you can keep the money at any rate."

"Where's the money?" asked Clip, cautiously.

The stranger took out a five-dollar gold piece, and showed it to Clip. Clip had seen gold coins before, and he understood the value of what was offered him.

"Where can I put you?" he said.

"We'll go round the boat together, and see if we can find a place."

The round was taken, and the stranger selected a dark corner behind a bin of wheat.

"Will Massa Bob, as you call him, be likely to look here?"

"No; I reckon not."

"Have you got anything to eat on board which you can bring me by and by?"

"I'm goin' on shore soon as Massa Bob gets back. I'll buy something."

"That will do."