MPEG-V
BRIDGING THE VIRTUAL AND REAL WORLD
KYOUNGRO YOON
SANG-KYUN KIM
JAE JOON HAN
SEUNGJU HAN
MARIUS PREDA
AMSTERDAM • BOSTON • HEIDELBERG • LONDON
NEW YORK • OXFORD • PARIS • SAN DIEGO
SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, UK
525 B Street, Suite 1800, San Diego, CA 92101-4495, USA
225 Wyman Street, Waltham, MA 02451, USA
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK
© 2015 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic
or mechanical, including photocopying, recording, or any information storage and retrieval system,
without permission in writing from the publisher. Details on how to seek permission, further
information about the Publisher’s permissions policies and our arrangements with organizations such
as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website:
www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by
the Publisher (other than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience
broaden our understanding, changes in research methods, professional practices, or medical treatment
may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating
and using any information, methods, compounds, or experiments described herein. In using such
information or methods they should be mindful of their own safety and the safety of others, including
parties for whom they have a professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume
any liability for any injury and/or damage to persons or property as a matter of products liability,
negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas
contained in the material herein.
ISBN: 978-0-12-420140-8
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress.
For Information on all Academic Press publications
visit our website at http://store.elsevier.com/
Typeset by MPS Limited, Chennai, India
www.adi-mps.com
Printed and bound in the United States
Publisher: Todd Green
Acquisition Editor: Tim Pitts
Editorial Project Manager: Charlie Kent
Production Project Manager: Jason Mitchell
Designer: Matthew Limbert
ACKNOWLEDGMENT
This book would not have been possible without the hard work of all the
MPEG-V contributors who, meeting after meeting over three years, built a
consistent architecture supporting multi-sensorial user experiences, bringing
innovative ideas and giving them shape in the form of standard specifications.
It has been an honor and a pleasure to work in such a challenging environment.
Naming all the MPEG-V contributors would require a few pages and would
probably still be incomplete; however, we would like to express special thanks
to Jean Gelissen from Philips and Sanghyun Joo from ETRI, the original
initiators of the project, and to Leonardo Chiariglione from Cedeo for his
significant help in positioning MPEG-V in the MPEG ecosystem.
AUTHOR BIOGRAPHIES
Kyoungro Yoon is a professor in School of Computer Science and
Engineering at Konkuk University, Seoul, Korea. He received the BS
degree in electronic and computer engineering from Yonsei University,
Korea, in 1987, the MSE degree in electrical and computer engineering
from University of Michigan, Ann Arbor, in 1989, and the PhD degree
in computer and information science in 1999 from Syracuse University,
USA. From 1999 to 2003, he was a Chief Research Engineer and Group
Leader at the LG Electronics Institute of Technology, in charge of developing
various product-related technologies and standards in the field of image and
audio processing. In 2003, he joined Konkuk University as an assistant
professor, and he has been a professor there since 2012.
He actively participated in the development of standards such as MPEG-7,
MPEG-21, MPEG-V, JPSearch, and TV-Anytime, serving as a co-chair of the
Ad Hoc Group on User Preferences and chair of the Ad Hoc Group on
MPEG Query Format. He is currently serving as the chair of the Ad Hoc
Group on MPEG-V, the chair of the Ad Hoc Group on JPSearch, and the
chair of the Metadata Subgroup of ISO/IEC JTC1 SC29 WG1 (a.k.a.
JPEG). He has also served as an editor of various international standards,
such as ISO/IEC 15938-12, ISO/IEC 23005-2/5/6, and ISO/IEC 24800-2/5.
He has co-authored over 40 conference and journal publications in
the field of multimedia information systems. He is also an inventor/
co-inventor of more than 30 US patents and 70 Korean patents.
Sang-Kyun Kim received the BS, MS, and PhD degrees in computer
science from University of Iowa in 1991, 1994, and 1997, respectively.
In 1997, he joined the Samsung Advanced Institute of Technology as a
researcher. He was a senior researcher as well as a project leader on the
Image and Video Content Search Team of the Computing Technology
Lab until 2007. In 2007, he joined Myongji University as an assistant
professor, and he has been an associate professor in the Department of
Computer Engineering since 2011. His research interests include digital
content (image, video, and music) analysis and management, image search
and indexing, color adaptation, mulsemedia adaptation, sensors and
actuators, VR, and media-centric IoT. He actively participated in multimedia
standardization activities such as MPEG-7, MPEG-21, MPEG-A,
MPEG-V, as a co-chair and a project editor. He currently serves as a
project editor of the MPEG-V International Standards, i.e., ISO/IEC
23005-2/3/4/5 and 23005-7. He has co-authored over 40 conference and
journal publications in the field of digital content management and mul-
semedia simulation and adaptation. He is also an inventor/co-inventor of
more than 25 US patents and 90 Korean patents.
Jae Joon Han has been a principal researcher at Samsung Advanced
Institute of Technology (SAIT) in Samsung Electronics, Korea since
2007. He received the BS degree in electronic engineering from Yonsei
University, Korea, in 1997, the MS degree in electrical and computer
engineering from the University of Southern California, Los Angeles, in
2001, and the PhD degree in electrical and computer engineering from
Purdue University, West Lafayette, IN, in August 2006. After receiving
the PhD degree, he remained at Purdue as a postdoctoral fellow in 2007. His
research interests include statistical machine learning and data mining,
computer vision, and real-time recognition technologies. He participated
in the development of standards such as ISO/IEC 23005 (MPEG-V) and
ISO/IEC 23007 (MPEG-U), and served as the editor of ISO/IEC 23005-
1/4/6. He has co-authored over 20 conference and journal publications.
He is also an inventor/co-inventor of three US patents and 70 filed inter-
national patent applications.
Seungju Han is currently a senior researcher at Samsung Advanced
Institute of Technology (SAIT) in Samsung Electronics, Korea. He
received the PhD degree in electrical and computer engineering in 2007,
from the University of Florida, USA. In 2007, he joined the Samsung
Advanced Institute of Technology as a research engineer. He participated
in the development of standards such as ISO/IEC 23005 (MPEG-V) and
ISO/IEC 23007 (MPEG-U), and served as the editor of ISO/IEC 23005-
2/5. He has authored and co-authored over 25 research papers in the field
of pattern recognition and human–computer interaction. He is also an
inventor/co-inventor of four US patents and 70 filed international patent
applications.
Marius Preda is an associate professor at Institut MINES-Telecom and
Chairman of the 3D Graphics group of ISO’s MPEG (Moving Picture
Expert Group). He contributes to various ISO standards with technolo-
gies in the fields of 3D graphics, virtual worlds, and augmented reality and
has received several ISO Certifications of Appreciation. He leads a research
team with a focus on Augmented Reality, Cloud Computing, Games and
Interactive Media and regularly presents results in journals and at speaking
engagements worldwide. He serves on the program committees of
international conferences and reviews for top-level research journals.
After being part of various research groups and networks, in 2010
he founded a research team within Institut MINES-Telecom, called
GRIN – GRaphics and INteractive media. The team is conducting
research at the international level cooperating with academic partners
worldwide and industrial ICT leaders. Selected results are showcased on
www.MyMultimediaWorld.com.
Academically, Marius received a degree in Engineering from
Politehnica Bucharest, a PhD in Mathematics and Informatics from
University Paris V, and an eMBA from Telecom Business School, Paris.
PREFACE
Traditional multimedia content is typically consumed via audio-visual
(AV) devices like displays and speakers. Recent advances in 3D video and
spatial audio allow for a deeper user immersion into the digital AV con-
tent, and thus a richer user experience. The norm, however, is that just two
of our five senses – sight and hearing – are exercised, while the other three
(touch, smell, and taste) are neglected.
The recent multitude of new sensors maps the data they capture
onto our five senses and enables us to better perceive the environment,
both locally and remotely. In the literature, the former is referred
to as “Augmented Reality”, and the latter as an “Immersive Experience”.
In parallel, new types of actuators produce different kinds of multi-
sensory effects. Early on, such effects were mostly confined to dedicated
installations in amusement parks equipped with motion chairs, lighting
sources, liquid sprays, etc., but it is increasingly common to see multi-sensory
effects produced in more familiar environments, such as the home.
Recognizing the need to represent, compress, and transmit this kind
of contextual data captured by sensors, and of synthesizing effects that
stimulate all human senses in a holistic fashion, the Moving Picture
Experts Group (MPEG, formally ISO/IEC JTC 1/SC 29/WG 11) rati-
fied in 2011 the first version of the MPEG-V standard (officially known
as “ISO/IEC 23005 – Media context and control”). MPEG-V provides
the architecture and specifies the associated information representations
that enable interoperable multimedia and multimodal communication
within Virtual Worlds (VWs) but also with the real world, paving the way
to a “Metaverse”, i.e., an online shared space created by the convergence of
virtually enhanced reality and physically persistent virtual space that includes
the sum of all Virtual Worlds and Augmented Realities. For example,
MPEG-V may be used to provide multi-sensorial content associated with
traditional AV data, enriching multimedia presentations with sensory effects
created by lights, winds, sprays, tactile sensations, scents, etc.; it may be used
to interact with a multimedia scene through more advanced interaction
paradigms such as hand/body gestures; or it may be used to access different
VWs with an avatar of similar appearance in all of them.
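Concretely, a sensory-effect description of this kind travels as an XML document alongside the AV stream. The sketch below builds a simplified effect list in Python; the element and attribute names are invented for illustration and do not follow the normative ISO/IEC 23005 Part 3 schema.

```python
import xml.etree.ElementTree as ET

# Build a simplified, illustrative sensory-effect description. Element and
# attribute names here are assumptions for illustration only, NOT the
# normative MPEG-V Part 3 (Sensory Information) schema.
def build_effect_list(effects):
    root = ET.Element("SensoryEffects")
    for e in effects:
        ET.SubElement(root, "Effect", {
            "type": e["type"],                 # e.g. "wind", "light", "scent"
            "intensity": str(e["intensity"]),  # normalized 0.0-1.0
            "start": str(e["start"]),          # presentation time in ms
            "duration": str(e["duration"]),    # ms
        })
    return ET.tostring(root, encoding="unicode")

# Two hypothetical effects synchronized with an AV presentation
xml_text = build_effect_list([
    {"type": "wind", "intensity": 0.6, "start": 0, "duration": 4000},
    {"type": "light", "intensity": 0.9, "start": 500, "duration": 1500},
])
```

A renderer on the consumer side would parse such a list and dispatch each effect to the matching actuator (fan, lamp, scent generator) at its start time.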
In the MPEG-V vision, a piece of digital content is not limited to an
AV asset, but may be a collection of multimedia and multimodal objects
Preface
xiv
forming a scene, having their own behaviour, capturing their context, pro-
ducing effects in the real world, interacting with one or several users, etc.
In other words, a digital item can be as complex as an entire VW. Since
standardizing a VW representation is technically possible but not aligned
with industry interests, MPEG-V offers interoperability between VWs (and
between any of them and the real world) by describing virtual objects, and
specifically avatars, so that they can “move” from one VW to another.
This book on MPEG-V draws a global picture of the features made
possible by the MPEG-V standard, and is divided into seven chapters, cov-
ering all aspects from the global architecture, to technical details of key
components – sensors, actuators, multi-sensorial effects – and to applica-
tion examples.
At the time this text was written (November 2014), three editions of
MPEG-V had been published, and the technical community developing
the standard is still very active. As the main MPEG-V philosophy is not
expected to change in future editions, this book is a good starting point
for understanding the principles on which the standard was based. Readers
interested in the latest technical details can consult the MPEG-V Web site
(http://wg11.sc29.org/mpeg-v/).
Marius Preda
Leonardo Chiariglione
MPEG-V. DOI: http://dx.doi.org/10.1016/B978-0-12-420140-8.00001-9
© 2015 Elsevier Inc. All rights reserved.
CHAPTER 1
Introduction to MPEG-V
Standards
Contents
1.1 Introduction to Virtual Worlds
1.2 Advances in Multiple Sensorial Media
1.2.1 Basic Studies on Multiple Sensorial Media
1.2.2 Authoring of MulSeMedia
1.2.3 Quality of Experience of MulSeMedia
1.2.3.1 Test Setups
1.2.3.2 Test Procedures
1.2.3.3 Experimental QoE Results for Sensorial Effects
1.3 History of MPEG-V
1.4 Organizations of MPEG-V
1.5 Conclusion
References
1.1 INTRODUCTION TO VIRTUAL WORLDS
The concept of a virtual world has become a part of our everyday lives
so recently that we have not even noticed the change. There have been
various attempts at defining a virtual world, each with its own point of
view. The worlds that we are currently experiencing, from the view-
point of information technology, can be divided into three types: the real
world, virtual worlds, and mixed worlds. Conventionally, a virtual world,
also referred to frequently as virtual reality (VR), is a computer-generated
environment that gives its participants the impression that they are present
within that environment [1]. According to Milgram and
Kishino [1], real objects are those having actual existence that can be
observed directly or can be sampled and resynthesized for viewing,
whereas virtual objects are those that exist in essence or effect, but not
formally or actually, and must be simulated.
Recently, Gelissen and Sivan [2] redefined a virtual world as an inte-
gration of 3D, Community, Creation, and Commerce (3D3C). Here, 3D
indicates a 3D visualization and navigation for the representation of a
virtual world, and 3C represents the three key factors that make a virtual
world closer to the real world, which can be characterized by daily inter-
actions for either economic (creation and commerce) or noneconomic/
cultural (community) purposes.
Virtual worlds can also be divided into gaming and nongaming worlds.
A virtual gaming world is a virtual world in which the behavior of the
avatar (user) is goal-driven. The goal of a particular game is given within
its design. Lineage [3] and World of Warcraft [4] are examples of virtual
gaming worlds. Figure 1.1 shows a screen capture from World of Warcraft. In
contrast, a nongaming virtual world is a virtual world in which the behav-
ior of the avatar (user) is not goal-driven. In a nongaming virtual world,
there is no goal provided by the designer, and the behavior of the avatar
depends on the user’s own intention. An example of a nongaming virtual
world is Second Life by Linden Lab, a captured image of which is shown in
Figure 1.2 [5].
A virtual world can provide an environment for both collaboration
and entertainment [6]. Collaboration is mainly enabled by features of the
virtual world such as 3D virtual environments, in which presence, realism,
and interactivity can be supported to a higher degree than with
conventional collaboration technology, and avatar-based interactions,
through which the social presence of the participants and their
self-presentation can be supported to a higher degree than in any other
existing environment.
Figure 1.1 A virtual gaming world (from World of Warcraft).
1.2 ADVANCES IN MULTIPLE SENSORIAL MEDIA
1.2.1 Basic Studies on Multiple Sensorial Media
Along with the sensations associated with 3D films and UHD display panels,
the development of Multiple Sensorial Media (MulSeMedia), or 4D media,
has received significant attention from the public. 4D content generally adds
sensorial effects to 3D, UHD, and/or IMAX content, allowing audiences
to immerse themselves more deeply into the content-viewing experience.
Along with the two human senses of sight and hearing, sensorial effects such
as wind, vibration, and scent can stimulate other senses, such as the tactile
and olfactory senses. MulSeMedia content denotes audiovisual content
annotated with sensory effect metadata [7].
The attempts to stimulate other senses while playing multimedia
content have a long history. Sensorama [8,9], an immersive VR motorbike
simulator, was a pioneer in MulSeMedia history. As a type of futuristic
cinema, Sensorama rendered sensorial effects with nine different fans, a
vibrating seat, and aromas to simulate a blowing wind, driving over gravel,
and the scent of a flower garden or pizzeria. Although Sensorama was not
successful in its day, its technology was a forerunner of current 4D theaters
and the gaming industry.
The significance of olfactory or tactile cues has been reported in many
previous studies [10–14]. Dinh et al. [10] reported that the addition of
tactile, olfactory, and auditory cues into a VR environment increases the
Figure 1.2 A nongaming virtual world (from Second Life).
user’s sense of presence and memory of the environment. Bodnar et al.
[11] reported that the olfactory modality is less effective at alerting users
than other modalities, such as vibration and sound, but has a less disruptive
effect on the users’ primary task. Ryu and Kim [12] studied the effectiveness
of vibro-tactile effects on the whole body to simulate collisions between
users and their virtual environment.
Olfactory cues can be used to evoke human memories. Brewster et al.
[13] presented a study on the use of smell for searching through digital
photo collections, and compared text- and odor-based tagging (Figure 1.3).
For the first stage, sets of odors and tag names were generated from the
users’ descriptions of different photos. The participants then used these to
tag their photos, returning two weeks later to answer questions regarding
these images. The results showed that the performance when using odors
was lower than that of simple text searching, but some of the participants
had their memories of their photos evoked through the use of smell.
Ghinea and Ademoye [14] presented a few design guidelines for the
integration of olfaction (with six odor categories) into multimedia
applications. Finally, Kannan et al. [15] discussed the significance of other
senses incorporated in the creation of digital content for the packaging
industry, healthcare systems, and educational learning models.
1.2.2 Authoring of MulSeMedia
The difficulties in producing MulSeMedia content lie mainly in the time
and effort required to author the sensory effects. For the successful
industrial deployment of MulSeMedia services, the provision of an easy
and efficient means of producing MulSeMedia content plays a critical role.
Figure 1.4 shows examples of the authoring tools used to create digital
content with sensorial effects.
Figure 1.3 Search and retrieve based on odor [13].
Figure 1.4 Authoring tools for sensorial effects: (A) SEVino by Waltl et al. [18,19],
(B) RoSEStudio by Choi et al. [16], and (C) SMURF by Kim [17].
Waltl et al. [18,19] presented a sensory effect authoring tool called
SEVino (Figure 1.4A), which can verify XML instances from the Java
Architecture for XML Binding (JAXB) against the XML schema
specified in MPEG-V, Part 3 (which is described in Chapter 2). Choi et al.
[16] presented an authoring tool known as RoSEStudio (Figure 1.4B),
together with a framework for streaming services with sensorial effects,
to bring about an at-home 4D entertainment system based on the MPEG-V
standard. Kim [17] presented an authoring tool known as SMURF
(Figure 1.4C), which not only can create GroupOfEffects but also supports
Declaration and ReferenceEffect, allowing ordinary users to easily create
their own desired sensorial effect metadata. Figure 1.5 shows 20 icons
indicating sensorial effects such as wind, temperature, scent, fog, light,
vibration, motion, and tactile sensations.
The authoring of MulSeMedia content can be boosted by extract-
ing sensorial information automatically from the content itself. In other
words, sensory effects can be generated automatically by extracting senso-
rial (physical and emotional) properties from the content and by mapping
the major attributes of the extracted properties to the sensory effects [7].
This can speed up the authoring process significantly.
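As a minimal sketch of such automatic extraction, the following Python fragment derives an ambient-light effect from a single video frame by averaging its RGB pixels, one of the simple calculations discussed in the works cited in the next paragraph. The frame representation (a list of (r, g, b) tuples) and the effect dictionary are illustrative assumptions, not code from those works.

```python
# Derive an ambient-light effect from one frame by averaging its RGB pixels.
# A real pipeline would operate on decoded video frames; here a frame is
# simply a list of (r, g, b) tuples.
def average_rgb(pixels):
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    return (round(r), round(g), round(b))

def light_effect_for_frame(pixels, timestamp_ms):
    # map the averaged frame color directly onto an ambient-light device
    r, g, b = average_rgb(pixels)
    return {"effect": "light", "color": (r, g, b), "start": timestamp_ms}
```

Running this per frame (or per shot) yields a stream of light effects that track color changes in the content with no manual authoring.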
Extracting physical properties such as the color characteristics from the
content was achieved by Waltl et al. [19] and Timmerer et al. [20]. In their
works, ambient light devices were controlled using automatic color
calculations (e.g., averaging the RGB or dominant colors in the RGB, HSV,
and HMMD spaces) to enable an immediate reaction to color changes
within the content. Kim et al. [7] extracted the color temperature from the
content to convert it into four categories of emotional properties (i.e., hot,
warm, moderate, and cool). The extracted emotional properties are in turn
mapped to temperature effects to author the MulSeMedia content
automatically.

Figure 1.5 Sensorial effect menu icons [16].

Figure 1.6 Sensorial effect simulation [21].
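The color-temperature route can be sketched as follows: convert an average frame color to CIE xy chromaticity, estimate the correlated color temperature with McCamy's approximation, and bucket it into an emotional category. The category thresholds below are illustrative assumptions, not the values used by Kim et al. [7].

```python
# Sketch: average frame color -> CIE xy chromaticity -> correlated color
# temperature (McCamy's approximation) -> emotional category.
def rgb_to_xy(r, g, b):
    # linearize sRGB with a simple gamma-2.2 approximation, then sRGB -> XYZ (D65)
    r, g, b = ((c / 255.0) ** 2.2 for c in (r, g, b))
    x_ = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y_ = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z_ = 0.0193 * r + 0.1192 * g + 0.9505 * b
    s = x_ + y_ + z_
    return (x_ / s, y_ / s)

def cct_mccamy(x, y):
    # McCamy's cubic approximation of correlated color temperature, in kelvin
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n ** 3 + 3525.0 * n ** 2 + 6823.3 * n + 5520.33

def emotional_category(cct):
    # low CCT = reddish, "hot" scene; high CCT = bluish, "cool" scene.
    # Thresholds are illustrative, not those of Kim et al. [7].
    if cct < 3000:
        return "hot"
    if cct < 4500:
        return "warm"
    if cct < 6000:
        return "moderate"
    return "cool"
```

A neutral gray frame maps to roughly 6500 K (near the D65 white point) and lands in the "cool" bucket, while a strongly reddish frame falls well below 3000 K and is classified "hot", which would then trigger a warming temperature effect.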
The sensory effects created by different authoring tools can be visual-
ized through sensory effect simulators. Kim et al. [21] presented a sensible
media simulator (Figure 1.6) for a 4D simulation in an automobile envi-
ronment and the implementation of sensorial actuators. Waltl et al. [19]
briefly described a simulator (SESim) to evaluate the quality of the multi-
media experience presented to the users.
1.2.3 Quality of Experience of MulSeMedia
It is important to know how digital content enriched with additional sen-
sorial effects actually affects the level of satisfaction. Therefore, the quality
of experience regarding sensorial effects is measured through a careful
experimental design. In this section, publicly known test setups along
with regulated test procedures are described, as well as a few experimental
results on the quality of experience (QoE) of MulSeMedia.
1.2.3.1 Test Setups
Waltl et al. [18] collected a total of 76 video sequences from different
genres, i.e., action, documentary, sports, news, and commercial sequences,
and described them based on their sensorial effects (i.e., wind, vibration,
and light). They released a dataset comprising a number of video sequences
from different genres as a means to inspire similar research. Furthermore,
they described possible test setups using off-the-shelf hardware for
conducting subjective quality assessments. The setup for one amBX system
consists of two fans, two light-speakers, a wall washer, a Wrist Rumbler,
and a subwoofer (left-most side of Figure 1.7A). The middle of Figure 1.7A
shows the test setup using two amBX systems. The third test setup
(right-most side of Figure 1.7A) consists of two amBX systems and two
sets of Cyborg Gaming Lights. Figure 1.7B shows the actual test setup
depicted on the right-most side of Figure 1.7A.
Waltl et al. [22] presented a demonstration setup that uses stereoscopic
3D and sensory devices, i.e., fans, vibration panels, and lights (Figure 1.7C).
They reported that the combination of 3D content with sensorial effects
allows further improvement in the viewing experience for users.
1.2.3.2 Test Procedures
Rainer et al. [23] presented recommendations for the test setups and
methods used in the MulSeMedia experience. Figure 1.8 shows the
experimental procedure for a MulSeMedia viewing experience. In the
first stage, the test participants have to read the introduction, which
explains the purpose of the actual experiment. In the second stage, some
demographic and educational information about the participants is
acquired using a pre-questionnaire. The training phase is provided to
eliminate the surprise effect and help the participants become familiar
with the stimulus presentation. The main evaluation adheres to the
recommendations of ITU P.910 and P.911 [24,25] regarding the test
methods and design. Two of the main evaluation methods used, i.e.,
DCR and DSCQS, are presented in Figure 1.9. Finally, a
post-questionnaire asks the participants whether they have already
participated in a similar experiment and gives them a chance to provide
feedback.
Figure 1.7 Sensorial effect test setups [18,22].
Figure 1.8 Test procedure for the sensorial effects [23].
Figure 1.9 (A) DCR and (B) DSCQS [23].
Figure 1.9A shows the Degradation Category Rating (DCR) method.
In T1, the reference content is presented, and in T2, the content with
sensorial effects is shown. Between T1 and T2, a gray screen is presented
to the participants. Figure 1.9B shows the Double Stimulus Continuous
Quality Scale (DSCQS) method. T1 shows the presentation of a video
sequence without sensorial effects. T2 illustrates the rating of emotions
and their intensity. T3 shows a presentation of the same video sequence
with sensorial effects, and finally, T4 provides the rating of the emotions
and their intensity for the video sequence with sensorial effects.
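Scoring such a session is straightforward: the ratings collected at each vote are averaged per test condition into a mean opinion score (MOS), here on the 5-point ITU impairment scale used with DCR (5 = imperceptible, 1 = very annoying). The ratings below are invented for illustration, not data from [23].

```python
# Average per-condition ratings into a mean opinion score (MOS).
# Ratings use the 5-point DCR impairment scale (5 = imperceptible,
# 1 = very annoying); the values here are invented sample data.
def mean_opinion_score(ratings):
    return sum(ratings) / len(ratings)

dcr_ratings = {
    "action_clip": [5, 4, 4, 5, 3],  # clip with sensorial effects, genre: action
    "news_clip": [3, 3, 2, 4, 3],    # clip with sensorial effects, genre: news
}

mos = {clip: mean_opinion_score(r) for clip, r in dcr_ratings.items()}
```

Comparing per-genre MOS values in this way is how the genre-dependent results in the next subsection are obtained.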
1.2.3.3 Experimental QoE Results for Sensorial Effects
Waltl et al. [26] investigated the QoE based on various video bit-rates of
multimedia contents annotated with sensorial effects (e.g., wind, vibra-
tion, and light). The results show that the level of satisfaction of a video
sequence with sensorial effects is higher than that of a video without
sensorial effects. Timmerer et al. [20] presented the QoE test results for
wind, vibration, and lighting effects for the action, sports, documentary,
news, and commercial genres, which indicate that the action, sports, and
documentary genres benefit more from sensorial effects than the news and
commercial genres.
Rainer et al. [27] studied the emotional responses of users and the
enhancement of the QoE of Web video sequences. In particular, the
authors’ QoE experiments were conducted in Austria and Australia to
investigate whether geographical and cultural differences affect the
elicited emotional responses of the users.
Timmerer et al. [28] derived a utility model for sensory experiences
using their previous QoE experimental results. The aim of this util-
ity model was to estimate the QoE of multimedia content with senso-
rial effects as compared with the QoE of multimedia content without
sensorial effects. The proposed utility model shows that a linear relation-
ship exists between the QoE without sensorial effects and the QoE with
sensorial effects.
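Such a linear utility model can be sketched by fitting ordinary least squares to paired QoE measurements. The sample points and the resulting coefficients below are placeholders for illustration, not the values reported in [28].

```python
# Fit a linear utility model: QoE with sensorial effects as a linear
# function of QoE without them. Data points are invented placeholders.
def fit_linear(xs, ys):
    # ordinary least squares for y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# hypothetical paired MOS values: (without effects, with effects)
qoe_without = [1.0, 2.0, 3.0, 4.0]
qoe_with = [1.6, 2.7, 3.8, 4.9]
a, b = fit_linear(qoe_without, qoe_with)

def predicted_qoe_with_effects(q):
    # estimate the enriched-content QoE from the plain-content QoE
    return a * q + b
```

With a fitted slope above 1, the model predicts that sensorial effects raise satisfaction more for content that already scores well, which is the kind of relationship a utility model of this form can capture.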
Kim et al. [21] presented the relationship between the QoE with sen-
sorial effects and the learning types of the participants. The experimen-
tal results showed that stimulations from the vibration effects generated
greater satisfaction in people with a high tactile perception capability at a
statistically significant level. Stimulations through vibration effects gener-
ated more satisfaction in people with a low visual perception level as well.
This indicates that vibration effects can be assumed to be a high priority
for people with a high tactile perception capability and/or a low visual
perception capability. Kim et al. [7] also showed that the sequences with
temperature effects automatically generated through a color temperature
estimation clearly enhanced the level of satisfaction.
Yazdani et al. [29] analyzed the electroencephalograms (EEG) of five
participants during their perception of both unpleasant and pleasant
odorous stimuli. They identified the regions of the brain cortex that are
active during the discrimination of unpleasant and pleasant odor stimuli.
1.3 HISTORY OF MPEG-V
MPEG-V shares a similar view of virtual and real worlds, except that its
definition of a real world is tighter and that of a virtual world has been
extended compared to the conventional definitions. In MPEG-V, sampled
and resynthesized environments of the real world are no longer considered
real worlds and are instead viewed as virtual worlds. Therefore, movies
or video sequences depicting the real world are also considered another
representation of a virtual world. This change in the definitions of real
and virtual worlds has made it possible to develop the concepts of
virtual-to-real and real-to-virtual adaptation.
Creating and enjoying films in 3D have become popular, a turning
point being the 3D movie Avatar, which had unprecedented success owing
to its 3D effects. One reason for this success is the ability to immerse the
user in the story through the creation of a full audiovisual environment.
Additionally, by providing more effects on top of the audiovisual ones, it
is possible to achieve deeper immersion in terms of user experience. One
possibility is to add special effects produced by (sensorial) actuators,
so-called 4D effects, which affect senses other than sight and hearing.
Other modalities, such as olfaction, mechanoreception, equilibrioception,
or thermoception, may be stimulated, giving the feeling of being part of
the media content and resulting in a meaningful and consistent user
experience. In particular, 4D movies that include sensorial effects such as
wind, vibration, lighting, and scent can stimulate the human sensory
system using actuators such as fans, motion chairs, lighting devices, and
scent generators. Such rendering of sensorial effects in the real world is
an example of a virtual-to-real adaptation.
It is also well known that user interaction is a powerful means to
improve the user experience. Interacting with digital content, thereby
changing it from a linear content, as in the case of traditional movies,
allows users to be not only spectators but also actors. The success of
complex video games that create an entire universe is an indicator of
the role such an interaction can play. More generally, virtual worlds are
typical applications using 3D technologies, allowing the user to interact
and change both the storyline and the environment. A notable example
is Second Life, which allows users to project themselves into virtual characters (called avatars). Through their avatars, users can live a virtual life: communicate with others, perform daily activities, and own virtual assets such as houses and other types of property. In massively multiplayer online role-playing games (MMORPGs) such as World of Warcraft or Lineage, users
can operate their characters in a virtual world and cooperate with oth-
ers to fulfill missions. Such 3D games immerse users in a virtual world by
providing a fictional environment that can otherwise only be experienced
in their imagination.
Moreover, controlling virtual worlds with sensors provides an even more
immersive media experience. The effective control of objects in such virtual
worlds has been developed in many ways: the motions of users captured
from a set of sensors are used to control game characters. The recently
developed “Kinect” sensor can capture the full-body skeleton of each user
and use captured data to manipulate objects in a virtual world. In addition,
some pioneering technologies are used to capture brain waves to recognize
the user’s intention and/or internal state.These activities for controlling the
avatars or objects of a virtual world by sensing the real environment and
real objects can be viewed as an example of a real-to-virtual adaptation.
Because each of these technologies related to immersive multisenso-
rial experiences is based on proprietary products, there is no standard way
for representing the data from sensors and actuators in the real world, and
no common way to interface with a virtual world. As a result, each proprietary virtual world has been isolated from the others. This hinders users when migrating from one virtual world to another; therefore, when a virtual world loses its appeal, all of the assets produced within it, and the community itself, are lost. To increase the usability and interoperability of virtual worlds, improve their controls, and increase the quality of the user experience, the MPEG community has developed
the MPEG-V standard (ISO/IEC 23005) with the intention of offering
a common information representation format. The standardization work
in MPEG-V was initiated in 2008, and the second version of the standard
was published in 2013 [30–36].
MPEG-V was initiated in 2008 based on two separate projects with
different objectives. One is the Metaverse EU project whose objective
is to provide a framework for interoperability between heterogeneous
virtual worlds [2,37]. The other is the Single Media Multiple Devices
(SMMD) project of ETRI, Korea, whose objective is to develop tech-
nology providing new media services with sensory effects using multiple
devices [38].
Metaverse project-related proposals were first submitted at the 81st MPEG
meeting in Lausanne, Switzerland, in July 2007, and SMMD project-related
proposals were first submitted at the 82nd MPEG meeting in Shenzhen in
October 2007. The Metaverse project was renamed MPEG-V and focused on the exchange of information between virtual worlds. The SMMD project was renamed Representation of Sensory Effects (RoSE) and focused on the representation of sensory effects for new types of media services.
At the 87th meeting in Lausanne, Switzerland, in February 2009,
the two on-going projects of MPEG-V and RoSE were merged into
the MPEG-V standard, which deals with both virtual and real worlds.
The architecture and an introduction to the standard were given in Part 1 of MPEG-V. The control information was provided in Part 2. Representations of the sensory effects and sensory effect metadata were given in Part 3, and
the representation of avatars was provided in Part 4. Committee drafts of the
first edition were released at the 89th meeting in London in July 2009. At
the 90th meeting in Xi'an, China, in October 2009, discussions were held on the subdivision of the control information of Part 2, which was finally divided into two separate parts at an Ad Hoc meeting in Paris in December 2009: the control information remained in Part 2, and the data formats for interaction devices became Part 5. At the 91st Kyoto meeting, the common tools and
types from each part of the standard were extracted and became the newly
added Part 6. Finally, the reference software is provided in Part 7.
The first edition of the complete set of MPEG-V specifications was
published in early 2011. At the 91st Kyoto meeting in January 2010, the
need for binary representations of the MPEG-V tools for greater transfer
efficiency was raised, and work on the second edition of the standard was
started. After creating a binary representation of all existing tools in the
first edition, as well as new sensory effects and other additional tools, the
second edition was finally published in 2013.
Currently, the third edition of the standard is progressing with the
addition of more effects, devices, and sensors.
1.4 ORGANIZATION OF MPEG-V
MPEG-V (Media context and control), published in ISO/IEC 23005,
provides an architecture and specifies the associated information repre-
sentations to enable bridges between the real world and digital content,
and to increase the interoperability between virtual worlds. MPEG-V is
applicable in various business models/domains in which audiovisual content can be associated with sensorial effects that need to be rendered on
appropriate actuators and/or benefit from well-defined interactions with
an associated virtual world.
A well-defined connection between the real and virtual worlds is
needed to reach simultaneous reactions in both worlds. This is done
in MPEG-V by defining an architecture that provides interoperability at various levels. Efficient, effective, intuitive, and entertaining interfaces between users and virtual worlds are of crucial importance for the wide acceptance and use of such technologies. To improve the process of creating virtual worlds, a better design methodology and better tools are indispensable.
The MPEG-V standard consists of the following parts: Part 1:
Architecture [30]; Part 2: Control Information [31]; Part 3: Sensory
Information [32]; Part 4: Virtual World Object Characteristics [33]; Part
5: Formats for Interaction Devices [34]; Part 6: Common Types and Tools
[35]; and Part 7: Conformance and Reference Software [36].
Part 1 provides an overview of MPEG-V along with the architecture
and various use cases or applications of the MPEG-V standard.
Part 2 provides the tools for a description of the capabilities of the
actuators and sensors, the user’s preferences regarding the sensory effects,
and their preferences in terms of the sensor adaptations. Altogether, these
tools are called the control information, and are used for the detailed and
personalized control of the actuators and sensors. The control information
is provided using the Control Information Description Language (CIDL)
with the Device Capability Description Vocabulary (DCDV), Sensor
Capability Description Vocabulary (SCDV), User’s Sensory Preference
Vocabulary (USPV), and Sensor Adaptation Preference Vocabulary
(SAPV), whose syntaxes are defined using the XML schema.
Part 3 provides the tools for a description of the sensorial effect in syn-
chronization with the media content. The descriptions of the sensorial
effect or sensory effect metadata (SEM) are defined using the Sensory
Effect Description Language (SEDL) with the Sensory Effect Vocabulary
(SEV) based on the XML schema.
Part 4 defines the characteristics of a virtual-world object to provide
tools enabling the interoperability between virtual worlds. It also provides
tools for the description or metadata of avatars and virtual objects. The
metadata describe the characteristics of the avatars and virtual objects in
terms of their nature, character, and appearance, to name a few, but do not
provide the actual shape, texture, or rendering information.
Part 5 specifies the interfaces or data formats for an interoperable
exchange of information to/from the sensors and actuators. These inter-
faces are defined by the Interaction Information Description Language
(IIDL) with the Device Command Vocabulary (DCV) and Sensed
Information Vocabulary (SIV) based on the XML schema. The DCV
defines the data formats used as commands to the actuators. The SIV
defines the data formats used for transferring sensed information from a
sensor to the adaptation engine or to the information destination.
Part 6 specifies the syntax and semantics of the data types and tools
that are common to more than one part of the MPEG-V standard. In the
appendix of this part of the standard, the classification schemes for various
sets of terms, such as the unit and scent types, are also defined.
Part 7 provides the reference software and specifies the conformance
using a Schematron.
Figure 1.10 shows a diagram of the MPEG-V system architec-
ture and its data transition scenarios. The MPEG-V specifications are
used for three different types of media exchanges between real and virtual worlds. The first media exchange is the information adaptation from a virtual world into the real world (Figure 1.10A). It accepts sensorial effect data (specified in MPEG-V, Part 3) and/or Virtual World Object Characteristics (MPEG-V, Part 4) as contextual inputs; accepts Actuator
Capability and/or Actuation Preferences (MPEG-V, Part 2) and/or
Sensed Information (MPEG-V, Part 5) as control parameters; and gener-
ates Actuator Commands (MPEG-V, Part 5) to the real-world actuators.
The VR adaptation engine converts (or adapts) either the Virtual World
Object Characteristics or the sensorial effect data from a virtual world
into the Actuator Commands in the real world in accordance with the
input control parameters. The manner in which the adaptation engine
is implemented is not within the scope of the MPEG-V standardiza-
tion. The second media exchange is the information adaptation from
the real world into a virtual world. The real-to-virtual adaptation engine
accepts Sensed Information (MPEG-V, Part 5) from sensors as the real-
world context; accepts Sensor Capability and/or Sensor Adaptation
Preferences (MPEG-V, Part 2) as control parameters; and generates Virtual World Object Characteristics (MPEG-V, Part 4) and/or adapted Sensed Information (MPEG-V, Part 5) to the associated virtual-world objects (Figure 1.10B). The RV adaptation engine converts (or adapts) the sensed information from the real-world sensors into the Virtual World Object Characteristics and/or the adapted sensed information of a virtual world in accordance with the input control parameters. Finally, information exchange between virtual worlds is conducted by adapting proprietary Virtual World Object Characteristics into the normatively specified Virtual World Object Characteristics (MPEG-V, Part 4) (Figure 1.10C).

Figure 1.10 MPEG-V architectures and data transition scenarios: (A) a virtual- into real-world scenario, (B) a real- into virtual-world scenario, and (C) a virtual- into virtual-world scenario.
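The virtual-to-real adaptation step can be sketched in a few lines. This is an illustrative toy, not a normative MPEG-V interface: the function name, the percentage-based capability model, and the scalar preference value are all assumptions, since the standard deliberately leaves the adaptation engine's implementation open.

```python
def adapt_effect_to_command(effect_intensity_pct: float,
                            capability_max_pct: float,
                            preference_scale: float) -> float:
    """Turn a sensorial effect intensity (Part 3) into an actuator command
    (Part 5): scale by the user's actuation preference, then clamp to what
    the actuator capability allows (both described in Part 2)."""
    desired = effect_intensity_pct * preference_scale
    return min(max(desired, 0.0), capability_max_pct)

# A wind effect authored at 100% intensity, a fan capable of at most 80%,
# and a user preferring half-strength effects yield a 50% fan command.
print(adapt_effect_to_command(100.0, 80.0, 0.5))  # 50.0
```

With `preference_scale=1.0` the same call would saturate at the 80% capability limit, which is the kind of clamping any adaptation engine must provide in some form, however it is implemented.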
1.5 CONCLUSION
MPEG-V (ISO/IEC 23005) provides the architecture and neces-
sary associated information representation supporting the informa-
tion exchanges between the real and virtual worlds, and the information
exchange between virtual worlds. To support the information exchanges,
the information between the two worlds should be adapted by consider-
ing the capabilities of each world and the user preferences regarding the
information. Each component of the information adaptation is addressed in a separate part of ISO/IEC 23005. Finally, adoption of the standardized information representation provides opportunities for 4D broadcasting, natu-
ral interaction with intelligent sensors within any virtual world, seamless
interaction between real and virtual worlds, and the importing of virtual
characters and objects between virtual worlds.
REFERENCES
[1] P. Milgram, F. Kishino, A taxonomy of mixed reality visual displays, IEICE Trans. Inf.
Syst. E77-D (12) (1994).
[2] J.H.A. Gelissen, Y.Y. Sivan, The Metaverse1 case: historical review of making one virtual worlds standard (MPEG-V), J. Virtual Worlds Res. 4 (3) (2011).
[3] Lineage. <http://lineage.plaync.com>, (last accessed on 20.09.14).
[4] World of WarCraft. <http://www.battle.net/wow/>, (last accessed on 20.09.14).
[5] Second Life. <http://secondlife.com>, (last accessed on 20.09.14).
[6] S. van der Land, A.P. Schouten, B. van der Hooff, F. Feldberg, Modelling the Metaverse:
a theoretical model of effective team collaboration in 3D virtual environments,
J. Virtual Worlds Res. 4 (3) (2011).
[7] S.-K. Kim, S.-J. Yang, C. Ahn, Y. Joo, Sensorial information extraction and mapping to
generate temperature sensory effects, ETRI J. 36 (2) (2014) 232–241.
[8] H. Rheingold, Virtual Reality, Summit Books, New York, NY, 1991 (Chapter 2).
[9] J.J. Kaye, Making scents: aromatic output for HCI, Interactions 11 (1) (2004) 48–61.
[10] H.Q. Dinh, N. Walker, L.F. Hodges, C. Song, A. Kobayashi, Evaluating the importance
of multisensory input on memory and the sense of presence in virtual environments,
in: Proceedings—Virtual Reality Annual International Symposium, 1999, pp. 222–228.
[11] A. Bodnar, R. Corbett, D. Nekrasovski, AROMA: ambient awareness through olfac-
tion in a messaging application: Does olfactory notification make “scents?” in: Sixth
International Conference on Multimodal Interfaces, 2004, pp. 183.
[12] J. Ryu, G.J. Kim, Using a vibro-tactile display for enhanced collision perception and
presence, in: VRST ‘04: Proceedings of the ACM Symposium on Virtual Reality
Software and Technology, ACM, New York, NY, 2004, pp. 89–96.
[13] S.A. Brewster, D.K. McGookin, C.A. Miller, Olfoto: designing a smell-based interac-
tion, in: CHI 2006: Conference on Human Factors in Computing Systems, 2006,
p. 653.
[14] G. Ghinea, O.A. Ademoye, Olfaction-enhanced multimedia: perspectives and challenges, Multimed. Tools Appl. (2010) 1–26.
[15] R. Kannan, S.R. Balasundaram, F. Andres,The role of mulsemedia in digital content
ecosystem design, in: Proceedings of the International Conference on Management of
Emergent Digital EcoSystems, 2010, pp. 264–266.
[16] B. Choi, E.-S. Lee, K.Yoon, Streaming media with sensory effect, in: Proceedings of
the International Conference on Information Science and Application, Jeju Island,
Republic of Korea, April 26–29, 2011, pp. 1–6.
[17] S.-K. Kim, Authoring multisensorial content, Signal Process. Image Commun. 28 (2)
(2013) 162–167.
[18] M. Waltl, C. Timmerer, B. Rainer, H. Hellwagner, Sensory effect dataset and test
setups, in: IEEE Proceedings of the Fourth International Workshop Quality
Multimedia Experience, 2012, pp. 115–120.
Introduction to MPEG-V Standards 19
[19] M. Waltl, C. Timmerer, H. Hellwagner, A test-bed for quality of multimedia experience
evaluation of sensory effects, in: Proceedings of the International Workshop Quality
Multimedia Experience, San Diego, CA, July 29–31, 2009, pp. 145–150.
[20] C. Timmerer, M. Waltl, B. Rainer, H. Hellwagner, Assessing the quality of sensory
experience for multimedia presentations, Signal Process. Image Commun. 27 (8)
(2012) 909–916.
[21] S.-K. Kim, Y.-S. Joo, Y. Lee, Sensible media simulation in an automobile application and
human responses to sensory effects, ETRI J. 35 (6) (2013) 1001–1010.
[22] M. Waltl, B. Rainer, S. Lederer, et al., A 4D multimedia player enabling sensory experi-
ence, in: IEEE Proceedings of the Fifth International Workshop Quality Multimedia
Experience, 2013, pp. 126–127.
[23] B. Rainer, C. Timmerer, M. Waltl, Recommendations for the subjective evaluation
of sensory experience, in: Fourth International Workshop on Perceptual Quality of
Systems, 2013.
[24] ITU-T Rec. P.910, Subjective Video Quality Assessment Methods for Multimedia
Applications,April 2008.
[25] ITU-T Rec. P.911, Subjective Audiovisual Quality Assessment Methods for Multimedia
Applications, December 2008.
[26] M. Waltl, C. Timmerer, H. Hellwagner, Improving the quality of multimedia expe-
rience through sensory effects, in: IEEE Proceedings of the Second International
Workshop Quality Multimedia Experience, 2010, pp. 124–129.
[27] B. Rainer, M. Waltl, E. Cheng, et al., Investigating the impact of sensory effects on the
quality of experience and emotional response in web videos, in: IEEE Proceedings
of the Fourth International Workshop Quality Multimedia Experience, 2012,
pp. 115–120.
[28] C. Timmerer, B. Rainer, M. Waltl, A utility model for sensory experience, in: IEEE
Proceedings of the Fifth International Workshop Quality Multimedia Experience,
2013, pp. 224–229.
[29] A. Yazdani, E. Kroupi, J. Vesni, T. Ebrahimi, Electroencephalogram alterations during
perception of pleasant and unpleasant odors, in: IEEE Proceedings of the Fourth
International Workshop Quality Multimedia Experience, Yarra Valley, Australia, 2012,
pp. 272–277.
[30] ISO/IEC 23005-1: 2014 Information technology—Media context and control—Part
1: Architecture, January 2014.
[31] ISO/IEC 23005-2:2013 Information technology—Media context and control—Part
2: Control information, November 2013.
[32] ISO/IEC 23005-3:2013 Information technology—Media context and control—Part
3: Sensory information, November 2013.
[33] ISO/IEC 23005-4:2013 Information technology—Media context and control—Part
4: Virtual world object characteristics, November 2013.
[34] ISO/IEC 23005-5: 2013 Information technology—Media context and control—Part
5: Data formats for interaction devices, November 2013.
[35] ISO/IEC 23005-6: 2013 Information technology—Media context and control—Part
6: Common types and tools, November 2013.
[36] ISO/IEC 23005-7: 2014 Information technology—Media context and control—Part
7: Conformance and reference software, January 2014.
[37] Metaverse, <http://www.metaverse1.org>, (last accessed on 20.09.14).
[38] B.S. Choi, S.H. Joo, H.Y. Lee, Sensory effect metadata for SMMD media service, in:
Proceedings of the Fourth International Conference on Internet and Web Applications and Services, Venice/Mestre, Italy, May 2009.
MPEG-V. DOI: http://dx.doi.org/10.1016/B978-0-12-420140-8.00002-0
© 2014 Elsevier Inc. All rights reserved.
CHAPTER 2
Adding Sensorial Effects to Media Content
Contents
2.1 Introduction
2.2 Sensory Effect Description Language
2.2.1 SEDL Structure
2.2.2 Base Data Types and Elements of SEDL
2.2.3 Root Element of SEDL
2.2.4 Description Metadata
2.2.5 Declarations
2.2.6 Group of Effects
2.2.7 Effect
2.2.8 Reference Effect
2.2.9 Parameters
2.3 Sensory Effect Vocabulary: Data Formats for Creating SEs
2.4 Creating SEs
2.5 Conclusion
References
2.1 INTRODUCTION
MPEG-V, Part 3: Sensory information (ISO/IEC 23005-3), specifies the
Sensory Effect Description Language (SEDL) [1] as an XML schema-
based language that enables one to describe sensorial effects (SEs) such
as light, wind, fog, and vibration that trigger human senses. The actual
SEs are not part of the SEDL but are defined within the Sensory Effect
Vocabulary (SEV) for extensibility and flexibility, allowing each applica-
tion domain to define its own SEs. A description conforming to SEDL is
referred to as Sensory Effect Metadata (SEM) and may be associated with
any type of multimedia content (e.g., movies, music, Web sites, games). The
SEM is used to steer actuators such as fans, vibration chairs, and lamps
using an appropriate mediation device to increase the user experience.
That is, in addition to the audiovisual (AV) content of a movie, for example, the user will also perceive other effects such as those described above, giving
the user the sensation of being part of the particular media content, which
MPEG-V
22
will result in a worthwhile, informative user experience. The concept of
receiving SEs in addition to AV content is depicted in Figure 2.1.
The media and corresponding SEM may be obtained from a Digital
Versatile Disc (DVD), Blu-ray Disc (BD), or any type of online service
(i.e., download/play or streaming). The media processing engine, which is
also referred to as the adaptation engine, acts as the mediation device and
is responsible for playing the actual media content resource and accompa-
nied SEs in a synchronized way based on the user’s setup in terms of both
the media content and the rendering of the SEs. Therefore, the media process-
ing engine may adapt both the media resource and the SEM according to
the capabilities of the various rendering devices.
The SEV defines a clear set of actual SEs to be used with the SEDL in
an extensible and flexible way. That is, it can be easily extended with new
effects or through a derivation of existing effects thanks to the extensibil-
ity feature of the XML schema. Furthermore, the effects are defined based
on the authors’ (i.e., creators of the SEM) intention independent from the
end user’s device setting, as shown in Figure 2.2.
The sensory effect metadata elements or data types are mapped to
commands that control the actuators based on their capabilities. This mapping is usually provided by the Virtual-to-Real adaptation engine and was deliberately left undefined in the standard, i.e., it is left open to industry competition. It is important to note that there is not necessarily a one-
to-one mapping between elements or data types of the SE data and ACs.
For example, the effect of hot/cold wind may be rendered on a single
device with two capabilities, i.e., a heater or air conditioner, and a fan or
ventilator.
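The hot/cold wind example can be made concrete with a sketch of such a non-one-to-one mapping. The device names, the 25 °C threshold, and the tuple format are hypothetical, chosen only to illustrate one authored effect fanning out into two capability-specific commands:

```python
def map_wind_effect(temperature_c: float, wind_intensity_pct: float):
    """Split a single hot/cold wind effect into two commands for a device
    combining a heater/air conditioner with a fan/ventilator."""
    thermal_device = "heater" if temperature_c >= 25 else "air_conditioner"
    return [(thermal_device, temperature_c), ("fan", wind_intensity_pct)]

# A hot wind at 30 degrees C and 60% intensity becomes a heater command
# plus a fan command; a cold wind would engage the air conditioner instead.
print(map_wind_effect(30, 60))  # [('heater', 30), ('fan', 60)]
```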
As shown in Figure 2.3, the SEs can be adjusted into adapted SEs (i.e., device commands as defined in MPEG-V, Part 5) in accordance with the capabilities of the actuators (ACs, defined in MPEG-V, Part 2) and the actuation preferences (APs, defined in MPEG-V, Part 2, as user sensory preferences).
Figure 2.1 Concept of MPEG-V SEDL [1].
Figure 2.4 shows an example of combining SEs (SEs in MPEG-V, Part 3) with sensed information (SI in MPEG-V, Part 5) to generate adapted actuator commands (ACmd in MPEG-V, Part 5). For example, the SE corresponding to a scene might be cooling the temperature to 5°C and adding a wind effect with 100% intensity. Assume instead that the current room temperature is 12°C. It would be unwise to deploy the cooling and wind effects as described in the SE data because the temperature inside the room is already low, and users may feel uncomfortable with the generated SEs. Therefore, a sensor measures the room temperature, and the adaptation engine generates the adapted SEs (i.e., ACmds), which are, for instance, a reduced wind effect (20% intensity) and a heating effect (20°C).

Figure 2.2 Mapping of author's intentions to SE data and actuator capabilities (ACs) [2].

Figure 2.3 The adapted SEs (actuator commands defined in MPEG-V, Part 5) generated by combining SEs with ACs and the user's APs.
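The temperature adaptation just described can be sketched as a toy rule. The comfort threshold, the 20% damping factor, and the function shape are arbitrary illustrations, since MPEG-V deliberately leaves the adaptation logic unspecified:

```python
def adapt_with_sensed_info(target_temp_c: float, wind_pct: float,
                           room_temp_c: float, comfort_c: float = 20.0):
    """Adapt an authored cooling/wind SE using the sensed room temperature:
    if the room is already colder than comfortable, heat instead of cool
    and damp the wind effect rather than render it at full strength."""
    if target_temp_c < room_temp_c <= comfort_c:
        return comfort_c, round(wind_pct * 0.2, 6)
    return target_temp_c, wind_pct

# The example from the text: 5 degrees C / 100% wind authored, 12 degrees C
# measured, adapted to a 20 degrees C heating effect and a 20% wind effect.
print(adapt_with_sensed_info(5.0, 100.0, 12.0))  # (20.0, 20.0)
```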
This chapter is organized as follows. Section 2.2 describes the details of
the SEDL. Section 2.3 presents the SEV, which specifies the data formats
used for creating SEs. Section 2.4 presents XML instances using SEDL
and SEV. Finally, Section 2.5 concludes the chapter.
2.2 SENSORY EFFECT DESCRIPTION LANGUAGE
2.2.1 SEDL Structure
The SEDL is an XML-based language providing the basic building blocks with which content providers can author the sensory effect metadata defined by the MPEG-V standard.
Figure 2.4 The adapted SEs (actuator commands defined in MPEG-V, Part 5) generated by combining SEs with SI.
2.2.2 Base Data Types and Elements of SEDL
There are two base types in the SEDL. The first base type is
SEMBaseAttributes, which includes six base attributes and one base attribute group. The schema definition of SEMBaseAttributes is shown in Table 2.1.
The activate attribute describes whether the SE shall be activated.
The duration attribute describes the duration of the SE rendering. The
fade attribute describes the fade time within which the defined inten-
sity is reached. The alt attribute describes an alternative effect identi-
fied by the uniform resource identifier (URI). For example, an alternative
effect is chosen because the original intended effect cannot be rendered
owing to a lack of devices supporting this effect. The priority attri-
bute describes the priority for effects with respect to other effects in the
same group of effects sharing the same point in time when they should
become available for consumption. A value of 1 indicates the highest pri-
ority, and larger values indicate lower priorities. The location attribute
describes the location from where the effect is expected to be received
from the user’s perspective according to the X, Y, and Z axes, as depicted
in Figure 2.5. A classification scheme that may be used for this purpose
is LocationCS, as defined in Annex A of ISO/IEC 23005-6. For example,
urn:mpeg:mpeg-v:01-SI-LocationCS-NS:left:*:midway defines the location
as follows: left on the X-axis, any location on the Y-axis, and midway on
the Z-axis. That is, it describes all effects on the left-midway side of the
user. The SEMAdaptabilityAttributes contains two attributes related to
the adaptability of the SEs. The adaptType attribute describes the preferred
Table 2.1 Schema definition of SEMBaseAttributes
<attributeGroup name="SEMBaseAttributes">
<attribute name="activate" type="boolean" use="optional"/>
<attribute name="duration" type="positiveInteger" use="optional"/>
<attribute name="fade" type="positiveInteger" use="optional"/>
<attribute name="alt" type="anyURI" use="optional"/>
<attribute name="priority" type="positiveInteger" use="optional"/>
<attribute name="location" type="mpeg7:termReferenceType"
use="optional"/>
<attributeGroup ref="sedl:SEMAdaptabilityAttributes"/>
</attributeGroup>
<attributeGroup name="SEMAdaptabilityAttributes">
<attribute name="adaptType" type="sedl:adaptTypeType" use="optional"/>
<attribute name="adaptRange" type="sedl:adaptRangeType" default="10"
use="optional"/>
</attributeGroup>
type of adaptation using the following possible instantiations: strict, i.e., an adaptation by approximation may not be performed; under, i.e., an adaptation by approximation may be performed with a smaller effect value than the specified effect value; over, i.e., an adaptation by approximation may be performed with a greater effect value than the specified effect value; and both, i.e., an adaptation by approximation may be performed between the upper and lower bounds specified by adaptRange. The adaptRange attribute describes the upper and lower bounds, in terms of percentage, for adaptType.
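Putting the base attributes together, a single effect instance might look like the sketch below, parsed here with Python's standard ElementTree. The namespace URIs, the WindType name, and the attribute values follow the patterns quoted in this chapter but are illustrative, not normative:

```python
import xml.etree.ElementTree as ET

# Hypothetical SEDL effect carrying the SEMBaseAttributes described above.
sem_fragment = """
<Effect xmlns="urn:mpeg:mpeg-v:2010:01-SEDL-NS"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:sev="urn:mpeg:mpeg-v:2010:01-SEV-NS"
        xsi:type="sev:WindType"
        activate="true" duration="10" fade="2" priority="1"
        location="urn:mpeg:mpeg-v:01-SI-LocationCS-NS:left:*:midway"
        adaptType="both" adaptRange="10"/>
"""

effect = ET.fromstring(sem_fragment)
# The effect is rendered on the user's left, anywhere on the Y-axis,
# midway on the Z-axis, fading in over 2 time units.
print(effect.get("location"))  # urn:mpeg:mpeg-v:01-SI-LocationCS-NS:left:*:midway
```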
There are five base elements (Table 2.2), i.e., Declarations, GroupOfEffects, Effect, ReferenceEffect, and Parameter, which are explained in detail in the following sections; all extend the abstract SEMBaseType type (the top-most base type in SEDL). This structure of having an abstract type is a way of providing extensibility in the standard: any element whose type extends SEMBaseType can be used wherever such an element is instantiated. SEMBaseType has an id attribute that identifies an instance of SEMBaseType (Table 2.3 and Figure 2.6).
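The extension mechanism can also be shown in schema form. The fragment below is a hand-written illustration of how an effect base type would extend SEMBaseType and pull in SEMBaseAttributes, following the pattern of Tables 2.1 to 2.3; it is not a verbatim excerpt from ISO/IEC 23005-3:

```python
import xml.etree.ElementTree as ET

# Illustrative derivation: a concrete base type extends the abstract
# SEMBaseType and attaches the SEMBaseAttributes group.
schema_sketch = """
<schema xmlns="http://www.w3.org/2001/XMLSchema">
  <complexType name="EffectBaseType" abstract="true">
    <complexContent>
      <extension base="sedl:SEMBaseType">
        <attributeGroup ref="sedl:SEMBaseAttributes"/>
      </extension>
    </complexContent>
  </complexType>
</schema>
"""

root = ET.fromstring(schema_sketch)  # well-formedness check
ext = root.find(".//{http://www.w3.org/2001/XMLSchema}extension")
print(ext.get("base"))  # sedl:SEMBaseType
```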
Figure 2.5 Location model for SEs and reference coordinate system.
2.2.3 Root Element of SEDL
Table 2.4 shows the schema definition of the SEM root element of SEDL
along with the structure diagram shown in Figure 2.7. The SEM root ele-
ment can contain the DescriptionMetadata element; unlimited repetitions
of the Declarations element, GroupOfEffects element, Effect element,
and ReferenceEffect element; and anyAttribute, which can identify the
process units and associate time information with them. The DescriptionMetadata
element, Declarations element, GroupOfEffects element, Effect element,
and ReferenceEffect element types are explained in the following sections
in detail.
Figure 2.6 Definition of the SEMBaseType type.
Table 2.2 Schema definition of base elements in SEDL
<element name="Declarations" type="sedl:DeclarationsType"/>
<element name="GroupOfEffects" type="sedl:GroupOfEffectsType"/>
<element name="Effect" type="sedl:EffectBaseType"/>
<element name="ReferenceEffect" type="sedl:ReferenceEffectType"/>
<element name="Parameter" type="sedl:ParameterBaseType"/>
Table 2.3 Schema definition of SEMBaseType
<complexType name="SEMBaseType" abstract="true">
  <complexContent>
    <restriction base="anyType">
      <attribute name="id" type="ID" use="optional"/>
    </restriction>
  </complexContent>
</complexType>
The anyAttribute contains siAttributeList, which holds properties
related to process-unit fragmentation, i.e., anchorElement, puMode, and
encodesAsRAP, and properties related to time information, i.e.,
timescale, ptsDelta, absTimeScheme, absTime, and pts. There is a rule that the
SEM element must have a timescale attribute. siAttributeList is the XML
streaming instruction defined in ISO/IEC 21000-7 (MPEG-21). The
XML streaming instructions allow, first, identifying the process units in an
XML document and, second, assigning time information to these units.
These instructions are particularly required when an entire XML document
is fragmented into small pieces of (e.g., well-formed) XML documents
for effective streaming or storage purposes. The GroupOfEffects,
Effect, and ReferenceEffect elements can again contain
siAttributeList to describe the properties related to the fragmentation
and time information.
Table 2.5 shows an instance of the SEM root element including several
attributes used to identify the namespaces, as well as an example of
the siAttributeList attributes. The puMode and timeScale in the SEM
root element are inherited by the child anchor elements. The puMode
"ancestorsDescendants" indicates that each process unit contains the
anchor element, its ancestors, and its descendant elements. The timeScale
attribute specifies the timescale, i.e., the number of ticks per second.
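As a small illustration of the timescale semantics (the helper below is hypothetical, not part of the standard), a pts value expressed in ticks can be converted to wall-clock seconds as follows:

```python
def pts_to_seconds(pts: int, timescale: int) -> float:
    """Convert a pts timestamp (in ticks) to seconds, given the
    timescale (ticks per second) inherited from the SEM root element."""
    if timescale <= 0:
        raise ValueError("timescale must be a positive number of ticks per second")
    return pts / timescale

# With si:timeScale="1000" as in Table 2.5, a pts of 28500 ticks
# places an effect 28.5 s into the presentation.
```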
Table 2.4 Schema definition of SEM root element
<element name="SEM">
  <complexType>
    <sequence>
      <element name="DescriptionMetadata" type="sedl:DescriptionMetadataType" minOccurs="0" maxOccurs="1"/>
      <choice maxOccurs="unbounded">
        <element ref="sedl:Declarations"/>
        <element ref="sedl:GroupOfEffects"/>
        <element ref="sedl:Effect"/>
        <element ref="sedl:ReferenceEffect"/>
      </choice>
    </sequence>
    <anyAttribute namespace="##other" processContents="lax"/>
  </complexType>
</element>
Figure 2.7 Structure diagram of the SEM root element.
Table 2.5 Example instance of SEM root element
<?xml version="1.0"?>
<SEM
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="urn:mpeg:mpeg-v:2010:01-SEDL-NS"
  xmlns:sev="urn:mpeg:mpeg-v:2010:01-SEV-NS"
  xmlns:mpeg7="urn:mpeg:mpeg7:schema:2004"
  xmlns:si="urn:mpeg:mpeg21:2003:01-DIA-XSI-NS"
  xsi:schemaLocation="urn:mpeg:mpeg-v:2010:01-SEV-NS MPEG-V-SEV.xsd"
  si:puMode="ancestorsDescendants" si:timeScale="1000">
  …
</SEM>
2.2.4 Description Metadata
The DescriptionMetadata element describes general information about
the SE metadata, such as the creation information or classification scheme
alias. As shown in Table 2.4, the DescriptionMetadata element extends
DescriptionMetadataType, which again extends MPEG7:DescriptionMetadataType.
As shown in Figure 2.8, MPEG7:DescriptionMetadataType
describes general information such as the creators, version, creation
time, and rights information. DescriptionMetadataType also contains
the ClassificationSchemeAlias element, which describes an alias for
a classification scheme referenced by a URI.
Figure 2.8 Structure diagram of DescriptionMetadataType.
An example instance of the
ClassificationSchemeAlias element of DescriptionMetadataType is
shown in Table 2.6. In this instance, the URI of the classification scheme,
urn:mpeg:mpeg-v:01-SI-ColorCS-NS, is replaced by the alias "COLOR" such
that the light effect specifies its light color attribute as ":COLOR:amber"
instead of using "urn:mpeg:mpeg-v:01-SI-ColorCS-NS:amber."
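The effect of ClassificationSchemeAlias can be mimicked with a short sketch (the helper and alias table below are illustrative, not defined by the standard): it expands an aliased term reference back to the full classification-scheme URN.

```python
def expand_term(term: str, aliases: dict) -> str:
    """Expand a ':ALIAS:term' reference using the declared aliases;
    anything else is returned unchanged."""
    if term.startswith(":"):
        alias, _, name = term[1:].partition(":")
        if alias in aliases and name:
            return f"{aliases[alias]}:{name}"
    return term

# Alias table corresponding to the sedl:ClassificationSchemeAlias element in Table 2.6.
aliases = {"COLOR": "urn:mpeg:mpeg-v:01-SI-ColorCS-NS"}
# expand_term(":COLOR:amber", aliases) yields
# "urn:mpeg:mpeg-v:01-SI-ColorCS-NS:amber"
```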
2.2.5 Declarations
The Declarations type, which extends the SEMBaseType type, describes
a declaration of sensory effects, groups of sensory effects, or the parameters (Figure 2.9). In other words, an element defined by the Declarations
type can contain an unbounded number of effects, groups of effects, or
parameters that can be referenced later by the ReferenceEffect element.
Table 2.6 Example instance of the DescriptionMetadata element and its usage in a
light effect
<sedl:DescriptionMetadata>
  <sedl:ClassificationSchemeAlias href="urn:mpeg:mpeg-v:01-SI-ColorCS-NS" alias="COLOR"/>
</sedl:DescriptionMetadata>
<sedl:Effect xsi:type="sev:LightType" intensity-value="50.0"
  intensity-range="0.00001 32000.0" duration="28" color=":COLOR:amber" si:pts="0"/>
Figure 2.9 Structure diagram of DeclarationsType.
For example, if a group of effects called "explosion," which is composed of
light, scent, and vibration effects, is declared in the Declarations element,
it can be reused several times during the last part of a movie sequence
using ReferenceEffect elements.
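The declare-once, reference-many pattern described above can be sketched in miniature (plain dictionaries stand in for the XML elements; all names are illustrative):

```python
# A Declarations block holding one reusable group of effects.
declarations = {
    "explosion": [  # GroupOfEffects composed of light, scent, and vibration
        {"type": "sev:LightType", "intensity-value": 50.0},
        {"type": "sev:ScentType", "intensity-value": 0.2},
        {"type": "sev:VibrationType", "intensity-value": 0.8},
    ],
}

def resolve_reference(uri: str) -> list:
    """Resolve a ReferenceEffect uri (a fragment identifier here) to the
    effects declared under that id."""
    return declarations[uri.lstrip("#")]

# Three ReferenceEffect elements near the end of the movie all reuse
# the single declaration instead of repeating the three effects.
finale = [resolve_reference("#explosion") for _ in range(3)]
```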
2.2.6 Group of Effects
GroupOfEffectsType, which extends SEMBaseType, describes a group of
two or more SEs (Figure 2.10). The SE elements in GroupOfEffects can
be defined by either EffectBaseType or ReferenceEffectType. There are
several rules applied for implementing GroupOfEffects. GroupOfEffects
shall have a timestamp (i.e., pts, ptsDelta, or absTime). Outside of the
Declarations, GroupOfEffects shall not have both pts and absTime at the
same time, because if these two attributes contain different timestamps,
the decoder cannot decide which one to follow for rendering the SEs.
GroupOfEffects within Declarations shall have only ptsDelta as a timestamp;
this means that the SEs in a GroupOfEffects within the Declarations
may have different starting times. The GroupOfEffects element can contain
the siAttributeList to describe the properties related to the fragmentation
and time information for effective XML streaming.
Figure 2.10 Structure diagram of GroupOfEffectsType.
2.2.7 Effect
EffectBaseType extends SEMBaseType and provides a base abstract
type for a subset of types defined as part of the sensory effect metadata
types (Figure 2.11). EffectBaseType contains the siAttributeList in
anyAttribute to describe the properties related to the fragmentation and
time information for effective XML streaming. This type includes the
autoExtraction attribute, which describes the automatic extraction of SEs
and their major attributes, such as the intensity-value, from a media
resource such as a video or audio sequence.
Figure 2.11 Structure diagram of EffectBaseType.
This type also includes the SupplementalInformation element with
SupplementalInformationType (Figure 2.12) to describe the reference
region (i.e., the ReferenceRegion element) for an automatic extraction from a
video sequence, and the Operator element, which describes how to extract
SEs from the reference region of the video sequence. The Operator element
can be specified as either average or dominant.
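For intuition, the two Operator choices can be sketched as follows (a toy extractor over a list of region pixel values; this is not the standard's normative procedure):

```python
from collections import Counter

def extract_intensity(pixels: list, operator: str = "average") -> float:
    """Derive an effect value from a reference region's pixel values."""
    if operator == "average":
        return sum(pixels) / len(pixels)      # mean over the region
    if operator == "dominant":
        return Counter(pixels).most_common(1)[0][0]  # most frequent value
    raise ValueError(f"unknown Operator: {operator}")
```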
The following rules shall be referenced to generate the valid Effect
metadata.
1. At least one of activate, duration, or fade shall be defined.
2. Effect outside of GroupOfEffects shall have a timestamp (i.e., pts,
ptsDelta, or absTime).
3. Effect within GroupOfEffects shall have only ptsDelta for a
timestamp.
4. Effect shall not have both pts and absTime at the same time.
5. Effect within Declarations shall have only ptsDelta for a timestamp.
6. If duration is defined, activate may not be defined.
7. If fade and duration are defined, activate may not be defined.
8. If fade is defined, the intensity shall also be defined.
9. If fade and duration are defined, fade must be less than or equal to
duration.
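Some of these constraints lend themselves to a mechanical check. The sketch below (informal, not conformance code) tests a subset of the rules, 1 through 5, against an Effect's attribute set:

```python
def check_effect(attrs: dict, in_group: bool = False, in_declarations: bool = False) -> list:
    """Return the list of violated rules for one Effect (rules 1-5 only)."""
    errors = []
    if not any(k in attrs for k in ("activate", "duration", "fade")):
        errors.append("rule 1: activate, duration, or fade is required")
    timestamps = [k for k in ("pts", "ptsDelta", "absTime") if k in attrs]
    if not in_group and not timestamps:
        errors.append("rule 2: a timestamp (pts, ptsDelta, or absTime) is required")
    if (in_group or in_declarations) and timestamps != ["ptsDelta"]:
        errors.append("rules 3/5: only ptsDelta is allowed as a timestamp here")
    if "pts" in attrs and "absTime" in attrs:
        errors.append("rule 4: pts and absTime are mutually exclusive")
    return errors

# A well-formed standalone Effect: duration satisfies rule 1, pts satisfies rule 2.
assert check_effect({"duration": 28, "pts": 0}) == []
```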
2.2.8 Reference Effect
ReferenceEffectType describes a reference to a SE, groups of SEs, or
parameters (Figure 2.13). The uri attribute describes such a
reference by a URI. ReferenceEffectType contains
siAttributeList in anyAttribute to describe the properties related to
fragmentation and time information for effective XML streaming.
Figure 2.12 Structure diagram of SupplementalInformationType.
The following rules shall be referenced to generate valid ReferenceEffect
metadata.
1. ReferenceEffect outside of GroupOfEffects shall have a timestamp
(i.e., pts, ptsDelta, or absTime).
2. ReferenceEffect within GroupOfEffects shall have only ptsDelta for a
timestamp.
3. ReferenceEffect shall not have both pts and absTime at the same time.
4. ReferenceEffect within Declarations shall have only ptsDelta for a
timestamp.
2.2.9 Parameters
ParameterBaseType simply extends SEMBaseType, as shown in Figure
2.14. ColorCorrectionParameterType is the only type of parameter,
supporting the color correction effect. The parameters define the color
characteristics of the content provider's display device along with the
lighting conditions surrounding the content provider.
Figure 2.13 Structure diagram of ReferenceEffectType.
The parameters
passed through this type enable the consumer side to reproduce display
colors exactly the same as the colors created by the content provider.
ColorCorrectionParameterType
contains five elements: ToneReproductionCurves, ConversionLUT,
ColorTemperature, InputDeviceColorGamut, and IlluminanceOfSurround.
The ToneReproductionCurves element represents the characteristics (e.g.,
gamma curves for the R, G, and B channels) of the provider's display device.
The ConversionLUT element is a look-up table (matrix) that converts
an image between an image color space (e.g., RGB) and a standard
connection color space (e.g., CIE XYZ). The ColorTemperature
element describes the white point setting (e.g., D65, D93) of the content
provider's display device. The InputDeviceColorGamut element
describes an input display device's color gamut, which is represented by
the chromaticity values of the R, G, and B channels at the maximum
digital-to-analog converter (DAC) values. The IlluminanceOfSurround element
describes the illuminance level of the provider's viewing environment.
The illuminance is represented in lux. Figure 2.15 shows the structure of
ColorCorrectionParameterType.
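To make the role of ConversionLUT concrete, the sketch below applies a 3×3 matrix taking linear RGB to CIE XYZ. The matrix shown is the well-known linear sRGB/D65 one, used here purely as a stand-in for a provider-supplied LUT:

```python
# Conversion matrix from linear sRGB to CIE XYZ (D65 white point).
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(rgb, matrix=SRGB_TO_XYZ):
    """Map a linear RGB triple into the standard connection space (CIE XYZ)."""
    return [sum(m * c for m, c in zip(row, rgb)) for row in matrix]

# Linear white (1, 1, 1) lands on the D65 white point, roughly (0.9505, 1.0, 1.0890).
```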
2.3 SENSORY EFFECT VOCABULARY: DATA FORMATS FOR CREATING SEs
The SEDL provides a high-level structure and, as described in this chapter,
only provides abstract elements through which the extended types of
individual SEs can be instantiated. The data format for creating the metadata
of each individual SE is defined as a sensory effect vocabulary in
this standard. Table 2.7 shows the list of SEs defined in ISO/IEC
23005-3:2013. There are 15 SEs currently defined, and all of them are defined
as extensions of EffectBaseType with the exception of FlashType and
PassiveKinestheticMotionType. FlashType is defined as an extension of
Figure 2.14 Structure diagram of ParameterBaseType.
Figure 2.15 Structure diagram of ColorCorrectionParameterType.
Table 2.7 Sensory effect vocabulary defined in ISO/IEC 23005-3:2013
Type name Base type
LightType EffectBaseType
FlashType LightType
TemperatureType EffectBaseType
WindType EffectBaseType
VibrationType EffectBaseType
SprayingType EffectBaseType
ScentType EffectBaseType
FogType EffectBaseType
ColorCorrectionType EffectBaseType
RigidBodyMotionType EffectBaseType
PassiveKinestheticMotionType RigidBodyMotionType
PassiveKinestheticForceType EffectBaseType
ActiveKinestheticType EffectBaseType
TactileType EffectBaseType
ParameterizedTactileType EffectBaseType
The latest newspapers, bouquets, of choice flowers for everyone, concert
parties, and indeed everything good that kind hearts could think of was
showered upon us. I remember a western bed-patient asked me if I thought
they would get him a plug of a special brand of chewing tobacco. He hadn't
been able to buy it in four years overseas and it was his favorite. Sure
thing, one of them said and in half-an-hour two men came aboard lugging
along enough of that tobacco to stock a small store! I cannot go further into
details. Enough to say that every trip it was the same, except that their
hospitality became more systematized. Probably fifteen thousand wounded
Canadian soldiers passed through Portland on their way home, and I know
they will find it hard to forget the free-handed, warm-hearted welcome they
got in that city. This memory will surely be a leaven working towards the
maintenance and development of peace and good-will between Canada and
the United States.
But I must tell my story of the North. One trip Major Dick Shillington
persuaded me to give them a Klondike evening. We gathered down below in
H Mess and there I told something of the life-story of my old friend of
by-gone days, a trail-blazer and prospector, Duncan McLeod.
* * * * *
When first I met Duncan McLeod ("Cassiar Mac" he was commonly
called), he and his partner, John Donaldson, both old men, were working far
up a tributary of Gold Bottom Creek which had not yet been fully
prospected. Each liked to have his own house and do his own cooking, and
so they lived within a few yards of each other in the creek bottom at the
foot of the mountain summit that rose between them and Indian River. My
trail over to the other creeks passed across their ground, and when we
became friends I seldom failed to arrange my fortnightly trip over the
divide so as to reach their place about dusk. I would have supper with one
or the other and stay the night.
McLeod was an old-country Scot, Donaldson born of Scottish parents in
Glengarry county, Ontario. I am not using their real names, but they were
real men. One of them, Donaldson, is still living in the wilds of the Yukon,
still prospecting. He was the first white man the Teslin Indians had seen and
known. They looked upon him as their Hi-yu-tyee, a sort of super-chief,
during the years he lived near them. He had been just and kind with them,
and his consequent influence saved occasional serious friction between the
Indians and whites from becoming murder or massacre.
After supper we would all three gather in one of the cabins and I would
hear good talk until far towards midnight. Then there would be a pause and
McLeod would say, "Well, Mr. Pringle, I think it is time we were getting
ready for our beds." I knew what he meant. "Yes, it is," I would reply. The
Bible would be handed me, I would read a chapter and we would kneel in
prayer to God. Then to our bunks for a good sleep and early away in the
morning for me to make the twenty-five miles over the heights to Gold Run
before dark.
What great talks those were I used to hear. I was only a boy then, and
these old men had seen so much of the wild free life of the West of long ago
days. What stirring adventures they had had! They came west before the
railways by way of the American prairies, and, lured by gold discoveries,
had entered the mountains, and then, following the prospector's will-o-the-wisp,
"the better luck that lies just over the divide," they had gone farther
and farther north. They had met and become partners in the Caribou camp,
and had been together nearly forty years, in the Cassiar, on the Stewart, at
Forty-Mile and now the Klondike.
Donaldson had a wonderful native power of description. When story-
telling he would pace slowly back and forth in the shadow beyond the dim
candle-light and picture with quiet, resonant voice scenes and events of the
past. How vivid it seemed to me! How the soul of the young man thrilled as
he listened! Often there was a yearning at my heart when under his spell to
lay aside my mission and go out into the farthest wilds, seeking adventure
and living the free, fascinating life they had lived. How I wish I had written
down these stories as they were told to me. But maybe they wouldn't have
written, for much of the interest lay in the personality of the story-teller.
McLeod's part was usually to help with dates or names when
Donaldson's memory failed to recall them, but often he too would spin a
yarn, and when he did there was always in its telling a gentleness, I can
think of no better word, that gave a charm often missing in Donaldson's
rougher style.
They were both big men physically, but McLeod had been magnificent.
He was now nearly eighty years old and broken with rheumatism, but in the
giant frame and noble face and head crowned with its snow-white hair I saw
my ideal of what a great Highland Chieftain might have been in the brave
days of old.
Donaldson told me one night, while his partner was making a batch of
bread in his own cabin, what he knew of McLeod's history. "I have never
known a man," he said, "that would measure up to my partner. None of us
want our record searched too closely but it wouldn't make any difference to
him. Nothing, nobody, seemed to have any power to make Mac do anything
crooked or dirty. Whisky, gambling, bad women—he passed them up
without apparent effort. Very strange too, even the few good women we
have met in these camps never won anything from him but wholesome
admiration. He had only to say the word and he could have had any one of
them, but he didn't seem to care that way. What his experience had been
before we met I do not know, he has never spoken much about it to anyone.
But he and I have lived together as partners for nearly half a century,
through the crazy, wicked days of all these gold camps, and Mac never did
anything that he would be ashamed to tell his own mother."
A fine tribute. Perhaps under the circumstances the finest thing that
could be said of any man, for you cannot imagine the thousand almost
irresistible temptations that were part of the daily life of the stampeders in
those northern camps. Enough for me to say that many men of really good
character back East, where they were unconsciously propped up by
influences of family, church, and community, failed miserably to keep their
footing when they came to the far north where all these supports were
absent and temptation was everywhere. I do not judge them. God only
knows the fight they had before they surrendered. So it was an arresting
event to meet a man who had seen it all and whose partner of forty years
told me he had lived clean.
I often wondered what McLeod's story was. I had known him for three
years before I ventured to ask him details about his home in Scotland, and
why he left it to come so far away. I knew he had been reared in a village
south of Edinburgh, in a good home with good parents, and much else he
had told me, but there had always been a reticence that made you certain
there was something else held back.
One winter night when we were alone in his cabin, he opened his heart
to me. He was an old-fashioned Scot. I was his minister and he knew me
well. Besides he was coming to the end of the trail, and he needed a
confidant.
He said his story was hardly worth while bothering me with, I knew
most of it, but what he could never tell anyone was about the lassie he had
loved and lost. He had fallen in love with the brown eyes and winsome face
of Margaret Campbell, a neighbour's daughter. They had gone to the same
school, had taken their first communion together, and had both sung in the
village church choir. When he revealed his love to her she told him she had
guessed his secret and had lang syne given her heart to him. They were
betrothed and very happy. But Margaret took ill in the fall and died before
the new year. Early in the year he sailed from Leith for Canada, hoping that
new scenes would soften his grief. As the years passed he kept moving west
and then north. He grew to like the free life of the prospector and had not
cared to leave the mountains and the trails.
Time had healed the wound but his love for the sweetheart of his youth
was just as true and tender as ever. From a hidden niche at the head of his
bed he took down a small box, brought it to the table near the candle and
unlocked it. He showed me his simple treasures. His rough, calloused hands
trembled as he lifted them carefully from the box. There was a small photo
so faded I could barely see the face on it. "You'll see she was very
beautiful," he said, for he saw with the clear vision of loving memory what
was not for my younger but duller eyes to discern. There was her gold
locket with a wisp of brown hair in it. "She left me this," he said, "when she
died." Last, there was an old letter, stained and worn, the only love-letter
she had ever written him, for he had only once been far enough or long
enough away to need letters. He had spent a week in Glasgow after they
became engaged and she had written to him. This was all.
Somehow I felt as if I were on sacred ground, that the curtain had been
drawn from before a Holy Place, and I was looking upon something more
beautiful than I had ever seen before. As the old man put the box away his
eyes were shining with a light that never was on sea or land. Mine were
moist, and for a little I couldn't trust my voice to speak as I thought of the
life-time of unswerving fealty to his dead lassie. Such long, lonely years
they must have been!
We did not say much more that night but the words we spoke were full
of understanding and reverence. When it grew late and he handed me the
Bible I hesitated in choosing a chapter, but not for long. The comfort and
rejoicing of the twenty-third Psalm were all we wanted.
One morning, not long afterwards, Donaldson came into my cabin on
Hunker creek in evident distress. McLeod hadn't come out as usual to his
work that morning, and he had gone to see what was wrong and found him
in his bunk hardly able to speak. He had taken a stroke. A neighbouring
miner watched by the sick man while Donaldson hitched up his dogs and
raced to Dawson for medical aid. Donaldson went off down the trail and I
hurried up the gulch to my old friend. He lingered for two or three days.
The doctor could do nothing for him but to ease his last moments.
I stayed near him until the end came. When he tried to speak his
utterance was indistinct and what few words I could make out showed that
his mind was wandering. Sometimes he was on the trail or in the camp, but
oftenest he was home again in the far away land he loved, and in boyhood
days among folk we did not know save one, known only to me, whose
name was continually on his lips.
He had a lucid interval just before he died and for a minute or two he
thought and spoke clearly. I told him that death was near. Was there
anything that we could do for him? "Not very much," he said. "I want
Donaldson to have all I own. He's been a good partner. Bury my box with
me. I'm not afraid to go now. It's just another prospecting trip to an
unknown land and I have a Great Guide. He won't forsake an old
prospector. He was one Himself, I'm thinking, when He came seeking us.
He will keep a firm grip of me now that the trail is growing dark. I'm not
afraid."
These were his last words, and as he slipped away, we, who were
gathered in the dimly-lighted little cabin, felt somehow that the Guide he
spoke of was right at hand. He would surely keep a firm grip of the old
miner on his last prospecting trip, even if strange storms were blowing, and
it was black dark when they crossed the Great Divide. It would come
morning too in that land when night was past, and when the new day
dawned I know he would soon find the one whom he had loved long since
and lost awhile.
XVI.
Soapy Smith, the Skagway Bandit
My billet on the hospital ship Araguaya was very comfortable and my
duties agreeable, but every time we reached port on the Canadian side of
the Atlantic I had an impulse to desert the ship and become a stowaway on
the hospital-train bound for British Columbia. It was there my wife and boy
lived and I hadn't seen them for three years. However I got the chance at
last to go without breaking regulations, for when I requested it, leave was
readily granted me to stay ashore over one round-trip of the boat. This was
supplemented by my taking the place of an absent conducting officer on the
western train. So my transportation cost me nothing, except the congenial
task of making myself generally useful to the returning soldiers.
We had crossed the prairies, dropping many of our crowd at way points,
and were climbing slowly along after supper up through a lonely stretch of
mountains, when someone in the car where I was visiting gave it as his
opinion that this would be a good piece of road on which to stage a train-
robbery. This, of course, led to the mention of gun-men that they had known
or heard of, men of the same ilk as Jesse James and Bill Miner. I
contributed the story of Soapy Smith, the man who pulled off the most
remarkably prolonged hold-up of which I have ever read. In the most
approved dime-novel style he terrorized a town, not for a few days or
weeks, but for six months.
* * * * *
"You'll have to see the spot where Soapy died." The Skagway man who
said this was rather proud of the celebrity which the bandit had brought to
the place. I had come by the steamboat the nine hundred miles north from
Vancouver, and was forced to spend a day in Skagway before going over
the White Pass on my way to Dawson. A resident of the town was taking me
around showing me the sights of this mushroom camp. It was humming
with life and packed with people. The rush to the goldfields was then at its
height. I judged by my friend's tone that he expected me to be deeply
impressed with this particular sight. So down to the sea we went and out on
the wharf. As we walked down he outlined the story of Smith's career in the
camp. On the pier he showed me a dark stain, covering about a square foot,
made by the life-blood of the man who for half-a-year forced Skagway to
pay him tribute in hard cash. He was the leader of a group of men who
robbed and cheated in wholesale style, and when it was necessary, in
getting their victim's money, did not stop at murder. No one had attempted
successfully to interfere with him. Reputable merchants were all
intimidated into handing him their life-insurance premiums whenever he
asked for them. His reputation as a killer was such that on the fourth of
July, when good Americans celebrate their freedom, he rode at the head of
the procession on a white horse! Very few complained loudly enough for
Soapy to hear. Without question his nerve is to be admired. I have never
heard or read in the annals of the west anything to equal his record in that
Alaskan port. Desperadoes have ridden into towns, "shot them up," taken
what they wanted, and got away with it. But this man and his gang lived
openly in a town of several thousands and in the most brazen fashion ran
the place for months, although he was known as a crook, gunman, and
leader of a gang of thugs. Skagway, it is true, was simply an eddy in a
stream running into the gold-fields. In their mad haste to get on and over the
Pass people wouldn't take time to straighten out the morals of the camp.
The Soapy Smith business was especially uninviting as something to mix
into. "It isn't my funeral," they would say, "and I don't want it to be."
Jefferson B. Smith hailed from the city of St. Louis in the U.S.A. He got
the nickname he bore because at the beginning of his career of crookedness
he used to sell soap to some of the citizens of Denver, Colorado. There is
nothing remarkable about selling soap unless you do it Smith's way. In the
evenings he and a confederate would set up their stand on a suitable
downtown street. All he needed was a high box for a pulpit and a smaller
box behind it to stand on. This with a flaring torch giving an uneven light,
some cakes of cheap soap, a couple of five-dollar bills and some change,
completed the outfit. A little clever spieling, kept up more or less all
evening, and the usual crowd would gather out of curiosity. He would show
them an unwrapped piece of soap all the while extolling its great merits as a
cleanser. To show how disinterested he was in introducing this superior
article that only needed to be known to become popular, he would say he
was going to wrap a five-dollar-bill in with some of these cakes of soap. He
would sell the soap at fifty cents each piece, and everyone that bought stood
to get the soap and make four dollars and fifty cents in cash out of the deal.
Further if they watched him carefully they would see him actually put the
five-dollar bill in when he wrapped up the soap, although he wouldn't
guarantee that it would always be found there when the purchaser
unwrapped his package. Of course he deceived them simply by clever
sleight-of-hand. Rarely would any money be found, but people liked to be
fooled if it was done the right way. To get them biting he might let one of
the bills go to a confederate who was seemingly just one of the crowd. It
was a money-making business as a rule for there were ordinarily quite a
number of easy-marks around. They got the soap anyway. So came the
name Soapy.
Well, it was the same old clever, crooked game in other bigger and
bolder forms that he now worked in Skagway, with the gun-play in
addition. When the steamboat City of Seattle came into port there on
January 17th, 1898, Soapy and his merrie-men were among the
passengers. He was a slight built man, only five feet seven inches tall, very
dark complexioned with a full beard and moustache. He wore a round
Stetson hat with a hard brim. He soon established headquarters in the
"Kentucky" saloon and "Jeff Smith's Parlors." These were liquor saloons,
not providing board or lodging, and running crooked gambling games in
their rear, a fruitful source of revenue to Smith's card-sharpers. Then he and
his confederates got busy on all sorts of other schemes to steal people's
money. He had at least thirty followers, and there wasn't a dishonest trick
known to the underworld of those days that some of them couldn't work.
They wore Masonic, Oddfellow, Elk and other fraternity emblems that
might help in working confidence-games. They opened up Information
Bureaus where newcomers could be conveniently sized-up and robbed then
or later on. One member who was very successful in luring victims was Old
Man Tripp. He had grey hair, a long white beard and a benevolent
countenance. It seemed impossible to suspect him of criminal intent. Smith
had most of the gambling-joints paying him a big percentage. He even had
men clever at the old, old shell-game running it in the fine weather at
relay points on the trail.
One of his favorite stunts for a while at first was to recruit for the
Spanish-American war which was just then stirring the fighting blood of
Americans. While the would-be soldier was stripped, having a fake medical
examination, his clothing was looted of whatever money or valuables it
might contain.
A rather amusing incident occurred during Smith's regime in connection
with the efforts of a Sky Pilot to raise some money at Skagway to build a
church in a little place along the coast called Dyea. The parson came to
Skagway in a rowboat one morning and started out with his subscription
list. One of the first he tackled by chance and unknown to himself was the
notorious bandit. Smith heartily endorsed the proposition and headed the
list with one hundred dollars which he paid over in cash to the clergyman.
Then he took the latter gentleman along to the principal merchants, hotel-
men and gamblers and saw to it that they all gave handsome donations. At
the close of the day the visitor decided to make for home. He was happy in
the possession of over $2,000 in cash for his new church, thinking too what
a splendid fellow this Mr. Smith was. On the way to the beach he was held
up by one of Mr. Smith's lieutenants and relieved of all the money he had
collected. He could get no redress.
Other occurrences, such as the smothering of the negro-wench in order
to steal the few hundred dollars she had earned by washing, were despicable
and worthy only of the meanest type of criminal.
Naturally there were many shooting scrapes in connection with the
operations of the gang, and some killings, but nothing was done to end it.
Not only was no move made to interfere with Soapy, but almost everyone
refrained from speaking against him openly for reasons easy to understand.
Of course there were men in Skagway who hotly resented the hold this
outlaw had on the town, and were doing what they could to bring public
sentiment to efficient action against him. One of these, a Canadian, was the
editor of a local news sheet. In later years he became governor of Alaska.
His name was Strong and it suited him, for he wasn't lacking in strength of
character. One day, after his paper had appeared with an editorial making a
scarcely-veiled attack on Soapy and his gang, he was met and stopped on
the street by Smith accompanied by a tough named Mike Daley. They were
loud and boisterous in accusing Strong of having offered personal insult to
them in his newspaper. They demanded a retraction and apology and
evidently meant to force a street-fight. Strong refused to withdraw his
statement and declared that he intended to stand by his editorial. The loud
quarrelling tones of the two desperadoes attracted the attention of two
friends of Strong's, named D. C. Stephens and Allen, who happened to be
walking down the same street. They hurried to the aid of their friend who at
the risk of his life still refused to back down. The sight of reinforcements
spoiled Smith's game and he and Daley went on without accomplishing
their sinister purpose.
There was another man who did not hesitate to say anywhere, and in
most forcible terms what he thought of these criminals. This man was Frank
Reid, a land-surveyor. He was fearless, and too quick with a gun for these
crooks to attempt to silence. But he got very little open support and could
do nothing single-handed.
Of course things couldn't go on like this. In the Spring matters reached a
climax. Word had at last got into the Klondike that it wasn't safe to come
out by way of Skagway with your gold, that you were likely to be relieved
of your poke by desperadoes. This news commenced to turn out-going
gold-laden traffic down the Yukon and out by way of St. Michaels. The
Skagway merchants saw the goose that laid the golden eggs flying away,
and it put them at last into a ferment of anger at the cause of it. This led to
the formation of a Vigilance Committee of which Reid was the moving
spirit.
Finally a Nanaimo man named Stewart, waiting for the steamboat on his
way home from the Klondike, had $3,000.00 in nuggets stolen from him by
one of Soapy's confidence men who had offered to turn it into currency. It
was all he had and he made such a fuss that the whole town knew about his
loss. He reported it to the U.S. Deputy-Marshal, a man named Taylor, who
was in Smith's pay. He got no satisfaction. The Vigilance Committee then
took it up, and made it a casus belli against Soapy. They attempted to hold
a secret meeting in a private hall but Smith and his confederates managed to
break in on them. They then adjourned to Sylvester's wharf. At the land-end
of the pier Frank Reid and a man named Murphy were posted to stop
anyone approaching who was not a member of the Committee. Smith heard
of this move and set off on the war-path down the main street towards the
water-front. He carried a loaded .30-.30 Winchester rifle and as he went
down the road he called on everyone to put up their hands. There were
hundreds of men there but Soapy got a completely unanimous vote as he
passed along, until he reached Reid and in him he met a man who called his
bluff. Reid ordered him to stop and pulled the trigger on him, but his
revolver, a .45 Colt, failed to go off. He then grabbed the muzzle of Smith's gun and shoved it
up in the air before he could shoot. Smith in the struggle backed away
hanging on to his rifle, and while the gun was thus lowered and pointed
momentarily at Reid's groin he fired. Reid fell to the ground but instantly
fired at Smith again. This time the revolver responded and Smith dropped
shot through the heart. He bled to death in a few minutes where he lay. This
was the evening of July 8th, three days after the celebration already
mentioned in which the gunman had taken the leading part. So the wharf
was stained, and so ended the life of a man with a career of which the last
six months were unique in the history of the wild west.
Their leader gone, the break-up of his followers was quick and easy.
After caring for Reid the Committee split up into armed groups of five or
six men each. Some guarded the exits from the town, others closed the
dance-halls, saloons, and gambling places. Every cabin was searched. Smith
was killed on Friday and by Sunday the lot were rounded up and jailed. The
captures included the five most dangerous members of the gang, Old Man
Tripp, Slim Jim, Bowers, Mike Daly, and Scar-faced Charlie. It was indeed
hard for any of them to escape. In front was the sea and behind the
mountains with only one passable trail through them over into the Yukon
Territory. They were all deported on out-going steamers. Most of them got
long terms in penitentiary. Before the shooting a few of them who saw
danger ahead straggled over into Canada by way of the White Pass but they
changed into model citizens when they came under the surveillance of the
Mounted Police.
Smith was buried with scant ceremony and no mourners. Frank Reid
lingered for two weeks when he also died. The whole place turned out at his
funeral to do honor to his bravery in ridding the town of the pestilential
group of criminals who had been in control so long.
Warwick Bros. & Rutter, Limited
Printers and Bookbinders, Toronto, Canada
*** END OF THE PROJECT GUTENBERG EBOOK TILLICUMS OF THE
TRAIL ***
Updated editions will replace the previous one—the old editions will
be renamed.
Creating the works from print editions not protected by U.S.
copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying
copyright royalties. Special rules, set forth in the General Terms of
Use part of this license, apply to copying and distributing Project
Gutenberg™ electronic works to protect the PROJECT GUTENBERG™
concept and trademark. Project Gutenberg is a registered trademark,
and may not be used if you charge for an eBook, except by following
the terms of the trademark license, including paying royalties for use
of the Project Gutenberg trademark. If you do not charge anything
for copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.
START: FULL LICENSE
THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK
To protect the Project Gutenberg™ mission of promoting the free
distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.
Section 1. General Terms of Use and
Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund
from the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.
1.B. “Project Gutenberg” is a registered trademark. It may only be
used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law
in the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name associated
with the work. You can easily comply with the terms of this
agreement by keeping this work in the same format with its attached
full Project Gutenberg™ License when you share it without charge
with others.
1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the
terms of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.
1.E. Unless you have removed all references to Project Gutenberg:
1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears,
or with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.
1.E.2. If an individual Project Gutenberg™ electronic work is derived
from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.
1.E.3. If an individual Project Gutenberg™ electronic work is posted
with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning
of this work.
1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.
1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the Project
Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or
expense to the user, provide a copy, a means of exporting a copy, or
a means of obtaining a copy upon request, of the work in its original
“Plain Vanilla ASCII” or other form. Any alternate format must
include the full Project Gutenberg™ License as specified in
paragraph 1.E.1.
1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.
1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg™ electronic works
provided that:
• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”
• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.
• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.
• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.
1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™
electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.
1.F.
1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.
1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for
the “Right of Replacement or Refund” described in paragraph 1.F.3,
the Project Gutenberg Literary Archive Foundation, the owner of the
Project Gutenberg™ trademark, and any other party distributing a
Project Gutenberg™ electronic work under this agreement, disclaim
all liability to you for damages, costs and expenses, including legal
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR
NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR
BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK
OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL
NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT,
CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF
YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.
1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of receiving
it, you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or
entity that provided you with the defective work may elect to provide
a replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.
1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.
1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation,
the trademark owner, any agent or employee of the Foundation,
anyone providing copies of Project Gutenberg™ electronic works in
accordance with this agreement, and any volunteers associated with
the production, promotion and distribution of Project Gutenberg™
electronic works, harmless from all liability, costs and expenses,
including legal fees, that arise directly or indirectly from any of the
following which you do or cause to occur: (a) distribution of this or
any Project Gutenberg™ work, (b) alteration, modification, or
additions or deletions to any Project Gutenberg™ work, and (c) any
Defect you cause.
Section 2. Information about the Mission
of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.
Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.
Section 3. Information about the Project
Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.
The Foundation’s business office is located at 809 North 1500 West,
Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact
Section 4. Information about Donations to
the Project Gutenberg Literary Archive
Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many
small donations ($1 to $5,000) are particularly important to
maintaining tax exempt status with the IRS.
The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.
While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.
International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.
Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.
Section 5. General Information About
Project Gutenberg™ electronic works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.
Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.
Most people start at our website which has the main PG search
facility: www.gutenberg.org.
This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.
MPEG-V: Bridging the Virtual and Real World

Kyoungro Yoon, Sang-Kyun Kim, Jae Joon Han, Seungju Han, Marius Preda

Academic Press is an imprint of Elsevier
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, UK
525 B Street, Suite 1800, San Diego, CA 92101-4495, USA
225 Wyman Street, Waltham, MA 02451, USA
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, UK

© 2015 Elsevier Inc. All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher's permissions policies, and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency can be found at our website: www.elsevier.com/permissions.

This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
ISBN: 978-0-12-420140-8

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress.

For information on all Academic Press publications visit our website at http://store.elsevier.com/

Typeset by MPS Limited, Chennai, India (www.adi-mps.com)
Printed and bound in the United States

Publisher: Todd Green
Acquisition Editor: Tim Pitts
Editorial Project Manager: Charlie Kent
Production Project Manager: Jason Mitchell
Designer: Matthew Limbert
ACKNOWLEDGMENT

This book would not have been possible without the hard work of all the MPEG-V contributors who, meeting after meeting over three years, built a consistent architecture supporting multi-sensorial user experiences, bringing innovative ideas and giving them shape as standard specifications. It has been an honor and a pleasure to work in such a challenging environment. Naming all the MPEG-V contributors would require a few pages and would probably still be incomplete; however, we would like to express special thanks to Jean Gelissen from Philips and Sanghyun Joo from ETRI, the original initiators of the project, and to Leonardo Chiariglione from Cedeo for his significant help in positioning MPEG-V in the MPEG ecosystem.
AUTHOR BIOGRAPHIES

Kyoungro Yoon is a professor in the School of Computer Science and Engineering at Konkuk University, Seoul, Korea. He received the BS degree in electronic and computer engineering from Yonsei University, Korea, in 1987, the MSE degree in electrical and computer engineering from the University of Michigan, Ann Arbor, in 1989, and the PhD degree in computer and information science from Syracuse University, USA, in 1999. From 1999 to 2003, he was a Chief Research Engineer and Group Leader in charge of the development of various product-related technologies and standards in the field of image and audio processing at the LG Electronics Institute of Technology. He joined Konkuk University as an assistant professor in 2003 and has been a professor since 2012. He actively participated in the development of standards such as MPEG-7, MPEG-21, MPEG-V, JPSearch, and TV-Anytime, and served as a co-chair of the Ad Hoc Group on User Preferences and chair of the Ad Hoc Group on MPEG Query Format. He is currently serving as the chair of the Ad Hoc Group on MPEG-V, the chair of the Ad Hoc Group on JPSearch, and the chair of the Metadata Subgroup of ISO/IEC JTC1 SC29 WG1 (a.k.a. JPEG). He also served as an editor of various international standards, such as ISO/IEC 15938-12, ISO/IEC 23005-2/5/6, and ISO/IEC 24800-2/5. He has co-authored over 40 conference and journal publications in the field of multimedia information systems. He is also an inventor/co-inventor of more than 30 US patents and 70 Korean patents.

Sang-Kyun Kim received the BS, MS, and PhD degrees in computer science from the University of Iowa in 1991, 1994, and 1997, respectively. In 1997, he joined the Samsung Advanced Institute of Technology as a researcher, where he was a senior researcher as well as a project leader on the Image and Video Content Search Team of the Computing Technology Lab until 2007.
In 2007, he joined Myongji University as an assistant professor and has been an associate professor in the Department of Computer Engineering since 2011. His research interests include digital content (image, video, and music) analysis and management, image search and indexing, color adaptation, mulsemedia adaptation, sensors and actuators, VR, and media-centric IoT. He actively participated in multimedia standardization activities such as MPEG-7, MPEG-21, MPEG-A,
MPEG-V, as a co-chair and a project editor. He currently serves as a project editor of the MPEG-V International Standards, i.e., ISO/IEC 23005-2/3/4/5 and 23005-7. He has co-authored over 40 conference and journal publications in the field of digital content management and mulsemedia simulation and adaptation. He is also an inventor/co-inventor of more than 25 US patents and 90 Korean patents.

Jae Joon Han has been a principal researcher at the Samsung Advanced Institute of Technology (SAIT) in Samsung Electronics, Korea, since 2007. He received the BS degree in electronic engineering from Yonsei University, Korea, in 1997, the MS degree in electrical and computer engineering from the University of Southern California, Los Angeles, in 2001, and the PhD degree in electrical and computer engineering from Purdue University, West Lafayette, IN, in August 2006. After receiving the PhD degree, he remained at Purdue as a postdoctoral fellow in 2007. His research interests include statistical machine learning and data mining, computer vision, and real-time recognition technologies. He participated in the development of standards such as ISO/IEC 23005 (MPEG-V) and ISO/IEC 23007 (MPEG-U), and served as the editor of ISO/IEC 23005-1/4/6. He has co-authored over 20 conference and journal publications. He is also an inventor/co-inventor of three US patents and 70 filed international patent applications.

Seungju Han is currently a senior researcher at the Samsung Advanced Institute of Technology (SAIT) in Samsung Electronics, Korea. He received the PhD degree in electrical and computer engineering from the University of Florida, USA, in 2007. Since 2007, he has been with the Samsung Advanced Institute of Technology as a research engineer. He participated in the development of standards such as ISO/IEC 23005 (MPEG-V) and ISO/IEC 23007 (MPEG-U), and served as the editor of ISO/IEC 23005-2/5.
He has authored and co-authored over 25 research papers in the field of pattern recognition and human–computer interaction. He is also an inventor/co-inventor of four US patents and 70 filed international patent applications.

Marius Preda is an associate professor at Institut MINES-Telecom and Chairman of the 3D Graphics group of ISO's MPEG (Moving Picture Experts Group). He contributes to various ISO standards with technologies in the fields of 3D graphics, virtual worlds, and augmented reality, and
has received several ISO Certifications of Appreciation. He leads a research team focused on augmented reality, cloud computing, games, and interactive media, and regularly presents results in journals and at speaking engagements worldwide. He serves on the program committees of international conferences and reviews top-level research journals. After being part of various research groups and networks, in 2010 he founded a research team within Institut MINES-Telecom called GRIN (GRaphics and INteractive media). The team conducts research at the international level, cooperating with academic partners worldwide and industrial ICT leaders. Selected results are showcased on www.MyMultimediaWorld.com. Academically, Marius received a degree in Engineering from Politehnica Bucharest, a PhD in Mathematics and Informatics from University Paris V, and an eMBA from Telecom Business School, Paris.
PREFACE

Traditional multimedia content is typically consumed via audiovisual (AV) devices such as displays and speakers. Recent advances in 3D video and spatial audio allow for deeper user immersion in digital AV content, and thus a richer user experience. The norm, however, is that just two of our five senses (sight and hearing) are exercised, while the other three (touch, smell, and taste) are neglected.

A recent multitude of new sensors map the data they capture onto our five senses and enable us to better perceive the environment, both locally and remotely; in the literature, the former is referred to as "Augmented Reality" and the latter as "Immersive Experience". In parallel, new types of actuators produce different kinds of multi-sensory effects. Early on, such effects were mostly used in dedicated installations in amusement parks equipped with motion chairs, lighting sources, liquid sprays, etc., but it is increasingly common to see multi-sensory effects produced in more familiar environments, such as at home.

Recognizing the need to represent, compress, and transmit this kind of contextual data captured by sensors, and to synthesize effects that stimulate all human senses in a holistic fashion, the Moving Picture Experts Group (MPEG, formally ISO/IEC JTC 1/SC 29/WG 11) ratified in 2011 the first version of the MPEG-V standard (officially known as "ISO/IEC 23005 – Media context and control"). MPEG-V provides the architecture and specifies the associated information representations that enable interoperable multimedia and multimodal communication within Virtual Worlds (VWs) and also with the real world, paving the way to a "Metaverse", i.e., an online shared space created by the convergence of virtually enhanced reality and physically persistent virtual space that includes the sum of all Virtual Worlds and Augmented Realities.
For example, MPEG-V may be used to provide multi-sensorial content associated with traditional AV data, enriching multimedia presentations with sensory effects created by lights, winds, sprays, tactile sensations, scents, etc.; it may be used to interact with a multimedia scene through more advanced interaction paradigms, such as hand and body gestures; or it may be used to access different VWs with an avatar of similar appearance in all of them.

In the MPEG-V vision, a piece of digital content is not limited to an AV asset, but may be a collection of multimedia and multimodal objects
forming a scene, having their own behavior, capturing their context, producing effects in the real world, interacting with one or several users, etc. In other words, a digital item can be as complex as an entire VW. Since standardizing a VW representation is technically possible but not aligned with industry interests, MPEG-V offers interoperability between VWs (and between any of them and the real world) by describing virtual objects, and specifically avatars, so that they can "move" from one VW to another.

This book on MPEG-V draws a global picture of the features made possible by the MPEG-V standard. It is divided into seven chapters, covering all aspects from the global architecture, to the technical details of key components (sensors, actuators, and multi-sensorial effects), to application examples. At the time this text was written (November 2014), three editions of MPEG-V had been published and the technical community developing the standard was still very active. As the main MPEG-V philosophy is not expected to change in future editions, this book is a good starting point for understanding the principles on which the standard is based. Readers interested in the latest technical details can visit the MPEG-V Web site (http://wg11.sc29.org/mpeg-v/).

Marius Preda
Leonardo Chiariglione
CHAPTER 1

Introduction to MPEG-V Standards

MPEG-V. DOI: http://dx.doi.org/10.1016/B978-0-12-420140-8.00001-9
© 2015 Elsevier Inc. All rights reserved.

Contents
1.1 Introduction to Virtual Worlds
1.2 Advances in Multiple Sensorial Media
  1.2.1 Basic Studies on Multiple Sensorial Media
  1.2.2 Authoring of MulSeMedia
  1.2.3 Quality of Experience of MulSeMedia
    1.2.3.1 Test Setups
    1.2.3.2 Test Procedures
    1.2.3.3 Experimental QoE Results for Sensorial Effects
1.3 History of MPEG-V
1.4 Organizations of MPEG-V
1.5 Conclusion
References

1.1 INTRODUCTION TO VIRTUAL WORLDS

The concept of a virtual world has become a part of our everyday lives so recently that we have not even noticed the change. There have been various attempts at defining a virtual world, each with its own point of view. The worlds that we are currently experiencing can, from the viewpoint of information technology, be divided into three types: the real world, virtual worlds, and mixed worlds. Conventionally, a virtual world, also frequently referred to as virtual reality (VR), is a computer-generated environment giving participants the impression that they are present within that environment [1]. According to Milgram and Kishino [1], real objects are those having actual existence that can be observed directly or can be sampled and resynthesized for viewing, whereas virtual objects are those that exist in essence or effect, but not formally or actually, and must be simulated.

Recently, Gelissen and Sivan [2] redefined a virtual world as an integration of 3D, Community, Creation, and Commerce (3D3C). Here, 3D indicates 3D visualization and navigation for the representation of a virtual world, and 3C represents the three key factors that make a virtual
world closer to the real world, characterized by daily interactions for either economic (creation and commerce) or noneconomic/cultural (community) purposes.

Virtual worlds can also be divided into gaming and nongaming worlds. A virtual gaming world is one in which the behavior of the avatar (user) is goal-driven; the goal of a particular game is given within its design. Lineage [3] and World of Warcraft [4] are examples of virtual gaming worlds. Figure 1.1 shows a screen capture from World of Warcraft. In contrast, a nongaming virtual world is one in which the behavior of the avatar (user) is not goal-driven. In a nongaming virtual world, there is no goal provided by the designer, and the behavior of the avatar depends on the user's own intention. An example of a nongaming virtual world is Second Life by Linden Lab, a captured image of which is shown in Figure 1.2 [5].

A virtual world can provide an environment for both collaboration and entertainment [6]. Collaboration is mainly enabled by features of the virtual world such as 3D virtual environments, in which presence, realism, and interactivity can be supported to a higher degree than in conventional collaboration technology, and avatar-based interactions, through which the social presence and self-presentation of participants can be provided to a higher degree than in any other existing environment.

Figure 1.1 A virtual gaming world (from World of Warcraft).
1.2 ADVANCES IN MULTIPLE SENSORIAL MEDIA

1.2.1 Basic Studies on Multiple Sensorial Media

Along with the sensations associated with 3D films and UHD display panels, the development of Multiple Sensorial Media (MulSeMedia), or 4D media, has received significant attention from the public. 4D content generally adds sensorial effects to 3D, UHD, and/or IMAX content, allowing audiences to immerse themselves more deeply in the content-viewing experience. Along with the two human senses of sight and hearing, sensorial effects such as wind, vibration, and scent can stimulate other senses, such as the tactile and olfactory senses. MulSeMedia content denotes audiovisual content annotated with sensory effect metadata [7].

Attempts to stimulate other senses while playing multimedia content have a long history. Sensorama [8,9], an immersive VR motorbike simulator, was a pioneer in MulSeMedia history. As a type of futuristic cinema, Sensorama rendered sensorial effects with nine different fans, a vibrating seat, and aromas to simulate a blowing wind, driving over gravel, and the scent of a flower garden or pizzeria. Although Sensorama was not successful in its day, its technology soon became a forerunner of current 4D theaters and the gaming industry.

The significance of olfactory and tactile cues has been reported in many previous studies [10–14].

Figure 1.2 A nongaming virtual world (from Second Life).

Dinh et al. [10] reported that the addition of tactile, olfactory, and auditory cues to a VR environment increases the
user's sense of presence and memory of the environment. Bodnar et al. [11] reported that the olfactory modality is less effective in alarming users than other modalities such as vibration and sound, but can be less disruptive to the users' primary task. Ryu and Kim [12] studied the effectiveness of vibro-tactile effects on the whole body for simulating collisions between users and their virtual environment.

Olfactory cues can be used to evoke human memories. Brewster et al. [13] presented a study on the use of smell for searching through digital photo collections, comparing text- and odor-based tagging (Figure 1.3). In the first stage, sets of odors and tag names were generated from user descriptions of different photos. The participants then used these to tag their photos, returning two weeks later to answer questions about the images. The results showed that performance when using odors was lower than with simple text searching, but that some participants had their memories of their photos evoked through the use of smell. Ghinea and Ademoye [14] presented a few design guidelines for the integration of olfaction (with six odor categories) into multimedia applications. Finally, Kannan et al. [15] discussed the significance of other senses in the creation of digital content for the packaging industry, healthcare systems, and educational learning models.

1.2.2 Authoring of MulSeMedia

The difficulties in producing MulSeMedia content lie mainly in the time and effort required to author the sensory effects. For the successful industrial deployment of MulSeMedia services, providing an easy and efficient means of producing MulSeMedia content plays a critical role. Figure 1.4 shows examples of the authoring tools used to create digital content with sensorial effects.
Figure 1.3 Search and retrieve based on odor [13] (photo viewing, tagging, thumbnail, and searching panes).
Figure 1.4 Authoring tools for sensorial effects: (A) SEVino by Waltl et al. [18,19], (B) RoSEStudio by Choi et al. [16], and (C) SMURF by Kim [17].
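Whichever tool is used, the end product is sensory effect metadata: an XML document describing which effects accompany the content, when they fire, and how strongly. The sketch below builds a minimal SEM-like instance with Python's standard ElementTree. The namespace URNs, element names, and attributes follow the general shape of MPEG-V Part 3 instances but are illustrative here, not copied from the normative schema.

```python
import xml.etree.ElementTree as ET

# Illustrative (non-normative) namespace URN in the general shape of
# MPEG-V Part 3; the real schema defines the authoritative names.
SEDL = "urn:mpeg:mpeg-v:2010:01-SEDL-NS"
ET.register_namespace("sedl", SEDL)

sem = ET.Element(f"{{{SEDL}}}SEM")
# Two effects grouped so that they start together, roughly as a 4D
# authoring tool such as SEVino or SMURF might emit them for one scene.
# The si:/xsi: prefixes are written verbatim for brevity; a real instance
# would declare them properly.
group = ET.SubElement(sem, f"{{{SEDL}}}GroupOfEffects", {"si:pts": "0"})
ET.SubElement(group, f"{{{SEDL}}}Effect",
              {"xsi:type": "sev:WindType",
               "intensity-value": "0.5", "duration": "3000"})
ET.SubElement(group, f"{{{SEDL}}}Effect",
              {"xsi:type": "sev:LightType",
               "intensity-value": "0.8", "color": "#FF4500"})

xml_text = ET.tostring(sem, encoding="unicode")
print(xml_text)
```

A real SEM instance would reference the normative MPEG-V Part 3 XSD, against which a verifier such as SEVino's JAXB-based checker would then validate the document.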
Waltl et al. [18,19] presented a sensory effect authoring tool called SEVino (Figure 1.4A), which can verify XML instances via the Java Architecture for XML Binding (JAXB) against the XML schema specified in MPEG-V Part 3 (described in Chapter 2). Choi et al. [16] presented RoSEStudio (Figure 1.4B), together with a framework for streaming services with sensorial effects, to bring about an at-home 4D entertainment system based on the MPEG-V standard. Kim [17] presented SMURF (Figure 1.4C), which not only can create a GroupOfEffects but also supports the Declaration and ReferenceEffect elements, so that ordinary users can easily create their own sensorial effect metadata. Figure 1.5 shows 20 icons indicating sensorial effects such as wind, temperature, scent, fog, light, vibration, motion, and tactile sensations.

Figure 1.5 Sensorial effect menu icons [16].

The authoring of MulSeMedia content can be boosted by extracting sensorial information automatically from the content itself. In other words, sensory effects can be generated automatically by extracting sensorial (physical and emotional) properties from the content and mapping the major attributes of the extracted properties to sensory effects [7]. This can speed up the authoring process significantly.

Extracting physical properties such as color characteristics from the content was achieved by Waltl et al. [19] and Timmerer et al. [20]. In their
works, ambient light devices were controlled using automatic color calculations (e.g., averaging the RGB values or computing dominant colors in the RGB, HSV, and HMMD spaces) to enable an immediate reaction to color changes within the content. Kim et al. [7] extracted the color temperature from the content and converted it into four categories of emotional properties (i.e., hot, warm, moderate, and cool). The extracted emotional properties are in turn mapped to temperature effects to author the MulSeMedia content automatically.

The sensory effects created by different authoring tools can be visualized through sensory effect simulators. Kim et al. [21] presented a sensible media simulator (Figure 1.6) for 4D simulation in an automobile environment and the implementation of sensorial actuators. Waltl et al. [19] briefly described a simulator (SESim) for evaluating the quality of the multimedia experience presented to users.

Figure 1.6 Sensorial effect simulation [21].

1.2.3 Quality of Experience of MulSeMedia

It is important to know how digital content enriched with additional sensorial effects actually affects the level of satisfaction. Therefore, the quality
of experience regarding sensorial effects is measured through careful experimental design. In this section, publicly known test setups and regulated test procedures are described, as well as a few experimental results on the quality of experience (QoE) of MulSeMedia.

1.2.3.1 Test Setups

Waltl et al. [18] collected a total of 76 video sequences from different genres (action, documentary, sports, news, and commercial sequences) and described them based on their sensorial effects (i.e., wind, vibration, and light). They opened a dataset comprising a number of video sequences from different genres as a means to inspire similar research. Furthermore, they described possible test setups using off-the-shelf hardware for conducting subjective quality assessments. The setup for one amBX system consists of two fans, two light-speakers, a wall washer, a Wrist Rumbler, and a subwoofer (left-most side of Figure 1.7A). The middle of Figure 1.7A shows the test setup using two amBX systems. The third test setup (right-most side of Figure 1.7A) consists of two amBX systems and two sets of Cyborg Gaming Lights. Figure 1.7B shows the actual test setup depicted on the right-most side of Figure 1.7A. Waltl et al. [22] presented a demonstration setup that uses stereoscopic 3D and sensory devices, i.e., fans, vibration panels, and lights (Figure 1.7C). They reported that combining 3D content with sensorial effects further improves the viewing experience for users.

1.2.3.2 Test Procedures

Rainer et al. [23] presented recommendations for the test setups and methods used in MulSeMedia experience assessment. Figure 1.8 shows the experimental procedure for a MulSeMedia viewing experience. In the first stage, the test participants read an introduction explaining the purpose of the experiment.
In the second stage, demographic and educational information about the participants is acquired using a pre-questionnaire. A training phase is then provided to eliminate the surprise effect and help the participants become familiar with the stimulus presentation. The main evaluation adheres to the recommendations of ITU-T P.910 and P.911 [24,25] regarding test methods and design. Two of the main evaluation methods used, DCR and DSCQS, are presented in Figure 1.9. Finally, a post-questionnaire asks the participants whether they have already participated in a similar experiment and gives them a chance to provide feedback.
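The DCR presentation described above can be made concrete with a small schedule generator. This is a hedged sketch: the phase durations are invented for illustration and are not taken from ITU-T P.910 or from Rainer et al. [23].

```python
# Hedged sketch of a Degradation Category Rating (DCR) session as used in
# sensorial-effect QoE tests: reference clip (T1), gray screen, the same
# clip with sensorial effects (T2), then a vote. Durations are assumptions.
def dcr_schedule(clips, clip_s=30, gray_s=2, vote_s=10):
    """Return (phase, seconds) pairs for a whole DCR session."""
    timeline = []
    for clip in clips:
        timeline += [
            (f"T1 reference: {clip}", clip_s),
            ("gray screen", gray_s),
            (f"T2 with effects: {clip}", clip_s),
            ("vote", vote_s),
        ]
    return timeline

session = dcr_schedule(["action_01", "news_03"])
total = sum(seconds for _, seconds in session)
print(len(session), total)  # 8 phases, 144 seconds
```

Each clip yields four phases (reference, gray screen, clip with effects, vote), so a two-clip session has eight phases totalling 144 s under these assumed durations.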
Figure 1.7 Sensorial effect test setups [18,22], using amBX devices (fans, light-speakers, wall washer, wrist rumbler) and Cyborg Gaming Lights.

Figure 1.8 Test procedure for the sensorial effects (pre-questionnaire, training, main evaluation, post-questionnaire) [23].

Figure 1.9 (A) DCR and (B) DSCQS [23].
Figure 1.9A shows the Degradation Category Rating (DCR) method. In T1, the reference content is presented, and in T2, the content with sensorial effects is shown. Between T1 and T2, a gray screen is presented to the participants. Figure 1.9B shows the Double Stimulus Continuous Quality Scale (DSCQS) method. T1 is the presentation of a video sequence without sensorial effects; T2 is the rating of emotions and their intensity; T3 is the presentation of the same video sequence with sensorial effects; and T4 is the rating of the emotions and their intensity for the video sequence with sensorial effects.

1.2.3.3 Experimental QoE Results for Sensorial Effects

Waltl et al. [26] investigated the QoE at various video bit-rates for multimedia content annotated with sensorial effects (e.g., wind, vibration, and light). The results show that the level of satisfaction for a video sequence with sensorial effects is higher than for the same video without them. Timmerer et al. [20] presented QoE test results for wind, vibration, and lighting effects in the action, sports, documentary, news, and commercial genres, which indicate that the action, sports, and documentary genres benefit more from sensorial effects than the news and commercial genres. Rainer et al. [27] presented the emotional responses of users and an enhancement of the QoE of Web video sequences; in particular, their QoE experiments were conducted in Austria and Australia to investigate whether geographical and cultural differences affect the elicited emotional responses. Timmerer et al. [28] derived a utility model for sensory experiences from their previous QoE experimental results. The aim of this utility model was to estimate the QoE of multimedia content with sensorial effects as compared with the QoE of the same content without sensorial effects.
The proposed utility model shows that a linear relationship exists between the QoE without sensorial effects and the QoE with sensorial effects.

Kim et al. [21] presented the relationship between the QoE with sensorial effects and the learning types of the participants. The experimental results showed that stimulation from vibration effects generated greater satisfaction in people with a high tactile perception capability, at a statistically significant level. Stimulation through vibration effects also generated more satisfaction in people with a low visual perception level. This indicates that vibration effects can be assumed to be a high priority
for people with a high tactile perception capability and/or a low visual perception capability. Kim et al. [7] also showed that sequences with temperature effects automatically generated through color temperature estimation clearly enhanced the level of satisfaction. Yazdani et al. [29] analyzed the electroencephalogram (EEG) of five participants during their perception of both unpleasant and pleasant odorous stimuli. They identified the regions of the brain cortex that are active during the discrimination of unpleasant and pleasant odor stimuli.

1.3 HISTORY OF MPEG-V

MPEG-V shares a similar view of virtual and real worlds, except that its definition of a real world is tighter, and that of a virtual world has been extended, compared to their conventional definitions. In MPEG-V, sampled and resynthesized environments of the real world are no longer considered real worlds and are instead viewed as virtual worlds. Therefore, movies or video sequences depicting the real world are also considered another representation of a virtual world. Such a change in the definitions of real and virtual worlds has made it possible to develop the concepts of virtual-to-real and real-to-virtual adaptations.

Creating and enjoying films in 3D has become popular, a turning point being the 3D movie Avatar, which had unprecedented success owing to its 3D effects. One reason for this success is the ability to immerse the user in the story through the creation of a full audiovisual environment. Additionally, by providing more effects on top of the audiovisual ones, it is possible to obtain even deeper immersion in terms of user experience. One possibility is to add special effects produced by (sensorial) actuators, so-called 4D effects, which affect senses other than seeing and hearing.
Other modalities, such as olfaction, mechanoreception, equilibrioception, or thermoception, may be stimulated, giving the feeling of being part of the media content and resulting in a meaningful and consistent user experience. In particular, 4D movies that include sensorial effects such as wind, vibration, lighting, and scent can stimulate the human sensory system using actuators such as fans, motion chairs, lighting devices, and scent generators. Such rendering of sensorial effects in the real world is an example of a virtual-to-real adaptation.

It is also well known that user interaction is a powerful means of improving the user experience. Interacting with digital content, thereby changing it from linear content, as in the case of traditional movies,
allows users to be not only spectators but also actors. The success of complex video games that create an entire universe is an indicator of the role such an interaction can play. More generally, virtual worlds are typical applications using 3D technologies, allowing the user to interact with and change both the storyline and the environment. A notable example is Second Life, which allows users to project themselves into virtual characters (called avatars). Through their avatar, the user can live a virtual life: communicate with others, perform daily activities, and own virtual assets such as houses and other types of property. In massive multiplayer online role-playing games (MMORPGs) such as World of Warcraft or Lineage, users can operate their characters in a virtual world and cooperate with others to fulfill missions. Such 3D games immerse users in a virtual world by providing a fictional environment that can otherwise only be experienced in their imagination. Moreover, controlling virtual worlds with sensors provides an even more immersive media experience. The effective control of objects in such virtual worlds has been developed in many ways: the motions of users captured from a set of sensors are used to control game characters. The recently developed "Kinect" sensor can capture the full-body skeleton of each user and use the captured data to manipulate objects in a virtual world. In addition, some pioneering technologies are used to capture brain waves to recognize the user's intention and/or internal state. These activities for controlling the avatars or objects of a virtual world by sensing the real environment and real objects can be viewed as an example of a real-to-virtual adaptation.
Because each of these technologies related to immersive multisensorial experiences is based on proprietary products, there is no standard way of representing the data from sensors and actuators in the real world, and no common way to interface with a virtual world. As a result, each proprietary virtual world has also been isolated from other virtual worlds. This hinders users when migrating from one virtual world to another; therefore, when a virtual world loses its interest, all assets produced and the entire community itself are lost. To increase the usability of each virtual world and their interoperability, to improve the controls, and to increase the quality of the user experience, the MPEG community has developed the MPEG-V standard (ISO/IEC 23005) with the intention of offering a common information representation format. The standardization work in MPEG-V was initiated in 2008, and the second version of the standard was published in 2013 [30–36].
MPEG-V was initiated in 2008 based on two separate projects with different objectives. One is the Metaverse EU project, whose objective is to provide a framework for interoperability between heterogeneous virtual worlds [2,37]. The other is the Single Media Multiple Devices (SMMD) project of ETRI, Korea, whose objective is to develop technology providing new media services with sensory effects using multiple devices [38]. Metaverse project-related proposals were first submitted at the 81st MPEG meeting in Lausanne, Switzerland, in July 2007, and SMMD project-related proposals were first submitted at the 82nd MPEG meeting in Shenzhen in October 2007. The Metaverse project was renamed MPEG-V, and is focused on the exchange of information between virtual worlds. The SMMD project was renamed Representation of Sensory Effects (RoSE), and is focused on the representation of sensory effects for new types of media services. At the 87th meeting in Lausanne, Switzerland, in February 2009, the two ongoing projects of MPEG-V and RoSE were merged into the MPEG-V standard, which deals with both virtual and real worlds. The architecture and introduction of the standard were given in Part 1 of MPEG-V. The control information was provided in Part 2. Representations of the sensory effects and sensory effect metadata were given in Part 3, and the representation of avatars was provided in Part 4. Committee drafts of the first edition were released at the 89th meeting in London in July 2009. At the 90th meeting in Xian, China, in October 2009, discussions were held on the subdivision of the control information of Part 2, which was finally divided into two separate parts at an Ad Hoc meeting in Paris in December 2009, i.e., the control information in Part 2, and the data formats for interaction devices in Part 5. At the 91st Kyoto meeting, the common tools and types from each part of the standard were extracted and became the newly added Part 6.
Finally, the reference software is provided in Part 7. The first edition of the complete set of MPEG-V specifications was published in early 2011. At the 91st Kyoto meeting in January 2010, the need for binary representations of the MPEG-V tools for greater transfer efficiency was raised, and work on the second edition of the standard was started. After creating a binary representation of all existing tools in the first edition, as well as new sensory effects and other additional tools, the second edition was finally published in 2013. Currently, the third edition of the standard is progressing with the addition of more effects, devices, and sensors.
1.4 ORGANIZATION OF MPEG-V

MPEG-V (Media context and control), published as ISO/IEC 23005, provides an architecture and specifies the associated information representations to enable bridges between the real world and digital content, and to increase the interoperability between virtual worlds. MPEG-V is applicable in various business models/domains for which audiovisual contents can be associated with sensorial effects that need to be rendered on appropriate actuators and/or benefit from well-defined interactions with an associated virtual world. A well-defined connection between the real and virtual worlds is needed to reach simultaneous reactions in both worlds. This is done in MPEG-V by defining an architecture that provides interoperability at various levels. Efficient, effective, intuitive, and entertaining interfaces between users and virtual worlds are of crucial importance for the wide acceptance and use of such technologies. To improve the process of creating virtual worlds, a better design methodology and better tools are indispensable. The MPEG-V standard consists of the following parts:

Part 1: Architecture [30];
Part 2: Control Information [31];
Part 3: Sensory Information [32];
Part 4: Virtual World Object Characteristics [33];
Part 5: Formats for Interaction Devices [34];
Part 6: Common Types and Tools [35]; and
Part 7: Conformance and Reference Software [36].

Part 1 provides an overview of MPEG-V along with the architecture and various use cases or applications of the MPEG-V standard. Part 2 provides the tools for describing the capabilities of actuators and sensors, the user's preferences regarding the sensory effects, and their preferences in terms of the sensor adaptations.
Altogether, these tools are called the control information, and are used for the detailed and personalized control of actuators and sensors. The control information is provided using the Control Information Description Language (CIDL) with the Device Capability Description Vocabulary (DCDV), Sensor Capability Description Vocabulary (SCDV), User's Sensory Preference Vocabulary (USPV), and Sensor Adaptation Preference Vocabulary (SAPV), whose syntaxes are defined using the XML schema. Part 3 provides the tools for describing sensorial effects in synchronization with the media content. The descriptions of the sensorial effects, or sensory effect metadata (SEM), are defined using the Sensory Effect Description Language (SEDL) with the Sensory Effect Vocabulary (SEV), based on the XML schema.
Part 4 defines the characteristics of a virtual-world object to provide tools enabling interoperability between virtual worlds. It also provides tools for the description, or metadata, of avatars and virtual objects. The metadata describe the characteristics of the avatars and virtual objects in terms of their nature, character, and appearance, to name a few, but do not provide the actual shape, texture, or rendering information. Part 5 specifies the interfaces, or data formats, for an interoperable exchange of information to/from sensors and actuators. These interfaces are defined by the Interaction Information Description Language (IIDL) with the Device Command Vocabulary (DCV) and Sensed Information Vocabulary (SIV), based on the XML schema. The DCV defines the data formats used as commands to the actuators. The SIV defines the data formats used for transferring sensed information from a sensor to the adaptation engine or to the information destination. Part 6 specifies the syntax and semantics of the data types and tools that are common to more than one part of the MPEG-V standard. In the appendix of this part of the standard, the classification schemes for various sets of terms, such as the unit and scent types, are also defined. Part 7 provides the reference software and specifies the conformance using Schematron. Figure 1.10 shows a diagram of the MPEG-V system architecture and its data transition scenarios. The MPEG-V specifications are used for three different types of media exchanges between the real and virtual worlds. The first media exchange is the information adaptation from a virtual world into the real world (Figure 1.10A).
It accepts sensorial effect data (specified in MPEG-V, Part 3) and/or Virtual World Object Characteristics (MPEG-V, Part 4) as contextual inputs; accepts Actuator Capability and/or Actuation Preferences (MPEG-V, Part 2) and/or Sensed Information (MPEG-V, Part 5) as control parameters; and generates Actuator Commands (MPEG-V, Part 5) for the real-world actuators. The VR adaptation engine converts (or adapts) either the Virtual World Object Characteristics or the sensorial effect data from a virtual world into the Actuator Commands in the real world in accordance with the input control parameters. The manner in which the adaptation engine is implemented is not within the scope of the MPEG-V standardization. The second media exchange is the information adaptation from the real world into a virtual world. The real-to-virtual adaptation engine accepts Sensed Information (MPEG-V, Part 5) from sensors as the real-world context; accepts Sensor Capability and/or Sensor Adaptation
Figure 1.10 MPEG-V architectures and data transition scenarios: (A) a virtual- into real-world scenario, (B) a real- into virtual-world scenario, and (C) a virtual- into virtual-world scenario.
Preferences (MPEG-V, Part 2) as control parameters; and generates Virtual World Object Characteristics (MPEG-V, Part 4) and/or adapted Sensed Information (MPEG-V, Part 5) for the associated virtual-world objects (Figure 1.10B). The RV adaptation engine converts (or adapts) the sensed information from the real-world sensors into the Virtual World Object Characteristics and/or the adapted sensed information of a virtual world in accordance with the input control parameters. Finally, information exchange between virtual worlds is conducted by adapting proprietary Virtual World Object Characteristics into the normatively specified Virtual World Object Characteristics (MPEG-V, Part 4) (Figure 1.10C).

1.5 CONCLUSION

MPEG-V (ISO/IEC 23005) provides the architecture and the necessary associated information representation supporting the information exchanges between the real and virtual worlds, and the information exchange between virtual worlds. To support the information exchanges,
  • 30. MPEG-V 18 the information between the two worlds should be adapted by consider- ing the capabilities of each world and the user preferences regarding the information. Each component for the information adaption is addressed in sections of ISO/IEC 23005. Finally, adoption of the standardized infor- mation representation provides opportunities for 4D broadcasting, natu- ral interaction with intelligent sensors within any virtual world, seamless interaction between real and virtual worlds, and the importing of virtual characters and objects between virtual worlds. REFERENCES [1] P. Milgram, F. Kishino, A taxonomy of mixed reality visual displays, IEICE Trans. Inf. Syst. E77-D (12) (1994). [2] J.H.A. Gelissen,Y.Y. Sivan,The Metaverse1 case: historical review of making one vir- tual worlds standard (MPEG-V), J.Virtual Worlds Res. 4 (3) (2011). [3] Lineage. <http://lineage.plaync.com>, (last accessed on 20.09.14). [4] World of WarCraft. <http://www.battle.net/wow/>, (last accessed on 20.09.14). [5] Second Life. <http://secondlife.com>, (last accessed on 20.09.14). [6] S. van der Land,A.P. Schouten, B. van der Hooff, F. Feldberg, Modelling the Metaverse: a theoretical model of effective team collaboration in 3D virtual environments, J.Virtual Worlds Res. 4 (3) (2011). [7] S.-K. Kim, S.-J.Yang, C.Ahn,Y. Joo, Sensorial information extraction and mapping to generate temperature sensory effects, ETRI J. 36 (2) (2014) 232–241. [8] H. Rheingold,Virtual Reality, Summit Books, NewYork, NY, 1991 (Chapter 2). [9] J.J. Kaye, Making scents: aromatic output for HCI, Interactions 11 (1) (2004) 48–61. [10] H.Q. Dinh, N.Walker, L.F. Hodges, C. Song,A. Kobayashi, Evaluating the importance of multisensory input on memory and the sense of presence in virtual environments, in:Proceedings—Virtual Reality Annual International Symposium,1999,pp.222–228. [11] A. Bodnar, R. Corbett, D. 
Nekrasovski, AROMA: ambient awareness through olfaction in a messaging application: does olfactory notification make "scents?", in: Sixth International Conference on Multimodal Interfaces, 2004, p. 183.
[12] J. Ryu, G.J. Kim, Using a vibro-tactile display for enhanced collision perception and presence, in: VRST '04: Proceedings of the ACM Symposium on Virtual Reality Software and Technology, ACM, New York, NY, 2004, pp. 89–96.
[13] S.A. Brewster, D.K. McGookin, C.A. Miller, Olfoto: designing a smell-based interaction, in: CHI 2006: Conference on Human Factors in Computing Systems, 2006, p. 653.
[14] G. Ghinea, O.A. Ademoye, Olfaction-enhanced multimedia: perspectives and challenges, Multimed. Tools Appl. (2010) 1–26.
[15] R. Kannan, S.R. Balasundaram, F. Andres, The role of mulsemedia in digital content ecosystem design, in: Proceedings of the International Conference on Management of Emergent Digital EcoSystems, 2010, pp. 264–266.
[16] B. Choi, E.-S. Lee, K. Yoon, Streaming media with sensory effect, in: Proceedings of the International Conference on Information Science and Application, Jeju Island, Republic of Korea, April 26–29, 2011, pp. 1–6.
[17] S.-K. Kim, Authoring multisensorial content, Signal Process. Image Commun. 28 (2) (2013) 162–167.
[18] M. Waltl, C. Timmerer, B. Rainer, H. Hellwagner, Sensory effect dataset and test setups, in: IEEE Proceedings of the Fourth International Workshop Quality Multimedia Experience, 2012, pp. 115–120.
[19] M. Waltl, C. Timmerer, H. Hellwagner, A test-bed for quality of multimedia experience evaluation of sensory effects, in: Proceedings of the International Workshop Quality Multimedia Experience, San Diego, CA, July 29–31, 2009, pp. 145–150.
[20] C. Timmerer, M. Waltl, B. Rainer, H. Hellwagner, Assessing the quality of sensory experience for multimedia presentations, Signal Process. Image Commun. 27 (8) (2012) 909–916.
[21] S.-K. Kim, Y.-S. Joo, Y. Lee, Sensible media simulation in an automobile application and human responses to sensory effects, ETRI J. 35 (6) (2013) 1001–1010.
[22] M. Waltl, B. Rainer, S. Lederer, et al., A 4D multimedia player enabling sensory experience, in: IEEE Proceedings of the Fifth International Workshop Quality Multimedia Experience, 2013, pp. 126–127.
[23] B. Rainer, C. Timmerer, M. Waltl, Recommendations for the subjective evaluation of sensory experience, in: Fourth International Workshop on Perceptual Quality of Systems, 2013.
[24] ITU-T Rec. P.910, Subjective Video Quality Assessment Methods for Multimedia Applications, April 2008.
[25] ITU-T Rec. P.911, Subjective Audiovisual Quality Assessment Methods for Multimedia Applications, December 2008.
[26] M. Waltl, C. Timmerer, H. Hellwagner, Improving the quality of multimedia experience through sensory effects, in: IEEE Proceedings of the Second International Workshop Quality Multimedia Experience, 2010, pp. 124–129.
[27] B. Rainer, M. Waltl, E. Cheng, et al., Investigating the impact of sensory effects on the quality of experience and emotional response in web videos, in: IEEE Proceedings of the Fourth International Workshop Quality Multimedia Experience, 2012, pp. 115–120.
[28] C. Timmerer, B. Rainer, M. Waltl, A utility model for sensory experience, in: IEEE Proceedings of the Fifth International Workshop Quality Multimedia Experience, 2013, pp. 224–229.
[29] A. Yazdani, E. Kroupi, J. Vesin, T.
Ebrahimi, Electroencephalogram alterations during perception of pleasant and unpleasant odors, in: IEEE Proceedings of the Fourth International Workshop Quality Multimedia Experience, Yarra Valley, Australia, 2012, pp. 272–277.
[30] ISO/IEC 23005-1:2014 Information technology—Media context and control—Part 1: Architecture, January 2014.
[31] ISO/IEC 23005-2:2013 Information technology—Media context and control—Part 2: Control information, November 2013.
[32] ISO/IEC 23005-3:2013 Information technology—Media context and control—Part 3: Sensory information, November 2013.
[33] ISO/IEC 23005-4:2013 Information technology—Media context and control—Part 4: Virtual world object characteristics, November 2013.
[34] ISO/IEC 23005-5:2013 Information technology—Media context and control—Part 5: Data formats for interaction devices, November 2013.
[35] ISO/IEC 23005-6:2013 Information technology—Media context and control—Part 6: Common types and tools, November 2013.
[36] ISO/IEC 23005-7:2014 Information technology—Media context and control—Part 7: Conformance and reference software, January 2014.
[37] Metaverse, <http://www.metaverse1.org>, (last accessed on 20.09.14).
[38] B.S. Choi, S.H. Joo, H.Y. Lee, Sensory effect metadata for SMMD media service, in: Proceedings of the Fourth International Conference on Internet and Web Applications and Services, Venice/Mestre, Italy, May 2009.
MPEG-V. DOI: http://dx.doi.org/10.1016/B978-0-12-420140-8.00002-0 © 2015 Elsevier Inc. All rights reserved.

CHAPTER 2
Adding Sensorial Effects to Media Content

Contents
2.1 Introduction
2.2 Sensory Effect Description Language
  2.2.1 SEDL Structure
  2.2.2 Base Data Types and Elements of SEDL
  2.2.3 Root Element of SEDL
  2.2.4 Description Metadata
  2.2.5 Declarations
  2.2.6 Group of Effects
  2.2.7 Effect
  2.2.8 Reference Effect
  2.2.9 Parameters
2.3 Sensory Effect Vocabulary: Data Formats for Creating SEs
2.4 Creating SEs
2.5 Conclusion
References

2.1 INTRODUCTION

MPEG-V, Part 3: Sensory Information (ISO/IEC 23005-3), specifies the Sensory Effect Description Language (SEDL) [1], an XML schema-based language that enables one to describe sensorial effects (SEs), such as light, wind, fog, and vibration, that trigger human senses. The actual SEs are not part of the SEDL but are defined within the Sensory Effect Vocabulary (SEV) for extensibility and flexibility, allowing each application domain to define its own SEs. A description conforming to SEDL is referred to as Sensory Effect Metadata (SEM) and may be associated with any type of multimedia content (e.g., movies, music, Web sites, games). The SEM is used to steer actuators such as fans, vibration chairs, and lamps, via an appropriate mediation device, to increase the user experience. That is, in addition to the audiovisual (AV) content of a movie, e.g., the user will also perceive other effects such as those described above, giving the user the sensation of being part of the particular media content, which
will result in a worthwhile, informative user experience. The concept of receiving SEs in addition to AV content is depicted in Figure 2.1. The media and corresponding SEM may be obtained from a Digital Versatile Disc (DVD), Blu-ray Disc (BD), or any type of online service (i.e., download/play or streaming). The media processing engine, also referred to as the adaptation engine, acts as the mediation device and is responsible for playing the actual media resource and the accompanying SEs in a synchronized way based on the user's setup in terms of both the media content and the rendering of the SEs. Therefore, the media processing engine may adapt both the media resource and the SEM according to the capabilities of the various rendering devices. The SEV defines a clear set of actual SEs to be used with the SEDL in an extensible and flexible way. That is, it can be easily extended with new effects, or through a derivation of existing effects, thanks to the extensibility feature of the XML schema. Furthermore, the effects are defined based on the authors' (i.e., the creators of the SEM) intention, independent of the end user's device setting, as shown in Figure 2.2. The sensory effect metadata elements or data types are mapped to commands that control the actuators based on their capabilities. This mapping is usually provided by the Virtual-to-Real adaptation engine and was deliberately not defined in this standard, i.e., it is left open for industry competition. It is important to note that there is not necessarily a one-to-one mapping between elements or data types of the SE data and ACs. For example, the effect of hot/cold wind may be rendered on a single device with two capabilities, i.e., a heater or air conditioner, and a fan or ventilator.
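Since the standard leaves this mapping open, the hot/cold wind example above can only be sketched. The following is a hypothetical illustration, not the normative behavior: the function name, the capability names, the command format, and the 18 °C hot/cold threshold are all invented.

```python
# Hypothetical sketch: one "hot/cold wind" sensory effect fans out to several
# capabilities of a single device. Nothing here is defined by MPEG-V itself.

def map_wind_effect(target_temp_c, wind_intensity_pct, capabilities):
    """Return (capability, setting) command pairs for the available actuators."""
    commands = []
    if "fan" in capabilities:
        commands.append(("fan", wind_intensity_pct))
    if target_temp_c is not None:
        if target_temp_c < 18 and "air_conditioner" in capabilities:
            commands.append(("air_conditioner", target_temp_c))  # cold wind
        elif target_temp_c >= 18 and "heater" in capabilities:
            commands.append(("heater", target_temp_c))           # hot wind
    return commands

# A cold-wind effect on a device exposing all three capabilities:
print(map_wind_effect(5, 100, {"fan", "heater", "air_conditioner"}))
# → [('fan', 100), ('air_conditioner', 5)]
```

The point of the sketch is only that a single SE element may yield several actuator commands, and that the split depends entirely on the advertised capabilities.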
As shown in Figure 2.3, the SEs can be adjusted into adapted SEs (defined in MPEG-V, Part 5, as device commands) in accordance with the capabilities of the actuators (ACs, defined in MPEG-V, Part 2) and the actuation preferences (APs, defined in MPEG-V, Part 2, as user sensory preferences).

Figure 2.1 Concept of MPEG-V SEDL [1].
Figure 2.4 shows an example of combining SEs (SEs in MPEG-V, Part 3) with sensed information (SI in MPEG-V, Part 5) to generate adapted actuator commands (ACmd in MPEG-V, Part 5). For example, the SE corresponding to the scene might be cooling the temperature to 5°C and adding a wind effect with 100% intensity. Assume instead that the current room temperature is 12°C. It would be unwise to deploy the cooling and wind effects as described in the SE data, because the current temperature inside the room is already low, and users may feel uncomfortable with the generated SEs. Therefore, a sensor measures the room temperature, and the adaptation engine generates the adapted SEs (i.e., ACmds), which are a reduced wind effect (20% intensity) and a heating effect (20°C), for instance.

Figure 2.2 Mapping of author's intentions to SE data and actuator capabilities (ACs) [2].
Figure 2.3 The adapted SEs (actuator commands defined in MPEG-V, Part 5) generated by combining SEs with ACs and the user's APs.
Figure 2.4 The adapted SEs (actuator commands defined in MPEG-V, Part 5) generated by combining SEs with SI.

This chapter is organized as follows. Section 2.2 describes the details of the SEDL. Section 2.3 presents the SEV, which specifies the data formats used for creating SEs. Section 2.4 presents XML instances using SEDL and SEV. Finally, Section 2.5 concludes the chapter.

2.2 SENSORY EFFECT DESCRIPTION LANGUAGE

2.2.1 SEDL Structure
The SEDL is a language providing the basic building blocks to instantiate sensory effect metadata defined by the MPEG-V standard, based on XML, that can be authored by content providers.
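The room-temperature example from Section 2.1 (cool to 5 °C with 100% wind, sensed room temperature 12 °C, adapted to 20% wind and 20 °C heating) can be sketched as a toy adaptation rule. The threshold, scaling factor, and comfort temperature below are invented for illustration; MPEG-V deliberately leaves the adaptation algorithm out of scope.

```python
# Toy, non-normative adaptation rule reproducing the book's illustrative
# numbers. A real adaptation engine would be far more elaborate.

def adapt_effects(se_temp_c, se_wind_pct, sensed_room_temp_c, comfort_temp_c=20):
    """Return adapted (wind_pct, temp_setpoint_c) given the sensed room temperature."""
    if sensed_room_temp_c <= se_temp_c + 10:        # room is already cold
        return (0.2 * se_wind_pct, comfort_temp_c)  # tone wind down, heat gently
    return (se_wind_pct, se_temp_c)                 # room is warm: render as authored

print(adapt_effects(5, 100, 12))  # → (20.0, 20)
```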
2.2.2 Base Data Types and Elements of SEDL
There are two base types in the SEDL. The first base type is SEMBaseAttributes, which includes six base attributes and one base attribute group. The schema definition of SEMBaseAttributes is shown in Table 2.1. The activate attribute describes whether the SE shall be activated. The duration attribute describes the duration of any SE rendering. The fade attribute describes the fade time within which the defined intensity is reached. The alt attribute describes an alternative effect identified by a uniform resource identifier (URI). For example, an alternative effect is chosen when the originally intended effect cannot be rendered owing to a lack of devices supporting that effect. The priority attribute describes the priority of effects with respect to other effects in the same group of effects sharing the same point in time when they should become available for consumption. A value of 1 indicates the highest priority, and larger values indicate lower priorities. The location attribute describes the location from where the effect is expected to be received from the user's perspective according to the X, Y, and Z axes, as depicted in Figure 2.5. A classification scheme that may be used for this purpose is LocationCS, as defined in Annex A of ISO/IEC 23005-6. For example, urn:mpeg:mpeg-v:01-SI-LocationCS-NS:left:*:midway defines the location as follows: left on the X-axis, any location on the Y-axis, and midway on the Z-axis. That is, it describes all effects on the left-midway side of the user.
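A LocationCS term such as the one above can be split into its per-axis parts mechanically. The sketch below is illustrative: only the URN comes from the text, the helper itself is hypothetical, and a real implementation would validate terms against the classification scheme in ISO/IEC 23005-6.

```python
# Sketch: decompose a LocationCS term into X/Y/Z components.
LOCATION_CS = "urn:mpeg:mpeg-v:01-SI-LocationCS-NS"

def parse_location(term):
    if not term.startswith(LOCATION_CS + ":"):
        raise ValueError("not a LocationCS term: %s" % term)
    x, y, z = term[len(LOCATION_CS) + 1:].split(":")
    return {"x": x, "y": y, "z": z}   # "*" means any position on that axis

print(parse_location("urn:mpeg:mpeg-v:01-SI-LocationCS-NS:left:*:midway"))
# → {'x': 'left', 'y': '*', 'z': 'midway'}
```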
The SEMAdaptabilityAttributes group contains two attributes related to the adaptability of the SEs. The adaptType attribute describes the preferred type of adaptation using the following possible instantiations: strict, i.e., an adaptation by approximation may not be performed; under, i.e., an adaptation by approximation may be performed with a smaller effect value than the specified effect value; over, i.e., an adaptation by approximation may be performed with a greater effect value than the specified effect value; and both, i.e., an adaptation by approximation may be performed between the upper and lower bounds specified by adaptRange. The adaptRange attribute describes the upper and lower bounds, in terms of percentage, for adaptType.

Table 2.1 Schema definition of SEMBaseAttributes
<attributeGroup name="SEMBaseAttributes">
  <attribute name="activate" type="boolean" use="optional"/>
  <attribute name="duration" type="positiveInteger" use="optional"/>
  <attribute name="fade" type="positiveInteger" use="optional"/>
  <attribute name="alt" type="anyURI" use="optional"/>
  <attribute name="priority" type="positiveInteger" use="optional"/>
  <attribute name="location" type="mpeg7:termReferenceType" use="optional"/>
  <attributeGroup ref="sedl:SEMAdaptabilityAttributes"/>
</attributeGroup>
<attributeGroup name="SEMAdaptabilityAttributes">
  <attribute name="adaptType" type="sedl:adaptTypeType" use="optional"/>
  <attribute name="adaptRange" type="sedl:adaptRangeType" default="10" use="optional"/>
</attributeGroup>

There are five base elements (Table 2.2), i.e., Declarations, GroupOfEffects, Effect, ReferenceEffect, and Parameter, which are explained in detail in the following sections; all are extended from the abstract SEMBaseType type (the top-most base type in SEDL). This structure of having an abstract type is a way of providing extensibility in the standard, allowing any element having an extended type of SEMBaseType to be used when each element is instantiated. SEMBaseType has an id attribute that identifies the id of SEMBaseType (Table 2.3 and Figure 2.6).

Figure 2.5 Location model for SEs and reference coordinate system (X: left, centerleft, center, centerright, right; Y: top, middle, bottom; Z: back, midway, front).
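How adaptType and adaptRange might constrain an adapted effect value can be sketched as a simple interval computation. This is an illustrative simplification of the semantics described above, not the normative adaptation behavior.

```python
# Illustrative, non-normative sketch: adaptRange is a percentage (default 10),
# and the adaptType names follow the enumeration described in the text.

def allowed_interval(value, adapt_type="both", adapt_range=10):
    """Return the (low, high) interval an adapted value may fall into."""
    delta = value * adapt_range / 100.0
    if adapt_type == "strict":
        return (value, value)              # no approximation allowed
    if adapt_type == "under":
        return (value - delta, value)      # only smaller values allowed
    if adapt_type == "over":
        return (value, value + delta)      # only greater values allowed
    return (value - delta, value + delta)  # "both": within +/- adaptRange percent

print(allowed_interval(50.0))             # → (45.0, 55.0)
print(allowed_interval(50.0, "under"))    # → (45.0, 50.0)
```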
2.2.3 Root Element of SEDL
Table 2.4 shows the schema definition of the SEM root element of SEDL, along with the structure diagram shown in Figure 2.7. The SEM root element can contain the DescriptionMetadata element; unlimited repetitions of the Declarations, GroupOfEffects, Effect, and ReferenceEffect elements; and anyAttribute, which can identify the process units and associated time information. The DescriptionMetadata, Declarations, GroupOfEffects, Effect, and ReferenceEffect element types are explained in detail in the following sections.

Figure 2.6 Definition of the SEMBaseType type.

Table 2.2 Schema definition of base elements in SEDL
<element name="Declarations" type="sedl:DeclarationsType"/>
<element name="GroupOfEffects" type="sedl:GroupOfEffectsType"/>
<element name="Effect" type="sedl:EffectBaseType"/>
<element name="ReferenceEffect" type="sedl:ReferenceEffectType"/>
<element name="Parameter" type="sedl:ParameterBaseType"/>

Table 2.3 Schema definition of SEMBaseType
<complexType name="SEMBaseType" abstract="true">
  <complexContent>
    <restriction base="anyType">
      <attribute name="id" type="ID" use="optional"/>
    </restriction>
  </complexContent>
</complexType>
The anyAttribute contains siAttributeList, which holds properties related to process-unit fragmentation, i.e., anchorElement, puMode, and encodesAsRAP, and properties related to the time information, i.e., timeScale, ptsDelta, absTimeScheme, absTime, and pts. There is a rule that the SEM element must have a timeScale attribute. siAttributeList is the XML streaming instruction defined in ISO/IEC 21000-7 (MPEG-21). The XML streaming instructions allow, first, identifying the process units in an XML document and, second, assigning time information to these units. These instructions are particularly required when an entire XML document is fragmented into small pieces of (e.g., well-formed) XML documents for effective streaming or storage purposes. The GroupOfEffects, Effect, and ReferenceEffect elements can again contain siAttributeList to describe the properties related to the fragmentation and time information. Table 2.5 shows an instance of the SEM root element, including several attributes used to identify the namespaces, as well as an example of the siAttributeList attributes. The puMode and timeScale in the SEM root element are inherited by the child anchor elements. The puMode "ancestorsDescendants" indicates that each process unit contains the anchor element, its ancestors, and its descendant elements. The timeScale attribute specifies the timescale, i.e., the number of ticks per second.

Table 2.4 Schema definition of the SEM root element
<element name="SEM">
  <complexType>
    <sequence>
      <element name="DescriptionMetadata" type="sedl:DescriptionMetadataType" minOccurs="0" maxOccurs="1"/>
      <choice maxOccurs="unbounded">
        <element ref="sedl:Declarations"/>
        <element ref="sedl:GroupOfEffects"/>
        <element ref="sedl:Effect"/>
        <element ref="sedl:ReferenceEffect"/>
      </choice>
    </sequence>
    <anyAttribute namespace="##other" processContents="lax"/>
  </complexType>
</element>
Figure 2.7 Structure diagram of the SEM root element.

Table 2.5 Example instance of the SEM root element

  <?xml version="1.0"?>
  <SEM xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns="urn:mpeg:mpeg-v:2010:01-SEDL-NS"
       xmlns:sev="urn:mpeg:mpeg-v:2010:01-SEV-NS"
       xmlns:mpeg7="urn:mpeg:mpeg7:schema:2004"
       xmlns:si="urn:mpeg:mpeg21:2003:01-DIA-XSI-NS"
       xsi:schemaLocation="urn:mpeg:mpeg-v:2010:01-SEV-NS MPEG-V-SEV.xsd"
       si:puMode="ancestorsDescendants"
       si:timeScale="1000">
    …
  </SEM>
2.2.4 Description Metadata

The DescriptionMetadata element describes general information about the SE metadata, such as the creation information or classification scheme aliases. As shown in Table 2.4, the DescriptionMetadata element is of DescriptionMetadataType, which extends MPEG7:DescriptionMetadataType. As shown in Figure 2.8, MPEG7:DescriptionMetadataType describes general information such as the creators, version, creation time, and related information.

Figure 2.8 Structure diagram of DescriptionMetadataType.

DescriptionMetadataType also contains the ClassificationSchemeAlias element, which describes an alias for
a classification scheme referenced by a URI. An example instance of the ClassificationSchemeAlias element of DescriptionMetadataType is shown in Table 2.6. In this instance, the URI of the classification scheme, urn:mpeg:mpeg-v:01-SI-ColorCS-NS, is replaced by the alias "COLOR", such that the light effect specifies its light color attribute as ":COLOR:amber" instead of using "urn:mpeg:mpeg-v:01-SI-ColorCS-NS:amber".

Table 2.6 Example instance of the DescriptionMetadata element and its usage in a light effect

  <sedl:DescriptionMetadata>
    <sedl:ClassificationSchemeAlias href="urn:mpeg:mpeg-v:01-SI-ColorCS-NS"
                                    alias="COLOR"/>
  </sedl:DescriptionMetadata>
  <sedl:Effect xsi:type="sev:LightType" intensity-value="50.0"
               intensity-range="0.00001 32000.0" duration="28"
               color=":COLOR:amber" si:pts="0"/>

2.2.5 Declarations

The Declarations type, which extends the SEMBaseType type, describes a declaration of sensory effects, groups of sensory effects, or parameters (Figure 2.9). In other words, an element defined by the Declarations type can contain an unbounded number of effects, groups of effects, or parameters that can be referenced later by the ReferenceEffect element.

Figure 2.9 Structure diagram of DeclarationsType.
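To make the alias mechanism of Table 2.6 concrete, the following sketch collects the ClassificationSchemeAlias entries from a metadata fragment and expands a ":COLOR:amber" term back to its full classification-scheme URI. The helper names are our own illustration, not part of the standard.

```python
import xml.etree.ElementTree as ET

SEDL_NS = "urn:mpeg:mpeg-v:2010:01-SEDL-NS"

# A DescriptionMetadata fragment in the style of Table 2.6.
FRAGMENT = (
    '<sedl:DescriptionMetadata xmlns:sedl="' + SEDL_NS + '">'
    '<sedl:ClassificationSchemeAlias href="urn:mpeg:mpeg-v:01-SI-ColorCS-NS" '
    'alias="COLOR"/>'
    '</sedl:DescriptionMetadata>'
)

def expand_alias(term: str, metadata: ET.Element) -> str:
    """Expand a ':ALIAS:value' term using ClassificationSchemeAlias entries."""
    aliases = {
        el.get("alias"): el.get("href")
        for el in metadata.iter("{%s}ClassificationSchemeAlias" % SEDL_NS)
    }
    if term.startswith(":"):
        _, alias, value = term.split(":", 2)
        return "%s:%s" % (aliases[alias], value)
    return term  # already a full URI

metadata = ET.fromstring(FRAGMENT)
print(expand_alias(":COLOR:amber", metadata))
# -> urn:mpeg:mpeg-v:01-SI-ColorCS-NS:amber
```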
For example, if a group of effects called "explosion", composed of light, scent, and vibration effects, is declared in the Declarations element, it can be reused several times during the last part of a movie sequence using ReferenceEffect elements.

2.2.6 Group of Effects

GroupOfEffectsType, which extends SEMBaseType, describes a group of two or more SEs (Figure 2.10). The SE elements in GroupOfEffects can be defined by either EffectBaseType or ReferenceEffectType. Several rules apply when implementing GroupOfEffects. GroupOfEffects shall have a timestamp (i.e., pts, ptsDelta, or absTime).

Figure 2.10 Structure diagram of GroupOfEffectsType.

Outside of the
Declarations, GroupOfEffects shall not have both pts and absTime at the same time, because if these two attributes contained different timestamps, the decoder could not decide which one to follow for rendering the SEs. GroupOfEffects within Declarations shall have only ptsDelta as a timestamp. This means that SEs in GroupOfEffects within the Declarations may have different starting times. The GroupOfEffects element can contain the siAttributeList to describe the properties related to fragmentation and time information for effective XML streaming.

2.2.7 Effect

EffectBaseType extends SEMBaseType and provides an abstract base type for a subset of types defined as part of the sensory effect metadata types (Figure 2.11). EffectBaseType contains the siAttributeList in anyAttribute to describe the properties related to fragmentation and time information for effective XML streaming.

Figure 2.11 Structure diagram of EffectBaseType.

This type includes the autoExtraction attribute, which describes the automatic extraction of SEs
and their major attributes, such as the intensity-value, from a media resource such as a video or audio sequence. This type also includes the SupplementalInformation element of SupplementalInformationType (Figure 2.12) to describe the reference region (i.e., the ReferenceRegion element) for an automatic extraction from a video sequence, and the Operator element, which describes how to extract SEs from the reference region of the video sequence. The Operator element can be specified as either average or dominant.

Figure 2.12 Structure diagram of SupplementalInformationType.

The following rules shall be referenced to generate valid Effect metadata.

1. At least one of activate, duration, or fade shall be defined.
2. Effect outside of GroupOfEffects shall have a timestamp (i.e., pts, ptsDelta, or absTime).
3. Effect within GroupOfEffects shall have only ptsDelta for a timestamp.
4. Effect shall not have both pts and absTime at the same time.
5. Effect within Declarations shall have only ptsDelta for a timestamp.
6. If duration is defined, activate may not be defined.
7. If fade and duration are defined, activate may not be defined.
8. If fade is defined, the intensity shall also be defined.
9. If fade and duration are defined, fade must be less than or equal to duration.

2.2.8 Reference Effect

ReferenceEffectType describes a reference to a SE, groups of SEs, or parameters (Figure 2.13). The uri attribute describes a reference to a SE, groups of SEs, or parameters by a URI. ReferenceEffectType contains siAttributeList in anyAttribute to describe the properties related to fragmentation and time information for effective XML streaming.
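The timestamp rules for Effect above, which recur in the same form for ReferenceEffect, lend themselves to a mechanical check. The sketch below is our own illustration of how an authoring tool might validate them; it is not part of the standard.

```python
def check_timestamps(attrs, in_group_of_effects=False, in_declarations=False):
    """Return a list of violated timestamp rules for an Effect (or
    ReferenceEffect) whose attributes are given as a dict."""
    errors = []
    has = lambda name: name in attrs
    # pts and absTime are mutually exclusive: the decoder could not
    # decide which timestamp to follow for rendering.
    if has("pts") and has("absTime"):
        errors.append("pts and absTime shall not both be present")
    if in_group_of_effects or in_declarations:
        # Only ptsDelta is allowed inside GroupOfEffects or Declarations.
        if has("pts") or has("absTime"):
            errors.append("only ptsDelta is allowed in this context")
    elif not (has("pts") or has("ptsDelta") or has("absTime")):
        # A standalone Effect needs some timestamp.
        errors.append("a timestamp (pts, ptsDelta, or absTime) is required")
    return errors

print(check_timestamps({"pts": 28000}))                        # -> []
print(check_timestamps({"pts": 1, "absTime": "00:00:01"}))     # one violation
print(check_timestamps({"pts": 1}, in_group_of_effects=True))  # one violation
```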
The following rules shall be referenced to generate valid ReferenceEffect metadata.

1. ReferenceEffect outside of GroupOfEffects shall have a timestamp (i.e., pts, ptsDelta, or absTime).
2. ReferenceEffect within GroupOfEffects shall have only ptsDelta for a timestamp.
3. ReferenceEffect shall not have both pts and absTime at the same time.
4. ReferenceEffect within Declarations shall have only ptsDelta for a timestamp.

Figure 2.13 Structure diagram of ReferenceEffectType.

2.2.9 Parameters

ParameterBaseType simply extends SEMBaseType, as shown in Figure 2.14. ColorCorrectionParameterType is the only defined parameter type; it supports the color correction effect. The parameters define the color characteristics of the content provider's display device along with the
lighting conditions surrounding the content provider. The parameters passed through this type enable reproducing, on the consumer side, display colors exactly the same as the colors created by the content provider. ColorCorrectionParameterType contains five elements: ToneReproductionCurves, ConversionLUT, ColorTemperature, InputDeviceColorGamut, and IlluminanceOfSurround. The ToneReproductionCurves element represents the characteristics (e.g., gamma curves for the R, G, and B channels) of the provider's display device. The ConversionLUT element is a look-up table (matrix) that converts an image between an image color space (e.g., RGB) and a standard connection color space (e.g., CIE XYZ). The ColorTemperature element describes the white point setting (e.g., D65, D93) of the content provider's display device. The InputDeviceColorGamut element describes an input display device's color gamut, which is represented by the chromaticity values of the R, G, and B channels at the maximum Digital-to-Analog (DAC) values. The IlluminanceOfSurround element describes the illuminance level of the provider's viewing environment. The illuminance is represented in lux. Figure 2.15 shows the structure of ColorCorrectionParameterType.

Figure 2.14 Structure diagram of ParameterBaseType.

2.3 SENSORY EFFECT VOCABULARY: DATA FORMATS FOR CREATING SEs

The SEDL provides a high-level structure and, as described in this chapter, only provides abstract elements through which the extended types of individual SEs can be instantiated. The data format for creating the metadata of each individual SE is defined as a sensory effect vocabulary in this standard. Table 2.7 shows the list of SEs defined in ISO/IEC 23005-3:2013. There are 15 SEs currently defined, and all of them are defined as extensions of EffectBaseType, with the exceptions of FlashType and PassiveKinestheticMotionType: FlashType is defined as an extension of LightType, and PassiveKinestheticMotionType as an extension of RigidBodyMotionType (see Table 2.7).
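As an illustration of how the ToneReproductionCurves and ConversionLUT parameters of Section 2.2.9 fit together, the sketch below linearizes R, G, and B with a simple per-channel gamma curve and then applies a 3x3 conversion matrix to reach the connection space (CIE XYZ). The gamma value and matrix are illustrative sRGB-like stand-ins, not values mandated by the standard.

```python
# sRGB-like RGB-to-XYZ matrix, used here only as a stand-in for a
# ConversionLUT supplied by the content provider.
LUT = (
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
)

def rgb_to_xyz(rgb, gamma=2.2, lut=LUT):
    """Apply a per-channel tone-reproduction curve, then the conversion
    LUT, mapping device RGB into the connection space (CIE XYZ)."""
    linear = [c ** gamma for c in rgb]  # ToneReproductionCurves step
    return tuple(
        sum(row[i] * linear[i] for i in range(3)) for row in lut
    )

# Full-drive white maps to the white point implied by the matrix
# (Y = 1.0, since the middle row of this matrix sums to 1).
x, y, z = rgb_to_xyz((1.0, 1.0, 1.0))
print(round(x, 4), round(y, 4), round(z, 4))
```

A consumer-side renderer would invert its own device model after this step, so that the reproduced colors match the provider's display under the signaled surround illuminance.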
Figure 2.15 Structure diagram of ColorCorrectionParameterType.

Table 2.7 Sensory effect vocabulary defined in ISO/IEC 23005-3:2013

  Type name                     Base type
  LightType                     EffectBaseType
  FlashType                     LightType
  TemperatureType               EffectBaseType
  WindType                      EffectBaseType
  VibrationType                 EffectBaseType
  SprayingType                  EffectBaseType
  ScentType                     EffectBaseType
  FogType                       EffectBaseType
  ColorCorrectionType           EffectBaseType
  RigidBodyMotionType           EffectBaseType
  PassiveKinestheticMotionType  RigidBodyMotionType
  PassiveKinestheticForceType   EffectBaseType
  ActiveKinestheticType         EffectBaseType
  TactileType                   EffectBaseType
  ParameterizedTactileType      EffectBaseType
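Since Table 2.7 gives each vocabulary type's direct base, the chain up to EffectBaseType can be resolved mechanically. This small sketch (our own, not part of the standard) confirms that the two indirect extensions still bottom out at EffectBaseType.

```python
# Direct base types as listed in Table 2.7.
BASE = {
    "LightType": "EffectBaseType",
    "FlashType": "LightType",
    "TemperatureType": "EffectBaseType",
    "WindType": "EffectBaseType",
    "VibrationType": "EffectBaseType",
    "SprayingType": "EffectBaseType",
    "ScentType": "EffectBaseType",
    "FogType": "EffectBaseType",
    "ColorCorrectionType": "EffectBaseType",
    "RigidBodyMotionType": "EffectBaseType",
    "PassiveKinestheticMotionType": "RigidBodyMotionType",
    "PassiveKinestheticForceType": "EffectBaseType",
    "ActiveKinestheticType": "EffectBaseType",
    "TactileType": "EffectBaseType",
    "ParameterizedTactileType": "EffectBaseType",
}

def ancestry(type_name):
    """Follow the base-type chain from a vocabulary type upward."""
    chain = [type_name]
    while chain[-1] in BASE:
        chain.append(BASE[chain[-1]])
    return chain

print(ancestry("FlashType"))
# -> ['FlashType', 'LightType', 'EffectBaseType']
```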
  • 49. Discovering Diverse Content Through Random Scribd Documents
  • 50. The latest newspapers, bouquets, of choice flowers for everyone, concert parties, and indeed everything good that kind hearts could think of was showered upon us. I remember a western bed-patient asked me if I thought they would get him a plug of a special brand of chewing tobacco. He hadn't been able to buy it in four years overseas and it was his favorite. Sure thing, one of them said and in half-an-hour two men came aboard lugging along enough of that tobacco to stock a small store! I cannot go further into details. Enough to say that every trip it was the same, except that their hospitality became more systematized. Probably fifteen thousand wounded Canadian soldiers passed through Portland on their way home, and I know they will find it hard to forget the free-handed, warm-hearted welcome they got in that city. This memory will surely be a leaven working towards the maintenance and development of peace and good-will between Canada and the United States. But I must tell my story of the North. One trip Major Dick Shillington persuaded me to give them a Klondike evening. We gathered down below in H Mess and there I told something of the life-story of my old friend of by-gone days, a trail-blazer and prospector, Duncan McLeod. * * * * * When first I met Duncan McLeod, Cassiar Mac he was commonly called, he and his partner, John Donaldson, both old men, were working far up a tributary of Gold Bottom Creek which had not yet been fully prospected. Each liked to have his own house and do his own cooking, and so they lived within a few yards of each other in the creek bottom at the foot of the mountain summit that rose between them and Indian River. My trail over to the other creeks passed across their ground, and when we became friends I seldom failed to arrange my fortnightly trip over the divide so as to reach their place about dusk. I would have supper with one or the other and stay the night. 
McLeod was an old-country Scot, Donaldson born of Scottish parents in Glengarry county, Ontario. I am not using their real names, but they were real men. One of them, Donaldson, is still living in the wilds of the Yukon, still prospecting. He was the first white man the Teslin Indians had seen and known. They looked upon him as their Hi-yu-tyee, a sort of super-chief,
  • 51. during the years he lived near them. He had been just and kind with them, and his consequent influence saved occasional serious friction between the Indians and whites from becoming murder or massacre. After supper we would all three gather in one of the cabins and I would hear good talk until far towards midnight. Then there would be a pause and McLeod would say, Well, Mr. Pringle, I think it is time we were getting ready for our beds. I knew what he meant. Yes, it is, I would reply. The Bible would be handed me, I would read a chapter and we would kneel in prayer to God. Then to our bunks for a good sleep and early away in the morning for me to make the twenty-five miles over the heights to Gold Run before dark. What great talks those were I used to hear. I was only a boy then, and these old men had seen so much of the wild free life of the West of long ago days. What stirring adventures they had had! They came west before the railways by way of the American prairies, and, lured by gold discoveries, had entered the mountains, and then following the prospector's will-o-the- wisp, the better luck that lies just over the divide, they had gone farther and farther north. They had met and become partners in the Caribou camp, and had been together nearly forty years, in the Cassiar, on the Stewart, at Forty-Mile and now the Klondike. Donaldson had a wonderful native power of description. When story- telling he would pace slowly back and forth in the shadow beyond the dim candle-light and picture with quiet, resonant voice scenes and events of the past. How vivid it seemed to me! How the soul of the young man thrilled as he listened! Often there was a yearning at my heart when under his spell to lay aside my mission and go out into the farthest wilds, seeking adventure and living the free, fascinating life they had lived. How I wish I had written down these stories as they were told to me. 
But maybe they wouldn't have written, for much of the interest lay in the personality of the story-teller. McLeod's part was usually to help with dates or names when Donaldson's memory failed to recall them, but often he too would spin a yarn, and when he did there was always in its telling a gentleness, I can think of no better word, that gave a charm often missing in Donaldson's rougher style.
  • 52. They were both big men physically, but McLeod had been magnificent. He was now nearly eighty years old and broken with rheumatism, but in the giant frame and noble face and head crowned with its snow-white hair I saw my ideal of what a great Highland Chieftain might have been in the brave days of old. Donaldson told me one night, while his partner was making a batch of bread in his own cabin, what he knew of McLeod's history. I have never known a man, he said, that would measure up to my partner. None of us want our record searched too closely but it wouldn't make any difference to him. Nothing, nobody, seemed to have any power to make Mac do anything crooked or dirty. Whisky, gambling, bad women—he passed them up without apparent effort. Very strange too, even the few good women we have met in these camps never won anything from him but wholesome admiration. He had only to say the word and he could have had any one of them, but he didn't seem to care that way. What his experience had been before we met I do not know, he has never spoken much about it to anyone. But he and I have lived together as partners for nearly half a century, through the crazy, wicked days of all these gold camps, and Mac never did anything that he would be ashamed to tell his own mother. A fine tribute. Perhaps under the circumstances the finest thing that could be said of any man, for you cannot imagine the thousand almost irresistible temptations that were part of the daily life of the stampeders in those northern camps. Enough for me to say that many men of really good character back East, where they were unconsciously propped up by influences of family, church, and community, failed miserably to keep their footing when they came to the far north where all these supports were absent and temptation was everywhere. I do not judge them. God only knows the fight they had before they surrendered. 
So it was an arresting event to meet a man who had seen it all and whose partner of forty years told me he had lived clean. I often wondered what McLeod's story was. I had known him for three years before I ventured to ask him details about his home in Scotland, and why he left it to come so far away. I knew he had been reared in a village south of Edinburgh, in a good home with good parents, and much else he
  • 53. had told me, but there had always been a reticence that made you certain there was something else held back. One winter night when we were alone in his cabin, he opened his heart to me. He was an old-fashioned Scot. I was his minister and he knew me well. Besides he was coming to the end of the trail, and he needed a confidant. He said his story was hardly worth while bothering me with, I knew most of it, but what he could never tell anyone was about the lassie he had loved and lost. He had fallen in love with the brown eyes and winsome face of Margaret Campbell, a neighbour's daughter. They had gone to the same school, had taken their first communion together, and had both sung in the village church choir. When he revealed his love to her she told him she had guessed his secret and had lang syne given her heart to him. They were betrothed and very happy. But Margaret took ill in the fall and died before the new year. Early in the year he sailed from Leith for Canada, hoping that new scenes would soften his grief. As the years passed he kept moving west and then north. He grew to like the free life of the prospector and had not cared to leave the mountains and the trails. Time had healed the wound but his love for the sweetheart of his youth was just as true and tender as ever. From a hidden niche at the head of his bed he took down a small box, brought it to the table near the candle and unlocked it. He showed me his simple treasures. His rough, calloused hands trembled as he lifted them carefully from the box. There was a small photo so faded I could barely see the face on it. You'll see she was very beautiful, he said, for he saw with the clear vision of loving memory what was not for my younger but duller eyes to discern. There was her gold locket with a wisp of brown hair in it. She left me this, he said, when she died. 
Last, there was an old letter, stained and worn, the only love-letter she had ever written him, for he had only once been far enough or long enough away to need letters. He had spent a week in Glasgow after they became engaged and she had written to him. This was all. Somehow I felt as if I were on sacred ground, that the curtain had been drawn from before a Holy Place, and I was looking upon something more beautiful than I had ever seen before. As the old man put the box away his
  • 54. eyes were shining with a light that never was on sea or land. Mine were moist, and for a little I couldn't trust my voice to speak as I thought of the life-time of unswerving fealty to his dead lassie. Such long, lonely years they must have been! We did not say much more that night but the words we spoke were full of understanding and reverence. When it grew late and he handed me the Bible I hesitated in choosing a chapter, but not for long. The comfort and rejoicing of the twenty-third Psalm were all we wanted. One morning, not long afterwards, Donaldson came into my cabin on Hunker creek in evident distress. McLeod hadn't come out as usual to his work that morning, and he had gone to see what was wrong and found him in his bunk hardly able to speak. He had taken a stroke. A neighbouring miner watched by the sick man while Donaldson hitched up his dogs and raced to Dawson for medical aid. Donaldson went off down the trail and I hurried up the gulch to my old friend. He lingered for two or three days. The doctor could do nothing for him but to ease his last moments. I stayed near him until the end came. When he tried to speak his utterance was indistinct and what few words I could make out showed that his mind was wandering. Sometimes he was on the trail or in the camp, but oftenest he was home again in the far away land he loved, and in boyhood days among folk we did not know save one, known only to me, whose name was continually on his lips. He had a lucid interval just before he died and for a minute or two he thought and spoke clearly. I told him that death was near. Was there anything that we could do for him? Not very much, he said, I want Donaldson to have all I own. He's been a good partner. Bury my box with me. I'm not afraid to go now. It's just another prospecting trip to an unknown land and I have a Great Guide. He won't forsake an old prospector. He was one Himself, I'm thinking, when He came seeking us. 
He will keep a firm grip of me now that the trail is growing dark. I'm not afraid. These were his last words, and as he slipped away, we, who were gathered in the dimly-lighted little cabin, felt somehow that the Guide he
  • 55. spoke of was right at hand. He would surely keep a firm grip of the old miner on his last prospecting trip, even if strange storms were blowing, and it was black dark when they crossed the Great Divide. It would come morning too in that land when night was past, and when the new day dawned I know he would soon find the one whom he had loved long since and lost awhile. XVI. Soapy Smith, the Skagway Bandit My billet on the hospital ship Araguaya was very comfortable and my duties agreeable, but every time we reached port on the Canadian side of the Atlantic I had an impulse to desert the ship and become a stowaway on the hospital-train bound for British Columbia. It was there my wife and boy lived and I hadn't seen them for three years. However I got the chance at last to go without breaking regulations, for when I requested it, leave was readily granted me to stay ashore over one round-trip of the boat. This was supplemented by my taking the place of an absent conducting officer on the western train. So my transportation cost me nothing, except the congenial task of making myself generally useful to the returning soldiers. We had crossed the prairies, dropping many of our crowd at way points, and were climbing slowly along after supper up through a lonely stretch of mountains, when someone in the car where I was visiting gave it as his opinion that this would be a good piece of road on which to stage a train- robbery. This, of course, led to the mention of gun-men that they had known or heard of, men of the same ilk as Jesse James and Bill Miner. I contributed the story of Soapy Smith, the man who pulled off the most remarkably prolonged hold-up of which I have ever read. In the most approved dime-novel style he terrorized a town, not for a few days or weeks, but for six months.
  • 56. * * * * * You'll have to see the spot where Soapy died. The Skagway man who said this was rather proud of the celebrity which the bandit had brought to the place. I had come by the steamboat the nine hundred miles north from Vancouver, and was forced to spend a day in Skagway before going over the White Pass on my way to Dawson. A resident of the town was taking me around showing me the sights of this mushroom camp. It was humming with life and packed with people. The rush to the goldfields was then at its height. I judged by my friend's tone that he expected me to be deeply impressed with this particular sight. So down to the sea we went and out on the wharf. As we walked down he outlined the story of Smith's career in the camp. On the pier he showed me a dark stain, covering about a square foot, made by the life-blood of the man who for half-a-year forced Skagway to pay him tribute in hard cash. He was the leader of a group of men who robbed and cheated in wholesale style, and when it was necessary, in getting their victim's money, did not stop at murder. No one had attempted successfully to interfere with him. Reputable merchants were all intimidated into handing him their life-insurance premiums whenever he asked for them. His reputation as a killer was such that on the fourth of July, when good Americans celebrate their freedom, he rode at the head of the procession on a white horse! Very few complained loudly enough for Soapy to hear. Without question his nerve is to be admired. I have never heard or read in the annals of the west anything to equal his record in that Alaskan port. Desperadoes have ridden into towns, shot them up, took what they wanted and got away with it. But this man and his gang lived openly in a town of several thousands and in the most brazen fashion ran the place for months, although he was known as a crook, gunman, and leader of a gang of thugs. Skagway, it is true, was simply an eddy in a stream running into the gold-fields. 
In their mad haste to get on and over the Pass people wouldn't take time to straighten out the morals of the camp. The Soapy Smith business was especially uninviting as something to mix into. It isn't my funeral, they would say, and I don't want it to be. Jefferson B. Smith hailed from the city of St. Louis in the U.S.A. He got the nickname he bore because at the beginning of his career of crookedness he used to sell soap to some of the citizens of Denver, Colorado. There is
  • 57. nothing remarkable about selling soap unless you do it Smith's way. In the evenings he and a confederate would set up their stand on a suitable downtown street. All he needed was a high box for a pulpit and a smaller box behind it to stand on. This with a flaring torch giving an uneven light, some cakes of cheap soap, a couple of five-dollar bills and some change, completed the outfit. A little clever spieling, kept up more or less all evening, and the usual crowd would gather out of curiosity. He would show them an unwrapped piece of soap all the while extolling its great merits as a cleanser. To show how disinterested he was in introducing this superior article that only needed to be known to become popular, he would say he was going to wrap a five-dollar-bill in with some of these cakes of soap. He would sell the soap at fifty cents each piece, and everyone that bought stood to get the soap and make four dollars and fifty cents in cash out of the deal. Further if they watched him carefully they would see him actually put the five-dollar bill in when he wrapped up the soap, although he wouldn't guarantee that it would always be found there when the purchaser unwrapped his package. Of course he deceived them simply by clever sleight-of-hand. Rarely would any money be found, but people liked to be fooled if it is done the right way. To get them biting he might let one of the bills go to a confederate who was seemingly just one of the crowd. It was a money-making business as a rule for there were ordinarily quite a number of easy-marks around. They got the soap anyway. So came the name Soapy. Well, it was the same old clever, crooked game in other bigger and bolder forms that he now worked in Skagway, with the gun-play in addition. When the steamboat City of Seattle came into port there on January 17th, 1898, Soapy and his merrie-men were among the passengers. 
He was a slight built man, only five feet seven inches tall, very dark complexioned with a full beard and moustache. He wore a round Stetson hat with a hard brim. He soon established headquarters in the Kentucky saloon and Jeff Smith's Parlors. These were liquor saloons, not providing board or lodging, and running crooked gambling games in their rear, a fruitful source of revenue to Smith's card-sharpers. Then he and his confederates got busy on all sorts of other schemes to steal people's money. He had at least thirty followers, and there wasn't a dishonest trick known to the underworld of those days that some of them couldn't work.
  • 58. They wore Masonic, Oddfellow, Elk and other fraternity emblems that might help in working confidence-games. They opened up Information Bureaus where newcomers could be conveniently sized-up and robbed then or later on. One member who was very successful in luring victims was Old Man Tripp. He had grey hair, a long white beard and a benevolent countenance. It seemed impossible to suspect him of criminal intent. Smith had most of the gambling-joints paying him a big percentage. He even had men clever at the old, old shell-game running it in the fine weather at relay points on the trail. One of his favorite stunts for a while at first was to recruit for the Spanish-American war which was just then stirring the fighting blood of Americans. While the would-be soldier was stripped, having a fake medical examination, his clothing was looted of whatever money or valuables it might contain. A rather amusing incident occurred during Smith's regime in connection with the efforts of a Sky Pilot to raise some money at Skagway to build a church in a little place along the coast called Dyea. The parson came to Skagway in a rowboat one morning and started out with his subscription list. One of the first he tackled by chance and unknown to himself was the notorious bandit. Smith heartily endorsed the proposition and headed the list with one hundred dollars which he paid over in cash to the clergyman. Then he took the latter gentleman along to the principal merchants, hotel- men and gamblers and saw to it that they all gave handsome donations. At the close of the day the visitor decided to make for home. He was happy in the possession of over $2,000 in cash for his new church, thinking too what a splendid fellow this Mr. Smith was. On the way to the beach he was held up by one of Mr. Smith's lieutenants and relieved of all the money he had collected. He could get no redress. 
Other occurrences, such as the smothering of the negro-wench in order to steal the few hundred dollars she had earned by washing, were despicable and worthy only of the meanest type of criminal. Naturally there were many shooting scrapes in connection with the operations of the gang, and some killings, but nothing was done to end it. Not only was no move made to interfere with Soapy, but almost everyone
  • 59. refrained from speaking against him openly for reasons easy to understand. Of course there were men in Skagway who hotly resented the hold this outlaw had on the town, and were doing what they could to bring public sentiment to efficient action against him. One of these, a Canadian, was the editor of a local news sheet. In later years he became governor of Alaska. His name was Strong and it suited him, for he wasn't lacking in strength of character. One day, after his paper had appeared with an editorial making a scarcely-veiled attack on Soapy and his gang, he was met and stopped on the street by Smith accompanied by a tough named Mike Daley. They were loud and boisterous in accusing Strong of having offered personal insult to them in his newspaper. They demanded a retraction and apology and evidently meant to force a street-fight. Strong refused to withdraw his statement and declared that he intended to stand by his editorial. The loud quarrelling tones of the two desperadoes attracted the attention of two friends of Strong's, named D. C. Stephens and Allen, who happened to be walking down the same street. They hurried to the aid of their friend who at the risk of his life still refused to back down. The sight of reinforcements spoiled Smith's game and he and Daley went on without accomplishing their sinister purpose. There was another man who did not hesitate to say anywhere, and in most forcible terms what he thought of these criminals. This man was Frank Reid, a land-surveyor. He was fearless, and too quick with a gun for these crooks to attempt to silence. But he got very little open support and could do nothing single-handed. Of course things couldn't go on like this. In the Spring matters reached a climax. Word had at last got into the Klondike that it wasn't safe to come out by way of Skagway with your gold, that you were likely to be relieved of your poke by desperadoes. 
This news commenced to turn out-going gold-laden traffic down the Yukon and out by way of St. Michaels. The Skagway merchants saw the goose that laid the golden eggs flying away, and it put them at last into a ferment of anger at the cause of it. This led to the formation of a Vigilance Committee of which Reid was the moving spirit.
Finally a Nanaimo man named Stewart, waiting for the steamboat on his way home from the Klondike, had $3,000.00 in nuggets stolen from him by one of Soapy's confidence men who had offered to turn it into currency. It was all he had and he made such a fuss that the whole town knew about his loss. He reported it to the U.S. Deputy-Marshal, a man named Taylor who was in Smith's pay. He got no satisfaction. The Vigilance Committee then took it up, and made it a casus belli against Soapy. They attempted to hold a secret meeting in a private hall but Smith and his confederates managed to break in on them. They then adjourned to Sylvester's wharf. At the land-end of the pier Frank Reid and a man named Murphy were posted to stop anyone approaching who was not a member of the Committee.

Smith heard of this move and set off on the war-path down the main street towards the water-front. He carried a loaded .30-30 Winchester rifle and as he went down the road he called on everyone to put up their hands. There were hundreds of men there but Soapy got a completely unanimous vote as he passed along, until he reached Reid and in him he met a man who called his bluff. Reid ordered him to stop and fired at him, but his revolver, a .45 Colt, failed to go off. He then grabbed the muzzle of Smith's gun and shoved it up in the air before Smith could shoot. Smith in the struggle backed away hanging on to his rifle, and while the gun was thus lowered and pointed momentarily at Reid's groin he fired. Reid fell to the ground but instantly fired at Smith again. This time the revolver responded and Smith dropped, shot through the heart. He bled to death in a few minutes where he lay.

This was the evening of July 8th, three days after the celebration already mentioned in which the gunman had taken the leading part. So the wharf was stained, and so ended the life of a man with a career of which the last six months were unique in the history of the wild west.
Their leader gone, the break-up of his followers was quick and easy. After caring for Reid the Committee split up into armed groups of five or six men each. Some guarded the exits from the town, others closed the dance-halls, saloons, and gambling places. Every cabin was searched. Smith was killed on Friday and by Sunday the lot were rounded up and jailed. The captures included the five most dangerous members of the gang, Old Man Tripp, Slim Jim, Bowers, Mike Daly, and Scar-faced Charlie. It was indeed hard for any of them to escape. In front was the sea and behind the mountains with only one passable trail through them over into the Yukon
Territory. They were all deported on out-going steamers. Most of them got long terms in penitentiary. Before the shooting a few of them who saw danger ahead straggled over into Canada by way of the White Pass, but they changed into model citizens when they came under the surveillance of the Mounted Police.

Smith was buried with scant ceremony and no mourners. Frank Reid lingered for two weeks before he also died. The whole place turned out at his funeral to do honor to his bravery in ridding the town of the pestilential group of criminals who had been in control so long.

Warwick Bros. Rutter, Limited, Printers and Bookbinders, Toronto, Canada
*** END OF THE PROJECT GUTENBERG EBOOK TILLICUMS OF THE TRAIL ***

Updated editions will replace the previous one—the old editions will be renamed.

Creating the works from print editions not protected by U.S. copyright law means that no one owns a United States copyright in these works, so the Foundation (and you!) can copy and distribute it in the United States without permission and without paying copyright royalties. Special rules, set forth in the General Terms of Use part of this license, apply to copying and distributing Project Gutenberg™ electronic works to protect the PROJECT GUTENBERG™ concept and trademark. Project Gutenberg is a registered trademark, and may not be used if you charge for an eBook, except by following the terms of the trademark license, including paying royalties for use of the Project Gutenberg trademark. If you do not charge anything for copies of this eBook, complying with the trademark license is very easy. You may use this eBook for nearly any purpose such as creation of derivative works, reports, performances and research. Project Gutenberg eBooks may be modified and printed and given away—you may do practically ANYTHING in the United States with eBooks not protected by U.S. copyright law. Redistribution is subject to the trademark license, especially commercial redistribution.

START: FULL LICENSE
THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK

To protect the Project Gutenberg™ mission of promoting the free distribution of electronic works, by using or distributing this work (or any other work associated in any way with the phrase “Project Gutenberg”), you agree to comply with all the terms of the Full Project Gutenberg™ License available with this file or online at www.gutenberg.org/license.

Section 1. General Terms of Use and Redistributing Project Gutenberg™ electronic works

1.A. By reading or using any part of this Project Gutenberg™ electronic work, you indicate that you have read, understand, agree to and accept all the terms of this license and intellectual property (trademark/copyright) agreement. If you do not agree to abide by all the terms of this agreement, you must cease using and return or destroy all copies of Project Gutenberg™ electronic works in your possession. If you paid a fee for obtaining a copy of or access to a Project Gutenberg™ electronic work and you do not agree to be bound by the terms of this agreement, you may obtain a refund from the person or entity to whom you paid the fee as set forth in paragraph 1.E.8.

1.B. “Project Gutenberg” is a registered trademark. It may only be used on or associated in any way with an electronic work by people who agree to be bound by the terms of this agreement. There are a few things that you can do with most Project Gutenberg™ electronic works even without complying with the full terms of this agreement. See paragraph 1.C below. There are a lot of things you can do with Project Gutenberg™ electronic works if you follow the terms of this agreement and help preserve free future access to Project Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the Foundation” or PGLAF), owns a compilation copyright in the collection of Project Gutenberg™ electronic works. Nearly all the individual works in the collection are in the public domain in the United States. If an individual work is unprotected by copyright law in the United States and you are located in the United States, we do not claim a right to prevent you from copying, distributing, performing, displaying or creating derivative works based on the work as long as all references to Project Gutenberg are removed. Of course, we hope that you will support the Project Gutenberg™ mission of promoting free access to electronic works by freely sharing Project Gutenberg™ works in compliance with the terms of this agreement for keeping the Project Gutenberg™ name associated with the work. You can easily comply with the terms of this agreement by keeping this work in the same format with its attached full Project Gutenberg™ License when you share it without charge with others.

1.D. The copyright laws of the place where you are located also govern what you can do with this work. Copyright laws in most countries are in a constant state of change. If you are outside the United States, check the laws of your country in addition to the terms of this agreement before downloading, copying, displaying, performing, distributing or creating derivative works based on this work or any other Project Gutenberg™ work. The Foundation makes no representations concerning the copyright status of any work in any country other than the United States.

1.E. Unless you have removed all references to Project Gutenberg:

1.E.1.
The following sentence, with active links to, or other immediate access to, the full Project Gutenberg™ License must appear prominently whenever any copy of a Project Gutenberg™ work (any work on which the phrase “Project Gutenberg” appears, or with which the phrase “Project Gutenberg” is associated) is accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is derived from texts not protected by U.S. copyright law (does not contain a notice indicating that it is posted with permission of the copyright holder), the work can be copied and distributed to anyone in the United States without paying any fees or charges. If you are redistributing or providing access to a work with the phrase “Project Gutenberg” associated with or appearing on the work, you must comply either with the requirements of paragraphs 1.E.1 through 1.E.7 or obtain permission for the use of the work and the Project Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is posted with the permission of the copyright holder, your use and distribution must comply with both paragraphs 1.E.1 through 1.E.7 and any additional terms imposed by the copyright holder. Additional terms will be linked to the Project Gutenberg™ License for all works posted with the permission of the copyright holder found at the beginning of this work.

1.E.4. Do not unlink or detach or remove the full Project Gutenberg™ License terms from this work, or any files containing a part of this work or any other work associated with Project Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute this electronic work, or any part of this electronic work, without prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the Project Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary, compressed, marked up, nonproprietary or proprietary form, including any word processing or hypertext form. However, if you provide access to or distribute copies of a Project Gutenberg™ work in a format other than “Plain Vanilla ASCII” or other format used in the official version posted on the official Project Gutenberg™ website (www.gutenberg.org), you must, at no additional cost, fee or expense to the user, provide a copy, a means of exporting a copy, or a means of obtaining a copy upon request, of the work in its original “Plain Vanilla ASCII” or other form. Any alternate format must include the full Project Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying, performing, copying or distributing any Project Gutenberg™ works unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing access to or distributing Project Gutenberg™ electronic works provided that:

• You pay a royalty fee of 20% of the gross profits you derive from the use of Project Gutenberg™ works calculated using the method you already use to calculate your applicable taxes. The fee is owed to the owner of the Project Gutenberg™ trademark, but he has agreed to donate royalties under this paragraph to the Project Gutenberg Literary Archive Foundation. Royalty payments must be paid within 60 days following each date on which you prepare (or are legally required to prepare) your periodic tax returns. Royalty payments should be clearly marked as such and sent to the Project Gutenberg Literary Archive Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive Foundation.”

• You provide a full refund of any money paid by a user who notifies you in writing (or by e-mail) within 30 days of receipt that s/he does not agree to the terms of the full Project Gutenberg™ License. You must require such a user to return or destroy all copies of the works possessed in a physical medium and discontinue all use of and all access to other copies of Project Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of any money paid for a work or a replacement copy, if a defect in the electronic work is discovered and reported to you within 90 days of receipt of the work.

• You comply with all other terms of this agreement for free distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™ electronic work or group of works on different terms than are set forth in this agreement, you must obtain permission in writing from the Project Gutenberg Literary Archive Foundation, the manager of the Project Gutenberg™ trademark. Contact the Foundation as set forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend considerable effort to identify, do copyright research on, transcribe and proofread works not protected by U.S. copyright law in creating the Project Gutenberg™ collection. Despite these efforts, Project Gutenberg™ electronic works, and the medium on which they may be stored, may contain “Defects,” such as, but not limited to, incomplete, inaccurate or corrupt data, transcription errors, a copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the “Right of Replacement or Refund” described in paragraph 1.F.3, the Project Gutenberg Literary Archive Foundation, the owner of the Project Gutenberg™ trademark, and any other party distributing a Project Gutenberg™ electronic work under this agreement, disclaim all liability to you for damages, costs and expenses, including legal fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a defect in this electronic work within 90 days of receiving it, you can receive a refund of the money (if any) you paid for it by sending a written explanation to the person you received the work from. If you received the work on a physical medium, you must return the medium with your written explanation. The person or entity that provided you with the defective work may elect to provide a replacement copy in lieu of a refund. If you received the work electronically, the person or entity providing it to you may choose to give you a second opportunity to receive the work electronically in lieu of a refund. If the second copy is also defective, you may demand a refund in writing without further opportunities to fix the problem.

1.F.4. Except for the limited right of replacement or refund set forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied warranties or the exclusion or limitation of certain types of damages. If any disclaimer or limitation set forth in this agreement violates the law of the state applicable to this agreement, the agreement shall be interpreted to make the maximum disclaimer or limitation permitted by the applicable state law. The invalidity or unenforceability of any provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the trademark owner, any agent or employee of the Foundation, anyone providing copies of Project Gutenberg™ electronic works in accordance with this agreement, and any volunteers associated with the production, promotion and distribution of Project Gutenberg™ electronic works, harmless from all liability, costs and expenses, including legal fees, that arise directly or indirectly from any of the following which you do or cause to occur: (a) distribution of this or any Project Gutenberg™ work, (b) alteration, modification, or additions or deletions to any Project Gutenberg™ work, and (c) any Defect you cause.

Section 2. Information about the Mission of Project Gutenberg™

Project Gutenberg™ is synonymous with the free distribution of electronic works in formats readable by the widest variety of computers including obsolete, old, middle-aged and new computers. It exists because of the efforts of hundreds of volunteers and donations from people in all walks of life. Volunteers and financial support to provide volunteers with the assistance they need are critical to reaching Project Gutenberg™’s goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project Gutenberg Literary Archive Foundation was created to provide a secure and permanent future for Project Gutenberg™ and future generations. To learn more about the Project Gutenberg Literary Archive Foundation and how your efforts and donations can help, see Sections 3 and 4 and the Foundation information page at www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-profit 501(c)(3) educational corporation organized under the laws of the state of Mississippi and granted tax exempt status by the Internal Revenue Service. The Foundation’s EIN or federal tax identification number is 64-6221541. Contributions to the Project Gutenberg Literary Archive Foundation are tax deductible to the full extent permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West, Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up to date contact information can be found at the Foundation’s website and official page at www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without widespread public support and donations to carry out its mission of increasing the number of public domain and licensed works that can be freely distributed in machine-readable form accessible by the widest array of equipment including outdated equipment. Many
small donations ($1 to $5,000) are particularly important to maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws regulating charities and charitable donations in all 50 states of the United States. Compliance requirements are not uniform and it takes a considerable effort, much paperwork and many fees to meet and keep up with these requirements. We do not solicit donations in locations where we have not received written confirmation of compliance. To SEND DONATIONS or determine the status of compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where we have not met the solicitation requirements, we know of no prohibition against accepting unsolicited donations from donors in such states who approach us with offers to donate. International donations are gratefully accepted, but we cannot make any statements concerning tax treatment of donations received from outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation methods and addresses. Donations are accepted in a number of other ways including checks, online payments and credit card donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project Gutenberg™ concept of a library of electronic works that could be freely shared with anyone. For forty years, he produced and distributed Project Gutenberg™ eBooks with only a loose network of volunteer support.
Project Gutenberg™ eBooks are often created from several printed editions, all of which are confirmed as not protected by copyright in the U.S. unless a copyright notice is included. Thus, we do not necessarily keep eBooks in compliance with any particular paper edition.

Most people start at our website which has the main PG search facility: www.gutenberg.org. This website includes information about Project Gutenberg™, including how to make donations to the Project Gutenberg Literary Archive Foundation, how to help produce our new eBooks, and how to subscribe to our email newsletter to hear about new eBooks.