Advances in Intelligent Systems and Computing 1230
Kohei Arai
Supriya Kapoor
Rahul Bhatia Editors
Intelligent
Computing
Proceedings of the 2020 Computing
Conference, Volume 3
Advances in Intelligent Systems and Computing
Volume 1230
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering,
University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro,
Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications
on theory, applications, and design methods of Intelligent Systems and Intelligent
Computing. Virtually all disciplines such as engineering, natural sciences, computer
and information science, ICT, economics, business, e-commerce, environment,
healthcare, life science are covered. The list of topics spans all the areas of modern
intelligent systems and computing such as: computational intelligence, soft computing
including neural networks, fuzzy systems, evolutionary computing and the fusion
of these paradigms, social intelligence, ambient intelligence, computational
neuroscience, artificial life, virtual worlds and society, cognitive science and systems,
Perception and Vision, DNA and immune based systems, self-organizing and
adaptive systems, e-Learning and teaching, human-centered and human-centric
computing, recommender systems, intelligent control, robotics and mechatronics
including human-machine teaming, knowledge-based paradigms, learning paradigms,
machine ethics, intelligent data analysis, knowledge management, intelligent
agents, intelligent decision making and support, intelligent network security, trust
management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.
** Indexing: The books of this series are submitted to ISI Proceedings,
EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink **
More information about this series at http://www.springer.com/series/11156
Kohei Arai • Supriya Kapoor •
Rahul Bhatia
Editors
Intelligent Computing
Proceedings of the 2020 Computing
Conference, Volume 3
Editors
Kohei Arai
Faculty of Science and Engineering
Saga University
Saga, Japan
Rahul Bhatia
The Science and Information
(SAI) Organization
Bradford, West Yorkshire, UK
Supriya Kapoor
The Science and Information
(SAI) Organization
Bradford, West Yorkshire, UK
ISSN 2194-5357 ISSN 2194-5365 (electronic)
Advances in Intelligent Systems and Computing
ISBN 978-3-030-52242-1 ISBN 978-3-030-52243-8 (eBook)
https://doi.org/10.1007/978-3-030-52243-8
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Editor’s Preface
On behalf of the Committee, we welcome you to the Computing Conference 2020.
The aim of this conference is to give a platform to researchers with fundamental
contributions and to be a premier venue for industry practitioners to share and
report on up-to-the-minute innovations and developments, to summarize the state
of the art and to exchange ideas and advances in all aspects of computer sciences
and its applications.
For this edition of the conference, we received 514 submissions from more than
50 countries around the world. These submissions underwent a double-blind peer
review process. Of those 514 submissions, 160 (including 15 posters) were selected
for inclusion in these proceedings. The published proceedings are divided into
three volumes covering a wide range of conference tracks, such as technology
trends, computing, intelligent systems, machine vision, security, communication,
electronics and e-learning, to name a few. In addition to the contributed papers,
the conference program included inspiring keynote talks, streamed live during the
conference, whose thought-provoking claims were anticipated to pique the interest
of the entire computing audience. The authors also presented their research papers
very professionally to a large international online audience. All this digital content
prompted significant contemplation and discussion amongst the participants.
Deep appreciation goes to the keynote speakers for sharing their knowledge and
expertise with us and to all the authors who have spent the time and effort to
contribute significantly to this conference. We are also indebted to the Organizing
Committee for their great efforts in ensuring the successful implementation of the
conference. In particular, we would like to thank the Technical Committee for their
constructive and enlightening reviews on the manuscripts in the limited timescale.
We hope that all the participants and the interested readers benefit scientifically
from this book and find it stimulating in the process. We are pleased to present the
proceedings of this conference as its published record.
Hope to see you in 2021, in our next Computing Conference, with the same
amplitude, focus and determination.
Kohei Arai
Contents
Preventing Neural Network Weight Stealing
via Network Obfuscation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Kálmán Szentannai, Jalal Al-Afandi, and András Horváth
Applications of Z-Numbers and Neural Networks in Engineering . . . . . 12
Raheleh Jafari, Sina Razvarz, and Alexander Gegov
5G-FOG: Freezing of Gait Identification in Multi-class Softmax
Neural Network Exploiting 5G Spectrum . . . . . . . . . . . . . . . . . . . . . . . . 26
Jan Sher Khan, Ahsen Tahir, Jawad Ahmad, Syed Aziz Shah,
Qammer H. Abbasi, Gordon Russell, and William Buchanan
Adaptive Blending Units: Trainable Activation Functions for Deep
Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Leon René Sütfeld, Flemming Brieger, Holger Finger, Sonja Füllhase,
and Gordon Pipa
Application of Neural Networks to Characterization
of Chemical Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Mahmoud Zaki Iskandarani
Application of Machine Learning in Deception Detection. . . . . . . . . . . . 61
Owolafe Otasowie
A New Approach to Estimate the Discharge Coefficient
in Sharp-Crested Rectangular Side Orifices Using Gene
Expression Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Hossein Bonakdari, Bahram Gharabaghi, Isa Ebtehaj, and Ali Sharifi
DiaTTroD: A Logical Agent Diagnostic Test for Tropical Diseases . . . . 97
Sandra Mae W. Famador and Tardi Tjahjadi
A Weighted Combination Method of Multiple K-Nearest Neighbor
Classifiers for EEG-Based Cognitive Task Classification . . . . . . . . . . . . 116
Abduljalil Mohamed, Amer Mohamed, and Yasir Mustafa
Detection and Localization of Breast Tumor in 2D Using
Microwave Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
Abdelfettah Miraoui, Lotfi Merad Sidi, and Mohamed Meriah
Regression Analysis of Brain Biomechanics Under
Uniaxial Deformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
O. Abuomar, F. Patterson, and R. K. Prabhu
Exudate-Based Classification for Detection of Severity of Diabetic
Macula Edema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Nandana Prabhu, Deepak Bhoir, Nita Shanbhag, and Uma Rao
Analysis and Detection of Brain Tumor Using U-Net-Based
Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Vibhu Garg, Madhur Bansal, A. Sanjana, and Mayank Dave
Implementation of Deep Neural Networks in Facial Emotion
Perception in Patients Suffering from Depressive Disorder: Promising
Tool in the Diagnostic Process and Treatment Evaluation . . . . . . . . . . . 174
Krzysztof Michalik and Katarzyna Kucharska
Invisibility and Fidelity Vector Map Watermarking Based
on Linear Cellular Automata Transform . . . . . . . . . . . . . . . . . . . . . . . . 185
Saleh Al-Ardhi, Vijey Thayananthan, and Abdullah Basuhail
Implementing Variable Power Transmission Patterns
for Authentication Purposes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Hosam Alamleh, Ali Abdullah S. Alqahtani, and Dalia Alamleh
SADDLE: Secure Aerial Data Delivery with Lightweight Encryption . . . 204
Anthony Demeri, William Diehl, and Ahmad Salman
Malware Analysis with Machine Learning for Evaluating the Integrity
of Mission Critical Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
Robert Heras and Alexander Perez-Pons
Enhanced Security Using Elasticsearch and Machine Learning . . . . . . . 244
Ovidiu Negoita and Mihai Carabas
Memory Incentive Provenance (MIP) to Secure the Wireless Sensor
Data Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Mohammad Amanul Islam
Tightly Close It, Robustly Secure It: Key-Based Lightweight Process
for Propping up Encryption Techniques . . . . . . . . . . . . . . . . . . . . . . . . 278
Muhammed Jassem Al-Muhammed, Ahmad Al-Daraiseh,
and Raed Abuzitar
Statistical Analysis to Optimize the Generation of Cryptographic Keys
from Physical Unclonable Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
Bertrand Cambou, Mohammad Mohammadi, Christopher Philabaum,
and Duane Booher
Towards an Intelligent Intrusion Detection System:
A Proposed Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Raghda Fawzey Hriez, Ali Hadi, and Jalal Omer Atoum
LockChain Technology as One Source of Truth for Cyber,
Information Security and Privacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Yuri Bobbert and Nese Ozkanli
Introduction of a Hybrid Monitor for Cyber-Physical Systems . . . . . . . 348
J. Ceasar Aguma, Bruce McMillin, and Amelia Regan
Software Implementation of a SRAM PUF-Based Password Manager . . . 361
Sareh Assiri, Bertrand Cambou, D. Duane Booher,
and Mohammad Mohammadinodoushan
Contactless Palm Vein Authentication Security Technique for Better
Adoption of e-Commerce in Developing Countries . . . . . . . . . . . . . . . . . 380
Sunday Alabi, Martin White, and Natalia Beloff
LightGBM Algorithm for Malware Detection . . . . . . . . . . . . . . . . . . . . 391
Mouhammd Al-kasassbeh, Mohammad A. Abbadi,
and Ahmed M. Al-Bustanji
Exploiting Linearity in White-Box AES with Differential
Computation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
Jakub Klemsa and Martin Novotný
Immune-Based Network Dynamic Risk Control Strategy Knowledge
Ontology Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
Meng Huang, Tao Li, Hui Zhao, Xiaojie Liu, and Zhan Gao
Windows 10 Hibernation File Forensics . . . . . . . . . . . . . . . . . . . . . . . . . 431
Ahmad Ghafarian and Deniz Keskin
Behavior and Biometrics Based Masquerade Detection
Mobile Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
Pranieth Chandrasekara, Hasini Abeywardana, Sammani Rajapaksha,
Sanjeevan Parameshwaran, and Kavinga Yapa Abeywardana
Spoofed/Unintentional Fingerprint Detection Using Behavioral
Biometric Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
Ammar S. Salman and Odai S. Salman
Enabling Paratransit and TNC Services with Blockchain Based
Smart Contracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471
Amari N. Lewis and Amelia C. Regan
A Review of Cyber Security Issues in Hospitality Industry . . . . . . . . . . 482
Neda Shabani and Arslan Munir
Extended Protocol Using Keyless Encryption Based on Memristors. . . . 494
Yuxuan Zhu, Bertrand Cambou, David Hely, and Sareh Assiri
Recommendations for Effective Security Assurance
of Software-Dependent Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511
Jason Jaskolka
On Generating Cancelable Biometric Templates Using Visual
Secret Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532
Manisha and Nitin Kumar
An Integrated Safe and Secure Approach for Authentication and
Secret Key Establishment in Automotive Cyber-Physical Systems . . . . . 545
Naresh Kumar Giri, Arslan Munir, and Joonho Kong
How Many Clusters? An Entropic Approach to Hierarchical
Cluster Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
Sergei Koltcov, Vera Ignatenko, and Sergei Pashakhin
Analysis of Structural Liveness and Boundedness in Weighted
Free-Choice Net Based on Circuit Flow Values . . . . . . . . . . . . . . . . . . . 570
Yojiro Harie and Katsumi Wasaki
Classification of a Pedestrian’s Behaviour Using Dual Deep
Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581
James Spooner, Madeline Cheah, Vasile Palade, Stratis Kanarachos,
and Alireza Daneshkhah
Towards Porting Astrophysics Visual Analytics Services
in the European Open Science Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
Eva Sciacca, Fabio Vitello, Ugo Becciani, Cristobal Bordiu,
Filomena Bufano, Antonio Calanducci, Alessandro Costa, Mario Raciti,
and Simone Riggi
Computer Graphics-Based Analysis of Anterior Cruciate
Ligament in a Partially Replaced Knee . . . . . . . . . . . . . . . . . . . . . . . . . 607
Ahmed Imran
An Assessment Algorithm for Evaluating Students Satisfaction
in e-Learning Environments: A Case Study . . . . . . . . . . . . . . . . . . . . . . 613
M. Caramihai, Irina Severin, and Ana Maria Bogatu
The Use of New Technologies in the Organization
of the Educational Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
Y. A. Daineko, N. T. Duzbayev, K. B. Kozhaly, M. T. Ipalakova,
Zh. M. Bekaulova, N. Zh. Nalgozhina, and R. N. Sharshova
Design and Implementation of Cryptocurrency Price
Prediction System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
Milena Karova, Ivaylo Penev, and Daniel Marinov
Strategic Behavior Discovery of Multi-agent Systems Based
on Deep Learning Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644
Boris Morose, Sabina Aledort, and Gal Zaidman
Development of Prediction Methods for Taxi Order Service
on the Basis of Intellectual Data Analysis. . . . . . . . . . . . . . . . . . . . . . . . 652
N. A. Andriyanov
Discourse Analysis on Learning Theories and AI. . . . . . . . . . . . . . . . . . 665
Rosemary Papa, Karen Moran Jackson, Ric Brown, and David Jackson
False Asymptotic Instability Behavior at Iterated Functions
with Lyapunov Stability in Nonlinear Time Series . . . . . . . . . . . . . . . . . 673
Charles Roberto Telles
The Influence of Methodological Tools on the Diagnosed Level
of Intellectual Competence in Older Adolescents . . . . . . . . . . . . . . . . . . 694
Sipovskaya Yana Ivanovna
The Automated Solar Activity Prediction System (ASAP) Update
Based on Optimization of a Machine Learning Approach . . . . . . . . . . . 702
Ali K. Abed and Rami Qahwaji
Author Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719
Preventing Neural Network Weight
Stealing via Network Obfuscation
Kálmán Szentannai, Jalal Al-Afandi, and András Horváth
Faculty of Information Technology and Bionics, Peter Pazmany Catholic University,
Práter u. 50/A, Budapest 1083, Hungary
horvath.andras@itk.ppke.hu
Abstract. Deep Neural Networks are robust to minor perturbations of
the learned network parameters and their minor modifications do not
change the overall network response significantly. This allows space for
model stealing, where a malevolent attacker can steal an already trained
network, modify the weights and claim the new network as his own intellectual
property. In certain cases this can prevent the free distribution
and application of networks in the embedded domain. In this paper, we
propose a method for creating an equivalent version of an already trained
fully connected deep neural network that can prevent network stealing,
namely, it produces the same responses and classification accuracy, but
it is extremely sensitive to weight changes.
Keywords: Neural networks · Networks stealing · Weight stealing ·
Obfuscation
1 Introduction
Deep neural networks are employed in a growing number of tasks, many of
which were not solvable before with traditional machine learning approaches. In
these structures, expert knowledge which is represented in annotated datasets is
transformed into learned network parameters known as network weights during
training.
Methods, approaches and network architectures are distributed openly in
this community, but most companies protect their data and trained networks
obtained from tremendous amount of working hours annotating datasets and
fine-tuning training parameters.
Model stealing and the detection of unauthorized use via stolen weights is a key
challenge in the field, as there are techniques (scaling, noising, fine-tuning,
distillation) to modify the weights to hide the abuse while preserving the
functionality and accuracy of the original network. Since networks are trained by
stochastic optimization methods and are initialized with random weights, training
on a dataset may result in a variety of different networks with similar accuracy.
There are several existing methods to measure distances between network
weights after such modifications and independent trainings [1–3].
© Springer Nature Switzerland AG 2020
K. Arai et al. (Eds.): SAI 2020, AISC 1230, pp. 1–11, 2020.
https://doi.org/10.1007/978-3-030-52243-8_1
Obfuscation of neural networks was introduced in [4], which showed the viability
and importance
of these approaches. In this paper the authors present a method to obfuscate
the architecture, but not the learned network functionality. We would argue that
most ownership concerns are not raised because of network architectures, since
most industrial applications use previously published structures, but because of
network functionality and the learned weights of the network.
Other approaches try to embed additional, hidden information in the network
such as hidden functionalities or non-plausible, predefined answers for previously
selected images (usually referred to as watermarks) [5,6]. In the case of a stolen
network, one can claim ownership by unraveling the hidden functionality, which
could not have formed randomly in the structure. A good summary comparing
different watermarking methods and their possible evasions can be found
in [7].
Instead of creating evidence from which a relation between the original and a
stolen, modified model could be proven, we have developed a method that
generates a completely sensitive, fragile network which can be freely shared,
since even a minor modification of the network weights would drastically
alter the network's response.
In this paper, we present a method which can transform a previously trained
network into a fragile one, by extending the number of neurons in the selected
layers, without changing the response of the network. These transformations can
be applied in an iterative manner on any layer of the network, except the first and
the last layers (since their size is determined by the problem representation). In
Sect. 2 we will first introduce our method and the possible modifications on stolen
networks and in Sect. 3 we will describe our simulations and results. Finally in
Sect. 4 we will conclude our results and describe our planned future work.
2 Mathematical Model of Unrobust Networks
2.1 Fully Connected Layers
In this section we present our method for transforming a robust network into a
non-robust one. We have chosen fully connected networks because of their
generality and compact mathematical representation. Fully connected networks
are applied across the whole spectrum of machine learning problems, from
regression through data generation to classification. We cannot deny that in
most practical problems convolutional networks are used, but we would like to
emphasize the following properties of fully connected networks: (1) when there
is no topographic correlation in the data, fully connected networks are applied;
(2) most architectures also contain additional fully connected layers after the
feature extraction of the convolutional or residual layers; (3) every convolutional
network can be considered a special case of a fully connected one, in which all
weights outside the convolutional kernels are set to zero.
A fully connected deep neural network may consist of several hidden layers, each
containing a certain number of neurons. Since all layers have the same
architecture, without loss of generality we will focus here on three consecutive
layers in the network (i − 1, i and i + 1). We will show how the neurons
in layer i can be changed, increasing the number of neurons in this layer and
making the network fragile, while keeping the functionality of the three
layers intact. We emphasize that this method can be applied to any
three layers, including the first and last three layers of the network, and also that
it can be applied repeatedly on each layer, still without changing the overall
functionality of the network.
The input of layer i, i.e. the activations of the previous layer (i − 1), can be
denoted by the vector x_{i−1} containing N elements. The weights of the network
are denoted by the weight matrix W_i and the bias b_i, where W_i is a matrix of
N × K elements, creating a mapping R^N → R^K, and b_i is a vector containing K
elements. The output of layer i, which is also the input of layer i + 1, can be
written as:

x_i = φ(W_i^{N×K} x_{i−1} + b_i)    (1)

where φ is the activation function of the neurons.
The activations of layer i + 1 can be expanded using Eq. 1:

x_{i+1} = φ(φ(x W_{i−1}^{N×K} + b_{i−1}) W_i^{K×L} + b_i)    (2)

creating a mapping R^N → R^L.
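As a concrete sketch of Eqs. 1–2, the following snippet composes two fully connected ReLU layers into a single R^N → R^L mapping. The layer sizes and random weights are illustrative choices of our own, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N, K, L = 5, 3, 2                       # widths of layers i-1, i, i+1

x = rng.normal(size=N)                  # activations of layer i-1
Wi_1, bi_1 = rng.normal(size=(N, K)), rng.normal(size=K)
Wi, bi = rng.normal(size=(K, L)), rng.normal(size=L)

phi = lambda z: np.maximum(z, 0.0)      # ReLU, as used later in the paper

xi = phi(x @ Wi_1 + bi_1)               # Eq. (1): mapping R^N -> R^K
xi1 = phi(xi @ Wi + bi)                 # Eq. (2): overall mapping R^N -> R^L
assert xi.shape == (K,) and xi1.shape == (L,)
```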
One way of identifying a fully connected neural network is to represent it as a
sequence of synaptic weights. Our assumption was that, in case of model stealing,
a certain application of additive noise to the weights would prevent others from
revealing the attacker and would conceal the thievery. Since fully connected
networks are known to be robust against such modifications, the attacker could
use the modified network with approximately the same classification accuracy.
Thus, our goal was to find
introduces a significant decrease in terms of the robustness against parameter
tuning. In case of a three-layered structure one has to preserve the mapping
between the first and third layers (Eq. 2) to keep the functionality of this three
consecutive layers, but the mapping in Eq. 1 (the mapping between the first and
second, or second and third layers), can be changed freely.
Also, our model must rely on an identification mechanism based on a repre-
sentation of the synaptic weights. Therefore, the owner of a network should be
able to verify the ownership based on the representation of the neural network,
examining the average distance between the weights [7].
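As an illustration of such a distance-based check (our own sketch; [7] surveys several concrete schemes), one can compare the mean absolute difference between weight sets of an original network, a noised copy, and an independently trained network. All sizes and noise levels below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(64, 32))            # hypothetical layer weights

def mean_weight_distance(Wa, Wb):
    """Average absolute elementwise distance between two weight matrices."""
    return float(np.mean(np.abs(Wa - Wb)))

stolen = W + 0.01 * rng.normal(size=W.shape)    # attacker's additive noise
independent = rng.normal(size=W.shape)          # unrelated training run

# A noised copy stays much closer to the original than an independent network.
assert mean_weight_distance(W, stolen) < mean_weight_distance(W, independent)
```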
2.2 Decomposing Neurons
We would like to find matrices W′_{i−1}^{N×M} and W′_i^{M×L} (M ∈ ℕ, M > K)
for which:

φ(φ(x W_{i−1}^{N×K} + b_{i−1}) W_i^{K×L} + b_i)
= φ(φ(x W′_{i−1}^{N×M} + b′_{i−1}) W′_i^{M×L} + b_i)    (3)
Considering the linear case when φ(x) = x, we obtain the following form:

x W_{i−1}^{N×K} W_i^{K×L} + b_{i−1} W_i^{K×L} + b_i
= x W′_{i−1}^{N×M} W′_i^{M×L} + b′_{i−1} W′_i^{M×L} + b_i    (4)
The equation above holds only for the special case of φ(x) = x; in most cases,
however, nonlinear activation functions are used. We have selected the rectified
linear unit (ReLU) for our investigation (φ(x) = max(0, x)). This nonlinearity
consists of two linear parts, which means that a variable may lie in the linear
domain of Eq. 3, selecting the corresponding lines of Eq. 4 (if x ≥ 0), or the
equation system is independent of the variable if the activation function yields a
constant zero (if x ≤ 0). In this way, ReLU selects a subset of the variables
(lines) of Eq. 4. However, applying the ReLU activation function imposes
certain constraints.
Assume that a neuron with the ReLU activation function is to be replaced
by two other neurons. This can be achieved by using a multiplier α ∈ (0, 1):

φ(Σ_{i=1}^{n} W^l_{ji} x_i + b^l_j) = N^l_j    (5)

N^l_j = α N^l_j + (1 − α) N^l_j    (6)

where α N^l_j and (1 − α) N^l_j correspond to the activations of the two neurons.
For each of these, the activation is positive only if the original neuron
had a positive activation, and zero otherwise; this means that all the
decomposed neurons must have the same bias.
After decomposing a neuron, appropriate weights must be chosen on the
subsequent layer. A trivial solution is to keep the original synaptic weights,
represented by the column vector W^{l+1}_j. This leads to the same activation,
since

N^l_j W^{l+1}_j = α N^l_j W^{l+1}_j + (1 − α) N^l_j W^{l+1}_j    (7)
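One consistent reading of Eqs. 5–7 (our own sketch, not the authors' code) scales the incoming weights and bias of a hidden neuron by α and 1 − α. Since ReLU is positively homogeneous, the two copies then produce activations α N_j and (1 − α) N_j, and keeping the original outgoing weights on both, as in Eq. 7, leaves the network response unchanged. All sizes and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small network: 4 inputs -> 3 hidden (ReLU) -> 2 outputs.
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=3)
W2, b2 = rng.normal(size=(3, 2)), rng.normal(size=2)

relu = lambda z: np.maximum(z, 0.0)

def forward(x, W1, b1, W2, b2):
    return relu(relu(x @ W1 + b1) @ W2 + b2)

def split_neuron(W1, b1, W2, j, alpha=0.3):
    """Replace hidden neuron j by two copies whose incoming weights and bias
    are scaled by alpha and (1 - alpha); their ReLU activations are then
    alpha*N_j and (1-alpha)*N_j, and keeping the original outgoing weights
    on both copies reproduces Eq. (7)."""
    W1s = np.column_stack([W1, W1[:, j]])           # duplicate column j
    b1s = np.append(b1, b1[j])
    W1s[:, j] *= alpha;      b1s[j] *= alpha        # first copy
    W1s[:, -1] *= 1 - alpha; b1s[-1] *= 1 - alpha   # second copy
    W2s = np.vstack([W2, W2[j]])                    # same outgoing weights
    return W1s, b1s, W2s

x = rng.normal(size=4)
W1s, b1s, W2s = split_neuron(W1, b1, W2, j=1)
assert np.allclose(forward(x, W1, b1, W2, b2),
                   forward(x, W1s, b1s, W2s, b2))
```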
A fragile network can be created by choosing the same synaptic weights for the
two selected neurons, but this would be easy for the attacker to spot, so another
solution is needed. To find a nontrivial solution, we constructed a linear
equation system Ap = c, where A contains the original, already decomposed
synaptic weights of the first layer, while p represents the unknown synaptic
weights of the subsequent layer. Vector c contains the corresponding weights
from the original network multiplied together: each element represents the
amount of activation related to one input. Finally, the nontrivial solution can
be obtained by solving the following non-homogeneous linear equation system
for each output neuron, where index j denotes the output neuron.
$$
\begin{bmatrix}
w^1_{11} & w^1_{21} & \cdots & w^1_{m1} \\
w^1_{12} & w^1_{22} & \cdots & w^1_{m2} \\
\vdots   & \vdots   & \ddots & \vdots   \\
w^1_{1n} & w^1_{2n} & \cdots & w^1_{mn} \\
b^1_1    & b^1_2    & \cdots & b^1_m
\end{bmatrix}
\times
\begin{bmatrix}
w^2_{j1} \\ w^2_{j2} \\ \vdots \\ w^2_{jm}
\end{bmatrix}
=
\begin{bmatrix}
\sum_{i=1}^{k} w^2_{ji} w^1_{i1} \\
\sum_{i=1}^{k} w^2_{ji} w^1_{i2} \\
\vdots \\
\sum_{i=1}^{k} w^2_{ji} w^1_{in} \\
\sum_{i=1}^{k} w^2_{ji} b^1_i
\end{bmatrix}
\qquad (8)
$$
It is important to note that all the predefined weights on layer l + 1 might change. In summary, this step can be considered as the replacement of a layer, changing all synaptic weights connecting from and to this layer, while keeping the biases of the neurons and the functionality of the network intact.
The only constraint of this method concerns the number of neurons in the two consecutive layers. It is known that for a matrix A of size M × N, the equation Ap = c has a solution if and only if rank(A) = rank([A|c]), where [A|c] is the augmented matrix. The decomposition of a neuron described in Eq. 7 results in linearly dependent weight vectors on layer l; therefore, when solving the equation system, the rank of the matrix A is less than or equal to min(N + 1, K). If the rank is equal to N + 1 (meaning that K ≥ N + 1), then vector c with dimension N + 1 does not introduce a new dimension to the subspace defined by matrix A. However, if rank(A) = K (meaning that K ≤ N + 1), then vector c could extend the subspace defined by A. Therefore, the general condition for the solvability of the equation system is K ≥ N + 1.
This shows that one can increase the number of neurons in a layer and divide the weights of an existing neuron in that layer. We have used this derivation and aim to find a solution of Eq. 7 where the orders of magnitude differ significantly (in the range of $10^6$) both for the network parameters and for the eigenvalues of the mapping $\mathbb{R}^N \to \mathbb{R}^L$.
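As a concrete sketch of this construction (our illustration, not the authors' code; the toy network sizes and NumPy routine are assumptions), the snippet below duplicates one hidden ReLU neuron, assembles the augmented system of Eq. 8, and shifts the minimum-norm solution along the null space so the two copies carry different outgoing weights while the network's function is preserved:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer pair: n inputs -> k hidden ReLU neurons -> one output neuron j.
n, k = 3, 3
W1 = rng.normal(size=(k, n))   # incoming weights, one row per hidden neuron
b1 = rng.normal(size=k)
w2 = rng.normal(size=k)        # outgoing weights toward output neuron j

def forward(x, W, b, w):
    return w @ np.maximum(W @ x + b, 0.0)

# Duplicate hidden neuron 0 with identical incoming weights and bias,
# so both copies fire exactly when the original neuron fired.
W1d = np.vstack([W1[0], W1])
b1d = np.concatenate([[b1[0]], b1])

# Eq. (8): one column of A per decomposed neuron (weights plus a bias row);
# c holds the activation each input (and the bias) contributed originally.
A = np.vstack([W1d.T, b1d])                   # shape (n + 1, k + 1)
c = np.concatenate([W1.T @ w2, [b1 @ w2]])    # shape (n + 1,)

# Minimum-norm solution of A p = c, then a null-space shift: columns 0 and 1
# of A are identical, so (1, -1, 0, ..., 0) lies in the null space, giving a
# non-trivial split where the two copies carry different outgoing weights.
p = np.linalg.lstsq(A, c, rcond=None)[0]
shift = np.zeros(k + 1)
shift[0], shift[1] = 1.0, -1.0
p = p + 0.7 * shift

x = rng.normal(size=n)
print(forward(x, W1, b1, w2), forward(x, W1d, b1d, p))  # outputs agree
```

Because the duplicated copies share the original neuron's activation pattern, any solution of the linear system reproduces the original function exactly, and the null-space component makes the split non-obvious to an attacker.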
2.3 Introducing Deceptive Neurons
The method described in the previous section results in a fragile neural network, but unfortunately this is not enough to protect the network weights, since an attacker could identify the decomposed neurons based on their biases, or could fit a new neural network to the functionality implemented by the layer. To prevent this, we introduce deceptive neurons into the layers. The purpose of these neurons is to have a non-zero total activation if and only if noise was added to their weights; apart from this, these neurons have to cancel each other's effect out in the network, though not necessarily within a single layer.
The simplest method is to define a neuron with arbitrary weights and the bias of an existing neuron, resulting in a large activation, and to make a copy of it whose output weights are multiplied by −1. As a result, these neurons do not contribute to the functionality of the network. However, adding noise to the weights of these neurons has unexpected consequences depending on the characteristics of the noise, eventually leading to a decrease in classification accuracy.
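A minimal sketch of such a cancelling pair (our toy example; all names and sizes are assumptions) appends two copies of an existing neuron with opposite, large outgoing weights and checks that the function is unchanged until noise breaks the cancellation:

```python
import numpy as np

rng = np.random.default_rng(1)

n, k = 3, 3
W1 = rng.normal(size=(k, n))
b1 = rng.normal(size=k)
w2 = rng.normal(size=k)

def forward(x, W, b, w):
    return w @ np.maximum(W @ x + b, 0.0)

# Deceptive pair: two copies of neuron 0 with a large outgoing weight g
# and its negation, so their contributions cancel exactly.
g = 1e4
W1f = np.vstack([W1, W1[0], W1[0]])
b1f = np.concatenate([b1, [b1[0], b1[0]]])
w2f = np.concatenate([w2, [g, -g]])

x = rng.normal(size=n)
assert np.isclose(forward(x, W1, b1, w2), forward(x, W1f, b1f, w2f))

# 1% proportional noise on the outgoing weights breaks the cancellation:
# the residual g * (eps1 - eps2) * activation can dwarf the true output.
noisy = w2f * (1 + 0.01 * rng.normal(size=w2f.shape))
print(forward(x, W1f, b1f, noisy))
```

The same mechanism explains the noise sensitivity reported in Sect. 3: proportional noise on two large, exactly cancelling terms leaves a large absolute residual.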
One important aspect of this method is to hide the generated neurons and obfuscate the network, to prevent the attacker from easily filtering out deceptive neurons in the architecture. Choosing the same weights again on both layers would be an obvious sign to an attacker; therefore this method should be combined with the decomposition described in Sect. 2.2.
6 K. Szentannai et al.
Since decomposition allows the generation of arbitrarily small weights, one can select a suitably small magnitude, which allows the generation of R real (non-deceptive) neurons in the system; half of their weights (the α parameters) can be set arbitrarily, while the other half of the weights are determined by Eq. 8. For each real neuron one can generate a number F of fake neurons, forming groups of R real and F fake neurons. These groups can be easily identified in the network, since all of their members have the same bias, but the identification of fake and real neurons within a group is non-polynomial.
The efficiency of this method should be measured by the computational complexity of successfully finding two or more corresponding fake neurons with a total activation of zero in a group. Assuming that only one pair of fake neurons was added to the network, it requires

$$\sum_{i=0}^{L} \binom{R_i + F_i}{2}$$

steps to successfully identify the fake neurons, where $R_i + F_i$ denotes the number of neurons in the corresponding hidden layer, and L is the number of hidden layers. This can be further increased by decomposing the fake neurons using Eq. 8: in that case the required number of steps is

$$\sum_{i=0}^{L} \binom{R_i + F_i}{d + 2},$$

d being the number of extra decomposed neurons. This is maximized if $d + 2 = (R_i + F_i)/2$, where i denotes the layer in which the fake neurons are located. However, this holds only if the attacker has information about the number of deceptive neurons. Without any prior knowledge, the attacker has to guess the number of deceptive neurons as well $(0, 1, 2, \dots, R_i + F_i - 1)$, which leads to exponentially increasing computational time.
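The counting argument can be made concrete with a short helper (a sketch; the function name and layer encoding are ours):

```python
from math import comb

def identification_steps(layers, d=0):
    """Candidate subsets an attacker must test to find one group of
    cancelling fake neurons: the sum over hidden layers of
    C(R_i + F_i, d + 2), where d is the number of extra decomposed
    fake neurons (d = 0 for a plain pair)."""
    return sum(comb(R + F, d + 2) for R, F in layers)

# Example: three hidden layers of 32 real + 12 extra neurons each.
print(identification_steps([(32, 12)] * 3))        # pairs only: 2838
print(identification_steps([(32, 12)] * 3, d=20))  # d + 2 = (R + F) / 2
```

The second call illustrates the maximizing choice d + 2 = (R_i + F_i)/2, where the binomial coefficient peaks.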
3 Experiments
3.1 Simulation of a Simple Network
As a case study, we have created a simple fully connected neural network with three layers, each containing two neurons, to demonstrate the validity of our approach. The functionality of the network can be considered as a mapping $f : \mathbb{R}^2 \to \mathbb{R}^2$.
$$w^1 = \begin{bmatrix} 6 & -1 \\ -1 & 7 \end{bmatrix}, \quad b^1 = \begin{bmatrix} 1 & -5 \end{bmatrix}, \quad w^2 = \begin{bmatrix} 5 & 3 \\ 9 & -1 \end{bmatrix}, \quad b^2 = \begin{bmatrix} 7 & 1 \end{bmatrix}$$
We added two neurons to the hidden layer by decomposition, which does not modify the input and output spaces; no deceptive neurons were used in this experiment. After applying the method described in Sect. 2.2, we obtained the solution:
$$w^1 = \begin{bmatrix} 0.0525 & -0.4213 & 6.0058 & -0.5744 \\ -0.0087 & 2.9688 & -0.9991 & 4.0263 \end{bmatrix}$$

$$b^1 = \begin{bmatrix} 0.0087 & -2.1066 & 1.0009 & -2.8722 \end{bmatrix}$$

$$w^2 = \begin{bmatrix} 4.1924 \times 10^{3} & -5.4065 \times 10^{3} \\ -2.3914 & 7.3381 \\ -3.2266 & 5.7622 \\ 6.9634 & -7.0666 \end{bmatrix}, \quad b^2 = \begin{bmatrix} 7 & 1 \end{bmatrix}$$
Fig. 1. This figure depicts the response of a simple two-layered fully connected network for a selected input (red dot) and the responses of its variants with 1% noise (yellow dots) added proportionally to the weights. The blue dots represent the responses of the transformed MimosaNets under the same level of noise on their weights, while the response of the transformed network (without noise) remained exactly the same.
In the following experiment we chose an arbitrary input vector, [7, 9], and measured the response of the network for this input, each time introducing 1% noise to the weights of the network. Figure 1 shows the responses of the original network and of the modified network after adding 1% noise. The variance of the original network's response is 6.083 for the first output dimension and 8.399 for the second, while the corresponding variances are 476.221 and 767.877 for the decomposed network. This example demonstrates how the decomposition of a layer can increase the network's dependence on its weights.
3.2 Simulations on the MNIST Dataset
We have created a five-layer fully connected network containing 32 neurons in each hidden layer (with 784 and 10 neurons in the input and output layers) and trained it on the MNIST [8] dataset, using batches of 32 and the Adam optimizer [9] for 7500 iterations. The network reached an accuracy of 98.4% on the independent test set.
We have created different modifications of the network by adding 9, 18, 36, or 72 extra neurons. These neurons were divided equally between the three hidden layers; 2/3 of them were deceptive neurons (since they were always created in pairs) and 1/3 of them were created by decomposition. This means that in the case of 36 additional neurons, 2 × 4 deceptive neurons were added to each layer and four new neurons per layer were created by decomposition.
In our hypothetical scenario these networks (along with the original) could be stolen by a malevolent attacker, who would try to conceal the theft by using one of the following three methods: adding noise proportionally to the network weights, continuing network training on arbitrary data, or network knowledge distillation. All reported data points are averages of 25 independent measurements.
Dependence on Additive Noise. We have investigated network performance under additive noise applied to the network weights. The decrease of accuracy as a function of the additive noise ratio can be seen in Fig. 2.
First we tested a fully connected neural network trained on the MNIST dataset without making any modifications to it. The decrease of accuracy was not more than 0.2% even with a relatively high 5% noise. This shows the robustness of a fully connected network.
After applying the methods described in Sect. 2, network accuracy dropped to 10% even for noise below 1% of the network weights, as Fig. 2 depicts. This alone would justify the applicability of our method, but we investigated low noise levels further, as shown in Fig. 3. As can be seen from the figure, accuracy starts to drop when the ratio of additive noise reaches the level of $10^{-7}$, which means the attacker cannot significantly modify the weights. This effect could be strengthened by adding more and more neurons to the network.
Fig. 2. This figure depicts accuracy changes on the MNIST dataset under various levels of additive noise applied to the weights. The original network (purple) is not dependent on these weight changes, while accuracies degrade in the transformed networks even at the lowest level of noise.
Dependence on Further Training Steps. Additive noise randomly modifies the weights, but it is important to examine how accuracy changes under structured changes exploiting the gradients of the network. Figure 4 depicts accuracy changes and average weight distances when applying further training steps to the network. Further training was examined using different step sizes and optimizers (SGD, AdaGrad, and Adam), training the network with the original MNIST labels and with randomly selected labels; the results were qualitatively the same in all cases.
Dependence on Network Distillation. We have tried to distill the knowledge in the network and train a new neural network to approximate the functionality of previously selected layers, applying the method described in [10]. We generated one million random input samples with their outputs for the modified networks and used this dataset to approximate the functionality of the network.
Fig. 3. A logarithmic plot depicting the same accuracy dependence as in Fig. 2, focusing on low noise levels. As can be seen from the plot, accuracy values do not change significantly below a noise ratio of $10^{-7}$, which means the most important digits of the weights would remain intact to prove the connection between the original and modified networks.
We have created three-layered neural networks containing 32, 48, 64, or 128 neurons in the hidden layer (the numbers of neurons in the first and last layers were determined by the original network) and tried to approximate the functionality of the hidden layers of the original structure. Since deceptive neurons have activations of the same order of magnitude as the original responses, these values disturb the manifold of the embedded representations learned by the network, making it more difficult to approximate with a neural network. Table 1 contains the maximum accuracies that could be reached with knowledge distillation, depending on the number of deceptive neurons and on the number of neurons in the architecture used for distillation. This demonstrates that our method is also resilient towards knowledge distillation.
Fig. 4. The figure plots the accuracy dependence of the networks in case of further training (applying further optimization steps). As can be seen from the plot, the weights had to be kept within an average distance of $10^{-7}$ to maintain the same level of accuracy.
Table 1. The table displays the maximum accuracies reached with knowledge distillation. The rows correspond to the number of extra neurons added to the investigated layer, and the columns to the number of neurons in the hidden layer of the fully connected architecture used for distillation.
#Deceptive N. #N. = 32 #N. = 48 #N. = 64 #N. = 128
9 0.64 0.65 0.69 0.71
18 0.12 0.14 0.15 0.17
36 0.10 0.11 0.10 0.13
72 0.11 0.09 0.10 0.10
4 Conclusion
In this paper, we have shown a transformation method which can significantly
increase a network’s dependence on its weights, keeping the original functionality
intact. We have also presented how deceptive neurons can be added to a network,
without disturbing its original response. Using these transformations iteratively
one can create and openly share a trained network, where it is computationally
extensive to reverse engineer the original network architecture and embeddings
in the hidden layers. The drawback of the method is the additional computa-
tional need for the extra neurons, but this is not significant, since computational
increase is polynomial.
We have tested our method on simple toy problems and on the MNIST dataset using fully connected neural networks, and demonstrated that our approach results in networks that are non-robust to the following perturbations: additive noise, application of further training steps, and knowledge distillation.
Acknowledgments. This research has been partially supported by the Hungarian Government through grant 2018-1.2.1-NKP-00008: Exploring the Mathematical Foundations of Artificial Intelligence; the funds of grant EFOP-3.6.2-16-2017-00013 are also gratefully acknowledged.
References
1. Koch, E., Zhao, J.: Towards robust and hidden image copyright labeling. In: IEEE Workshop on Nonlinear Signal and Image Processing, vol. 1174, pp. 185–206, Neos Marmaras, Greece (1995)
2. Wolfgang, R.B., Delp, E.J.: A watermark for digital images. In: Proceedings of the International Conference on Image Processing, vol. 3, pp. 219–222. IEEE (1996)
3. Zarrabi, H., Hajabdollahi, M., Soroushmehr, S., Karimi, N., Samavi, S., Najarian, K.: Reversible image watermarking for health informatics systems using distortion compensation in wavelet domain (2018). arXiv preprint arXiv:1802.07786
4. Xu, H., Su, Y., Zhao, Z., Zhou, Y., Lyu, M.R., King, I.: DeepObfuscation: securing the structure of convolutional neural networks via knowledge distillation (2018). arXiv preprint arXiv:1806.10313
5. Namba, R., Sakuma, J.: Robust watermarking of neural network with exponential weighting (2019). arXiv preprint arXiv:1901.06151
6. Gomez, L., Ibarrondo, A., Márquez, J., Duverger, P.: Intellectual property protection for distributed neural networks (2018)
7. Hitaj, D., Mancini, L.V.: Have you stolen my model? Evasion attacks against deep neural network watermarking techniques (2018). arXiv preprint arXiv:1809.00615
8. LeCun, Y., Cortes, C., Burges, C.: MNIST handwritten digit database. ATT Labs 2 (2010). http://yann.lecun.com/exdb/mnist
9. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization (2014). arXiv preprint arXiv:1412.6980
10. Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network (2015). arXiv preprint arXiv:1503.02531
Applications of Z-Numbers and Neural
Networks in Engineering
Raheleh Jafari¹(B), Sina Razvarz², and Alexander Gegov³
¹ School of Design, University of Leeds, Leeds LS2 9JT, UK
r.jafari@leeds.ac.uk
² Departamento de Control Automático, CINVESTAV-IPN (National Polytechnic Institute), Mexico City, Mexico
srazvarz@yahoo.com
³ School of Computing, University of Portsmouth, Buckingham Building, PO1 3HE Portsmouth, UK
alexander.gegov@port.ac.uk
Abstract. In the real world, much of the information on which decisions are based is vague, imprecise and incomplete. Artificial intelligence techniques can deal with extensive uncertainties. Currently, various types of artificial intelligence technologies, like fuzzy logic and artificial neural networks, are broadly utilized in the engineering field. In this paper, combined Z-number and neural network techniques are studied. Furthermore, the applications of Z-numbers and neural networks in engineering are introduced.
Keywords: Artificial intelligence · Fuzzy logic · Z-number · Neural network
1 Introduction
Intelligent systems comprise fuzzy systems and neural networks. They have particular properties, such as the capability of learning, modeling and resolving optimization problems, that suit specific kinds of applications. An intelligent system can be called a hybrid system if it combines a minimum of two intelligent systems. For example, the mixture of a fuzzy system and a neural network yields a hybrid system called a neuro-fuzzy system.
Neural networks are made of interconnected groups of artificial neurons that process information through computations linked to them. Mostly, neural networks can adapt themselves to structural alterations during the training phase. Neural networks have been utilized to model complicated relationships between inputs and outputs or to acquire patterns in data [1–12].
Fuzzy logic systems are broadly utilized to model systems characterized by vague and unreliable information [13–29]. Over the years, investigators have proposed extensions to the theory of fuzzy logic. A remarkable extension is the Z-number [30]. The Z-number is defined as an ordered pair of fuzzy numbers (C, D), such that C is a value of some variable and D is the reliability, a value of the probability measure of C. Z-numbers are widely applied in various implementations in different areas [31–36].
© Springer Nature Switzerland AG 2020
K. Arai et al. (Eds.): SAI 2020, AISC 1230, pp. 12–25, 2020.
https://doi.org/10.1007/978-3-030-52243-8_2
In this paper, the basic principles and definitions of Z-numbers and neural networks are given, and their applications in engineering are introduced. Combined Z-number and neural network techniques are also studied. The rest of the paper is organized as follows. The theoretical background of Z-numbers and artificial neural networks is detailed in Sect. 2. A comparison analysis of neural networks and Z-number systems is presented in Sect. 3. The combined Z-number and neural network techniques are given in Sect. 4. The conclusion of this work is summarized in Sect. 5.
2 Theoretical Background
In this section, we provide a brief theoretical insight into Z-numbers and artificial neural networks.
2.1 Z-Numbers
Mathematical Preliminaries. Here some necessary definitions of Z-number
theory are given.
Definition 1. If q is: 1) normal: there exists $\omega_0 \in \mathbb{R}$ where $q(\omega_0) = 1$; 2) convex: $q(\upsilon\omega + (1 - \upsilon)\tau) \geq \min\{q(\omega), q(\tau)\}$, $\forall \omega, \tau \in \mathbb{R}$, $\forall \upsilon \in [0, 1]$; 3) upper semicontinuous on $\mathbb{R}$: $q(\omega) \leq q(\omega_0) + \varepsilon$, $\forall \omega \in N(\omega_0)$, $\forall \omega_0 \in \mathbb{R}$, $\forall \varepsilon > 0$, where $N(\omega_0)$ is a neighborhood; 4) $q^{+} = \{\omega \in \mathbb{R} : q(\omega) > 0\}$ is compact; then q is a fuzzy variable, $q \in E : \mathbb{R} \to [0, 1]$.
The fuzzy variable q is defined as

$$q = \left( \underline{q}, \overline{q} \right) \qquad (1)$$

where $\underline{q}$ is the lower-bound variable and $\overline{q}$ is the upper-bound variable.
Definition 2. The Z-number is composed of two elements, $Z = [q(\omega), p]$. $q(\omega)$ is considered the restriction on the real-valued uncertain variable $\omega$, and p is considered a measure of the reliability of q. The Z-number is called a $Z^{+}$-number when $q(\omega)$ is a fuzzy number and p is the probability distribution of $\omega$. If $q(\omega)$ and p are both fuzzy numbers, then the Z-number is called a $Z^{-}$-number. The $Z^{+}$-number carries more information than the $Z^{-}$-number. In this work, we use the definition of the $Z^{+}$-number, i.e., $Z = [q, p]$, where q is a fuzzy number and p is a probability distribution.
The triangular membership function is defined as

$$\mu_q = G(a, b, c) = \begin{cases} \dfrac{\omega - a}{b - a} & a \leq \omega \leq b \\[4pt] \dfrac{c - \omega}{c - b} & b \leq \omega \leq c \\[4pt] 0 & \text{otherwise} \end{cases} \qquad (2)$$

and the trapezoidal membership function is defined as

$$\mu_q = G(a, b, c, d) = \begin{cases} \dfrac{\omega - a}{b - a} & a \leq \omega \leq b \\[4pt] 1 & b \leq \omega \leq c \\[4pt] \dfrac{d - \omega}{d - c} & c \leq \omega \leq d \\[4pt] 0 & \text{otherwise} \end{cases} \qquad (3)$$
The probability measure of q is defined as

$$P(q) = \int_{\mathbb{R}} \mu_q(\omega)\, p(\omega)\, d\omega \qquad (4)$$

where p is the probability density of $\omega$. For discrete Z-numbers we have

$$P(q) = \sum_{j=1}^{n} \mu_q(\omega_j)\, p(\omega_j) \qquad (5)$$
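Eqs. (2), (3) and (5) can be transcribed directly (a sketch; function and variable names are ours):

```python
def tri(omega, a, b, c):
    """Triangular membership G(a, b, c), Eq. (2)."""
    if a <= omega <= b:
        return (omega - a) / (b - a)
    if b < omega <= c:
        return (c - omega) / (c - b)
    return 0.0

def trap(omega, a, b, c, d):
    """Trapezoidal membership G(a, b, c, d), Eq. (3)."""
    if a <= omega <= b:
        return (omega - a) / (b - a)
    if b < omega <= c:
        return 1.0
    if c < omega <= d:
        return (d - omega) / (d - c)
    return 0.0

def prob_measure(mu, points):
    """Discrete probability measure P(q) of Eq. (5):
    the sum of mu(omega_j) * p(omega_j) over (omega_j, p_j) pairs."""
    return sum(mu(omega) * p for omega, p in points)

# Example: a triangular restriction evaluated under a three-point distribution.
mu = lambda w: tri(w, 0.0, 5.0, 10.0)
print(prob_measure(mu, [(2.5, 0.2), (5.0, 0.5), (7.5, 0.3)]))  # 0.75
```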
Definition 3. The α-level of the Z-number Z = (q, p) is stated as

$$[Z]^{\alpha} = ([q]^{\alpha}, [p]^{\alpha}) \qquad (6)$$

where $0 < \alpha \leq 1$. $[p]^{\alpha}$ is calculated by Nguyen's theorem:

$$[p]^{\alpha} = p([q]^{\alpha}) = p([\underline{q}^{\alpha}, \overline{q}^{\alpha}]) = \left[ \underline{P}^{\alpha}, \overline{P}^{\alpha} \right] \qquad (7)$$

where $p([q]^{\alpha}) = \{p(\omega) \mid \omega \in [q]^{\alpha}\}$. Hence, $[Z]^{\alpha}$ is defined as

$$[Z]^{\alpha} = \left[ \underline{Z}^{\alpha}, \overline{Z}^{\alpha} \right] = \left[ \left( \underline{q}^{\alpha}, \underline{P}^{\alpha} \right), \left( \overline{q}^{\alpha}, \overline{P}^{\alpha} \right) \right] \qquad (8)$$

where $\underline{P}^{\alpha} = \underline{q}^{\alpha} p(\underline{\omega}_j^{\alpha})$, $\overline{P}^{\alpha} = \overline{q}^{\alpha} p(\overline{\omega}_j^{\alpha})$, and $[\omega_j]^{\alpha} = (\underline{\omega}_j^{\alpha}, \overline{\omega}_j^{\alpha})$.
Let $Z_1 = (q_1, p_1)$ and $Z_2 = (q_2, p_2)$; then

$$Z_{12} = Z_1 * Z_2 = (q_1 * q_2, p_1 * p_2) \qquad (9)$$

where $* \in \{\oplus, \ominus, \otimes\}$; $\oplus$, $\ominus$, and $\otimes$ indicate sum, subtraction, and multiplication, respectively.
The operations utilized for the fuzzy numbers $[q_1]^{\alpha} = [q_{11}^{\alpha}, q_{12}^{\alpha}]$ and $[q_2]^{\alpha} = [q_{21}^{\alpha}, q_{22}^{\alpha}]$ are defined as [37]:

$$[q_1 \oplus q_2]^{\alpha} = [q_1]^{\alpha} + [q_2]^{\alpha} = [q_{11}^{\alpha} + q_{21}^{\alpha},\; q_{12}^{\alpha} + q_{22}^{\alpha}]$$

$$[q_1 \ominus q_2]^{\alpha} = [q_1]^{\alpha} - [q_2]^{\alpha} = [q_{11}^{\alpha} - q_{22}^{\alpha},\; q_{12}^{\alpha} - q_{21}^{\alpha}]$$

$$[q_1 \otimes q_2]^{\alpha} = \left[ \min\{q_{11}^{\alpha} q_{21}^{\alpha}, q_{11}^{\alpha} q_{22}^{\alpha}, q_{12}^{\alpha} q_{21}^{\alpha}, q_{12}^{\alpha} q_{22}^{\alpha}\},\; \max\{q_{11}^{\alpha} q_{21}^{\alpha}, q_{11}^{\alpha} q_{22}^{\alpha}, q_{12}^{\alpha} q_{21}^{\alpha}, q_{12}^{\alpha} q_{22}^{\alpha}\} \right] \qquad (10)$$
For discrete probability distributions, the following relation is defined for all $p_1 * p_2$ operations:

$$p_1 * p_2 = \sum_{j} p_1(\omega_{1,j})\, p_2(\omega_{2,(n-j)}) = p_{12}(\omega) \qquad (11)$$
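The α-cut operations of Eq. (10) reduce to standard interval arithmetic on the cut endpoints; a minimal sketch (function names are ours):

```python
def i_add(p, q):
    """[q1 + q2]^alpha: endpoint-wise sum."""
    return (p[0] + q[0], p[1] + q[1])

def i_sub(p, q):
    """[q1 - q2]^alpha: lower minus upper, upper minus lower."""
    return (p[0] - q[1], p[1] - q[0])

def i_mul(p, q):
    """[q1 * q2]^alpha: min and max over the four endpoint products."""
    prods = (p[0] * q[0], p[0] * q[1], p[1] * q[0], p[1] * q[1])
    return (min(prods), max(prods))

# Example alpha-cuts:
print(i_add((1, 2), (3, 4)))   # (4, 6)
print(i_sub((1, 2), (3, 4)))   # (-3, -1)
print(i_mul((-1, 2), (3, 4)))  # (-4, 8)
```

In the risk-assessment example below, the threat rate Z12 = Z1 ⊗ Z2 would apply `i_mul` to the α-cuts of the restrictions, with the reliability parts combined analogously per Eq. (9).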
Fig. 1. Membership functions applied for (a) cereal yield, cereal production, economic
growth, (b) threat rate, and (c) reliability
Background and Related Work. The implementations of Z-number-based techniques are limited because of the shortage of effective approaches for computation with Z-numbers.
In [38], the capabilities of Z-numbers in improving the quality of risk assessment are studied. A prediction equal to (High, Very Sure) is formalized as the Z-evaluation "y is Z(c, p)", such that y is considered a random variable of threat probability, and c and p are taken to be fuzzy sets demonstrating soft constraints on a threat probability and a partial reliability, respectively. The likelihood of risk is illustrated by a Z-number as Probability = Z1(High, Very Sure), such that c is indicated through the linguistic terms High, Medium, Low, and p through the terms Very Sure, Sure, etc. Likewise, the consequence rate is explained as Consequence measure = Z2(Low, Sure). The threat rate (Z12) is computed as the product of the probability (Z1) and the consequence measure (Z2).
In [39], a Z-number-based fuzzy system is suggested to determine the food security risk level. The proposed system relies on fuzzy If-Then rules, which apply basic parameters such as cereal production, cereal yield, and economic growth to specify the threat rate of food security. The membership functions applied to explain the input as well as output variables are demonstrated in Fig. 1.
In [40], the application of the Z-number theory to the selection of an optimal alloy is illustrated. Three alloys, named Ti12Mo2Sn, Ti12Mo4Sn, and Ti12Mo6Sn, are examined, and an optimal titanium alloy is selected using the proposed approach. The optimality of the alloys is studied based on three criteria: strength level, plastic deformation degree, and tensile strength.
Fig. 2. The structure of a biological neuron
2.2 Neural Networks
Neural networks are constructed from neurons and synapses, which alter their values in response to nearby neurons and synapses. Neural networks operate similarly to a computer in that they map inputs to outputs; in hardware realizations, neurons and synapses are silicon components that mimic this behavior. A neuron sums the incoming signals from other neurons and then computes its response, represented by a number. Signals travel across the synapses, which carry numerical values. Neural networks learn by varying the values of their synapses. The structure of a biological neuron, or nerve cell, is shown in Fig. 2. The processing steps inside each neuron are demonstrated in Fig. 3.
Background and Related Work. In [41], an artificial neural network technique is utilized for modeling the void fraction in two-phase flow inside helical vertical coils with water as the working fluid. In [42], an artificial neural network and a multi-objective genetic algorithm are applied to optimize subcooled flow boiling in a vertical pipe. Pressure, the mass flux of the water, the inlet subcooling temperature, and the heat flux are considered inlet parameters. The artificial neural network uses the inlet parameters to predict the objective functions, which are the maximum wall surface temperature and the averaged vapor volume fraction at the outlet. The optimization procedure of the design parameters is shown in Fig. 4.
In [43], an artificial neural network technique is applied to predict heat transfer in supercritical water. The artificial neural network is trained on 5280 data points gathered from experimental results. Mass flux, heat flux, pressure, tube diameter, and bulk specific enthalpy are taken to be the inputs of the proposed artificial neural network; the tube wall temperature is taken to be the output, see Fig. 5.
Fig. 3. Processing steps inside each neuron
3 Comparison Analysis of Neural Networks and Z-Number Systems
Neural networks and Z-number systems can be considered part of the soft computing field. The comparison of neural networks and Z-number systems is presented in Table 1. Neural networks have the following advantages:
Table 1. The comparison of neural networks and Z-number systems.
Z-number systems Neural networks
Knowledge presentation Very good Very bad
Uncertainty tolerance Very good Very good
Inaccuracy tolerance Very good Very good
Compatibility Bad Very good
Learning capability Very bad Very good
Interpretation capability Very good Very bad
Knowledge detection and data mining Bad Very good
Maintainability Good Very good
i Adaptive learning: the capability of learning tasks based on the data supplied for training or on initial experience.
ii Self-organization: neural networks are able to create their own organization during learning.
iii Real-time execution: the calculations of neural networks may be executed in parallel, and specific hardware devices have been constructed that can take advantage of this feature.
Neural networks have the following drawbacks:
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.
START: FULL LICENSE
THE FULL PROJECT GUTENBERG LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK
To protect the Project Gutenberg™ mission of promoting the free
distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.
Section 1. General Terms of Use and
Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund
from the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.
1.B. “Project Gutenberg” is a registered trademark. It may only be
used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law
in the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name associated
with the work. You can easily comply with the terms of this
agreement by keeping this work in the same format with its attached
full Project Gutenberg™ License when you share it without charge
with others.
1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the
terms of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.
1.E. Unless you have removed all references to Project Gutenberg:
1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears,
or with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.
1.E.2. If an individual Project Gutenberg™ electronic work is derived
from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.
1.E.3. If an individual Project Gutenberg™ electronic work is posted
with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning
of this work.
1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.
1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the Project
Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or
expense to the user, provide a copy, a means of exporting a copy, or
a means of obtaining a copy upon request, of the work in its original
“Plain Vanilla ASCII” or other form. Any alternate format must
include the full Project Gutenberg™ License as specified in
paragraph 1.E.1.
1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.
1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg™ electronic works
provided that:
• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”
• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.
• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.
• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.
1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™
electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.
1.F.
1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.
1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for
the “Right of Replacement or Refund” described in paragraph 1.F.3,
the Project Gutenberg Literary Archive Foundation, the owner of the
Project Gutenberg™ trademark, and any other party distributing a
Project Gutenberg™ electronic work under this agreement, disclaim
all liability to you for damages, costs and expenses, including legal
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR
NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR
BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK
OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL
NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT,
CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF
YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.
1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of receiving
it, you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or
entity that provided you with the defective work may elect to provide
a replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.
1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.
1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation,
the trademark owner, any agent or employee of the Foundation,
anyone providing copies of Project Gutenberg™ electronic works in
accordance with this agreement, and any volunteers associated with
the production, promotion and distribution of Project Gutenberg™
electronic works, harmless from all liability, costs and expenses,
including legal fees, that arise directly or indirectly from any of the
following which you do or cause to occur: (a) distribution of this or
any Project Gutenberg™ work, (b) alteration, modification, or
additions or deletions to any Project Gutenberg™ work, and (c) any
Defect you cause.
Section 2. Information about the Mission
of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.
Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.
Section 3. Information about the Project
Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.
The Foundation’s business office is located at 809 North 1500 West,
Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact
Section 4. Information about Donations to
the Project Gutenberg Literary Archive
Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many
small donations ($1 to $5,000) are particularly important to
maintaining tax exempt status with the IRS.
The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.
While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.
International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.
Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.
Section 5. General Information About
Project Gutenberg™ electronic works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.
Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.
Most people start at our website which has the main PG search
facility: www.gutenberg.org.
This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.
  • 4. Advances in Intelligent Systems and Computing 1230 Kohei Arai Supriya Kapoor Rahul Bhatia Editors Intelligent Computing Proceedings of the 2020 Computing Conference, Volume 3
  • 5. Advances in Intelligent Systems and Computing Volume 1230 Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Advisory Editors Nikhil R. Pal, Indian Statistical Institute, Kolkata, India Rafael Bello Perez, Faculty of Mathematics, Physics and Computing, Universidad Central de Las Villas, Santa Clara, Cuba Emilio S. Corchado, University of Salamanca, Salamanca, Spain Hani Hagras, School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK László T. Kóczy, Department of Automation, Széchenyi István University, Gyor, Hungary Vladik Kreinovich, Department of Computer Science, University of Texas at El Paso, El Paso, TX, USA Chin-Teng Lin, Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan Jie Lu, Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW, Australia Patricia Melin, Graduate Program of Computer Science, Tijuana Institute of Technology, Tijuana, Mexico Nadia Nedjah, Department of Electronics Engineering, University of Rio de Janeiro, Rio de Janeiro, Brazil Ngoc Thanh Nguyen , Faculty of Computer Science and Management, Wrocław University of Technology, Wrocław, Poland Jun Wang, Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Shatin, Hong Kong
  • 6. The series “Advances in Intelligent Systems and Computing” contains publications on theory, applications, and design methods of Intelligent Systems and Intelligent Computing. Virtually all disciplines such as engineering, natural sciences, computer and information science, ICT, economics, business, e-commerce, environment, healthcare, life science are covered. The list of topics spans all the areas of modern intelligent systems and computing such as: computational intelligence, soft comput- ing including neural networks, fuzzy systems, evolutionary computing and the fusion of these paradigms, social intelligence, ambient intelligence, computational neuro- science, artificial life, virtual worlds and society, cognitive science and systems, Perception and Vision, DNA and immune based systems, self-organizing and adaptive systems, e-Learning and teaching, human-centered and human-centric computing, recommender systems, intelligent control, robotics and mechatronics including human-machine teaming, knowledge-based paradigms, learning para- digms, machine ethics, intelligent data analysis, knowledge management, intelligent agents, intelligent decision making and support, intelligent network security, trust management, interactive entertainment, Web intelligence and multimedia. The publications within “Advances in Intelligent Systems and Computing” are primarily proceedings of important conferences, symposia and congresses. They cover significant recent developments in the field, both of a foundational and applicable character. An important characteristic feature of the series is the short publication time and world-wide distribution. This permits a rapid and broad dissemination of research results. ** Indexing: The books of this series are submitted to ISI Proceedings, EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink ** More information about this series at http://www.springer.com/series/11156
  • 7. Kohei Arai • Supriya Kapoor • Rahul Bhatia Editors Intelligent Computing Proceedings of the 2020 Computing Conference, Volume 3
  • 8. Editors Kohei Arai Faculty of Science and Engineering Saga University Saga, Japan Rahul Bhatia The Science and Information (SAI) Organization Bradford, West Yorkshire, UK Supriya Kapoor The Science and Information (SAI) Organization Bradford, West Yorkshire, UK ISSN 2194-5357 ISSN 2194-5365 (electronic) Advances in Intelligent Systems and Computing ISBN 978-3-030-52242-1 ISBN 978-3-030-52243-8 (eBook) https://doi.org/10.1007/978-3-030-52243-8 © Springer Nature Switzerland AG 2020 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
  • 9. Editor’s Preface On behalf of the Committee, we welcome you to the Computing Conference 2020. The aim of this conference is to give a platform to researchers with fundamental contributions and to be a premier venue for industry practitioners to share and report on up-to-the-minute innovations and developments, to summarize the state of the art, and to exchange ideas and advances in all aspects of computer sciences and its applications. For this edition of the conference, we received 514 submissions from 50+ countries around the world. These submissions underwent a double-blind peer review process. Of those 514 submissions, 160 submissions (including 15 posters) have been selected to be included in this proceedings. The published proceedings have been divided into three volumes covering a wide range of conference tracks, such as technology trends, computing, intelligent systems, machine vision, security, communication, electronics and e-learning, to name a few. In addition to the contributed papers, the conference program included inspiring keynote talks, whose thought-provoking claims were anticipated to pique the interest of the entire computing audience; they were streamed live during the conference. The authors also presented their research papers very professionally, and these presentations were viewed by a large international audience online. All this digital content prompted significant contemplation and discussion amongst the participants. Deep appreciation goes to the keynote speakers for sharing their knowledge and expertise with us and to all the authors who have spent the time and effort to contribute significantly to this conference. We are also indebted to the Organizing Committee for their great efforts in ensuring the successful implementation of the conference. In particular, we would like to thank the Technical Committee for their constructive and enlightening reviews on the manuscripts in the limited timescale. 
We hope that all the participants and the interested readers benefit scientifically from this book and find it stimulating in the process. We are pleased to present the proceedings of this conference as its published record.
  • 10. Hope to see you in 2021, in our next Computing Conference, with the same amplitude, focus and determination. Kohei Arai
  • 11. Contents Preventing Neural Network Weight Stealing via Network Obfuscation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Kálmán Szentannai, Jalal Al-Afandi, and András Horváth Applications of Z-Numbers and Neural Networks in Engineering . . . . . 12 Raheleh Jafari, Sina Razvarz, and Alexander Gegov 5G-FOG: Freezing of Gait Identification in Multi-class Softmax Neural Network Exploiting 5G Spectrum . . . . . . . . . . . . . . . . . . . . . . . . 26 Jan Sher Khan, Ahsen Tahir, Jawad Ahmad, Syed Aziz Shah, Qammer H. Abbasi, Gordon Russell, and William Buchanan Adaptive Blending Units: Trainable Activation Functions for Deep Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Leon René Sütfeld, Flemming Brieger, Holger Finger, Sonja Füllhase, and Gordon Pipa Application of Neural Networks to Characterization of Chemical Sensors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 Mahmoud Zaki Iskandarani Application of Machine Learning in Deception Detection. . . . . . . . . . . . 61 Owolafe Otasowie A New Approach to Estimate the Discharge Coefficient in Sharp-Crested Rectangular Side Orifices Using Gene Expression Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 Hossein Bonakdari, Bahram Gharabaghi, Isa Ebtehaj, and Ali Sharifi DiaTTroD: A Logical Agent Diagnostic Test for Tropical Diseases . . . . 97 Sandra Mae W. Famador and Tardi Tjahjadi A Weighted Combination Method of Multiple K-Nearest Neighbor Classifiers for EEG-Based Cognitive Task Classification . . . . . . . . . . . . 116 Abduljalil Mohamed, Amer Mohamed, and Yasir Mustafa vii
  • 12. Detection and Localization of Breast Tumor in 2D Using Microwave Imaging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 Abdelfettah Miraoui, Lotfi Merad Sidi, and Mohamed Meriah Regression Analysis of Brain Biomechanics Under Uniaxial Deformation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 O. Abuomar, F. Patterson, and R. K. Prabhu Exudate-Based Classification for Detection of Severity of Diabetic Macula Edema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 Nandana Prabhu, Deepak Bhoir, Nita Shanbhag, and Uma Rao Analysis and Detection of Brain Tumor Using U-Net-Based Deep Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 Vibhu Garg, Madhur Bansal, A. Sanjana, and Mayank Dave Implementation of Deep Neural Networks in Facial Emotion Perception in Patients Suffering from Depressive Disorder: Promising Tool in the Diagnostic Process and Treatment Evaluation . . . . . . . . . . . 174 Krzysztof Michalik and Katarzyna Kucharska Invisibility and Fidelity Vector Map Watermarking Based on Linear Cellular Automata Transform . . . . . . . . . . . . . . . . . . . . . . . . 185 Saleh Al-Ardhi, Vijey Thayananthan, and Abdullah Basuhail Implementing Variable Power Transmission Patterns for Authentication Purposes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198 Hosam Alamleh, Ali Abdullah S. Alqahtani, and Dalia Alamleh SADDLE: Secure Aerial Data Delivery with Lightweight Encryption . . . 204 Anthony Demeri, William Diehl, and Ahmad Salman Malware Analysis with Machine Learning for Evaluating the Integrity of Mission Critical Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224 Robert Heras and Alexander Perez-Pons Enhanced Security Using Elasticsearch and Machine Learning . . . . . . . 
244 Ovidiu Negoita and Mihai Carabas Memory Incentive Provenance (MIP) to Secure the Wireless Sensor Data Stream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 Mohammad Amanul Islam Tightly Close It, Robustly Secure It: Key-Based Lightweight Process for Propping up Encryption Techniques . . . . . . . . . . . . . . . . . . . . . . . . 278 Muhammed Jassem Al-Muhammed, Ahmad Al-Daraiseh, and Raed Abuzitar viii Contents
  • 13. Statistical Analysis to Optimize the Generation of Cryptographic Keys from Physical Unclonable Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 302 Bertrand Cambou, Mohammad Mohammadi, Christopher Philabaum, and Duane Booher Towards an Intelligent Intrusion Detection System: A Proposed Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 Raghda Fawzey Hriez, Ali Hadi, and Jalal Omer Atoum LockChain Technology as One Source of Truth for Cyber, Information Security and Privacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336 Yuri Bobbert and Nese Ozkanli Introduction of a Hybrid Monitor for Cyber-Physical Systems . . . . . . . 348 J. Ceasar Aguma, Bruce McMillin, and Amelia Regan Software Implementation of a SRAM PUF-Based Password Manager . . . 361 Sareh Assiri, Bertrand Cambou, D. Duane Booher, and Mohammad Mohammadinodoushan Contactless Palm Vein Authentication Security Technique for Better Adoption of e-Commerce in Developing Countries . . . . . . . . . . . . . . . . . 380 Sunday Alabi, Martin White, and Natalia Beloff LightGBM Algorithm for Malware Detection . . . . . . . . . . . . . . . . . . . . 391 Mouhammd Al-kasassbeh, Mohammad A. Abbadi, and Ahmed M. Al-Bustanji Exploiting Linearity in White-Box AES with Differential Computation Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404 Jakub Klemsa and Martin Novotný Immune-Based Network Dynamic Risk Control Strategy Knowledge Ontology Construction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420 Meng Huang, Tao Li, Hui Zhao, Xiaojie Liu, and Zhan Gao Windows 10 Hibernation File Forensics . . . . . . . . . . . . . . . . . . . . . . . . . 431 Ahmad Ghafarian and Deniz Keskin Behavior and Biometrics Based Masquerade Detection Mobile Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
446 Pranieth Chandrasekara, Hasini Abeywardana, Sammani Rajapaksha, Sanjeevan Parameshwaran, and Kavinga Yapa Abeywardana Spoofed/Unintentional Fingerprint Detection Using Behavioral Biometric Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459 Ammar S. Salman and Odai S. Salman Enabling Paratransit and TNC Services with Blockchain Based Smart Contracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 471 Amari N. Lewis and Amelia C. Regan Contents ix
  • 14. A Review of Cyber Security Issues in Hospitality Industry . . . . . . . . . . 482 Neda Shabani and Arslan Munir Extended Protocol Using Keyless Encryption Based on Memristors. . . . 494 Yuxuan Zhu, Bertrand Cambou, David Hely, and Sareh Assiri Recommendations for Effective Security Assurance of Software-Dependent Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511 Jason Jaskolka On Generating Cancelable Biometric Templates Using Visual Secret Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 532 Manisha and Nitin Kumar An Integrated Safe and Secure Approach for Authentication and Secret Key Establishment in Automotive Cyber-Physical Systems . . . . . 545 Naresh Kumar Giri, Arslan Munir, and Joonho Kong How Many Clusters? An Entropic Approach to Hierarchical Cluster Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560 Sergei Koltcov, Vera Ignatenko, and Sergei Pashakhin Analysis of Structural Liveness and Boundedness in Weighted Free-Choice Net Based on Circuit Flow Values . . . . . . . . . . . . . . . . . . . 570 Yojiro Harie and Katsumi Wasaki Classification of a Pedestrian’s Behaviour Using Dual Deep Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 581 James Spooner, Madeline Cheah, Vasile Palade, Stratis Kanarachos, and Alireza Daneshkhah Towards Porting Astrophysics Visual Analytics Services in the European Open Science Cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . 598 Eva Sciacca, Fabio Vitello, Ugo Becciani, Cristobal Bordiu, Filomena Bufano, Antonio Calanducci, Alessandro Costa, Mario Raciti, and Simone Riggi Computer Graphics-Based Analysis of Anterior Cruciate Ligament in a Partially Replaced Knee . . . . . . . . . . . . . . . . . . . . . . . . . 607 Ahmed Imran An Assessment Algorithm for Evaluating Students Satisfaction in e-Learning Environments: A Case Study . . . . . 
. . . . . . . . . . . . . . . . . 613 M. Caramihai, Irina Severin, and Ana Maria Bogatu The Use of New Technologies in the Organization of the Educational Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622 Y. A. Daineko, N. T. Duzbayev, K. B. Kozhaly, M. T. Ipalakova, Zh. M. Bekaulova, N. Zh. Nalgozhina, and R. N. Sharshova x Contents
  • 15. Design and Implementation of Cryptocurrency Price Prediction System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628 Milena Karova, Ivaylo Penev, and Daniel Marinov Strategic Behavior Discovery of Multi-agent Systems Based on Deep Learning Technique . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 644 Boris Morose, Sabina Aledort, and Gal Zaidman Development of Prediction Methods for Taxi Order Service on the Basis of Intellectual Data Analysis. . . . . . . . . . . . . . . . . . . . . . . . 652 N. A. Andriyanov Discourse Analysis on Learning Theories and AI. . . . . . . . . . . . . . . . . . 665 Rosemary Papa, Karen Moran Jackson, Ric Brown, and David Jackson False Asymptotic Instability Behavior at Iterated Functions with Lyapunov Stability in Nonlinear Time Series . . . . . . . . . . . . . . . . . 673 Charles Roberto Telles The Influence of Methodological Tools on the Diagnosed Level of Intellectual Competence in Older Adolescents . . . . . . . . . . . . . . . . . . 694 Sipovskaya Yana Ivanovna The Automated Solar Activity Prediction System (ASAP) Update Based on Optimization of a Machine Learning Approach . . . . . . . . . . . 702 Ali K. Abed and Rami Qahwaji Author Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719 Contents xi
  • 16. Preventing Neural Network Weight Stealing via Network Obfuscation

Kálmán Szentannai, Jalal Al-Afandi, and András Horváth(B)

Faculty of Information Technology and Bionics, Peter Pazmany Catholic University, Práter u. 50/A, Budapest 1083, Hungary horvath.andras@itk.ppke.hu

Abstract. Deep Neural Networks are robust to minor perturbations of the learned network parameters and their minor modifications do not change the overall network response significantly. This allows space for model stealing, where a malevolent attacker can steal an already trained network, modify the weights and claim the new network his own intellectual property. In certain cases this can prevent the free distribution and application of networks in the embedded domain. In this paper, we propose a method for creating an equivalent version of an already trained fully connected deep neural network that can prevent network stealing, namely, it produces the same responses and classification accuracy, but it is extremely sensitive to weight changes.

Keywords: Neural networks · Networks stealing · Weight stealing · Obfuscation

1 Introduction

Deep neural networks are employed in an emerging number of tasks, many of which were not solvable before with traditional machine learning approaches. In these structures, expert knowledge which is represented in annotated datasets is transformed into learned network parameters known as network weights during training. Methods, approaches and network architectures are distributed openly in this community, but most companies protect their data and trained networks obtained from a tremendous amount of working hours annotating datasets and fine-tuning training parameters. Model stealing and detection of unauthorized use via stolen weights is a key challenge of the field as there are techniques (scaling, noising, fine-tuning, distillation) to modify the weights to hide the abuse, while preserving the functionality and accuracy of the original network.
Since networks are trained by stochastic optimization methods and are initialized with random weights, training on a dataset might result in various different networks with similar accuracy. There are several existing methods to measure distances between network weights after these modifications and independent trainings [1–3].

© Springer Nature Switzerland AG 2020. K. Arai et al. (Eds.): SAI 2020, AISC 1230, pp. 1–11, 2020. https://doi.org/10.1007/978-3-030-52243-8_1

Obfuscation of
  • 17. neural networks was introduced in [4], which showed the viability and importance of these approaches. In this paper the authors present a method to obfuscate the architecture, but not the learned network functionality. We would argue that most ownership concerns are raised not because of network architectures, since most industrial applications use previously published structures, but because of network functionality and the learned weights of the network. Other approaches try to embed additional, hidden information in the network, such as hidden functionalities or non-plausible, predefined answers for previously selected images (usually referred to as watermarks) [5,6]. In case of a stolen network one can claim ownership of the network by unraveling the hidden functionality, which cannot just be formed randomly in the structure. A good summary comparing different watermarking methods and their possible evasions can be found in [7].

Instead of creating evidence, based on which the relation between the original and the stolen, modified model could be proven, we have developed a method which generates a completely sensitive and fragile network, which can be freely shared, since even a minor modification of the network weights would drastically alter the network's response. In this paper, we present a method which can transform a previously trained network into a fragile one, by extending the number of neurons in the selected layers, without changing the response of the network. These transformations can be applied in an iterative manner on any layer of the network, except the first and the last layers (since their size is determined by the problem representation).

In Sect. 2 we will first introduce our method and the possible modifications on stolen networks, and in Sect. 3 we will describe our simulations and results. Finally, in Sect. 4 we will conclude our results and describe our planned future work. 
2 Mathematical Model of Unrobust Networks

2.1 Fully Connected Layers

In this section we present our method, showing how a robust network can be transformed into a non-robust one. We have chosen fully connected networks because of their generality and compact mathematical representation. Fully connected networks are generally applied covering the whole spectrum of machine learning problems from regression through data generation to classification problems. The authors cannot deny the fact that in most practical problems convolutional networks are used, but we would like to emphasize the following properties of fully connected networks: (1) in those cases when there is no topographic correlation in the data, fully connected networks are applied; (2) most problems also contain additional fully connected layers after the feature extraction of the convolutional or residual layers; (3) every convolutional network can be considered as a special case of fully connected ones, where all weights outside the convolutional kernels are set to zero.

A fully connected deep neural network might consist of several hidden layers, each containing a certain number of neurons. Since all layers have the same
  • 18. architecture, without the loss of generality, we will focus here only on three consecutive layers in the network (i − 1, i and i + 1). We will show how neurons in layer i can be changed, increasing the number of neurons in this layer and making the network fragile, meanwhile keeping the functionality of the three layers intact. We have to emphasize that this method can be applied on any three layers, including the first and last three layers of the network, and also that it can be applied repeatedly on each layer, still without changing the overall functionality of the network.

The input of layer i, the activations of the previous layer (i − 1), can be noted by the vector x_{i−1} containing N elements. The weights of the network are noted by the weight matrix W_i and the bias b_i, where W_i is a matrix of N × K elements, creating a mapping R^N → R^K, and b_i is a vector containing K elements. The output of layer i, which is also the input of layer i + 1, can be written as:

    x_i = φ(W_i^{N×K} x_{i−1} + b_i)    (1)

where φ is the activation function of the neurons. Using Eq. 1, the activations of layer i + 1 can be expanded as:

    x_{i+1} = φ(φ(x W_{i−1}^{N×K} + b_{i−1}) W_i^{K×L} + b_i)    (2)

creating a mapping R^N → R^L.

One way of identifying a fully connected neural network is to represent it as a sequence of synaptic weights. Our assumption was that in case of model stealing a certain application of additive noise on the weights would prevent others from revealing the attacker and conceal the thievery. Since fully connected networks are known to be robust against such modifications, the attacker could use the modified network with approximately the same classification accuracy. Thus, our goal was to find a transformation that preserves the loss and accuracy rates of the network, but introduces a significant decrease in terms of the robustness against parameter tuning. 
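To fix the notation, here is a minimal NumPy sketch of the mapping in Eqs. 1–2 (our illustration, not the authors' code; it assumes a row-vector convention, a ReLU hidden layer and, for simplicity, a linear output, and all matrix values are made-up toy numbers):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def three_layer_map(x, W1, b1, W2, b2):
    # Eq. 2: composition of two affine maps with the activation phi applied
    # to the hidden layer, taking R^N -> R^K -> R^L.
    return relu(x @ W1 + b1) @ W2 + b2

# Tiny example with N = K = L = 2.
W1, b1 = np.array([[2., -1.], [0., 1.]]), np.array([1., -2.])
W2, b2 = np.array([[1., 2.], [3., -1.]]), np.array([0., 1.])
print(three_layer_map(np.array([1., 2.]), W1, b1, W2, b2))  # -> [3. 7.]
```

The hidden pre-activation here is (3, −1), which ReLU clips to (3, 0) before the second affine map, giving (3, 7).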
In case of a three-layered structure one has to preserve the mapping between the first and third layers (Eq. 2) to keep the functionality of these three consecutive layers, but the mapping in Eq. 1 (the mapping between the first and second, or second and third layers) can be changed freely.

Also, our model must rely on an identification mechanism based on a representation of the synaptic weights. Therefore, the owner of a network should be able to verify the ownership based on the representation of the neural network, examining the average distance between the weights [7].

2.2 Decomposing Neurons

We would like to find such W'_{i−1}^{N×M} and W'_i^{M×L} (M ∈ N, M > K) matrices, for which:

    φ(φ(x W_{i−1}^{N×K} + b_{i−1}) W_i^{K×L} + b_i) = φ(φ(x W'_{i−1}^{N×M} + b'_{i−1}) W'_i^{M×L} + b_i)    (3)
  • 19. Considering the linear case when φ(x) = x, we obtain the following form:

    x W_{i−1}^{N×K} W_i^{K×L} + b_{i−1} W_i^{K×L} + b_i = x W'_{i−1}^{N×M} W'_i^{M×L} + b'_{i−1} W'_i^{M×L} + b_i    (4)

The equation above holds only for the special case of φ(x) = x; however, in most cases nonlinear activation functions are used. We have selected the rectified linear unit (ReLU) for our investigation (φ(x) = max(0, x)). This non-linearity consists of two linear parts, which means that a variable could be in the linear domain of Eq. 3, selecting certain lines of Eq. 4 (if x ≥ 0), or the equation system is independent from the variable if the activation function results in a constant zero (if x ≤ 0). This way ReLU gives a selection of given variables (lines) of Eq. 4.

However, applying the ReLU activation function has certain constraints. Assume that a neuron with the ReLU activation function should be replaced by two other neurons. This can be achieved by using an α ∈ (0, 1) multiplier:

    φ(Σ_{i=1}^{n} W_{ji}^{l} x_i + b_j^{l}) = N_j^{l}    (5)

    N_j^{l} = α N_j^{l} + (1 − α) N_j^{l}    (6)

where α N_j^{l} and (1 − α) N_j^{l} correspond to the activations of the two neurons. For each of these, the activation would only be positive if the original neuron had a positive activation, otherwise it would be zero; this means that all the decomposed neurons must have the same bias.

After decomposing a neuron, one needs to choose the appropriate weights on the subsequent layer. A trivial solution is to keep the original synaptic weights represented by the W_j^{l+1} column vector. This would lead to the same activation, since

    N_j^{l} W_j^{l+1} = α N_j^{l} W_j^{l+1} + (1 − α) N_j^{l} W_j^{l+1}    (7)

A fragile network can be created by choosing the same synaptic weights for the selected two neurons, but it would be easy to spot by the attacker, thus another solution is needed. 
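The split in Eqs. 5–7 can be checked numerically: because ReLU is positively homogeneous (φ(αz) = αφ(z) for α > 0), scaling a neuron's incoming weights by α and 1 − α and reusing the original outgoing row reproduces the trivial solution of Eq. 7 exactly. A minimal sketch (our own construction; the α value, the layer sizes, and the choice to scale the bias along with the weights — which equally preserves the sign of the pre-activation — are illustrative assumptions):

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)
rng = np.random.default_rng(1)
N, K, L = 5, 4, 3
W1, b1 = rng.normal(size=(N, K)), rng.normal(size=K)
W2, b2 = rng.normal(size=(K, L)), rng.normal(size=L)

# Split hidden neuron j into two neurons carrying alpha and (1 - alpha)
# of its activation (j = 0 here, so the untouched columns are 1:).
j, alpha = 0, 0.3
W1s = np.column_stack([alpha * W1[:, j], (1 - alpha) * W1[:, j], W1[:, 1:]])
b1s = np.concatenate([[alpha * b1[j]], [(1 - alpha) * b1[j]], b1[1:]])
# Trivial solution of Eq. 7: both copies reuse the original outgoing row.
W2s = np.vstack([W2[j], W2[j], W2[1:]])

x = rng.normal(size=N)
y_orig  = relu(x @ W1 + b1) @ W2 + b2
y_split = relu(x @ W1s + b1s) @ W2s + b2
print(np.allclose(y_orig, y_split))  # True: same function, one extra neuron
```

The two copies always share the sign of the original pre-activation, so the equality holds through the ReLU and not just in the linear case.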
In order to find a nontrivial solution we constructed a linear equation system Ap = c, where A contains the original, already decomposed synaptic weights of the first layer, meanwhile p represents the unknown synaptic weights of the subsequent layer. Vector c contains the corresponding weights from the original network multiplied together: each element represents the amount of activation related to one input. Finally, the non-trivial solution can be obtained by solving the following non-homogeneous linear equation system for each output neuron, where index j denotes the output neuron:

    | w1_11  w1_21  ...  w1_m1 |   | w2_j1 |   | sum_{i=1..k} w2_ji w1_i1 |
    | w1_12  w1_22  ...  w1_m2 |   | w2_j2 |   | sum_{i=1..k} w2_ji w1_i2 |
    |  ...    ...   ...   ...  | x |  ...  | = |           ...            |
    | w1_1n  w1_2n  ...  w1_mn |   | w2_jm |   | sum_{i=1..k} w2_ji w1_in |
    | b1_1   b1_2   ...  b1_m  |               | sum_{i=1..k} w2_ji b1_i  |    (8)
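The system Ap = c can be assembled and solved with standard linear algebra. The sketch below is our illustration of one possible realization: it decomposes every hidden neuron into an (α, 1 − α) pair, solves the system for each output neuron by least squares, and then adds a strongly scaled null-space component of A, so the new outgoing weights look nothing like the originals while the layer's function stays intact:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)
rng = np.random.default_rng(2)
n, k, L = 6, 4, 3                      # illustrative input/hidden/output sizes
W1, b1 = rng.normal(size=(n, k)), rng.normal(size=k)
W2, b2 = rng.normal(size=(k, L)), rng.normal(size=L)

# Decompose every hidden neuron into an (alpha, 1 - alpha) pair (Eqs. 5-6).
alphas = rng.uniform(0.1, 0.9, size=k)
scale = np.ravel(np.column_stack([alphas, 1 - alphas]))  # [a0, 1-a0, a1, ...]
W1d = np.repeat(W1, 2, axis=1) * scale
b1d = np.repeat(b1, 2) * scale

# A of Eq. 8: decomposed first-layer weights plus a bias row. Its null space
# is non-empty because the decomposed columns come in parallel pairs.
A = np.vstack([W1d, b1d])                                # (n+1) x 2k
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[np.sum(s > 1e-10):]                      # rows span null(A)

W2d = np.empty((2 * k, L))
for j in range(L):                     # one system per output neuron j
    c = np.concatenate([W1 @ W2[:, j], [b1 @ W2[:, j]]])
    p = np.linalg.lstsq(A, c, rcond=None)[0]
    # Any null-space vector keeps A p = c; blow it up to hide the origin.
    W2d[:, j] = p + 1e3 * null_basis.T @ rng.normal(size=null_basis.shape[0])

x = rng.normal(size=n)
y  = relu(x @ W1 + b1) @ W2 + b2
yd = relu(x @ W1d + b1d) @ W2d + b2
print(np.allclose(y, yd))              # True, although W2d entries reach ~1e3

# A tiny proportional perturbation now moves the output far more than the
# same relative noise would move the original second-layer weights.
W2d_noisy = W2d * (1 + 1e-4 * rng.standard_normal(W2d.shape))
print(np.linalg.norm(relu(x @ W1d + b1d) @ W2d_noisy + b2 - y))
```

Here k ≤ n + 1, so the original weight columns are linearly independent and any solution of Ap = c also preserves the ReLU (not only the linear) output.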
  • 20. It is important to note that all the predefined weights on layer l + 1 might change. In summary, this step can be considered as the replacement of a layer, changing all synaptic weights connecting from and to this layer, but keeping the biases of the neurons and the functionality of the network intact.

The only constraint of this method is related to the number of neurons regarding the two consecutive layers. It is known that for a matrix A of size M × N, the equation Ap = c has a solution if and only if rank(A) = rank[A|c], where [A|c] is the extended matrix. The decomposition of a neuron described in Eq. 7 results in linearly dependent weight vectors on layer l, therefore when solving the equation system the rank of the matrix A is less than or equal to min(N + 1, K). If the rank is equal to N + 1 (meaning that K ≥ N + 1), then vector c with the dimension of N + 1 would not introduce a new dimension to the subspace defined by matrix A. However, if rank(A) = K (meaning that K ≤ N + 1), then vector c could extend the subspace defined by A. Therefore, the general condition for solving the equation system is: K ≥ N + 1. This shows that one could increase the number of the neurons in a layer, and divide the weights of the existing neurons in that layer.

We have used this derivation and aim to find a solution of Eq. 7 where the orders of magnitude are significantly different (in the range of 10^6) both for the network parameters and for the eigenvalues of the mapping R^N → R^L.

2.3 Introducing Deceptive Neurons

The method described in the previous section results in a fragile neural network, but unfortunately it is not enough to protect the network weights, since an attacker could identify the decomposed neurons based on their biases or could fit a new neural network on the functionality implemented by the layer. To prevent this we will introduce deceptive neurons in the layers. 
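One way to realize such neurons is a duplicated hidden neuron whose two copies have exactly opposite outgoing rows, so their contributions cancel for every input until noise breaks the symmetry. A minimal sketch (our illustration; all sizes and values, e.g. the large bias that keeps the pair active, are made up):

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)
rng = np.random.default_rng(3)
n, k, L = 5, 4, 2
W1, b1 = rng.normal(size=(n, k)), rng.normal(size=k)
W2, b2 = rng.normal(size=(k, L)), rng.normal(size=L)

# Deceptive pair: two identical hidden neurons whose outgoing rows are exact
# negatives, so their summed contribution is zero for every input.
w_fake = 0.01 * rng.normal(size=n)     # small incoming weights ...
b_fake = 10.0                          # ... large bias -> pair always active
out_fake = np.array([3.0, -2.0])       # arbitrary outgoing row
W1f = np.column_stack([W1, w_fake, w_fake])
b1f = np.concatenate([b1, [b_fake, b_fake]])
W2f = np.vstack([W2, out_fake, -out_fake])

x = rng.normal(size=n)
y  = relu(x @ W1 + b1) @ W2 + b2
yf = relu(x @ W1f + b1f) @ W2f + b2    # unchanged: the pair is invisible

# Noise on just one member of the pair breaks the cancellation.
W2n = W2f.copy()
W2n[k] *= 1.01                         # 1% change on one fake outgoing row
yn = relu(x @ W1f + b1f) @ W2n + b2
print(np.allclose(y, yf), np.allclose(y, yn))  # True False
```

Because the fake neurons are strongly active, even a 1% perturbation of one member of the pair shifts the output noticeably, which is exactly the intended trap.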
The purpose of these neurons is to have a non-zero summed activation if and only if noise was added to their weights; apart from this, all these neurons have to cancel each other's effect out in the network, but not necessarily in a single layer. The simplest method is to define a neuron with an arbitrary weight and the bias of an existing neuron, resulting in a large activation, and to make a copy of it with the difference of multiplying the output weights by −1. As a result, these neurons do not contribute to the functionality of the network. However, adding noise to the weights of these neurons would have unexpected consequences depending on the characteristics of the noise, eventually leading to a decrease of classification accuracy.

One important aspect of this method is to hide the generated neurons and obfuscate the network to prevent the attacker from easily filtering out deceptive neurons in the architecture. Choosing the same weights again on both layers would be an obvious sign to an attacker, therefore this method should be combined with the decomposition described in Sect. 2.2. Since decomposition allows the generation of arbitrarily small weights, one can select a suitably small magnitude, which allows the generation of R real (non-deceptive) neurons in the system, and half of their weights (α parameters)
  • 21. can be set arbitrarily, meanwhile the other half of the weights will be determined by Eq. 8. For each real neuron one can generate a number (F) of fake neurons, forming groups of R real and F fake neurons. These groups can be easily identified in the network, since all of them will have the same bias, but the identification of fake and real neurons within a group is non-polynomial.

The efficiency of this method should be measured by the computational complexity of successfully finding two or more corresponding fake neurons having a total activation of zero in a group. Assuming that only one pair of fake neurons was added to the network, it requires Σ_{i=0}^{L} C(R_i + F_i, 2) steps to successfully identify the fake neurons (where C(n, r) denotes the binomial coefficient), R_i + F_i denotes the number of neurons in the corresponding hidden layer, and L is the number of hidden layers. This can be further increased by decomposing the fake neurons using Eq. 8: in that case the required number of steps is Σ_{i=0}^{L} C(R_i + F_i, d + 2), d being the number of extra decomposed neurons. This can be maximized if d + 2 = (R_i + F_i)/2, where i denotes the layer where the fake neurons are located. However, this is true only if the attacker has information about the number of deceptive neurons. Without any prior knowledge, the attacker has to guess the number of deceptive neurons as well (0, 1, 2, ..., R_i + F_i − 1), which leads to exponentially increasing computational time.

3 Experiments

3.1 Simulation of a Simple Network

As a case study we have created a simple fully connected neural network with three layers, each containing two neurons, to present the validity of our approach. The functionality of the network can be considered as a mapping f : R² → R².

    w1 = [  6  −1 ]      b1 = [ 1  −5 ]
         [ −1   7 ]

    w2 = [ 5   3 ]       b2 = [ 7  1 ]
         [ 9  −1 ]

We added two neurons to the hidden layer with decomposition, which does not modify the input and output space, and no deceptive neurons were used in this experiment. 
After applying the methods described in Sect. 2.1, we obtained a solution of:

    w1' = [  0.0525  −0.4213   6.0058  −0.5744 ]
          [ −0.0087   2.9688  −0.9991   4.0263 ]

    b1' = [ 0.0087  −2.1066  1.0009  −2.8722 ]

    w2' = [  4.1924e+03  −5.4065e+03 ]
          [ −2.3914       7.3381     ]
          [ −3.2266       5.7622     ]
          [  6.9634      −7.0666     ]

    b2' = [ 7  1 ]
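The case study is easy to replay. The sketch below is our code, not the authors': it assumes a row-vector convention with a ReLU hidden layer and a linear output, evaluates the original 2-2-2 network on the input [7, 9] used in the paper's experiment, and runs its own 25-sample trial with 1% proportional weight noise rather than reproducing the paper's reported variances:

```python
import numpy as np

relu = lambda z: np.maximum(0.0, z)

W1, b1 = np.array([[6., -1.], [-1., 7.]]), np.array([1., -5.])
W2, b2 = np.array([[5., 3.], [9., -1.]]), np.array([7., 1.])

def respond(x, W1, b1, W2, b2):
    # Row-vector convention, ReLU hidden layer, linear output (our choice).
    return relu(x @ W1 + b1) @ W2 + b2

x = np.array([7., 9.])
clean = respond(x, W1, b1, W2, b2)     # hidden (34, 51) -> output (636, 52)

# 25 replicas with 1% noise added proportionally to every weight.
rng = np.random.default_rng(5)
noisy = np.stack([
    respond(x,
            W1 * (1 + 0.01 * rng.standard_normal(W1.shape)), b1,
            W2 * (1 + 0.01 * rng.standard_normal(W2.shape)), b2)
    for _ in range(25)
])
print(clean, noisy.var(axis=0))        # modest spread for the original net
```

Running the same trial on a decomposed equivalent (with second-layer weights in the thousands, as printed) is what produces the far larger spread shown in Fig. 1.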
  • 22. Fig. 1. This figure depicts the response of a simple two-layered fully connected network for a selected input (red dot) and the responses of its variants with 1% noise (yellow dots) added proportionally to the weights. The blue dots represent the responses of the transformed MimosaNets under the same level of noise on their weights, meanwhile the response of the transformed network (without noise) remained exactly the same.

In the following experiment we have chosen an arbitrary input vector: [7, 9]. We have measured the response of the network for this input, each time introducing 1% noise to the weights of the network. Figure 1 shows the response of the original network and the modified network after adding 1% noise. The variances of the original network are 6.083 for the first output dimension and 8.399 for the second, meanwhile the variances are 476.221 and 767.877 for the decomposed networks, respectively. This example demonstrates how the decomposition of a layer can increase the network's dependence on its weights.

3.2 Simulations on the MNIST Dataset

We have created a five-layered fully connected network containing 32 neurons in each hidden layer (and 784 and 10 neurons in the input and output layers) and trained it on the MNIST [8] dataset, using batches of 32 and the Adam optimizer [9] for 7500 iterations. The network has reached an accuracy of 98.4% on the independent test set.

We have created different modifications of the network by adding 9, 18, 36 and 72 extra neurons. These neurons were divided equally between the three hidden layers; 2/3 of them were deceptive neurons (since they were always created in pairs) and 1/3 of them were created by decomposition. This means that in case of 36 additional neurons, 2 × 4 deceptive neurons were added to each layer and four new neurons per layer were created by decomposition.
K. Szentannai et al.

In our hypothetical situation these networks (along with the original) could be stolen by a malevolent attacker, who would try to conceal his thievery by using the following three methods: adding additive noise proportional to the network weights, continuing network training on arbitrary data, and network knowledge distillation. All reported data points are an average of 25 independent measurements.

Dependence on Additive Noise. We have investigated network performance under additive noise applied to the network weights. The decrease in accuracy, which depends on the ratio of the additive noise, can be seen in Fig. 2. First we tested a fully connected neural network trained on the MNIST dataset without making any modifications to it. The decrease in accuracy was not more than 0.2%, even with a relatively high 5% noise. This shows the robustness of a fully connected network. After applying the methods described in Sect. 2, network accuracy retrogressed to 10% even for noise that was less than 1% of the network weights, as Fig. 2 depicts. This alone would justify the applicability of our method, but we investigated low noise levels further, as can be seen in Fig. 3. As the figure shows, accuracy starts to drop when the ratio of additive noise reaches the level of 10−7, which means the attacker cannot significantly modify the weights. This effect could be increased by adding more and more neurons to the network.

Fig. 2. This figure depicts accuracy changes on the MNIST dataset under various levels of additive noise applied to the weights. The original network (purple) is not dependent on these weight changes, while accuracies retrogress in the transformed networks, even at the lowest level of noise.

Dependence on Further Training Steps. Additive noise randomly modifies the weights, but it is important to examine how accuracy changes in the case of structured changes exploiting the gradients of the network. Figure 4 depicts accuracy changes and average weight distances obtained by applying further training
steps in the network. Further training was examined using different step sizes and optimizers (SGD, AdaGrad, and Adam), training the network both with the original MNIST labels and with randomly selected labels; the results were qualitatively the same in all cases.

Dependence on Network Distillation. We have tried to distill the knowledge in the network and train a new neural network to approximate the functionality of previously selected layers, by applying the method described in [10]. We generated one million random input samples with their outputs for the modified networks and used this dataset to approximate the functionality of the network.

Fig. 3. A logarithmic plot depicting the same accuracy dependence as in Fig. 2, focusing on low noise levels. As can be seen from the plot, accuracy values do not change significantly below a noise ratio of 10−7, which means the most important values of the weights would remain intact, proving the connection between the original and modified networks.

We have created three-layered neural networks containing 32, 48, 64, or 128 neurons in the hidden layer (the numbers of neurons in the first and last layers were determined by the original network) and tried to approximate the functionality of the hidden layers of the original structure. Since deceptive neurons have activations in the same order of magnitude as the original responses, these values disturb the manifold of the embedded representations learned by the network, making it more difficult to approximate with a neural network. Table 1 contains the maximum accuracies that could be reached with knowledge distillation, depending on the number of deceptive neurons and the number of neurons in the architecture used for distillation. This demonstrates that our method is also resilient towards knowledge distillation.
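The distillation attack described above starts by building a transfer set from random queries. A minimal NumPy sketch, where `teacher` is any callable standing in for the stolen network's selected layers and all names are my own:

```python
import numpy as np

def distill_dataset(teacher, n_samples, in_dim, seed=0):
    # Build a transfer set by querying the (possibly obfuscated) teacher
    # on random inputs; a student network is then trained on (X, Y) to
    # approximate the teacher's functionality.
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n_samples, in_dim))
    Y = np.vstack([teacher(x) for x in X])
    return X, Y

# Toy stand-in for the attacked layers: a fixed random linear map.
rng = np.random.default_rng(2)
W = rng.standard_normal((32, 10))
X, Y = distill_dataset(lambda x: x @ W, n_samples=1000, in_dim=32)
```

The paper's experiment uses one million samples and a fully connected student; the sketch only shows the dataset-generation step the attack relies on.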
Fig. 4. The figure plots the accuracy dependence of the networks in the case of further training (applying further optimization steps). As can be seen from the plot, the weights had to be kept within an average distance of 10−7 to keep the same level of accuracy.

Table 1. The table displays the maximum accuracies reached with knowledge distillation. The rows correspond to the number of extra neurons added to the investigated layer, and the columns to the number of neurons in the hidden layer of the fully connected architecture used for distillation.

#Deceptive N.   #N. = 32   #N. = 48   #N. = 64   #N. = 128
 9              0.64       0.65       0.69       0.71
18              0.12       0.14       0.15       0.17
36              0.10       0.11       0.10       0.13
72              0.11       0.09       0.10       0.10

4 Conclusion

In this paper, we have shown a transformation method that can significantly increase a network's dependence on its weights while keeping the original functionality intact. We have also presented how deceptive neurons can be added to a network without disturbing its original response. Using these transformations iteratively, one can create and openly share a trained network for which it is computationally expensive to reverse engineer the original network architecture and the embeddings in the hidden layers. The drawback of the method is the additional computational cost of the extra neurons, but this is not significant, since the computational increase is polynomial.
We have tested our method on simple toy problems and on the MNIST dataset using fully connected neural networks and demonstrated that our approach results in networks that are non-robust to the following perturbations: additive noise, application of further training steps, and knowledge distillation.

Acknowledgments. This research has been partially supported by the Hungarian Government through the following grant: 2018-1.2.1-NKP-00008: Exploring the Mathematical Foundations of Artificial Intelligence; the funds of grant EFOP-3.6.2-16-2017-00013 are also gratefully acknowledged.

References

1. Koch, E., Zhao, J.: Towards robust and hidden image copyright labeling. In: IEEE Workshop on Nonlinear Signal and Image Processing, vol. 1174, pp. 185–206, Neos Marmaras, Greece (1995)
2. Wolfgang, R.B., Delp, E.J.: A watermark for digital images. In: Proceedings of the International Conference on Image Processing, vol. 3, pp. 219–222. IEEE (1996)
3. Zarrabi, H., Hajabdollahi, M., Soroushmehr, S., Karimi, N., Samavi, S., Najarian, K.: Reversible image watermarking for health informatics systems using distortion compensation in wavelet domain (2018). arXiv preprint arXiv:1802.07786
4. Xu, H., Su, Y., Zhao, Z., Zhou, Y., Lyu, M.R., King, I.: DeepObfuscation: securing the structure of convolutional neural networks via knowledge distillation (2018). arXiv preprint arXiv:1806.10313
5. Namba, R., Sakuma, J.: Robust watermarking of neural network with exponential weighting (2019). arXiv preprint arXiv:1901.06151
6. Gomez, L., Ibarrondo, A., Márquez, J., Duverger, P.: Intellectual property protection for distributed neural networks (2018)
7. Hitaj, D., Mancini, L.V.: Have you stolen my model? Evasion attacks against deep neural network watermarking techniques (2018). arXiv preprint arXiv:1809.00615
8. LeCun, Y., Cortes, C., Burges, C.: MNIST handwritten digit database. ATT Labs 2 (2010).
http://yann.lecun.com/exdb/mnist
9. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization (2014). arXiv preprint arXiv:1412.6980
10. Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network (2015). arXiv preprint arXiv:1503.02531
Applications of Z-Numbers and Neural Networks in Engineering

Raheleh Jafari1(B), Sina Razvarz2, and Alexander Gegov3

1 School of Design, University of Leeds, Leeds LS2 9JT, UK r.jafari@leeds.ac.uk
2 Departamento de Control Automático, CINVESTAV-IPN (National Polytechnic Institute), Mexico City, Mexico srazvarz@yahoo.com
3 School of Computing, University of Portsmouth, Buckingham Building, PO1 3HE Portsmouth, UK alexander.gegov@port.ac.uk

Abstract. In the real world, much of the information on which decisions are based is vague, imprecise, and incomplete. Artificial intelligence techniques can deal with extensive uncertainties. Currently, various types of artificial intelligence technologies, such as fuzzy logic and artificial neural networks, are broadly utilized in the engineering field. In this paper, combined Z-number and neural network techniques are studied. Furthermore, the applications of Z-numbers and neural networks in engineering are introduced.

Keywords: Artificial intelligence · Fuzzy logic · Z-number · Neural network

1 Introduction

Intelligent systems include fuzzy systems and neural networks. They have particular properties, such as the capability of learning, modeling, and solving optimization problems, that make them suitable for specific kinds of applications. An intelligent system can be called a hybrid system when it combines at least two intelligent techniques. For example, combining a fuzzy system and a neural network yields a hybrid system called a neuro-fuzzy system. Neural networks are made of interconnected groups of artificial neurons that process information through the computations linked to them. Most neural networks can adapt themselves to structural alterations during the training phase. Neural networks have been utilized for modeling complicated relationships between inputs and outputs or for discovering patterns in data [1–12].
Fuzzy logic systems are broadly utilized to model systems characterized by vague and unreliable information [13–29]. Over the years, investigators have proposed extensions to the theory of fuzzy logic. A remarkable extension is the Z-number [30]. The Z-number is defined as an ordered pair of fuzzy numbers

c Springer Nature Switzerland AG 2020 K. Arai et al. (Eds.): SAI 2020, AISC 1230, pp. 12–25, 2020. https://doi.org/10.1007/978-3-030-52243-8_2
(C, D), where C is a restriction on the values of some variable and D is the reliability, a measure of the probability of C. Z-numbers are widely applied in various implementations in different areas [31–36].

In this paper, the basic principles and definitions of Z-numbers and neural networks are given. The applications of Z-numbers and neural networks in engineering are introduced. Also, combined Z-number and neural network techniques are studied. The rest of the paper is organized as follows. The theoretical background of Z-numbers and artificial neural networks is detailed in Sect. 2. A comparison analysis of neural networks and Z-number systems is presented in Sect. 3. The combined Z-number and neural network techniques are given in Sect. 4. The conclusion of this work is summarized in Sect. 5.

2 Theoretical Background

In this section, we provide a brief theoretical insight into Z-numbers and artificial neural networks.

2.1 Z-Numbers

Mathematical Preliminaries. Here some necessary definitions of Z-number theory are given.

Definition 1. If q is: 1) normal: there exists ω0 ∈ ℝ such that q(ω0) = 1; 2) convex: q(υω + (1 − υ)τ) ≥ min{q(ω), q(τ)}, ∀ω, τ ∈ ℝ, ∀υ ∈ [0, 1]; 3) upper semi-continuous on ℝ: q(ω) ≤ q(ω0) + ε, ∀ω ∈ N(ω0), ∀ω0 ∈ ℝ, ∀ε > 0, where N(ω0) is a neighborhood of ω0; 4) q+ = {ω ∈ ℝ : q(ω) > 0} is compact; then q is a fuzzy variable, q ∈ E : ℝ → [0, 1]. The fuzzy variable q is defined as

q = (q̲, q̄) (1)

such that q̲ is the lower-bound variable and q̄ is the upper-bound variable.

Definition 2. The Z-number is composed of two elements, Z = [q(ω), p]. q(ω) is considered as the restriction on the real-valued uncertain variable ω, and p is considered as a measure of the reliability of q. When q(ω) is a fuzzy number and p is the probability distribution of ω, the Z-number is called a Z+-number. If both q(ω) and p are fuzzy numbers, the Z-number is called a Z−-number.
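Conditions 1 and 2 of Definition 1 can be checked numerically on a sampled membership function. A sketch in plain Python; the grid-based convexity test is my own approximation of condition 2, not a procedure from the paper:

```python
def is_normal(mu, points):
    # Condition 1: the membership function reaches 1 somewhere.
    return max(mu(w) for w in points) == 1.0

def is_convex(mu, points):
    # Condition 2 (fuzzy convexity / quasi-concavity), checked on a grid:
    # mu(v*w + (1 - v)*t) >= min(mu(w), mu(t)) for sampled w, t, v.
    for i, w in enumerate(points):
        for t in points[i:]:
            for v in (0.25, 0.5, 0.75):
                if mu(v * w + (1 - v) * t) + 1e-9 < min(mu(w), mu(t)):
                    return False
    return True

# A triangular membership function peaking at omega = 1 is both
# normal and convex in the sense of Definition 1.
tri = lambda w: max(0.0, 1.0 - abs(w - 1.0))
grid = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
```

A membership function with two separated peaks would fail the convexity check, since the midpoint between the peaks has lower membership than either peak.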
The Z+-number carries more information than the Z−-number. In this work, we use the definition of the Z+-number, i.e., Z = [q, p], where q is a fuzzy number and p is a probability distribution. The triangular membership function is defined as

μq = G(a, b, c) = { (ω − a)/(b − a), a ≤ ω ≤ b;  (c − ω)/(c − b), b ≤ ω ≤ c;  0, otherwise }  (2)
and the trapezoidal membership function is defined as

μq = G(a, b, c, d) = { (ω − a)/(b − a), a ≤ ω ≤ b;  1, b ≤ ω ≤ c;  (d − ω)/(d − c), c ≤ ω ≤ d;  0, otherwise }  (3)

The probability measure of q is defined as

P(q) = ∫ μq(ω) p(ω) dω  (4)

such that p is the probability density of ω. For discrete Z-numbers we have

P(q) = Σ_{j=1}^{n} μq(ωj) p(ωj)  (5)

Definition 3. The α-level of the Z-number Z = (q, p) is stated as

[Z]^α = ([q]^α, [p]^α)  (6)

such that 0 < α ≤ 1. [p]^α is calculated by Nguyen's theorem:

[p]^α = p([q]^α) = p([q̲^α, q̄^α]) = [P̲^α, P̄^α]  (7)

such that p([q]^α) = {p(ω) | ω ∈ [q]^α}. Hence, [Z]^α is defined as

[Z]^α = [Z̲^α, Z̄^α] = [(q̲^α, P̲^α), (q̄^α, P̄^α)]  (8)

such that P̲^α = Σ_j q̲^α p(ω̲_j^α), P̄^α = Σ_j q̄^α p(ω̄_j^α), with [ωj]^α = (ω̲_j^α, ω̄_j^α). Let Z1 = (q1, p1) and Z2 = (q2, p2); then

Z12 = Z1 ∗ Z2 = (q1 ∗ q2, p1 ∗ p2)  (9)

where ∗ ∈ {⊕, ⊖, ⊗}, and ⊕, ⊖, ⊗ indicate sum, subtraction, and multiplication, respectively. The operations for the fuzzy intervals [q1]^α = [q^α_11, q^α_12] and [q2]^α = [q^α_21, q^α_22] are defined as [37]:

[q1 ⊕ q2]^α = [q1]^α + [q2]^α = [q^α_11 + q^α_21, q^α_12 + q^α_22]
[q1 ⊖ q2]^α = [q1]^α − [q2]^α = [q^α_11 − q^α_22, q^α_12 − q^α_21]
[q1 ⊗ q2]^α = [min{q^α_11 q^α_21, q^α_11 q^α_22, q^α_12 q^α_21, q^α_12 q^α_22}, max{q^α_11 q^α_21, q^α_11 q^α_22, q^α_12 q^α_21, q^α_12 q^α_22}]  (10)

For discrete probability distributions, the following relation defines all p1 ∗ p2 operations:

p1 ∗ p2 = Σ_j p1(ω_{1,j}) p2(ω_{2,(n−j)}) = p12(ω)  (11)
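The computational pieces above — the membership functions of Eqs. (2)–(3), the discrete probability measure of Eq. (5), and the α-cut interval operations of Eq. (10) — can be sketched in plain Python. The function names are my own; the degenerate cases a = b, b = c, or c = d are not handled.

```python
def triangular(a, b, c):
    # Triangular membership function G(a, b, c), Eq. (2).
    def mu(w):
        if a <= w <= b:
            return (w - a) / (b - a)
        if b < w <= c:
            return (c - w) / (c - b)
        return 0.0
    return mu

def trapezoidal(a, b, c, d):
    # Trapezoidal membership function G(a, b, c, d), Eq. (3).
    def mu(w):
        if b <= w <= c:
            return 1.0
        if a <= w < b:
            return (w - a) / (b - a)
        if c < w <= d:
            return (d - w) / (d - c)
        return 0.0
    return mu

def probability_measure(mu, points, probs):
    # Discrete probability measure of q, Eq. (5): sum of mu(w_j) * p(w_j).
    return sum(mu(w) * p for w, p in zip(points, probs))

def interval_add(x, y):
    # Sum of two alpha-cut intervals, first line of Eq. (10).
    return (x[0] + y[0], x[1] + y[1])

def interval_sub(x, y):
    # Subtraction: lower minus upper, upper minus lower.
    return (x[0] - y[1], x[1] - y[0])

def interval_mul(x, y):
    # Multiplication: min and max over the four endpoint products.
    p = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return (min(p), max(p))
```

For example, `interval_mul((-1, 2), (3, 4))` returns `(-4, 8)`, since the four endpoint products are −3, −4, 6, and 8.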
Fig. 1. Membership functions applied for (a) cereal yield, cereal production, and economic growth, (b) threat rate, and (c) reliability

Background and Related Work. The applications of Z-number-based techniques are limited because of the shortage of effective approaches for computation with Z-numbers.

In [38], the capabilities of Z-numbers in improving the quality of risk assessment are studied. A prediction such as (High, Very Sure) is formalized as the Z-evaluation "y is Z(c, p)", such that y is considered a random variable of threat probability, and c and p are taken to be fuzzy sets demonstrating soft constraints on the threat probability and a partial reliability, respectively. The likelihood of risk is illustrated by a Z-number as: Probability = Z1(High, Very Sure), such that c is indicated through the linguistic terms High, Medium, Low, and p is indicated through the terms Very Sure, Sure, etc. Likewise, the consequence rate is explained as: Consequence measure = Z2(Low, Sure). The threat rate (Z12) is computed as the product of the probability (Z1) and the consequence measure (Z2).

In [39], a Z-number-based fuzzy system is suggested to determine the food security risk level. The proposed system relies on fuzzy If-Then rules, which apply basic parameters such as cereal production, cereal yield, and economic growth to specify the threat rate of food security. The membership functions applied to explain the input as well as output variables are demonstrated in Fig. 1.

In [40], the application of Z-number theory to the selection of an optimal alloy is illustrated. Three alloys, named Ti12Mo2Sn, Ti12Mo4Sn, and Ti12Mo6Sn, are examined, and an optimal titanium alloy is selected using the proposed approach. The optimality of the alloys is studied based on three criteria: strength level, plastic deformation degree, and tensile strength.
Fig. 2. The structure of a biological neuron

2.2 Neural Networks

Neural networks are constructed from neurons and synapses, which alter their values in response to nearby neurons and synapses. Neural networks operate similarly to computers in that they map inputs to outputs. In hardware implementations, neurons and synapses are silicon elements that mimic the behavior of their biological counterparts. A neuron gathers the total incoming signal from other neurons and then computes its response, represented by a number. Signals travel along the synapses, which carry numerical values. Neural networks learn by varying the values of their synapses. The structure of a biological neuron, or nerve cell, is shown in Fig. 2. The processing steps inside each neuron are demonstrated in Fig. 3.

Background and Related Work. In [41], an artificial neural network technique is utilized for modeling the void fraction in two-phase flow inside helical vertical coils with water as the working fluid.

In [42], an artificial neural network and a multi-objective genetic algorithm are applied to optimize subcooled flow boiling in a vertical pipe. Pressure, the mass flux of the water, inlet subcooled temperature, and heat flux are considered the inlet parameters. The artificial neural network uses the inlet parameters to predict the objective functions, which are the maximum wall surface temperature and the averaged vapor volume fraction at the outlet. The optimization procedure of the design parameters is shown in Fig. 4.

In [43], an artificial neural network technique is applied to predict heat transfer in supercritical water. The artificial neural network is trained on the basis of 5280 data points gathered from experimental results. Mass flux, heat flux, pressure, tube diameter, and bulk specific enthalpy are taken to be the inputs of the proposed artificial neural network. The tube wall temperature is taken to be the output; see Fig. 5.
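The per-neuron processing described in Sect. 2.2 — sum the weighted incoming signals, then produce a numerical response — can be sketched as follows. The logistic sigmoid is one common activation choice, not something the text specifies:

```python
import math

def neuron_output(inputs, weights, bias):
    # A neuron sums its weighted incoming signals and passes the total
    # through an activation function to produce its numerical response.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

A network is then a composition of such units: the outputs of one layer of neurons become the inputs of the next.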
Fig. 3. Processing steps inside each neuron

3 Comparison Analysis of Neural Networks and Z-Number Systems

Neural networks and Z-number systems can both be considered part of the soft computing field. The comparison of neural networks and Z-number systems is presented in Table 1.

Table 1. The comparison of neural networks and Z-number systems.

Criterion                             Z-number systems   Neural networks
Knowledge presentation                Very good          Very bad
Uncertainty tolerance                 Very good          Very good
Inaccuracy tolerance                  Very good          Very good
Compatibility                         Bad                Very good
Learning capability                   Very bad           Very good
Interpretation capability             Very good          Very bad
Knowledge detection and data mining   Bad                Very good
Maintainability                       Good               Very good

Neural networks have the following advantages:

i. Adaptive learning: the capability to learn tasks on the basis of the data supplied for training or from initial experience.
ii. Self-organization: neural networks are able to create their own organization while learning over time.
iii. Real-time execution: neural network calculations may be executed in parallel, and specific hardware devices have been constructed that can take advantage of this feature.

Neural networks have the following drawbacks:
  • 33. Random documents with unrelated content Scribd suggests to you:
  • 34. Wanklin (astuen pikkupöydän ääreen — hermostuneella ystävällisyydellä): No, Thomas, miten on asiat? Mikä oli tulos kokouksestanne? Rous: Simon Harnessilla on vastauksemme. Hän sanoo teille, mikä se on. Odotamme häntä. Hän puhuu puolestamme. Wanklin: Onko asia niin Thomas? Thomas (jurosti): Kyllä. Roberts ei tule, hänen vaimonsa on kuollut. Scantlebury: Niin, niin! Vaimo raukka! Niin niin! Frost (tullen eteisestä): Herra Harness! (Kun Harness tulee, Frost poistuu. Harnessilla on paperi kädessään, kumartaa johtokunnalle, nyökäyttää miehiin päin ja asettuu seisomaan pikkupöydän taakse, aivan keskelle huonetta.) Harness: Hyvää iltaa, hyvät herrat. (Tench kirjottamansa paperi kädessään tulee hänen luokseen. He puhuvat hiljaa.) Wilder: Olemme odottaneet teitä, Harness. Toivotaan että voidaan päästä johonkin — Frost (tullen eteisestä): Roberts! (Hän poistuu.) (Roberts tulee hätäisesti sisään ja seisottuu tuijottaen Anthonyhin. Hänen naamansa on rasittuneen ja vanhentuneen näköinen.)
  • 35. Roberts: Herra Anthony, pelkään, että olen vähän myöhästynyt, olisin ollut täällä aikanaan, ellei olisi — jotakin tapahtunut. (Miehille): Onko jo jotakin sanottu? Thomas: Mutta mies, mikä sinut tänne saattoi? Roberts: Te sanoitte meille tänä aamuna, hyvät herrat, että menkää ja harkitkaa uudelleen asemaanne. Me olemme sitä harkinneet; olemme tuomassa nyt miesten vastausta. (Anthonylle.) Menkää te takasin Lontooseen. Meillä ei ole teille mitään myönnettävää. Emme rikkuakaan tule helpottamaan vaatimuksistamme, emmekä taivu ennen kuin kaikki nuo vaatimukset on hyväksytty. (Anthony katsoo häneen, mutta ei puhu. Miehet näyttävät hämmästyneiltä.) Harness: Roberts! Roberts (katsoo kiivaasti häneen, sitte Anthonyhin): Onko asia teille kyllin selvä? Onko se kyllin lyhyesti ja sattuvasti sanottu? Te erehdyitte suuresti luullessanne meidän tulevan armoille. Te voitte ruhjoa ruumiimme, mutta ette murtaa mieltä. Menkää takasin Lontooseen, miehillä ei ole teille mitään sanottavaa! (Pysähtyen, levottomana ottaa askeleen liikkumattomana istuvaan Anthonyhin päin.) Edgar: Meitä surettaa onnettomuutenne, Roberts, mutta — Roberts: Pitäkää säälinne, nuori mies. Antakaa isänne puhua!
  • 36. Harness (paperiarkki kädessään, puhuu pikkupöydän takaa): Roberts! Roberts (Anthonylle hyvin kärsimättömästi): Miksi ette vastaa! Harness: Roberts! Roberts (kääntyen äkkiä): Mitä niin? Harness (vakavasti): Te puhutte ilman valtuuksia, asiat ovat menneet teidän edellenne. (Hän viittaa Tenchiin, joka antaa paperin johtokunnan jäsenille. He allekirjoittavat sukkelaan sopimuspaperin.) Katsokaa tätä, hyvä mies! (Kohottaen paperiaan.) Vaatimukset hyväksytään, paitsi koneenkäyttäjiä ja lämmittäjiä koskevat. Lauvantain ylityöstä kaksinkertainen palkka. Työvuorot entiselleen. Näihin ehtoihin on suostuttu. Miehet menevät aamulla jälleen työhön. Lakko on loppunut. Roberts (lukee paperia ja kääntyy miehiin. He vetäytyvät taapäin, paitsi Rousia, joka pysyy paikoillaan. Ihan tyynesti): Te olette pelanneet minun selkäni takana? Minä seisoin rinnallanne kuolemaan saakka; te odotitte sitä, hyljätäksenne minut! (Miehet vastaavat kaikki yht'aikaa.) Rous: Se on vale! Thomas: Teitä oli mahdoton sietää, hyvä mies! Green: Jos minua olisi kuultu —
  • 37. Bulgin (hengästyneesti): Tuki kitasi! Roberts: Sitä te odotitte! Harness (ottaen johtokunnan allekirjoittaman sopimuksen ja antaen omansa Tenchille): Jo riittää, miehet. Teidän on parasta poistua. (Miehet poistuvat hitaasti ja kömpelösti.) Wilder (hermostuneella, hiljaisella äänellä): Luultavasti ei ole enää mitään esteitä. (Seuraa ovelle.) Minun on yritettävä tähän junaan! Tuletteko, Scantlebury? Scantlebury (Wanklinin kanssa seuraten): Jaa, jaa. Odottakaa minua. (Hän seisahtuu kun Roberts puhuu.) Roberts (Anthonylle): Mutta te ette ole hyväksyneet noita ehtoja! Eiväthän he voi tehdä sopimusta ilman esimiestään! Te ette koskaan hyväksyisi noita ehtoja! (Anthony katsoo häneen vaieten.) Sanokaa herran tähden että ette ole hyväksyneet! (Varmasti.) Näen sen teistä! Harness (ojentaen johtokunnan hyväksymää sopimusta): Johtokunta on allekirjottanut! (Roberts katsoo tyrmistyneenä allekirjotuksia — työntää paperin pois ja peittää silmänsä.) Scantlebury (kämmenensä suojassa Tenchille): Pitäkää huolta puheenjohtajasta. Hän ei voi hyvin, hän ei voi hyvin — hän ei syönyt
  • 38. puolista. Jos vaimoille ja lapsille toimitetaan avunkeräystä, merkitkää minun puolestani — puolestani viisisataa markkaa. (Hän menee eteiseen vaivaloisen hätäisesti ja Wanklin, joka on tähystellyt Robertsia ja Anthonyä värähtelevin ilmein, seuraa perästä. Edgar jää istumaan sohvaan katsoen maahan. Tench palaten paikoilleen kirjottaa pöytäkirjaansa. Harness seisoo pikkupöydän ääressä katsellen vakavasti Robertsia.) Roberts: Sitte te ette ole enää tämän yhtiön esimiehenä. (Purskahtaen hurjanlaiseen nauruun.) Ah-ha, ha-ha, ha-ha! He ovat hyljänneet teidät — hyljänneet esimiehensä: Ah ha-ha! (Äkkiä pelottavan synkästi.) Siis — ovat syösseet meidät molemmatkin alas, herra Anthony? (Enid tulee kiireesti pariovesta isänsä luokse ja kumartuu hänen ylitsensä.) Harness (tullen ja tarttuen Robertsin hihaan): Hävetkää, Roberts! Menkää kauniisti kotiinne, mies; menkää kotiinne! Roberts (tempaisten kätensä pois): Kotiin? (Lyyhistyen — kuiskaten.) Kotiin! Enid (tyyneesti isälleen): Tule pois rakas isä! Tule omaan huoneeseesi! (Anthony ponnistautuu ylös. Hän kääntyy Robertsiin, joka katsoo häneen. He seisovat useita sekuntteja silmäillen toisiaan tiukasti. Anthony nostaa kätensä tervehtiäkseen, mutta antaa sen vaipua. Vihainen ilme Robertsin kasvoissa muuttuu ihmetteleväksi. Molemmat taivuttavat päätään
  • 39. kunnianosotukseksi. Anthony kääntyy ja kävelee hitaasti verhotulle ovelle. Äkkiä hän horjahtaa kuin kaatuisi, tointuu, Enidin ja Edgarin, joka on huoneen poikki rientänyt apuun, tukemana menee pois. Roberts seisoo liikkumatta useita sekuntteja, tuijottaen tiukasti Anthonyn jälkeen ja menee sitte eteiseen.) Tench (läheten Harnessia): On suuri paino pois sydämeltäni, herra Harness! Mutta mikä järkyttävä kohtaus, herra! (Hän pyyhkii ohimoitaan. Harness on kalpea ja päättävä, katsellen hieman jäykästi hymyillen vapisevaa Tenchiä.) Tämä kaikki on ollut niin ankaraa! Mitähän hän tarkoitti sanoessaan: He ovat syösseet meidät molemmatkin alas? Vaikka hän on menettänyt vaimonsa, mies raukka, ei hänen olisi tarvinnut puheenjohtajalle noin sanoa! Harness: Yksi vaimo kuollut ja kaksi parasta miestä murtunut! (Underwood tulee äkkiä.) Tench (tuijottaen Harnessiin — äkkiä): Tiedättekö, herra — nämä ehdot, ne ovat juuri samat, jotka yhdessä laadimme, te ja minä ja esitimme molemmille puolille ennen taistelun alkua. Kaikki tämä — kaikki tämä ja — ja minkä tähden? Harness: (matalalla jäykällä äänellä): Sepä siinä merkillisintä onkin! (Underwood kääntymättä ovelta tekee hyväksyvän viittauksen.) Esirippu.
  • 40. *** END OF THE PROJECT GUTENBERG EBOOK TAISTELU *** Updated editions will replace the previous one—the old editions will be renamed. Creating the works from print editions not protected by U.S. copyright law means that no one owns a United States copyright in these works, so the Foundation (and you!) can copy and distribute it in the United States without permission and without paying copyright royalties. Special rules, set forth in the General Terms of Use part of this license, apply to copying and distributing Project Gutenberg™ electronic works to protect the PROJECT GUTENBERG™ concept and trademark. Project Gutenberg is a registered trademark, and may not be used if you charge for an eBook, except by following the terms of the trademark license, including paying royalties for use of the Project Gutenberg trademark. If you do not charge anything for copies of this eBook, complying with the trademark license is very easy. You may use this eBook for nearly any purpose such as creation of derivative works, reports, performances and research. Project Gutenberg eBooks may be modified and printed and given away—you may do practically ANYTHING in the United States with eBooks not protected by U.S. copyright law. Redistribution is subject to the trademark license, especially commercial redistribution. START: FULL LICENSE
  • 41. THE FULL PROJECT GUTENBERG LICENSE
  • 42. PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK To protect the Project Gutenberg™ mission of promoting the free distribution of electronic works, by using or distributing this work (or any other work associated in any way with the phrase “Project Gutenberg”), you agree to comply with all the terms of the Full Project Gutenberg™ License available with this file or online at www.gutenberg.org/license. Section 1. General Terms of Use and Redistributing Project Gutenberg™ electronic works 1.A. By reading or using any part of this Project Gutenberg™ electronic work, you indicate that you have read, understand, agree to and accept all the terms of this license and intellectual property (trademark/copyright) agreement. If you do not agree to abide by all the terms of this agreement, you must cease using and return or destroy all copies of Project Gutenberg™ electronic works in your possession. If you paid a fee for obtaining a copy of or access to a Project Gutenberg™ electronic work and you do not agree to be bound by the terms of this agreement, you may obtain a refund from the person or entity to whom you paid the fee as set forth in paragraph 1.E.8. 1.B. “Project Gutenberg” is a registered trademark. It may only be used on or associated in any way with an electronic work by people who agree to be bound by the terms of this agreement. There are a few things that you can do with most Project Gutenberg™ electronic works even without complying with the full terms of this agreement. See paragraph 1.C below. There are a lot of things you can do with Project Gutenberg™ electronic works if you follow the terms of this agreement and help preserve free future access to Project Gutenberg™ electronic works. See paragraph 1.E below.
  • 43. 1.C. The Project Gutenberg Literary Archive Foundation (“the Foundation” or PGLAF), owns a compilation copyright in the collection of Project Gutenberg™ electronic works. Nearly all the individual works in the collection are in the public domain in the United States. If an individual work is unprotected by copyright law in the United States and you are located in the United States, we do not claim a right to prevent you from copying, distributing, performing, displaying or creating derivative works based on the work as long as all references to Project Gutenberg are removed. Of course, we hope that you will support the Project Gutenberg™ mission of promoting free access to electronic works by freely sharing Project Gutenberg™ works in compliance with the terms of this agreement for keeping the Project Gutenberg™ name associated with the work. You can easily comply with the terms of this agreement by keeping this work in the same format with its attached full Project Gutenberg™ License when you share it without charge with others. 1.D. The copyright laws of the place where you are located also govern what you can do with this work. Copyright laws in most countries are in a constant state of change. If you are outside the United States, check the laws of your country in addition to the terms of this agreement before downloading, copying, displaying, performing, distributing or creating derivative works based on this work or any other Project Gutenberg™ work. The Foundation makes no representations concerning the copyright status of any work in any country other than the United States. 1.E. Unless you have removed all references to Project Gutenberg: 1.E.1. 
The following sentence, with active links to, or other immediate access to, the full Project Gutenberg™ License must appear prominently whenever any copy of a Project Gutenberg™ work (any work on which the phrase “Project Gutenberg” appears, or with which the phrase “Project Gutenberg” is associated) is accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this eBook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook.

1.E.2. If an individual Project Gutenberg™ electronic work is derived from texts not protected by U.S. copyright law (does not contain a notice indicating that it is posted with permission of the copyright holder), the work can be copied and distributed to anyone in the United States without paying any fees or charges. If you are redistributing or providing access to a work with the phrase “Project Gutenberg” associated with or appearing on the work, you must comply either with the requirements of paragraphs 1.E.1 through 1.E.7 or obtain permission for the use of the work and the Project Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.

1.E.3. If an individual Project Gutenberg™ electronic work is posted with the permission of the copyright holder, your use and distribution must comply with both paragraphs 1.E.1 through 1.E.7 and any additional terms imposed by the copyright holder. Additional terms will be linked to the Project Gutenberg™ License for all works posted with the permission of the copyright holder found at the beginning of this work.

1.E.4. Do not unlink or detach or remove the full Project Gutenberg™ License terms from this work, or any files containing a part of this work or any other work associated with Project Gutenberg™.

1.E.5. Do not copy, display, perform, distribute or redistribute this electronic work, or any part of this electronic work, without prominently displaying the sentence set forth in paragraph 1.E.1 with active links or immediate access to the full terms of the Project Gutenberg™ License.

1.E.6. You may convert to and distribute this work in any binary, compressed, marked up, nonproprietary or proprietary form, including any word processing or hypertext form. However, if you provide access to or distribute copies of a Project Gutenberg™ work in a format other than “Plain Vanilla ASCII” or other format used in the official version posted on the official Project Gutenberg™ website (www.gutenberg.org), you must, at no additional cost, fee or expense to the user, provide a copy, a means of exporting a copy, or a means of obtaining a copy upon request, of the work in its original “Plain Vanilla ASCII” or other form. Any alternate format must include the full Project Gutenberg™ License as specified in paragraph 1.E.1.

1.E.7. Do not charge a fee for access to, viewing, displaying, performing, copying or distributing any Project Gutenberg™ works unless you comply with paragraph 1.E.8 or 1.E.9.

1.E.8. You may charge a reasonable fee for copies of or providing access to or distributing Project Gutenberg™ electronic works provided that:

• You pay a royalty fee of 20% of the gross profits you derive from the use of Project Gutenberg™ works calculated using the method you already use to calculate your applicable taxes. The fee is owed to the owner of the Project Gutenberg™ trademark, but he has agreed to donate royalties under this paragraph to the Project Gutenberg Literary Archive Foundation. Royalty payments must be paid within 60 days following each date on which you prepare (or are legally required to prepare) your periodic tax returns. Royalty payments should be clearly marked as such and sent to the Project Gutenberg Literary Archive Foundation at the address specified in Section 4, “Information about donations to the Project Gutenberg Literary Archive Foundation.”

• You provide a full refund of any money paid by a user who notifies you in writing (or by e-mail) within 30 days of receipt that s/he does not agree to the terms of the full Project Gutenberg™ License. You must require such a user to return or destroy all copies of the works possessed in a physical medium and discontinue all use of and all access to other copies of Project Gutenberg™ works.

• You provide, in accordance with paragraph 1.F.3, a full refund of any money paid for a work or a replacement copy, if a defect in the electronic work is discovered and reported to you within 90 days of receipt of the work.

• You comply with all other terms of this agreement for free distribution of Project Gutenberg™ works.

1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™ electronic work or group of works on different terms than are set forth in this agreement, you must obtain permission in writing from the Project Gutenberg Literary Archive Foundation, the manager of the Project Gutenberg™ trademark. Contact the Foundation as set forth in Section 3 below.

1.F.

1.F.1. Project Gutenberg volunteers and employees expend considerable effort to identify, do copyright research on, transcribe and proofread works not protected by U.S. copyright law in creating the Project Gutenberg™ collection. Despite these efforts, Project Gutenberg™ electronic works, and the medium on which they may be stored, may contain “Defects,” such as, but not limited to, incomplete, inaccurate or corrupt data, transcription errors, a copyright or other intellectual property infringement, a defective or damaged disk or other medium, a computer virus, or computer codes that damage or cannot be read by your equipment.

1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for the “Right of Replacement or Refund” described in paragraph 1.F.3, the Project Gutenberg Literary Archive Foundation, the owner of the Project Gutenberg™ trademark, and any other party distributing a Project Gutenberg™ electronic work under this agreement, disclaim all liability to you for damages, costs and expenses, including legal fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.

1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you discover a defect in this electronic work within 90 days of receiving it, you can receive a refund of the money (if any) you paid for it by sending a written explanation to the person you received the work from. If you received the work on a physical medium, you must return the medium with your written explanation. The person or entity that provided you with the defective work may elect to provide a replacement copy in lieu of a refund. If you received the work electronically, the person or entity providing it to you may choose to give you a second opportunity to receive the work electronically in lieu of a refund. If the second copy is also defective, you may demand a refund in writing without further opportunities to fix the problem.

1.F.4. Except for the limited right of replacement or refund set forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.

1.F.5. Some states do not allow disclaimers of certain implied warranties or the exclusion or limitation of certain types of damages. If any disclaimer or limitation set forth in this agreement violates the law of the state applicable to this agreement, the agreement shall be interpreted to make the maximum disclaimer or limitation permitted by the applicable state law. The invalidity or unenforceability of any provision of this agreement shall not void the remaining provisions.

1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation, the trademark owner, any agent or employee of the Foundation, anyone providing copies of Project Gutenberg™ electronic works in accordance with this agreement, and any volunteers associated with the production, promotion and distribution of Project Gutenberg™ electronic works, harmless from all liability, costs and expenses, including legal fees, that arise directly or indirectly from any of the following which you do or cause to occur: (a) distribution of this or any Project Gutenberg™ work, (b) alteration, modification, or additions or deletions to any Project Gutenberg™ work, and (c) any Defect you cause.

Section 2. Information about the Mission of Project Gutenberg™

Project Gutenberg™ is synonymous with the free distribution of electronic works in formats readable by the widest variety of computers including obsolete, old, middle-aged and new computers. It exists because of the efforts of hundreds of volunteers and donations from people in all walks of life.

Volunteers and financial support to provide volunteers with the assistance they need are critical to reaching Project Gutenberg™’s goals and ensuring that the Project Gutenberg™ collection will remain freely available for generations to come. In 2001, the Project Gutenberg Literary Archive Foundation was created to provide a secure and permanent future for Project Gutenberg™ and future generations. To learn more about the Project Gutenberg Literary Archive Foundation and how your efforts and donations can help, see Sections 3 and 4 and the Foundation information page at www.gutenberg.org.

Section 3. Information about the Project Gutenberg Literary Archive Foundation

The Project Gutenberg Literary Archive Foundation is a non-profit 501(c)(3) educational corporation organized under the laws of the state of Mississippi and granted tax exempt status by the Internal Revenue Service. The Foundation’s EIN or federal tax identification number is 64-6221541. Contributions to the Project Gutenberg Literary Archive Foundation are tax deductible to the full extent permitted by U.S. federal laws and your state’s laws.

The Foundation’s business office is located at 809 North 1500 West, Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up to date contact information can be found at the Foundation’s website and official page at www.gutenberg.org/contact

Section 4. Information about Donations to the Project Gutenberg Literary Archive Foundation

Project Gutenberg™ depends upon and cannot survive without widespread public support and donations to carry out its mission of increasing the number of public domain and licensed works that can be freely distributed in machine-readable form accessible by the widest array of equipment including outdated equipment. Many small donations ($1 to $5,000) are particularly important to maintaining tax exempt status with the IRS.

The Foundation is committed to complying with the laws regulating charities and charitable donations in all 50 states of the United States. Compliance requirements are not uniform and it takes a considerable effort, much paperwork and many fees to meet and keep up with these requirements. We do not solicit donations in locations where we have not received written confirmation of compliance. To SEND DONATIONS or determine the status of compliance for any particular state visit www.gutenberg.org/donate.

While we cannot and do not solicit contributions from states where we have not met the solicitation requirements, we know of no prohibition against accepting unsolicited donations from donors in such states who approach us with offers to donate.

International donations are gratefully accepted, but we cannot make any statements concerning tax treatment of donations received from outside the United States. U.S. laws alone swamp our small staff.

Please check the Project Gutenberg web pages for current donation methods and addresses. Donations are accepted in a number of other ways including checks, online payments and credit card donations. To donate, please visit: www.gutenberg.org/donate.

Section 5. General Information About Project Gutenberg™ electronic works

Professor Michael S. Hart was the originator of the Project Gutenberg™ concept of a library of electronic works that could be freely shared with anyone. For forty years, he produced and distributed Project Gutenberg™ eBooks with only a loose network of volunteer support.

Project Gutenberg™ eBooks are often created from several printed editions, all of which are confirmed as not protected by copyright in the U.S. unless a copyright notice is included. Thus, we do not necessarily keep eBooks in compliance with any particular paper edition.

Most people start at our website which has the main PG search facility: www.gutenberg.org. This website includes information about Project Gutenberg™, including how to make donations to the Project Gutenberg Literary Archive Foundation, how to help produce our new eBooks, and how to subscribe to our email newsletter to hear about new eBooks.