MULTI SENSOR INPUT FUSION IN
WSN FOR AUTOMATED SORTING IN
A CLOUD BASED PLASTIC
RECYCLING PLANT
A THESIS SUBMITTED TO AUCKLAND UNIVERSITY OF TECHNOLOGY
IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SERVICE-ORIENTED COMPUTING
Supervisors
Prof. Alan Litchfield
June 2016
By
Kavita Pillai
School of Engineering, Computer and Mathematical Sciences
Copyright
Copyright in text of this thesis rests with the Author. Copies (by any process) either
in full, or of extracts, may be made only in accordance with instructions given by the
Author and lodged in the library, Auckland University of Technology. Details may be
obtained from the Librarian. This page must form part of any such copies made. Further
copies (by any process) of copies made in accordance with such instructions may not
be made without the permission (in writing) of the Author.
The ownership of any intellectual property rights which may be described in this
thesis is vested in the Auckland University of Technology, subject to any prior agreement
to the contrary, and may not be made available for use by third parties without the
written permission of the University, which will prescribe the terms and conditions of
any such agreement.
Further information on the conditions under which disclosures and exploitation may
take place is available from the Librarian.
© Copyright 2016. Kavita Pillai
Declaration
I hereby declare that this submission is my own work and
that, to the best of my knowledge and belief, it contains no
material previously published or written by another person
nor material which to a substantial extent has been accepted
for the qualification of any other degree or diploma of a
university or other institution of higher learning.
Signature of candidate
Acknowledgements
I would like to thank my mother and my husband for their unparalleled support.
Abstract
The first step in thermoplastic recycling is identifying the plastic waste categorically.
This manual task is often inefficient and costly. This study therefore analyses the
problem and presents an automatic classifier based on a WSN infrastructure. The
classifier fuses data from two different sources using a Kalman filter and a neural network.
The algorithm is run on a MATLAB simulator to test the results.
Publications
Title of the first publication publication details
Title of the second publication publication details
Title of the third publication publication details
And so on publication details
Contents
Copyright 2
Declaration 3
Acknowledgements 4
Abstract 5
Publications 6
1 Introduction 10
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.2 Thermoplastic type, identifiers, parameters: . . . . . . . . . . . . . . . . 12
1.3 Hardware architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.4 Software architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Literature Review 31
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.2 Autonomic Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3 IoT, WSN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
2.3.1 Wireless sensory network . . . . . . . . . . . . . . . . . . . . . 40
2.3.2 BaseStation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.3.3 Routing in IoT sensory network . . . . . . . . . . . . . . . . . 42
2.3.4 Communication with base station . . . . . . . . . . . . . . . . . 45
2.4 Data Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.4.1 Advantages of data fusion . . . . . . . . . . . . . . . . . . . . . 45
2.4.2 Motivation of sensor fusion . . . . . . . . . . . . . . . . . . . . 46
2.4.3 Issues with Data fusion . . . . . . . . . . . . . . . . . . . . . . . 47
2.5 Sensor fusion methodologies . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5.1 PNN Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5.2 Kalman filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6 Clusters and data aggregation . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6.1 Event detection and processing . . . . . . . . . . . . . . . . . . 50
2.6.2 Localization of sensor objects . . . . . . . . . . . . . . . . . . . 50
2.7 Semantic Sensor N/W Ontology . . . . . . . . . . . . . . . . . . . . . . . 51
3 Method 57
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.2 Problem investigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2.1 Problem investigation . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2.2 Treatment design . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.2.3 Treatment implementation and evaluation . . . . . . . . . . . . 62
4 Analysis 65
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.2 Supervised learning classifier . . . . . . . . . . . . . . . . . . . . . . . . 66
4.3 Feature Fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
4.4 Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.1 Feature fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.4.2 Decision fusion . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.3 Final PNN Classification . . . . . . . . . . . . . . . . . . . . . . . 71
5 Discussion 82
5.0.1 Factors that could effect the accuracy of the algorithm . . . . . 83
6 Conclusion and future study 86
6.0.1 Future studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
References 90
List of Tables
1.1 Difference between a plastic polymer and a Thermoplastic polymer
taken from . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.2 Acceptable ranges for polymer attributes adapted from . . . . . . . . . 17
2.1 Assumptions to cloud manufacturing . . . . . . . . . . . . . . . . . . . 33
List of Figures
1.1 Thermoplastic structure on heating adapted from . . . . . . . . . . . . 15
1.2 Thermoplastic structure on cooling adapted from . . . . . . . . . . . . 16
1.3 Figure 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.4 Sensor Fusion Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
1.5 Fused Feature matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.6 Sensor Fusion based PNN classifier . . . . . . . . . . . . . . . . . . . . 29
1.7 Sensor Fusion flow chart . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.1 Feedback control process in autonomic system . . . . . . . . . . . . . . 53
2.2 Figure 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.3 PNN Classifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.4 Semantic sensor network ontology representation . . . . . . . . . . . . 55
2.5 Semantic sensor network ontology representation . . . . . . . . . . . . 56
3.1 Data fusion architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2 Problem investigation approach . . . . . . . . . . . . . . . . . . . . . . 59
3.3 Layered Approach architecture . . . . . . . . . . . . . . . . . . . . . . . 60
3.4 Layered Approach architecture . . . . . . . . . . . . . . . . . . . . . . . . 61
3.5 Multisensor system design process adapted from . . . . . . . . . . . . . 62
3.6 Figure 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.1 Data in PCA after feature fusion . . . . . . . . . . . . . . . . . . . . . . 67
4.2 Classifier structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3 Data Fusion Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.4 Data Fusion Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.5 Difference between normalized and actual values of merged data adapted from . . . 74
4.6 Smoothing of tensile and temperature data . . . . . . . . . . . . . . . . 75
4.7 Difference in state d and d’ of multi-sensor data . . . . . . . . . . . . . 76
4.8 Fusion of tensile and temperature data . . . . . . . . . . . . . . . . . . 76
4.9 Difference between normalized and actual values of merged data . . . 77
4.10 Plotting of original image . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.11 Texture and contour generation . . . . . . . . . . . . . . . . . . . . . . 79
4.12 Pixel-wise comparison of source and target . . . . . . . . . . . . . . . 80
4.13 Center of gravity and density of the target . . . . . . . . . . . . . . . . . 81
4.14 Pixel-wise comparison of source and target . . . . . . . . . . . . . . . . 81
5.1 Temperature data before smoothing . . . . . . . . . . . . . . . . . . . . 83
5.2 Temperature data after smoothing . . . . . . . . . . . . . . . . . . . . . 84
5.3 Tensile yield before and after smoothing . . . . . . . . . . . . . . . . . 85
6.1 Pattern in Semantic sensor network taken from . . . . . . . . . . . . . . 88
6.2 Overview of the Semantic Sensor Network ontology classes and properties taken from . . . 89
Chapter 1
Introduction
1.1 Introduction
All plastic products, including bottles, pose a serious threat to the environment due to the
massive consumption of petroleum in plastic manufacturing, their non-biodegradable nature,
and their low density to volume ratio. This is the elementary reason plastics form
a major part of municipal waste throughout the world. In the past decade,
intensive research has been dedicated to developing and providing efficient solutions
for plastic bottle recycling. Research in this area attempts to 1) increase recyclable
plastic consumption worldwide, and 2) in particular, increase the recycling of thermoplastic.
Thermoplastic is made from polymer resin that melts into a homogenized liquid when
heated and turns hard when cooled (Jumaidin, Sapuan, Jawaid, Ishak & Sahari, 2016).
These types of plastics can be re-molded, reheated, and repeatedly solidified, thereby
making them recyclable. Major types of thermoplastic polymer used in today's industries
include: 1) Acrylonitrile butadiene styrene (ABS): used for manufacturing
sports equipment and automobile parts. 2) Polycarbonate: used for manufacturing
compact discs, drinking bottles (PET bottles), storage containers, etc. 3) Polythene:
used to make shampoo bottles, plastic grocery bags, etc. (Jumaidin et al., 2016), (Coskun,
Chapter 1. Introduction 13
2015). The interest in recycling thermoplastic is due to the fact that it is a crucial component
of everyday use. Furthermore, a single unit of usage (e.g., a single plastic bottle)
might be composed of several types of thermoplastic. Since most forms of thermoplastic
resin polymer can be reused (Deng, Djukic, Paton & Ye, 2015), the majority of research
dealing with sustainable usage of plastic for environmental conservation combines
information technology, sensors, and web-based control modules to generate
autonomous systems for plastic recycling.
The very first process in recycling plastics is sorting the plastic based on color, shape,
and roughness. Currently, most plants throughout the world use manual processes for
plastic recycling (Peres, Pires & Oréfice, 2016), wherein trained operatives classify
the polymer of the plastic based on color and shape. This approach, however,
is both expensive and ineffective. The reason this primitive approach to
sorting the plastic components persists is that recycling systems lack human perception (Peres
et al., 2016). Human sensory receptors are capable of combining
the signals and data from the body (sight, touch, smell, sound, and taste)
through which, even with a vague knowledge of our surrounding environment, we can
dynamically create, visualize, and update models of real-world objects. Therefore, this
study considers the use-case scenario of an automatic thermoplastic bottle classification
system based on a combination of the chemical properties of the resin, colour, and texture.
The hardware architecture of this thesis comprises three sensors:
1) computer vision in spectra, 2) NIR (near infrared) spectroscopy, and 3) inductive
sensors. The software architecture is based on statistical classifiers and is designed to
achieve multi-sensor data fusion. The purpose of this study is to develop a sustainable
classification solution for plastic recycling plants using advancements in the fields of
web services, sensors, the internet of things, and cloud computing.
1.2 Thermoplastic type, identifiers, parameters:
In order to create a sorting algorithm, it is of the utmost importance to know the
identifying characteristics of thermoplastics and the unique combination of attributes that
differentiates one type of thermoplastic from another (Oliva Teles, Fernandes, Amorim &
Vasconcelos, 2015). In the modern consumer market, many
forms of thermoplastics are used on a daily basis. Each variety has a mix of
parameters that gives it a unique look and feel.
Conventional thermoplastics are known as two-phase systems (Deng et al., 2015),
the classifying property being temperature (Lionetto, Dell'Anna, Montagna &
Maffezzoli, 2016). The first phase is the temperature required for a thermoplastic to
reach its liquid resin form (Otheguy, 2010); the second phase is the temperature
required for it to reach its solidified form. Therefore, the very first identifying parameter is
the temperature range. Some of the other properties that help in the classification
process are: 1) Tensile properties: the tensile properties of a thermoplastic polymer deal
with the elasticity of the polymer. They describe to what extent a thermoplastic resin can
be stretched. Tests may indicate how a polymer will perform in actual usage conditions
(H. Zhang et al., 2015). 2) Tensile at break: also known as the ultimate tensile strength, this
indicates the point at which a thermoplastic polymer will break. The force required to
stretch the polymer until it breaks is measured in pounds per square
inch (psi), or in megapascals (MPa) (H. Zhang et al., 2015).
Thermoplastic polymers with higher tensile-at-break values are more difficult to
break by stretching than their lower-value counterparts. 3) Tear strength: this
represents how resistant the thermoplastic polymer is to tearing. This property
is similar to tensile at break except for a propagation point, which
symbolizes the force at which the polymer tears completely. Therefore, tensile at break
refers to the point where the polymer starts breaking, and tear strength refers to the end
point when the polymer is completely torn. The unit of measurement for this attribute
is typically psi or kilonewtons per meter (kN/m) (H. Zhang et al., 2015). 4)
Tensile modulus: in this test the polymer is stretched over a range of elongation points
and its stretching is measured over that range. The unit of measurement for
this parameter is the percentage of the original length of the thermoplastic polymer;
for example, the outcome could be 50, 100, or 300 percent of the original length of the
polymer resin. A thermoplastic polymer that has a strong tensile strength (elasticity) may
become weaker as it elongates; this is also known as necking (H. Zhang et al., 2015).
5) Elongation at break: this measures the stretching length of the polymer before it
breaks, also given as a percentage of the original length. A soft
polymer will have a higher value than a hardened one (H. Zhang et al., 2015).
In the case of thermoplastic polymers, the methods by which the resins are molded,
and the mold used, affect the tensile properties (Jumaidin et al., 2016). It is for this
reason that tensile properties are measured both in the direction of flow and in its
transverse. 1) Direction of flow: the orientation of thermoplastic polymers on molding
greatly affects their tensile properties. The tensile properties may vary greatly depending
on when the polymer was stretched; it must be determined whether the polymer was stretched
in the direction of polymer flow or in the transverse direction during the molding process
(Coskun, 2015). 2) Extruded or injection-based plaques: it is very important to conduct
tests on outcomes of a similar plaque (i.e., compare the outcomes of an extrusion plaque
with another plaque of the same type, as comparing the same with an injection-based
mold will lead to an ambiguous outcome) (Xie et al., 2016).
Compression set for thermoplastic polymers: a compression set is the permanent
deformation that occurs in the polymer when it is exposed to a specific temperature.
The test performed on thermoplastic polymers to check for compression-set deformities
is the ASTM-D395 test. This method defines that the normal
range of compression in a thermoplastic polymer should be around 25-30 percent. In
a recycling plant, this test is conducted to check for true-positive deformed units,
with sampling done every half hour. The samples chosen as true positives are the
ones that fail to return to their original height. This deformity occurs at a specific time, at a
specific temperature (Otheguy, 2010).
Hardness in thermoplastic polymers: relative hardness or softness is the primary
property to be considered while classifying a thermoplastic polymer. Hardness is
also closely related to other polymer properties such as tensile qualities and flexural
modulus. The hardness of a polymer is commonly measured by an instrument called
the Shore durometer. A metal indenter is pushed into the surface of the hardened polymer
by a spring, and how far it penetrates is measured. The penetration depth ranges
from 0 to 0.100 inches, where zero indicates that the indenter is at the maximum depth and
a value of 100 indicates that no penetration is detected. Readings can be taken either
immediately or after a certain delay; immediate readings are always higher
than delayed readings. The delayed reading represents the resiliency of the polymer
along with its hardness (H. Zhang et al., 2015).
Flexural modulus is the measurement of resistance to bending in a thermoplastic
polymer. This property is closely related to, and often confused with, hardness, which
typically measures resistance to indentation. The two properties have a direct
correlation (if the value of hardness goes up, so does flexural modulus).
Both properties deal with how a thermoplastic polymer feels in the
customer's hands (H. Zhang et al., 2015).
How does a thermoplastic polymer differ from ordinary plastic material? Thermoplastic
polymers repeatedly soften or melt when heated and solidify when cooled;
thermoplastic can therefore burn to some degree. Temperatures of thermoplastic polymers
can vary greatly with the type and grade of the polymer. In most thermoplastic
polymers, molecular chains can be represented as loosely coupled, intertwining strings
resembling spaghetti. Figure 1.1 shows the structure of the molecules on heating.
Figure 1.1: Thermoplastic structure on heating adapted from
(Thermoplastic Elastomer (TPE) FAQs, 2015)
On cooling, the molecules are held together more firmly, as shown in Figure 1.2.
Some of the basic distinguishing factors between thermoplastic and conventional
plastic polymers are given in Table 1.1.
Shrinkage in thermoplastic polymers: when the polymers harden they shrink the
overall size of the molded part. Even shrinkage levels can reduce the deformities of
the molded thermoplastics. Shrinkage normally relates to molding and removal of the
final thermoplastic (e.g., a bottle form). The shrinkage level should always be even
(H. Zhang et al., 2015).
Figure 1.2: Thermoplastic structure on cooling adapted from
(Thermoplastic Elastomer (TPE) FAQs, 2015)
Therefore, based on the above attributes, Table 1.2 gives the range of acceptable
values for recycling of thermoplastic polymer, used as the training dataset for our
multisensory environment.
Table 1.1: Difference between a plastic polymer and a Thermoplastic polymer taken
from (Thermoplastic Elastomer (TPE) FAQs, 2015)

Variable            | Thermoplastic polymer                | Plastic polymer
Fabrication         | Rapid (seconds)                      | Slow (minutes)
Scrap               | Reusable                             | High percentage, waste
Curing agents       | None                                 | Required
Machinery           | Conventional thermoplastic equipment | Special vulcanizing equipment
Additives           | Minimal or none                      | Numerous processing aids
Design optimization | Unlimited                            | Limited
Remold parts        | Yes                                  | Unlikely
Heat seal           | Yes                                  | No
Table 1.2: Acceptable ranges for polymer attributes adapted from (Otheguy, 2010)

Physical characteristic | Min value              | Max value
Gravity                 | 1.20 N/Kg              | 1.20 N/Kg
Molding shrinkage flow  | 0.10%                  | 0.30%
Tensile strength        | NA                     | 13800 psi
Flexural modulus        | NA                     | 870000 psi
Flexural strength       | NA                     | 23200 psi
Mold temperature        | 149 degrees Fahrenheit | 203 degrees Fahrenheit
Cooling temperature     | 194 degrees Fahrenheit | 212 degrees Fahrenheit
Cooling time            | 4 hours                | 6 hours
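As an illustration, the acceptable ranges in Table 1.2 can be encoded as a simple gating check applied to candidate training samples. The sketch below is in Python (the thesis itself uses a MATLAB simulator); the attribute keys and the `within_ranges` helper are hypothetical names introduced here, and the numeric bounds are transcribed from the table, with `None` standing for the table's "NA" entries.

```python
# Acceptable ranges for polymer attributes, transcribed from Table 1.2.
# None means no bound is given ("NA" in the table).
ACCEPTABLE_RANGES = {
    "molding_shrinkage_flow_pct": (0.10, 0.30),
    "tensile_strength_psi": (None, 13800),
    "flexural_modulus_psi": (None, 870000),
    "flexural_strength_psi": (None, 23200),
    "mold_temperature_f": (149, 203),
    "cooling_temperature_f": (194, 212),
    "cooling_time_hours": (4, 6),
}

def within_ranges(sample):
    """Return True if every measured attribute of the sample falls inside
    the acceptable range from Table 1.2 (hypothetical helper)."""
    for attr, value in sample.items():
        lo, hi = ACCEPTABLE_RANGES[attr]
        if lo is not None and value < lo:
            return False
        if hi is not None and value > hi:
            return False
    return True

sample = {"mold_temperature_f": 180, "cooling_time_hours": 5}
print(within_ranges(sample))  # True: all measured attributes are in range
```

A real training pipeline would of course validate many more attributes per shard; the point here is only that the table translates directly into a machine-checkable configuration.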
1.3 Hardware architecture
In order to design an adequate system for a thermoplastic classifier, we need to
consider the following questions: 1) Which types of sensors should be used? 2) How
do we integrate these sensory inputs to form a singular model for the classifier? 3)
How do we design a prototype to present the designed solution?
Classification of thermoplastic requires us to take into account data from different
devices, as data from a single sensor will not give us sufficient information to successfully
classify a polymer into a type. Therefore, we require a conjunction of different sensors,
each of which gives us data for a single attribute (e.g., temperature). This process of
combining data from different sensors is called sensor fusion. Each of the sensors used
in this study has a specific task (identifying attributes) associated with the automated
classifier.
1) CCD camera sensor: this sensor has a two-fold use in our classifier. Firstly, it can be
used to identify the gravity of the object in its perception, and secondly, it can
identify the color and texture associated with the thermoplastic shard it processes.
Of the three types of thermoplastic materials considered in this study (ABS,
PET, and polythene), PET bottles in particular are characterized by their solid texture and are
available in different shapes and colors. Therefore, this sensor can be used to correctly
identify the type of thermoplastic based on color, texture, and shape (Chen et al., 2016)
(S. Wang et al., 2016) (Tzschichholz, Boge & Schilling, 2015).
2) Inductive sensors: inductive sensors are a type of non-contact proximity sensor
useful for identifying the contents of the plastic shards without physically touching
the components. In particular, this sensor is useful in identifying components which
have been metallized. The sensor consists of an induction loop whose inductance
changes in accordance with the material contents of the plastic shards. The changes in
loop inductance are reflected in current-flow fluctuations, which are captured by the sensor
(C. Wang, Liu & Chen, 2015) (Chang et al., 2016).
3) NIR sensors: near infrared spectroscopy (NIR) sensors measure density; in the
case of thermoplastics they can be used to measure and study the transmittance
or absorption between different plastics when exposed to an infrared laser. NIR
sensors can also measure surface temperature and particle state calibration in polymer
resins. NIR sensor output is multivariate and is often used to extract specific features
of polymer resins such as surface temperature, shrinkage, and hardness (Beguš, Begeš,
Drnovšek & Hudoklin, 2015) (Müller, Burger, Borisov & Klimant, 2015) (Iyakwari,
Glass, Rollinson & Kowalczuk, 2016).
4) Polymer FBG sensors: a fiber optic sensor uses optical fiber either for sensing
or as a means to relay signals from the sensor to an electronic measuring unit. Fiber optic
sensors are mostly used to measure strain, temperature, and pressure, among other quantities,
and provide distributed sensing over large distances. The polymer FBG sensor is an
extension of the fiber optic sensor. It is very attractive due to its many qualities, such as
flexibility, high strain measurement range, and low density. It can also vary the intensity
of light and can be used for remote sensing. In the case of thermoplastic resin, a polymer-based
fiber Bragg grating sensor can be used to measure all tensile properties and temperatures.
5) Hyperspectral system: a spectral image is used to collect and process information
across an electromagnetic spectrum. This means that a hyperspectral image identifies a
unique spectrum for each pixel in the scanned object. Its purpose is to identify certain
characteristics of materials as part of the detection process. Human perception
classifies light in essentially three bands (green, blue, and
red), whereas a spectral image classifies a spectrum in many more bands, including
those not visible to the human eye. The simplest form of hyperspectral system
is a combination of a monochrome camera, a hyperspectral lens, and a spectrograph
(Kouyama et al., 2016) (Li, Chen, Zhao & Wu, 2015) (Castorena, Morrison, Paliwal &
Erkinbaev, 2015).
1.4 Software architecture
Multisensory data fusion can be processed at four different levels of categorization,
in accordance with the stage at which fusion takes place: signal level,
spectrum level, feature level, and decision level, as illustrated by Figure 1.3 on page 26.
1) Signal level fusion: this occurs through the combination of signals from different
sensors to create a new fused signal having a better signal-to-noise ratio than
the originals (Ao, Yongcai, Lu, Brooks & Iyengar, 2016). 2) Spectrum level
fusion: information associated with each point is derived from a matrix of associated
features from different sensors and is plotted on a spectral image to create a model
interpreted by a computing system for further evaluation (Ao et al., 2016). 3) Feature
level fusion: information from objects recognized by the sensors, in the form of salient
features, is extracted and placed in matrix form. The salient features depend on the
environment; examples include edges or textures. These features extracted
from different sensors are fused and placed in a feature matrix (cross ref later) (Ao et al.,
2016). 4) Decision level fusion: this includes fusion based on the lower-level categories
stated above (signal, spectrum, and feature level extraction) to yield a
final fused decision. The thermoplastic contents of a recycling plant are first classified
using lower-level fusion based on features and spectra; the individual extraction
results are then combined by applying decision rules to complete the sorting process (Ao
et al., 2016) (Duro et al., 2016) (Mönks, Trsek, Dürkop, Geneiß & Lohweg, 2016).
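As a minimal illustration of decision level fusion, per-source classification results can be combined with a simple decision rule. The sketch below is an illustrative Python fragment (the thesis evaluates its algorithm in MATLAB); it uses majority voting, one of the simplest decision rules, and the source names and labels are hypothetical examples, not the thesis's actual decision rule.

```python
from collections import Counter

def fuse_decisions(decisions):
    """Majority-vote decision rule: return the polymer label that the
    most individual classifiers agree on."""
    label, _count = Counter(decisions).most_common(1)[0]
    return label

# Hypothetical per-source decisions for one thermoplastic shard,
# e.g. from the camera-, inductive-, and NIR-based classifiers.
votes = ["PET", "ABS", "PET"]
print(fuse_decisions(votes))  # PET
```

More elaborate decision rules (weighted voting, fuzzy inference as in the FIS step described later) replace the `Counter` tally but keep the same shape: several local decisions in, one fused decision out.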
In this study, all levels of fusion except signal fusion are used and form an important
part in formulating the fusion algorithm. We consider this study through a wireless sensor
network (WSN) and a cloud manufacturing platform. In a cloud-based manufacturing
setup there is bound to be a large number of sensors, randomly distributed in the
deployment area. The sensors are categorically organized into clusters based on
their type and spatial proximity. Each cluster has a cluster head, which is responsible
for synchronizing and collaborating the fusion activity and transferring data to the base
station. The sensor nodes collect the following information:
1) CCD camera: shape and color (Tzschichholz et al., 2015). 2) NIR sensor:
surface temperature (Hafizi, Epaarachchi & Lau, 2015) (Müller et al., 2015). 3) Inductive
sensors: component information, specifically metallic content in the
shards (Ding, Chen, Ma, Chen & Li, 2016). 4) Polymer FBG sensor: tensile properties of
the resin. A general block diagram of the fusion process mentioned above is shown in Figure 1.4.
The sensor nodes collect data D1, D2, D3 from the environment. The data values from
these nodes may not be precisely true: there exist redundancies and noise in the data
generated by the sensor nodes. Such uncertain data can be generated by sensors due
to either manufacturing or environmental conditions. These uncertainties can adversely
affect the decision fusion and efficient sensor power consumption in a wireless sensor
network (Mishra, Lohar & Amphawan, 2016) (Pereira, McGugan & Mikkelsen, 2016).
This study uses a hybrid of a Kalman filter and a probabilistic neural network (PNN). Data
fusion itself uses the Kalman filter, while feature extraction and decision fusion, as
mentioned in (above ref), occur through PNN derivatives. The algorithm process flow
for the entire process is as follows: Step 1, preprocess data: the data should be free
of inconsistencies and errors. The PNN training step is used here. The training data uses a
predictor variable, shown in equation 1.1 (Oliva Teles et al., 2015):
Φ = d′ − d    (1.1)
The predictor variable is the difference between the normalized value of the data and
the value captured by the sensor; it represents the proximity of the captured value to the
actual value. Here d′ is the normalized value obtained by training on previous data sets and d is the captured
value. The percentage of correct data packets received is then given by equation 1.2, where
C is the percentage of correct packets received and N is the total number of events
processed.
C = (Φ / N) × 100    (1.2)
Step 2, feature extraction: feature extraction is commonly used in structural analysis of dynamic
components. It plays an important role in improving the performance of the classifier
in sorting thermoplastic according to its features, since frequency and its related
modal parameters are relatively simple to extract from the structural characteristics and
responses received from sensory devices. Normalized frequency is a unit of measurement
corresponding to frequency, represented in the continuous time variable t. Normalized
frequency is expressed as cycles/sample; in our classifier this is represented
by seconds per sample (number of samples). There are three delimiters to our feature
extraction (Land et al., 2012).
1. Frequency change ratio: the first delimiter, given by equation 1.3, measures the
changes in the structural components of the sensor data being classified (Land et al., 2012).
NFCR_i = FFC_i / Σ_{j=1}^{n} FFC_j    (1.3)
2. The second delimiter measures the changes in modal points in the sensor data and is
a measure of the accurate dimensionality of the classifier. This is shown in equation 1.4
(Iounousse, Er-Raki, El Motassadeq & Chehouani, 2015).
FMCR_i = MCR_i(k) / Σ_{j=1}^{n} MCR_j(k)    (1.4)
3. The curvature change ratio, shown in equation 1.5, gives the difference in the structures
of the sensed components.
NCCDj =
Cui(j) − Cdi(j)
∑ Cui(j) − Cdi(j)
(1.5)
Here FFCi, MCRi are the frequency change ratio, modal change ratio derived from
sensory data training.
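The three ratios above can be sketched in Python as follows; this is an illustrative implementation under the stated definitions, with hypothetical function names and inputs.

```python
def frequency_change_ratio(ffc, i):
    """Equation 1.3: NFCR_i = FFC_i / sum_j FFC_j, the i-th frequency
    change normalized by the sum over all components."""
    return ffc[i] / sum(ffc)

def modal_change_ratio(mcr_k, i):
    """Equation 1.4: FMCR_i = MCR_i(k) / sum_j MCR_j(k), the modal
    change at sample k normalized the same way."""
    return mcr_k[i] / sum(mcr_k)

def curvature_change_ratio(cu, cd, j):
    """Equation 1.5: NCCD_j, the curvature difference Cu(j) - Cd(j)
    normalized by the sum of all curvature differences."""
    diffs = [u - d for u, d in zip(cu, cd)]
    return diffs[j] / sum(diffs)
```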
Step 3: Feature level data fusion: The features extracted are then fed into individual
matrices, from which the fused matrix is calculated. This is a well-rounded matrix
representing all the information the classifier needs to decide the grade and category
of plastic. This process, illustrated through our thermoplastic classifier, is seen in
Figure 1.6, and an example of the fused feature matrix is shown in Figure 1.5.
Step 4: FIS induced decision fusion: This step is where the proposed data fusion takes
place. Decision theory, and making the appropriate decision, plays an important role in
decision fusion because of the complex situations encountered in a multisensory fusion
process. Selecting an appropriate criterion for deriving the most optimal path, by
choosing an estimation/filtering algorithm, is a lengthy and complicated process in
itself. Decision fusion is mainly concerned with how to make decisions, and/or the most
appropriate decision to take, given the status of an object, the occurrence of an event,
or a given scenario. This is usually done on the basis of some objective analysis;
sometimes a decision is taken on the basis of data extracted from sensors and the
processing of these datasets (Tong, Liu, Zhao, Chen & Li, 2016).
There are two types of decision processes (Rawat & Rawat, 2016). The first is the
normative decision making process, which describes how decisions should be made, not
how they are actually made. The other type is descriptive decision making. The main
difference between the former and the latter is the element of artificial intelligence
(AI), which entails how humans think and their rationale. This AI has several facets:
(i) logical decision making, (ii) perception, (iii) planning, (iv) learning, and (v)
action. Descriptive decision making also includes 1) identifying the actual issue or
current situation, 2) the ability to collect data and details pertaining to the problem,
3) the ability to arrive at an intermittent solution by generating feasible processing
options, 4) conducting intermittent analysis on the solution arrived at, and
5) implementing the intermittent solution and taking action to implement the final
decision. For the purposes of this study we use descriptive decision making. The final
decision model is used for successful classification. The consequent processes to attain
a successful decision fusion are discussed further in this section (Rawat & Rawat, 2016).
The sensors are embedded with a fuzzy logic controller. The purpose of the fuzzy logic
controller is to calculate the confidence level of the collected data; this depends on
the current condition of the sensor nodes. Each sensor node is responsible for collecting
the data and the confidence factor, and the data fusion algorithm collects both. To
derive the confidence factor, the signal to noise ratio of a sensor input device is
collected: a poor signal to noise ratio is indicative of poor sensor health and therefore
yields a lower confidence factor (Tian, Sun & Li, 2016).
The fuzzy logic controller determines whether the sensor data and the signal to noise
ratio are in an acceptable range (Mönks et al., 2016); this range is determined from
training data. A sensor that is out of range will have a confidence factor between 0 and
100 percent, and the confidence value of a sensor can be 100 percent only if its health
and environmental factors are desirable. A minimum acceptable value is decided for each
sensory input; if the screened value is below this acceptable range the value is
rejected. For fusion purposes the message packet also includes the node id of the source
node (Mönks et al., 2016).
DF = ((CF1 × D1) + (CF2 × D2) + (CF3 × D3) + (CF4 × D4) + ... + (CFn × Dn)) / (CF1 + CF2 + CF3 + CF4 + ... + CFn)    (1.6)
Cluster heads are responsible for aggregating data from their subordinates. The multi
sensor fusion process is started by the cluster heads at the completion of each round of
the data collection process. The data aggregation process is shown in equation 1.6,
where Dn represents the data received from similar cluster members and CFn is the
confidence factor of the data collected by each sensor node. Data fusion is conducted by
combining the data collected by different cluster heads; in other words, data fusion
combines data with higher certainty levels. Because sensor fusion combines data from
different sources, it provides a better view of the environment (Ari, Yenke, Labraoui,
Damakoa & Gueroui, 2016), (Dehghani, Pourzaferani & Barekatain, 2015).
As an example of the overall process, consider data from three temperature units, four
tensile yield units and three component analysis units. Let the temperature data
collected be 10, 15 and 20 degrees Celsius, with confidence factors of 75, 65 and 41
percent respectively. The DF of temperature would then be approximately 14.06;
similarly, suppose the DF of tensile yield and component analysis are 49.8 and 26.8
respectively. The feature fusion matrix will then hold the values 14.06, 49.8 and 26.8.
The feature fusion matrices from the different sources (cluster heads) are finally
fused. After the fusion is completed, if no change has occurred at any cluster head no
packets are sent to the base station; alternately, if a constant change occurs, the base
station is alerted. The successfully fused data is finally stored for training. The
entire process is shown in the flowchart in Figure 1.7.
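Equation 1.6 and the temperature example can be sketched as below; by the stated formula, the confidence-weighted average of 10, 15 and 20 degrees with confidence factors 75, 65 and 41 comes out near 14.06. The function name is illustrative.

```python
def decision_fusion(readings, confidence_factors):
    """Equation 1.6: confidence-weighted average of the readings
    reported by cluster members."""
    if len(readings) != len(confidence_factors):
        raise ValueError("need one confidence factor per reading")
    weighted = sum(cf * d for cf, d in zip(confidence_factors, readings))
    return weighted / sum(confidence_factors)

# Temperature readings (deg C) with confidence factors in percent.
df_temperature = decision_fusion([10.0, 15.0, 20.0], [75.0, 65.0, 41.0])
```

Note that the same value results whether the confidence factors are given as percentages or as fractions, since the scale cancels in the ratio.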
The Gaussian kernel and the resulting class-conditional density used by the PNN
classifier are given in equations 1.7 and 1.8.
(1 / ((2π)^{n/2} σ^n)) · exp[−(x − x_ij)ᵀ(x − x_ij) / (2σ²)]    (1.7)
p_i(x) = (1 / ((2π)^{n/2} σ^n)) · (1/N_i) · Σ_{j=1}^{N_i} exp[−(x − x_ij)ᵀ(x − x_ij) / (2σ²)]    (1.8)
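Equation 1.8 can be sketched directly in Python. This is an illustrative implementation of the standard PNN density estimate as written above, with hypothetical names and inputs; the class with the highest density wins the classification.

```python
import math

def pnn_class_density(x, class_patterns, sigma):
    """Equation 1.8: PNN (Parzen window) estimate of the
    class-conditional density p_i(x) from the N_i training
    patterns x_ij of class i, with smoothing parameter sigma."""
    n = len(x)
    n_i = len(class_patterns)
    norm = (2.0 * math.pi) ** (n / 2.0) * sigma ** n
    total = 0.0
    for xij in class_patterns:
        # (x - xij)^T (x - xij): squared Euclidean distance
        sq = sum((a - b) ** 2 for a, b in zip(x, xij))
        total += math.exp(-sq / (2.0 * sigma ** 2))
    return total / (norm * n_i)

# Density of a query point against two stored training patterns.
density = pnn_class_density([0.0], [[0.0], [2.0]], 1.0)
```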
x⃗_{k+1} = A·x⃗_k + B·u⃗_k + w    (1.9)
Figure 1.3: Levels of Data Fusion, adapted from (Duro, Padget, Bowen, Kim & Nassehi, 2016)
Figure 1.4: Sensor Fusion Process
Figure 1.5: Fused Feature matrix
Figure 1.6: Sensor Fusion based PNN classifier
Figure 1.7: Sensor Fusion flow chart
Chapter 2
Literature Review
2.1 Introduction
The advent of the IoT, wireless capabilities and an increasing number of devices with
varying features and potential are changing the business paradigm radically. Cloud
computing allows its users to tap the full potential of web services to create powerful,
loosely coupled infrastructures that are autonomous (Hao & Helo, n.d.).
Cloud based manufacturing (CBM) is a scalable, agile infrastructure for manufacturing
operations (Wu, Rosen, Wang & Schaefer, 2015b). CBM has a decentralized structure and
functions as part of a network. CBM consists of loosely coupled systems working together
in real time; the backbone of CBM comprises several enabling technologies, such as cloud
computing and service oriented architecture (SOA) (Wu, Rosen, Wang & Schaefer, 2015a).
It can therefore be inferred that the cloud computing framework makes it possible to
collect, assemble and use IoT data; cloud computing is thus the center of the
manufacturing network. There are three dimensions to a cloud based manufacturing
network:
1 IoT (Internet of Things): This represents all the devices (and their respective
sensors) connected to a cloud manufacturing network, along with the data created by
these devices.
2 IoS (Internet of Services): This includes all the underlying services, including
manufacturing services, cloud services and web services in general. Manufacturing
processes in general include several components such as raw material, semi-finished
products, demand and supply data, and product and process structure information. In
order to present a formal representation of these processes and information, in
addition to the conventional compute, storage and network services included in IaaS,
PaaS and SaaS, cloud manufacturing has many more services. These may include
integration as a service, simulation as a service, maintenance as a service and design
as a service (Schroth & Janner, 2007).
3 IoU (Internet of Users): IoU comprises the entities interacting with the system,
including service providers (cloud service providers and others), operators and end
users (consumers) (Wu et al., 2015a).
Wu et al. (2015a) present eight requirements for cloud manufacturing; these are listed
in Table 2.1. They include cloud based shared storage, Internet connectivity with the
ability to share data and information, and a multitenancy architecture. This study is
based on the assumption that these requirements are true and met for all cloud based
manufacturing units.
The system architecture of cloud manufacturing follows a layered approach:
1 Perception layer.
2 Network layer.
3 Middleware layer.
Table 2.1: Assumptions to cloud manufacturing
R1: To connect individual service providers and consumers in the networked
manufacturing setting, a CBM system should support social media-based networking
services. Social media such as Quirky allows users to leverage crowd-sourcing in
manufacturing. In addition, social media does not only connect individuals; it also
connects manufacturing-related data and information, enabling users to interact with a
global community of experts on the Internet.
R2: To allow users to collaborate and share 3D geometric data instantly, a CBM system
should provide elastic, cloud-based storage that allows files to be stored, maintained,
and synchronized automatically.
R3: Should have an open-source programming framework that can process and analyze big
data stored in the cloud, e.g. MapReduce.
R4: To provide SaaS applications to customers, a CBM system should support the
multitenancy architecture. Through multi-tenancy, a single software instance can serve
multiple tenants via a web browser. According to Numecent, a cloud platform called
Native as a Service (NaaS) has been developed to deliver native Windows applications to
client devices. In other words, NaaS can "cloudify" CAD/CAM software such as SolidWorks
without developing cloud-based applications separately. With such a multi-tenant
platform, such programs can be run as if they were native applications installed on the
user's device.
R5: To allocate and control manufacturing resources (e.g., machines, robots,
manufacturing cells, and assembly lines) in CBM systems effectively and efficiently,
real-time monitoring of material flow and of the availability and capacity of
manufacturing resources becomes increasingly important in cloud-based process planning,
scheduling, and job dispatching. Hence, a CBM system should be able to collect
real-time data using IoT technologies such as radio-frequency identification (RFID) and
store these data in cloud-based distributed file systems.
R6: Should provide IaaS, PaaS, HaaS, and SaaS applications to users.
R7: Should support an intelligent search engine to help users answer queries.
R8: Should provide a quoting engine to generate instant quotes based on design and
manufacturing specifications.
4 Application layer.
5 Business layer.
The lowest layer consists of sensors and sensory devices, including RFID, GPS cards
etc.; this layer is called the perception layer. The second layer consists of network
modules such as GPRS, Zigbee, 3G, infrared and Bluetooth; this layer provides the means
of transmitting information perceived by the sensor devices to middleware web services,
IoT gateways and user interface modules, which capture the transmitted data and pass it
on to the next layer. The middleware layer takes care of web services and their
interaction; it also includes database modules for storage, for example MongoDB and
other NoSQL and MySQL databases (Wu et al., 2015a). This layer also consists of
ubiquitous computing and cloud computing modules; this is a very important module,
especially in cloud manufacturing (Mitton, Papavassiliou, Puliafito & Trivedi, 2012),
as it allows users to extract the data from anywhere and can also be used to attain
some degree of remote control, as stated by (smart home). The application layer
typically consists of user interfaces or management applications that use or manipulate
the data from the lower layers. The business layer is used for designing business
process flows; these include model data, flow charts and graphs (Y. Zhang et al., 2012).
The main features that characterize the cloud manufacturing framework are 1) data
availability and 2) complexity. These features make the framework a viable option for
the IoT. Cloud manufacturing provides high-speed access to on-demand resources and
quick storage options for sensory data. The cloud computing model of manufacturing
consists of loosely coupled modules interacting with each other over a common backbone
(Wu et al., 2015a). Because of on-demand compute and storage, the cloud manufacturing
model is able to process large data sets, which is not possible in conventional
manufacturing. This manufacturing model follows an input-reply approach wherein a
master node divides the input task into sub-tasks, which are then assigned to be
processed by child nodes (Schaefer, 2014), (Hao & Helo, n.d.), (Kyriazis & Varvarigou,
2013), (Yang, Shi & Zhang, 2014). This allows complex operations to be assigned and
completed in parallel. On completion of a sub-task the child node notifies the parent
node. The correspondence between parent and child nodes takes place through HTTP codes,
as follows.
1 1XX level codes: represent informational notifications (optional errors).
2 2XX level codes: indicate that the task was completed successfully by a sub-node.
3 3XX level codes: indicate redirection, meaning that the node has several other
child nodes and is waiting for its sub-tasks to be executed.
4 4XX level codes: represent errors in child nodes.
5 5XX level codes: represent errors in the parent nodes.
These codes are analogous to the REST API HTTP codes used for web services.
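The five code classes above can be sketched as a small lookup; this is an illustrative mapping, and the function name and wording of the outcome labels are hypothetical.

```python
def status_family(code):
    """Map a parent/child correspondence code to its outcome class
    (analogous to the REST API HTTP code classes)."""
    return {
        1: "informational notification",
        2: "sub-task completed successfully",
        3: "redirection: waiting on child sub-tasks",
        4: "error in a child node",
        5: "error in the parent node",
    }.get(code // 100, "unknown")
```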
Another assumption of the cloud manufacturing model is that the machines and devices in
one layer of the hierarchy have the same make and configuration; that is, the child
devices under one parent are identical to each other. This makes the QoS of these
machines and devices quantifiable (Lartigau, Xu, Nie & Zhan, 2015), (Huang, Li & Tao,
2014).
From studies such as (Aydogdu, Akın & Saka, 2016), (Chakaravarthy, Marimuthu,
Ponnambalam & Kanagaraj, n.d.), (Loubiere, Jourdan, Siarry & Chelouah, n.d.) and
(Anuar, Selamat & Sallehuddin, n.d.) it can be inferred that an allocation optimization
algorithm like ABC (artificial bee colony) is well suited to loosely coupled systems
requiring precision in their operations. The ABC algorithm gives the optimal task
allocation path in a cloud for manufacturing. In this algorithm there are three
entities: 1) worker nodes, 2) scout nodes, and 3) onlooker nodes (Anuar et al., n.d.).
The worker nodes do a job using one resource; there has to be a one-to-one relationship
between a worker node and a resource. The onlookers are nodes waiting for a signal of
completion from the worker nodes. According to the input-reply approach mentioned in
section 2.1, worker nodes can be considered master nodes, and an onlooker node is a
child node that depends on its master to complete the processing of its jobs and then
allocates the sub-tasks to its own children; if a child node is unresponsive, its tasks
are routed to other child nodes. It is for this reason that all child nodes under a
master node need to have the same make (in terms of configuration and machine model
they need to be exact replicas of each other). The scout nodes are master nodes that
have encountered an error. A master node that encounters an error gives a 5XX level
error code; its jobs are rerouted to its replicas, which then process the failed
master's jobs and communicate with its children. This model is suitable for cloud
manufacturing, as clouds provide scalable, on-demand resources and routing facilities
to under-allocated reserves to speed up the operation (buyya cloud char).
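The worker/onlooker/scout roles can be illustrated with a deliberately simplified ABC-style search; this is a sketch of the idea, not the full algorithm from the cited studies, and the cost function, names and parameters are hypothetical.

```python
import random

def abc_allocate(tasks, workers, cost, iterations=500, seed=1):
    """Simplified ABC-style search for a low-cost allocation of tasks
    to worker nodes.  `cost` scores a dict {task: worker}.  An
    employed-bee move perturbs the current allocation, an onlooker
    keeps the better of the two, and a scout abandons an allocation
    that has stopped improving and re-seeds a random one."""
    rng = random.Random(seed)
    current = {t: rng.choice(workers) for t in tasks}
    current_cost = cost(current)
    best, best_cost = dict(current), current_cost
    stale = 0
    for _ in range(iterations):
        candidate = dict(current)
        candidate[rng.choice(tasks)] = rng.choice(workers)  # employed-bee move
        c = cost(candidate)
        if c < current_cost:            # onlooker prefers the richer source
            current, current_cost, stale = candidate, c, 0
            if c < best_cost:
                best, best_cost = dict(candidate), c
        else:
            stale += 1
        if stale > 25:                  # scout: abandon and re-seed
            current = {t: rng.choice(workers) for t in tasks}
            current_cost = cost(current)
            stale = 0
    return best, best_cost
```

With a cost function that, say, penalizes tasks placed on an overloaded worker, the search converges toward an allocation that spreads the sub-tasks cheaply.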
Uncertainties in manufacturing operations can be defined as discrepancies between the
desired output and the actual one (Hasani, Zegordi & Nikbakhsh, 2012a). Uncertainties
in manufacturing can be categorized as aleatoric and epistemic. Aleatoric uncertainties
are represented by unknown outcomes in every run (Senge et al., n.d.). They can be
caused by human cognitive impairments that result in ambiguities and vagueness (Hasani,
Zegordi & Nikbakhsh, 2012b), or by inadequate, erroneous or missing data (Francalanza,
Borg & Constantinescu, 2016). Epistemic uncertainties arise from wrongly modeling data
because of assumptions and approximations (Urbina, Mahadevan & Paez, 2011). In this
study we consider overall uncertainties and whether real time data available through
wireless sensory networks (made accessible by scalable cloud storage) can help the
machines to take decisions (with the least impact on other production processes) when
any such uncertainties arise (Hasani et al., 2012b), (Francalanza et al., 2016),
(Hasani et al., 2012a).
2.2 Autonomic Machines
In the words of (Buyya, Calheiros & Li, 2012), autonomic systems are self-regulating,
self-healing, self-protecting, and self-improving; in other words, they are
self-managing. Many articles on autonomic computing (Parashar & Hariri, 2005), (Yahya,
Yahya & Dahanayake, n.d.) have compared the concept with human neural decision making
and rationalization. According to (Parashar & Hariri, 2005), the human nervous system
is the best example of an autonomic system present in nature: with the help of sensory
inputs, it is able to monitor and adapt to changes both internal and external to
itself.
According to (Maggio et al., 2012), an autonomic computing system must possess three
compulsory requirements:
1 Automatic: the system must be able to carry out its operations without manual
intervention. This inherently includes a thin knowledge layer containing know-how
about the system (He, Cui, Zhou & Wang, 2016).
2 Adaptive: the ability of the system to take decisions when an uncertain event occurs,
including alterations to its state, configuration, functionality etc. (He et al.,
2016), (Yahya et al., n.d.). In an operational context this includes dealing with
temporal and spatial changes and uncertainties occurring on both long and short term
bases. The system must be able to predict and anticipate demand on its resources to
avoid any downtime.
3 Aware: an autonomic system must be aware of both its operational context and its
state as compared with its operative environment as a whole. In order to be adaptive,
a machine or system must be aware of its operative environment. This means that a
system must know itself in the context of what resources are accessible, and why and
how it is connected to other operatives (Parashar & Hariri, 2005), (Maggio et al.,
2012).
The majority of articles that deal with such self-adaptive, self-managing systems, such
as (Maggio et al., 2012), (He et al., 2016), (Yahya et al., n.d.), manifest certain
common characteristics. These systems are often 'reflective': they possess the ability
to reason about their own state and environment (White, Hanson, Whalley, Chess &
Kephart, 2004). Secondly, these systems are able to rationalize and take decisions at,
or in a close time-frame to, run time (while the machine or system is running). Based
on the work of (White et al., 2004), certain paradoxical features of autonomic
computing are:
1 Uncertainty: presented in section 2.1.
2 Environmental non-determinism: the environment in which the system works requires a
different response in every run (and therefore requires decision making at run time).
3 Extra-functional capabilities: features that may not inherently be part of the
system, i.e. the degree to which a machine combats an anomaly that is not part of its
design. This represents the system trade-offs that arise from unexpected outcomes.
4 Incomplete control of the system or its components: this could be due to embedded
components, human involvement in sub-processes, or continual action.
The authors (Yahya et al., n.d.) have propounded that autonomic systems follow a
feedback process. A feedback control loop is the continual process behind an autonomic
component's rationale and logic. This process is depicted through four canonical
activities (collect, analyze, decide and act), described below.
1 Collect: collection of data from sensors, events and probes. This involves collecting
data from executing processes in real time; the data collected includes contextual
information about component states (Schaefer, 2014). The data is filtered for any noise
(unwanted or redundant sources) and stored for future use. This data can be used to
generate a trend pattern by comparing current and past states.
2 Analyze: the stored data is analyzed to generate inferences and symptoms.
3 Decide: resolve a future course of action, i.e. how to act when such an event arises
in the future.
4 Act: machine components need to be checked to determine which are the managed
components and how these can be manipulated (e.g., by new algorithms or parameter
tuning). Changes to the system may be pre-configured (the error has already occurred in
the past, and was logged and resolved) or assembled, composed or generated at run time
(opportunistically). This step involves acting on executing processes and their
context.
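The four canonical activities above can be sketched as a single pass of a feedback loop; this is a minimal illustration, with hypothetical names, a mean-of-readings "symptom" and a threshold standing in for the trained decision logic.

```python
def control_loop(readings, threshold, act):
    """One pass of the collect-analyze-decide-act feedback loop.
    Collect: keep readings, filtering out noise (here, None values).
    Analyze: derive a symptom, here the mean of the readings.
    Decide: compare the symptom against a configured threshold.
    Act: invoke the handler that manipulates the managed component."""
    collected = [r for r in readings if r is not None]   # collect + filter
    if not collected:
        return None
    symptom = sum(collected) / len(collected)            # analyze
    if symptom > threshold:                              # decide
        act(symptom)                                     # act
    return symptom
```

In a running system the returned symptom would also be stored, so that current and past states can be compared to build the trend pattern mentioned above.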
2.3 IoT, WSN
According to a Gartner forecast, "6.4 billion connected things will be in use worldwide
in 2016, up 30 percent from 2015, and will reach 20.8 billion by 2020. In 2016, 5.5
million new things will get connected every day". These IoT devices (things) contain
many low power sensors (Y. Zhang et al., 2012).
IoT sensor networks observe and monitor environments that change rapidly over time.
This dynamic behavior is caused by internal factors (integral parts of the system
design that require a certain degree of elasticity) or by external factors (unforeseen
changes or behavior) (Chi, Yan, Zhang, Pang & Da Xu, 2014). IoT networks are typically
composed of multiple autonomous, small, cheap, low power sensor nodes. In a cloud
manufacturing paradigm these nodes forward sensed data to cloud storage (Soliman,
2014), where it is then usually used for further processing. The sensors can measure
pressure, impact, temperature, thermal acoustics and many other dynamic parameters
within an industrial set-up. The IoT has tremendous potential for creating generic
industrial machines that reduce false positives and the downtime faced by machines.
In order to achieve this automation task we have to look at 1) event scheduling,
2) reliability of data (data should be free of noise that changes the pattern of
learning and prediction, and the data set should be non-redundant), 3) data
availability, 4) data security, 5) data aggregation, 6) localization, 7) clustering of
nodes (IoT3), and 8) fault detection. In the context of this study:
1 Knowledge acquisition (unbiased), in order to develop automation and enhance the
productivity and performance of machines in an industrial setup. The outcome of this
step is the training data (Kyriazis & Varvarigou, 2013).
2 Optimization: this is done by detecting and describing inconsistencies through the
training data, finding patterns in these inconsistencies and generating algorithms for
the machines (White et al., 2004).
Machine learning is suitable for machine automation through an IoT sensor network
because:
1 IoT sensors can monitor a dynamic environment typified by frequently changing data
over time (Mitton et al., 2012). For example, a node's current location may change due
to eroding soil, landslides, weather turbulence due to wind, or floods. It is desirable
to use sensors because they can adapt to this environment and operate efficiently.
2 IoT sensors can be used to collect new knowledge from unreachable locations. Using
this knowledge, system designers can develop learning algorithms that are able to
calibrate themselves to any newly acquired expertise (Kyriazis & Varvarigou, 2013).
3 IoT sensors can be deployed in complicated machine structures where designers cannot
build an accurate model of the system behavior. This means there is no estimate of how
a machine will behave at run time.
4 The majority of articles on this topic (m2miot, cmfd io2 iot3) propose an M2M
framework that uses the IoT, but very few speak about using the data, mining knowledge
and extracting important correlations from it. These correlations represent learning
trends that enable automation with minimum human intervention.
2.3.1 Wireless sensory network
Wireless sensor networks are heterogeneous distributed systems that primarily work on
data collected from a physical environment. In its most fundamental form, a sensor node
is a small, autonomous, short range transceiver carrying sensors. These sensors are
able to cooperate and communicate with each other wirelessly (Gokturk, Gurbuz & Erman,
2016). Sensor nodes sense (or gather) information about a physical object. Although a
single sensor node has only a limited range, a wireless sensor network composed of many
such nodes can provide detailed measurements encompassing a large region (ref).
Wireless sensor networks have many applications, such as in agriculture, nature
monitoring (e.g. monitoring active volcanoes, forest fires etc.), industry (measuring
temperature, pressure etc.) and health monitoring systems (Antonopoulos & Voros, 2016).
A WSN is composed of several sensor subsystems.
1) The sensor node is the most important element in a wireless sensory network. The
functionality of a sensor node includes sensing data, converting the raw physical
signal to digital form, and interpreting the communication protocol to send and receive
data packets. To achieve this, a sensor node must be equipped with certain physical
resources. The sensor resources can be further sub-classified into four subsystems
(Y. Wang, Chen, Wu & Shu, 2016).
2) Sensor conversion unit: senses a value and converts it into a digital form that can
be transmitted over a network (Y. Wang et al., 2016).
3) Sensor processing unit: stores gathered and configured data in local memory; this
also includes assembly functions to further process the gathered data, information or
messages (Y. Wang et al., 2016).
4) Sensor interpreter unit: interprets received data frames according to the protocol
and assembles the data packets to be transmitted over the wireless network, either to
other nodes or to the cluster head (Y. Wang et al., 2016).
5) Power unit: responsible for alerting the WSN monitoring interface with sensor health
diagnostics and the sensor battery status. This component regularly sends this message
to the cluster head for calculating the confidence factor (Y. Wang et al., 2016).
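The subsystems above can be sketched as a small node abstraction; this is an illustrative data structure, and the class name, methods and packet fields are hypothetical rather than taken from the cited design.

```python
from dataclasses import dataclass, field

@dataclass
class SensorNode:
    """Sketch of the node subsystems described above: conversion,
    processing, interpretation and the power unit."""
    node_id: str
    battery_percent: float = 100.0
    memory: list = field(default_factory=list)

    def convert(self, raw):
        """Sensor conversion unit: raw physical value -> digital form."""
        return round(raw, 2)

    def process(self, value):
        """Sensor processing unit: store the converted value locally."""
        self.memory.append(value)

    def frame(self, value):
        """Sensor interpreter unit: assemble a packet for the cluster
        head, including the node id (needed for fusion) and the battery
        status (used for the confidence factor)."""
        return {"node": self.node_id, "value": value,
                "battery": self.battery_percent}
```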
Proper implementation and working of all subsystems affects network performance. In
wireless sensor networks the most common attributes adversely affecting the performance
of the network are the frame loss rate, bandwidth, QoS (quality of service), and power
consumption (Gokturk et al., 2016). Since some sensors are placed in locations that are
extremely hazardous or too expensive to reach, it is not possible to connect these
sensors to the power grid; they are therefore equipped with batteries that have to be
frequently replaced (Antonopoulos & Voros, 2016).
Low power consumption of sensor nodes is crucial for several reasons, such as the fact
that sensor nodes are often placed in areas where there is no possibility (or it is too
expensive, dangerous etc.) of connecting them to the power grid (Edwards-Murphy, Magno,
Whelan, O'Halloran & Popovici, 2016). They therefore have to be equipped with
batteries, which must occasionally be replaced with fresh ones; these periods may
differ from a few days to years. As mentioned, sensor nodes are often placed in areas
with low accessibility, which underlines the importance of the power consumption
attribute when dealing with a WSN. The battery status is also an important factor when
calculating the confidence factor (Edwards-Murphy et al., 2016).
2.3.2 Base Station
A base station can be viewed as an entry point to a sensor network. A base station (BS)
is usually computationally more advanced than a sensor node (Arkin et al., 2014). A
base station possesses larger memory (and could be connected to cloud storage) and may
be connected to a power grid. The base station is responsible for gathering data from
the different computational cluster heads (Cayirpunar, Kadioglu-Urtis & Tavli, 2015).
All analysis of the content in a wireless sensor network is done by base stations. A BS
also handles individual node configuration and routing (Gao, Zhang, Qi, Li & Tong,
2016).
The base station is usually composed of a wireless transceiver, a monitoring interface
and possibly a computing terminal station. A minimalistic design of a wireless sensor
network includes a monitoring module, running on the terminal in the base station, that
receives data into a database (Devaraju, Suhas, Mohana & Patil, 2015).
2.3.3 Routing in IoT sensory network
In designing a sensory network corresponding to a cloud based manufacturing
infrastructure, it is important to consider the power and memory constraints of sensory
nodes, changes in topology, communication failures, localization of data,
decentralization, node clustering and data aggregation (Wu et al., 2015a), (Chi et al.,
2014). To design a routing protocol for IoT sensor networks it is mandatory to consider
the corresponding design challenges, such as scalability (provided by the cloud), data
coverage, energy consumption (of sensor nodes), fault tolerance etc. (cloud mfgd 2).
Sensor nodes are inherently low power devices with limited processing capabilities, low
bandwidth and little memory. There are two conventional ways of representing the
routing problem of sensory nodes in a cloud network.
1 The cloud network is represented as a graph G = (V, E), where V represents the nodes
and E the bi-directional communication links. The optimum routing is represented as the
minimal cost of starting at the source and visiting all the destination vertices along
the available graph edges.
2 A spanning tree T = (V, E), the cost of which includes travelling to all of its child
nodes. This solution is complex even when the entire topology is already known.
Machine learning makes it possible for a system (node) to learn from its experience,
select the most effective (lowest-cost) route and adapt to a dynamic environment.
Learning to route dynamically is cheap, saves energy and prolongs the life of a cloud
based IoT sensor network (as it can adapt to any additions or changes). Guestrin,
Bodik, Thibaux, Paskin and Madden (2004) describe the use of a distributed regression
framework to model the routing problem. They propose that, instead of transmitting
raw readings to one another or outside the sensor network, the nodes transmit only
a subset of model parameters, significantly reducing the communication required.
(Barbancho, León, Molina & Barbancho, 2007) have presented optimal routing using
self-organizing maps (SOM). A SOM is a neural network that uses unsupervised
learning to produce a discretized representation of the training data, called a map.
They use Dijkstra's algorithm to form the shortest path from the network base station
to every node in the network.
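To make the graph formulation above concrete, the following sketch (with a hypothetical four-node topology and edge costs) uses Dijkstra's algorithm to compute the lowest-cost route from a base station to every node:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from source to every node in an undirected,
    weighted graph given as {node: [(neighbour, cost), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical four-node sensor network; 'BS' is the base station.
network = {
    "BS": [("n1", 2), ("n2", 5)],
    "n1": [("BS", 2), ("n2", 1), ("n3", 4)],
    "n2": [("BS", 5), ("n1", 1), ("n3", 2)],
    "n3": [("n1", 4), ("n2", 2)],
}
print(dijkstra(network, "BS"))  # {'BS': 0, 'n1': 2, 'n2': 3, 'n3': 5}
```

In a real deployment the edge costs would come from link metrics such as energy or signal strength rather than fixed weights.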
A WSN is composed of units with limited capabilities, such as low energy, bandwidth
and storage capacities. An efficient routing algorithm must reflect these shortcomings.
Some commonly used routing protocols are as follows (Y. Wang et al., 2016):
1) Flooding: In this type of routing a message to an uncertain location is broadcast
to all the neighbouring nodes. Each receiver of the message again broadcasts it to
all of its neighbours, and the process is repeated until all nodes have received the
message. While this type of routing has high latency, it is suitable for systems with
low power requirements.
2) Data centric routing: In this type of routing algorithm the sender of the sensed
data broadcasts the metadata of a message (mostly the node id and the timestamp)
as an advertisement; all nodes that do not have the data respond with a request, and
the sender then transmits the data only to those nodes. Another flavour of this
protocol is directed diffusion routing, where the nodes interested in a topic set their
gradient to zero; data is sent to these nodes only, and the gradient increases on
every resend.
3) Table driven routing: The delivery routes for a message are stored in a table,
which the sensor nodes fill before the routes are actually used. In a wireless sensor
network this approach is very useful where there is a low latency requirement.
4) On demand routing: Routing paths for a message are decided only when needed.
This approach is the opposite of table driven routing: it favours low energy
consumption over low latency, and is therefore popularly used in many wireless
sensor network implementations.
5) Hierarchical routing: In this method similar sensor nodes are grouped together
into a cluster. Each cluster has a cluster head, which has more advanced
computational abilities and energy resources. A sensor node sends data only to the
cluster head, saving the bandwidth of its peers, and the cluster head alone handles
the routing for the entire cluster. This approach efficiently handles the problems of
latency, bandwidth and power constraints.
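The flooding protocol described above can be simulated in a few lines (hypothetical topology; each node rebroadcasts a received message exactly once, so the hop count at which every node first hears the message falls out of a breadth-first traversal):

```python
from collections import deque

def flood(neighbours, source):
    """Simulate flooding: each node rebroadcasts once; return the hop
    count at which every node first receives the message."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for n in neighbours[node]:
            if n not in hops:          # a node rebroadcasts only once
                hops[n] = hops[node] + 1
                queue.append(n)
    return hops

# Hypothetical five-node topology.
topology = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D"],
}
print(flood(topology, "A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 3}
```

The "rebroadcast only once" guard is what keeps flooding from looping forever in a cyclic topology.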
2.3.4 Communication with base station
There are two models through which communication between the cluster heads and the
base station takes place (Cayirpunar et al., 2015). 1) Push model: the sensor nodes
transmit data only when they are ready, while the base station passively listens over
the sensor network. 2) Pull model: the base station queries for data in a timely
manner, and the sensor nodes respond to the query.
2.4 Data Fusion
Multi-sensor data fusion is the combination of data from several sources to form a
unified model. Multi-sensor fusion is used to give computing systems human-like
perception, although achieving exact emulation of the human brain is still not possible.
Human beings can perceive object attributes like depth, texture, colour, temperature
etc. simultaneously as a single model; without physical touch or contact, humans can
estimate the correct dimensions of an object. Multi-sensor fusion is currently most
popular in image and spectral fusion, generating perceptive imaging guidance systems
widely used in robotic and navigation systems (Tong et al., 2016).
The Joint Directors of Laboratories (JDL) defines data fusion as a "multilevel,
multifaceted process handling the automatic detection, association, correlation,
estimation and combination of data and information from several sources". The data
can be extracted from a single source or from multiple sources (De Vin, Holm & Ng, 2010).
2.4.1 Advantages of data fusion
1) Robustness and reliability: fused data coming from multiple sources tends to be
redundant, which enables the system to keep working in case of partial failure. An
additional sensor can add another dimension to sensor perception.
2) Increased confidence factor: multiple sensors provide extended spatial and
temporal dimensions to sensor data, which increases the confidence factor. It also
allows data measured by one sensor to be confirmed by the measurements of other
sensors (Tong et al., 2016).
3) Reduced uncertainty and ambiguity.
4) Reduced interference: by increasing the dimensionality of the measurement units,
the resultant system becomes less vulnerable to external interference.
5) Improved perception: the resultant value of a fused property has a better
resolution than a measurement taken by a single sensor.
2.4.2 Motivation of sensor fusion
The physical single-sensor environment suffers from certain shortcomings.
1) Sensor failure: the breakdown of a single sensor causes the complete loss of
perception of a physical object (Mönks et al., 2016).
2) Spatial coverage: one sensor, or even a cluster of sensor nodes, has limited
spatial coverage. For example, the temperature read from a single sensor represents
one measurement point; it may not be very accurate and fails to capture the average
temperature of a boiler.
3) Temporal coverage: some sensors limit the maximum frequency of measurement, as
they need a particular time frame to finish.
4) Imprecision: data measured by a single sensor over a period of time may not be
precise. A group of sensors measuring the same quantity gives a more accurate
reading (see figure 2.2 on page 54).
5) Uncertainty: uncertainty arises when sensors cannot measure all relevant
attributes within their perception range, leading to missing and inconsistent values
(see figure 2.2 on page 54).
2.4.3 Issues with Data fusion
1) Imperfect, inconsistent and spurious data.
2) Correlation of data: bias in the data due to external, uncontrollable factors.
3) Data dimensionality: compression loss at the lower-level sensors after data
fusion takes place.
2.5 Sensor fusion methodologies
2.5.1 PNN Classifier
A probabilistic neural network (PNN) is a classifier based on Bayesian classification
theory and the estimation of probability density functions. A PNN is predominantly a
classifier that maps multiple inputs to a number of classes, and it can be used to
process multiple inputs to generate decision ranges.
A probabilistic neural network consists of a probability density function estimator
for each of its classes. The classifier makes a decision after calculating the
probability density function of each class from the training examples. The decision
to assign an input to class k rather than class j is mathematically expressed as
p_k f_k > p_j f_j (2.1)
Here p_k is the a priori probability of occurrence of class k and f_k is the
probability density function of class k. This is called a neural network because it
maps onto a two-layer network.
Architecture of PNN
A PNN consists of four layers: 1) input layer, 2) pattern layer, 3) summation layer
and 4) decision layer, as shown in figure 2.3 on page 54.
The input layer receives the multiple inputs to the classifier and is fully connected
to the pattern layer; it performs no further computation and only distributes the
inputs to the pattern-layer neurons. The pattern layer holds one neuron per training
instance and is partially connected to the summation layer, which adds the outputs of
the pattern-layer neurons corresponding to the training instances of each class.
How does a PNN work
The input is propagated from the input layer to the pattern layer, and once the
patterns similar to the input are selected, the output nodes are computed. In
equation 2.2, n is the number of input features, σ is the smoothing coefficient and
x_ij is the j-th training instance of class i. The PNN summation layer computes the
probability of pattern x being classified in class i by averaging the outputs of the
neurons belonging to that class (Mahendran & Dhanasekaran, 2015).
\varphi_{ij}(x) = \frac{1}{(2\pi)^{n/2}\,\sigma^{n}} \exp\!\left[-\frac{(x - x_{ij})^{T}(x - x_{ij})}{2\sigma^{2}}\right] \qquad (2.2)
Here x is the input instance, and the class-conditional density estimate is

p_{i}(x) = \frac{1}{(2\pi)^{n/2}\,\sigma^{n}} \, \frac{1}{N_{i}} \sum_{j=1}^{N_{i}} \exp\!\left[-\frac{(x - x_{ij})^{T}(x - x_{ij})}{2\sigma^{2}}\right] \qquad (2.3)
In equation 2.3, N_i denotes the total number of training samples in class i. The
training class will contain the probabilities of the losses associated with making an
incorrect decision; this a priori information is used by the decision unit to
calculate the output from the summation neurons (Ben Ali, Saidi, Mouelhi,
Chebel-Morello & Fnaiech, 2015).
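As a sketch of the layers described above (not the thesis's implementation; the feature vectors and class labels are hypothetical, and uniform priors are assumed), a minimal PNN with a Gaussian-kernel pattern layer, a per-class averaging summation layer and an arg-max decision layer:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Probabilistic neural network: the pattern layer evaluates a Gaussian
    kernel per training instance, the summation layer averages the kernels
    per class, and the decision layer picks the class with the highest
    estimated density (uniform priors assumed)."""
    x = np.asarray(x, dtype=float)
    scores = {}
    for c in set(train_y):
        members = np.asarray(
            [xi for xi, yi in zip(train_X, train_y) if yi == c], dtype=float)
        diff = members - x
        kernels = np.exp(-np.sum(diff * diff, axis=1) / (2 * sigma ** 2))
        scores[c] = kernels.mean()   # summation layer: average per class
    return max(scores, key=scores.get)

# Hypothetical 2-D feature vectors for two plastic classes.
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["PET", "PET", "ABS", "ABS"]
print(pnn_classify([0.15, 0.15], X, y))  # PET
```

Note that the normalisation constant of equation 2.2 is omitted here: it is identical for every class when a single σ is shared, so it cancels in the arg-max decision.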
2.5.2 Kalman filter
The Kalman filter is an algorithm that uses a temporal series of measurements
containing noise. It produces estimates of unknown quantities over a period of time
that tend to be more precise than those based on a single measurement, and it also
estimates the joint probability distribution of the variables over a time frame. The
Kalman filter maps data from multiple sensors into a matrix of internal states
consisting of the parameters of interest, related through linear dependencies between
the system states and the corresponding inputs (Cheng, 2016).

\vec{x}_{k+1} = A\,\vec{x}_{k} + B\,\vec{u}_{k} + w \qquad (2.4)

Equation 2.4 shows the Kalman filter's state-transition model. Vector x_k is the
state of the system at time k and vector u_k is the input to the system at time k.
Matrix A relates the current state to the next state, matrix B defines the relation
between the input vector u_k and the system state vector x_{k+1}, and w is a random
variable representing process noise (Cheng, 2016).
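A minimal scalar sketch of this idea (assuming a random-walk state model, i.e. A = 1 and no input term, with hypothetical noise variances rather than the full matrices of equation 2.4):

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    """Minimal scalar Kalman filter for a random-walk model x_{k+1} = x_k + w.
    q is the process-noise variance, r the measurement-noise variance."""
    x, p = measurements[0], 1.0      # initial state estimate and covariance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                    # predict: covariance grows by q
        k = p / (p + r)              # Kalman gain
        x = x + k * (z - x)          # update with measurement z
        p = (1 - k) * p              # covariance shrinks after the update
        estimates.append(x)
    return estimates

# Hypothetical noisy readings around a true value of 5.0.
noisy = [5.1, 4.8, 5.3, 4.9, 5.2, 5.0]
smoothed = kalman_1d(noisy)
print(smoothed[-1])  # converges towards the true value 5.0
```

The gain k balances trust in the prediction against trust in the new measurement; a large r (noisy sensor) pushes k towards zero, so the estimate changes slowly.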
2.6 Clusters and data aggregation
The cloud based topologies shown in (Wu et al., 2015a), (Y. Zhang et al., 2012) and
(Mitton et al., 2012) have a cluster architecture. Instead of all the nodes on the
cloud based network topology communicating with one another directly, each node
communicates directly with a cluster head; a sensor cloud can have more than one
cluster head. The cluster head then assembles and aggregates the data, a process
known as data aggregation. Clustering and data aggregation are very useful for
machine learning because:
1 Due to data aggregation, a pool of data from all the sensors is available for
interpolation and for extracting trend patterns from the data.
2 Machine learning algorithms can be employed to select the most optimal cluster
head (the one with the lowest cost for all nodes to interact with). Selecting an
efficient cluster head will in turn prolong the life of the network (Schaefer,
2014), (Yang et al., 2014), (Soliman, 2014).
A few methodologies for clustering include: 1) using decision trees to select
cluster heads (Craven & Shavlik, 2014); 2) neural networks for large scale
clustering (Rojas, 2013); 3) using a self-organizing map for data aggregation
(Barbancho et al., 2007).
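As a toy sketch of cluster-head aggregation (hypothetical readings; averaging is just one possible aggregation function, chosen here for simplicity), each cluster head forwards a single summarised value per cluster to the base station:

```python
def aggregate(cluster_readings):
    """Cluster heads aggregate the raw readings of their cluster (here:
    the mean, rounded for display) before forwarding one value each."""
    return {head: round(sum(vals) / len(vals), 2)
            for head, vals in cluster_readings.items()}

# Hypothetical temperature readings collected by two cluster heads.
readings = {
    "CH1": [21.0, 21.4, 20.8],
    "CH2": [22.6, 22.2],
}
print(aggregate(readings))  # {'CH1': 21.07, 'CH2': 22.4}
```

The bandwidth saving is the point: five raw readings become two forwarded values, and the base station still receives a pool of per-cluster summaries for trend extraction.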
2.6.1 Event detection and processing
Event detection is the cornerstone of every sensor network. In the context of cloud
based manufacturing there has to be trustworthy event scheduling and detection.
Machines in the cloud based architecture must be able to detect and log events with
minimal human intervention (Khan, Khan, Zaheer & Khan, n.d.), (Wu et al., 2015a).
Event processing can be done through 1) Bayesian algorithms, which include the
decentralization of a sensor network zone (Liu, Yang, Ma & Cheng, 2015); 2) neural
networks (Yu, Wang & Meng, 2005); and 3) k-nearest neighbour (Jayaraman, Zaslavsky
& Delsing, 2010).
2.6.2 Localization of sensor objects
This involves determining the geographical location of the nodes and their
components. To make the cloud based topology self-aware, each node needs to know
(or be aware of) the other components in its locality. For most large scale
organizations this is done through the global positioning system, but for
manufacturing firms operating indoors this may not be a viable choice. Relative
location is sufficient for certain purposes (Ahmed, Shi & Shang, 2005); absolute
localization involves the use of anchor nodes to localize a node at its absolute
location (Ahmed et al., 2005). In this context, certain terms are used:
1 Unknown node: a node whose location cannot be determined.
2 Beacon node: a node that can recognize its placement either manually or through
positioning hardware. This node serves as a beacon for recognizing the unknown
nodes.
3 Received signal strength indicator: a representation of distance used to measure
the transmission performance (of nodes).
2.7 Semantic Sensor N/W Ontology
The semantic layer is a thin information-processing standard that extends the W3C
consortium standards; these standards define a common protocol for information
exchange between web entities.
According to (Hao & Helo, n.d.), (Wu et al., 2015b), (Yang et al., 2014) and
(X. V. Wang, Wang, Mohammed & Givehchi, 2016), a cloud manufacturing network
consists of several loosely coupled entities that are autonomous and self-aware. In
addition, machines in a cloud manufacturing framework must be able to understand each
other (talk with each other and interpret contextual data relating to their
functionality) (Brown, McMahon & Sims Williams, 1995), (Young, Gunendran,
Cutting-Decelle & Gruninger, 2007). The semantic sensor network ontology is a set of
tools for knowledge acquisition, structural representation of logic and reasoning,
tailored for sensor based frameworks (Lemaignan, Siadat, Dantan & Semenenko, 2006).
The sensor network ontology is based on OWL, which uses an XML-like structure
(elements and attributes) with standardized meanings to define nodes, their
structure, their relationships and their functionality. OWL is an extension of the
RDF (Resource Description Framework) schema set; RDF is a data model for describing
things and their relationships (Papazoglou, 2012).
A basic structure describing a simple relationship between a printer (machine) and
one of its sub-parts is shown in figure 2.4 on page 55. The figure simply states
that the print head named 'CR322A' is a sub-part of the printer.
Figure 2.5 on page 56 depicts a complete representation of a semantic sensor
network, adapted from (Pu, Wang, Du, Zhang & Chen, 2016). It is a diagrammatic
representation of the ontology repository, illustrating a sensor that measures
temperature and further provides information about its measurement range (whose
attributes comprise the minimum and maximum temperature and the unit of measurement,
Celsius), its measurement capability (with attributes for accuracy, min and max
values and interval) and its system information (giving the platform representation
and the address of the physical instance).
SOAP, XML and REST web services are cornerstones of cloud based manufacturing, as
discussed by (Young et al., 2007) and (Lemaignan et al., 2006). A thin ontology
layer that sits on top of these web services should be relatively easy to define
(Young et al., 2007).
Based on the literature described in this section, the study will attempt to answer
the following research questions.
RQ1: Given the advantages of cloud manufacturing shown in table 2.1 on page 33, can
machines achieve (at least partially) human-like perception (in the context of
judging the depth, texture, colour and similar attributes of an object)?
RQ2: Can these machines take autonomous decisions based on the achieved sensory
perception?
Figure 2.1: Feedback control process in autonomic system
Figure 2.2: Aspects of data fusion
Figure 2.3: PNN Classifier
Figure 2.4: Semantic sensor network ontology representation
Figure 2.5: Semantic sensor network ontology representation
Chapter 3
Method
3.1 Introduction
An integral part of designing a diverse system architecture is the use of a
functional model that clearly captures the guidelines of operation, the system
architecture, and the processing and control functions within the data fusion
process. The design science research methodology has been found to successfully
capture the salient features and the diverse automatic and manual approaches that are
employed across different computing projects. Design science is a solution oriented
approach that is well suited to agile projects requiring observational studies and
having a patterned, evolutionary approach. To fully attain the fused state, a
certain amount of reverse engineering is required: if the fusion is unsuccessful,
the data can relapse into its original state. The data fusion process is tightly
coupled with resource management; this is an important feature that needs to be
fulfilled in a data fusion system.
Design science methodology follows the engineering cycle, a series of recurring
steps that lead to a better functioning system (Keith, Demirkan & Goul, 2013). The
recursive steps in design science include:
1 Problem investigation.
2 Treatment design.
3 Treatment implementation.
4 Implementation evaluation.
3.2 Problem investigation
Figure 3.1: Data fusion architecture
As seen in figure 3.1, data fusion is fundamentally used to provide a specific
assessment of the observed environment. This information can be further used to
automate the system, or by the end user to plan and control the resources.
3.2.1 Problem investigation
A generalized approach to problem investigation for a data fusion project is shown
in figure 3.2. This approach suits problem areas like sensor fusion, where a
complete set of system goals is difficult to understand at the beginning of the
design process (Keith et al., 2013). In such a poorly defined and understood
environment, the design science engineering approach will incorporate better
solutions through the data or models available in the system (Keith et al., 2013).
Figure 3.2: Problem investigation approach
Layered design: This approach applies the concept of 'divide and conquer' to the
problem area, especially when the kinds of errors the system is likely to encounter
in the future are uncertain. The concept states how the architecture is to be
subdivided and how control is to be integrated across the various working sub-nodes.
This is shown in figure 3.3.
This approach permits a cost effective method of achieving the required level of
performance. It is the common method used for designing and developing multi-sensor
fusion, and is the method used in this study. The higher level processes are handled
by more computationally advanced nodes; therefore, the users of this approach can
add any number of lower level nodes when required.
Figure 3.3: Layered Approach architecture
Paradigm based design: Multi-sensor data fusion itself follows a bottom-up approach,
while the control structure follows a top-down approach. This is an important
principle used in the design of this system: sensor data from the lower nodes is
passed on to the higher level nodes for fusing, meaning that the data flows from the
bottom towards the top controlling nodes. Similarly, the lower level sensors are
unable to perform the computation themselves, so these functions are carried out by
the higher level nodes.
Figure 3.4: Layered Approach architecture
3.2.2 Treatment design
The design of a sensor fusion system involves the steps shown in figure 3.5. The
design process includes:
1 Data fusion algorithm design.
2 Development and validation of the proposed algorithm.
The data fusion algorithm defines the processing of the fusion activity. This
includes:
1 Data association.
2 Data alignment.
3 Estimation of state (classification).
Data association describes how the sensor data is represented before and after
fusion; this concept is also mentioned in section 1.5 on page 28. Data association
includes:
1 Hypothesis generation: identify the sensors used and the data generated to be
fused (Aher & Lobo, 2013).
2 Hypothesis evaluation: calculate the metrics for fusion and decide how they will
be fused (Aher & Lobo, 2013).
3 Estimation of state (classification): the final step of the fusion process, where
the fusion and classification of the data is completed.
Figure 3.5: Multisensor system design process, adapted from (Mönks et al., 2016)
3.2.3 Treatment implementation and evaluation
In this study the implementation of the proposed design is carried out through a
simulation of the entire process. The dataset includes 100 sensor samples and 312
samples of the fused matrix, which are used to simulate the fusion on a three
dimensional spectrum (Yann-Al Le borgne, 2014), (Illinois, 2014).
Proposed system design
Here a block diagram shows the fusion system design. Taking into consideration the
layered approach mentioned in section 3.2.1 on page 59, figure 3.6 shows the
proposed system network: three wireless sensor networks containing sensor clusters.
Each cluster has a cluster head node, which is responsible for training the data
received from the sensors within its range. Sensors can interact only with the
cluster head. Each cluster head is equipped with a switch for communicating with the
base station and the other cluster heads.
The base station is responsible for the final classification and storage of data.
Figure 3.6: Proposed system network, showing PFBG, NIR and proximity sensor clusters
with cluster heads (CH), connected through network edge switches, a core switch, a
router and a wireless access point to the base station and the cloud based
manufacturing platform over optical fibre and UTP cable links
Chapter 4
Analysis
4.1 Introduction
To avoid the tedious process of disassembly during recycling, thermoplastic
components are chopped into smaller shards, which are sorted manually; where
lucrative, these shards are sorted by hand with the aid of a conveyor. The operators
manually select shards based on appearance, texture, colour, etc. This process has
to be extremely precise, as even a small margin of error can cause the price of the
resultant plastic to be extremely low. In the manual process an operator often
selects one type of plastic. The automated classifier therefore has to take into
account both texture identification and the cadence of the conveyor. The conveyor
belt of the recycling plant has different blends of plastic resins passing over it.
For a classifier system to achieve 100 percent accuracy and be profitable, it needs
to be able to distinguish the physical attributes as well as learn the underlying
pattern of the data. Visually, many thermoplastics such as ABS, polycarbonate and
polyamide are similar; data from visual sources alone is therefore not sufficient
for the proper classification of thermoplastics. The NIR spectrum shows the
absorption of light: regardless of the actual colour of a plastic resin, samples of
the same type always have the same peak patterns (Emberton, Chittka, Cavallaro &
Menghua, 2016a). This property makes it possible to distinguish different plastic
types.
Unfortunately, plastics with a blackish gradient have stronger absorption peaks than
others (Emberton, Chittka, Cavallaro & Menghua, 2016b). This makes the information
generated from the spectra alone insufficient to successfully classify and
categorize a plastic shard. Therefore, given the huge variance in the samples, data
from different devices is required to successfully classify a sample in a recycling
plant (Elsayed, Rischbeck & Schmidhalter, 2015).
4.2 Supervised learning classifier
The classifier uses the feature vector provided by the multiple sensors in the
recycling plant (Miguez, Georgiopoulos & Kaylani, 2010). These sensors provide a
more in-depth attribute categorization with which to technically classify
thermoplastic resin. For the classifier to achieve its purpose, the feature values
for thermoplastics in the same category should be similar, while a greater variance
is expected for plastics of different categories. This means that the classifier
complexity increases with the length of the fused feature vector. In our algorithm,
sensory pre-treatment is conducted for two purposes. Firstly, it serves as a
learning aid for the algorithm: once a shard is properly classified its properties
are stored in the fused feature matrix, so future classifications of shards with the
same properties are instantaneous. Secondly, the smoothing (or the training vector)
acts as a means of correctly removing noisy data.
The principal component analysis (PCA) method is used both for data smoothing
(training the data) and for spectral recognition (classifying the data in a PCA
hyperspectral system) (Casa, Castaldi, Pascucci, Palombo & Pignatti, 2013). Once the
fused feature matrix has been identified from the sensory system, a supervised
learning classifier based on a probabilistic neural network is used to classify the
plastics into their appropriate categories (here the decision fusion occurs)
(Nie et al., 2016). An example of the working of a PCA hyperspectral system is shown
in figure 4.1.
Figure 4.1: Data in PCA after feature fusion
In our case the sensory data pre-treatment is done mainly for two reasons. Firstly,
for data in the visible spectrum it is useful to apply prior knowledge in deciding
which features are discriminant. Secondly, the information provided by the
hyperspectral system requires adequate pre-treatment to reduce the data
dimensionality, which is huge in this kind of sensor, and to reveal the differences
between spectra. Figure 4.2 shows the overall functioning of the classifier for this
pre-treatment.
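A minimal sketch of the PCA pre-treatment described above (hypothetical five-band spectra, not the thesis's MATLAB implementation): the data is centred, the covariance matrix is eigendecomposed, and each sample is projected onto the top principal components, reducing the spectral dimensionality.

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project data onto its top principal components (eigenvectors of
    the covariance matrix), reducing spectral dimensionality."""
    X = np.asarray(X, dtype=float)
    centred = X - X.mean(axis=0)           # centre each band
    cov = np.cov(centred, rowvar=False)    # band-by-band covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]  # largest variance first
    return centred @ eigvecs[:, order]

# Hypothetical 5-band spectra for four shards (two similar pairs).
spectra = np.array([
    [0.9, 0.8, 0.2, 0.1, 0.3],
    [0.8, 0.9, 0.3, 0.2, 0.2],
    [0.1, 0.2, 0.9, 0.8, 0.7],
    [0.2, 0.1, 0.8, 0.9, 0.8],
])
reduced = pca_reduce(spectra)
print(reduced.shape)  # (4, 2)
```

After the projection, the four shards occupy a 2-D space in which the two spectrally similar pairs land close together, which is exactly the separation a downstream classifier exploits.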
4.3 Feature Fusion
Figure 4.2: Classifier structure
As seen in the algorithm shown in figure 4.3 on page 72, the feature fusion
algorithm takes three inputs, namely the temperature, tensile yield and metallic
content of a thermoplastic shard. This information is captured by the individual
sensors. The three inputs are passed on to the FLC, or fuzzy logic controller. The
first data fusion occurs between the pre-trained data and the new data captured by a
sensor of the same type; this helps eliminate unwanted noise from the fusion
process, and corresponds to the difference between d' (the trained data) and d (the
new data) in equation 1.1 on page 21. Subsequently, fusion of data from the
different sources takes place and the fused feature matrix shown in figure 1.5 on
page 28 is formed. The content of the fused matrix is then compared with its
previous entries; if there is no change in the data, the current run is excluded
from the matrix entirely (Jamalabadi, Nasrollahi, Alizadeh, Nadjar Araabi & Nili
Ahamadabadi, 2016). If the fusion is successful, the fused data is accepted by the
base station and stored in a cloud database in real time. This data is
instantaneously available to the hyperspectral scanners, which, with the help of
dimension data and in some cases an image representation, combine the feature matrix
with the spectral image (decision fusion) and use this data to successfully classify
a plastic component (Scherer, 2012).
The process flow of feature fusion is shown through a business process modelling
notation (BPMN) diagram in figure 4.4 on page 73. Two processes corresponding to two
different sensors are shown: one sensor collects temperature data and the second
extracts the tensile yield. Prior to sensor fusion, the system checks the sensor
battery levels every 10 minutes in order to calculate the confidence factor. The
extracted data is then passed on to the base station, where the fusion process
actually takes place. The base station evaluates the confidence factor and fuses the
data with its training factors, as described for figure 4.3 on page 72. The fuzzy
logic controller located in the base station fuses the data from the different
sources to generate and maintain the feature fusion matrix (Iounousse et al., 2015).
4.4 Findings
Findings are categorized according to the fusion type.
4.4.1 Feature fusion
Feature level fusion can be accomplished by fusing two feature sets obtained from
multiple sensor sources, as shown in figure 1.5 on page 28. To explain the
simulation process, assume that A and B represent two different sensor feature
vectors. Let A = [a1, a2, a3, ..., an] and B = [b1, b2, b3, ..., bn] represent data
extracted from multiple sensor sources. The fusion of these two sets gives a new
resultant feature vector that is the multiplication of the first two. The working of
the Kalman FLC (fuzzy logic controller) is shown in figure 4.5 on page 74. There are
two states, as represented in equation 1.1: the state represents the variable d and
the covariance represents the state d'. On encountering a new entry of data from the
multiple sensors, the values of the new vector are first predicted and measured
based on the training values. The new state is then validated against a specific
range of values; this is shown in the step 'validate measurement' in figure 4.5 on
page 74. If the new values are not in the specified range, the vector is reverted to
its original state. Otherwise, the validated values are merged with the covariance
vector (d'). The new probabilities for future measurement prediction and validation
are then calculated and stored. After this step feature fusion is complete, and the
two vectors d and d' are true representations of their real-time values.
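The validate-and-merge loop described above can be sketched as follows. This is a simplified stand-in for the Kalman FLC, with hypothetical vectors, range limits and smoothing weight; the fusion operator follows the text's element-wise multiplication of the two sensor vectors.

```python
import numpy as np

def fuse_and_validate(a, b, trained, low=0.0, high=1.0, alpha=0.9):
    """Element-wise fusion of two sensor feature vectors, followed by a
    range check: out-of-range fused values revert to the trained state d',
    in-range values are smoothed into it (alpha = weight of training data)."""
    fused = np.asarray(a) * np.asarray(b)          # fusion of A and B
    valid = (fused >= low) & (fused <= high)       # 'validate measurement'
    merged = np.where(valid, alpha * trained + (1 - alpha) * fused, trained)
    return fused, merged

A = np.array([0.6, 0.8, 1.5])                      # hypothetical sensor vector
B = np.array([0.5, 0.9, 1.2])                      # hypothetical sensor vector
d_prime = np.array([0.30, 0.70, 0.90])             # trained state d'
fused, merged = fuse_and_validate(A, B, d_prime)
print(fused)
print(merged)  # third element reverts to the trained 0.9 (1.8 is out of range)
```

The third fused value (1.8) fails validation and the trained value is kept, mirroring the 'revert to original state' branch of figure 4.5.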
As a proof of concept, the fusion algorithm was run in the MATLAB simulator for 100
samples. The data is extracted from (Illinois, 2014) and (Yann-Al Le borgne, 2014);
the data from these files was extracted into a MATLAB data file. The results of the
simulation are as follows. In figure 4.6 on page 75, smoothing of the surface
temperature and tensile strength takes place in accordance with the first step in
figure 4.3 on page 72. Figure 4.7 on page 76 shows the difference between states d
and d', mentioned in equation 1.1 on page 21 and equation 2.2 on page 48. Finally,
the fusion of data from the different sensors takes place, as shown in figure 4.8 on
page 76; here both the smoothing and the actual fusion of the multi-sensor data take
place. Figure 4.9 on page 77 shows the difference between the training and actual
data after multi-sensor data fusion.
Here the data in the MATLAB model data file is plotted on a graph. Data from the
individual sensors is first fused with its normalized form; this function is known
as data smoothing. The assumption here is that all values are in the proper range
configuration: 90 percent of the values are assumed to be in their normalized state,
i.e. 90 percent of the data is used for training.
4.4.2 Decision fusion
To simulate decision fusion, the sum of absolute differences (SAD) is used. The sum
of absolute differences is fundamentally a measure of the similarity between two
image blocks. Here the centre of gravity, colour and texture are used to classify
objects into classes. SAD works by taking the difference between each pixel of the
original block and the corresponding pixel of the processed block. In figure 4.1 on
page 67 this pixel processing and fusion is done by the components NPC1 and NPC2;
the only difference is that we use principal component analysis (PCA) to classify an
image into a class, not to fuse image data.
Next, the texture and contour of the pixels are generated, as shown in figure 4.11
on page 79.
To show the simulation of the hyperspectral system used for decision fusion, a
triangulated image is plotted on a graph, as shown in figure 4.10 on page 78. Then
the pixel information of our image and of the target image is generated for
comparison, as seen in figure 4.12 on page 80. Finally, the image is classified into
an appropriate class (see figure 4.14 on page 81). For our thermoplastic
classification the original image represents the normalized image (training data);
this could be an individual image captured by the CCD camera sensor, an array of
images, or even a fused image. The thermoplastic shards captured by the CCD camera
sensor can be scanned in a similar manner against the normalized set of images to
identify their class. Figure 4.13 shows a three dimensional representation of the
scanned object.
4.4.3 Final PNN Classification
Once the feature fusion matrix, the final feature vector and the class of the plastic are
decided, this information is fed into the PNN classifier to merge both sources of
information (either the feature vector or the hyperspectral data alone will not suffice,
as density and colour information by themselves are not enough to decide the final category).
Input layer: the feature-fused matrix (temperature, tensile properties) and the hyperspectral
class data (this includes all information from the CCD camera sensor and the
hyperspectrum, such as colour, texture, density and depth (COG)).
Output layer: the final identified category of the plastic.
Summation layer: this layer uses the fuzzy logic controller to fuse the inputs
recursively until classification succeeds and the training data is normalized.
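The three layers above can be sketched as a minimal PNN. The feature vectors, class labels and kernel width below are illustrative assumptions, and the fuzzy-logic fusion of the summation layer is simplified here to a Gaussian-kernel average per class:

```python
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=0.5):
    """Minimal probabilistic neural network classifier.

    Pattern layer: one Gaussian kernel per training vector.
    Summation layer: average kernel activation per class.
    Output layer: the class with the highest summed activation.
    """
    x = np.asarray(x, dtype=float)
    scores = {}
    for label in set(train_y):
        members = [np.asarray(v, float) for v, y in zip(train_x, train_y) if y == label]
        dists = [np.sum((x - v) ** 2) for v in members]
        scores[label] = np.mean([np.exp(-d / (2 * sigma ** 2)) for d in dists])
    return max(scores, key=scores.get)

# Hypothetical feature vectors: [normalized temperature, normalized tensile yield]
train_x = [[0.1, 0.2], [0.15, 0.25], [0.8, 0.9], [0.85, 0.95]]
train_y = ["HDPE", "HDPE", "PET", "PET"]
print(pnn_classify([0.12, 0.22], train_x, train_y))   # closest to the HDPE cluster
```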
Figure 4.3: Data Fusion Algorithm
Figure 4.4: Data Fusion Algorithm (flowchart nodes: Get Tensile Yield, Get Temp, Fuse Data, Base Station)
Figure 4.5: Difference between normalized and actual values of merged data, adapted from (Tian et al., 2016)
Figure 4.6: Smoothing of tensile and temperature data
Figure 4.7: Difference in state d and d’ of multi-sensor data
Figure 4.8: Fusion of tensile and temperature data
Figure 4.9: Difference between normalized and actual values of merged data
Figure 4.10: Plotting of original image
Figure 4.11: Texture and contour generation
Figure 4.12: Pixel-wise comparison of source and target
Figure 4.13: Center of gravity and density of the target
Figure 4.14: Pixel-wise comparison of source and target
Chapter 5
Discussion
From the norm-of-covariance curve shown in figure 4.9 on page 77, we can clearly see
that the norms of the fused features are considerably lower than those of the normalized
versions of both sensors. This shows that the fused data is relatively free of noise,
which can also be seen in figures 5.1 on the following page, 5.2 on page 84 and 5.3 on
page 85. The example above gives better predictive accuracy (less uncertainty),
which shows that the algorithm has worked relatively well.
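The lower covariance norm of fused data can be reproduced in a small numerical check. The signal shape and sensor noise levels below are assumed, and inverse-variance weighting stands in for the full fusion algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
true_signal = np.linspace(20.0, 25.0, 200)

# Two noisy sensors observing the same quantity (noise levels are assumed).
sensor_a = true_signal + rng.normal(0.0, 1.0, 200)
sensor_b = true_signal + rng.normal(0.0, 1.5, 200)

# Inverse-variance weighted fusion of the two readings.
wa, wb = 1 / 1.0 ** 2, 1 / 1.5 ** 2
fused = (wa * sensor_a + wb * sensor_b) / (wa + wb)

# For a scalar series, the norm of the error covariance is just the variance.
norms = {name: float(np.cov(s - true_signal))
         for name, s in [("A", sensor_a), ("B", sensor_b), ("fused", fused)]}
print(norms)  # the fused entry is the smallest of the three
```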
A study conducted by Declercq, Nagaoka, Yamazaki and Saijo (2015) specifies
that PCA-based spectra are highly sensitive to the light reflectivity and gradients of the
scanned objects. According to the authors, the hyperspectral method performs poorly for
objects with a blackish-gray gradient because less light is reflected back, causing
poor visibility of the scanned object. This has a clear repercussion for the objects
scanned by the spectral system.
A major drawback of this study is that it was conducted in a simulated environment
with a small data set. A plastic recycling plant is huge, with vast amounts of data
generated daily. Only applying the algorithm to actual data will reveal the true accuracy
of the methodology.
Chapter 5. Discussion 87
Figure 5.1: Temperature data before smoothing
5.0.1 Factors that could affect the accuracy of the algorithm
Out of sequence data The input to a data fusion system is normally discrete pieces
of data, each carrying a unique timestamp that determines its time of origin. However, in
a real-life scenario, heterogeneous sensors working at different rates (speeds) can
lead to data arriving out of sequence. Such out-of-sequence data could badly affect
the working of our algorithm, as both the training/smoothed data and the current
data would be in an inconsistent state, leading to disastrous outcomes.
Conflicting data In our case study scenario, we have used NIR and polymer FBG
sensors. A notable point is that both of these sensors produce temperature data
(although in this study it is assumed that only the NIR sensor captures temperature data).
A real-life situation could have many sensors reporting many different values.
Figure 5.2: Temperature data after smoothing
This could lead to data inconsistency. An appropriate measure of the accuracy of the
algorithm requires the data to be in a consistent state.
Disparate data For a fully automated classifier, the sensors should work in
conjunction. However, in a recycling plant there could be sensors whose data are
completely incompatible. Data from completely disparate sources will cause the
fusion mechanism in this study to fail. It needs to be ensured beforehand that the
data to be fused are compatible.
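The out-of-sequence problem noted above can be mitigated by merging the sensor streams by timestamp before fusion; the stream contents here are invented for illustration:

```python
import heapq

# Two sensor streams at different rates, as (timestamp, sensor, value) tuples;
# the timestamps and readings are assumed for illustration.
nir = [(0.0, "NIR", 21.5), (0.2, "NIR", 21.6), (0.4, "NIR", 21.7)]
fbg = [(0.05, "FBG", 21.4), (0.35, "FBG", 21.8)]

# Merge by timestamp so the fusion step never sees out-of-sequence data.
# heapq.merge requires each input stream to already be time-ordered.
merged = list(heapq.merge(nir, fbg, key=lambda r: r[0]))
for ts, sensor, value in merged:
    print(f"{ts:.2f}s {sensor}: {value}")
```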
Figure 5.3: Tensile yield before and after smoothing
Chapter 6
Conclusion and future study
This study presents a concept of multi-sensor data fusion in a WSN-based infrastructure.
The preliminary study of the WSN environments currently used in manufacturing and
recycling systems showed that multiple arrays of sensors are already deployed
around the machines, and the data sensed by these sensors is used to monitor
the state of the machines. In this study we have attempted to gather the data from these
sensors and fuse them into a meaningful form, to automate the classification and sorting
of plastics into their appropriate categories and pass them along for the next step of
processing. Recycling thermoplastic is a technological problem that is not yet completely
solved. One of the complexities in deploying an automated solution is that various
plastics, though visually similar, have widely varying characteristics that are not
perceivable by machines unless the data already present in the different sensors is used
in a way that makes the classifiers more adaptive. This study shows, through simulation,
an algorithm that fuses data from a CCD spectrum, an NIR sensor and an inductive sensor.
A brief description of the classification method used in this study is as follows. The PNN
methodology is used for the classification, in which a set of predetermined inputs (data
from sensors) is normalized and then fused using the Kalman filter algorithm. Feature
fusion represents the fusion of the lower-level sensors into a vector of feature attributes.
The higher-level sensing used is PCA, or hyperspectral sensing, in which the plastic's
colour and texture are identified to determine the preliminary class of the plastic. This
information is again fused with the feature-fused matrix of the lower-level values. The
Kalman filter fusion process and the hyperspectral image classification are shown through
a simulation in MATLAB.
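The measurement-fusion core of such a Kalman step can be sketched as follows; the readings and variances are illustrative assumptions, not values from the simulation:

```python
def kalman_fuse(z1, var1, z2, var2):
    """Fuse two measurements of the same state with known variances.

    This is the measurement-update core of Kalman filter fusion: the
    gain weights each sensor by its uncertainty, so the fused estimate
    has lower variance than either measurement alone.
    """
    gain = var1 / (var1 + var2)
    fused = z1 + gain * (z2 - z1)
    fused_var = (1.0 - gain) * var1
    return fused, fused_var

# Assumed readings: an NIR temperature vs a polymer FBG temperature (deg C).
est, var = kalman_fuse(180.0, 4.0, 176.0, 1.0)
print(est, var)   # the estimate is pulled toward the more certain FBG reading
```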
To conclude, although the algorithm works successfully on small-scale simulated
data, a full-scale implementation in the actual environment is essential. In this study the
elements of noisy, redundant, incompatible and spurious data are not considered, but
these factors could have serious repercussions on the accurate functioning of the algorithm.
6.0.1 Future studies
Future work will include enhancing the automated fusion through machine learning
and web-based modules. A sensor ontology language could also significantly improve the
extraction of data through a web-based module.
Semantic sensor network
Sensor semantics deals with extending the OWL ontology for modelling sensor
devices and their capabilities, so that they interact better with their systems and processes
(Compton et al., 2012). The ontology is based on observing the pattern of a sensor's
interaction with its peers or with higher-level systems, as seen in figure 6.1 on the next page.
The sensor ontology can be subdivided into a number of perspectives based on
interaction with sub-systems. These are as follows:
1 Sensor perspective: what a sensor measures, how it senses and what is sensed
(Compton et al., 2012).
2 System perspective: sensors' interactions with their systems; the focus is on a system of
sensors (Compton et al., 2012).
Figure 6.1: Pattern in Semantic sensor network, taken from (Compton et al., 2012)
3 Data perspective: what data is sensed, and the metadata generated (Compton
et al., 2012).
4 Feature or property perspective: the focus is on the features or attributes being sensed
and the correlation of these properties.
Figure 6.2: Overview of the Semantic Sensor Network ontology classes and properties, taken from (Compton et al., 2012)
References
Aher, S. B. & Lobo, L. (2013). Combination of machine learning algorithms for recom-
mendation of courses in e-learning system based on historical data. Knowledge-
Based Systems, 51, 1 - 14.
Ahmed, A. A., Shi, H. & Shang, Y. (2005, June). Sharp: a new approach to relative
localization in wireless sensor networks. In 25th ieee international conference on
distributed computing systems workshops (p. 892-898). doi: 10.1109/ICDCSW
.2005.125
Antonopoulos, C. P. & Voros, N. S. (2016). Resource efficient data compression
algorithms for demanding, wsn based biomedical applications. Journal of Bio-
medical Informatics, 59, 1 - 14.
Anuar, S., Selamat, A. & Sallehuddin, R. (n.d.). A modified scout bee for artificial bee
colony algorithm and its performance on optimization problems. Journal of King
Saud University - Computer and Information Sciences.
Ao, B., Yongcai, W., Lu, Y., Brooks, R. R. & Iyengar, S. S. (2016). On
precision bound of distributed fault-tolerant sensor fusion algorithms. ACM
Computing Surveys, 49(1), 5:1 - 5:23.
Ari, A. A. A., Yenke, B. O., Labraoui, N., Damakoa, I. & Gueroui, A. (2016). A
power efficient cluster-based routing algorithm for wireless sensor networks:
Honeybees swarm intelligence based approach. Journal of Network and Computer
Applications.
Arkin, E. M., Efrat, A., Mitchell, J. S., Polishchuk, V., Ramasubramanian, S., Sank-
araraman, S. & Taheri, J. (2014). Data transmission and base-station placement
for optimizing the lifetime of wireless sensor networks. Ad Hoc Networks, 12.
Aydogdu, I., Akın, A. & Saka, M. (2016). Design optimization of real world steel
space frames using artificial bee colony algorithm with levy flight distribution.
Advances in Engineering Software, 92, 1 - 14.
Barbancho, J., León, C., Molina, F. J. & Barbancho, A. (2007). A new qos routing
algorithm based on self-organizing maps for wireless sensor networks. Telecom-
munication Systems, 36(1-3), 73–83.
Beguš, S., Begeš, G., Drnovšek, J. & Hudoklin, D. (2015). A novel nir laser-based
sensor for measuring the surface moisture in polymers. Sensors Actuators: A.
Physical, 221.
Ben Ali, J., Saidi, L., Mouelhi, A., Chebel-Morello, B. & Fnaiech, F. (2015). Linear
feature selection and classification using pnn and sfam neural networks for a nearly
online diagnosis of bearing naturally progressing degradations. Engineering
Applications of Artificial Intelligence, 42, 67 - 81.
Brown, K., McMahon, C. & Sims Williams, J. (1995). Features, aka the semantics of a
formal language of manufacturing. Research in Engineering Design, 7(3), 151.
Buyya, R., Calheiros, R. N. & Li, X. (2012). Autonomic cloud computing: Open
challenges and architectural elements. In Emerging applications of information
technology (eait), 2012 third international conference on (pp. 3–10).
Casa, R., Castaldi, F., Pascucci, S., Palombo, A. & Pignatti, S. (2013). A comparison
of sensor resolution and calibration strategies for soil texture estimation from
hyperspectral remote sensing. Geoderma, 197-198, 17 - 26.
Castorena, J., Morrison, J., Paliwal, J. & Erkinbaev, C. (2015). Non-uniform system
response detection for hyperspectral imaging systems. Infrared Physics and
Technology, 73, 263 - 268.
Cayirpunar, O., Kadioglu-Urtis, E. & Tavli, B. (2015). Optimal base station mobility
patterns for wireless sensor network lifetime maximization. IEEE Sensors Journal,
15(11), 6592.
Chakaravarthy, G., Marimuthu, S., Ponnambalam, S. & Kanagaraj, G. (n.d.). Im-
proved sheep flock heredity algorithm and artificial bee colony algorithm for
scheduling m -machine flow shops lot streaming with equal size sub-lot problems.
International Journal of Production Research, 52(5), 1509 - 1527.
Chang, H.-C., Liao, S.-C., Hsieh, H.-S., Wen, J.-H., Lai, C.-H. & Fang, W. (2016).
Magnetostrictive type inductive sensing pressure sensor. Sensors Actuators: A.
Physical, 238, 25 - 36.
Chen, J., Li, Y., Lv, K., Zhong, W., Wang, H., Wu, Z., ... Jiang, J. (2016). Cyclam-
functionalized carbon dots sensor for sensitive and selective detection of copper
ion and sulfide anion in aqueous media and its imaging in live cells. Sensors and
Actuators, 224, 298 - 306.
Cheng, H.-Y. (2016). Hybrid solar irradiance now-casting by fusing kalman filter and
regressor. Renewable Energy, 91, 434 - 441.
Chi, Q., Yan, H., Zhang, C., Pang, Z. & Da Xu, L. (2014). A reconfigurable smart
sensor interface for industrial wsn in iot environment. Industrial Informatics,
IEEE Transactions on, 10(2), 1417–1425.
Compton, M., Barnaghi, P., Bermudez, L., García-Castro, R., Corcho, O., Cox, S., ...
Taylor, K. (2012). The SSN ontology of the W3C semantic sensor network
incubator group. Web Semantics: Science, Services and Agents on the World Wide
Web, 17, 25 - 32. Retrieved from http://www.sciencedirect.com/
science/article/pii/S1570826812000571 doi: 10.1016/j.websem.2012.05.003
Coskun, H. (2015). Equivalent rectangular stress distribution in thermoplastic beam
design. Journal of Thermoplastic Composite Materials, 28(11), 1465.
Craven, M. W. & Shavlik, J. W. (2014). Learning symbolic rules using artificial
neural networks. In Proceedings of the tenth international conference on machine
learning (pp. 73–80).
Declercq, N. F., Nagaoka, R., Yamazaki, R. & Saijo, Y. (2015). Adaptive spatial filtering
with principal component analysis for biomedical photoacoustic imaging. In
Proceedings of the 2015 ICU International Congress on Ultrasonics, Metz, France.
Physics Procedia, 70, 1161 - 1164. Retrieved from http://www.sciencedirect
.com/science/article/pii/S1875389215009906 doi: 10.1016/j.phpro.2015.08.249
Dehghani, S., Pourzaferani, M. & Barekatain, B. (2015). Comparison on energy-
efficient cluster based routing algorithms in wireless sensor network. Procedia
Computer Science, 72(The Third Information Systems International Conference
2015), 535 - 542.
Deng, S., Djukic, L., Paton, R. & Ye, L. (2015). Review: Thermoplastic–epoxy
interactions and their potential applications in joining composite structures – a
review. Composites Part A, 68, 121 - 132.
Devaraju, J., Suhas, K., Mohana, H. & Patil, V. A. (2015). Wireless portable microcon-
troller based weather monitoring station. Measurement, 76, 189 - 200.
De Vin, L. J., Holm, M. & Ng, A. H. (2010). The information fusion jdl-u model as a
reference model for virtual manufacturing. Robotics and Computer Integrated
Manufacturing, 26(19th International Conference on Flexible Automation and
Intelligent Manufacturing), 629 - 638.
Ding, X., Chen, X., Ma, W., Chen, X. & Li, N. (2016). A novel pqcr-l circuit for in-
ductive sensing and its application in displacement detection. IEEE Transactions
on Instrumentation and Measurement, 65(3), 685 - 693.
Duro, J. A., Padget, J. A., Bowen, C. R., Kim, H. A. & Nassehi, A. (2016). Multi-sensor
data fusion framework for cnc machining monitoring. Mechanical Systems and
Signal Processing, 66-67, 505 - 520.
Edwards-Murphy, F., Magno, M., Whelan, P. M., O’Halloran, J. & Popovici, E. M.
(2016). b+wsn: Smart beehive with preliminary decision tree analysis for agricul-
ture and honey bee health monitoring. Computers and Electronics in Agriculture,
124, 211 - 219.
Elsayed, S., Rischbeck, P. & Schmidhalter, U. (2015). Comparing the performance of
active and passive reflectance sensors to assess the normalized relative canopy
temperature and grain yield of drought-stressed barley cultivars. Field Crops
Research, 177, 148 - 160.
Emberton, S., Chittka, L., Cavallaro, A. & Menghua, W. (2016a). Sensor capability and
atmospheric correction in ocean colour remote sensing. Remote Sensing, 8(1), 1.
Emberton, S., Chittka, L., Cavallaro, A. & Menghua, W. (2016b). Sensor capability and
atmospheric correction in ocean colour remote sensing. Remote Sensing, 8(1), 1.
Francalanza, E., Borg, J. C. & Constantinescu, C. (2016). A fuzzy logic based
approach to explore manufacturing system changeability level decisions. Procedia
CIRP, 41(Research and Innovation in Manufacturing: Key Enabling Technologies
for the Factories of the Future - Proceedings of the 48th CIRP Conference on
Manufacturing Systems), 3 - 8.
Gao, Y., Zhang, H., Qi, K., Li, X. & Tong, Y. (2016). Original papers: Optimal
periphery deployment of wireless base stations in swine facilities for 1-coverage.
Computers and Electronics in Agriculture, 125, 173 - 183.
Gokturk, M. S., Gurbuz, O. & Erman, M. (2016). A practical cross layer cooperative
mac framework for wsns. Computer Networks, 98, 57 - 71.
Guestrin, C., Bodik, P., Thibaux, R., Paskin, M. & Madden, S. (2004). Distributed
regression: an efficient framework for modeling sensor network data. In In-
formation processing in sensor networks, 2004. ipsn 2004. third international
symposium on (pp. 1–10).
Hafizi, Z., Epaarachchi, J. & Lau, K. (2015). Impact location determination on thin
laminated composite plates using an nir-fbg sensor system. Measurement, 61, 51
- 57.
Hao, Y. & Helo, P. (n.d.). The role of wearable devices in meeting the needs of cloud
manufacturing: A case study. Robotics and Computer Integrated Manufacturing.
Hasani, A., Zegordi, S. H. & Nikbakhsh, E. (2012a). Robust closed-loop supply chain
network design for perishable goods in agile manufacturing under uncertainty.
International Journal of Production Research, 50(16), 4649 - 4669.
Hasani, A., Zegordi, S. H. & Nikbakhsh, E. (2012b). Robust closed-loop supply chain
network design for perishable goods in agile manufacturing under uncertainty.
International Journal of Production Research, 50(16), 4649 - 4669.
He, H., Cui, L., Zhou, F. & Wang, D. (2016). Distributed proxy cache technology based
on autonomic computing in smart cities. Future Generation Computer Systems.
Huang, B., Li, C. & Tao, F. (2014). A chaos control optimal algorithm for qos-
based service composition selection in cloud manufacturing system. Enterprise
Information Systems, 8(4), 445 - 463.
Illinois, U. O. (2014, Sep). Retrieved from http://mtil.illinois.edu/DATA/
_HARDNESS_Cylindrical_Correction_Tables/RWCyl.txt
Iounousse, J., Er-Raki, S., El Motassadeq, A. & Chehouani, H. (2015). Using an unsu-
pervised approach of probabilistic neural network (pnn) for land use classification
from multitemporal satellite images. Applied Soft Computing Journal, 30, 1 - 13.
Iyakwari, S., Glass, H. J., Rollinson, G. K. & Kowalczuk, P. B. (2016). Application of
near infrared sensors to preconcentration of hydrothermally-formed copper ore.
Minerals Engineering, 85, 148 - 167.
Jamalabadi, H., Nasrollahi, H., Alizadeh, S., Nadjar Araabi, B. & Nili Ahamadabadi, M.
(2016). Competitive interaction reasoning: A bio-inspired reasoning method for
fuzzy rule based classification systems. Information Sciences, 352-353, 35 - 47.
Jayaraman, P. P., Zaslavsky, A. & Delsing, J. (2010). Intelligent processing of k-nearest
neighbors queries using mobile data collectors in a location aware 3d wireless
sensor network. In Trends in applied intelligent systems (pp. 260–270). Springer.
Jumaidin, R., Sapuan, S., Jawaid, M., Ishak, M. & Sahari, J. (2016). Characteristics
of thermoplastic sugar palm starch/agar blend: Thermal, tensile, and physical
properties. International Journal of Biological Macromolecules, 89, 575 - 581.
Keith, M., Demirkan, H. & Goul, M. (2013). Service-oriented methodology for systems
development. Journal of Management Information Systems, 30(1), 227 - 260.
Khan, R., Khan, S. U., Zaheer, R. & Khan, S. (n.d.). Future internet: The internet
of things architecture, possible applications and key challenges. 2012 10th
International Conference on Frontiers of Information Technology, 257.
Kouyama, T., Yokota, Y., Ishihara, Y., Nakamura, R., Yamamoto, S. & Matsunaga, T.
(2016). Development of an application scheme for the selene/sp lunar reflectance
model for radiometric calibration of hyperspectral and multispectral sensors.
Planetary and Space Science, 124, 76 - 83.
Kyriazis, D. & Varvarigou, T. (2013). Smart, autonomous and reliable internet of
things. Procedia Computer Science, 21(The 4th International Conference on
Emerging Ubiquitous Systems and Pervasive Networks (EUSPN-2013) and the
3rd International Conference on Current and Future Trends of Information and
Communication Technologies in Healthcare (ICTH)), 442 - 448.
Land, W. H., Jr., Ma, X., Barnes, E., Qiao, X., Heine, J., Masters, T. & Park, J. W.
(2012). Pnn/grnn ensemble processor design for early screening of breast cancer.
Procedia Computer Science, 12(Complex Adaptive Systems 2012), 438 - 443.
Lartigau, J., Xu, X., Nie, L. & Zhan, D. (2015). Cloud manufacturing service composi-
tion based on qos with geo-perspective transportation using an improved artificial
bee colony optimisation algorithm. International Journal of Production Research,
53(14), 4380 - 4404.
Lemaignan, S., Siadat, A., Dantan, J.-Y. & Semenenko, A. (2006). Mason: A pro-
posal for an ontology of manufacturing domain. IEEE Workshop on Distributed
Intelligent Systems: Collective Intelligence and Its Applications.
Li, H., Chen, Q., Zhao, J. & Wu, M. (2015). Nondestructive detection of total volatile
basic nitrogen (tvb-n) content in pork meat by integrating hyperspectral imaging
and colorimetric sensor combined with a nonlinear data fusion. LWT - Food
Science and Technology, 63, 268 - 274.
Lionetto, F., Dell’Anna, R., Montagna, F. & Maffezzoli, A. (2016). Modeling of
continuous ultrasonic impregnation and consolidation of thermoplastic matrix
composites. Composites: Part A, Applied Science and Manufacturing, 82, 119 -
129.
Liu, K., Yang, T., Ma, J. & Cheng, Z. (2015). Fault-tolerant event detection in
wireless sensor networks using evidence theory. KSII Transactions on Internet
and Information Systems (TIIS), 9(10), 3965–3982.
Loubiere, P., Jourdan, A., Siarry, P. & Chelouah, R. (n.d.). A sensitivity analysis
method for driving the artificial bee colony algorithm’s search process. Applied
Soft Computing, 41, 515 - 531.
Maggio, M., Hoffmann, H., Papadopoulos, A. V., Panerati, J., Santambrogio, M. D.,
Agarwal, A. & Leva, A. (2012). Comparison of decision-making strategies
for self-optimization in autonomic computing systems. ACM Transactions on
Autonomous and Adaptive Systems (TAAS), 7(4), 36.
Mahendran, G. & Dhanasekaran, R. (2015). Investigation of the severity level of diabetic
retinopathy using supervised classifier algorithms. Computers and Electrical
Engineering, 45, 312 - 323.
Miguez, R., Georgiopoulos, M. & Kaylani, A. (2010). G-pnn: A genetically engineered
probabilistic neural network. Nonlinear Analysis, 73, 1783 - 1791.
Mishra, V., Lohar, M. & Amphawan, A. (2016). Improvement in temperature sensitivity
of fbg by coating of different materials. Optik - International Journal for Light
and Electron Optics, 127, 825 - 828.
Mitton, N., Papavassiliou, S., Puliafito, A. & Trivedi, K. S. (2012). Combining
cloud and sensors in a smart city environment. EURASIP journal on Wireless
Communications and Networking, 2012(1), 1–10.
Mönks, U., Trsek, H., Dürkop, L., Geneiß, V. & Lohweg, V. (2016). Towards distributed
intelligent sensor and information fusion. Mechatronics, 34(System-Integrated
Intelligence: New Challenges for Product and Production Engineering), 63 - 71.
Müller, B. J., Burger, T., Borisov, S. M. & Klimant, I. (2015). High performance optical
trace oxygen sensors based on nir-emitting benzoporphyrins covalently coupled
to silicone matrixes. Sensors Actuators B. Chemical, 216, 527 - 534.
Nie, M., Liu, Z., Li, X., Wu, Q., Tang, B., Xiao, X., ... Zheng, C. (2016). Novel
method for hyperspectral unmixing: fuzzy c-means unmixing. Sensor Review,
36(2), 184.
Oliva Teles, L., Fernandes, M., Amorim, J. & Vasconcelos, V. (2015). Video-tracking
of zebrafish (danio rerio) as a biological early warning system using two distinct
artificial neural networks: Probabilistic neural network (pnn) and self-organizing
map (som). Aquatic Toxicology, 165, 241 - 248.
Otheguy, M. E. (2010). Manufacture, repair and recycling of thermoplastic composite
boats.
Papazoglou, M. P. (2012). Web services & soa. Pearson Education.
Parashar, M. & Hariri, S. (2005). Autonomic computing: An overview. Unconventional
Programming Paradigms, 257–269.
Pereira, G., McGugan, M. & Mikkelsen, L. (2016). Test equipment: Method for inde-
pendent strain and temperature measurement in polymeric tensile test specimen
using embedded fbg sensors. Polymer Testing, 50, 125 - 134.
Peres, A. M., Pires, R. R. & Oréfice, R. L. (2016). Evaluation of the effect of repro-
cessing on the structure and properties of low density polyethylene/thermoplastic
starch blends. Carbohydrate Polymers, 136, 210 - 215.
Pu, F., Wang, Z., Du, C., Zhang, W. & Chen, N. (2016). Semantic integration of
wireless sensor networks into open geospatial consortium sensor observation
service to access and share environmental monitoring systems. IET Software,
10(2), 45–53.
Rawat, S. & Rawat, S. (2016). Multi-sensor data fusion by a hybrid methodology – a
comparative study. Computers in Industry, 75, 27 - 34.
Rojas, R. (2013). Neural networks: a systematic introduction.
Schaefer, D. (2014). Cloud-based design and manufacturing (cbdm): A service-oriented
product development paradigm for the 21st century.
Scherer, R. (2012). Multiple fuzzy classification systems (Vol. 288). Springer.
Schroth, C. & Janner, T. (2007). Web 2.0 and soa: Converging concepts enabling the
internet of services. IT professional, 9(3), 36–41.
Senge, R., Bösner, S., Dembczyński, K., Haasenritter, J., Hirsch, O., Donner-Banzhoff,
N. & Hüllermeier, E. (n.d.). Reliable classification: Learning classifiers that
distinguish aleatoric and epistemic uncertainty. Information Sciences, 255, 16 -
29.
Soliman, F. (2014). Business transformation and sustainability through cloud system
implementation.
Thermoplastic elastomer (tpe) faqs. (2015, Feb). Retrieved from http://www.polyone
.com/products/thermoplastic-elastomers/tpe-knowledge-center/tpe-faqs
Tian, T., Sun, S. & Li, N. (2016). Multi-sensor information fusion estimators for
stochastic uncertain systems with correlated noises. Information Fusion, 27, 126
- 137.
Tong, Y., Liu, L., Zhao, M., Chen, J. & Li, H. (2016).
Signal Processing, 126(Signal Processing for Heterogeneous Sensor Networks),
149 - 158.
Tzschichholz, T., Boge, T. & Schilling, K. (2015). Relative pose estimation of satellites
using pmd-/ccd-sensor data fusion. Acta Astronautica, 109, 25 - 33.
Urbina, A., Mahadevan, S. & Paez, T. L. (2011). Quantification of margins and
uncertainties of complex systems in the presence of aleatoric and epistemic
uncertainty. Reliability Engineering and System Safety, 96(Quantification of
Margins and Uncertainties), 1114 - 1125.
Wang, C., Liu, X. & Chen, Z. (2015). Probe improvement of inductive sensor for online
health monitoring of mechanical transmission systems. IEEE Transactions on
Magnetics, 51(11), 1 - 4.
Wang, S., Carpenter, D. A., DeJager, A., DiBella, J. A., Doran, J. E., Fabinski, R. P.,
... Yaniga, R. (2016). A 47 million pixel high-performance interline ccd image
sensor. IEEE Transactions on Electron Devices, 63(1), 174 - 181.
Wang, X. V., Wang, L., Mohammed, A. & Givehchi, M. (2016). Ubiquitous manufac-
turing system based on cloud: A robotics application. Robotics and Computer
Integrated Manufacturing.
Wang, Y., Chen, H., Wu, X. & Shu, L. (2016). An energy-efficient sdn based sleep
scheduling algorithm for wsns. Journal of Network and Computer Applications,
59, 39 - 45.
White, S. R., Hanson, J. E., Whalley, I., Chess, D. M. & Kephart, J. O. (2004). An
architectural approach to autonomic computing. null, 2–9.
Wu, D., Rosen, D. W., Wang, L. & Schaefer, D. (2015a).
Computer-Aided Design, 59, 1 - 14.
Wu, D., Rosen, D. W., Wang, L. & Schaefer, D. (2015b). Cloud-based design and
manufacturing: A new paradigm in digital manufacturing and design innovation.
Computer-Aided Design, 59, 1 - 14.
Xie, L., Liu, H., Wu, W., Abliz, D., Duan, Y. & Li, D. (2016). Fusion bonding of
thermosets composite structures with thermoplastic binder co-cure and prepreg
interlayer in electrical resistance welding. Materials and Design, 98, 143 - 149.
Yahya, M. A., Yahya, M. A. & Dahanayake, A. (n.d.). Autonomic computing: A
framework to identify autonomy requirements. Procedia Computer Science,
20(Complex Adaptive Systems), 235 - 241.
Yang, X., Shi, G. & Zhang, Z. (2014).
International Journal of Production Research, 52(2), 326 - 336.
Yann-Al Le Borgne. (2014, Aug). Retrieved from http://www.ulb.ac.be/di/
map/yleborgn/sensornets/code/matlabcode.zip
Young, R. I. M., Gunendran, A. G., Cutting-Decelle, A. F. & Gruninger, M. (2007).
Manufacturing knowledge sharing in plm: a progression towards the use of heavy
weight ontologies. International Journal of Production Research, 45(7), 1505 -
1519.
Yu, L., Wang, N. & Meng, X. (2005). Real-time forest fire detection with wireless
sensor networks. In Wireless communications, networking and mobile computing,
2005. proceedings. 2005 international conference on (Vol. 2, pp. 1214–1217).
Zhang, H., Bharti, A., Li, Z., Du, S., Bilotti, E. & Peijs, T. (2015). Localized
toughening of carbon/epoxy laminates using dissolvable thermoplastic interleaves
and electrospun fibres. Composites Part A, 79, 116 - 126.
Zhang, Y., Yu, R., Nekovee, M., Liu, Y., Xie, S. & Gjessing, S. (2012). Cognitive
machine-to-machine communications: visions and potentials for the smart grid.
IEEE Network, 26(3), 6 - 13.

  • 5. Abstract The first step in thermoplastic recycling is identifying the plastic waste categorically. This manual task is often inefficient and costly. This study therefore analyses the problem and presents an automatic classifier based on a WSN infrastructure. The classifier fuses data from two different sources using a Kalman filter and a neural network. The algorithm is run on a MATLAB simulator to test the results.
  • 6. Publications Title of the first publication publication details Title of the second publication publication details Title of the third publication publication details And so on publication details 6
  • 7. Contents
Copyright
Declaration
Acknowledgements
Abstract
Publications
1 Introduction
  1.1 Introduction
  1.2 Thermoplastic type, identifiers, parameters
  1.3 Hardware architecture
  1.4 Software architecture
2 Literature Review
  2.1 Introduction
  2.2 Autonomic Machines
  2.3 IoT, WSN
    2.3.1 Wireless sensory network
    2.3.2 Base station
    2.3.3 Routing in IoT sensory network
    2.3.4 Communication with base station
  2.4 Data Fusion
    2.4.1 Advantages of data fusion
    2.4.2 Motivation of sensor fusion
    2.4.3 Issues with data fusion
  2.5 Sensor fusion methodologies
    2.5.1 PNN Classifier
    2.5.2 Kalman filter
  2.6 Clusters and data aggregation
    2.6.1 Event detection and processing
    2.6.2 Localization of sensor objects
  2.7 Semantic Sensor N/W Ontology
  • 8.
3 Method
  3.1 Introduction
  3.2 Problem investigation
    3.2.1 Problem investigation
    3.2.2 Treatment design
    3.2.3 Treatment implementation and evaluation
4 Analysis
  4.1 Introduction
  4.2 Supervised learning classifier
  4.3 Feature Fusion
  4.4 Findings
    4.4.1 Feature fusion
    4.4.2 Decision fusion
    4.4.3 Final PNN Classification
5 Discussion
  5.0.1 Factors that could effect the accuracy of the algorithm
6 Conclusion and future study
  6.0.1 Future studies
References
  • 9. List of Tables
1.1 Difference between a plastic polymer and a Thermoplastic polymer taken from
1.2 Acceptable ranges for polymer attributes adapted from
2.1 Assumptions to cloud manufacturing
  • 10. List of Figures
1.1 Thermoplastic structure on heating adapted from
1.2 Thermoplastic structure on cooling adapted from
1.3 Figure 3
1.4 Sensor Fusion Process
1.5 Fused Feature matrix
1.6 Sensor Fusion based PNN classifier
1.7 Sensor Fusion flow chart
2.1 Feedback control process in autonomic system
2.2 Figure 8
2.3 PNN Classifier
2.4 Semantic sensor network ontology representation
2.5 Semantic sensor network ontology representation
3.1 Data fusion architecture
3.2 Problem investigation approach
3.3 Layered Approach architecture
3.4 Layered Approach architecture
3.5 Multisensor system design process adapted from
3.6 Figure 7
4.1 Data in PCA after feature fusion
4.2 Classifier structure
4.3 Data Fusion Algorithm
4.4 Data Fusion Algorithm
4.5 Difference between normalized and actual values of merged data adapted from
4.6 Smoothing of tensile and temperature data
4.7 Difference in state d and d' of multi-sensor data
4.8 Fusion of tensile and temperature data
4.9 Difference between normalized and actual values of merged data
4.10 Plotting of original image
4.11 Texture and contour generation
4.12 Pixel-wise comparison of source and target
4.13 Center of gravity and density of the target
4.14 Pixel-wise comparison of source and target
5.1 Temperature data before smoothing
5.2 Temperature data after smoothing
5.3 Tensile yield before and after smoothing
6.1 Pattern in Semantic sensor network taken from
6.2 Overview of the Semantic Sensor Network ontology classes and properties taken from
  • 12. Chapter 1 Introduction 1.1 Introduction All plastic products, including bottles, pose a serious threat to the environment due to the massive consumption of petroleum in plastic manufacturing, their non-biodegradable nature, and their low density-to-volume ratio. This is the elementary reason why plastics make up a major part of municipal waste throughout the world. In the past decade, intensive research has been dedicated to developing and providing efficient solutions for plastic bottle recycling. Research in this area attempts to increase the consumption of recyclable plastic worldwide, and of recyclable thermoplastic in particular. Thermoplastic is made from polymer resin that melts into a homogenized liquid when heated and turns hard when cooled (Jumaidin, Sapuan, Jawaid, Ishak & Sahari, 2016). This type of plastic can be re-molded, reheated, and repeatedly solidified, which makes it recyclable. Major types of thermoplastic polymer used in today's industries include: 1) acrylonitrile butadiene styrene (ABS), used for manufacturing sports equipment and automobile parts; 2) polycarbonate, used for manufacturing compact discs, drinking bottles (PET bottles), storage containers, etc.; and 3) polythene, used to make shampoo bottles, plastic grocery bags, etc. (Jumaidin et al., 2016; Coskun, 2015). The interest in recycling thermoplastic is due to the fact that it is a crucial component of everyday use. Furthermore, a single unit of usage (e.g. a single plastic bottle) might be composed of several types of thermoplastic. Since most forms of thermoplastic resin polymer can be reused (Deng, Djukic, Paton & Ye, 2015), the majority of research on sustainable plastic usage for environmental conservation combines information technology, sensors, and web-based control modules to build autonomous systems for plastic recycling. The very first process in recycling plastics is sorting the plastic based on colour, shape, and roughness. Currently, most plants throughout the world use manual processes for plastic recycling (Peres, Pires & Oréfice, 2016), wherein trained operatives classify the polymer of a plastic item based on colour and shape. This approach, however, is both expensive and ineffective. The reason this primitive sorting approach persists is that recycling systems lack human perception (Peres et al., 2016): human sensory receptors are capable of combining signals and data from the body (sight, touch, smell, sound, and taste) through which, even with vague knowledge of the surrounding environment, we can dynamically create, visualise, and update models of real-world objects. Therefore, this study considers the use-case scenario of an automatic thermoplastic bottle classification system based on a combination of the chemical properties of the resin, colour, and texture. The hardware architecture of this thesis is composed of three sensors: 1) computer vision in spectra, 2) NIR (near-infrared) spectroscopy, and 3) inductive sensors. The software architecture is based on statistical classifiers and is designed to achieve multi-sensor data fusion. The purpose of this study is to develop a sustainable classification solution for plastic recycling plants using advancements in the fields of web services, sensors, the Internet of Things, and cloud computing.
  • 14. 1.2 Thermoplastic type, identifiers, parameters: In order to create a sorting algorithm, it is of the utmost importance to know the identifying characteristics of thermoplastics and the unique combination of attributes that differentiates one type of thermoplastic from another (Oliva Teles, Fernandes, Amorim & Vasconcelos, 2015). The modern consumer market contains many forms of thermoplastic in daily use, and each variety has a mix of parameters that gives it a unique look and feel. Conventional thermoplastics are known as two-phase systems (Deng et al., 2015), the classifying property being temperature (Lionetto, Dell'Anna, Montagna & Maffezzoli, 2016): the first phase is the temperature required for a thermoplastic to reach its liquid resin form (Otheguy, 2010), and the second is the temperature required for it to reach its solidified form. Therefore, the very first identifying parameter is the temperature range. Some of the other properties that help in the classification process are: 1) Tensile properties: the tensile properties of a thermoplastic polymer deal with the elasticity of the polymer. They describe to what extent a thermoplastic resin can be stretched, and tests may indicate how a polymer will perform in actual usage conditions (H. Zhang et al., 2015). 2) Tensile at break: also known as the ultimate tensile strength, this indicates the point at which a thermoplastic polymer will break. The force required to stretch the polymer until it breaks is measured in pounds per square inch (psi); it can also be presented in megapascals (MPa) (H. Zhang et al., 2015). Thermoplastic polymers with higher tensile-at-break values are more difficult to break by stretching than their lower-value counterparts. 3) Tear strength: this represents how resistant the thermoplastic polymer is to tearing. This property is somewhat similar to tensile at break except for a propagation point, which symbolizes the force at which the polymer tears completely. Therefore, tensile at break refers to the point where the polymer starts breaking, and tear strength refers to the end point at which the polymer is completely torn. The unit of measurement for this attribute is typically psi or kilonewtons per meter (kN/m) (H. Zhang et al., 2015). 4) Tensile modulus: in this test the polymer is stretched over a range of elongation points and its stretching is measured at each elongation. The unit of measurement for this parameter is the percentage of the original length of the thermoplastic polymer; for example, the outcome could be 50, 100, or 300 percent of the original length of the polymer resin. A thermoplastic polymer with strong tensile strength (elasticity) may become weaker as it elongates; this is also known as necking (H. Zhang et al., 2015). 5) Elongation at break: this measures the length to which the polymer stretches before it breaks, also expressed as a percentage of the original length. A soft polymer will have a higher value than a hardened one (H. Zhang et al., 2015). In thermoplastic polymers, certain factors that depend on the method by which the resin is molded, and on the mold used, affect the tensile properties (Jumaidin et al., 2016). It is for this reason that tensile properties are measured both in the direction of flow and transverse to it. 1) Direction of flow: the orientation of thermoplastic polymers on molding greatly affects their tensile properties, which may vary greatly depending on whether the polymer was stretched in the direction of polymer flow or in the transverse direction during the molding process (Coskun, 2015). 2) Extruded or injection-based plaques: it is very important to conduct tests on outcomes of similar plaques (i.e. compare the outcomes of one extrusion plaque with another of the same type, as comparing with an injection-based mold will lead to an ambiguous outcome) (Xie et al., 2016).
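The tensile figures above mix units (psi, MPa, kN/m). As a minimal illustration of keeping them comparable, the sketch below converts psi to MPa using the standard factor 1 psi = 0.006894757 MPa; the function name is ours, not from the thesis.

```python
# Illustrative unit helper for the tensile figures quoted above.
# Standard conversion: 1 psi = 6.894757 kPa = 0.006894757 MPa.
PSI_TO_MPA = 0.006894757

def psi_to_mpa(psi):
    """Convert a tensile value in pounds per square inch to megapascals."""
    return psi * PSI_TO_MPA

# Example: a 13800 psi tensile-strength figure.
print(round(psi_to_mpa(13800), 1))  # -> 95.1
```

A classifier comparing tensile-at-break readings from datasheets in different unit systems would normalise them this way before training.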
Compression set for thermoplastic polymers: the compression set is the permanent deformation that occurs to the polymer when it is exposed to a specific temperature. The test performed on thermoplastic polymers to check for compression-set deformities is the ASTM D395 test. This test method defines that the normal range of compression in a thermoplastic polymer should be around 25-30 percent. In a recycling plant, this test is conducted to identify the true-positive deformed units, with sampling done every half hour. The samples marked as true positives are the ones that fail to return to their original height. This deformity occurs at a specific time and at a specific temperature (Otheguy, 2010). Hardness in thermoplastic polymers: the relative hardness or softness is the primary property to be considered while classifying a thermoplastic polymer. Hardness is also closely related to other polymer properties, such as tensile qualities and flexural modulus. The hardness of a polymer is commonly measured by an instrument called the Shore durometer: a metal indenter is pushed into the surface of the hardened polymer by a spring, and the instrument measures how far it penetrates. The penetration depth ranges from 0 to 0.100 inches, where zero indicates that the indenter is at the maximum depth and 100 indicates that no penetration is detected. Readings can be taken either immediately or after a certain delay; immediate readings are always higher than delayed readings, and the delayed reading represents the resiliency of the polymer along with its hardness (H. Zhang et al., 2015). Flexural modulus is the measure of resistance to bending in a thermoplastic polymer. This property is closely related to, and often confused with, hardness, which typically measures resistance to indentation. Hardness and flexural modulus are directly correlated (if the value of hardness goes up, so does the flexural modulus), and both properties affect how a thermoplastic polymer feels in a customer's hands (H. Zhang et al., 2015). How does a thermoplastic polymer differ from an ordinary plastic material? Thermoplastic polymers repeatedly soften or melt when heated and solidify when cooled; thermoplastic can therefore burn to some degree. The transition temperatures of thermoplastic polymers vary greatly with the type and grade of the polymer. In most thermoplastic polymers, the molecular chains can be represented as loosely coupled, intertwining strings resembling spaghetti. Figure 1.1 shows the structure of the molecules on heating. Figure 1.1: Thermoplastic structure on heating, adapted from (Thermoplastic Elastomer (TPE) FAQs, 2015) On cooling, the molecules are held together more firmly, as shown in Figure 1.2. Some of the basic distinguishing factors between thermoplastic and conventional plastic polymers are given in Table 1.1. Shrinkage in thermoplastic polymers: when the polymers harden, they shrink the overall size of the molded part. Even shrinkage levels can reduce the deformities of the molded thermoplastics. Shrinkage normally relates to molding and removal of the final thermoplastic product (e.g. a bottle), and the shrinkage level should always be even (H. Zhang et al., 2015).
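The half-hourly compression-set spot check described above can be sketched as follows. This is an illustration only: the 30 percent upper bound is assumed from the quoted 25-30 percent "normal" range, and the function names are ours, not from the thesis or the ASTM D395 text.

```python
# Hedged sketch of the compression-set spot check described above.
# Threshold (30%) is an assumption taken from the quoted normal range.

def compression_set_pct(original_height, recovered_height):
    """Permanent deformation as a percentage of the original height."""
    return 100.0 * (original_height - recovered_height) / original_height

def is_deformed(original_height, recovered_height, max_ok_pct=30.0):
    """Flag a sample as a true-positive deformity when its permanent
    set exceeds the assumed upper bound of the normal range."""
    return compression_set_pct(original_height, recovered_height) > max_ok_pct

print(is_deformed(10.0, 7.5))  # 25% set, within the normal range -> False
print(is_deformed(10.0, 6.0))  # 40% set, flagged as deformed -> True
```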
  • 18. Figure 1.2: Thermoplastic structure on cooling, adapted from (Thermoplastic Elastomer (TPE) FAQs, 2015) Therefore, based on the above attributes, Table 1.2 gives the range of acceptable values for the recycling of thermoplastic polymers and forms the basis of the training dataset for our multi-sensor environment.
  • 19.
Table 1.1: Difference between a plastic polymer and a Thermoplastic polymer, taken from (Thermoplastic Elastomer (TPE) FAQs, 2015)

VARIABLE            | Thermoplastic polymer                | Plastic polymer
Fabrication         | Rapid (seconds)                      | Slow (minutes)
Scrap               | Reusable                             | High percentage waste
Curing agents       | None                                 | Required
Machinery           | Conventional thermoplastic equipment | Special vulcanizing equipment
Additives           | Minimal or none                      | Numerous processing aids
Design optimization | Unlimited                            | Limited
Remold parts        | Yes                                  | Unlikely
Heat seal           | Yes                                  | No

Table 1.2: Acceptable ranges for polymer attributes, adapted from (Otheguy, 2010)

Physical characteristic | Min value              | Max value
Gravity                 | 1.20 N/Kg              | 1.20 N/Kg
Molding shrinkage flow  | 0.10%                  | 0.30%
Tensile strength        | NA                     | 13800 psi
Flexural modulus        | NA                     | 870000 psi
Flexural strength       | NA                     | 23200 psi
Mold temperature        | 149 degrees Fahrenheit | 203 degrees Fahrenheit
Cooling temperature     | 194 degrees Fahrenheit | 212 degrees Fahrenheit
Cooling time            | 4 hours                | 6 hours
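The acceptable ranges in Table 1.2 can be encoded directly as (min, max) pairs so a candidate sample can be screened before classification. A minimal sketch, assuming our own flattened attribute names; None stands in for the table's "NA" minimums, and units follow the table (psi, degrees Fahrenheit, hours).

```python
# Acceptable ranges from Table 1.2 as (min, max) pairs.
# Attribute names are our own flattening, not identifiers from the thesis.
ACCEPTABLE = {
    "molding_shrinkage_flow_pct": (0.10, 0.30),
    "tensile_strength_psi": (None, 13800),
    "flexural_modulus_psi": (None, 870000),
    "flexural_strength_psi": (None, 23200),
    "mold_temperature_f": (149, 203),
    "cooling_temperature_f": (194, 212),
    "cooling_time_hours": (4, 6),
}

def within_range(attribute, value):
    """True if a measured value lies inside the table's acceptable range."""
    lo, hi = ACCEPTABLE[attribute]
    return (lo is None or value >= lo) and value <= hi

# Screen one hypothetical sample's measured attributes.
sample = {"tensile_strength_psi": 12000, "mold_temperature_f": 180}
print(all(within_range(k, v) for k, v in sample.items()))  # -> True
```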
  • 20. 1.3 Hardware architecture In order to design an adequate system for a thermoplastic classifier, we need to consider the following questions: 1) Which types of sensors should be used? 2) How do we integrate these sensory inputs to form a singular model for the classifier? 3) How do we design the prototype that presents the designed solution? Classification of thermoplastic requires us to take into account data from different devices, as the data from a single sensor will not give us sufficient information to successfully classify a polymer into a type. Therefore, we require the conjunction of different sensors, each of which gives us data for a single attribute (e.g. temperature). This process of combining data from different sensors is called sensor fusion. Each of the sensors used in this study has a specific task (identifying attributes) associated with the automated classifier. 1) CCD camera sensor: this sensor has a two-fold use in our classifier. Firstly, it can be used to identify the gravity of the object in its perception; secondly, it can be used to identify the color and texture of the thermoplastic shard it processes. Of the three types of thermoplastic material considered in this study (ABS, PET, and polythene), PET bottles in particular are characterized by a solid texture and are available in different shapes and colors. Therefore, this sensor can be used to correctly identify the type of thermoplastic based on color, texture, and shape (Chen et al., 2016; S. Wang et al., 2016; Tzschichholz, Boge & Schilling, 2015). 2) Inductive sensors: inductive sensors are a type of non-contact proximity sensor, useful for identifying the contents of the plastic shards without physically touching the components. In particular, this sensor is useful in identifying components which have been metallized.
This sensor consists of induction loop, the inductance of which changes in accordance to the material contents of the plastic shards. The changes in the loop inductance reflects in current flow fluctuation which is captured by the sensors
  • 21. Chapter 1. Introduction 21 (C. Wang, Liu & Chen, 2015) (Chang et al., 2016). 3)NIR sensors: Near infrared spectroscopy or NIR sensors is a device to measure density and case of thermoplastics it can be used to measure and study the transmittance or absorption between different plastics when exposed to infrared laser. The NIR sensors can also measure surface temperature and particle state calibration in polymer resins. NIR sensor output is multivariate. It is often used to extract specific features of polymer resins like surface temperature, shrinkage, hardness etc. (Beguš, Begeš, Drnovšek & Hudoklin, 2015) (Müller, Burger, Borisov & Klimant, 2015) (Iyakwari, Glass, Rollinson & Kowalczuk, 2016) 4)Polymer FBG sensors: A fiber optic sensor uses optical fiber for either sensing or as a means to relay signals from sensor to electronic measuring unit. Fiber optic sensors are mostly used to measure strain, temperature, pressure among other qualities. It provides a very distributed sensing over large distances. Polymer FBG sensor is an extension of fiber optic sensor. It is very attractive due to its many qualities such as flexibility, high strain measurement range, and low density. It can also vary the intensity of light and can used for remote sensing. In case of thermoplastic resin, polymer based fiber bragg grating sensor can be used to measure all tensile properties and temperatures. 5)Hyperspectral system: A spectral image is used to collect and process information across an electromagnetic spectrum. This means that hyperspectral image identifies a unique spectrum for each pixel in the scanned object. Its purpose is to identify certain characteristics of materials and is a part of detecting processes. The human perception spectrum is capable of classifying light in mostly 3 bands namely green, blue, and red), however a spectral image classifies a spectrum in many more bands including those which are not visible to human eye. 
The simplest form of hyperspectral system combines a monochrome camera, a hyperspectral lens, and a spectrograph (Kouyama et al., 2016) (Li, Chen, Zhao & Wu, 2015) (Castorena, Morrison, Paliwal & Erkinbaev, 2015).
1.4 Software architecture

Multisensor data fusion can be processed at four levels of categorization, according to the stage at which fusion takes place: signal level, spectrum level, feature level, and decision level, as illustrated by Figure 1.3 on page 26. 1) Signal level fusion: signals from different sensors are combined to create a new fused signal with a better signal-to-noise ratio than the originals (Ao, Yongcai, Lu, Brooks & Iyengar, 2016). 2) Spectrum level fusion: the information associated with each point is derived from a matrix of associated features from different sensors and is plotted on a spectral image to create a model that a computing system interprets for further evaluation (Ao et al., 2016). 3) Feature level fusion: salient features of the objects recognized by the sensors are extracted and placed in matrix form. The salient features depend on the environment; examples include edges or textures. The features extracted from different sensors are fused and placed in a feature matrix (Ao et al., 2016). 4) Decision level fusion: fusion based on the lower level categories above (signal, spectrum, and feature level extraction) yields a final fused decision. The thermoplastic contents of a recycling plant are first classified by lower level fusion on features and spectra; the individual extraction results are then combined by applying decision rules to complete the sorting process (Ao et al., 2016) (Duro et al., 2016) (Mönks, Trsek, Dürkop, Geneiß & Lohweg, 2016). In this study all levels of fusion except signal fusion are used and form the core of the fusion algorithm. We consider this study through a wireless sensor network (WSN) and a cloud manufacturing platform.
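The signal level case can be illustrated with a short sketch (not from the thesis; the sensor count and noise model are invented for the example): averaging several co-located readings of the same quantity yields a fused signal whose noise shrinks roughly with the square root of the number of sensors.

```python
import random

def fuse_signal_level(readings):
    """Signal-level fusion: combine raw signals from several sensors
    into one fused signal by simple averaging."""
    return sum(readings) / len(readings)

# Four hypothetical sensors observing the same true value with additive noise.
random.seed(42)
true_value = 20.0
readings = [true_value + random.gauss(0.0, 1.0) for _ in range(4)]
fused = fuse_signal_level(readings)
# The averaged (fused) reading tends to sit closer to true_value than any
# individual noisy reading, i.e. it has a better signal-to-noise ratio.
```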
In a cloud based manufacturing setup there is bound to be a large number of sensors, randomly distributed in the deployment area. The sensors are organized into clusters based on
their type and spatial proximity. Each cluster has a cluster head, which is responsible for synchronizing and coordinating the fusion activity and transferring data to the base station. The sensor nodes collect the following information: 1) CCD camera: shape and color (Tzschichholz et al., 2015). 2) NIR sensor: surface temperature (Hafizi, Epaarachchi & Lau, 2015) (Müller et al., 2015). 3) Inductive sensors: component information, specifically the metallic content of the shards (Ding, Chen, Ma, Chen & Li, 2016). 4) Polymer FBG sensor: tensile properties of the resin. A general block diagram of the fusion process is shown in Figure 1.4. The sensor nodes collect data D1, D2, D3 from the environment. The values from these nodes may not be precisely true: the data generated by the sensor nodes contains redundancies and noise. Such uncertain data can be produced due to manufacturing or environmental conditions, and the uncertainties can adversely affect decision fusion and efficient power consumption in a wireless sensor network (Mishra, Lohar & Amphawan, 2016) (Pereira, McGugan & Mikkelsen, 2016). This study uses a hybrid of a Kalman filter and a probabilistic neural network (PNN): data fusion uses the Kalman filter, while feature extraction and decision fusion occur through PNN derivatives. The algorithm proceeds as follows. Step 1, preprocess data: the data should be free of inconsistencies and errors. The PNN training step is used here; the training data uses a predictor variable, shown in equation 1.1 (Oliva Teles et al., 2015):

Φ = d′ − d (1.1)

The predictor variable is the difference between the data captured by a sensor and the normalized value of that data, and represents the proximity of the captured value to the actual value. Here d′ is the normalized value obtained by training on previous data sets and d is the captured
value. Therefore, the percentage of correct data packets received is given by equation 1.2, where C is the percentage of correct packets received and N is the total number of events processed:

C = (Φ / N) × 100 (1.2)

Step 2, feature extraction: feature extraction is commonly used in the structural analysis of dynamic components, and it plays an important role in improving the performance of the classifier when sorting thermoplastics by their features, since frequency and its related modal parameters are relatively simple to extract from the structural characteristics and responses received from sensory devices. Normalized frequency is a unit of measurement corresponding to frequency, represented in the continuous time variable t. Normalized frequency is expressed in cycles per sample, so in our classifier it is represented by seconds per sample (number of samples). There are three delimiters to our feature extraction (Land et al., 2012). 1) Frequency change ratio: the first delimiter measures the changes in the components of the sensor data. Given by equation 1.3, it measures the changes in the structural components being classified (Land et al., 2012):

NFCR_i = FFC_i / Σ_{j=1}^{n} FFC_j (1.3)

2) The second measures the changes in the modal points of the sensor data, a measure of the dimensional accuracy of the classifier, shown in equation 1.4 (Iounousse, Er-Raki, El Motassadeq & Chehouani, 2015):

FMCR_i = MCR_i(k) / Σ_{j=1}^{n} MCR_j(k) (1.4)

3) The curvature change ratio, shown in equation 1.5, captures differences in the structures
of the sensed components:

NCCD_j = (Cu_i(j) − Cd_i(j)) / Σ (Cu_i(j) − Cd_i(j)) (1.5)

Here FFC_i and MCR_i are the frequency change and modal change ratios derived from training on the sensory data. Step 3, feature level data fusion: the extracted features are fed into individual matrices and the fused matrix is then calculated. This is a well-rounded matrix representing all the information the classifier needs to decide the grade and category of a plastic. This process, illustrated with our thermoplastic classifier, is shown in Figure 1.6, and an example fused feature matrix is shown in Figure 1.5. Step 4, FIS induced decision fusion: this step is where the proposed data fusion takes place. Decision theory and making appropriate decisions play an important role in decision fusion because of the complex situations encountered in a multisensor fusion process; selecting an appropriate criterion for deriving the optimal path, by choosing an estimation/filtering algorithm, is a lengthy and complicated process in itself. Decision fusion concerns how to make decisions, and which decision is most appropriate, given the status of an object, the occurrence of an event, or a given scenario. This is usually done on the basis of objective analysis; sometimes a decision is taken on the basis of data extracted from sensors and the processing of those datasets (Tong, Liu, Zhao, Chen & Li, 2016). There are two types of decision processes (Rawat & Rawat, 2016). The first is normative decision making, which describes how decisions should be made rather than how they are actually made. The other is descriptive decision making. The main difference between the two is the element of artificial intelligence (AI), which models how humans think and their rationale. This AI has several facets: (i) logical decision making, (ii) perception, (iii) planning, (iv) learning, and (v)
action. Descriptive decision making also includes 1) identifying the actual issue or current situation, 2) the ability to collect data and the necessary details pertaining to the problem, 3) the ability to arrive at an intermittent solution by generating feasible processing options, 4) conducting analysis of the intermittent solution, and 5) implementing the intermittent solution and taking action to implement the final decision. For the purpose of this study we use descriptive decision making. The final decision model is used for successful classification; the consequent processes needed to attain successful decision fusion are discussed further in this section (Rawat & Rawat, 2016). The sensors are embedded with a fuzzy logic controller, whose purpose is to calculate the confidence level of collected data depending on the current condition of the sensor nodes. Each sensor node is responsible for collecting the data and the confidence factor, and the data fusion algorithm collects both. To derive the confidence factor, the signal-to-noise ratio of a sensor input device is collected; a poor signal-to-noise ratio is indicative of poor sensor health and therefore yields a lower confidence factor (Tian, Sun & Li, 2016). The fuzzy logic controller determines whether the sensor data and the signal-to-noise ratio are in an acceptable range (Mönks et al., 2016); this range is determined through training data. A sensor that is out of range will have a confidence factor between 0 and 100 percent, and the confidence value of a sensor can be 100 percent only if its health and environmental factors are desirable. A minimum acceptable value is decided for each sensory input, and if the screened value is below this acceptable range the value is rejected. For fusion purposes the message packet also includes the node id of the source node (Mönks et al., 2016).

DF = ((CF1 × D1) + (CF2 × D2) + (CF3 × D3) + (CF4 × D4) + ... + (CFn × Dn)) / (CF1 + CF2 + CF3 + CF4 + ... + CFn) (1.6)
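Equation 1.6 is a confidence-weighted average of the sensor readings. A minimal sketch of it follows (the function and variable names are mine, not the thesis's):

```python
def decision_fusion(data, confidence):
    """Equation 1.6: DF = sum(CF_i * D_i) / sum(CF_i), where D_i is a
    sensor reading and CF_i its confidence factor."""
    if not data or len(data) != len(confidence):
        raise ValueError("data and confidence must be non-empty and equal length")
    return sum(cf * d for cf, d in zip(confidence, data)) / sum(confidence)

# Equal confidence reduces to a plain average; a higher-confidence
# reading pulls the fused value towards itself:
print(decision_fusion([10.0, 20.0], [1.0, 1.0]))  # 15.0
print(decision_fusion([10.0, 20.0], [3.0, 1.0]))  # 12.5
```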
Cluster heads are responsible for aggregating data from their subordinates. The multisensor fusion process is started by the cluster heads at the completion of each round of data collection; the data aggregation is given by equation 1.6, where Dn represents the data received from similar cluster members and CFn is the confidence factor of the data collected by each sensor node. Data fusion then combines the data collected by the different cluster heads; in other words, data fusion combines data with higher certainty levels. Because sensor fusion combines data from different sources, it provides a better view of the environment (Ari, Yenke, Labraoui, Damakoa & Gueroui, 2016), (Dehghani, Pourzaferani & Barekatain, 2015). As an example of the overall process, consider data from three temperature units, four tensile yield units, and three component analysis units. Let the temperature data collected be 10, 15, and 20 degrees Celsius with confidence factors of 75, 65, and 41 percent respectively; the fused temperature would then be 15.93. Similarly, if the fused tensile yield and component analysis values are 49.8 and 26.8 respectively, the feature fusion matrix will have the values 15.93, 49.8, and 26.8. The feature fusion matrices from the different sources (cluster heads) are finally fused. After fusion is completed, if no change has occurred at any cluster head no packets are sent to the base station; alternately, if there is constant change, the base station is alerted. The successfully fused data is finally stored for training. The entire process is shown in the flowchart in Figure 1.7.

f(x) = (1 / ((2π)^(n/2) σ^n)) exp[− (x − x_ij)^T (x − x_ij) / (2σ²)] (1.7)

p_i(x) = (1 / ((2π)^(n/2) σ^n)) (1 / N_i) Σ_{j=1}^{N_i} exp[− (x − x_ij)^T (x − x_ij) / (2σ²)] (1.8)
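Equations 1.7 and 1.8 are the Gaussian kernel and class-conditional density estimate of a probabilistic neural network. A sketch of the density estimate and the resulting decision layer follows; the kernel width sigma and the toy two-class training vectors are assumptions made for illustration, not data from the thesis:

```python
import math

def pnn_density(x, training, sigma):
    """PNN estimate of p_i(x) for one class (equation 1.8): the average of
    Gaussian kernels centred on the class's training vectors x_ij."""
    n = len(x)
    norm = 1.0 / (((2 * math.pi) ** (n / 2)) * (sigma ** n))
    total = 0.0
    for xij in training:
        sq_dist = sum((a - b) ** 2 for a, b in zip(x, xij))
        total += math.exp(-sq_dist / (2 * sigma ** 2))
    return norm * total / len(training)

def classify(x, classes, sigma=1.0):
    """Decision layer: pick the class with the highest estimated density."""
    return max(classes, key=lambda c: pnn_density(x, classes[c], sigma))

# Toy example: two plastic grades described by 2-D feature vectors.
classes = {"PET": [(0.0, 0.0), (0.2, 0.1)], "HDPE": [(2.0, 2.0), (2.1, 1.9)]}
print(classify((0.1, 0.0), classes))  # PET
```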
x_{k+1} = A · x_k + B · u_k + w_k (1.9)
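Equation 1.9 is the Kalman filter's state-prediction step. A scalar sketch follows (the constant values are illustrative, not taken from the thesis):

```python
def kalman_predict(x, A, B, u, w=0.0):
    """Equation 1.9 for a scalar state: x_{k+1} = A*x_k + B*u_k + w_k,
    where w is a sample of the process noise."""
    return A * x + B * u + w

# Position advanced by a velocity command, with the time step folded into B:
print(kalman_predict(1.0, A=1.0, B=0.1, u=2.0))  # 1.2
```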
Figure 1.3: Levels of Data Fusion, adapted from (Duro, Padget, Bowen, Kim & Nassehi, 2016)
Figure 1.4: Sensor Fusion Process
Figure 1.5: Fused Feature Matrix
Figure 1.6: Sensor Fusion based PNN Classifier
Figure 1.7: Sensor Fusion Flow Chart
Chapter 2 Literature Review

2.1 Introduction

The advent of IoT, wireless capabilities, and an increasing number of devices with varying features and potential is changing the business paradigm radically. Cloud computing allows its users to tap the full potential of web services to create powerful, loosely coupled infrastructures that are autonomous (Hao & Helo, n.d.). Cloud based manufacturing (CBM) is a scalable, agile infrastructure for manufacturing operations (Wu, Rosen, Wang & Schaefer, 2015b). CBM has a decentralized structure and functions as part of a network. CBM consists of loosely coupled systems working together in real time; the backbone of a CBM contains several enabling technologies such as cloud computing and service oriented architecture (SOA) (Wu, Rosen, Wang & Schaefer, 2015a). It can therefore be inferred that the cloud computing framework makes it possible to collect, assemble, and use IoT data, and that cloud computing is thus the center of the manufacturing network. There are three dimensions to a cloud based manufacturing network:
1) IoT (Internet of Things): this represents all the devices (and their respective sensors) connected to a cloud manufacturing network, including the data created by these devices. 2) IoS (Internet of Services): this includes all the underlying services, including manufacturing services, cloud services, and web services in general. Manufacturing processes in general include several components such as raw material, semi-finished products, demand-supply data, and product and process structure information. To present a formal representation of these processes and information, in addition to the conventional compute, storage, and network included in IaaS, PaaS, and SaaS, cloud manufacturing has many more services. These may include integration as a service, simulation as a service, maintenance as a service, and design as a service (Schroth & Janner, 2007). 3) IoU (Internet of Users): IoU comprises the entities interacting with the system, including service providers (cloud service providers and others), operators, and end users (consumers) (Wu et al., 2015a). Wu et al. (Wu et al., 2015a) have presented eight requirements for cloud manufacturing, listed in Table 2.1 on the next page. These include cloud based shared storage, Internet connectivity with the ability to share data and information, and a multitenancy architecture. This study is based on the assumption that these requirements are met for all cloud based manufacturing units. The system architecture of cloud manufacturing follows a layered approach: 1) Perception layer. 2) Network layer. 3) Middleware layer.
Table 2.1: Assumptions to cloud manufacturing

R1: To connect individual service providers and consumers in the networked manufacturing setting, a CBM system should support social media-based networking services. Social media such as Quirky allows users to leverage crowd-sourcing in manufacturing. In addition, social media does not only connect individuals; it also connects manufacturing-related data and information, enabling users to interact with a global community of experts on the Internet.
R2: To allow users to collaborate and share 3D geometric data instantly, a CBM system should provide elastic, cloud-based storage that allows files to be stored, maintained, and synchronized automatically.
R3: Should have an open-source programming framework that can process and analyze big data stored in the cloud, e.g. MapReduce.
R4: To provide SaaS applications to customers, a CBM system should support the multitenancy architecture. Through multi-tenancy, a single software instance can serve multiple tenants via a web browser. According to Numecent, a cloud platform called Native as a Service (NaaS) has been developed to deliver native Windows applications to client devices. In other words, NaaS can "cloudify" CAD/CAM software such as SolidWorks without developing cloud-based applications separately. With such a multi-tenant platform, such programs can be run as if they were native applications installed on the user's device.
R5: To allocate and control manufacturing resources (e.g., machines, robots, manufacturing cells, and assembly lines) in CBM systems effectively and efficiently, real-time monitoring of material flow and of the availability and capacity of manufacturing resources becomes increasingly important in cloud-based process planning, scheduling, and job dispatching. Hence, a CBM system should be able to collect real-time data using IoT technologies such as radio-frequency identification (RFID) and store these data in cloud-based distributed file systems.
R6: Should provide IaaS, PaaS, HaaS, and SaaS applications to users.
R7: Should support an intelligent search engine to help users answer queries.
R8: Should provide a quoting engine to generate instant quotes based on design and manufacturing specifications.
4) Application layer. 5) Business layer. The lowest layer, called the perception layer, consists of sensors and sensory devices, including RFID, GPS cards, etc. The second layer consists of network modules such as GPRS, Zigbee, 3G, infrared, and Bluetooth; this layer provides the means of transmitting information perceived by the sensor devices to middleware web services, IoT gateways, and user interface modules, which capture the transmitted data and pass it on to the next layer. The middleware layer takes care of web services and their interaction; it also includes database modules for storage, for example MongoDB and other NoSQL and MySQL databases (Wu et al., 2015a). This layer also consists of ubiquitous computing and cloud computing modules, a very important component especially in cloud manufacturing (Mitton, Papavassiliou, Puliafito & Trivedi, 2012): it allows users to extract data from anywhere and can also be used to attain some degree of remote control, as stated by (smart home). The application layer typically consists of user interfaces or management applications that use or manipulate the data from the lower layers. The business layer is used for designing business process flows, including model data, flow charts, and graphs (Y. Zhang et al., 2012). The main features that characterize the cloud manufacturing framework are 1) data availability and 2) complexity; these features make the framework a viable option for IoT. Cloud manufacturing provides high speed access to on-demand resources, as well as quick storage options for sensory data. The cloud computing model of manufacturing consists of loosely coupled modules interacting with each other over a common backbone (Wu et al., 2015a). Thanks to on-demand compute and storage, the cloud manufacturing model is able to process large data-sets, which is not possible in conventional manufacturing.
This manufacturing model follows an input-reply approach, wherein a master node divides the input task into sub-tasks, which are then assigned to be processed by child
nodes (Schaefer, 2014), (Hao & Helo, n.d.), (Kyriazis & Varvarigou, 2013), (Yang, Shi & Zhang, 2014). This allows complex operations to be assigned and completed in parallel. On completion of a sub-task the child node notifies the parent node. The correspondence between parent and child nodes takes place through HTTP codes as follows: 1) 1XX level codes represent informational notifications (optional errors). 2) 2XX level codes indicate that the task was completed successfully by a sub node. 3) 3XX level codes indicate redirection: the node has several other child nodes and is waiting for its sub-tasks to be executed. 4) 4XX level codes represent error sets in child nodes. 5) 5XX level codes represent errors in the parent nodes. These error codes are analogous to the REST API HTTP codes used for web services. Another assumption of the cloud manufacturing model is that the machines and devices in one layer of the hierarchy have the same make and configuration; that is, child devices under one parent are identical to each other. This makes the QoS of these machines and devices quantifiable (Lartigau, Xu, Nie & Zhan, 2015), (Huang, Li & Tao, 2014). From studies such as (Aydogdu, Akın & Saka, 2016), (Chakaravarthy, Marimuthu, Ponnambalam & Kanagaraj, n.d.), (Loubiere, Jourdan, Siarry & Chelouah, n.d.), (Anuar, Selamat & Sallehuddin, n.d.) it can be inferred that an allocation optimization algorithm like ABC (artificial bee colony) is well suited to loosely coupled systems requiring precision in their operations. The ABC algorithm gives the optimal task allocation path in a cloud for manufacturing. In this algorithm there are three entities: 1) worker nodes, 2) scout nodes, and 3) onlooker nodes (Anuar et al., n.d.). The worker nodes do a job using one resource; there has to be a one-to-one relationship between a worker node and a resource. The onlookers are those nodes that are
waiting for a signal of completion from the worker nodes. According to the input-reply approach mentioned in section 2.1 on the preceding page, worker nodes can be considered master nodes, and an onlooker node is a child node that depends on its master to complete the processing of its jobs and then allocates the sub-tasks to its own children. If child nodes are unresponsive, the tasks are routed to other child nodes; it is for this reason that all child nodes under a master node need to have the same make (in terms of configuration and machine model, they need to be exact replicas of each other). The scout nodes are master nodes that have encountered an error. A master node, on encountering an error, gives a 5XX level error code; its jobs are re-routed to its replicas, which then process the master's job and communicate with the failed master's children. This model is suitable for cloud manufacturing, as clouds provide scalable, on-demand resources and routing facilities to under-allocated reserves to speed up the operation (buyya cloud char). Uncertainties in manufacturing operations can be defined as discrepancies between the desired output and the actual one (Hasani, Zegordi & Nikbakhsh, 2012a). Uncertainties in manufacturing can be categorized as aleatoric and epistemic. Aleatoric uncertainties are represented by unknown outcomes in every run (Senge et al., n.d.); they can be caused by human cognitive impairments resulting in ambiguities and vagueness (Hasani, Zegordi & Nikbakhsh, 2012b), and by inadequate, erroneous, or missing data (Francalanza, Borg & Constantinescu, 2016). Epistemic uncertainties arise from wrong modeling of data due to assumptions and approximation (Urbina, Mahadevan & Paez, 2011).
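The input-reply status codes described above can be sketched as a dispatch helper. The 1XX–5XX class boundaries follow the convention given in the text; the action labels are my own illustrative names, not part of the thesis:

```python
def code_action(status):
    """Map a node's reply code class (1XX-5XX, as in the input-reply model)
    to the parent node's next action."""
    if 100 <= status < 200:
        return "log-info"            # informational notification (optional errors)
    if 200 <= status < 300:
        return "mark-subtask-done"   # child completed its sub-task
    if 300 <= status < 400:
        return "wait-for-children"   # node is itself waiting on its sub-tasks
    if 400 <= status < 500:
        return "retry-on-replica"    # error set in a child node
    if 500 <= status < 600:
        return "reroute-master-job"  # error in the parent node: scout case
    raise ValueError(f"unknown status code: {status}")

print(code_action(200))  # mark-subtask-done
print(code_action(503))  # reroute-master-job
```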
In this study we consider overall uncertainties and whether real time data available through wireless sensor networks (made accessible by scalable cloud storage) can help machines take decisions, with the least impact on other production processes, when such uncertainties arise (Hasani et al., 2012b), (Francalanza et al., 2016), (Hasani et al., 2012a).
2.2 Autonomic Machines

In the words of (Buyya, Calheiros & Li, 2012), autonomic systems are self-regulating, self-healing, self-protecting, and self-improving; in other words, they are self-managing. Many articles on autonomic computing (Parashar & Hariri, 2005), (Yahya, Yahya & Dahanayake, n.d.) have compared the concept with human neural decision making and rationalization. According to (Parashar & Hariri, 2005), the human nervous system is the best example of an autonomic system present in nature: with the help of sensory inputs it is able to monitor and adapt to changes both internal and external to itself. According to (Maggio et al., 2012), an autonomic computing system must possess three compulsory properties: 1) Automatic: systems must be able to carry out their operations without manual intervention. This inherently includes a thin knowledge layer containing know-how about the system (He, Cui, Zhou & Wang, 2016). 2) Adaptive: the ability of a system to take decisions when an uncertain event occurs, including alterations to its state, configuration, functionality, etc. (He et al., 2016), (Yahya et al., n.d.). In an operational context this includes dealing with both temporal and spatial changes and uncertainties occurring on both a long and a short term basis; the system must be able to predict and anticipate demand on its resources to avoid any downtime. 3) Aware: an autonomic system must be aware of both the operational context and its own state relative to its operative environment as a whole. To be adaptive, a machine or system must be aware of its operative environment: it must know itself in terms of what resources are accessible, and why and how it is connected to other operatives (Parashar & Hariri, 2005),
(Maggio et al., 2012). The majority of articles dealing with such self-adaptive, self-managing systems, such as (Maggio et al., 2012), (He et al., 2016), (Yahya et al., n.d.), manifest certain common characteristics. These systems are often 'reflective', meaning that they possess the ability to reason about their state and environment (White, Hanson, Whalley, Chess & Kephart, 2004). Secondly, these systems are able to rationalize and take decisions at, or in a close time-frame to, run time (while the machine or system is running). Based on the work of (White et al., 2004), certain paradoxical features of autonomic computing are: 1) Uncertainty: presented in section 2.1 on the previous page. 2) Environmental non-determinism: the environment in which the system works requires a different response in every run (hence the requirement for decision making at run time). 3) Extra-functional capabilities: features that may not inherently be part of the system; the degree to which a machine combats an anomaly that is not part of its design. This represents the system trade-offs that arise from unexpected outcomes. 4) Incomplete control of the system or its components: this could be due to embedded components, human involvement in sub-processes, or continual action. The authors of (Yahya et al., n.d.) have propounded that autonomic systems follow a feedback process: a feedback control loop is the continual process behind an autonomic component's rationale and logic. These processes are depicted through 4 canonical activities (collect, analyze, decide and act) as shown in .
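One pass of the four canonical activities (collect, analyze, decide, act) can be sketched as a loop; the threshold-based rule and all names here are placeholders for illustration, not the authors' design:

```python
def feedback_step(sensor_read, history, threshold, act):
    """One pass of the collect -> analyze -> decide -> act loop."""
    reading = sensor_read()                 # Collect: probe the executing process
    history.append(reading)                 # ...and store it for trend analysis
    baseline = sum(history) / len(history)  # Analyze: compare current to past states
    deviation = reading - baseline
    if abs(deviation) > threshold:          # Decide: does this symptom need action?
        act(deviation)                      # Act: manipulate the managed component
    return deviation

# Example: a stream of temperatures; act() just records interventions.
interventions = []
readings = iter([20.0, 20.5, 30.0])
history = []
for _ in range(3):
    feedback_step(lambda: next(readings), history, threshold=3.0,
                  act=interventions.append)
# Only the spike to 30.0 deviates enough from the trend to trigger an action.
```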
1) Collect: collection of data from sensors, events, and probes. This involves collecting data from executing processes in real time; the data collected includes contextual information about component states (Schaefer, 2014). The data is filtered for noise (unwanted or redundant sources) and stored for future use; it can be used to generate a trend pattern by comparing current and past states. 2) Analyze: the stored data is analyzed to generate inferences and symptoms. 3) Decide: resolve a future course of action, or how to act when such an event arises in the future. 4) Act: machine components need to be checked to determine which are the managed components and how they can be manipulated (e.g., by new algorithms or parameter tuning). Are the changes to the system pre-configured (the error has already occurred in the past, and was logged and resolved), or are they assembled, composed, or generated at run time (opportunistically)? This step involves acting on executing processes and their context.

2.3 IoT, WSN

According to a Gartner forecast, "6.4 billion connected things will be in use worldwide in 2016, up 30 percent from 2015, and will reach 20.8 billion by 2020. In 2016, 5.5 million new things will get connected every day". These IoT devices (things) contain many low power sensors (Y. Zhang et al., 2012). IoT sensor networks observe and monitor environments that change rapidly over time. This dynamic behavior is caused by internal factors (integral parts of the system design that require a certain degree of elasticity) or by external factors (unforeseen changes or behavior) (Chi, Yan, Zhang, Pang & Da Xu, 2014). IoT networks are typically composed of multiple, autonomous, small, cheap, and low
power sensor nodes. In a cloud manufacturing paradigm these nodes forward sensed data to cloud storage (Soliman, 2014), where it is then usually used for further processing. The sensors may measure pressure, impact, temperature, thermal acoustics, and many other dynamic parameters within an industrial setup. IoT has tremendous potential for creating generic industrial machines that reduce false positives and machine downtime. To achieve this automation task we have to look at 1) event scheduling, 2) reliability of data (data should be free of noise that changes the pattern of learning and prediction, and the data set should be non-redundant), 3) data availability, 4) data security, 5) data aggregation, 6) localization, 7) clustering of nodes (IoT3), and 8) fault detection. In the context of this study: 1) Knowledge acquisition (unbiased), in order to develop automation and enhance the productivity and performance of machines in an industrial setup; the outcome of this step is the training data (Kyriazis & Varvarigou, 2013). 2) Optimization: done by detecting and describing inconsistencies through the training data, finding patterns in these inconsistencies, and generating algorithms for the machines (White et al., 2004). Machine learning is suitable for machine automation through an IoT sensor network because: 1) IoT sensors can monitor a dynamic environment typified by data that changes frequently over time (Mitton et al., 2012). For example, a node's current location may change due to eroding soil, landslides, weather turbulence due to wind, or floods. Sensors are desirable because they can adapt to this environment and operate efficiently. 2) IoT sensors can be used to collect new knowledge from unreachable locations. Using this knowledge, system designers can develop learning algorithms that are able to
calibrate themselves to any newly acquired expertise (Kyriazis & Varvarigou, 2013). 3) IoT sensors can be deployed in complicated machine structures where designers cannot build an accurate model of the system behavior, meaning there is no estimate of how a machine will behave at run time. 4) The majority of articles on this topic (m2miot, cmfd io2 iot3) propose an M2M framework that uses IoT, but very few discuss using the data, mining knowledge, and extracting important correlations from it. These correlations represent learning trends that enable automation with minimum human intervention.

2.3.1 Wireless sensory network

Wireless sensor networks are heterogeneous distributed systems that primarily work on data collected from a physical environment. In its most fundamental form, a sensor node is a small, autonomous, short range transceiver carrying sensors. These sensors are able to cooperate and communicate with each other wirelessly (Gokturk, Gurbuz & Erman, 2016). Sensor nodes sense (or gather) information about a physical object. Although a single sensor node has only a limited range, a wireless sensor network composed of many such nodes can provide detailed measurement encompassing a large region (ref). Wireless sensor networks have many applications, such as in agriculture, nature monitoring (e.g., monitoring active volcanoes, forest fires, etc.), industry (measuring temperature, pressure, etc.), and health monitoring systems (Antonopoulos & Voros, 2016). A WSN is composed of several sensor subsystems. 1) The sensor node is the most important element in a wireless sensor network. The functionality of a sensor node includes sensing data, converting the raw physical signal to digital form, and interpreting the communication protocol to send and receive data packets. To achieve this, a sensor node must be equipped with certain physical resources.
The sensor resources can be further sub-classified into four subsystems (Y. Wang, Chen, Wu & Shu, 2016).
2) Sensor conversion unit: senses a value and converts it into a digital form that can be transmitted over a network (Y. Wang et al., 2016).
3) Sensor processing unit: stores gathered and configured data in local memory; this also includes assembly functions to further process the gathered data, information, or messages (Y. Wang et al., 2016).
4) Sensor interpreter unit: interprets received data frames according to the protocol and assembles the data packets to be transmitted over the wireless network, either to other nodes or to the cluster head (Y. Wang et al., 2016).
5) Power unit: responsible for alerting the WSN monitoring interface with sensor health diagnostics and battery status. This component regularly sends this message to the cluster head, which uses it to calculate the confidence factor (Y. Wang et al., 2016).
Proper implementation of all subsystems affects network performance. In wireless sensor networks, the attributes that most commonly degrade performance are the frame loss rate, bandwidth, QoS (quality of service), and power consumption (Gokturk et al., 2016). Low power consumption of sensor nodes is crucial because sensor nodes are often placed in areas where connecting them to the power grid is impossible, too expensive, or dangerous (Antonopoulos & Voros, 2016; Edwards-Murphy, Magno, Whelan, O'Halloran & Popovici, 2016). They therefore have to be equipped with batteries, which must occasionally be replaced with fresh ones. These replacement periods may range from a few days to years. As mentioned, sensor nodes are often placed in
areas with low accessibility, which underlines the importance of the power consumption attribute when dealing with WSNs. The battery status is also an important factor in calculating the confidence factor (Edwards-Murphy et al., 2016).

2.3.2 Base station
A base station (BS) can be viewed as the entry point to a sensor network and is usually computationally more advanced than a sensor node (Arkin et al., 2014). A base station possesses larger memory (and may be connected to cloud storage) and may be connected to a power grid. The base station is responsible for gathering data from the different computational cluster heads (Cayirpunar, Kadioglu-Urtis & Tavli, 2015). All analysis of the content in a wireless sensor network is done by base stations. A BS also handles individual node configuration and routing (Gao, Zhang, Qi, Li & Tong, 2016). The base station is usually composed of a wireless transceiver, a monitoring interface, and possibly a computing terminal station. A minimalistic design of a wireless sensor network includes a monitoring module, running on the base station terminal, that writes received data to a database (Devaraju, Suhas, Mohana & Patil, 2015).

2.3.3 Routing in IoT sensor networks
In designing a sensor network for a cloud-based manufacturing infrastructure, it is important to consider the power and memory constraints of sensor nodes, changes in topology, communication failures, localization of data, decentralization, node clustering, and data aggregation (Wu et al., 2015a; Chi et al., 2014). To design a routing protocol for IoT sensor networks, design challenges such as scalability (provided by the cloud), data coverage, energy consumption (of the sensor nodes), and fault tolerance must be considered (cloud mfgd 2). Sensor nodes are inherently low-power devices with limited processing capabilities, low bandwidth, and little memory. There are two conventional ways of representing the routing problem of sensor nodes in a cloud network.
1 The cloud network is represented as a graph G = (V, E), where V represents the nodes and E the bi-directional communication links. The optimum routing is the minimal-cost traversal that starts at the source and visits all destination vertices along the available graph edges.
2 A spanning tree T = (V, E) whose cost includes travelling to all of its child nodes. This solution is complex even when the entire topology is already known.
Machine learning makes it possible for a system (node) to learn from its experiences, select the most effective (lowest-cost) route, and adapt to a dynamic environment. Learning to route dynamically is cheap, saves energy, and prolongs the life of cloud-based IoT sensor networks, since the network can adapt to any additions or changes. Guestrin, Bodik, Thibaux, Paskin and Madden (2004) describe a distributed regression framework for modelling the routing problem: instead of transmitting raw data to one another or outside the sensor network, the nodes transmit only a subset of their parameters, significantly reducing the communication required. Barbancho, León, Molina and Barbancho (2007) present optimal routing using self-organizing maps (SOM). A SOM is a neural network that uses unsupervised learning to produce a discretized representation of the training data, called a map. They use Dijkstra's algorithm to form the shortest path from the network base station to every node in the network. A WSN is composed of units with limited capabilities, such as low energy, bandwidth, and storage capacities, and an efficient routing algorithm must reflect these shortcomings. Some commonly used routing protocols are as follows (Y. Wang et al., 2016):
1) Flooding: a message to an uncertain location is broadcast to all nodes; each receiver rebroadcasts the message to all of its neighbours, and the process repeats until all nodes have received the message. This type of routing has high latency but suits systems with low power requirements.
2) Data-centric routing: the sender of the sensed data broadcasts the metadata of a message (mostly the node id and the timestamp) as an advertisement; all nodes that do not have the data respond with a request, and the sender then responds only to those nodes. Another flavour of this protocol is directed diffusion routing, where nodes interested in a topic set their gradient to zero; data is sent only to these nodes, and the gradient increases on every resend.
3) Table-driven routing: the delivery routes for a message are stored in a table, which the sensor nodes fill before the routes are actually used. This approach is very useful where there is a low-latency requirement.
4) On-demand routing: routing paths for a message are decided only when needed. This approach is the opposite of table-driven routing, favouring low energy consumption over low latency, and is therefore popular in many wireless sensor network implementations.
5) Hierarchical routing: similar sensor nodes are grouped into a cluster, each with a cluster head that has greater computational ability and energy resources. A sensor node only sends data to the cluster head, saving the bandwidth of its peers, and only the cluster head handles routing for the entire cluster. This approach efficiently handles latency, bandwidth, and power constraints.
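The Dijkstra-based shortest-path routing used by Barbancho et al. (2007), from the base station to every node, can be sketched over the graph formulation G = (V, E) given above. The topology, node names, and link costs below are hypothetical, purely for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Least-cost routes from the base station (source) to every node.

    graph: dict mapping node -> list of (neighbour, link_cost) pairs,
    i.e. the adjacency representation of G = (V, E).
    Returns (cost table, previous-hop table); routes are read back via prev.
    """
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry, a cheaper path was already found
        for v, cost in graph[u]:
            alt = d + cost
            if alt < dist[v]:
                dist[v], prev[v] = alt, u
                heapq.heappush(heap, (alt, v))
    return dist, prev

# Toy topology: base station 'BS', two cluster heads, one leaf node.
topology = {
    "BS":  [("CH1", 1.0), ("CH2", 4.0)],
    "CH1": [("BS", 1.0), ("CH2", 1.5), ("N1", 2.0)],
    "CH2": [("BS", 4.0), ("CH1", 1.5)],
    "N1":  [("CH1", 2.0)],
}
dist, prev = dijkstra(topology, "BS")
```

Note that the direct BS-to-CH2 link (cost 4.0) is bypassed in favour of the cheaper two-hop route through CH1, which is exactly the behaviour a learned or precomputed routing table exploits to save energy.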
2.3.4 Communication with the base station
There are two models through which communication between the cluster heads and the base station takes place (Cayirpunar et al., 2015):
1) Push model: the sensor nodes transmit data only when they are ready; the base station passively listens on the sensor network.
2) Pull model: the base station queries for data at regular intervals, and the sensor nodes respond to the query.

2.4 Data Fusion
Multi-sensor data fusion is the combination of data from several sources to form a unified model. Multi-sensor fusion is used to give computing systems human-like perception, although exact emulation of a human brain is still not possible. Human beings perceive object attributes like depth, texture, colour, and temperature simultaneously as a single model; without physical contact, humans can estimate the correct dimensions of an object. Multi-sensor fusion is currently most popular in image and spectral fusion for perceptive imaging guidance, widely used in robotic and navigation systems (Tong et al., 2016). The Joint Directors of Laboratories (JDL) defines data fusion as a "multilevel, multifaceted process handling the automatic detection, association, correlation, estimation and combination of data and information from several sources". The data can be extracted from a single source or multiple sources (De Vin, Holm & Ng, 2010).

2.4.1 Advantages of data fusion
1) Robustness and reliability: fused data from multiple sources tends to be redundant, which lets the system keep working in case of partial failure; one sensor can add a dimension to the perception of another.
2) Increased confidence factor: multiple sensors extend the spatial and temporal dimensions of the sensor data, which increases the confidence factor and allows data measured by one sensor to be confirmed by the measurements of others (Tong et al., 2016).
3) Reduced uncertainty and ambiguity.
4) Reduced interference: by increasing the dimensionality of the measurements, the resultant system becomes less vulnerable to external interference.
5) Improved perception: the resultant value of a fused property has better resolution than a measurement taken by a single sensor.

2.4.2 Motivation for sensor fusion
A single-sensor environment suffers from certain shortcomings:
1) Sensor failure: the breakdown of a single sensor causes the complete loss of perception of a physical object (Mönks et al., 2016).
2) Spatial coverage: one sensor, or even a cluster of sensor nodes, covers only a limited area. For example, the temperature read from one sensor represents a single point and may not be very accurate; it fails to render the average temperature of a boiler.
3) Temporal coverage: some sensors limit the maximum frequency of measurement, as each measurement needs a particular time frame to finish.
4) Imprecision: data measured by a single sensor over a period of time may not be precise; a group of sensors measuring the same quantity gives a more accurate reading (figure 2.2 on page 54).
5) Uncertainty: uncertainty arises when a sensor cannot measure all relevant attributes within its perception range, leading to missing and inconsistent values (figure 2.2 on page 54).
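The imprecision point above can be illustrated numerically: averaging simultaneous readings from a group of independent sensors shrinks the spread of the estimate relative to a single sensor (roughly by 1/sqrt(n) for independent noise). This is a minimal sketch with invented temperature and noise figures, not data from this study:

```python
import random
import statistics

random.seed(42)
TRUE_TEMP = 80.0   # hypothetical boiler temperature (degrees C)
NOISE_SD = 2.0     # hypothetical per-sensor noise standard deviation

def read_sensor():
    # One noisy reading from a single sensor.
    return random.gauss(TRUE_TEMP, NOISE_SD)

def fused_reading(n_sensors=16):
    # Naive fusion: average simultaneous readings from n independent sensors.
    return statistics.fmean(read_sensor() for _ in range(n_sensors))

# Compare the spread of single-sensor readings vs fused readings.
single = [read_sensor() for _ in range(1000)]
fused = [fused_reading() for _ in range(1000)]
spread_single = statistics.stdev(single)   # about 2.0
spread_fused = statistics.stdev(fused)     # about 2.0 / sqrt(16) = 0.5
```

With 16 sensors the fused estimate's spread drops to roughly a quarter of a single sensor's, which is the quantitative content of the "group of sensors gives a more accurate reading" claim.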
2.4.3 Issues with data fusion
1) Imperfect, inconsistent, and spurious data.
2) Correlation of data: bias introduced by external, uncontrollable factors.
3) Data dimensionality: compression loss of lower-level sensor data after fusion takes place.

2.5 Sensor fusion methodologies
2.5.1 PNN classifier
The probabilistic neural network (PNN) is a classifier based on Bayesian classification theory and the estimation of probability density functions. A PNN is predominantly a classifier that maps multiple inputs to a number of classifications and can be used to process multiple inputs to generate decision ranges. A probabilistic neural network contains a probability density function estimator for each of its classes. The classifier takes a decision after calculating the probability density function of each class from the training examples. The multi-class decision rule is mathematically expressed as

pk fk > pj fj (2.1)

Here pk is the a priori probability of occurrence of class k, and fk is the probability density function of class k. This is called a neural network because it maps onto a two-layer network.
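A minimal sketch of the decision rule in equation 2.1: each class density fk is estimated from the training examples with Gaussian kernels (the Parzen-style estimate the PNN uses), and the class maximising pk fk wins. The class names, feature vectors, and smoothing value σ below are illustrative assumptions, not taken from this study:

```python
import math

def pnn_classify(x, training, priors, sigma=0.5):
    """Assign x to the class k maximising p_k * f_k (rule: p_k f_k > p_j f_j).

    training: dict mapping class label -> list of training feature vectors
    priors:   dict mapping class label -> prior probability p_k
    f_k is estimated as the average Gaussian kernel over class-k examples.
    """
    n = len(x)
    norm = (2 * math.pi) ** (n / 2) * sigma ** n
    best_class, best_score = None, -1.0
    for k, examples in training.items():
        f_k = sum(
            math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
            for xi in examples
        ) / (norm * len(examples))
        score = priors[k] * f_k
        if score > best_score:
            best_class, best_score = k, score
    return best_class

# Hypothetical two-feature training data for two plastic classes.
train = {"ABS": [(1.0, 1.0), (1.2, 0.9)], "PC": [(3.0, 3.1), (2.9, 3.0)]}
priors = {"ABS": 0.5, "PC": 0.5}
```

A query near the ABS cluster, e.g. `pnn_classify((1.1, 1.0), train, priors)`, is assigned to "ABS"; one near the PC cluster is assigned to "PC".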
Architecture of a PNN
A PNN consists of four layers: 1) input layer, 2) pattern layer, 3) summation layer, and 4) decision layer, as shown in figure 2.3 on page 54. The input layer receives the multiple inputs to the classifier. The number of nodes in the pattern layer corresponds to the training instances. The input layer is fully connected to the pattern layer; it performs no computation and only distributes the input to the pattern-layer neurons. The pattern layer is partially connected to the summation layer, which adds the outputs of the pattern neurons corresponding to the selected training instances.

How a PNN works
The input is propagated to the pattern layer, and once the patterns similar to the input are selected, the output nodes are computed. Each pattern neuron applies a Gaussian kernel to its training instance:

W(x, xij) = (1 / ((2π)^(n/2) σ^n)) exp[ −(x − xij)^T (x − xij) / (2σ^2) ]   (2.2)

In equation 2.2, n is the number of input features, σ is the smoothing coefficient, x is the input instance, and xij is the j-th training instance belonging to class i. The PNN summation layer computes the probability of pattern x belonging to class i by averaging the outputs of the neurons of that class (Mahendran & Dhanasekaran, 2015):

pi(x) = (1 / ((2π)^(n/2) σ^n)) (1/Ni) Σ_{j=1}^{Ni} exp[ −(x − xij)^T (x − xij) / (2σ^2) ]   (2.3)

In equation 2.3, Ni denotes the total number of samples in class i. The training data will also contain the probabilities of the losses associated with making an incorrect decision; this prior information is used by the decision unit to calculate the output from the summation neurons (Ben Ali, Saidi, Mouelhi, Chebel-Morello & Fnaiech, 2015).

2.5.2 Kalman filter
The Kalman filter is an algorithm that uses a temporal series of noisy measurements. It produces estimates of unknown quantities over a period of time that tend to be more precise than those based on a single measurement, and it estimates the joint probability distribution of the variables over that timeframe. The Kalman filter feeds data from multiple sensors into a matrix of internal states consisting of the parameters of interest, relating system states linearly to the corresponding inputs (Cheng, 2016).

x_{k+1} = A·x_k + B·u_k + w   (2.4)

Equation 2.4 shows the Kalman filter state model: vector x_k is the state of the system at time k, vector u_k is the input to the system at time k, matrix B defines the relation between the input vector u_k and the next system state x_{k+1}, and w is a random variable representing noise (Cheng, 2016).

2.6 Clusters and data aggregation
Cloud-based topologies such as those shown in (Wu et al., 2015a), (Y. Zhang et al., 2012), and (Mitton et al., 2012) have a cluster architecture. Instead of the nodes communicating with each other directly, each node in a cloud-based network topology communicates directly with its cluster head; a sensor cloud can have more than one cluster head. The cluster head then assembles and aggregates the data, which is known as data aggregation. Clustering and data aggregation are very useful for machine learning because:
1 Due to data aggregation, a pool of data from all the sensors is available for interpolation and for extracting trend patterns.
2 Machine learning algorithms can be employed to select the most optimal cluster head (the one with the lowest interaction cost for all nodes). Selecting an efficient cluster head will in turn prolong the life of the network (Schaefer, 2014; Yang et al., 2014; Soliman, 2014). Some methodologies for clustering include: 1) using decision trees for selecting cluster heads (Craven & Shavlik, 2014); 2) neural networks for large-scale clustering (Rojas, 2013); 3) using a self-organizing map for data aggregation (Barbancho et al., 2007).

2.6.1 Event detection and processing
Event detection is the cornerstone of every sensor network. In the context of cloud-based manufacturing there has to be trustworthy event scheduling and detection: machines in the cloud-based architecture must be able to detect and log events with minimum human intervention (Khan, Khan, Zaheer & Khan, n.d.; Wu et al., 2015a). Event processing can be done through: 1) Bayesian algorithms, including decentralization of a sensor network zone (Liu, Yang, Ma & Cheng, 2015); 2) neural networks (Yu, Wang & Meng, 2005) and K-nearest neighbour (Jayaraman, Zaslavsky & Delsing, 2010).

2.6.2 Localization of sensor objects
Localization involves determining the geographical location of nodes and their components. To make the cloud-based topology self-aware, each node needs to know (or be aware of) the other components in its locality. For most large-scale deployments this is done through the global positioning system, but for manufacturing firms, which operate indoors, this may not be a viable choice. Relative location is sufficient for certain purposes (Ahmed, Shi & Shang, 2005); absolute localization involves the use of anchors to localize a node at its absolute location (Ahmed et al., 2005). In this context the following terms are used:
1 Unknown node: a node whose location cannot be determined.
2 Beacon node: a node that can recognize its placement either manually or through positioning hardware. This node serves as a beacon for localizing unknown nodes.
3 Received signal strength indicator (RSSI): a representation of distance used to measure the transmission performance of nodes.

2.7 Semantic Sensor Network Ontology
The semantic layer is a thin information-processing layer that builds on W3C consortium standards, which represent common protocols for information exchange between web entities. According to (Hao & Helo, n.d.), (Wu et al., 2015b), (Yang et al., 2014), and (X. V. Wang, Wang, Mohammed & Givehchi, 2016), a cloud manufacturing network consists of several loosely coupled entities that are autonomous and self-aware. In addition, machines in a cloud manufacturing framework must be able to understand each other, that is, talk with each other and interpret contextual data relating to their functionality (Brown, McMahon & Sims Williams, 1995; Young, Gunendran, Cutting-Decelle & Gruninger, 2007). The semantic sensor network ontology is a set of tools for knowledge acquisition, structural representation of logic, and reasoning, tailored for sensor-based frameworks (Lemaignan, Siadat, Dantan & Semenenko, 2006). Sensor network ontology is based on OWL, which uses an XML-like structure (elements and attributes) with standardized meanings to define nodes, their structure, their relationships, and their functionality. OWL is an extension of the RDF (Resource Description Framework) schema set; RDF is a data model for describing things and their relationships (Papazoglou, 2012). A basic structure describing a simple relationship between a printer (machine) and one of its sub-parts is shown in figure 2.4 on page 55. The figure simply states that the print head named 'CR322A' is a sub-part of the printer. Figure 2.5 on page 56 depicts a complete representation of a semantic sensor network, as adapted from (Pu, Wang, Du, Zhang & Chen, 2016), showing a diagrammatic representation of the ontology repository. It illustrates a sensor that measures temperature and further provides information about its measurement range (attributes consisting of the minimum and maximum temperature and the unit of measurement, Celsius), measurement capability (with attributes for accuracy range, min and max values, and interval), and system information (giving the platform representation and the address of the physical instance). The SOAP protocol, XML, and REST web services are cornerstones of cloud-based manufacturing, as discussed by (Young et al., 2007) and (Lemaignan et al., 2006); a thin ontology layer that sits on top of such web services should be relatively easy to define (Young et al., 2007). Based on the literature described in this section, this study will attempt to answer the following research questions.
RQ1 Given the advantages of cloud manufacturing shown in table 2.1 on page 33, can machines achieve (at least partially) human-like perception, in the context of judging the depth, texture, colour, and similar attributes of an object?
RQ2 Can these machines take autonomous decisions based on the achieved sensory perception?
Figure 2.1: Feedback control process in autonomic system
Figure 2.2: Aspects of data fusion
Figure 2.3: PNN Classifier
Figure 2.4: Semantic sensor network ontology representation
Figure 2.5: Semantic sensor network ontology representation
Chapter 3
Method

3.1 Introduction
An integral part of designing a diverse system architecture is the use of a functional model that clearly captures the guidelines of operation, the system architecture, and the processing and control functions within the data fusion process. Design science research methodology has been found to successfully capture the salient features and the diverse automatic and manual approaches employed across different computing projects. Design science is a solution-oriented approach that suits agile projects requiring observational studies and having a patterned evolution approach. To fully attain the fused state, a certain amount of reverse engineering is required: if the fusion is unsuccessful, the data can relapse into its original state. The data fusion process is tightly coupled with resource management, an important requirement that a data fusion system must fulfil. Design science methodology follows the engineering cycle, a series of recurring steps that leads to a better functioning system (Keith, Demirkan & Goul, 2013). The recursive steps in design science are:
1 Problem investigation.
2 Treatment design.
3 Treatment implementation.
4 Implementation evaluation.

3.2 Problem investigation
Figure 3.1: Data fusion architecture
As seen in figure 3.1, data fusion is fundamentally used to provide a specific assessment of the observed environment; this information can be used to automate the system, or by the end user to plan and control resources.

3.2.1 Problem investigation
A generalized approach to problem investigation for a data fusion project is shown in figure 3.2 on the next page. This approach suits problem areas like sensor fusion, where a complete
set of system goals is difficult to understand at the beginning of the design process (Keith et al., 2013). In such a poorly defined and understood environment, the design science engineering approach incorporates better solutions through the data or models available in the system (Keith et al., 2013).
Figure 3.2: Problem investigation approach
Layered design: this approach applies the concept of 'divide and conquer' to the problem area, especially when the kinds of errors the system is likely to encounter in the future are uncertain. The concept states how the architecture should be subdivided and how control should be integrated across the various working nodes. This is shown in figure 3.3 on the following page. This approach permits a cost-effective way to achieve the required level of performance and is the common method for designing and developing multi-sensor fusion; it will be used in this study. The higher-level processes are handled by more computationally advanced nodes; users of this approach can therefore add any number of lower-level nodes when required.
Figure 3.3: Layered Approach architecture
Paradigm-based design: multi-sensor data fusion itself follows a bottom-up approach, while the control structure follows a top-down approach. This is an important principle that will be used to design this system: sensor data from the lower nodes is passed on to higher-level nodes for fusing, which means the data flows from the bottom towards the top controlling nodes. Similarly, lower-level sensors are not able to compute on the data, so these functions are performed by higher-level nodes.
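The layered, bottom-up data flow described above can be sketched as three small classes: leaf sensors only report, cluster heads aggregate, and the base station fuses the aggregates. The class names and the simple mean aggregation are illustrative assumptions, not the system's actual implementation:

```python
class SensorNode:
    """Lowest layer: produces raw readings only, performs no computation on them."""
    def __init__(self, reading):
        self.reading = reading

    def report(self):
        return self.reading

class ClusterHead:
    """Middle layer: aggregates the readings flowing up from its cluster."""
    def __init__(self, nodes):
        self.nodes = nodes

    def aggregate(self):
        readings = [n.report() for n in self.nodes]
        return sum(readings) / len(readings)

class BaseStation:
    """Top layer: fuses the aggregate from every cluster head.
    Control is top-down; data flows bottom-up through the layers."""
    def __init__(self, heads):
        self.heads = heads

    def fuse(self):
        return {i: h.aggregate() for i, h in enumerate(self.heads)}

# Two clusters: adding more SensorNode leaves touches no higher layer.
network = BaseStation([
    ClusterHead([SensorNode(1.0), SensorNode(3.0)]),
    ClusterHead([SensorNode(10.0)]),
])
fused = network.fuse()
```

The point of the layered split is visible in the usage: extra lower-level nodes are added inside a `ClusterHead` without any change to the `BaseStation`.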
Figure 3.4: Layered Approach architecture

3.2.2 Treatment design
Figure 3.5 on the next page shows the design process for a sensor fusion system. The design process includes:
1 Data fusion algorithm design.
2 Development and validation of the proposed algorithm.
The data fusion algorithm defines the processing of the fusion activity. This includes:
1 Data association.
2 Data alignment.
3 Estimation of state (classification).
Data association concerns how the sensor data is represented before and after fusion; this concept is also mentioned in section 1.5 on page 28. Data association includes:
1 Hypothesis generation: identify the sensors used and the data generated to be fused (Aher & Lobo, 2013).
Figure 3.5: Multisensor system design process, adapted from (Mönks et al., 2016)
2 Hypothesis evaluation: calculate the metrics for fusion and decide how the data will be fused (Aher & Lobo, 2013).
3 Estimation of state (classification): the final step of the fusion process, in which the fusion and classification of data is completed.

3.2.3 Treatment implementation and evaluation
In this study the implementation of the proposed design is evaluated through a simulation of the entire process. The dataset includes 100 sensor samples and 312 samples of the fused matrix, which are used to show the simulation of fusion on a three-dimensional spectrum (Yann-Al Le borgne, 2014; Illinois, 2014).

Proposed system design
A block diagram shows the fusion system design. Taking into consideration the layered approach mentioned in 3.2.1 on page 59, figure 3.6 on the next page shows the proposed system network: three wireless sensor networks, each containing sensor clusters. Each cluster has a cluster head node, which is responsible for training the data received from the sensors within its range. Sensors can interact only with their cluster head. Each cluster head is equipped with a switch for communicating with the base station and the other cluster heads. The base station is responsible for the final classification and storing of data.
  • 68. Chapter 3. Method 68 CH CO COCO CO CH CH CH Polymer FBG Sensor Proximity Sensor CO NIR Sensor Network Edge Switch Core switch RouterBase Station Wireless Access Point CLOUD BASED MANUFACTURING PFBG SENSOR 1 PFBG SENSOR 2 PFBG SENSOR 3 NIR SENSORS CH = Cluster Head Proximity Sensors Optical fiber network UTP Cable Line
Chapter 4
Analysis

4.1 Introduction
To avoid the tedious process of disassembly during recycling, thermoplastic components are chopped into smaller shards, which are sorted manually and, where lucrative, sorted by hand with the aid of a conveyor. The operators select shards manually based on appearance, texture, colour, and similar attributes. This process has to be extremely precise, as even a small margin of error can cause the price of the resultant plastic to drop sharply. In the manual process an operator often selects only one type of plastic. The automated classifier therefore has to take into account both texture identification and cadence. The conveyor belt of the recycling plant carries different blends of plastic resins. For a classifier system to achieve full accuracy and be profitable, it needs to distinguish the physical attributes as well as learn the underlying pattern of the data. Visually, many thermoplastics such as ABS, polycarbonate, and polyamide are similar, so data from visual sources alone is not sufficient for proper classification of thermoplastics. The NIR spectra show the absorption of light: regardless of the actual colour of a plastic resin, samples of the same type always have the same peak patterns (Emberton, Chittka, Cavallaro & Menghua, 2016a). This property makes it possible to distinguish different plastic types. Unfortunately, plastics with a blackish gradient have stronger absorption peaks than others (Emberton, Chittka, Cavallaro & Menghua, 2016b), which makes information generated from the spectra alone insufficient to classify and categorize a plastic shard. Therefore, data from different devices is required to successfully classify a sample in a recycling plant, given the huge variance in the samples (Elsayed, Rischbeck & Schmidhalter, 2015).

4.2 Supervised learning classifier
The classifier uses the feature vector provided by the multiple sensors in the recycling plant (Miguez, Georgiopoulos & Kaylani, 2010). These sensors provide more in-depth attribute categorization for technically classifying thermoplastic resin. For the classifier to achieve its purpose, the feature values for thermoplastics in the same category should be similar, and greater variance is expected for plastics of different categories. This means that the classifier complexity increases with the length of the fused feature vector. In our algorithm, sensory pretreatment is conducted for two purposes. Firstly, it serves as a learning aid for the algorithm: once a shard is properly classified its properties are stored in the fused feature matrix, so future classification of shards with the same properties is instantaneous. Secondly, the smoothing (or training) vector acts as a means of correctly removing noisy data. The principal component analysis (PCA) method is used both for data smoothing (training the data) and for spectral recognition (classifying the data in a PCA hyperspectral system) (Casa, Castaldi, Pascucci, Palombo & Pignatti, 2013). The fused feature matrix is then identified from the sensory system.
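The PCA pretreatment described above can be sketched as follows: the spectra are centred and projected onto the leading eigenvectors of their covariance matrix, reducing dimensionality while keeping the directions of greatest variance. The synthetic 10-band data below is purely illustrative; the study's actual datasets are not reproduced here:

```python
import numpy as np

def pca_pretreat(spectra, n_components=2):
    """Reduce high-dimensional sensor spectra to their leading principal
    components, the role PCA plays here for smoothing and dimensionality
    reduction.

    spectra: (n_samples, n_bands) array of raw spectral readings.
    Returns the centred data projected onto the top n_components directions.
    """
    X = np.asarray(spectra, dtype=float)
    X = X - X.mean(axis=0)                  # centre each spectral band
    cov = np.cov(X, rowvar=False)           # (n_bands, n_bands) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]
    return X @ top

rng = np.random.default_rng(0)
# Hypothetical data: 50 shards x 10 spectral bands, variance mostly in band 0.
raw = rng.normal(size=(50, 10)) * np.array([5.0] + [0.2] * 9)
reduced = pca_pretreat(raw, n_components=2)
```

The first retained component carries the most variance by construction, which is what makes the reduced representation useful for revealing differences between spectra.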
A supervised learning classifier based on probabilistic neural network is used to classify the plastics into their appropriate
  • 71. Chapter 4. Analysis 71 categories ( here the decision fusion occurs) (Nie et al., 2016). A example of working of a PCA hyperspectral system is shown in 4.1 on the following page Figure 4.1: Data in PCA after feature fusion In our case the sensory data pre-treatment is done mainly for two reasons. Firstly, from data in the visible spectra it is interesting to utilize prior knowledge in order to decide which are the discriminant features. Secondly, the information provided by the hyperspectral system requires an adequate pretreatment for reducing the data dimensionality, that it is huge in this kind of sensor, and for revealing differences between spectra. The Principal Component Analysis method (PCA) has been used as a hyper spectral component. Figure 4.2 shows the overall functioning of the classifier shows for this pre-treatment. 4.3 Feature Fusion As seen in the algorithm shown in 4.3 on page 72, the feature fusion algorithm takes three inputs namely the temperature, tensile yields, and metallic content in a thermoplastic shard. This information is captured by the individual sensors. The three inputs are passed on to the FLC or the fuzzy logic controller. The first data fusion occurs between the pre trained data and the new data captured by the sensor of the same type. This help in eliminating unwanted noise from the fusion process this is also mentioned in
  • 72. Chapter 4. Analysis 72 Figure 4.2: Classifier structure equation 1.1 on page 21 the difference between d’ (the trained data) and d (new data) is represented here. Consequently, fusion of data from different sources takes place and the fused feature matrix as shown in 1.5 on page 28 The content in the fused matrix is then compared to its previous entries and if there is no change in the data the current run is disregarded from the matrix entirely (Jamalabadi, Nasrollahi, Alizadeh, Nadjar Araabi & Nili Ahamadabadi, 2016). If the fusion is successful, the fused data is accepted by the base station and is stored on a cloud database in real-time. This data is instantaneously available to the hyperspectral scanners. These scanners with the help of dimension data and in some cases (image representation) combine feature matrix to the spectral image (decision fusion) and use this data to successfully classify a plastic component (Scherer, 2012). The process flow of feature fusion is shown through a business process modeling notation (BPMN) diagram in 4.4 on page 73. Here there are two processes corresponding to two different sensors are shown. One sensor collects temperature data and the second sensor extracts tensile yield. The system checks sensor battery levels for calculating the confidence factor every 10 minutes prior to sensor fusion. The data extracted is then
passed to the base station, where the fusion process actually takes place. The base station evaluates the confidence factor and fuses the data with its training factors, as described in Figure 4.3 on page 72. The fuzzy logic controller located in the base station fuses the data from the different sources to generate and maintain the feature fusion matrix (Iounousse et al., 2015).

4.4 Findings

Findings are categorized according to fusion type.

4.4.1 Feature fusion

Feature-level fusion is accomplished by fusing two feature sets obtained from multiple sensor sources, as shown in Figure 1.5 on page 28. To explain the simulation process, assume that A and B represent two different sensor feature vectors. Let A = [a1, a2, a3, ..., an] and B = [b1, b2, b3, ..., bn] represent data extracted from multiple sensor sources. The fusion of these two sets gives a new resultant feature vector that is the product of the first two. The working of the Kalman FLC (fuzzy logic controller) is shown in Figure 4.5 on page 74. There are two states, as represented in equation 1.1: the state represents the variable d, and the covariance represents the state d'. On encountering a new entry of data from the multiple sensors, the values of the new vector are first predicted and measured based on the training values. The new state is then validated against a specific range of values; this is the 'validate measurement' step in Figure 4.5 on page 74. If the new values are not within this range, the vector is reverted to its original state. Otherwise, the validated values are merged with the covariance vector (d'). The new probabilities for future measurement prediction and validation are calculated and stored. After this step, feature fusion is complete and the two vectors d and d' are true representations of their
real-time values.

As a proof of concept, the fusion algorithm was run in the MATLAB simulator on 100 samples. The data was extracted from (Illinois, 2014) and (Yann-Al Le Borgne, 2014) into a MATLAB data file. The results of the simulation are as follows. In Figure 4.6 on page 75, smoothing of the surface temperature and tensile strength takes place in accordance with the first step of Figure 4.3 on page 72. Figure 4.7 on page 76 shows the difference between states d and d' given in equation 1.1 on page 21 and equation 2.2 on page 48. Finally, fusion of the data from the different sensors takes place, as shown in Figure 4.8 on page 76, where both smoothing and the actual fusion of the multi-sensor data occur. Figure 4.9 on page 77 shows the difference between the training and actual data after multi-sensor data fusion; here the data in the MATLAB model data file is plotted on a graph. Data from individual sensors is first fused with its normalized form, a step known as data smoothing. The assumption is that all values are in the proper range configuration: 90 percent of the values are assumed to be in their normalized state, that is, 90 percent of the data is used for training.

4.4.2 Decision fusion

Decision fusion is simulated using the sum of absolute differences (SAD). SAD is fundamentally a measure of the similarity between two image blocks; here the centre of gravity, colour and texture are used to classify objects into classes. SAD works by taking the difference between each pixel of the original block and the corresponding pixel of the processed block. In Figure 4.1 on page 67, this pixel processing and fusion is done by components NPC1 and NPC2; the only difference is that we use Principal Component Analysis (PCA) for classifying an image into a class, not for fusing image data.
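As an illustration of the SAD comparison described above, the following is a minimal Python sketch (the thesis simulation itself was run in MATLAB; the block values, the PET/HDPE reference labels and the function names here are invented for the example):

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized image blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def classify_block(block, references):
    """Assign the class whose reference block has the lowest SAD."""
    return min(references, key=lambda label: sad(block, references[label]))

# Hypothetical normalized (training) blocks for two plastic classes.
refs = {
    "PET":  [[10, 10], [10, 10]],
    "HDPE": [[200, 200], [200, 200]],
}
# A scanned block close to the PET reference is labelled PET.
label = classify_block([[12, 9], [11, 10]], refs)
```

A lower SAD means the scanned block is more similar to that class's reference, which is why the classifier takes the minimum.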
Next, the texture and contour of the pixels are generated, as shown in Figure 4.11 on page 79. To simulate the hyperspectral system used for decision fusion, a triangulated image is plotted on a graph, as shown in Figure 4.10 on page 78. The pixel information of our image and of the target image is then generated for comparison, as seen in Figure 4.12 on page 80. Finally, the image is classified into an appropriate class (see Figure 4.14 on page 81). For our thermoplastic classification, the original image represents the normalized image (training data); this could be a single image captured by the CCD camera sensor, an array of images, or even a fused image. The thermoplastic shards captured by the CCD camera sensor can be scanned in a similar manner against the normalized set of images to identify their class. Figure 4.13 shows a three-dimensional representation of the scanned object.

4.4.3 Final PNN Classification

Once the feature fusion matrix, the final feature vector, and the class of the plastic are decided, this information is fed into the PNN classifier to merge both sources of information (neither the feature vector nor the hyperspectral data alone will suffice, since density and colour information by themselves are not enough to classify the final category).

Input layer: the feature-fused matrix (temperature, tensile properties) and the hyperspectral class data (all information from the CCD camera sensor and the hyperspectrum, such as colour, texture, density, and depth (COG)).

Output layer: the final identified category of the plastic.

Summation layer: this layer uses the fuzzy logic controller to fuse the inputs recursively until classification succeeds and the training data is normalized.
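The layer structure above can be sketched as follows. This Python sketch uses the textbook Gaussian kernel in the summation layer rather than the thesis's recursive FLC fusion, and the toy training vectors, class labels and the sigma value are assumptions made for illustration:

```python
import math

def pnn_classify(x, training, sigma=0.5):
    """Classify feature vector x with a probabilistic neural network.

    training: {class_label: [feature_vector, ...]} -- the pattern layer.
    The summation layer averages a Gaussian kernel over each class's
    patterns; the output layer picks the class with the highest score.
    """
    scores = {}
    for label, patterns in training.items():
        total = 0.0
        for p in patterns:  # pattern layer activations
            dist2 = sum((xi - pi) ** 2 for xi, pi in zip(x, p))
            total += math.exp(-dist2 / (2 * sigma ** 2))
        scores[label] = total / len(patterns)  # summation layer
    return max(scores, key=scores.get)         # output layer (argmax)

# Toy fused features: (normalized temperature, tensile yield, spectral class id)
training = {
    "PET":  [[0.9, 0.2, 0.1], [0.8, 0.3, 0.1]],
    "HDPE": [[0.2, 0.8, 0.9], [0.3, 0.7, 0.8]],
}
category = pnn_classify([0.85, 0.25, 0.12], training)
```

The input vector combines the feature-fused matrix values with the hyperspectral class information, matching the input layer described above.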
Figure 4.3: Data Fusion Algorithm

Figure 4.4: Data Fusion Algorithm

Figure 4.6: Smoothing of tensile and temperature data

Figure 4.7: Difference in state d and d' of multi-sensor data

Figure 4.8: Fusion of tensile and temperature data

Figure 4.9: Difference between normalized and actual values of merged data

Figure 4.10: Plotting of original image

Figure 4.11: Texture and contour generation

Figure 4.12: Pixel-wise comparison of source and target

Figure 4.13: Center of gravity and density of the target

Figure 4.14: Pixel-wise comparison of source and target
Chapter 5

Discussion

From the norm-of-covariance curve in Figure 4.9 on page 77, we can clearly see that the norms of the fused features are considerably lower than those of the normalized versions of both sensors. This shows that the fused data is relatively free of noise, which can also be seen in Figures 5.1 on the following page, 5.2 on page 84 and 5.3 on page 85. The example gives better predictive accuracy (less uncertainty), which shows that the algorithm has worked relatively well.

A study by Declercq, Nagaoka, Yamazaki and Saijo (2015) found that PCA-based spectra are highly sensitive to the light reflectivity and gradients of the scanned objects. According to the authors, the hyperspectral system performs poorly for objects with a blackish-grey gradient because less light is reflected back, causing poor visibility of the scanned object. This has clear repercussions for the objects scanned by the spectral system.

A major limitation of this study is that it was conducted in a simulated environment with a small data set. A plastic recycling plant is huge, with a vast amount of data generated daily; only running the algorithm on actual data will show the true accuracy of the methodology.
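The noise reduction discussed above is a direct consequence of the Kalman merge step of Section 4.4.1: every accepted measurement shrinks the state covariance. A minimal scalar sketch in Python of that predict-validate-merge loop (the thesis simulation used MATLAB; the gate limits, variances and temperature values below are illustrative):

```python
def fuse_measurement(d, d_var, z, meas_var, valid_range):
    """One predict-validate-merge step of scalar Kalman-style fusion.

    d           : current state (the trained/fused value)
    d_var       : its variance (the covariance state d')
    z           : new sensor measurement
    valid_range : (lo, hi) gate; out-of-range readings revert to the old state
    """
    lo, hi = valid_range
    if not (lo <= z <= hi):          # 'validate measurement' step: reject outliers
        return d, d_var
    k = d_var / (d_var + meas_var)   # Kalman gain
    d_new = d + k * (z - d)          # merge the measurement into the state
    d_var_new = (1 - k) * d_var      # shrunken covariance for future predictions
    return d_new, d_var_new

state, var = 70.0, 4.0                                            # trained state
state, var = fuse_measurement(state, var, 72.0, 1.0, (0.0, 150.0))
state2, var2 = fuse_measurement(state, var, 999.0, 1.0, (0.0, 150.0))  # gated out
```

Because the updated variance `(1 - k) * d_var` is always smaller than the prior variance, repeated fusion drives the covariance norm down, which is the behaviour visible in the plotted curves.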
Figure 5.1: Temperature data before smoothing

5.0.1 Factors that could affect the accuracy of the algorithm

Out-of-sequence data The input to a data fusion system is normally discrete pieces of data, each with a unique timestamp marking its time of origin. In a real-life scenario, however, heterogeneous sensors working at different rates (speeds) can deliver data out of sequence. Such out-of-sequence data could badly affect the working of our algorithm, as both the training/smoothing data and the current data would be in an inconsistent state, leading to disastrous outcomes.

Conflicting data In our case study scenario, we have used NIR and polymer FBG sensors. A notable point is that both of these sensors produce temperature data (although in this study it is assumed that only the NIR sensor captures temperature data). A real-life situation could have many sensors reporting many different values.
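One common mitigation for out-of-sequence data is a small reorder buffer that holds readings briefly so slower sensors can still be slotted back in timestamp order. The Python sketch below is not part of the thesis design; the class name, the delay value and the sample readings are assumptions for illustration:

```python
import heapq

class ReorderBuffer:
    """Release sensor readings in timestamp order despite out-of-order arrival.

    Readings are held back until `delay` time units have passed, giving
    slower sensors a chance to deliver; anything older than the last
    released timestamp is dropped as irreparably late.
    """
    def __init__(self, delay):
        self.delay = delay
        self.heap = []                       # min-heap keyed on timestamp
        self.last_released = float("-inf")

    def push(self, timestamp, value):
        if timestamp <= self.last_released:
            return False                     # too late: would corrupt fusion state
        heapq.heappush(self.heap, (timestamp, value))
        return True

    def release(self, now):
        """Return readings at least `delay` old, in timestamp order."""
        out = []
        while self.heap and self.heap[0][0] <= now - self.delay:
            ts, value = heapq.heappop(self.heap)
            self.last_released = ts
            out.append((ts, value))
        return out

buf = ReorderBuffer(delay=2)
buf.push(3, "tensile")               # arrives first...
buf.push(1, "temp")                  # ...but was sensed earlier
released = buf.release(now=6)        # emitted in sensed order
late_ok = buf.push(2, "stale")       # older than last released -> rejected
```

The delay trades latency for ordering guarantees: a longer delay tolerates slower sensors at the cost of fusing older data.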
Figure 5.2: Temperature data after smoothing

This could lead to data inconsistency, and an appropriate measure of the accuracy of the algorithm requires the data to be in a consistent state.

Disparate data For a fully automated classifier, the sensors should work in conjunction. However, a recycling plant could contain sensors whose data are completely incompatible. Data from completely disparate sources will cause the fusion mechanism in this study to fail, so it must be ensured beforehand that the data to be fused is compatible.
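A standard way to reconcile conflicting readings of the same quantity, such as temperature reported by both the NIR and the FBG sensor, is inverse-variance weighting, in which the less noisy sensor is trusted more. This technique is not part of the thesis algorithm; the Python sketch below, with invented sensor variances, only illustrates the idea:

```python
def inverse_variance_fusion(readings):
    """Fuse conflicting readings of one quantity.

    readings: list of (value, variance) pairs; lower variance = more trusted.
    Returns the fused value and its (smaller) fused variance.
    """
    weights = [1.0 / var for _, var in readings]
    fused = sum(w * v for (v, _), w in zip(readings, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical: NIR reports 70 C (variance 1.0), FBG reports 74 C (variance 4.0).
value, var = inverse_variance_fusion([(70.0, 1.0), (74.0, 4.0)])
```

The fused variance is always below the smallest input variance, so resolving the conflict this way also improves the estimate rather than merely averaging it.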
Figure 5.3: Tensile yield before and after smoothing
Chapter 6

Conclusion and future study

This study presents a concept of multi-sensor data fusion in a WSN-based infrastructure. A preliminary study of the WSN environments currently used in manufacturing and recycling systems showed that multiple arrays of sensors are already deployed around the machines, and the data sensed by these sensors is used to monitor the state of the machines. In this study we have attempted to gather the data from these sensors and fuse it into a meaningful form, automating the classification and sorting of plastics into their appropriate categories before passing them along for the next step of processing.

Recycling thermoplastic is a technological problem that is not yet completely solved. One complexity in deploying an automated solution is that various plastics, though visually similar, have widely varying characteristics that machines cannot perceive unless the data already present in the different sensors is used in a way that makes the classifiers more adaptive. This study demonstrates, through simulation, an algorithm that fuses data from a CCD spectrum, an NIR sensor and an inductive sensor.

A brief description of the classification method used in this study is as follows. PNN methodology is used for the classification, in which a set of predetermined inputs (data from sensors) is normalized and then fused using the Kalman filter algorithm. Feature fusion represents the fusion of lower-level sensors into a vector of feature attributes. A
higher-level sensor, PCA-based hyperspectral sensing, identifies the plastic's colour and texture to determine its preliminary class. This information is again fused with the feature-fused matrix of lower-level values. The Kalman filter fusion process and the hyperspectral image classification are shown through a simulation in MATLAB.

To conclude, although the algorithm works successfully on small-scale simulated data, a full-scale implementation in the actual environment is essential. In this study the elements of noisy, redundant, incompatible and spurious data are not considered, but these factors could have serious repercussions on the accurate functioning of the algorithm.

6.0.1 Future studies

Future work will include enhancing the automated fusion through machine learning and web-based modules. A sensor ontology language could also significantly improve the extraction of data through a web-based module.

Semantic sensor network

Sensor semantics extends the former OWL ontology to model sensor devices and their capabilities so that they interact better with their systems and processes (Compton et al., 2012). The ontology is based on observing the pattern of sensor interaction with its peers or higher-level systems, as seen in Figure 6.1 on the next page. The sensor ontology can be subdivided into a number of perspectives based on interaction with sub-systems. These are as follows:

1 Sensor perspective: what a sensor measures, how it senses, and what is sensed (Compton et al., 2012).

2 System perspective: sensors' interactions with their systems; the focus is on a system of sensors (Compton et al., 2012).
Figure 6.1: Pattern in Semantic Sensor Network, taken from (Compton et al., 2012)

3 Data perspective: what data is being sensed, as well as the meta-data generated (Compton et al., 2012).

4 Feature or property perspective: the focus is on the features or attributes being sensed and the correlation of these properties.
Figure 6.2: Overview of the Semantic Sensor Network ontology classes and properties, taken from (Compton et al., 2012)
  • 94. References Aher, S. B. & Lobo, L. (2013). Combination of machine learning algorithms for recom- mendation of courses in e-learning system based on historical data. Knowledge- Based Systems, 51, 1 - 14. Ahmed, A. A., Shi, H. & Shang, Y. (2005, June). Sharp: a new approach to relative localization in wireless sensor networks. In 25th ieee international conference on distributed computing systems workshops (p. 892-898). doi: 10.1109/ICDCSW .2005.125 Antonopoulos, C. P. & Voros, N. S. (2016). Resource efficient data compression algorithms for demanding, wsn based biomedical applications. Journal of Bio- medical Informatics, 59, 1 - 14. Anuar, S., Selamat, A. & Sallehuddin, R. (n.d.). A modified scout bee for artificial bee colony algorithm and its performance on optimization problems. Journal of King Saud University - Computer and Information Sciences. AO, B., YONGCAI, W., LU, Y., BROOKS, R. R. & IYENGAR, S. S. (2016). On precision bound of distributed fault-tolerant sensor fusion algorithms. ACM Computing Surveys, 49(1), 5:1 - 5:23. Ari, A. A. A., Yenke, B. O., Labraoui, N., Damakoa, I. & Gueroui, A. (2016). A power efficient cluster-based routing algorithm for wireless sensor networks: Honeybees swarm intelligence based approach. Journal of Network and Computer Applications. Arkin, E. M., Efrat, A., Mitchell, J. S., Polishchuk, V., Ramasubramanian, S., Sank- araraman, S. & Taheri, J. (2014). Data transmission and base-station placement for optimizing the lifetime of wireless sensor networks. Ad Hoc Networks, 12. Aydogdu, I., Akın, A. & Saka, M. (2016). Design optimization of real world steel space frames using artificial bee colony algorithm with levy flight distribution. Advances in Engineering Software, 92, 1 - 14. Barbancho, J., León, C., Molina, F. J. & Barbancho, A. (2007). A new qos routing algorithm based on self-organizing maps for wireless sensor networks. Telecom- munication Systems, 36(1-3), 73–83. Beguš, S., Begeš, G., Drnovšek, J. & Hudoklin, D. 
(2015). A novel nir laser-based sensor for measuring the surface moisture in polymers. Sensors Actuators: A. Physical, 221. Ben Ali, J., Saidi, L., Mouelhi, A., Chebel-Morello, B. & Fnaiech, F. (2015). Linear fea- ture selection and classification using pnn and sfam neural networks for a nearly 94
  • 95. References 95 online diagnosis of bearing naturally progressing degradations. Engineering Applications of Artificial Intelligence, 42, 67 - 81. Brown, K., McMahon, C. & Sims Williams, J. (1995). Features, aka the semantics of a formal language of manufacturing. Research in Engineering Design, 7(3), 151. Buyya, R., Calheiros, R. N. & Li, X. (2012). Autonomic cloud computing: Open challenges and architectural elements. In Emerging applications of information technology (eait), 2012 third international conference on (pp. 3–10). Casa, R., Castaldi, F., Pascucci, S., Palombo, A. & Pignatti, S. (2013). A comparison of sensor resolution and calibration strategies for soil texture estimation from hyperspectral remote sensing. Geoderma, 197-198, 17 - 26. Castorena, J., Morrison, J., Paliwal, J. & Erkinbaev, C. (2015). Non-uniform system response detection for hyperspectral imaging systems. Infrared Physics and Technology, 73, 263 - 268. Cayirpunar, O., Kadioglu-Urtis, E. & Tavli, B. (2015). Optimal base station mobility patterns for wireless sensor network lifetime maximization. IEEE Sensors Journal, 15(11), 6592. Chakaravarthy, G., Marimuthu, S., Ponnambalam, S. & Kanagaraj, G. (n.d.). Im- proved sheep flock heredity algorithm and artificial bee colony algorithm for scheduling m -machine flow shops lot streaming with equal size sub-lot problems. International Journal of Production Research, 52(5), 1509 - 1527. Chang, H.-C., Liao, S.-C., Hsieh, H.-S., Wen, J.-H., Lai, C.-H. & Fang, W. (2016). Magnetostrictive type inductive sensing pressure sensor. Sensors Actuators: A. Physical, 238, 25 - 36. Chen, J., Li, Y., Lv, K., Zhong, W., Wang, H., Wu, Z., ... Jiang, J. (2016). Cyclam- functionalized carbon dots sensor for sensitive and selective detection of copper ion and sulfide anion in aqueous media and its imaging in live cells. Sensors and Actuators, 224, 298 - 306. Cheng, H.-Y. (2016). Hybrid solar irradiance now-casting by fusing kalman filter and regressor. 
Renewable Energy, 91, 434 - 441. Chi, Q., Yan, H., Zhang, C., Pang, Z. & Da Xu, L. (2014). A reconfigurable smart sensor interface for industrial wsn in iot environment. Industrial Informatics, IEEE Transactions on, 10(2), 1417–1425. Compton, M., Barnaghi, P., Bermudez, L., García-Castro, R., Corcho, O., Cox, S., ... Taylor, K. (2012). The SSN ontology of the W3C semantic sensor network incubator group. Web Semantics: Science, Services and Agents on the World Wide Web, 17, 25 - 32. Retrieved from http://www.sciencedirect.com/science/article/pii/S1570826812000571 doi: http://dx.doi.org/10.1016/j.websem.2012.05.003 Coskun, H. (2015). Equivalent rectangular stress distribution in thermoplastic beam design. Journal of Thermoplastic Composite Materials, 28(11), 1465. Craven, M. W. & Shavlik, J. W. (2014). Learning symbolic rules using artificial neural networks. In Proceedings of the tenth international conference on machine learning (pp. 73–80). Declercq, N. F., Nagaoka, R., Yamazaki, R. & Saijo, Y. (2015). Proceedings of the 2015
icu international congress on ultrasonics, metz, france adaptive spatial filtering with principal component analysis for biomedical photoacoustic imaging. Physics Procedia, 70, 1161 - 1164. Retrieved from http://www.sciencedirect.com/science/article/pii/S1875389215009906 doi: http://dx.doi.org/10.1016/j.phpro.2015.08.249 Dehghani, S., Pourzaferani, M. & Barekatain, B. (2015). Comparison on energy-efficient cluster based routing algorithms in wireless sensor network. Procedia Computer Science, 72(The Third Information Systems International Conference 2015), 535 - 542. Deng, S., Djukic, L., Paton, R. & Ye, L. (2015). Review: Thermoplastic–epoxy interactions and their potential applications in joining composite structures – a review. Composites Part A, 68, 121 - 132. Devaraju, J., Suhas, K., Mohana, H. & Patil, V. A. (2015). Wireless portable microcontroller based weather monitoring station. Measurement, 76, 189 - 200. De Vin, L. J., Holm, M. & Ng, A. H. (2010). The information fusion jdl-u model as a reference model for virtual manufacturing. Robotics and Computer Integrated Manufacturing, 26(19th International Conference on Flexible Automation and Intelligent Manufacturing), 629 - 638. Ding, X., Chen, X., Ma, W., Chen, X. & Li, N. (2016). A novel pqcr-l circuit for inductive sensing and its application in displacement detection. IEEE Transactions on Instrumentation and Measurement, 65(3), 685 - 693. Duro, J. A., Padget, J. A., Bowen, C. R., Kim, H. A. & Nassehi, A. (2016). Multi-sensor data fusion framework for cnc machining monitoring. Mechanical Systems and Signal Processing, 66-67, 505 - 520. Edwards-Murphy, F., Magno, M., Whelan, P. M., O'Halloran, J. & Popovici, E. M. (2016). b+wsn: Smart beehive with preliminary decision tree analysis for agriculture and honey bee health monitoring. Computers and Electronics in Agriculture, 124, 211 - 219. Elsayed, S., Rischbeck, P. & Schmidhalter, U. (2015).
Comparing the performance of active and passive reflectance sensors to assess the normalized relative canopy temperature and grain yield of drought-stressed barley cultivars. Field Crops Research, 177, 148 - 160. Emberton, S., Chittka, L., Cavallaro, A. & Menghua, W. (2016a). Sensor capability and atmospheric correction in ocean colour remote sensing. Remote Sensing, 8(1), 1. Emberton, S., Chittka, L., Cavallaro, A. & Menghua, W. (2016b). Sensor capability and atmospheric correction in ocean colour remote sensing. Remote Sensing, 8(1), 1. Francalanza, E., Borg, J. C. & Constantinescu, C. (2016). A fuzzy logic based approach to explore manufacturing system changeability level decisions. Procedia CIRP, 41(Research and Innovation in Manufacturing: Key Enabling Technologies for the Factories of the Future - Proceedings of the 48th CIRP Conference on Manufacturing Systems), 3 - 8. Gao, Y., Zhang, H., Qi, K., Li, X. & Tong, Y. (2016). Original papers: Optimal periphery deployment of wireless base stations in swine facilities for 1-coverage. Computers and Electronics in Agriculture, 125, 173 - 183.
Gokturk, M. S., Gurbuz, O. & Erman, M. (2016). A practical cross layer cooperative mac framework for wsns. Computer Networks, 98, 57 - 71. Guestrin, C., Bodik, P., Thibaux, R., Paskin, M. & Madden, S. (2004). Distributed regression: an efficient framework for modeling sensor network data. In Information processing in sensor networks, 2004. ipsn 2004. third international symposium on (pp. 1–10). Hafizi, Z., Epaarachchi, J. & Lau, K. (2015). Impact location determination on thin laminated composite plates using an nir-fbg sensor system. Measurement, 61, 51 - 57. Hao, Y. & Helo, P. (n.d.). The role of wearable devices in meeting the needs of cloud manufacturing: A case study. Robotics and Computer Integrated Manufacturing. Hasani, A., Zegordi, S. H. & Nikbakhsh, E. (2012a). Robust closed-loop supply chain network design for perishable goods in agile manufacturing under uncertainty. International Journal of Production Research, 50(16), 4649 - 4669. Hasani, A., Zegordi, S. H. & Nikbakhsh, E. (2012b). Robust closed-loop supply chain network design for perishable goods in agile manufacturing under uncertainty. International Journal of Production Research, 50(16), 4649 - 4669. He, H., Cui, L., Zhou, F. & Wang, D. (2016). Distributed proxy cache technology based on autonomic computing in smart cities. Future Generation Computer Systems. Huang, B., Li, C. & Tao, F. (2014). A chaos control optimal algorithm for qos-based service composition selection in cloud manufacturing system. Enterprise Information Systems, 8(4), 445 - 463. Illinois, U. O. (2014, sep). Retrieved from http://mtil.illinois.edu/DATA/_HARDNESS_Cylindrical_Correction_Tables/RWCyl.txt Iounousse, J., Er-Raki, S., El Motassadeq, A. & Chehouani, H. (2015). Using an unsupervised approach of probabilistic neural network (pnn) for land use classification from multitemporal satellite images. Applied Soft Computing Journal, 30, 1 - 13. Iyakwari, S., Glass, H.
J., Rollinson, G. K. & Kowalczuk, P. B. (2016). Application of near infrared sensors to preconcentration of hydrothermally-formed copper ore. Minerals Engineering, 85, 148 - 167. Jamalabadi, H., Nasrollahi, H., Alizadeh, S., Nadjar Araabi, B. & Nili Ahamadabadi, M. (2016). Competitive interaction reasoning: A bio-inspired reasoning method for fuzzy rule based classification systems. Information Sciences, 352-353, 35 - 47. Jayaraman, P. P., Zaslavsky, A. & Delsing, J. (2010). Intelligent processing of k-nearest neighbors queries using mobile data collectors in a location aware 3d wireless sensor network. In Trends in applied intelligent systems (pp. 260–270). Springer. Jumaidin, R., Sapuan, S., Jawaid, M., Ishak, M. & Sahari, J. (2016). Characteristics of thermoplastic sugar palm starch/agar blend: Thermal, tensile, and physical properties. International Journal of Biological Macromolecules, 89, 575 - 581. Keith, M., Demirkan, H. & Goul, M. (2013). Service-oriented methodology for systems development. Journal of Management Information Systems, 30(1), 227 - 260. Khan, R., Khan, S. U., Zaheer, R. & Khan, S. (n.d.). Future internet: The internet of things architecture, possible applications and key challenges. 2012 10th International Conference on Frontiers of Information Technology, 257.
  • 98. References 98 Kouyama, T., Yokota, Y., Ishihara, Y., Nakamura, R., Yamamoto, S. & Matsunaga, T. (2016). Development of an application scheme for the selene/sp lunar reflectance model for radiometric calibration of hyperspectral and multispectral sensors. Planetary and Space Science, 124, 76 - 83. Kyriazis, D. & Varvarigou, T. (2013). Smart, autonomous and reliable internet of things. Procedia Computer Science, 21(The 4th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN-2013) and the 3rd International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH)), 442 - 448. Land, W. H., Jr., Ma, X., Barnes, E., Qiao, X., Heine, J., Masters, T. & Park, J. W. (2012). Pnn/grnn ensemble processor design for early screening of breast cancer. Procedia Computer Science, 12(Complex Adaptive Systems 2012), 438 - 443. Lartigau, J., Xu, X., Nie, L. & Zhan, D. (2015). Cloud manufacturing service composi- tion based on qos with geo-perspective transportation using an improved artificial bee colony optimisation algorithm. International Journal of Production Research, 53(14), 4380 - 4404. Lemaignan, S., Siadat, A., Dantan, J.-Y. & Semenenko, A. (2006). Mason: A pro- posal for an ontology of manufacturing domain. IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications. Li, H., Chen, Q., Zhao, J. & Wu, M. (2015). Nondestructive detection of total volatile basic nitrogen (tvb-n) content in pork meat by integrating hyperspectral imaging and colorimetric sensor combined with a nonlinear data fusion. LWT - Food Science and Technology, 63, 268 - 274. Lionetto, F., Dell’Anna, R., Montagna, F. & Maffezzoli, A. (2016). Modeling of continuous ultrasonic impregnation and consolidation of thermoplastic matrix composites. Composites: Part A, Applied Science and Manufacturing, 82, 119 - 129. Liu, K., Yang, T., Ma, J. & Cheng, Z. (2015). 
Fault-tolerant event detection in wireless sensor networks using evidence theory. KSII Transactions on Internet and Information Systems (TIIS), 9(10), 3965–3982. Loubiere, P., Jourdan, A., Siarry, P. & Chelouah, R. (n.d.). A sensitivity analysis method for driving the artificial bee colony algorithm’s search process. Applied Soft Computing, 41, 515 - 531. Maggio, M., Hoffmann, H., Papadopoulos, A. V., Panerati, J., Santambrogio, M. D., Agarwal, A. & Leva, A. (2012). Comparison of decision-making strategies for self-optimization in autonomic computing systems. ACM Transactions on Autonomous and Adaptive Systems (TAAS), 7(4), 36. Mahendran, G. & Dhanasekaran, R. (2015). Investigation of the severity level of diabetic retinopathy using supervised classifier algorithms. Computers and Electrical Engineering, 45, 312 - 323. Miguez, R., Georgiopoulos, M. & Kaylani, A. (2010). G-pnn: A genetically engineered probabilistic neural network. Nonlinear Analysis, 73, 1783 - 1791. Mishra, V., Lohar, M. & Amphawan, A. (2016). Improvement in temperature sensitivity of fbg by coating of different materials. Optik - International Journal for Light
  • 99. References 99 and Electron Optics, 127, 825 - 828. Mitton, N., Papavassiliou, S., Puliafito, A. & Trivedi, K. S. (2012). Combining cloud and sensors in a smart city environment. EURASIP journal on Wireless Communications and Networking, 2012(1), 1–10. Mönks, U., Trsek, H., Dürkop, L., Geneiß, V. & Lohweg, V. (2016). Towards distributed intelligent sensor and information fusion. Mechatronics, 34(System-Integrated Intelligence: New Challenges for Product and Production Engineering), 63 - 71. Müller, B. J., Burger, T., Borisov, S. M. & Klimant, I. (2015). High performance optical trace oxygen sensors based on nir-emitting benzoporphyrins covalently coupled to silicone matrixes. Sensors Actuators B. Chemical, 216, 527 - 534. Nie, M., Liu, Z., Li, X., Wu, Q., Tang, B., Xiao, X., ... Zheng, C. (2016). Novel method for hyperspectral unmixing: fuzzy c-means unmixing. Sensor Review, 36(2), 184. Oliva Teles, L., Fernandes, M., Amorim, J. & Vasconcelos, V. (2015). Video-tracking of zebrafish (danio rerio) as a biological early warning system using two distinct artificial neural networks: Probabilistic neural network (pnn) and self-organizing map (som). Aquatic Toxicology, 165, 241 - 248. Otheguy, M. E. (2010). Manufacture, repair and recycling of thermoplastic composite boats. Papazoglou, M. P. (2012). Web services & soa. Perason Education. Parashar, M. & Hariri, S. (2005). Autonomic computing: An overview. Unconventional Programming Paradigms, 257–269. Pereira, G., McGugan, M. & Mikkelsen, L. (2016). Test equipment: Method for inde- pendent strain and temperature measurement in polymeric tensile test specimen using embedded fbg sensors. Polymer Testing, 50, 125 - 134. Peres, A. M., Pires, R. R. & Oréfice, R. L. (2016). Evaluation of the effect of repro- cessing on the structure and properties of low density polyethylene/thermoplastic starch blends. Carbohydrate Polymers, 136, 210 - 215. Pu, F., Wang, Z., Du, C., Zhang, W. & Chen, N. (2016). 
Semantic integration of wireless sensor networks into open geospatial consortium sensor observation service to access and share environmental monitoring systems. IET Software, 10(2), 45–53. Rawat, S. & Rawat, S. (2016). Multi-sensor data fusion by a hybrid methodology – a comparative study. Computers in Industry, 75, 27 - 34. Rojas, R. (2013). Neural networks: a systematic introduction. Schaefer, D. (2014). Cloud-based design and manufacturing (cbdm): A service-oriented product development paradigm for the 21st century. Scherer, R. (2012). Multiple fuzzy classification systems (Vol. 288). Springer. Schroth, C. & Janner, T. (2007). Web 2.0 and soa: Converging concepts enabling the internet of services. IT professional, 9(3), 36–41. Senge, R., Bösner, S., Dembczy´nski, K., Haasenritter, J., Hirsch, O., Donner-Banzhoff, N. & Hüllermeier, E. (n.d.). Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty. Information Sciences, 255, 16 - 29.
  • 100. References 100 Soliman, F. (2014). Business transformation and sustainability through cloud system implementation. Thermoplastic elastomer (tpe) faqs. (2015, Feb). Retrieved from http:// www.polyone.com/products/thermoplastic-elastomers/ tpe-knowledge-center/tpe-faqs Tian, T., Sun, S. & Li, N. (2016). Multi-sensor information fusion estimators for stochastic uncertain systems with correlated noises. Information Fusion, 27, 126 - 137. Tong, Y., Liu, L., Zhao, M., Chen, J. & Li, H. (2016). Signal Processing, 126(Signal Processing for Heterogeneous Sensor Networks), 149 - 158. Tzschichholz, T., Boge, T. & Schilling, K. (2015). Relative pose estimation of satellites using pmd-/ccd-sensor data fusion. Acta Astronautica, 109, 25 - 33. Urbina, A., Mahadevan, S. & Paez, T. L. (2011). Quantification of margins and uncertainties of complex systems in the presence of aleatoric and epistemic uncertainty. Reliability Engineering and System Safety, 96(Quantification of Margins and Uncertainties), 1114 - 1125. Wang, C., Liu, X. & Chen, Z. (2015). Probe improvement of inductive sensor for online health monitoring of mechanical transmission systems. IEEE Transactions on Magnetics, 51(11), 1 - 4. Wang, S., Carpenter, D. A., DeJager, A., DiBella, J. A., Doran, J. E., Fabinski, R. P., ... Yaniga, R. (2016). A 47 million pixel high-performance interline ccd image sensor. IEEE Transactions on Electron Devices, 63(1), 174 - 181. Wang, X. V., Wang, L., Mohammed, A. & Givehchi, M. (2016). Ubiquitous manufac- turing system based on cloud: A robotics application. Robotics and Computer Integrated Manufacturing. Wang, Y., Chen, H., Wu, X. & Shu, L. (2016). An energy-efficient sdn based sleep scheduling algorithm for wsns. Journal of Network and Computer Applications, 59, 39 - 45. White, S. R., Hanson, J. E., Whalley, I., Chess, D. M. & Kephart, J. O. (2004). An architectural approach to autonomic computing. null, 2–9. Wu, D., Rosen, D. W., Wang, L. & Schaefer, D. (2015a). 
Computer-Aided Design, 59, 1 - 14. Wu, D., Rosen, D. W., Wang, L. & Schaefer, D. (2015b). Cloud-based design and manufacturing: A new paradigm in digital manufacturing and design innovation. Computer-Aided Design, 59, 1 - 14. Xie, L., Liu, H., Wu, W., Abliz, D., Duan, Y. & Li, D. (2016). Fusion bonding of thermosets composite structures with thermoplastic binder co-cure and prepreg interlayer in electrical resistance welding. Materials and Design, 98, 143 - 149. Yahya, M. A., Yahya, M. A. & Dahanayake, A. (n.d.). Autonomic computing: A framework to identify autonomy requirements. Procedia Computer Science, 20(Complex Adaptive Systems), 235 - 241. Yang, X., Shi, G. & Zhang, Z. (2014). International Journal of Production Research, 52(2), 326 - 336.
Yann-Al Le Borgne. (2014, Aug). Retrieved from http://www.ulb.ac.be/di/map/yleborgn/sensornets/code/matlabcode.zip Young, R. I. M., Gunendran, A. G., Cutting-Decelle, A. F. & Gruninger, M. (2007). Manufacturing knowledge sharing in plm: a progression towards the use of heavy weight ontologies. International Journal of Production Research, 45(7), 1505 - 1519. Yu, L., Wang, N. & Meng, X. (2005). Real-time forest fire detection with wireless sensor networks. In Wireless communications, networking and mobile computing, 2005. proceedings. 2005 international conference on (Vol. 2, pp. 1214–1217). Zhang, H., Bharti, A., Li, Z., Du, S., Bilotti, E. & Peijs, T. (2015). Localized toughening of carbon/epoxy laminates using dissolvable thermoplastic interleaves and electrospun fibres. Composites Part A, 79, 116 - 126. Zhang, Y., Yu, R., Nekovee, M., Liu, Y., Xie, S. & Gjessing, S. (2012). Cognitive machine-to-machine communications: visions and potentials for the smart grid. IEEE Network, 26(3), 6 - 13.