UNSUPERVISED
SIGNAL
PROCESSING
Channel Equalization
and Source Separation
CRC Press is an imprint of the
Taylor & Francis Group, an informa business
Boca Raton London New York
João M. T. Romano
Romis R. de F. Attux
Charles C. Cavalcante
Ricardo Suyama
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2011 by Taylor and Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1
International Standard Book Number-13: 978-1-4200-1946-9 (Ebook-PDF)
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
To our families.
To the friendly members of DSPCom, past and present.
To the lovely memories of Hélio Drago Romano, Romis Attux, and
Francisco Casimiro do Nascimento.
Contents
Foreword .......................................................................... xix
Preface.............................................................................. xxi
Acknowledgments ............................................................... xxv
Authors ............................................................................xxvii
1. Introduction.................................................................... 1
1.1 Channel Equalization.................................................... 1
1.2 Source Separation ........................................................ 5
1.3 Organization and Contents ............................................. 7
2. Statistical Characterization of Signals and Systems.................... 11
2.1 Signals and Systems ..................................................... 13
2.1.1 Signals ............................................................. 13
2.1.1.1 Continuous- and Discrete-Time Signals............. 14
2.1.1.2 Analog and Digital Signals............................ 14
2.1.1.3 Periodic and Aperiodic/Causal and Noncausal
Signals .................................................... 14
2.1.1.4 Energy Signals and Power Signals ................... 15
2.1.1.5 Deterministic and Random Signals .................. 16
2.1.2 Transforms ........................................................ 16
2.1.2.1 The Fourier Transform of Continuous-Time
Signals .................................................... 17
2.1.2.2 The Fourier Transform of Discrete-Time
Signals .................................................... 18
2.1.2.3 The Laplace Transform ................................ 18
2.1.2.4 The z-Transform ........................................ 18
2.1.3 Systems ............................................................ 19
2.1.3.1 SISO/SIMO/MISO/MIMO Systems ................ 20
2.1.3.2 Causal Systems.......................................... 20
2.1.3.3 Invertible Systems ...................................... 20
2.1.3.4 Stable Systems........................................... 20
2.1.3.5 Linear Systems .......................................... 21
2.1.3.6 Time-Invariant Systems ............................... 21
2.1.3.7 Linear Time-Invariant Systems ....................... 21
2.1.4 Transfer Function and Frequency Response................. 22
2.2 Digital Signal Processing................................................ 23
2.2.1 The Sampling Theorem ......................................... 23
2.2.2 The Filtering Problem ........................................... 24
2.3 Probability Theory and Randomness ................................. 25
2.3.1 Definition of Probability ........................................ 25
2.3.2 Random Variables ............................................... 27
2.3.2.1 Joint and Conditional Densities ...................... 30
2.3.2.2 Function of a Random Variable ...................... 32
2.3.3 Moments and Cumulants....................................... 33
2.3.3.1 Properties of Cumulants............................... 36
2.3.3.2 Relationships between Cumulants and
Moments ................................................. 37
2.3.3.3 Joint Cumulants......................................... 37
2.4 Stochastic Processes...................................................... 38
2.4.1 Partial Characterization of Stochastic Processes: Mean,
Correlation, and Covariance ................................... 39
2.4.2 Stationarity........................................................ 41
2.4.3 Ergodicity ......................................................... 43
2.4.4 Cyclostationarity ................................................. 44
2.4.5 Discrete-Time Random Signals ................................ 45
2.4.6 Linear Time-Invariant Systems with Random Inputs ...... 46
2.5 Estimation Theory ....................................................... 49
2.5.1 The Estimation Problem ........................................ 50
2.5.1.1 Single-Parameter Estimation.......................... 50
2.5.1.2 Multiple-Parameter Estimation....................... 50
2.5.2 Properties of Estimators ........................................ 51
2.5.2.1 Bias........................................................ 51
2.5.2.2 Efficiency................................................. 51
2.5.2.3 Cramér–Rao Bound .................................... 52
2.5.3 Maximum Likelihood Estimation ............................. 53
2.5.4 Bayesian Approach .............................................. 53
2.5.4.1 Maximum a Posteriori Estimation ................... 54
2.5.4.2 Minimum Mean-Squared Error ...................... 56
2.5.5 Least Squares Estimation ....................................... 57
2.6 Concluding Remarks .................................................... 59
3. Linear Optimal and Adaptive Filtering ................................... 61
3.1 Supervised Linear Filtering............................................. 64
3.1.1 System Identification ............................................ 65
3.1.2 Deconvolution: Channel Equalization ........................ 66
3.1.3 Linear Prediction................................................. 67
3.2 Wiener Filtering .......................................................... 68
3.2.1 The MSE Surface ................................................. 70
3.3 The Steepest-Descent Algorithm....................................... 77
3.4 The Least Mean Square Algorithm .................................... 81
3.5 The Method of Least Squares........................................... 85
3.5.1 The Recursive Least-Squares Algorithm ..................... 87
3.6 A Few Remarks Concerning Structural Extensions ................. 89
3.6.1 Infinite Impulse Response Filters.............................. 90
3.6.2 Nonlinear Filters ................................................. 90
3.7 Linear Filtering without a Reference Signal.......................... 91
3.7.1 Constrained Optimal Filters.................................... 92
3.7.2 Constrained Adaptive Filters .................................. 95
3.8 Linear Prediction Revisited ............................................. 96
3.8.1 The Linear Prediction-Error Filter as a Whitening
Filter ............................................................... 97
3.8.2 The Linear Prediction-Error Filter Minimum Phase
Property ........................................................... 98
3.8.3 The Linear Prediction-Error Filter as a Constrained
Filter ............................................................... 99
3.9 Concluding Remarks .................................................... 100
4. Unsupervised Channel Equalization...................................... 103
4.1 The Unsupervised Deconvolution Problem.......................... 106
4.1.1 The Specific Case of Equalization ............................. 107
4.2 Fundamental Theorems ................................................. 109
4.2.1 The Benveniste–Goursat–Ruget Theorem.................... 110
4.2.2 The Shalvi–Weinstein Theorem................................ 110
4.3 Bussgang Algorithms.................................................... 111
4.3.1 The Decision-Directed Algorithm ............................. 114
4.3.2 The Sato Algorithm.............................................. 115
4.3.3 The Godard Algorithm.......................................... 115
4.4 The Shalvi–Weinstein Algorithm ...................................... 117
4.4.1 Constrained Algorithm ......................................... 117
4.4.2 Unconstrained Algorithm ...................................... 119
4.5 The Super-Exponential Algorithm .................................... 121
4.6 Analysis of the Equilibrium Solutions of Unsupervised
Criteria ..................................................................... 125
4.6.1 Analysis of the Decision-Directed Criterion ................. 126
4.6.2 Elements of Contact between the Decision-Directed and
Wiener Criteria ................................................... 127
4.6.3 Analysis of the Constant Modulus Criterion ................ 128
4.6.4 Analysis in the Combined Channel + Equalizer
Domain ............................................................ 129
4.6.4.1 Ill-Convergence in the Equalizer Domain .......... 130
4.7 Relationships between Equalization Criteria ........................ 132
4.7.1 Relationships between the Constant Modulus and
Shalvi–Weinstein Criteria ...................................... 132
4.7.1.1 Regalia’s Proof of the Equivalence between the
Constant Modulus and Shalvi–Weinstein
Criteria ................................................... 133
4.7.2 Some Remarks Concerning the Relationship between the
Constant Modulus/Shalvi–Weinstein and the Wiener
Criteria............................................................. 135
4.8 Concluding Remarks .................................................... 139
5. Unsupervised Multichannel Equalization ............................... 141
5.1 Systems with Multiple Inputs and/or Multiple Outputs .......... 144
5.1.1 Conditions for Zero-Forcing Equalization of MIMO
Systems ............................................................ 146
5.2 SIMO Channel Equalization ............................................ 148
5.2.1 Oversampling and the SIMO Model .......................... 150
5.2.2 Cyclostationary Statistics of Oversampled Signals ......... 152
5.2.3 Representations of the SIMO Model .......................... 153
5.2.3.1 Standard Representation .............................. 153
5.2.3.2 Representation via the Sylvester Matrix ............ 154
5.2.4 Fractionally Spaced Equalizers and the MISO Equalizer
Model .............................................................. 156
5.2.5 Bezout’s Identity and the Zero-Forcing Criterion........... 158
5.3 Methods for Blind SIMO Equalization................................ 160
5.3.1 Blind Equalization Based on Higher-Order Statistics ...... 160
5.3.2 Blind Equalization Based on Subspace Decomposition .... 161
5.3.3 Blind Equalization Based on Linear Prediction ............. 165
5.4 MIMO Channels and Multiuser Processing.......................... 168
5.4.1 Multiuser Detection Methods Based on Decorrelation
Criteria............................................................. 169
5.4.1.1 The Multiuser Constant Modulus Algorithm ...... 170
5.4.1.2 The Fast Multiuser Constant Modulus
Algorithm................................................ 172
5.4.1.3 The Multiuser pdf Fitting Algorithm
(MU-FPA)................................................ 173
5.4.2 Multiuser Detection Methods Based on
Orthogonalization Criteria ..................................... 176
5.4.2.1 The Multiuser Kurtosis Algorithm................... 177
5.5 Concluding Remarks .................................................... 179
6. Blind Source Separation ..................................................... 181
6.1 The Problem of Blind Source Separation ............................. 184
6.2 Independent Component Analysis .................................... 186
6.2.1 Preprocessing: Whitening ...................................... 188
6.2.2 Criteria for Independent Component Analysis ............. 190
6.2.2.1 Mutual Information .................................... 191
6.2.2.2 A Criterion Based on Higher-Order Statistics ...... 194
6.2.2.3 Nonlinear Decorrelation............................... 195
6.2.2.4 Non-Gaussianity Maximization ...................... 196
6.2.2.5 The Infomax Principle and the Maximum
Likelihood Approach .................................. 198
6.3 Algorithms for Independent Component Analysis ................ 200
6.3.1 Hérault and Jutten’s Approach ................................ 200
6.3.2 The Infomax Algorithm......................................... 201
6.3.3 Nonlinear PCA ................................................... 202
6.3.4 The JADE Algorithm ............................................ 204
6.3.5 Equivariant Adaptive Source Separation/Natural
Gradient ........................................................... 205
6.3.6 The FastICA Algorithm ......................................... 206
6.4 Other Approaches for Blind Source Separation ..................... 209
6.4.1 Exploring the Correlation Structure of the Sources......... 209
6.4.2 Nonnegative Independent Component Analysis ........... 210
6.4.3 Sparse Component Analysis ................................... 211
6.4.4 Bayesian Approaches ........................................... 213
6.5 Convolutive Mixtures ................................................... 214
6.5.1 Source Separation in the Time Domain....................... 215
6.5.2 Signal Separation in the Frequency Domain................. 216
6.6 Nonlinear Mixtures ...................................................... 218
6.6.1 Nonlinear ICA.................................................... 219
6.6.2 Post-Nonlinear Mixtures........................................ 220
6.6.3 Mutual Information Minimization ............................ 222
6.6.4 Gaussianization .................................................. 223
6.7 Concluding Remarks .................................................... 224
7. Nonlinear Filtering and Machine Learning .............................. 227
7.1 Decision-Feedback Equalizers.......................................... 229
7.1.1 Predictive DFE Approach ...................................... 231
7.2 Volterra Filters............................................................ 233
7.3 Equalization as a Classification Task.................................. 235
7.3.1 Derivation of the Bayesian Equalizer ......................... 237
7.4 Artificial Neural Networks ............................................. 241
7.4.1 A Neuron Model ................................................. 241
7.4.2 The Multilayer Perceptron ..................................... 242
7.4.2.1 The Backpropagation Algorithm ..................... 244
7.4.3 The Radial-Basis Function Network .......................... 247
7.5 Concluding Remarks .................................................... 251
8. Bio-Inspired Optimization Methods ...................................... 253
8.1 Why Bio-Inspired Computing? ........................................ 254
8.2 Genetic Algorithms ...................................................... 256
8.2.1 Fundamental Concepts and Terminology ................... 256
8.2.2 A Basic Genetic Algorithm ..................................... 257
8.2.3 Coding............................................................. 258
8.2.4 Selection Operators .............................................. 259
8.2.5 Crossover and Mutation Operators ........................... 261
8.3 Artificial Immune Systems ............................................. 266
8.4 Particle Swarm Optimization .......................................... 269
8.5 Concluding Remarks .................................................... 273
Appendix A: Some Properties of the Correlation Matrix ................ 275
A.1 Hermitian Property ...................................................... 275
A.2 Eigenstructure ............................................................ 275
A.3 The Correlation Matrix in the Context of Temporal Filtering ..... 277
Appendix B: Kalman Filter .................................................... 279
B.1 State-Space Model........................................................ 279
B.2 Deriving the Kalman Filter ............................................. 280
References......................................................................... 285
Foreword
Intelligent systems have made major contributions to the progress of
science and technology in recent decades. They find applications in all
technical fields and, particularly, in communications, consumer electron-
ics, and control. A distinct characteristic is their high level of complexity,
due to the fact that they capitalize on all sorts of scientific knowledge
and practical know-how. However, their architecture is rather simple and
can be broken down into four basic constituents, namely, sensors, actua-
tors, signal-processing modules, and information-processing modules. The
sensors and actuators constitute the interfaces of the system with its envi-
ronment, while the signal-processing modules link these interfaces with the
information-processing modules. Although it is generally recognized that
the intelligence of the system lies in the information-processing section,
intelligence is also needed in the signal-processing section to learn the
environment, follow its evolutions, and cope with its adverse effects. The
signal-processing modules deliver the raw data and even the most sophisti-
cated information-processing algorithms perform badly if the quality of the
raw data is poor.
From the perspective of signal processing, the most challenging problem
is the connection between the signal sources and the sensors, for two main
reasons. First, the transmission channels degrade the useful signals, and
second, the sources have to be identified and separated from the received
mixtures. Channel equalization and source separation can be dealt with sep-
arately or jointly. In any case, the quality of the corresponding processing
is essential for the performance of the system, because it determines the
reliability of the input data to the information-processing modules. When-
ever appropriate, the problem is simplified by the introduction of learning
phases, during which the algorithms are trained for optimal operation; this is
called supervised processing. However, this procedure is not always possi-
ble or desirable, and continuous optimization has many advantages in terms
of global performance and efficiency. Thus, we arrive at unsupervised signal
processing, which is the topic of this book.
Unsupervised signal-processing techniques are described in different
categories of books dealing with digital filters, adaptive methods, or sta-
tistical signal processing. But, until now, no unified presentation has been
available. Therefore, this book is timely and it is an important contribu-
tion to the signal-processing literature. Moreover, unifying under a common
framework the topics of blind equalization and source separation is particu-
larly appropriate and inspiring from the perspective of both education and
research.
Through the remarkable synthesis of the field it provides and the new
vision it offers, this book will stimulate progress and contribute to the advent
of more useful, efficient, and friendly intelligent systems.
Maurice Bellanger
Académie des Technologies de France
Paris, France
Preface
“At Cambridge, Russell had impressed on me not only the importance of
mathematics but the need for a physical sense...”
Norbert Wiener, I Am a Mathematician
Perhaps the most fundamental motivation for writing a book is the desire
to tell a story in which the author can express himself or herself and be under-
stood by others. This sort of motivation is also present in scientific works,
even if the story is usually narrated in formal and austere language.
The main motivation for writing this book is to tell something about the
work we carry out in the Laboratory of Signal Processing for Communica-
tions (DSPCom). This includes the research topics on which we have been
working as well as the way we work, which is closely related to the epigraph
we chose for this preface.
The work we have developed is founded on the theory of adaptive fil-
tering, having communication systems as the main focus of application. The
natural evolution of our studies and researches led us to widen our scope of
interest to themes like blind equalization, source separation, machine learn-
ing, and bio-inspired algorithms, always with the signal processing–oriented
approach that is registered in the DNA of our lab.
Hence, in short, our objective in this book is to provide a unified, sys-
tematic, and synthetic presentation of what may be called the theory of
unsupervised signal processing, with an emphasis on two topics that could be
considered as the pillars [137] of such a theory: blind equalization and source
separation. These two topics constitute the core of the book. They are based
on the foundations of statistical and adaptive signal processing, exposed in
Chapters 2 and 3, and they point to more emergent tools in signal processing,
like machine learning–based solutions and bio-inspired methods, presented
in Chapters 7 and 8.
Clearly, the objective described above represents a stimulating challenge
for, at least, two reasons: first, gathering together all the mentioned themes
was subject to the risk of dispersion or excessive verbosity, with the conse-
quent lack of interest on the part of the readers; second, the themes of interest
on their own have been specifically addressed by renowned specialists in a
number of excellent books.
In this sense, we feel obliged to mention that adaptive filter theory is
a well-established discipline that has been studied in depth in books like
[32, 100, 139, 194, 249, 262, 303], and others. Blind equalization methods and
algorithms are presented in detail in [99], and were recently surveyed in [70].
Blind source separation and related aspects like independent component
analysis have been treated in very important works such as in [76,148,156].
Numerous authors from different scientific communities have written on
topics related to machine learning and bio-inspired optimization. We must
also mention inspiring works like [12, 137, 138], which deal with both blind
deconvolution and separation problems.
In a certain sense, by placing the topics of this book under a similar con-
ceptual treatment and mathematical formalism, we have tried to reap some
of the important ideas disseminated and fertilized by the aforementioned
authors and others we necessarily omitted in our non-exhaustive citation.
Since the genesis of the book is strongly linked to the work the authors
carried out at DSPCom laboratory during more than a decade, words of
thankfulness and recognition must be addressed to those who supported and
inspired such work. First of all, we would like to thank all researchers, stu-
dents, and assistants who worked in the lab since its establishment. It seems
unreasonable to name everybody, so we decided to include all these friends
in the main dedication of the book.
The first author of this book was fortunate in having Professor Maurice
Bellanger, from CNAM/Paris, France, as a PhD advisor, a collaborator in
many works, and an inspirational figure for us in the process of writing this
book. We are grateful to many colleagues and friends for their constant sup-
port. Special thanks are due to Professor Paulo S.R. Diniz from the Federal
University of Rio de Janeiro (COPPE/UFRJ) and Professor Michel D. Yacoub
from FEEC/UNICAMP, first for their personal and professional example,
and also for attentively motivating and pushing us to finish the work. A spe-
cial mention must also be made to the memory of the late Professor Max
Gerken from the University of São Paulo (POLI/USP). We also express our
gratitude to Professor João C.M. Mota from the Federal University of Ceará
(UFC) for many years of fruitful cooperation.
We are indebted to many colleagues in our institution, the School of
Electrical and Computer Engineering at the University of Campinas (FEEC/
UNICAMP, Brazil). We are particularly thankful to Professor Renato Lopes,
Professor Murilo Loiola, Dr. Rafael Ferrari, Dr. Leonardo Tomazeli Duarte,
and Levy Boccato for directly influencing the contents of this book, and
for carefully reviewing and/or stimulating discussions about many central
themes of the book. We would also like to thank Professors Fernando Von
Zuben, Christiano Lyra, and Amauri Lopes, who collaborated with us by
means of scientific and/or academic partnerships. Our warmest regards are
reserved for Celi Pavanatti, for her constant and kind support.
Many friends and colleagues in other institutions influenced our work
in different ways. For their direct technical contribution to the book or to
our careers, and for their special attention in some key occasions, we would
like to thank Professor Francisco R. P. Cavalcanti from UFC; Professors
Maria Miranda and Cristiano Panazio from POLI/USP; Professor Leandro
de Castro from Universidade Presbiteriana Mackenzie (UPM); Professor
Aline Neves from Universidade Federal do ABC (UFABC); Professors Carlos
A. F. da Rocha, Leonardo Resende, and Rui Seara from Universidade Federal
de Santa Catarina (UFSC); Professor Jacques Szczupak from Pontifícia Uni-
versidade Católica do Rio de Janeiro (PUC); Professor Moisés Ribeiro from
Universidade Federal de Juiz de Fora (UFJF); Professor Luiz C. Coradine
from Universidade Federal de Alagoas (UFAL); Professor Jugurta Mon-
talvão from Universidade Federal de Sergipe (UFS); Dr. Cynthia Junqueira
from Comando Geral de Tecnologia Aeroespacial (IAE/CTA); Dr. Danilo
Zanatta from NTi Audio AG; Maurício Sol de Castro from Von Braun
Center; Professors Madeleine Bonnet, Hisham Abou-Kandil, Bernadette
Dorizzi, and Odile Macchi, respectively, from the University Paris-Descartes,
ENS/Cachan, IT-SudParis, and CNRS, in France; and Professor Tülay Adali
from the University of Maryland in Baltimore, Maryland. We are especially
grateful to Professor Simon Haykin from McMaster University in Canada
for having given us the unforgettable opportunity of discussing our entire
project during the ICA Conference at Paraty in 2009.
The acknowledgment list would certainly be incomplete without men-
tioning the staff of CRC Press. Our deepest gratitude must be expressed to
Nora Konopka, Amber Donley, Vedavalli Karunagaran, Richard Tressider,
and Brittany Gilbert for their competence, solicitude, and patience. So many
thanks for believing in this project and pushing it from one end to the other!
João M. T. Romano
Romis R. de F. Attux
Charles C. Cavalcante
Ricardo Suyama
Acknowledgments
“Eu não ando só, só ando em boa companhia.”
(“I do not walk alone; I only walk in good company.”)
Vinicius de Moraes, Brazilian poet
The authors would like to thank their families and friends for their constant
support during the process of writing this book.
João would like to express his gratitude to his lovely wife Maria Inês and
children Miriam, Teresa, Filipe, Débora, Maria Beatriz, Marcelo, Ana Laura,
and Daniela; to his parents, brothers, and sisters, particularly to Andy Stauf-
fer who reviewed the first draft of this project to be sent to CRC. He would
also like to thank his friends at the Cultural Center in Campinas, especially to
the kind memory of Professor Francesco Langone. He is grateful to his dear
friend Ilma Valadão for his support. Finally, he would like to acknowledge
the motivating words of his inspirational friend, Professor José L. Boldrini.
Romis would like to thank Dilmara, Clara, and Marina for their love
and for the constant happiness they bring to his life; Dina, Cecília, João
Gabriel, Jecy (in memoriam), and Flora for their support on all occasions;
Afrânio, Isabel, Beth, Toninho, Naby (in memoriam), Sônia, Ramsa, and his
whole family for their warm affection; his students and former students—
Cristina, Denis, Diogo, Everton, Filipe, George, Hassan, Hugo, Leonardo,
Tiago Dias, Tiago Tavares, and Wesley—for their kindness and patience;
Cristiano and Dr. Danilo for the many enriching conversations; the G6 (Alim,
Carol, Daniel, Inácio, Lídia, and Theo) for the countless happy moments; all
friends and colleagues from FEEC/UNICAMP; his undergraduate and grad-
uate students for the inspiration they bring to his life; and, finally, all his
friends (it would be impossible to name them here!).
Charles deeply recognizes the importance of some people in his life for
the realization of this book. He expresses his gratefulness to his wife, Erika;
to his children, Matheus and Yasmin; to his mother, Ivoneide; to his broth-
ers, Kleber, Rogério, and Marcelo; and to his friend, Josué, who has always
pushed him to achieve this result. For you all, my deepest gratitude; without
you, nothing would have been possible and worth the effort.
Ricardo expresses his warmest gratitude to Jorge, Cecília, Bruna, Maria
(in memoriam), and many others in the family, for the love and constant
support; Gislaine, for all the love and patience during the final stages of this
work; and all colleagues from DSPCom, for the enriching conversations and
friendship.
Authors
João Marcos Travassos Romano is a professor at the University of Cam-
pinas (UNICAMP), Campinas, São Paulo, Brazil. He received his BS and
MS in electrical engineering from UNICAMP in 1981 and 1984, respec-
tively. In 1987, he received his PhD from the University of Paris–XI, Orsay.
He has been an invited professor at CNAM, Paris; at University of Paris-
Descartes; and at ENS, Cachan. He is the coordinator of the DSPCom
Laboratory at UNICAMP, and his research interests include adaptive fil-
tering, unsupervised signal processing, and applications in communication
systems.
Romis Ribeiro de Faissol Attux is an assistant professor at the University
of Campinas (UNICAMP), Campinas, São Paulo, Brazil. He received his
BS, MS, and PhD in electrical engineering from UNICAMP in 1999, 2001,
and 2005, respectively. He is a researcher in the DSPCom Laboratory. His
research interests include blind signal processing, independent component
analysis (ICA), nonlinear adaptive filtering, information-theoretic learning,
neural networks, bio-inspired computing, dynamical systems, and chaos.
Charles Casimiro Cavalcante is an assistant professor at the Federal
University of Ceará (UFC), Fortaleza, Ceará, Brazil. He received his BSc and
MSc in electrical engineering from UFC in 1999 and 2001, respectively, and
his PhD from the University of Campinas, Campinas, São Paulo, Brazil,
in 2004. He is a researcher in the Wireless Telecommunications Research
Group (GTEL), where he leads research on signal processing for commu-
nications, blind source separation, wireless communications, and statistical
signal processing.
Ricardo Suyama is an assistant professor at the Federal University of ABC
(UFABC), Santo André, São Paulo, Brazil. He received his BS, MS, and PhD
in electrical engineering from the University of Campinas, Campinas, São
Paulo, Brazil, in 2001, 2003, and 2007, respectively. He is a researcher in the
DSPCom Laboratory at UNICAMP. His research interests include adaptive
filtering, source separation, and applications in communication systems.
1
Introduction
The subject of this book can be summarized by a simple scheme such as the
one depicted in Figure 1.1.
We have an original set of data of our interest that we want, for instance,
to transmit, store, extract any kind of useful information from; such data
are represented by a quantity s. However, we do not have direct access to
s but have access only to a modified version of it, which we represent by
the quantity x. So, we can state that there is a data mapping H(·) so that the
observed data x are obtained by
x = H(s) (1.1)
Then our problem consists in finding a kind of inverse mapping W to
be applied in the available data so that we could, based on a certain perfor-
mance criterion, recover suitable information about the original set of data.
We represent this step by another mapping that provides, from x, what we
could name an estimate of s, represented by
ŝ = W(x) (1.2)
The above description is deliberately general, so that a number of dif-
ferent concrete problems can fit it, with a great variety of approaches to
tackle them. Depending on the area of knowledge, the aforementioned
problem is considerably relevant in signal processing, telecommunica-
tions, identification and control, pattern recognition, Bayesian analysis, and
other fields. The scope of this book is clearly signal processing oriented, with a
focus on two major problems: channel equalization and source separation. Even
so, this orientation does not restrict the wide field of application of the theory
and tools presented here.

[FIGURE 1.1 General scheme: the sources s = (s1, . . . , sN) are mapped by H(·)
into the observations x = (x1, . . . , xM), from which W(·) yields the estimates
ŝ = (ŝ1, . . . , ŝN).]
1.1 Channel Equalization
In general terms, an equalization filter, or simply equalizer, is a device
that compensates for the distortion due to an inadequate response of a given
system. In communication systems, it is well known that any physical
transmission channel is band-limited, i.e., it necessarily imposes distortion
on the transmitted signal if such a signal exceeds the allowed passband.
Moreover, the channel presents additional impairments, since its frequency
response in the passband is often not flat, and the signal is also subject to noise.
In the most treatable case, the channel is assumed to be linear and time-invariant,
i.e., the output is obtained by a temporal convolution, and the noise is assumed
to be Gaussian and additive.
In analog communication systems, channel impairments lead to a
continuous-time distortion of the transmitted waveform. In digital commu-
nication, information is carried by a sequence of symbols instead of a
continuous waveform; such symbols constitute a given transmission signal
in accordance with a given modulation scheme. Hence, the harmful effect
of channel impairments in digital communications is a wrong symbol
decision at the receiver.
Since information is conveyed by a sequence of symbols, it is suitable to
employ a discrete-time model for the system, so that both the channel and
the equalizer may be viewed as discrete-time filters, and the involved signals
are numerical sequences. So, the problem may be represented by the scheme
in Figure 1.2, where s(n) is the transmitted signal; ν(n) is the additive noise;
x(n) is the received signal, i.e., the equalizer input; and ŝ(n) is the estimate of
the transmitted signal, provided by the equalizer through the mapping
ŝ(n) = W [x(n)] (1.3)

[FIGURE 1.2 Equalization scheme: the transmitted signal s(n) passes through
the channel, the noise ν(n) is added to form the received signal x(n), and the
equalizer produces the estimate ŝ(n).]
Since the channel is linear, we can characterize it by an impulse response
h(n) so that the mapping provided by the channel may be expressed by
H [s(n)] = s(n) ∗ h(n) (1.4)
where ∗ stands for the discrete-time convolution, and then
x(n) = s(n) ∗ h(n) + ν(n) (1.5)
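To make the discrete-time model concrete, the following minimal sketch in Python simulates Equation (1.5) for an assumed BPSK alphabet; the channel taps and the noise level are illustrative choices, not values prescribed by this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

n_symbols = 1000
s = rng.choice([-1.0, 1.0], size=n_symbols)  # i.i.d. BPSK symbols s(n)
h = np.array([1.0, 0.5, 0.2])                # illustrative channel impulse response h(n)
noise_std = 0.05                             # illustrative noise standard deviation

# x(n) = s(n) * h(n) + v(n): temporal convolution plus additive Gaussian noise
x = np.convolve(s, h)[:n_symbols] + noise_std * rng.standard_normal(n_symbols)
```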
Clearly, the desired situation corresponds to a correct recovery of the
original sequence s(n), except for a delay and a constant factor, which can
include a phase rotation if we deal with the most general case of complex sym-
bols. This ideal situation is known as the zero-forcing (ZF) condition. As better
explained further in the book, the name comes from the fact that, under such
conditions, all terms associated with intersymbol interference (ISI) are “forced
to zero.” So, if the global system formed by the channel and the equalizer
establishes a global mapping G(·), the ZF condition leads to
G [s(n)] = ρs(n − n0) (1.6)
where
n0 is a delay
ρ is the constant factor
Once ρ and n0 are known or estimated, ideal operation under the ZF
condition leads to the correct retrieval of all transmitted symbols. However,
as we could expect, such a condition is not attainable in practice, due to the
nonideal character of W [·] and to the effect of noise.
Hence, a more suitable approach is to search for the equalizer W [·] that
provides a minimal number of errors in the process of symbol recovery.
By considering the stochastic nature of the transmitted information and the
noise, the most natural mathematical procedure consists in dealing with the
notion of probability of error.
In this sense, the first effective solution is credited to Forney [111], who
considered the Viterbi algorithm for symbol recovery in the presence of ISI. The
Viterbi algorithm itself was conceived for decoding convolutional codes
in digital communications, in accordance with a maximum-likelihood (ML)
criterion [300].
One year after Forney’s paper, the BCJR algorithm, named after its inven-
tors [24], was proposed for decoding, but in accordance with a maximum a
posteriori (MAP) criterion. In this case, recovery was carried out on a
symbol-by-symbol basis instead of recovering the best sequence, as in the
Viterbi approach.
When the transmitted symbols are equiprobable, the ML and MAP cri-
teria lead to the same result. So, the Viterbi algorithm minimizes the
probability of detecting a whole sequence erroneously, while the BCJR algo-
rithm minimizes the probability of error for each individual symbol. The
adaptive (supervised and unsupervised) techniques considered in this book
are typically based on a symbol-by-symbol recovery.
We will refer to the mapping W [·] that provides the minimal probability
of error, considering symbol-by-symbol recovery, as the Bayesian equalizer.
It is important to think of the Bayesian equalizer, from now on, as our refer-
ence of optimality. However, due to its nonlinear character, its mathematical
derivation will be postponed to Chapter 7.
Optimal equalizers derived from ML and/or MAP criteria are unfortu-
nately not so straightforward to implement in practice [112], especially in
realistic scenarios that involve real-time operation at high bit rates, nonsta-
tionary environments, etc. Taking into account the inherent difficulties of
a practical communication system, the search for suitable solutions of the
equalization problem includes the following steps:
• To implement the mapping W by means of a linear finite impulse
response (FIR) filter followed by a nonlinear symbol-recovering (decision)
device.
• To choose a more feasible, although suboptimum, criterion instead
of that of probability of error.
• To derive operative (adaptive, if desirable) procedures to obtain the
equalizer in accordance with the chosen criterion.
• To use (as much as possible) prior knowledge about the transmitted
signal and/or the channel in the aforementioned procedures.
Taking into account the above steps, the mapping W [x(n)] will then be
accomplished by
y(n) = x(n) ∗ w(n) (1.7)
and
ŝ(n) = Γ [y(n)] (1.8)
where
w(n) is the equalizer impulse response
y(n) is the equalizer output
Γ(·) stands for the decision device
In addition, we can now define the combined channel + equalizer
response as
g(n) = h(n) ∗ w(n) (1.9)
so that the ZF condition can be simply established if we define a vector g, the
elements of which are those of the sequence g(n). The ZF condition holds if
and only if
g = [0, . . . , 0, ρ, 0, . . . , 0]T (1.10)
where the position of ρ in g is associated with the equalization delay.
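Continuing the sketch above (and reusing its variables s, h, x, and n_symbols), the fragment below forms the combined response of Equation (1.9) for a hypothetical three-tap equalizer and applies Equations (1.7) and (1.8). For a generic FIR channel, an FIR equalizer can only approximate the ZF vector of Equation (1.10), as the printed g makes visible.

```python
w = np.array([1.0, -0.5, 0.05])    # hypothetical FIR equalizer taps w(n)
g = np.convolve(h, w)              # combined response g(n) = h(n) * w(n), Eq. (1.9)
print(np.round(g, 3))              # approx. [1, 0, 0, -0.075, 0.01]: near-ZF, delay 0

y = np.convolve(x, w)[:n_symbols]  # equalizer output y(n) = x(n) * w(n), Eq. (1.7)
s_hat = np.sign(y)                 # decision device Γ(·) for BPSK, Eq. (1.8)

delay = int(np.argmax(np.abs(g)))  # equalization delay n0 read off from g
ser = np.mean(s_hat[delay:] != s[:n_symbols - delay])
print(f"symbol error rate: {ser:.4f}")
```

The residual ISI appears in the small trailing entries of g; forcing them exactly to zero would require an equalizer with an infinite impulse response, which is one reason why approximate, criterion-based designs are pursued in the sequel.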
As far as the criterion is concerned, the discussion is, in fact, founded on
the field of estimation theory. From there, we take two useful possibilities, the
minimum mean-squared error (MSE) and the least-squares (LS) criteria, as our
main practical tools. For the operative procedure, we have two distinct possibilities:
taking into account the whole transmitted sequence to obtain an optimized
equalizer for this set of data (data acquisition first, equalizer optimization
afterward) or adjusting the equalizer as the data become available at the
receiver (joint acquisition and optimization). In the second case, we talk about
adaptive equalization. Finally, the use of a priori information is closely
related to the possibility of putting into practice a mechanism of supervision or
training over the system. If such a mechanism can be periodically implemented,
we talk about supervised equalization, while the absence of supervision
leads to unsupervised or blind techniques.
To a certain extent, this book discusses a vast range of possible
approaches to pass through these steps, with a clear emphasis on
adaptive and unsupervised methods.
We can easily observe that the problem of channel equalization, as
depicted in Figure 1.2, fits the general problem of Figure 1.1, for the particu-
lar case of M = N = 1. Another particularization is related to the hypothesis
on the transmitted signal: as a rule, it is considered to be a sequence of inde-
pendent and identically distributed (i.i.d.) random variables, which belong
to a finite alphabet of symbols. This last aspect clearly imposes the use of a
symbol-recovering device. Regarded in this light, the problem is referred to
as SISO channel equalization, since both the channel and the equalizer are
single-input single-output filters.
Nevertheless, we can also consider a communication channel with mul-
tiple inputs and/or multiple outputs. A typical and practical case may be a
wireless link with multiple antennas at the transmitter and/or at the receiver.
In this book, we will especially consider the following cases, to be discussed
in Chapter 5:
• A single-input multiple-output (SIMO) channel with a multiple-
input single-output (MISO) equalizer, which corresponds to N = 1
and M > 1 in Figure 1.1.
• A multiple-input multiple-output (MIMO) channel with a multiple-
input multiple-output (MIMO) equalizer, which corresponds to
N > 1 and M > 1 in Figure 1.1.
1.2 Source Separation
The research work on SISO blind equalization has been particularly intense
during the 1980s. At this time, another challenging problem in signal pro-
cessing was proposed, that of blind source separation (BSS). In general terms,
such a problem can be simply explained by the classical example known as the
cocktail party phenomenon, in which a number of speakers talk at the
same time in the same noisy environment. In order to focus attention on
a specific speaker s1, a given receiver must retrieve the corresponding signal
from a mixture of all signals {s1, . . . , sN}, where N is the number of speakers.
Despite the human ability to perform this task, a technical solution for
providing blind separation was unknown until the work of Hérault et al. in
1985 [144].
As stated above, the BSS problem also fits the scheme of Figure 1.1.
The possibility of obtaining proper solutions will depend on the hypothe-
ses we consider for the mapping H(·) and for the set of original signals, or
sources, s. The most tractable case emerges from the following assumptions:
• The mapping H(·) stands for a linear and memoryless system, with
M = N.
• The sources {s1, . . . , sN} are assumed to be mutually independent
signals.
• There is, at most, one Gaussian source.
The main techniques for solving BSS under these assumptions come from
the principle of independent component analysis (ICA) [74]. Such techniques
are based on searching for a separating system W(·), the parameters of which
are obtained in accordance with a given criterion that imposes statistical inde-
pendence between the set of outputs ŝ. As pointed out in [137], ICA may
be viewed as an extension of the well-known principal component analysis
(PCA), which deals only with the second-order statistics of the involved
signals.
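As a simple numerical illustration of why second-order processing alone cannot separate the sources, the sketch below (assuming two independent, uniformly distributed sources and an arbitrary 2 × 2 mixing matrix, both hypothetical choices) whitens a linear instantaneous mixture by PCA: the whitened outputs are uncorrelated, yet the resulting global matrix is not a scaled permutation, so each output is still a mixture of both sources.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: N = M = 2 mutually independent, uniform (non-Gaussian) sources
s = rng.uniform(-1.0, 1.0, size=(2, 10000))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])          # arbitrary invertible mixing matrix H(·)
x = A @ s                           # linear, instantaneous, noiseless mixture

# PCA-based whitening: diagonalize the covariance of x and normalize the variances
eigvals, E = np.linalg.eigh(np.cov(x))
W = np.diag(eigvals ** -0.5) @ E.T  # whitening matrix
z = W @ x

print(np.round(np.cov(z), 3))       # ~ identity: the whitened outputs are uncorrelated
print(np.round(W @ A, 3))           # global matrix: not a scaled permutation, so the
                                    # sources remain mixed in z
```

ICA removes this residual rotation precisely by going beyond second-order statistics, resorting to higher-order statistics or other measures of statistical independence.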
Although blind equalization and source separation problems have orig-
inated independently and in somewhat distinct scientific communities, we
can clearly observe a certain “duality” between them:
• In SISO channels, the output is a linear combination (temporal con-
volution) of the elements of the transmitted signal with additive
Gaussian noise. In BSS, the set of outputs comes from the linear
mixture of signals, among which one can be Gaussian.
• In SISO equalization, we try to recover a sequence of indepen-
dent symbols that correspond to the transmitted signal. In BSS, we
search for a set of independent variables that correspond to the original
sources.
• In both cases, dealing with second-order statistics is not sufficient:
the output of a SISO channel may be whitened, for instance, by a
prediction-error filter, while the outputs of the mixing system may
be decorrelated by a PCA procedure. However, as we will stress
later in the book, neither of these procedures can guarantee a correct
retrieval.
The above considerations will become clearer, and will be more rigor-
ously revisited, in the course of the following chapters. Nevertheless, it is worth
remarking on these points in this introduction to illustrate the interest in
bringing unsupervised equalization and source separation to a common
theoretical framework.
On the other hand, BSS can become a more challenging problem as the
aforementioned assumptions are discarded. The case of a mixing system
with memory corresponds to the more general problem of convolutive mix-
tures. Such a problem is rather similar to that of MIMO equalization. As a
rule in this book, we consider convolutive BSS as a more general problem
since, in MIMO channel equalization, we usually suppose that the trans-
mitted signals have the same statistical distributions and belong to a finite
alphabet. This is not at all the case in other typical applications of BSS.
If the hypothesis of linear mixing is discarded, the solution of BSS prob-
lems will require special care, particularly in applying ICA. Such a solution
may involve the use of nonlinear devices in the separating systems, as
done in the so-called post-nonlinear model. It is worth mentioning that
nonlinear channels can also be considered in communication and different
approaches have been proposed for nonlinear equalization, including the
widely known decision feedback equalizer (DFE). Overall, our problem will
certainly become more intricate when nonlinear mappings take place in H(·)
and/or in W(·), as we will discuss in more detail in Chapter 6.
Furthermore, other scenarios in BSS deserve the attention of researchers,
such as those of underdetermined mixtures, i.e., scenarios in which M < N in
Figure 1.1; correlated sources; sparse sources; etc.
1.3 Organization and Contents
We have organized the book as follows:
Chapter 2 reviews the fundamental concepts concerning the characteri-
zation of signals and systems. The purpose of this chapter is to emphasize
some notions and tools that are necessary to the sequence of the book. For
the sake of clarity, we first deal with deterministic concepts and then we
introduce statistical characterization tools. Although many readers may be
familiar with these subjects, we provide a synthetic presentation of the fol-
lowing topics: signals and systems definitions and main properties; basic
concepts of discrete-time signal processing, including the sampling theorem;
fundamentals of probability theory, including topics like cumulants, which
are particularly useful in the context of unsupervised processing; a review
on stochastic processes with a specific topic on discrete-time random signals;
and, finally, a section on estimation theory.
In order to establish the foundations of unsupervised signal processing,
we present in Chapter 3 the theory of optimal and adaptive filtering in the
classic scenario of linear and supervised processing. As already commented,
many books are devoted to this rich subject and present it in a more exhaus-
tive fashion. We opt for a brief and, to a certain extent, personal presentation
that facilitates the introduction of the central themes of the book. First,
we discuss three emblematic problems in linear filter theory: identifica-
tion, deconvolution, and prediction. From there, the specific case of channel
equalization is introduced. Then, as usually done in the literature, we present
the Wiener filtering theory as the typical solution for supervised processing
and a paradigm for adaptive procedures. The sections on supervised adap-
tive filtering discuss the celebrated LMS and RLS algorithms, and also the
use of structures alternative to the linear FIR filter. Moreover, in Chapter 3
we introduce the notion of optimal and adaptive filtering without a refer-
ence signal, as a first step to consider blind techniques. In this context, we
discuss the problem of constrained filtering and revisit that of prediction,
indicating some relationships between linear prediction and unsupervised
equalization.
After establishing the necessary foundations in Chapters 2 and 3, the sub-
ject of unsupervised equalization itself is studied in Chapter 4, which deals
with single-input single-output (SISO) channels, and in Chapter 5, in which
the multichannel case is considered.
Chapter 4 starts with a general discussion on the problem of unsu-
pervised deconvolution, of which blind equalization may be viewed as a
particular case. After introducing the specific problem of equalization, we
state the two fundamental theorems: Benveniste–Goursat–Ruget and Shalvi–
Weinstein. Then we discuss the main adaptive techniques: the so-called
Bussgang algorithms that comprise different LMS-based blind techniques,
the Shalvi–Weinstein algorithm, and the super-exponential. Among Buss-
gang techniques, special attention is given to the decision-directed (DD)
and Godard/CMA approaches, due to their practical interest in communica-
tions schemes. We discuss important aspects about the equilibrium solutions
and convergence of these methods, having the Wiener MSE surface as a
benchmark for performance evaluation. Finally, based on a more recent
literature, we present some results concerning the relationships between
constant-modulus, Shalvi–Weinstein, and Wiener criteria.
The problem of blind equalization is extended to the context of systems
with multiple inputs and/or outputs in Chapter 5. First, we state some
theoretical properties concerning these systems. Then we discuss single-
input multiple-output (SIMO) channels, which may be engendered, for
instance, by two practical situations: temporal oversampling of the received
signal or the use of multiple antennas at the receiver. In the context of SIMO
equalization, we discuss equalization conditions in the light of Bezout’s
identity and the second-order methods for blind equalization. Afterward,
we turn our attention to the most general scenario, that of multiple-input
multiple-output (MIMO) channels. In such a case, special attention is given to
multiuser systems, the importance of which is notorious in modern wireless
communications.
Chapter 6 deals with blind source separation (BSS), the other central sub-
ject for the objectives of this book. We start this chapter by stating the main
models to be used and the standard case to be considered first, that of a
linear, instantaneous, and noiseless mixture. Then, we introduce a tool of
major interest in BSS: the independent component analysis (ICA). The first
part of Chapter 6 is devoted to the main concepts, criteria, and algorithms
to perform ICA. Afterward, we deal with alternative techniques that exploit
prior information, in particular the nonnegative and the sparse compo-
nent decompositions. Then, we leave the aforementioned standard case to
consider two relevant problems in BSS: those of convolutive and nonlinear
mixtures. Both of them can be viewed as open problems with significant
research results in the recent literature. So we focus our brief presentation on
some representative methods with emphasis on the so-called post-nonlinear
model.
Chapters 4 through 6 establish the fundamental core of the book, as we
try to bring together blind equalization and source separation under the
same conceptual and formal framework. The two final chapters consider
more emergent techniques that can be applied in the solution of those two
problems.
The synergy between the disciplines of machine learning and signal pro-
cessing has significantly increased over the last decades, as attested by the
many conferences and special journal issues devoted to the subject. From
the standpoint of this book, it is quite relevant that
a nonnegligible part of this literature is related to unsupervised problems.
Chapter 7 presents some thought-provoking connections between nonlinear filter-
ing, machine learning techniques, and unsupervised processing. We start
by considering a classical nonlinear solution for adaptive equalization—
the DFE structure—since this remarkably efficient approach can be equally
used in supervised and blind contexts. Then we turn our attention to more
sophisticated structures that present properties related to the idea of uni-
versal approximation, like Volterra filters and artificial neural networks.
For that, we first revisit equalization within the framework of a
classification problem and introduce an important benchmark in digital
transmission: the Bayesian equalizer, which performs a classification task
by recovering the transmitted symbols in accordance with the criterion of
minimum probability of error. Finally, we discuss two classical artificial neu-
ral networks: multilayer perceptron (MLP) and radial basis function (RBF)
network. The training process of these networks is illustrated with the aid
of classical results, like the backpropagation algorithm and the k-means
algorithm.
The methods and techniques discussed throughout this book stem,
after all, from a problem of optimization. The solutions are obtained, as
a rule, by the minimization or maximization of a given criterion or cost-
function. The bio-inspired optimization methods discussed in Chapter 8,
however, are part of a different paradigm, as they are founded on a number
of complex processes found in nature. These methods are generally charac-
terized by a significant global search potential and do not require significant
a priori information about the problem to be solved, which encourages appli-
cation, for instance, in nonlinear and/or unsupervised contexts. Chapter 8
closes the book by considering this family of techniques, which are finding
increasing applications in signal processing. Given the vastness of the sub-
ject, we limit our discussion to three potentially suitable approaches, taking
into account our domain of interest: genetic algorithms, artificial immune
systems, and particle swarm optimization methods.
The book presents enough material for a graduate course, since blind
techniques are increasingly present in graduate programs, and can also be
used as a complementary reference for undergraduate students. According
to the audience, Chapter 2 can be skipped, and even some topics of Chap-
ter 3, if the students have the possibility of attending a specific course on
adaptive filtering theory. Furthermore, the content of Chapters 7 and 8 can
be adapted to the audience and also serves as a complementary material
for courses on machine learning and/or optimization. Overall, it is worth
emphasizing that a course on unsupervised signal processing theory, com-
prising blind equalization and source separation, must not be organized in a
rigid way, but should rather follow the interests of different institutions.
Finally, it is worth emphasizing that adaptive filtering, unsupervised
equalization, source separation, and related themes present a number of
recent results and open problems. Necessarily, and to preserve the main
focus of this book, some of them were omitted or not dealt with in depth.
2
Statistical Characterization of Signals
and Systems
The statistical characterization of signals and systems provides an impor-
tant framework of concepts and mathematical tools that are fundamental to
the modern theory of filtering and signal processing. In signal theory, we
denote by statistical signal processing the field of study that treats signals as
stochastic processes. The word stochastic is etymologically associated with
the notion of randomness. Even though such notion gives rise to different
interpretations, in our field of study, randomness is related to the concept of
uncertainty. Uncertainty, in its turn, is present in the essence of information
signals in their different forms as well as in the several types of disturbances
that can affect a system.
The subject of statistical characterization of signals and systems is quite
extensive and has been built over more than two centuries, as a result
of classical works on statistical inference, linear filtering, and information
theory. Nevertheless, the purpose of this chapter is rather objective and,
in a way, unpretentious: to present the basic foundations and to empha-
size some concepts and tools that are necessary to the understanding of the
next chapters. With this aim in mind we have chosen five main topics to
discuss:
• Section 2.1 is devoted to the basic theory of signals and systems. For
the sake of systemizing such theory, we first consider signals that do
not have randomness in their nature.
• Section 2.2 specifically considers discrete-time signal processing, since
most methods to be presented in the book tend to be implemented
using this approach.
• Section 2.3 discusses the foundations of the probability theory in order
to introduce the suitable tools to deal with random signals. The main
definitions and properties are exposed.
• Section 2.4 then deals with the notion of stochastic processes together
with some useful properties. An appendix on the correlation matrix
properties complements the subject.
• Finally, Section 2.5 discusses the main concepts of estimation theory,
a major area of statistical signal processing with strong connections
with that of optimal filtering, which is the subject of the following
chapter.
Historical Notes
The mathematical foundations of the theory of signals and systems have
been established by eminent mathematicians of the seventeenth and eigh-
teenth centuries. This coincides, in a way, with the advent of calculus,
since the representation of physical phenomena in terms of functions of
continuous variables and differential equations gave rise to an appropriate
description of the behavior of continuous signals and systems. Furthermore,
as mentioned by Alan Oppenheim and Ronald Schafer [219], the classical
works on numerical analysis developed by names like Euler, Bernoulli, and
Lagrange sowed the seeds of discrete-time signal processing.
The bridge between continuous- and discrete-time signal processing was
theoretically established by the sampling theorem, introduced in the works
of Harry Nyquist in 1928 and D. Gabor in 1946, and definitively proved by
Claude Shannon in 1949. Notwithstanding this central result, signal process-
ing was typically carried out by analog systems and in a continuous-time
framework, basically due to performance limitations of the existing digital
machines. Simultaneously with the development of computers, a landmark
result appeared: the proposition of the fast Fourier transform algorithm by
Cooley and Tukey in 1965. Indeed, this result has been considered to be
one of the most important in the history of discrete-time signal process-
ing, since it opened a perspective of practical implementation of many other
algorithms in digital hardware.
Two other branches of mathematics are fundamental in the modern
theory of signals and systems: functional analysis and probability theory.
Functional analysis is concerned with the study of vector spaces and oper-
ators acting upon them, which are crucial for different methods of signal
analysis and representation. From it is derived the concept of Hilbert space,
the denomination of which is due to John von Neumann in 1929, as a
recognition of the work of the great mathematician David Hilbert. This is
a fundamental concept to describe signals and systems in a transformed
domain, including the Fourier transform, a major tool in signal process-
ing, the principles of which had been introduced one century before by
Jean-Baptiste Joseph Fourier.
Probability theory allows extending the theory of signals and systems
to a scenario where randomness or incertitude is present. The creation
of a mathematical theory of probability is attributed to two great French
mathematicians, Blaise Pascal and Pierre de Fermat, in 1654. Over the three
centuries that followed, important works were written by names like Jakob Bernoulli,
Abraham de Moivre, Thomas Bayes, Carl Friedrich Gauss, and many others.
In 1812, Pierre de Laplace introduced a host of new ideas and mathematical
techniques in his book Théorie Analytique des Probabilités [175].
Since Laplace, many authors have contributed to developing a mathe-
matical probability theory precise enough for use in mathematics as well
as applicable to a wide range of practical problems. The
Russian mathematician Andrei Nikolaevich Kolmogorov established a solid
landmark in 1933, by proposing the axiomatic approach that forms the
basis of modern probability theory [169]. A few years later, in his clas-
sical paper [272], Shannon made use of probability in the definition of
entropy, in order to “play a central role in information theory as measures of
information, choice and uncertainty.” This fundamental link between uncer-
tainty and information raised many possibilities of using statistical tools in
the characterization of signals and systems within all fields of knowledge
concerned with information processing.
2.1 Signals and Systems
Information exchange has been a vital process since the dawn of mankind. If
we consider for a moment our routine, we will probably be able to point out
several sources of information that belong to our everyday life. Nevertheless,
“information in itself” cannot be transmitted. A message must find its proper
herald; this is the idea of signal.
We shall define a signal as a function that bears information, while a
system shall be understood as a device that produces one or more output
signals from one or more input signals. As mentioned in the introduction
of this chapter, the proper way to address signals and systems in the mod-
ern theory of filtering and signal processing is by means of their statistical
characterization, due to the intrinsic relationships between information and
randomness. Nevertheless, for the sake of systemizing such theory, we first
consider signals that do not have incertitude in their nature.
2.1.1 Signals
In simple terms, a signal can be defined as an information-bearing function.
The more we probe into the structure of a certain signal, the more informa-
tion we are able to extract. A cardiologist can find out a lot about your health
by simply glancing at an ECG. Conversely, someone without an adequate
training would hardly avoid a commonplace appreciation of the same data
set, which leads us to a conclusion: signals have but a small practical value
without the efficient means to interpret their content. From this it is easy
to understand why so much attention has been paid to the field of signal
analysis.
Mathematically, a function is a mapping that associates elements of two
sets—the domain and the codomain. The domain of a signal is usually,
although not necessarily, related to the idea of time flow. In signal processing,
there are countless examples of temporal signals: the electrical stimulus pro-
duced by a microphone, the voltage in a capacitor, the daily peak temperature
profile of a given city, etc. In a number of cases, signals can be, for instance,
functions of space: the gray intensity level of a monochrome image, the set of
measures provided by an array of sensors, etc. Also, spatiotemporal signals
may be of great interest, the most typical example being a video signal, which
is a function of a two-dimensional domain: space and time.
In this book, we deal much more frequently with temporal signals, but
some cases of space-time processing, like the use of antenna array in particu-
lar channels, are also relevant to the present work. Anyway, it is interesting
to expose some important properties concerning the nature of a signal as well
as ways of classifying and characterizing them.
2.1.1.1 Continuous- and Discrete-Time Signals
Insofar as the domain of temporal signals is concerned, there are two possibilities
of particular relevance: to establish a continuum or an integer set of time
values. In the former case, the chosen domain engenders a continuous-time
signal, which is mathematically described by a function of a continuous
variable, denoted by x(t). Conversely, if time-dependence is expressed by
means of a set of integer values, it gives rise to a discrete-time signal, which
is mathematically described by a numerical sequence, denoted by x(n). For
instance, a signal received by a microphone or an antenna can be assumed to
be a continuous-time signal, while a daily stock quote is a discrete-time signal.
2.1.1.2 Analog and Digital Signals
A signal whose amplitude can assume any value in a continuous range is an
analog signal, which means that it can assume an infinite number of values.
On the other hand, if the signal amplitude assumes only a finite number of
values, it is a digital signal.
Figure 2.1 illustrates examples of different types of signals. It should be
clear that a continuous-time signal is not necessarily an analog signal, just
as a discrete-time signal may not be digital. The terms continuous-time and
discrete-time refer to the nature of the signals along the time, while the terms
analog and digital qualify the nature of the signal amplitude. This is shown
in Figure 2.1.
2.1.1.3 Periodic and Aperiodic/Causal and Noncausal Signals
A signal x(t) is said to be periodic if, for some positive constant T,
x(t) = x(t + T) (2.1)
for all t. The smallest value of T for which (2.1) holds is the period of the
signal. Signals that do not exhibit periodicity are termed aperiodic signals.
From (2.1), we can notice that a periodic signal should not change if shifted
in time by a period T. Also, it must start at t = −∞, otherwise, it would not be
FIGURE 2.1
Examples of analog/digital and continuous-time/discrete-time signals: (a) analog continuous-time signal, (b) analog discrete-time signal, (c) digital continuous-time signal, (d) digital discrete-time signal.
possible to respect the condition expressed in (2.1) for all t. Signals that start
at t = −∞ and extend until t = ∞ are denoted infinite duration signals.
In addition to these definitions, it is interesting to establish the difference
between a causal and a noncausal signal. A signal is causal if
x(t) = 0, t < 0 (2.2)
and said to be noncausal if the signal starts before t = 0.
It is worth mentioning that all definitions also apply for discrete-time
signals.
2.1.1.4 Energy Signals and Power Signals
An energy signal is a signal that has finite energy, i.e.,
∫_{−∞}^{∞} |x(t)|² dt < ∞ (2.3)
A signal with finite nonzero power, i.e.,
lim_{α→∞} (1/α) ∫_{−α/2}^{α/2} |x(t)|² dt < ∞ (2.4)
is called a power signal.
All practical signals present finite energy and, thus, are energy signals.
Another interesting fact is that a power signal should necessarily be an infi-
nite duration signal; otherwise, its average power would tend to zero as the
averaging interval becomes long enough.
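As a numerical illustration of these two definitions (a minimal sketch in Python/NumPy; the signals, time span, and step size are arbitrary choices, not taken from the text), the integrals in (2.3) and (2.4) can be approximated by discretized sums:

import numpy as np

# Discretized time axis (arbitrary illustrative values)
dt = 1e-3
t = np.arange(-10.0, 10.0, dt)

# A finite-duration pulse is an energy signal: the sum in (2.3) is finite
pulse = np.where(np.abs(t) < 1.0, 1.0, 0.0)
energy = np.sum(np.abs(pulse) ** 2) * dt
print("pulse energy ~", energy)           # close to 2.0

# A sinusoid is a power signal: finite average power, cf. Equation (2.4)
x = np.cos(2 * np.pi * 5.0 * t)
power = np.sum(np.abs(x) ** 2) * dt / (t[-1] - t[0])
print("sinusoid average power ~", power)  # close to 0.5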
2.1.1.5 Deterministic and Random Signals
A deterministic signal is a signal whose physical description, either in a
mathematical or in a graphical form, is completely known. Conversely, a
signal whose values cannot be precisely predicted but are known only in
terms of a probabilistic description is a random signal.
2.1.2 Transforms
In daily life, our senses are exposed to all kinds of information when we
move, or simply as long as time flows. Since time (and in a way this is also
true for space) constitutes the most natural domain in which we observe
information, it is also a natural standpoint to represent and analyze a sig-
nal, but it is not the only possibility. Sometimes, a great deal of insight
on the characteristics of an information-bearing function can be gained by
translating it into another support input space.
We can understand a transform as a mapping that establishes a one-to-
one relationship between the representations of a given signal in two distinct
domains. As a rule, a transform is employed when the present domain
wherein a signal is represented is not the most favorable to the study of
one or more of its relevant aspects. The domain of representation is strongly
related with the mathematical concept of complete orthogonal basis. For
instance, if we define the unit impulse function δ(t) (also known as the Dirac
delta function) as
δ(t) = 0, t ≠ 0 (2.5)
∫_{−∞}^{∞} δ(τ) dτ = 1 (2.6)
it is interesting to observe that a continuous-time signal can be written as
x(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ) dτ (2.7)
A similar representation can be obtained for discrete-time signals:
x(n) = Σ_{k=−∞}^{∞} x(k)δ(n − k) (2.8)
where δ(n) denotes the discrete-time unit impulse function (also known as
the Kronecker delta function), and is defined as
δ(n) = { 1, for n = 0; 0, otherwise (2.9)
It follows, and this is even more evident in the discrete case, that the signal
of interest is a linear combination of shifted unit impulse functions. In this
sense, these shifted functions can be regarded as a basis for representing the
signal.
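This decomposition can be verified numerically. The short sketch below (illustrative only; the sequence x and its support are arbitrary) rebuilds a discrete-time signal from shifted Kronecker deltas, as in (2.8) and (2.9):

import numpy as np

def delta(n):
    # Kronecker delta of Equation (2.9), evaluated elementwise
    return np.where(n == 0, 1.0, 0.0)

n = np.arange(-5, 6)    # support of the example sequence
x = np.array([0, 0, 1, 3, -2, 4, 0, 2, 0, 0, 0], dtype=float)

# Linear combination of shifted unit impulses, Equation (2.8)
x_rebuilt = sum(x[k] * delta(n - n[k]) for k in range(len(n)))
assert np.allclose(x, x_rebuilt)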
A change of representation domain corresponds to a change in the basis
over which the signal is decomposed. As mentioned before, this can be very
important in order to study some characteristics and properties of the signal
that are not directly observed in the form, for instance, of a temporal function
or sequence. In the classical theory of signal and systems, representation by
the complete orthogonal basis composed by complex exponentials deserves
special attention. For purely imaginary exponents, the complex exponential
functions and sequences are directly associated with the physical concept of
frequency, and such representation gives rise to the Fourier transform. For
general complex exponents, the corresponding decomposition gives rise to
the Laplace transform, in the continuous case, and to the z-transform, in the
discrete case, which are both crucial for the study of linear systems. These
four important cases are depicted in the sequel.
2.1.2.1 The Fourier Transform of Continuous-Time Signals
As mentioned earlier, the Fourier transform corresponds to the projection
of a signal x(t) onto a complete orthogonal basis composed of complex
exponentials exp(j 2πft). It is defined as
X(f) = ∫_{−∞}^{∞} x(t) exp(−j 2πft) dt (2.10)
while the inverse Fourier transform is given by
x(t) = ∫_{−∞}^{∞} X(f) exp(j 2πft) df (2.11)
2.1.2.2 The Fourier Transform of Discrete-Time Signals
The discrete-time counterpart of the Fourier transform corresponds to the
projection of the sequence x(n) onto an orthogonal basis composed of
complex exponentials exp(j 2πfn), and is defined as
X(exp(j 2πf)) = Σ_{n=−∞}^{∞} x(n) exp(−j 2πfn) (2.12)
while the inverse Fourier transform is given by
x(n) = ∫_{−1/2}^{1/2} X(exp(j 2πf)) exp(j 2πfn) df (2.13)
2.1.2.3 The Laplace Transform
The basic idea behind the Laplace transform is to build an alternative rep-
resentation X(s) of a continuous-time signal x(t), from a basis of complex
exponentials:
X(s) = ∫_{−∞}^{∞} x(t) exp(−st) dt (2.14)
where s = σ + j 2πf. The set of values of s for which the integral shown
in (2.14) converges is called region of convergence (ROC) of the Laplace
transform.
The inverse Laplace transform is then given by
x(t) = (1/(2πj)) ∫_C X(s) exp(st) ds (2.15)
where C is a suitable contour path.
2.1.2.4 The z-Transform
The z-transform can be understood as the equivalent of the Laplace trans-
form in the context of discrete-time signals. The transform X(z) of a sequence
x(n) is defined by
X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n} (2.16)
where z = exp(σ+j 2πf). The ROC of the z-transform is defined as the values
of z for which the summation presented in (2.16) converges.
The inverse z-transform is defined as
x(n) = (1/(2πj)) ∮_C X(z) z^{n−1} dz (2.17)
where the integral must be evaluated in a path C that encircles all of the poles
of X(z).
It is worth mentioning that Equations 2.14 and 2.16 correspond to the
so-called bilateral Laplace and z-transforms, which are the most generic
representations. For causal signals, it is useful to consider the unilateral
transforms, in which the integral and the discrete sum start from zero instead
of −∞.
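As a concrete illustration of the discrete-time definitions above (a sketch, assuming a finite-length sequence so that the sum in (2.12) has finitely many terms; the sequence and the frequency grid below are arbitrary choices):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])    # finite-length example sequence
n = np.arange(len(x))
f = np.linspace(-0.5, 0.5, 1001)           # normalized frequency grid

# Direct evaluation of Equation (2.12) on the grid
X = np.array([np.sum(x * np.exp(-1j * 2 * np.pi * fk * n)) for fk in f])

print(abs(X[500]))   # at f = 0 the transform equals the sum of the samples: 9.0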
2.1.3 Systems
Having in mind the definition of signal presented in the beginning of
Section 2.1.1, we may alternatively define a system as an information-
processing device. In Figure 2.2, we present a schematic view of a system.
A system can be fully characterized by its input–output relation, i.e., by
the mathematical expression that relates its outputs to its inputs. Assuming
that the operator S[·] represents the mapping performed by the system, we
may write
y = S[x] (2.18)
where x and y are the input and output vectors, respectively. It is interest-
ing to analyze some important classes of systems and the properties that
characterize them.
FIGURE 2.2
Schematic view of a system, with inputs x1, x2, ..., xN and outputs y1, y2, ..., yM.
2.1.3.1 SISO/SIMO/MISO/MIMO Systems
This classification is based on the number of input and output signals of a
system:
• SISO (single-input single-output) systems have a single input signal
and a single output signal. Therefore, x and y become scalars.
• SIMO (single-input multiple-outputs) systems have a single input
signal and more than one output signal.
• MISO (multiple-input single-output) systems have multiple input
signals and a single output signal.
• Finally, MIMO (multiple-input multiple-output) systems have mul-
tiple input and output signals, and form the most general of the four
classes.
Throughout the book, the reader will have the opportunity of considering
the differences between these classes of systems, the importance of which is
patent in modern signal processing techniques.
2.1.3.2 Causal Systems
If the system output depends exclusively on present and past values of the
input, the system is said to be causal. In other words, causality means that
the output of a system at a given instant is not influenced by future values of
the input.
When we consider real-time applications, causality will certainly hold.
However, when we manipulate acquired data, noncausal systems are accept-
able, and may even be desirable in some cases.
2.1.3.3 Invertible Systems
When it is possible to build a mapping that recovers the input signals of a
given system from its output, we say that such a system is invertible. This
means that it is possible to obtain x from y using an inverse system cas-
caded with the original one. This notion will be revisited when we analyze
the problems of equalization and source separation.
2.1.3.4 Stable Systems
Stability is also a major concern in system analysis. We shall assume that
a system is stable if the response to a bounded input is also bounded. In
simple words, if the input signal does not diverge to infinity, the output
will not diverge as well. Stability is a common feature in real-world systems,
which we suppose to be restricted by conservation laws, but the same may
not occur in some mathematical models and algorithms.
2.1.3.5 Linear Systems
In system theory, it is often convenient to introduce some classes of possi-
ble operators. A very relevant distinction is established between linear and
nonlinear systems. Linear systems are those whose defining S[·] operator
obeys the following superposition principle:
S[k1x1 + k2x2] = k1S[x1] + k2S[x2] (2.19)
The idea of superposition can be explained in simple terms: the response
to a linear combination of input stimuli is the linear combination of the indi-
vidual responses. Conversely, a nonlinear system is simply one that does not
obey this principle.
2.1.3.6 Time-Invariant Systems
Another important feature is time-invariance. A system is said to be time-
invariant when its input–output mapping does not vary with time. When the
contrary holds, the system is said to be time-variant. Since this characteristic
makes the system easier to be dealt with in mathematical terms, most models
of practical systems are, with different degrees of fidelity, time-invariant.
2.1.3.7 Linear Time-Invariant Systems
A very special class of systems is that formed by those that are both lin-
ear and time-invariant (linear time-invariant, LTI). These systems obey the
superposition principle and have an input–output mapping that does not
vary with time. The combination of these desirable properties gives rise to
the following mathematical result.
Suppose that x(t) and y(t) are, respectively, the input and the output of a
continuous-time LTI SISO system. In such case,
y(t) = h(t) ∗ x(t) = ∫_{−∞}^{∞} h(τ)x(t − τ) dτ (2.20)
where h(t) is the system impulse response, which is the system output when
x(t) is equal to the Dirac delta function δ(t). The symbol ∗ denotes that the
output y(t) is the result of the convolution of x(t) with h(t).
Analogously, if x(n) and y(n) are, respectively, the input and the output
of a discrete-time LTI SISO system, it holds that
y(n) = h(n) ∗ x(n) = Σ_{k=−∞}^{∞} h(k)x(n − k) (2.21)
where h(n) is the system impulse response, i.e., the system output when x(n)
is equal to the Kronecker delta function δ(n). Once more, the symbol ∗ stands
for convolution.
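For finite-length sequences, the convolution sum in (2.21) reduces to a finite computation. A minimal sketch (the impulse response and input below are arbitrary illustrative values, not taken from the text):

import numpy as np

h = np.array([1.0, 0.5, 0.25])   # impulse response of a discrete-time LTI system
x = np.array([1.0, -1.0, 2.0])   # input sequence

# Output of the system via the convolution sum, Equation (2.21)
y = np.convolve(h, x)
print(y)    # [ 1.   -0.5   1.75  0.75  0.5 ]

Note that feeding the system with the Kronecker delta would return h itself, which is precisely the definition of the impulse response.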
2.1.4 Transfer Function and Frequency Response
An important consequence of the fact that the input and the output of a
continuous-time LTI SISO system are related by a convolution integral is
that their Laplace transforms will be related in a very simple way:
Y(s) = H(s)X(s) (2.22)
where
Y(s) and X(s) are, respectively, the Laplace transforms of the output and
the input
H(s) is the transform of the system impulse response, the so-called transfer
function
This means that the input–output relation of an LTI system is the result
of a simple product in the Laplace domain.
If the ROCs of X(s), Y(s), and H(s) include the imaginary axis, expression
(2.22) can be promptly particularized to the domain of the Fourier analysis.
In this case, the following holds:
Y(f) = H(f)X(f) (2.23)
where
Y(f) and X(f) are, respectively, the Fourier transforms of the output and
the input
H(f) is the transform of the system impulse response, which is called
frequency response
It is possible to understand several key features of a given LTI sys-
tem simply by studying the functions H(s) and H(f). For instance, to
know the frequency response of a system is the key to understanding
how it responds to stimuli at any frequency of the spectrum, and how an
input signal characterized by certain frequency content will be processed
by it.
The extension to the discrete-time domain is straightforward. If Y(z) and
X(z) are the z-transforms of two discrete-time signals related by an LTI
system, it is possible to write
Y(z) = H(z)X(z) (2.24)
and, if the ROCs of X(z), Y(z), and H(z) include the unit circle, expression
(2.24) reduces to
Y(exp(j 2πf)) = H(exp(j 2πf)) X(exp(j 2πf)) (2.25)
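The product relations (2.22) through (2.25) can be checked numerically for finite-length sequences: with suitable zero padding, the discrete Fourier transform (which samples the transform on the unit circle) turns convolution into multiplication. A minimal sketch under these assumptions, reusing arbitrary example signals:

import numpy as np

h = np.array([1.0, 0.5, 0.25])     # impulse response
x = np.array([1.0, -1.0, 2.0])     # input

N = len(h) + len(x) - 1            # length of the linear convolution
y_time = np.convolve(h, x)         # time domain, Equation (2.21)

# Frequency domain: Y = H X at sampled points of the unit circle, cf. (2.25)
Y = np.fft.fft(h, N) * np.fft.fft(x, N)
y_freq = np.real(np.fft.ifft(Y))

assert np.allclose(y_time, y_freq)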
2.2 Digital Signal Processing
Discrete-time signals can be characterized and stored very easily. This relies
on a very relevant feature of discrete-time signals: given a finite time interval,
there is a finite set of values that fully characterize a sequence, whereas the
same does not hold for a continuous-time signal. This essential difference is
a reflection of the profound structural differences between the domains of these
classes of information-bearing functions.
The world of digital computers excels in storage capacity and potential
of information processing, and is essentially a “discrete-time world.” There-
fore, it is not surprising that digital signal processing is a widespread tool
nowadays. Nevertheless, it is also clear that many of our physical models are
inherently based on continuous-time signals. The bridge between this “real
world” and the existing digital tools is established by the sampling theorem.
2.2.1 The Sampling Theorem
The idea of sampling is very intuitive, as it is closely related to the notion
of measure. When we measure our height or weight, we are, in a certain
sense, sampling the continuous-time signal that expresses the time-evolution
of these variables. In the context of communications, the sampling process
produces, from a continuous-time signal, a representative discrete-time sig-
nal that lends itself to proper digital processing and storage. Conditions for
equivalent representation and perfect reconstruction of the original signal
from its samples were achieved through the sampling theorem, proposed by
Harry Nyquist (1928), D. Gabor (1946), and Claude Shannon (1949), and are
related to two requirements:
1. The continuous-time signal must be band-limited, i.e., its Fourier
spectrum must be null for f > fM.
2. The sampling rate, i.e., the inverse of the time-spacing TS of the
samples must be higher than or equal to 2fM.
Given these conditions, we are ready to enunciate the sampling
theorem [219]:
THEOREM 2.1 (Sampling Theorem)
If x(t) is a signal that obeys requirement 1 above, it may be perfectly
determined by its samples x(nTS), n integer, if TS obeys requirement 2.
If these requirements are not complied with, the reconstruction process
will be adversely affected by a phenomenon referred to as aliasing [219].
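The following sketch illustrates aliasing with arbitrary parameter values (not taken from the text): a 9 Hz sinusoid sampled at 10 Hz, i.e., below the required rate of 18 Hz, produces exactly the same samples as a 1 Hz sinusoid, so the two become indistinguishable after sampling.

import numpy as np

fs = 10.0            # sampling rate (Hz): violates requirement 2 for fM = 9 Hz
Ts = 1.0 / fs
n = np.arange(50)

x_fast = np.cos(2 * np.pi * 9.0 * n * Ts)   # 9 Hz sinusoid, undersampled
x_alias = np.cos(2 * np.pi * 1.0 * n * Ts)  # its 1 Hz alias

# The sample sequences coincide: the 9 Hz component aliases to 1 Hz
assert np.allclose(x_fast, x_alias)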
2.2.2 The Filtering Problem
There are many practical instances in which it is relevant to process informa-
tion, i.e., to treat signals in a controlled way. A straightforward approach to
fulfill this task is to design a filter, i.e., a system whose input–output relation
is tailored to comply with preestablished requirements. The project of a filter
usually encompasses three major stages:
• Choice of the filtering structure, i.e., of the general mathematical
form of the input–output relation.
• Establishment of a filtering criterion, i.e., of an expression that
encompasses the general objectives of the signal processing task at
hand.
• Optimization of the cost function defined in the previous step with
respect to the free parameters of the structure defined in the first
step.
It is very useful to divide the universe of discrete-time filtering structures
into two classes: linear and nonlinear. There are two basic types of linear dig-
ital filters: finite impulse response filters (FIR) and infinite impulse response
filters (IIR). The main difference is that FIR filters are, by nature, feedforward
devices, whereas IIR filters are essentially related to the idea of feedback.
On the other hand, nonlinearity is essentially a negative concept. There-
fore, there are countless possible classes of nonlinear structures, which
means that the task of treating the filtering problem in general terms is far
from trivial.
Certain classes of nonlinear structures (like those of neural networks and
polynomial filters, which will be discussed in Chapter 7) share a very rele-
vant feature: they are derived within a mathematical framework related to
the idea of universal approximation. Consequently, they have the ability
of producing virtually any kind of nonpathological input–output mapping,
which is a remarkable feature in a universe as wide as that of nonlinear
filters.
A filtering criterion is a mathematical expression of the aims underlying
a certain task. The most direct expression of a filtering criterion is its
associated cost function, the optimization of which leads to the choice and
adaptation of the free parameters of the chosen structure.
When both the structure and an adequate cost function are chosen,
there remains the procedure of optimizing the function with respect to the
free parameters of the filtering device. Although there are many possible
approaches, iterative techniques are quite usual in practical applications for,
at least, two reasons:
• They avoid the need for explicitly finding closed-form solutions,
which, in some cases, can be rather complicated even in static
environments.
• Their dynamic nature suits very well the idea of adaptation, which
is essential in a vast number of real-world applications.
Adaptation will be a crucial idea in the sequel of this text and, as we will
see, the derivation of a number of adaptive algorithms depends on some
statistical concepts to be introduced now.
2.3 Probability Theory and Randomness
Up to this moment, signals have been completely described by mathematical
functions that generate information from a support input space. This is the
essence of deterministic signals. However, this direct mapping between the
input and output space cannot be established if uncertainties exist. In such
case, the element of randomness is introduced and probabilistic laws must
be used to represent information. Thus, it is of great interest to review some
fundamental concepts of probability theory.
2.3.1 Definition of Probability
Probability is essentially a measure to be employed in a random experiment.
When one deals with any kind of random experiment, it is often necessary
to establish some conditions in order that its outcome be representative of
the phenomenon under study. In more specific terms, a random experiment
should have the following three features [135]:
1. The experiment must be repeatable under identical conditions.
2. The outcome wi of the experiment on any trial is unpredictable
before its occurrence.
3. When a large number of trials is run, statistical regularity must be
observed in the outcome, i.e., an average behavior must be identified
if the experiment is repeated a large number of times.
The key point of analyzing a random experiment lies exactly in the
representation of the statistical regularity. A simple measure thereof is
the so-called relative frequency. In order to reach this concept, let us define
the following:
• The space of outcomes Ω, or sample space, which is the set of all
possible outcomes of the random experiment.
• An event A, which is an element, a subset or a set of subsets of Ω.
Relative frequency is the ratio between the number of occurrences of a
specific event and the total number of experiment trials. If an event A occurs
N(A) times over a total number of trials N, this ratio obeys
0 ≤ N(A)/N ≤ 1 (2.26)
We may state that an experiment exhibits statistical regularity if, for any
given sequence of N trials, (2.26) converges to the same limit as N becomes a
large number. Therefore, the information about the occurrence of a random
event can be expressed by the frequency definition of probability, given by
Pr(A) = lim_{N→∞} [N(A)/N] (2.27)
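The convergence of the relative frequency to the probability can be visualized with a simple simulation (a sketch with an arbitrary experiment: rolling a fair die and observing the event A = {even face}):

import numpy as np

rng = np.random.default_rng(0)
N = 100_000                              # number of trials
outcomes = rng.integers(1, 7, size=N)    # fair die: faces 1 to 6

# Relative frequency N(A)/N of the event A = {even face}, cf. (2.26)-(2.27)
rel_freq = np.mean(outcomes % 2 == 0)
print(rel_freq)     # approaches Pr(A) = 1/2 as N grows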
On the other hand, as stated by Andrey Nikolaevich Kolmogorov in his
seminal work [170], “The probability theory, as a mathematical discipline,
can and should be developed from axioms in exactly the same way as Geom-
etry and Algebra.” Kolmogorov thus established the axiomatic foundation of
probability theory. According to this elegant and rigorous approach, we can
define a field of probability formed by the triplet {Ω, F, Pr(A)}, where Ω is the
space of outcomes, F is a field that contains all possible events of the ran-
dom experiment,∗ and Pr(A) is the probability of event A. This measure is so
chosen as to satisfy the following axioms.
Axiom 1: Pr(A) ≥ 0
Axiom 2: Pr(Ω) = 1
Axiom 3: If A ∩ B = ∅, then Pr(A ∪ B) = Pr(A) + Pr(B), where ∩ and ∪ stand
for the set operations intersection and union, respectively.
∗ In the terminology of mathematical analysis, the collection of subsets F is referred to as a
σ-algebra [110].
For a countably infinite sequence of mutually exclusive events, it is
possible to enunciate Axiom 3 in the following extended form:
Axiom 3’: For mutually exclusive events A1, A2, . . . , An,
Pr(∪_{i=1}^{∞} Ai) = Σ_{i=1}^{∞} Pr(Ai)
From these three axioms, and using set operations, it follows that
Pr(Ā) = 1 − Pr(A) (2.28)
where Ā stands for the complement of the set A. If A ∩ B ≠ ∅, then
Pr(A ∪ B) = Pr(A) + Pr(B) − Pr(A ∩ B) (2.29)
In probability theory, an important and very useful concept is that of
independence. Two events Ai and Aj, for i ≠ j, are said to be independent if
and only if
Pr(Ai ∩ Aj) = Pr(Ai) Pr(Aj) (2.30)
It is also important to calculate the probability of a particular event given
the occurrence of another. Thus, we define the conditional probability of Ai
given Aj (supposing Pr(Aj) ≠ 0) as
Pr(Ai|Aj) ≜ Pr(Ai ∩ Aj)/Pr(Aj) (2.31)
It should be noted that, if Ai and Aj are independent, then Pr(Ai|Aj) =
Pr(Ai). This means that knowledge about the occurrence of Aj does not
modify the probability of occurrence of Ai. In other words, the condi-
tional probability of independent events is completely described by their
individual probabilities.
Computation of the probability of a given event can be performed with
the help of the theorem of total probability. Consider a finite or countably
infinite set of mutually exclusive (Ai ∩ Aj = ∅ for all i ≠ j) and exhaustive
(∪i Ai = Ω) events. The probability of an arbitrary event B is given by
Pr(B) = Σ_i Pr(Ai ∩ B) = Σ_i Pr(Ai) Pr(B|Ai) (2.32)
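As a worked illustration of (2.32) (with arbitrary numbers, not from the text), let A1 and A2 be mutually exclusive and exhaustive events with known conditional probabilities for an event B:

import numpy as np

pr_A = np.array([0.3, 0.7])            # Pr(A1), Pr(A2); a partition, sums to 1
pr_B_given_A = np.array([0.9, 0.2])    # Pr(B|A1), Pr(B|A2)

# Theorem of total probability, Equation (2.32)
pr_B = np.sum(pr_A * pr_B_given_A)
print(pr_B)     # 0.3*0.9 + 0.7*0.2 = 0.41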
2.3.2 Random Variables
A deterministic signal is defined in accordance with an established math-
ematical formula. In order to deal with random signals, it is important to introduce the concept of a random variable.
an annuity for Newhaven, ii. 235, 240;
Burke's Reflections, ii. 237;
Corn Law and Slave Trade, ii. 239;
a bargain with the Sheffields, ii. 243;
snugness of his affairs, ii. 245;
danger of Russian war, ii. 247;
effects of French Revolution, ii. 249;
Burke a rational madman, ii. 251;
Sheffield an anti-democrat, ii. 253;
flight and arrest of Louis XVI., ii. 255, 286;
the crisis in Paris, ii. 257;
Sheffield at the Jacobins, ii. 259;
safe in the land of liberty, ii. 261;
Switzerland's strange charm, ii. 263;
Coblentz and white cockades, ii. 265;
the sights of Brussels, ii. 267;
military forces on French frontier, ii. 269;
the Pilnitz meeting, ii. 271;
a distressful voyage, ii. 273;
Lally, ii. 274;
the demon of procrastination, ii. 277;
peace or war in Europe? ii. 279;
an amazing push of remorse, ii. 281;
Maria's capacity, ii. 283;
Lally Tollendal, ii. 284;
the hideous plague in France, ii. 287;
Massa King Wilberforce, ii. 289;
a month with the Neckers, ii. 291;
Jacques Necker, ii. 292;
the march of the Marseillais, ii. 293;
an asylum at Berne, 295;
democratic progress in England, ii. 297;
Gallic wolves prowl round Geneva, ii. 299;
the destiny of his library, ii. 301;
his Tabby apprehensions, ii. 303;
Opposition and Government, ii. 305;
the attempted Pitt-Fox union, ii. 306;
taint of democracy, ii. 309;
Brunswick's march on Paris, ii. 311;
every day more sedentary, ii. 313;
French invasion of Savoy, ii. 314;
Geneva threatened, ii. 316;
prepared for flight, ii. 319;
the Irish at their old tricks, ii. 321;
the liberty of murdering defenceless prisoners, ii. 323;
Sheffield's emigrants, ii. 324;
Brunswick's strange retreat, ii. 326, 346;
occupants of the hotel in Downing Street, ii. 329;
the Geneva flea and the Leviathan France, ii. 331;
the Gallic dogs' day, ii. 333;
neither a monster, nor a statue, ii. 335;
Severy's state hopeless, ii. 336;
France's cruel fate, ii. 337;
Archbishop of Arles' murder, ii. 339-342;
common cause against the Disturbers of the World, ii. 343;
Montesquieu's desertion, ii. 345;
Necker's defence of the king, ii. 347;
associations in London, ii. 349, 353;
Is Fox mad? ii. 350;
Sheffield's speech, ii. 353;
the Egaliseurs, ii. 355;
the great question of peace and war, ii. 358;
the Memoirs must be postponed, ii. 359;
a word or two of Parliamentary and pecuniary concerns, ii. 362;
Duke of Portland and Fox, ii. 363, 367;
Louis XVI. condemned to death, ii. 365;
a miserable Frenchman, ii. 367;
poor de Severy is no more, ii. 369;
his letter of congratulations to Loughborough, ii. 372;
the Pays de Vaud, ii. 373;
Madame de Staël at Dorking, ii. 375;
a pleasant dinner-party in Downing Street, ii. 377;
Lady Sheffield's death, ii. 379;
the cannon of the siege of Mayence, ii. 382;
safe, well, and happy in London, ii. 384;
intends to visit Bath, ii. 387, 389;
Lord Hervey's Memorial, ii. 388;
a tête-à-tête of eight or nine hours daily, ii. 390;
at Althorpe, ii. 391;
a serious complaint, ii. 393;
hopes of a radical cure, ii. 395;
in darkness about Lord Howe, ii. 397;
reaches St. James's Street half-dead, ii. 400;
account of his last moments, ii. 400, 401
Gibbon, Miss Hester (Gibbon's aunt), the Northamptonshire Saint,
i. 7, 134, 244, 295, 398; ii. 91, 185, 187, 190, 193, 218, 222, 225;
Gibbon's letters to, i. 15, 121
Gibbon, John, Bluemantle Pursuivant at Arms, ii. 162
Gibraltar, relieved by Rodney, i. 276;
by Howe, ii. 19, 25, 27;
defended by Lord Heathfield, ii. 25
Gideon, Sir Sampson (Lord Eardley), i. 225, 332; ii. 216
Gilbert, Mr., of Lewes, i. 244, 248, 295
Gilbert, Bett, i. 7
Gilliers, Baron de, ii. 330, 377
Glenbervie, Lord (Sylvester Douglas), ii. 180
Gloucester, Duchess of, i. 173
*Gloucester, Duke of, i. 131;
his clandestine marriage, i. 146;
on Decline and Fall, i. 396
Glynn, Serjeant, the advocate of Wilkes, i. 90
Godolphin, Lord, i. 172
Goldsmith, Oliver, Gibbon's friendship with, i. 191, 202;
his Captain-in-Lace, i. 207;
quotation from his Retaliation, i. 210
*Gonchon, M., ii. 352
Gordon, Duchess of, ii. 157, 164, 168
Gordon, Lord George, i. 376;
No Popery riots, i. 380;
sent to the Tower, i. 382
Gordon Riots, the, i. 381
Gosling, the banker, i. 94, 126, 166-168, 332; ii. 110. 281
Gosling's mortgage, i. 94, 116, 126, 166, 187
Gould, Colonel. i. 114, 159, 274
Gould, Mrs., i. 114, 159, 272, 274; ii. 386
Gouvernet, Comte de la Tour-du-Pin, ii. 329
Gower, Lord, i. 148; ii. 86, 255, 311, 360
*Grafton, Duchess of, i. 27
Grafton, Duke of, i. 26, 90, 112, 278, 377;
Lord Privy Seal, ii. 13
*Grammont, Duc de (de Guiche), i. 89; ii. 203, 265, 266
*Granby, Marquis of, i. 192
Grand, M., banker at Lausanne, i. 4, 61, 74, 81
Grand, Mdlle. Nanette. See Prevôt, Madame
Grantham, Lord, ii. 19
*Grasse, Comte de, ii. 16
Graves, Admiral Lord, i. 384
Gray, Booth, i. 254, 264
Grenville Act, the, i. 233
*Grenville Correspondence, i. 44
*Grenville, George, i. 45, 85, 233, 243
Grenville, James, ii. 19, 93
Grenville, Lord, ii. 362, 366
*Greville, Hon. Charles, i. 366
Grey, Mr., and the Friends of the People resolution, ii. 297, 305,
320
Grey, Sir Charles (afterwards 1st Earl), ii. 396
Grey, Sir W. de. See Walsingham, Lord
*Grey, Thomas de, i. 366
*Grimaldi, Marquis Jeronymo, i. 30
Grimstone, Mrs., ii. 339
Grosvenor, Lady, i. 149
Grosvenor, Lord, i. 82, 149
Guiche, Duc de. See Grammont, Duc de
Guilford, 1st Lord, ii. 86, 164, 238
Guilford, 2nd Lord. See North, Lord
Guines, Duc de, ii. 210
Guise, Sir William (Gibbon's intimate friend), i. 40, 50, 56, 61, 63,
79, 80, 82, 87, 195
Gunning, Sir Robert, British Envoy at Petersburg, i. 270
*Gustavus III., King of Sweden, ii. 279
H
Hague, the, Gibbon at, i. 15
*Hailes, Daniel, ii. 86
*Hales, Sir Philip, i. 250
Hall, James, i. 26
*Hallifax, Sir Thomas, i. 393
*Hamilton, Emma, Lady, i. 74, 214
*Hamilton, Lord Archibald, i. 148
Hamilton, Sir William, British Minister at Naples, i. 74
Hamilton, William Gerard (Single-Speech), i. 343; ii. 21, 31, 396
Hammersley's Bank, ii. 303
Hamond, Sir Andrew Snape, R.N., ii. 81, 93
Hampden, Lord, ii. 135
Hampshire Militia, i. 25, 109;
Gibbon major in, i. 51;
colonel, i. 87;
father of, i. 346
Hanger, William (Lord Coleraine), i. 146, 148, 310
Hanley, Mrs., ii. 159
Harbord, Hon. Harbord (afterwards Lord Suffield), i. 250, 252
Harcourt, Earl of, i. 9
Harcourt, Mr., i. 232, 233
Hardy, Sir Charles, i. 347; ii. 72
Hare, James, politician and wit (the Hare and many Friends), i. 201
Harris, John, Lenborough Estate Agent, i. 95, 127, 165, 167, 170; ii.
104
Harrison, John Butler, Gibbon's opinion of, i. 27
Harrison, Mrs., i. 87
Hartley, David, M.P. for Kingston-upon-Hull, i. 240
Harvey, Stephen, i. 95
Hastings, Marquis of. ii. 396
Hastings, Warren, i. 209, 349;
Governor-General of India, ii. 26, 85;
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2011 by Taylor and Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-13: 978-1-4200-1946-9 (Ebook-PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
To our families.
To the friendly members of DSPCom, past and present.
To the lovely memories of Hélio Drago Romano, Romis Attux, and Francisco Casimiro do Nascimento.
Contents

Foreword
Preface
Acknowledgments
Authors

1. Introduction
   1.1 Channel Equalization
   1.2 Source Separation
   1.3 Organization and Contents
2. Statistical Characterization of Signals and Systems
   2.1 Signals and Systems
       2.1.1 Signals
           2.1.1.1 Continuous- and Discrete-Time Signals
           2.1.1.2 Analog and Digital Signals
           2.1.1.3 Periodic and Aperiodic/Causal and Noncausal Signals
           2.1.1.4 Energy Signals and Power Signals
           2.1.1.5 Deterministic and Random Signals
       2.1.2 Transforms
           2.1.2.1 The Fourier Transform of Continuous-Time Signals
           2.1.2.2 The Fourier Transform of Discrete-Time Signals
           2.1.2.3 The Laplace Transform
           2.1.2.4 The z-Transform
       2.1.3 Systems
           2.1.3.1 SISO/SIMO/MISO/MIMO Systems
           2.1.3.2 Causal Systems
           2.1.3.3 Invertible Systems
           2.1.3.4 Stable Systems
           2.1.3.5 Linear Systems
           2.1.3.6 Time-Invariant Systems
           2.1.3.7 Linear Time-Invariant Systems
       2.1.4 Transfer Function and Frequency Response
   2.2 Digital Signal Processing
       2.2.1 The Sampling Theorem
       2.2.2 The Filtering Problem
   2.3 Probability Theory and Randomness
       2.3.1 Definition of Probability
       2.3.2 Random Variables
           2.3.2.1 Joint and Conditional Densities
           2.3.2.2 Function of a Random Variable
       2.3.3 Moments and Cumulants
           2.3.3.1 Properties of Cumulants
           2.3.3.2 Relationships between Cumulants and Moments
           2.3.3.3 Joint Cumulants
   2.4 Stochastic Processes
       2.4.1 Partial Characterization of Stochastic Processes: Mean, Correlation, and Covariance
       2.4.2 Stationarity
       2.4.3 Ergodicity
       2.4.4 Cyclostationarity
       2.4.5 Discrete-Time Random Signals
       2.4.6 Linear Time-Invariant Systems with Random Inputs
   2.5 Estimation Theory
       2.5.1 The Estimation Problem
           2.5.1.1 Single-Parameter Estimation
           2.5.1.2 Multiple-Parameter Estimation
       2.5.2 Properties of Estimators
           2.5.2.1 Bias
           2.5.2.2 Efficiency
           2.5.2.3 Cramér–Rao Bound
       2.5.3 Maximum Likelihood Estimation
       2.5.4 Bayesian Approach
           2.5.4.1 Maximum a Posteriori Estimation
           2.5.4.2 Minimum Mean-Squared Error
       2.5.5 Least Squares Estimation
   2.6 Concluding Remarks
3. Linear Optimal and Adaptive Filtering
   3.1 Supervised Linear Filtering
       3.1.1 System Identification
       3.1.2 Deconvolution: Channel Equalization
       3.1.3 Linear Prediction
   3.2 Wiener Filtering
       3.2.1 The MSE Surface
   3.3 The Steepest-Descent Algorithm
   3.4 The Least Mean Square Algorithm
   3.5 The Method of Least Squares
       3.5.1 The Recursive Least-Squares Algorithm
   3.6 A Few Remarks Concerning Structural Extensions
       3.6.1 Infinite Impulse Response Filters
       3.6.2 Nonlinear Filters
   3.7 Linear Filtering without a Reference Signal
       3.7.1 Constrained Optimal Filters
       3.7.2 Constrained Adaptive Filters
   3.8 Linear Prediction Revisited
       3.8.1 The Linear Prediction-Error Filter as a Whitening Filter
       3.8.2 The Linear Prediction-Error Filter Minimum Phase Property
       3.8.3 The Linear Prediction-Error Filter as a Constrained Filter
   3.9 Concluding Remarks
4. Unsupervised Channel Equalization
   4.1 The Unsupervised Deconvolution Problem
       4.1.1 The Specific Case of Equalization
   4.2 Fundamental Theorems
       4.2.1 The Benveniste–Goursat–Ruget Theorem
       4.2.2 The Shalvi–Weinstein Theorem
   4.3 Bussgang Algorithms
       4.3.1 The Decision-Directed Algorithm
       4.3.2 The Sato Algorithm
       4.3.3 The Godard Algorithm
   4.4 The Shalvi–Weinstein Algorithm
       4.4.1 Constrained Algorithm
       4.4.2 Unconstrained Algorithm
   4.5 The Super-Exponential Algorithm
   4.6 Analysis of the Equilibrium Solutions of Unsupervised Criteria
       4.6.1 Analysis of the Decision-Directed Criterion
       4.6.2 Elements of Contact between the Decision-Directed and Wiener Criteria
       4.6.3 Analysis of the Constant Modulus Criterion
       4.6.4 Analysis in the Combined Channel + Equalizer Domain
           4.6.4.1 Ill-Convergence in the Equalizer Domain
   4.7 Relationships between Equalization Criteria
       4.7.1 Relationships between the Constant Modulus and Shalvi–Weinstein Criteria
           4.7.1.1 Regalia's Proof of the Equivalence between the Constant Modulus and Shalvi–Weinstein Criteria
       4.7.2 Some Remarks Concerning the Relationship between the Constant Modulus/Shalvi–Weinstein and the Wiener Criteria
   4.8 Concluding Remarks
5. Unsupervised Multichannel Equalization
   5.1 Systems with Multiple Inputs and/or Multiple Outputs
       5.1.1 Conditions for Zero-Forcing Equalization of MIMO Systems
   5.2 SIMO Channel Equalization
       5.2.1 Oversampling and the SIMO Model
       5.2.2 Cyclostationary Statistics of Oversampled Signals
       5.2.3 Representations of the SIMO Model
           5.2.3.1 Standard Representation
           5.2.3.2 Representation via the Sylvester Matrix
       5.2.4 Fractionally Spaced Equalizers and the MISO Equalizer Model
       5.2.5 Bezout's Identity and the Zero-Forcing Criterion
   5.3 Methods for Blind SIMO Equalization
       5.3.1 Blind Equalization Based on Higher-Order Statistics
       5.3.2 Blind Equalization Based on Subspace Decomposition
       5.3.3 Blind Equalization Based on Linear Prediction
   5.4 MIMO Channels and Multiuser Processing
       5.4.1 Multiuser Detection Methods Based on Decorrelation Criteria
           5.4.1.1 The Multiuser Constant Modulus Algorithm
           5.4.1.2 The Fast Multiuser Constant Modulus Algorithm
           5.4.1.3 The Multiuser pdf Fitting Algorithm (MU-FPA)
       5.4.2 Multiuser Detection Methods Based on Orthogonalization Criteria
           5.4.2.1 The Multiuser Kurtosis Algorithm
   5.5 Concluding Remarks
6. Blind Source Separation
   6.1 The Problem of Blind Source Separation
   6.2 Independent Component Analysis
       6.2.1 Preprocessing: Whitening
       6.2.2 Criteria for Independent Component Analysis
           6.2.2.1 Mutual Information
           6.2.2.2 A Criterion Based on Higher-Order Statistics
           6.2.2.3 Nonlinear Decorrelation
           6.2.2.4 Non-Gaussianity Maximization
           6.2.2.5 The Infomax Principle and the Maximum Likelihood Approach
   6.3 Algorithms for Independent Component Analysis
       6.3.1 Hérault and Jutten's Approach
       6.3.2 The Infomax Algorithm
       6.3.3 Nonlinear PCA
       6.3.4 The JADE Algorithm
       6.3.5 Equivariant Adaptive Source Separation/Natural Gradient
       6.3.6 The FastICA Algorithm
   6.4 Other Approaches for Blind Source Separation
       6.4.1 Exploring the Correlation Structure of the Sources
       6.4.2 Nonnegative Independent Component Analysis
       6.4.3 Sparse Component Analysis
       6.4.4 Bayesian Approaches
   6.5 Convolutive Mixtures
       6.5.1 Source Separation in the Time Domain
       6.5.2 Signal Separation in the Frequency Domain
   6.6 Nonlinear Mixtures
       6.6.1 Nonlinear ICA
       6.6.2 Post-Nonlinear Mixtures
       6.6.3 Mutual Information Minimization
       6.6.4 Gaussianization
   6.7 Concluding Remarks
7. Nonlinear Filtering and Machine Learning
   7.1 Decision-Feedback Equalizers
       7.1.1 Predictive DFE Approach
   7.2 Volterra Filters
   7.3 Equalization as a Classification Task
       7.3.1 Derivation of the Bayesian Equalizer
   7.4 Artificial Neural Networks
       7.4.1 A Neuron Model
       7.4.2 The Multilayer Perceptron
           7.4.2.1 The Backpropagation Algorithm
       7.4.3 The Radial-Basis Function Network
   7.5 Concluding Remarks
8. Bio-Inspired Optimization Methods
   8.1 Why Bio-Inspired Computing?
   8.2 Genetic Algorithms
       8.2.1 Fundamental Concepts and Terminology
       8.2.2 A Basic Genetic Algorithm
       8.2.3 Coding
       8.2.4 Selection Operators
       8.2.5 Crossover and Mutation Operators
   8.3 Artificial Immune Systems
   8.4 Particle Swarm Optimization
   8.5 Concluding Remarks
Appendix A: Some Properties of the Correlation Matrix
   A.1 Hermitian Property
   A.2 Eigenstructure
   A.3 The Correlation Matrix in the Context of Temporal Filtering
Appendix B: Kalman Filter
   B.1 State-Space Model
   B.2 Deriving the Kalman Filter
References
Foreword

Intelligent systems have made major contributions to the progress of science and technology in recent decades. They find applications in all technical fields and, particularly, in communications, consumer electronics, and control. A distinct characteristic is their high level of complexity, due to the fact that they capitalize on all sorts of scientific knowledge and practical know-how. However, their architecture is rather simple and can be broken down into four basic constituents, namely, sensors, actuators, signal-processing modules, and information-processing modules. The sensors and actuators constitute the interfaces of the system with its environment, while the signal-processing modules link these interfaces with the information-processing modules. Although it is generally recognized that the intelligence of the system lies in the information-processing section, intelligence is also needed in the signal-processing section to learn the environment, follow its evolutions, and cope with its adverse effects. The signal-processing modules deliver the raw data, and even the most sophisticated information-processing algorithms perform badly if the quality of the raw data is poor.

From the perspective of signal processing, the most challenging problem is the connection between the signal sources and the sensors, for two main reasons. First, the transmission channels degrade the useful signals, and second, the sources have to be identified and separated from the received mixtures. Channel equalization and source separation can be dealt with separately or jointly. In any case, the quality of the corresponding processing is essential for the performance of the system, because it determines the reliability of the input data to the information-processing modules. Whenever appropriate, the problem is simplified by the introduction of learning phases, during which the algorithms are trained for optimal operation; this is called supervised processing. However, this procedure is not always possible or desirable, and continuous optimization has many advantages in terms of global performance and efficiency. Thus, we arrive at unsupervised signal processing, which is the topic of this book.

Unsupervised signal-processing techniques are described in different categories of books dealing with digital filters, adaptive methods, or statistical signal processing. But, until now, no unified presentation has been available. Therefore, this book is timely and it is an important contribution to the signal-processing literature. Moreover, unifying under a common framework the topics of blind equalization and source separation is particularly appropriate and inspiring from the perspective of both education and research.

Through the remarkable synthesis of the field it provides and the new vision it offers, this book will stimulate progress and contribute to the advent of more useful, efficient, and friendly intelligent systems.

Maurice Bellanger
Académie des Technologies de France
Paris, France
Preface

"At Cambridge, Russell had impressed on me not only the importance of mathematics but the need for a physical sense..."
Norbert Wiener, I Am a Mathematician

Perhaps the most fundamental motivation for writing a book is the desire to tell a story in which the author can express himself or herself and be understood by others. This sort of motivation is also present in scientific works, even if the story is usually narrated in formal and austere language.

The main motivation for writing this book is to tell something about the work we carry out in the Laboratory of Signal Processing for Communications (DSPCom). This includes the research topics on which we have been working as well as the way we work, which is closely related to the epigraph we chose for this preface.

The work we have developed is founded on the theory of adaptive filtering, with communication systems as the main focus of application. The natural evolution of our studies and research led us to widen our scope of interest to themes like blind equalization, source separation, machine learning, and bio-inspired algorithms, always with the signal processing–oriented approach that is registered in the DNA of our lab.

Hence, in short, our objective in this book is to provide a unified, systematic, and synthetic presentation of what may be called the theory of unsupervised signal processing, with an emphasis on two topics that could be considered the pillars [137] of such a theory: blind equalization and source separation. These two topics constitute the core of the book. They are based on the foundations of statistical and adaptive signal processing, exposed in Chapters 2 and 3, and they point to more emergent tools in signal processing, like machine learning–based solutions and bio-inspired methods, presented in Chapters 7 and 8.

Clearly, the objective described above represents a stimulating challenge for at least two reasons: first, gathering together all the mentioned themes was subject to the risk of dispersion or excessive verbosity, with the consequent loss of the readers' interest; second, the themes of interest have, on their own, been specifically addressed by renowned specialists in a number of excellent books. In this sense, we feel obliged to mention that adaptive filter theory is a well-established discipline that has been studied in depth in books like [32, 100, 139, 194, 249, 262, 303], and others. Blind equalization methods and algorithms are presented in detail in [99], and were recently surveyed in [70]. Blind source separation and related aspects like independent component analysis have been treated in very important works such as [76, 148, 156]. Numerous authors from different scientific communities have written on topics related to machine learning and bio-inspired optimization. We must also mention inspiring works like [12, 137, 138], which deal with both blind deconvolution and separation problems.

In a certain sense, by placing the topics of this book under a similar conceptual treatment and mathematical formalism, we have tried to reap some of the important ideas disseminated and fertilized by the aforementioned authors and others we necessarily omitted in our non-exhaustive citation.

Since the genesis of the book is strongly linked to the work the authors carried out at the DSPCom laboratory during more than a decade, words of thankfulness and recognition must be addressed to those who supported and inspired such work. First of all, we would like to thank all researchers, students, and assistants who have worked in the lab since its establishment. It seems unreasonable to name everybody, so we decided to include all these friends in the main dedication of the book.

The first author of this book was fortunate in having Professor Maurice Bellanger, from CNAM/Paris, France, as a PhD advisor, a collaborator in many works, and an inspirational figure for us in the process of writing this book.

We are grateful to many colleagues and friends for their constant support. Special thanks are due to Professor Paulo S.R. Diniz from the Federal University of Rio de Janeiro (COPPE/UFRJ) and Professor Michel D. Yacoub from FEEC/UNICAMP, first for their personal and professional example, and also for attentively motivating and pushing us to finish the work. A special mention must also be made to the memory of the late Professor Max Gerken from the University of São Paulo (POLI/USP). We also express our gratitude to Professor João C.M. Mota from the Federal University of Ceará (UFC) for many years of fruitful cooperation.

We are indebted to many colleagues in our institution, the School of Electrical and Computer Engineering at the University of Campinas (FEEC/UNICAMP, Brazil). We are particularly thankful to Professor Renato Lopes, Professor Murilo Loiola, Dr. Rafael Ferrari, Dr. Leonardo Tomazeli Duarte, and Levy Boccato for directly influencing the contents of this book, and for carefully reviewing and/or stimulating discussions about many central themes of the book. We would also like to thank Professors Fernando Von Zuben, Christiano Lyra, and Amauri Lopes, who collaborated with us by means of scientific and/or academic partnerships. Our warmest regards are reserved for Celi Pavanatti, for her constant and kind support.

Many friends and colleagues in other institutions influenced our work in different ways. For their direct technical contribution to the book or to our careers, and for their special attention on some key occasions, we would like to thank Professor Francisco R. P. Cavalcanti from UFC; Professors Maria Miranda and Cristiano Panazio from POLI/USP; Professor Leandro de Castro from Universidade Presbiteriana Mackenzie (UPM); Professor Aline Neves from Universidade Federal do ABC (UFABC); Professors Carlos A. F. da Rocha, Leonardo Resende, and Rui Seara from Universidade Federal de Santa Catarina (UFSC); Professor Jacques Szczupak from Pontifícia Universidade Católica do Rio de Janeiro (PUC); Professor Moisés Ribeiro from Universidade Federal de Juiz de Fora (UFJF); Professor Luiz C. Coradine from Universidade Federal de Alagoas (UFAL); Professor Jugurta Montalvão from Universidade Federal de Sergipe (UFS); Dr. Cynthia Junqueira from Comando Geral de Tecnologia Aeroespacial (IAE/CTA); Dr. Danilo Zanatta from NTi Audio AG; Maurício Sol de Castro from Von Braun Center; Professors Madeleine Bonnet, Hisham Abou-Kandil, Bernadette Dorizzi, and Odile Macchi, respectively, from the University Paris-Descartes, ENS/Cachan, IT-SudParis, and CNRS, in France; and Professor Tülay Adali from the University of Maryland in Baltimore, Maryland. We are especially grateful to Professor Simon Haykin from McMaster University in Canada for having given us the unforgettable opportunity of discussing our entire project during the ICA Conference at Paraty in 2009.

The acknowledgment list would certainly be incomplete without mentioning the staff of CRC Press. Our deepest gratitude must be expressed to Nora Konopka, Amber Donley, Vedavalli Karunagaran, Richard Tressider, and Brittany Gilbert for their competence, solicitude, and patience. So many thanks for believing in this project and pushing it from one end to the other!

João M. T. Romano
Romis R. de F. Attux
Charles C. Cavalcante
Ricardo Suyama
Acknowledgments

"Eu não ando só, só ando em boa companhia." ("I never walk alone; I only walk in good company.")
Vinicius de Moraes, Brazilian poet

The authors would like to thank their families and friends for their constant support during the process of writing this book.

João would like to express his gratitude to his lovely wife Maria Inês and children Miriam, Teresa, Filipe, Débora, Maria Beatriz, Marcelo, Ana Laura, and Daniela; to his parents, brothers, and sisters, particularly to Andy Stauffer, who reviewed the first draft of this project to be sent to CRC. He would also like to thank his friends at the Cultural Center in Campinas, especially the kind memory of Professor Francesco Langone. He is grateful to his dear friend Ilma Valadão for her support. Finally, he would like to acknowledge the motivating words of his inspirational friend, Professor José L. Boldrini.

Romis would like to thank Dilmara, Clara, and Marina for their love and for the constant happiness they bring to his life; Dina, Cecília, João Gabriel, Jecy (in memoriam), and Flora for their support on all occasions; Afrânio, Isabel, Beth, Toninho, Naby (in memoriam), Sônia, Ramsa, and his whole family for their warm affection; his students and former students (Cristina, Denis, Diogo, Everton, Filipe, George, Hassan, Hugo, Leonardo, Tiago Dias, Tiago Tavares, and Wesley) for their kindness and patience; Cristiano and Dr. Danilo for the many enriching conversations; the G6 (Alim, Carol, Daniel, Inácio, Lídia, and Theo) for the countless happy moments; all friends and colleagues from FEEC/UNICAMP; his undergraduate and graduate students for the inspiration they bring to his life; and, finally, all his friends (it would be impossible to name them here!).

Charles deeply recognizes the importance of some people in his life for the realization of this book. He expresses his gratefulness to his wife, Erika; to his children, Matheus and Yasmin; to his mother, Ivoneide; to his brothers, Kleber, Rogério, and Marcelo; and to his friend, Josué, who has always pushed him to achieve this result. For you all, my deepest gratitude; without you, nothing would have been possible and worth the effort.

Ricardo expresses his warmest gratitude to Jorge, Cecília, Bruna, Maria (in memoriam), and many others in the family, for their love and constant support; to Gislaine, for all the love and patience during the final stages of this work; and to all colleagues from DSPCom, for the enriching conversations and friendship.
Authors

João Marcos Travassos Romano is a professor at the University of Campinas (UNICAMP), Campinas, São Paulo, Brazil. He received his BS and MS in electrical engineering from UNICAMP in 1981 and 1984, respectively. In 1987, he received his PhD from the University of Paris–XI, Orsay. He has been an invited professor at CNAM, Paris; at the University of Paris-Descartes; and at ENS, Cachan. He is the coordinator of the DSPCom Laboratory at UNICAMP, and his research interests include adaptive filtering, unsupervised signal processing, and applications in communication systems.

Romis Ribeiro de Faissol Attux is an assistant professor at the University of Campinas (UNICAMP), Campinas, São Paulo, Brazil. He received his BS, MS, and PhD in electrical engineering from UNICAMP in 1999, 2001, and 2005, respectively. He is a researcher in the DSPCom Laboratory. His research interests include blind signal processing, independent component analysis (ICA), nonlinear adaptive filtering, information-theoretic learning, neural networks, bio-inspired computing, dynamical systems, and chaos.

Charles Casimiro Cavalcante is an assistant professor at the Federal University of Ceará (UFC), Fortaleza, Ceará, Brazil. He received his BSc and MSc in electrical engineering from UFC in 1999 and 2001, respectively, and his PhD from the University of Campinas, Campinas, São Paulo, Brazil, in 2004. He is a researcher in the Wireless Telecommunications Research Group (GTEL), where he leads research on signal processing for communications, blind source separation, wireless communications, and statistical signal processing.

Ricardo Suyama is an assistant professor at the Federal University of ABC (UFABC), Santo André, São Paulo, Brazil. He received his BS, MS, and PhD in electrical engineering from the University of Campinas, Campinas, São Paulo, Brazil, in 2001, 2003, and 2007, respectively. He is a researcher in the DSPCom Laboratory at UNICAMP. His research interests include adaptive filtering, source separation, and applications in communication systems.
1 Introduction

The subject of this book can be summarized by a scheme as simple as the one depicted in Figure 1.1. We have an original set of data of interest that we want, for instance, to transmit, store, or extract useful information from; such data are represented by a quantity s. However, we do not have direct access to s, but only to a modified version of it, which we represent by the quantity x. We can thus state that there is a data mapping H(·) through which the observed data x are obtained:

x = H(s)    (1.1)

Our problem then consists in finding an inverse mapping W(·) to be applied to the available data so that we can, based on a certain performance criterion, recover suitable information about the original set of data. We represent this step by another mapping that provides, from x, what we may call an estimate of s:

ŝ = W(x)    (1.2)

FIGURE 1.1 General scheme: the original data s = (s1, ..., sN) are mapped by H(·) onto the observations x = (x1, ..., xM), from which the mapping W(·) produces the estimate ŝ = (ŝ1, ..., ŝN).

The above description is deliberately general, so that a number of different concrete problems fit into it, and a great variety of approaches can be used to tackle them. Depending on the area of knowledge, the aforementioned problem is highly relevant in signal processing, telecommunications, identification and control, pattern recognition, Bayesian analysis, and other fields. The scope of this book is clearly signal processing oriented, with a focus on two major problems: channel equalization and source separation. Even so, this character of the work does not restrict the wide field of application of the theory and tools it presents.
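As a concrete, deliberately trivial illustration of the pair of mappings in (1.1) and (1.2), the sketch below instantiates H(·) as a known, invertible linear mixture; the matrix A and the values of s are hypothetical choices made only for this example, not taken from the book. In the unsupervised problems treated throughout the book, neither the mapping nor the original data are available, and W(·) must be obtained from x alone.

```python
import numpy as np

# Original data s (hypothetical values for this toy example).
s = np.array([1.0, -0.5])

# One possible instance of the mapping H(.): a linear mixture x = A s,
# with an illustrative, invertible mixing matrix A.
A = np.array([[1.0, 0.3],
              [0.2, 1.0]])
x = A @ s

# When H(.) is known and invertible, the inverse mapping W(.) can simply
# undo the mixture; the unsupervised setting is precisely the one in which
# such knowledge is unavailable.
s_hat = np.linalg.inv(A) @ x

print(np.allclose(s_hat, s))  # True: ideal recovery in this idealized case
```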
1.1 Channel Equalization

In general terms, an equalization filter or, simply, an equalizer is a device that compensates for the distortion due to the inadequate response of a given system. In communication systems, it is well known that any physical transmission channel is band-limited, i.e., it necessarily imposes distortion on the transmitted signal if that signal exceeds the allowed passband. Moreover, the channel presents additional impairments, since its frequency response in the passband is often not flat, and it is also subject to noise. In the most tractable case, the channel is assumed linear and time-invariant, i.e., the output is obtained by a temporal convolution, and the noise is assumed Gaussian and additive.

In analog communication systems, channel impairments lead to a continuous-time distortion of the transmitted waveform. In digital communication, information is carried by a sequence of symbols instead of a continuous waveform. These symbols compose a given transmitted signal in accordance with a given modulation scheme. Hence, the noxious effect of channel impairments in digital communications is a wrong symbol decision at the receiver.

Since information is conveyed by a sequence of symbols, it is suitable to employ a discrete-time model for the system, so that both the channel and the equalizer may be viewed as discrete-time filters, and the involved signals as numerical sequences. The problem may then be represented by the scheme in Figure 1.2, where s(n) is the transmitted signal, ν(n) is the additive noise, x(n) is the received signal, i.e., the equalizer input, and ŝ(n) is the estimate of the transmitted signal, provided by the equalizer through the mapping

ŝ(n) = W[x(n)]    (1.3)

FIGURE 1.2 Equalization scheme: the transmitted signal s(n) passes through the channel, the noise ν(n) is added to the channel output, and the equalizer maps the received signal x(n) into the estimate ŝ(n).

Since the channel is linear, we can characterize it by an impulse response h(n), so that the mapping provided by the channel may be expressed as

H[s(n)] = s(n) ∗ h(n)    (1.4)

where ∗ stands for the discrete-time convolution, and then

x(n) = s(n) ∗ h(n) + ν(n)    (1.5)
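The discrete-time model of (1.3) through (1.5) can be simulated directly. The sketch below, with a hypothetical three-tap channel, noise level, and binary alphabet chosen only for illustration, generates a received sequence x(n) from transmitted symbols s(n).

```python
import numpy as np

rng = np.random.default_rng(1)

# Transmitted symbol sequence s(n): binary (+1/-1) symbols, a hypothetical
# modulation choice for this example.
s = rng.choice([-1.0, 1.0], size=1000)

# Hypothetical linear time-invariant channel impulse response h(n).
h = np.array([1.0, 0.5, 0.2])

# Additive Gaussian noise nu(n), matching the length of the convolution.
sigma = 0.1
nu = sigma * rng.standard_normal(len(s) + len(h) - 1)

# Received signal x(n) = s(n) * h(n) + nu(n), as in (1.5).
x = np.convolve(s, h) + nu
```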
where ∗ stands for the discrete-time convolution, and then

x(n) = s(n) ∗ h(n) + ν(n)    (1.5)

Clearly, the desired situation corresponds to a correct recovery of the original sequence s(n), except for a delay and a constant factor, which can include a phase rotation if we deal with the most general case of complex symbols. This ideal situation is named the zero-forcing (ZF) condition. As explained further in the book, the name comes from the fact that, under such conditions, all terms associated with the intersymbol interference (ISI) are "forced to zero." So, if the global system formed by the channel and the equalizer establishes a global mapping G(·), the ZF condition leads to

G[s(n)] = ρ s(n − n0)    (1.6)

where n0 is a delay and ρ is the constant factor.

Once ρ and n0 are known or estimated, ideal operation under the ZF condition leads to the correct retrieval of all transmitted symbols. However, as we could expect, such a condition is not attainable in practice, due to the nonideal character of W[·] and to the effect of noise. Hence, a more suitable approach is to search for the equalizer W[·] that provides a minimal quantity of errors in the process of symbol recovery. Considering the stochastic nature of the transmitted information and the noise, the most natural mathematical procedure is to deal with the notion of probability of error. In this sense, the first effective solution is credited to Forney [111], who considered the Viterbi algorithm for symbol recovery in the presence of ISI. In its turn, the Viterbi algorithm was conceived for decoding convolutional codes in digital communications, in accordance with a maximum-likelihood (ML) criterion [300].

One year after Forney's paper, the BCJR algorithm, named after its inventors [24], was proposed for decoding, but in accordance with a maximum a posteriori (MAP) criterion. In this case, recovery was carried out on a symbol-by-symbol basis instead of recovering the best sequence, as in the Viterbi approach.

When the transmitted symbols are equiprobable, the ML and MAP criteria lead to the same result. So, the Viterbi algorithm minimizes the probability of detecting a whole sequence erroneously, while the BCJR algorithm minimizes the probability of error for each individual symbol. The adaptive (supervised and unsupervised) techniques considered in this book are typically based on symbol-by-symbol recovery.
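To make the discrete-time model concrete, the following minimal NumPy sketch simulates Equation (1.5). The binary alphabet, the three-tap channel h, and the noise level are arbitrary choices made here for illustration, not values taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# i.i.d. symbols drawn from a finite alphabet (BPSK-like: {-1, +1})
s = rng.choice([-1.0, 1.0], size=1000)

# Illustrative linear time-invariant channel impulse response h(n)
h = np.array([1.0, 0.5, 0.2])

# x(n) = s(n) * h(n) + v(n)  -- the model of Equation (1.5)
v = 0.1 * rng.normal(size=len(s) + len(h) - 1)   # additive Gaussian noise
x = np.convolve(s, h) + v

# Each received sample mixes up to three consecutive symbols:
# this overlap is the intersymbol interference the equalizer must undo.
```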
We will refer to as the Bayesian equalizer the mapping W[·] that provides the minimal probability of error under symbol-by-symbol recovery. It is important, from now on, to think of the Bayesian equalizer as our reference of optimality. However, due to its nonlinear character, its mathematical derivation will be postponed to Chapter 7.

Optimal equalizers derived from ML and/or MAP criteria are unfortunately not so straightforward to implement in practice [112], especially in realistic scenarios that involve real-time operation at high bit rates, nonstationary environments, etc. Taking into account the inherent difficulties of a practical communication system, the search for suitable solutions to the equalization problem includes the following steps:

• To implement the mapping W by means of a linear finite impulse response (FIR) filter followed by a nonlinear symbol-recovering (decision) device.
• To choose a more feasible, although suboptimal, criterion instead of that of probability of error.
• To derive operative (adaptive, if desirable) procedures to obtain the equalizer in accordance with the chosen criterion.
• To use (as much as possible) prior knowledge about the transmitted signal and/or the channel in the aforementioned procedures.

Taking the above steps into account, the mapping W[x(n)] is accomplished by

y(n) = x(n) ∗ w(n)    (1.7)

and

ŝ(n) = dec(y(n))    (1.8)

where w(n) is the equalizer impulse response, y(n) is the equalizer output, and dec(·) stands for the decision device.

In addition, we can now define the combined channel-plus-equalizer response as

g(n) = h(n) ∗ w(n)    (1.9)

so that the ZF condition can be simply stated if we define a vector g, the elements of which are those of the sequence g(n). The ZF condition holds if and only if

g = [0, . . . , 0, ρ, 0, . . . , 0]^T    (1.10)

where the position of ρ in g is associated with the equalization delay.
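The ZF condition of (1.10) can be checked numerically. In the sketch below, a hypothetical two-tap channel is paired with a truncated version of its exact inverse, and the combined response g(n) = h(n) ∗ w(n) of Equation (1.9) is computed; the specific taps are our own, chosen only so that g is visibly close to the ideal vector.

```python
import numpy as np

h = np.array([1.0, 0.5])                          # illustrative channel
w = np.array([(-0.5) ** k for k in range(30)])    # truncated inverse filter

g = np.convolve(h, w)                             # g(n) = h(n) * w(n), Eq. (1.9)
print(np.round(g[:5], 4))                         # ~[1. 0. 0. 0. 0.]

# Up to the tiny truncation residue in the last tap, g matches
# [rho, 0, ..., 0]^T with rho = 1 and zero delay: the ZF condition (1.10).
```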
As far as the criterion is concerned, the discussion is in fact founded on the field of estimation theory. From it, we take two useful possibilities as our main practical tools: the mean-squared error (MSE) and the least-squares (LS) criteria.

For the operative procedure, we have two distinct possibilities: taking into account the whole transmitted sequence to obtain an optimized equalizer for that set of data (data acquisition first, equalizer optimization afterward), or adjusting the equalizer as the data become available at the receiver (joint acquisition and optimization). In this second case, we speak of adaptive equalization. Finally, the use of a priori information is closely related to the possibility of putting into practice a mechanism of supervision or training over the system. If such a mechanism can be periodically implemented, we speak of supervised equalization, while the absence of supervision leads to the unsupervised or blind techniques. To a certain extent, this book discusses a vast range of possible approaches to these three steps, with a clear emphasis on adaptive and unsupervised methods.

We can easily observe that the problem of channel equalization, as depicted in Figure 1.2, fits the general problem of Figure 1.1 for the particular case of M = N = 1. Another particularization is related to the hypothesis on the transmitted signal: as a rule, it is considered to be a sequence of independent and identically distributed (i.i.d.) random variables that belong to a finite alphabet of symbols. This last aspect clearly imposes the use of a symbol-recovering device. Regarded in this light, the problem is referred to as SISO channel equalization, since both the channel and the equalizer are single-input single-output filters.

Nevertheless, we can also consider a communication channel with multiple inputs and/or multiple outputs. A typical and practical case is a wireless link with multiple antennas at the transmitter and/or at the receiver. In this book, we will especially consider the following cases, to be discussed in Chapter 5:

• A single-input multiple-output (SIMO) channel with a multiple-input single-output (MISO) equalizer, which corresponds to N = 1 and M > 1 in Figure 1.1.
• A multiple-input multiple-output (MIMO) channel with a multiple-input multiple-output (MIMO) equalizer, which corresponds to N > 1 and M > 1 in Figure 1.1.

1.2 Source Separation

The research work on SISO blind equalization was particularly intense during the 1980s. At that time, another challenging problem in signal processing was proposed: that of blind source separation (BSS). In general terms,
such a problem can be simply explained by the classical example known as the cocktail party phenomenon, in which a number of speakers talk at the same time in the same noisy environment. In order to focus attention on a specific speaker s1, a given receiver must retrieve the corresponding signal from a mixture of all signals {s1, . . . , sN}, where N is the number of speakers. Despite the human ability to perform this task, a technical solution for blind separation was unknown until the work of Hérault et al. in 1985 [144].

As stated above, the BSS problem also fits the scheme of Figure 1.1. The possibility of obtaining proper solutions depends on the hypotheses we adopt for the mapping H(·) and for the set of original signals, or sources, s. The most tractable case emerges from the following assumptions:

• The mapping H(·) stands for a linear and memoryless system, with M = N.
• The sources {s1, . . . , sN} are assumed to be mutually independent signals.
• There is, at most, one Gaussian source.

The main techniques for solving BSS under these assumptions come from the principle of independent component analysis (ICA) [74]. Such techniques are based on searching for a separating system W(·), the parameters of which are obtained in accordance with a given criterion that imposes statistical independence between the set of outputs ŝ. As pointed out in [137], ICA may be viewed as an extension of the well-known principal component analysis (PCA), which deals only with the second-order statistics of the involved signals.

Although the blind equalization and source separation problems originated independently and in somewhat distinct scientific communities, we can clearly observe a certain "duality" between them:

• In SISO channels, the output is a linear combination (temporal convolution) of the elements of the transmitted signal with additive Gaussian noise. In BSS, the set of outputs comes from the linear mixture of signals, among which at most one can be Gaussian.
• In SISO equalization, we try to recover a sequence of independent symbols that correspond to the transmitted signal. In BSS, we search for a set of independent variables that correspond to the original sources.
• In both cases, dealing with second-order statistics is not sufficient: the output of a SISO channel may be whitened, for instance, by a prediction-error filter, while the outputs of the mixing system may be decorrelated by a PCA procedure. However, as we will stress later in the book, neither of these procedures can guarantee a correct retrieval.
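The last point, that decorrelation alone cannot separate sources, can be illustrated with a small experiment; the mixing matrix and source distributions below are arbitrary illustrative choices, not a method prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.uniform(-1, 1, size=(2, 10000))     # two independent non-Gaussian sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                  # illustrative mixing matrix: x = A s
x = A @ s

# PCA-style whitening: z has (approximately) an identity covariance matrix...
d, E = np.linalg.eigh(np.cov(x))
z = np.diag(d ** -0.5) @ E.T @ x
print(np.round(np.cov(z), 2))               # ~ identity: decorrelated outputs

# ...yet z is still an orthogonal rotation of the sources: second-order
# statistics cannot resolve this residual rotation, which is why ICA must
# resort to higher-order statistics (non-Gaussianity).
```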
The above considerations will become clearer, and will be revisited more rigorously, in the following chapters. Nevertheless, it is worth remarking on these points in this introduction to illustrate the interest in bringing unsupervised equalization and source separation into a common theoretical framework.

On the other hand, BSS becomes a more challenging problem as the aforementioned assumptions are discarded. The case of a mixing system with memory corresponds to the more general problem of convolutive mixtures. Such a problem is rather similar to that of MIMO equalization. As a rule in this book, we consider convolutive BSS to be the more general problem since, in MIMO channel equalization, we usually suppose that the transmitted signals have the same statistical distributions and belong to a finite alphabet. This is not at all the case in other typical applications of BSS.

If the hypothesis of linear mixing is discarded, the solution of BSS problems requires special care, particularly in applying ICA. Such a solution may involve the use of nonlinear devices in the separating systems, as done in the so-called post-nonlinear model. It is worth mentioning that nonlinear channels can also be considered in communications, and different approaches have been proposed for nonlinear equalization, including the widely known decision feedback equalizer (DFE). Overall, our problem certainly becomes more intricate when nonlinear mappings take place in H(·) and/or in W(·), as we will discuss in more detail in Chapter 6.

Furthermore, other scenarios in BSS deserve the attention of researchers, such as underdetermined mixtures, i.e., scenarios in which M < N in Figure 1.1; correlated sources; sparse sources; etc.

1.3 Organization and Contents

We have organized the book as follows:

Chapter 2 reviews the fundamental concepts concerning the characterization of signals and systems. The purpose of this chapter is to emphasize some notions and tools that are necessary in the sequence of the book. For the sake of clarity, we first deal with deterministic concepts and then introduce statistical characterization tools. Although many readers may be familiar with these subjects, we provide a synthetic presentation of the following topics: signal and system definitions and main properties; basic concepts of discrete-time signal processing, including the sampling theorem; fundamentals of probability theory, including topics like cumulants, which are particularly useful in the context of unsupervised processing; a review of stochastic processes, with a specific topic on discrete-time random signals; and, finally, a section on estimation theory.

In order to establish the foundations of unsupervised signal processing, we present in Chapter 3 the theory of optimal and adaptive filtering in the
classic scenario of linear and supervised processing. As already mentioned, many books are devoted to this rich subject and present it in a more exhaustive fashion. We opt for a brief and, to a certain extent, personal presentation that facilitates the introduction of the central themes of the book. First, we discuss three emblematic problems in linear filter theory: identification, deconvolution, and prediction. From there, the specific case of channel equalization is introduced. Then, as usually done in the literature, we present Wiener filtering theory as the typical solution for supervised processing and a paradigm for adaptive procedures. The sections on supervised adaptive filtering discuss the celebrated LMS and RLS algorithms, as well as the use of structures alternative to the linear FIR filter. Moreover, in Chapter 3 we introduce the notion of optimal and adaptive filtering without a reference signal, as a first step toward blind techniques. In this context, we discuss the problem of constrained filtering and revisit that of prediction, indicating some relationships between linear prediction and unsupervised equalization.

After establishing the necessary foundations in Chapters 2 and 3, the subject of unsupervised equalization itself is studied in Chapter 4, which deals with single-input single-output (SISO) channels, and in Chapter 5, in which the multichannel case is considered.

Chapter 4 starts with a general discussion of the problem of unsupervised deconvolution, of which blind equalization may be viewed as a particular case. After introducing the specific problem of equalization, we state the two fundamental theorems: Benveniste–Goursat–Ruget and Shalvi–Weinstein. Then we discuss the main adaptive techniques: the so-called Bussgang algorithms, which comprise different LMS-based blind techniques; the Shalvi–Weinstein algorithm; and the super-exponential algorithm. Among Bussgang techniques, special attention is given to the decision-directed (DD) and Godard/CMA approaches, due to their practical interest in communication schemes. We discuss important aspects of the equilibrium solutions and convergence of these methods, using the Wiener MSE surface as a benchmark for performance evaluation. Finally, based on more recent literature, we present some results concerning the relationships between the constant-modulus, Shalvi–Weinstein, and Wiener criteria.

The problem of blind equalization is extended to the context of systems with multiple inputs and/or outputs in Chapter 5. First, we state some theoretical properties concerning these systems. Then we discuss single-input multiple-output (SIMO) channels, which may be engendered, for instance, by two practical situations: temporal oversampling of the received signal or the use of multiple antennas at the receiver. In the context of SIMO equalization, we discuss equalization conditions in the light of Bezout's identity and the second-order methods for blind equalization. Afterward, we turn our attention to the most general scenario, that of multiple-input multiple-output (MIMO) channels. In this case, special attention is given to
multiuser systems, the importance of which is notorious in modern wireless communications.

Chapter 6 deals with blind source separation (BSS), the other central subject of this book. We start the chapter by stating the main models to be used and the standard case to be considered first: that of a linear, instantaneous, and noiseless mixture. Then we introduce a tool of major interest in BSS: independent component analysis (ICA). The first part of Chapter 6 is devoted to the main concepts, criteria, and algorithms used to perform ICA. Afterward, we deal with alternative techniques that exploit prior information, in particular the nonnegative and sparse component decompositions. Then we leave the aforementioned standard case to consider two relevant problems in BSS: those of convolutive and nonlinear mixtures. Both can be viewed as open problems with significant research results in the recent literature, so we focus our brief presentation on some representative methods, with emphasis on the so-called post-nonlinear model.

Chapters 4 through 6 establish the fundamental core of the book, as we try to bring together blind equalization and source separation under the same conceptual and formal framework. The two final chapters consider more emergent techniques that can be applied to the solution of those two problems.

The synergy between the disciplines of machine learning and signal processing has significantly increased during the last decades, as attested by the several regular and specific conferences and journal issues devoted to the subject. From the standpoint of this book, it is quite relevant that a nonnegligible part of this literature is related to unsupervised problems. Chapter 7 presents some instigating connections between nonlinear filtering, machine learning techniques, and unsupervised processing. We start by considering a classical nonlinear solution for adaptive equalization, the DFE structure, since this remarkably efficient approach can be equally used in supervised and blind contexts. Then we turn our attention to more sophisticated structures that present properties related to the idea of universal approximation, like Volterra filters and artificial neural networks. To that end, we first revisit equalization within the framework of a classification problem and introduce an important benchmark in digital transmission: the Bayesian equalizer, which performs a classification task by recovering the transmitted symbols in accordance with the criterion of minimum probability of error. Finally, we discuss two classical artificial neural networks: the multilayer perceptron (MLP) and the radial basis function (RBF) network. The training process of these networks is illustrated with the aid of classical results, like the backpropagation algorithm and the k-means algorithm.

The methods and techniques discussed throughout this book arise, after all, from a problem of optimization. The solutions are obtained, as
  • 37. 10 Unsupervised Signal Processing a rule, by the minimization or maximization of a given criterion or cost- function. The bio-inspired optimization methods discussed in Chapter 8, however, are part of a different paradigm, as they are founded on a number of complex processes found in nature. These methods are generally charac- terized by a significant global search potential and do not require significant a priori information about the problem to be solved, which encourages appli- cation, for instance, in nonlinear and/or unsupervised contexts. Chapter 8 closes the book by considering this family of techniques, which are finding increasing applications in signal processing. Given the vastness of the sub- ject, we limit our discussion to three potentially suitable approaches, taking into account our domain of interest: genetic algorithms, artificial immune systems, and particle swarm optimization methods. The book presents enough material for a graduate course, since blind techniques are increasingly present in graduate programs, and can also be used as a complementary reference for undergraduate students. According to the audience, Chapter 2 can be skipped, and even some topics of Chap- ter 3, if the students have the possibility of attending a specific course on adaptive filtering theory. Furthermore, the content of Chapters 7 and 8 can be adapted to the audience and also serves as a complementary material for courses on machine learning and/or optimization. Overall, it is worth emphasizing that a course on unsupervised signal processing theory, com- prising blind equalization and source separation, must not be organized in a rigid way, but following the interests of different institutions. Finally, it is worth emphasizing that adaptive filtering, unsupervised equalization, source separation, and related themes present a number of recent results and open problems. Necessarily, and to preserve the main focus of this book, some of them were omitted or not dealt with in depth.
2
Statistical Characterization of Signals and Systems

The statistical characterization of signals and systems provides an important framework of concepts and mathematical tools that are fundamental to the modern theory of filtering and signal processing. In signal theory, we denote by statistical signal processing the field of study that treats signals as stochastic processes. The word stochastic is etymologically associated with the notion of randomness. Even though this notion gives rise to different interpretations, in our field of study randomness is related to the concept of uncertainty. Uncertainty, in turn, is present in the essence of information signals in their different forms, as well as in the several types of disturbances that can affect a system.

The subject of the statistical characterization of signals and systems is extensive and has been built over more than two centuries, as a result of classical works on statistical inference, linear filtering, and information theory. Nevertheless, the purpose of this chapter is rather objective and, in a way, unpretentious: to present the basic foundations and to emphasize some concepts and tools that are necessary to the understanding of the next chapters. With this aim in mind, we have chosen five main topics to discuss:

• Section 2.1 is devoted to the basic theory of signals and systems. For the sake of systemizing such theory, we first consider signals that do not have randomness in their nature.
• Section 2.2 specifically considers discrete-time signal processing, since most methods presented in the book tend to be implemented using this approach.
• Section 2.3 discusses the foundations of probability theory in order to introduce suitable tools for dealing with random signals. The main definitions and properties are exposed.
• Section 2.4 then deals with the notion of stochastic processes, together with some useful properties. An appendix on the properties of the correlation matrix complements the subject.
• Finally, Section 2.5 discusses the main concepts of estimation theory, a major area of statistical signal processing with strong connections to optimal filtering, which is the subject of the following chapter.
Historical Notes

The mathematical foundations of the theory of signals and systems were established by eminent mathematicians of the seventeenth and eighteenth centuries. This coincides, in a way, with the advent of calculus, since the representation of physical phenomena in terms of functions of continuous variables and differential equations gave rise to an appropriate description of the behavior of continuous signals and systems. Furthermore, as mentioned by Alan Oppenheim and Ronald Schafer [219], the classical works on numerical analysis developed by names like Euler, Bernoulli, and Lagrange sowed the seeds of discrete-time signal processing.

The bridge between continuous- and discrete-time signal processing was theoretically established by the sampling theorem, introduced in the works of Harry Nyquist in 1928 and D. Gabor in 1946, and definitively proved by Claude Shannon in 1949. Notwithstanding this central result, signal processing was typically carried out by analog systems and in a continuous-time framework, basically due to the performance limitations of the existing digital machines. Simultaneously with the development of computers, a landmark result appeared: the proposition of the fast Fourier transform algorithm by Cooley and Tukey in 1965. Indeed, this result has been considered one of the most important in the history of discrete-time signal processing, since it opened a perspective of practical implementation of many other algorithms in digital hardware.

Two other branches of mathematics are fundamental to the modern theory of signals and systems: functional analysis and probability theory. Functional analysis is concerned with the study of vector spaces and operators acting upon them, which are crucial for different methods of signal analysis and representation. From it is derived the concept of Hilbert space, the denomination of which is due to John von Neumann in 1929, in recognition of the work of the great mathematician David Hilbert. This is a fundamental concept for describing signals and systems in a transformed domain, including the Fourier transform, a major tool in signal processing, the principles of which had been introduced a century earlier by Jean-Baptiste Joseph Fourier.

Probability theory allows extending the theory of signals and systems to scenarios where randomness or uncertainty is present. The creation of a mathematical theory of probability is attributed to two great French mathematicians, Blaise Pascal and Pierre de Fermat, in 1654. Over three centuries, important works were written by names like Jakob Bernoulli, Abraham de Moivre, Thomas Bayes, Carl Friedrich Gauss, and many others. In 1812, Pierre de Laplace introduced a host of new ideas and mathematical techniques in his book Théorie Analytique des Probabilités [175].

Since Laplace, many authors have contributed to developing a mathematical probability theory precise enough for use in mathematics as well
as suitable for application to a wide range of practical problems. The Russian mathematician Andrei Nikolaevich Kolmogorov established a solid landmark in 1933 by proposing the axiomatic approach that forms the basis of modern probability theory [169]. A few years later, in his classical paper [272], Shannon made use of probability in the definition of entropy, in order to "play a central role in information theory as measures of information, choice and uncertainty." This fundamental link between uncertainty and information raised many possibilities of using statistical tools in the characterization of signals and systems within all fields of knowledge concerned with information processing.

2.1 Signals and Systems

Information exchange has been a vital process since the dawn of mankind. If we consider our routine for a moment, we will probably be able to point out several sources of information that belong to our everyday life. Nevertheless, "information in itself" cannot be transmitted. A message must find its proper herald; this is the idea of a signal.

We shall define a signal as a function that bears information, while a system shall be understood as a device that produces one or more output signals from one or more input signals. As mentioned in the introduction of this chapter, the proper way to address signals and systems in the modern theory of filtering and signal processing is by means of their statistical characterization, due to the intrinsic relationships between information and randomness. Nevertheless, for the sake of systemizing such theory, we first consider signals that do not have uncertainty in their nature.

2.1.1 Signals

In simple terms, a signal can be defined as an information-bearing function. The more we probe into the structure of a certain signal, the more information we are able to extract. A cardiologist can find out a lot about your health by simply glancing at an ECG. Conversely, someone without adequate training would hardly avoid a commonplace appreciation of the same data set, which leads us to a conclusion: signals have but small practical value without efficient means to interpret their content. From this it is easy to understand why so much attention has been paid to the field of signal analysis.

Mathematically, a function is a mapping that associates elements of two sets: the domain and the codomain. The domain of a signal is usually, although not necessarily, related to the idea of time flow. In signal processing, there are countless examples of temporal signals: the electrical stimulus produced by a microphone, the voltage in a capacitor, the daily peak temperature
profile of a given city, etc. In a number of cases, signals can be, for instance, functions of space: the gray-level intensity of a monochrome image, the set of measures provided by an array of sensors, etc. Spatiotemporal signals may also be of great interest, the most typical example being a video signal, which is a function of a two-dimensional domain: space and time.

In this book, we deal much more frequently with temporal signals, but some cases of space-time processing, like the use of antenna arrays in particular channels, are also relevant to the present work. In any case, it is interesting to expose some important properties concerning the nature of a signal, as well as ways of classifying and characterizing signals.

2.1.1.1 Continuous- and Discrete-Time Signals

Insofar as the domain of temporal signals is concerned, there are two possibilities of particular relevance: to establish a continuum or an integer set of time values. In the former case, the chosen domain engenders a continuous-time signal, which is mathematically described by a function of a continuous variable, denoted by x(t). Conversely, if time dependence is expressed by means of a set of integer values, it gives rise to a discrete-time signal, which is mathematically described by a numerical sequence, denoted by x(n). For instance, a signal received by a microphone or an antenna can be assumed to be a continuous-time signal, while a daily stock quote is a discrete-time signal.

2.1.1.2 Analog and Digital Signals

A signal whose amplitude can assume any value in a continuous range is an analog signal, which means that it can assume an infinite number of values. On the other hand, if the signal amplitude assumes only a finite number of values, it is a digital signal. Figure 2.1 illustrates examples of the different types of signals.

It should be clear that a continuous-time signal is not necessarily an analog signal, just as a discrete-time signal may not be digital. The terms continuous-time and discrete-time refer to the nature of the signals along time, while the terms analog and digital qualify the nature of the signal amplitude. This is shown in Figure 2.1.

2.1.1.3 Periodic and Aperiodic/Causal and Noncausal Signals

A signal x(t) is said to be periodic if, for some positive constant T,

x(t) = x(t + T)    (2.1)

for all t. The smallest value of T for which (2.1) holds is the period of the signal. Signals that do not exhibit periodicity are termed aperiodic signals.

From (2.1), we notice that a periodic signal should not change if shifted in time by a period T. Also, it must start at t = −∞; otherwise, it would not be
possible to respect the condition expressed in (2.1) for all t. Signals that start at t = −∞ and extend until t = ∞ are denoted infinite-duration signals.

FIGURE 2.1 Examples of analog/digital and continuous-time/discrete-time signals: (a) analog continuous-time signal, (b) analog discrete-time signal, (c) digital continuous-time signal, (d) digital discrete-time signal.

In addition to these definitions, it is interesting to establish the difference between a causal and a noncausal signal. A signal is causal if

x(t) = 0,  t < 0    (2.2)

and is said to be noncausal if the signal starts before t = 0. It is worth mentioning that all these definitions also apply to discrete-time signals.

2.1.1.4 Energy Signals and Power Signals

An energy signal is a signal that has finite energy, i.e.,

\int_{-\infty}^{\infty} |x(t)|^2 \, dt < \infty    (2.3)
A signal with finite nonzero power, i.e., one for which

0 < \lim_{\alpha \to \infty} \frac{1}{\alpha} \int_{-\alpha/2}^{\alpha/2} |x(t)|^2 \, dt < \infty    (2.4)

is called a power signal. All practical signals present finite energy and, thus, are energy signals. Another interesting fact is that a power signal must necessarily be an infinite-duration signal; otherwise, its average energy would tend to zero over a long enough time interval.

2.1.1.5 Deterministic and Random Signals

A deterministic signal is a signal whose physical description, either in mathematical or in graphical form, is completely known. Conversely, a signal whose values cannot be precisely predicted but are known only in terms of a probabilistic description is a random signal.

2.1.2 Transforms

In daily life, our senses are exposed to all kinds of information when we move, or simply as time flows. Since time (and, in a way, this is also true for space) constitutes the most natural domain in which we observe information, it is also a natural standpoint from which to represent and analyze a signal; but it is not the only possibility. Sometimes, a great deal of insight into the characteristics of an information-bearing function can be gained by translating it into another support input space.

We can understand a transform as a mapping that establishes a one-to-one relationship between the representations of a given signal in two distinct domains. As a rule, a transform is employed when the domain in which a signal is presently represented is not the most favorable to the study of one or more of its relevant aspects.

The domain of representation is strongly related to the mathematical concept of a complete orthogonal basis. For instance, if we define the unit impulse function δ(t) (also known as the Dirac delta function) as

\delta(t) = 0, \quad t \neq 0    (2.5)

\int_{-\infty}^{\infty} \delta(\tau) \, d\tau = 1    (2.6)
it is interesting to observe that a continuous-time signal can be written as

x(t) = \int_{-\infty}^{\infty} x(\tau) \delta(t - \tau) \, d\tau    (2.7)

A similar representation can be obtained for discrete-time signals:

x(n) = \sum_{k=-\infty}^{\infty} x(k) \delta(n - k)    (2.8)

where δ(n) denotes the discrete-time unit impulse function (also known as the Kronecker delta function), defined as

\delta(n) = \begin{cases} 1, & n = 0 \\ 0, & \text{otherwise} \end{cases}    (2.9)

It follows, and this is even more evident in the discrete case, that the signal of interest is a linear combination of shifted unit impulse functions. In this sense, these shifted functions can be regarded as a basis for representing the signal.

A change of representation domain corresponds to a change in the basis over which the signal is decomposed. As mentioned before, this can be very important for studying characteristics and properties of the signal that are not directly observed in the form, for instance, of a temporal function or sequence. In the classical theory of signals and systems, representation in the complete orthogonal basis composed of complex exponentials deserves special attention. For purely imaginary exponents, the complex exponential functions and sequences are directly associated with the physical concept of frequency, and such a representation gives rise to the Fourier transform. For general complex exponents, the corresponding decomposition gives rise to the Laplace transform, in the continuous case, and to the z-transform, in the discrete case, both of which are crucial for the study of linear systems. These four important cases are depicted in the sequel.

2.1.2.1 The Fourier Transform of Continuous-Time Signals

As mentioned earlier, the Fourier transform corresponds to the projection of a signal x(t) onto a complete orthogonal basis composed of complex exponentials exp(j 2πft). It is defined as

X(f) = \int_{-\infty}^{\infty} x(t) e^{-j 2\pi f t} \, dt    (2.10)
while the inverse Fourier transform is given by

x(t) = \int_{-\infty}^{\infty} X(f) e^{j 2\pi f t} \, df    (2.11)

2.1.2.2 The Fourier Transform of Discrete-Time Signals

The discrete-time counterpart of the Fourier transform corresponds to the projection of the sequence x(n) onto an orthogonal basis composed of complex exponentials exp(j 2πfn), and is defined as

X(e^{j 2\pi f}) = \sum_{n=-\infty}^{\infty} x(n) e^{-j 2\pi f n}    (2.12)

while the inverse Fourier transform is given by

x(n) = \int_{-1/2}^{1/2} X(e^{j 2\pi f}) e^{j 2\pi f n} \, df    (2.13)

2.1.2.3 The Laplace Transform

The basic idea behind the Laplace transform is to build an alternative representation X(s) of a continuous-time signal x(t) from a basis of complex exponentials:

X(s) = \int_{-\infty}^{\infty} x(t) e^{-st} \, dt    (2.14)

where s = σ + j 2πf. The set of values of s for which the integral in (2.14) converges is called the region of convergence (ROC) of the Laplace transform. The inverse Laplace transform is then given by

x(t) = \frac{1}{2\pi j} \int_{C} X(s) e^{st} \, ds    (2.15)

where C is a suitable contour path.
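Before moving to the z-transform, a quick numerical illustration of the discrete-time Fourier transform of Equation (2.12) may be helpful: the sketch below evaluates it by direct summation for a short, arbitrarily chosen sequence over a grid of normalized frequencies.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # arbitrary finite-length sequence
n = np.arange(len(x))
f = np.linspace(-0.5, 0.5, 512)           # one period of normalized frequency

# X(exp(j 2 pi f)) = sum_n x(n) exp(-j 2 pi f n)  -- Equation (2.12)
X = np.array([np.sum(x * np.exp(-2j * np.pi * fi * n)) for fi in f])

# For this real, even-symmetric x(n), |X| is symmetric about f = 0, and X is
# periodic in f with period 1, as the definition implies.
```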
2.1.2.4 The z-Transform

The z-transform can be understood as the equivalent of the Laplace transform in the context of discrete-time signals. The transform X(z) of a sequence x(n) is defined by

X(z) = \sum_{n=-\infty}^{\infty} x(n) z^{-n}    (2.16)

where z = exp(σ + j 2πf). The ROC of the z-transform is defined as the set of values of z for which the summation in (2.16) converges. The inverse z-transform is defined as

x(n) = \frac{1}{2\pi j} \oint_{C} X(z) z^{n-1} \, dz    (2.17)

where the integral must be evaluated along a path C that encircles all of the poles of X(z).

It is worth mentioning that Equations 2.14 and 2.16 correspond to the so-called bilateral Laplace and z-transforms, which are the most generic representations. For causal signals, it is useful to consider the unilateral transforms, in which the integral and the discrete sum start from zero instead of −∞.

2.1.3 Systems

Having in mind the definition of signal presented at the beginning of Section 2.1.1, we may alternatively define a system as an information-processing device. In Figure 2.2, we present a schematic view of a system. A system can be fully characterized by its input–output relation, i.e., by the mathematical expression that relates its outputs to its inputs. Assuming that the operator S[·] represents the mapping performed by the system, we may write

y = S[x]    (2.18)

where x and y are the input and output vectors, respectively. It is interesting to analyze some important classes of systems and the properties that characterize them.

FIGURE 2.2 Schematic view of a system: the inputs x1, . . . , xN are mapped into the outputs y1, . . . , yM.
2.1.3.1 SISO/SIMO/MISO/MIMO Systems

This classification is based on the number of input and output signals of a system:

• SISO (single-input single-output) systems have a single input signal and a single output signal. Therefore, x and y become scalars.
• SIMO (single-input multiple-output) systems have a single input signal and more than one output signal.
• MISO (multiple-input single-output) systems have multiple input signals and a single output signal.
• Finally, MIMO (multiple-input multiple-output) systems have multiple input and output signals, and form the most general of the four classes.

Throughout the book, the reader will have the opportunity to consider the differences between these classes of systems, the importance of which is patent in modern signal processing techniques.

2.1.3.2 Causal Systems

If the system output depends exclusively on present and past values of the input, the system is said to be causal. In other words, causality means that the output of a system at a given instant is not influenced by future values of the input. When we consider real-time applications, causality will certainly hold. However, when we manipulate acquired data, noncausal systems are acceptable, and may even be desirable in some cases.

2.1.3.3 Invertible Systems

When it is possible to build a mapping that recovers the input signals of a given system from its output, we say that such a system is invertible. This means that it is possible to obtain x from y using an inverse system cascaded with the original one. This notion will be revisited when we analyze the problems of equalization and source separation.

2.1.3.4 Stable Systems

Stability is also a major concern in system analysis. We shall assume that a system is stable if the response to a bounded input is also bounded. In simple words, if the input signal does not diverge to infinity, the output will not diverge either. Stability is a common feature in real-world systems, which we suppose to be restricted by conservation laws, but the same may not occur in some mathematical models and algorithms.
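A hypothetical first-order recursion makes the bounded-input bounded-output notion tangible: for the feedback gains below, |a| < 1 yields a stable response and |a| > 1 an unstable one. The gains and the recursion itself are illustrative choices, not parameters from the text.

```python
import numpy as np

def respond(a, n_samples=200):
    """Output of y(n) = a*y(n-1) + x(n) for the bounded input x(n) = 1."""
    y = np.zeros(n_samples)
    for n in range(1, n_samples):
        y[n] = a * y[n - 1] + 1.0
    return y

print(np.abs(respond(0.9)).max())    # ~10: bounded output, stable system
print(np.abs(respond(1.1)).max())    # enormous: output diverges, unstable system
```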
2.1.3.5 Linear Systems

In system theory, it is often convenient to introduce some classes of possible operators. A very relevant distinction is established between linear and nonlinear systems. Linear systems are those whose defining operator S[·] obeys the following superposition principle:

S[k1 x1 + k2 x2] = k1 S[x1] + k2 S[x2]    (2.19)

The idea of superposition can be explained in simple terms: the response to a linear combination of input stimuli is the linear combination of the individual responses. Conversely, a nonlinear system is simply one that does not obey this principle.

2.1.3.6 Time-Invariant Systems

Another important feature is time invariance. A system is said to be time-invariant when its input–output mapping does not vary with time. When the contrary holds, the system is said to be time-variant. Since time invariance makes a system easier to deal with in mathematical terms, most models of practical systems are, with different degrees of fidelity, time-invariant.

2.1.3.7 Linear Time-Invariant Systems

A very special class of systems is that formed by those that are both linear and time-invariant (linear time-invariant, LTI). These systems obey the superposition principle and have an input–output mapping that does not vary with time. The combination of these desirable properties gives rise to the following mathematical result. Suppose that x(t) and y(t) are, respectively, the input and the output of a continuous-time LTI SISO system. In such a case,

y(t) = h(t) ∗ x(t) = \int_{-\infty}^{\infty} h(\tau) x(t - \tau) \, d\tau    (2.20)

where h(t) is the system impulse response, which is the system output when x(t) is equal to the Dirac delta function δ(t). The symbol ∗ denotes that the output y(t) is the result of the convolution of x(t) with h(t).

Analogously, if x(n) and y(n) are, respectively, the input and the output of a discrete-time LTI SISO system, it holds that

y(n) = h(n) ∗ x(n) = \sum_{k=-\infty}^{\infty} h(k) x(n - k)    (2.21)

where h(n) is the system impulse response, i.e., the system output when x(n) is equal to the Kronecker delta function δ(n). Once more, the symbol ∗ stands for convolution.
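Equation (2.21) is exactly what numpy.convolve computes for finite sequences; the toy impulse response and input below are our own illustrative choices.

```python
import numpy as np

h = np.array([0.5, 1.0, 0.25])        # illustrative impulse response
x = np.array([1.0, 0.0, 0.0, 2.0])    # input: two scaled, shifted impulses

y = np.convolve(x, h)                 # y(n) = sum_k h(k) x(n-k), Eq. (2.21)
print(y)                              # [0.5  1.   0.25 1.   2.   0.5 ]

# The output superposes a copy of h at n = 0 and a doubled copy at n = 3:
# linearity and time invariance at work.
```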
2.1.4 Transfer Function and Frequency Response

An important consequence of the fact that the input and the output of a continuous-time LTI SISO system are related by a convolution integral is that their Laplace transforms are related in a very simple way:

Y(s) = H(s) X(s)    (2.22)

where Y(s) and X(s) are, respectively, the Laplace transforms of the output and the input, and H(s) is the transform of the system impulse response, the so-called transfer function. This means that the input–output relation of an LTI system is the result of a simple product in the Laplace domain.

If the ROCs of X(s), Y(s), and H(s) include the imaginary axis, expression (2.22) can be promptly particularized to the domain of Fourier analysis. In this case, the following holds:

Y(f) = H(f) X(f)    (2.23)

where Y(f) and X(f) are, respectively, the Fourier transforms of the output and the input, and H(f) is the transform of the system impulse response, which is called the frequency response.

It is possible to understand several key features of a given LTI system simply by studying the functions H(s) and H(f). For instance, knowing the frequency response of a system is the key to understanding how it responds to stimuli at any frequency of the spectrum, and how an input signal characterized by a certain frequency content will be processed by it.

The extension to the discrete-time domain is straightforward. If Y(z) and X(z) are the z-transforms of two discrete-time signals related by an LTI system, it is possible to write

Y(z) = H(z) X(z)    (2.24)

and, if the ROCs of X(z), Y(z), and H(z) include the unit circle, expression (2.24) reduces to

Y(e^{j 2\pi f}) = H(e^{j 2\pi f}) X(e^{j 2\pi f})    (2.25)
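The frequency response of a discrete-time LTI system can be obtained by evaluating H(z) on the unit circle, as in (2.25). The three-tap averaging filter below is a hypothetical choice made for this example.

```python
import numpy as np

h = np.array([0.25, 0.5, 0.25])       # illustrative FIR impulse response
n = np.arange(len(h))
f = np.linspace(-0.5, 0.5, 256)

# H(exp(j 2 pi f)): H(z) of Eq. (2.24) evaluated at z = exp(j 2 pi f)
H = np.array([np.sum(h * np.exp(-2j * np.pi * fi * n)) for fi in f])

print(abs(H[f.size // 2]))            # ~1 near f = 0
print(abs(H[0]))                      # 0 at f = -1/2

# |H| decays from 1 at f = 0 to 0 at f = +/-1/2: a low-pass response, so this
# system preserves slow input components and attenuates fast ones.
```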
2.2 Digital Signal Processing

Discrete-time signals can be characterized and stored very easily. This relies on a very relevant feature of discrete-time signals: given a finite time interval, there is a finite set of values that fully characterizes a sequence, whereas the same does not hold for a continuous-time signal. This essential difference is a reflection of the profound structural divergences between the domains of these two classes of information-bearing functions.

The world of digital computers excels in storage capacity and potential for information processing, and is essentially a "discrete-time world." Therefore, it is not surprising that digital signal processing is a widespread tool nowadays. Nevertheless, it is also clear that many of our physical models are inherently based on continuous-time signals. The bridge between this "real world" and the existing digital tools is established by the sampling theorem.

2.2.1 The Sampling Theorem

The idea of sampling is very intuitive, as it is closely related to the notion of measurement. When we measure our height or weight, we are, in a certain sense, sampling the continuous-time signal that expresses the time evolution of these variables. In the context of communications, the sampling process produces, from a continuous-time signal, a representative discrete-time signal that lends itself to proper digital processing and storage. Conditions for equivalent representation and perfect reconstruction of the original signal from its samples were achieved through the sampling theorem, proposed by Harry Nyquist (1928), D. Gabor (1946), and Claude Shannon (1949), and are related to two requirements:

1. The continuous-time signal must be band-limited, i.e., its Fourier spectrum must be null for f > fM.
2. The sampling rate, i.e., the inverse of the time spacing TS between samples, must be higher than or equal to 2fM.

Given these conditions, we are ready to enunciate the sampling theorem [219].

THEOREM 2.1 (Sampling Theorem)

If x(t) is a signal that obeys requirement 1 above, it may be perfectly determined by its samples x(nTS), n integer, if TS obeys requirement 2.

If these requirements are not complied with, the reconstruction process will be adversely affected by a phenomenon referred to as aliasing [219].
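The aliasing phenomenon can be demonstrated numerically: sampling a 10 Hz cosine below the rate demanded by requirement 2 produces samples identical to those of a 5 Hz cosine. All frequencies here are illustrative values chosen for the example.

```python
import numpy as np

f_M = 10.0                 # band limit of x(t) = cos(2*pi*f_M*t), in Hz
Ts_bad = 1 / 15.0          # 15 Hz sampling rate < 2*f_M: requirement 2 violated

n = np.arange(30)
x_sampled = np.cos(2 * np.pi * f_M * n * Ts_bad)

# The under-sampled sequence coincides with the samples of a 5 Hz cosine
# (the image of 10 Hz folded about half the 15 Hz sampling rate):
alias = np.cos(2 * np.pi * 5.0 * n * Ts_bad)
print(np.allclose(x_sampled, alias))   # True: the two are indistinguishable
```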
2.2.2 The Filtering Problem

There are many practical instances in which it is relevant to process information, i.e., to treat signals in a controlled way. A straightforward approach to this task is to design a filter, i.e., a system whose input–output relation is tailored to comply with preestablished requirements. The project of a filter usually encompasses three major stages:

• Choice of the filtering structure, i.e., of the general mathematical form of the input–output relation.
• Establishment of a filtering criterion, i.e., of an expression that encompasses the general objectives of the signal processing task at hand.
• Optimization of the cost function defined in the previous step with respect to the free parameters of the structure defined in the first step.

It is very useful to divide the universe of discrete-time filtering structures into two classes: linear and nonlinear. There are two basic types of linear digital filters: finite impulse response (FIR) filters and infinite impulse response (IIR) filters. The main difference is that FIR filters are, by nature, feedforward devices, whereas IIR filters are essentially related to the idea of feedback.

On the other hand, nonlinearity is essentially a negative concept; therefore, there are countless possible classes of nonlinear structures, which means that the task of treating the filtering problem in general terms is far from trivial. Certain classes of nonlinear structures (like neural networks and polynomial filters, which will be discussed in Chapter 7) share a very relevant feature: they are derived within a mathematical framework related to the idea of universal approximation. Consequently, they have the ability to produce virtually any kind of nonpathological input–output mapping, which is a remarkable feature in a universe as wide as that of nonlinear filters.

A filtering criterion is a mathematical expression of the aims underlying a certain task. The most direct expression of a filtering criterion is its
associated cost function, the optimization of which leads to the choice and adaptation of the free parameters of the chosen structure.

When both the structure and an adequate cost function are chosen, there remains the procedure of optimizing the function with respect to the free parameters of the filtering device. Although there are many possible approaches, iterative techniques are quite usual in practical applications for at least two reasons:

• They avoid the need for explicitly finding closed-form solutions, which, in some cases, can be rather complicated even in static environments.
• Their dynamic nature suits very well the idea of adaptation, which is essential in a vast number of real-world applications.

Adaptation will be a crucial idea in the sequel of this text and, as we will see, the derivation of a number of adaptive algorithms depends on some statistical concepts to be introduced now.
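As a toy illustration of this iterative viewpoint, the sketch below minimizes an empirical mean-square-error cost over a single free parameter by gradient descent. The data model, step size, and iteration count are all hypothetical choices for the example, not an algorithm prescribed by the book at this point.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=5000)                     # input data
d = 0.7 * x + 0.05 * rng.normal(size=5000)    # desired signal; optimum near w = 0.7

w, mu = 0.0, 0.1                              # free parameter and step size
for _ in range(100):
    grad = -2.0 * np.mean((d - w * x) * x)    # gradient of the empirical MSE cost
    w -= mu * grad                            # iterative adjustment of w

print(round(w, 3))                            # ~0.7: the cost-minimizing setting
```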
2.3 Probability Theory and Randomness

Up to this moment, signals have been completely described by mathematical functions that generate information from a support input space. This is the essence of deterministic signals. However, this direct mapping between the input and output spaces cannot be established if uncertainties exist. In such a case, the element of randomness is introduced, and probabilistic laws must be used to represent information. Thus, it is of great interest to review some fundamental concepts of probability theory.

2.3.1 Definition of Probability

Probability is essentially a measure to be employed in a random experiment. When one deals with any kind of random experiment, it is often necessary to establish some conditions in order that its outcome be representative of the phenomenon under study. In more specific terms, a random experiment should have the following three features [135]:

1. The experiment must be repeatable under identical conditions.
2. The outcome ωi of the experiment on any trial is unpredictable before its occurrence.
3. When a large number of trials is run, statistical regularity must be observed in the outcomes, i.e., an average behavior must be identified if the experiment is repeated a large number of times.

The key point in analyzing a random experiment lies exactly in the representation of this statistical regularity. A simple measure thereof is the so-called relative frequency. In order to reach this concept, let us define the following:

• The space of outcomes Ω, or sample space, which is the set of all possible outcomes of the random experiment.
• An event A, which is an element, a subset, or a set of subsets of Ω.

Relative frequency is the ratio between the number of occurrences of a specific event and the total number of experiment trials. If an event A occurs N(A) times over a total number of trials N, this ratio obeys

0 \leq \frac{N(A)}{N} \leq 1    (2.26)

We may state that an experiment exhibits statistical regularity if, for any given sequence of N trials, the ratio in (2.26) converges to the same limit as N becomes large. Therefore, the information about the occurrence of a random event can be expressed by the frequency definition of probability, given by

\Pr(A) = \lim_{N \to \infty} \frac{N(A)}{N}    (2.27)

On the other hand, as stated by Andrey Nikolaevich Kolmogorov in his seminal work [170], "The probability theory, as a mathematical discipline, can and should be developed from axioms in exactly the same way as Geometry and Algebra." Kolmogorov thus established the axiomatic foundation of probability theory. According to this elegant and rigorous approach, we can define a field of probability formed by the triplet {Ω, F, Pr(A)}, where Ω is the space of outcomes, F is a field that contains all possible events of the random experiment,* and Pr(A) is the probability of event A. This measure is chosen so as to satisfy the following axioms:

Axiom 1: Pr(A) ≥ 0
Axiom 2: Pr(Ω) = 1
Axiom 3: If A ∩ B = ∅, then Pr(A ∪ B) = Pr(A) + Pr(B), where ∩ and ∪ stand for the set operations of intersection and union, respectively.

* In the terminology of mathematical analysis, the collection of subsets F is referred to as a σ-algebra [110].
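The frequency definition (2.27) and Axiom 3 are easy to probe by simulation; the die experiment below is a hypothetical example of our own.

```python
import numpy as np

rng = np.random.default_rng(3)
rolls = rng.integers(1, 7, size=100000)   # a fair six-sided die

# Relative frequency N(A)/N for A = "even outcome" approaches Pr(A) = 1/2
print(np.mean(rolls % 2 == 0))            # ~0.5, illustrating Equation (2.27)

# Axiom 3: for the disjoint events {2} and {4}, relative frequencies add
p2, p4 = np.mean(rolls == 2), np.mean(rolls == 4)
print(np.isclose(np.mean((rolls == 2) | (rolls == 4)), p2 + p4))  # True
```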
For a countably infinite sequence of mutually exclusive events, it is possible to state Axiom 3 in the following extended form:

Axiom 3': For mutually exclusive events A1, A2, . . .,

\Pr\left( \bigcup_{i=1}^{\infty} A_i \right) = \sum_{i=1}^{\infty} \Pr(A_i)

From these three axioms, and using set operations, it follows that

\Pr(\bar{A}) = 1 - \Pr(A)    (2.28)

where Ā stands for the complement of the set A, and that, if A ∩ B ≠ ∅, then

\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)    (2.29)

In probability theory, an important and very useful concept is that of independence. Two events Ai and Aj, for i ≠ j, are said to be independent if and only if

\Pr(A_i \cap A_j) = \Pr(A_i) \Pr(A_j)    (2.30)

It is also important to calculate the probability of a particular event given the occurrence of another. Thus, we define the conditional probability of Ai given Aj (supposing Pr(Aj) ≠ 0) as

\Pr(A_i | A_j) \triangleq \frac{\Pr(A_i \cap A_j)}{\Pr(A_j)}    (2.31)

It should be noted that, if Ai and Aj are independent, then Pr(Ai|Aj) = Pr(Ai). This means that knowledge about the occurrence of Aj does not modify the probability of occurrence of Ai. In other words, the conditional probability of independent events is completely described by their individual probabilities.

Computation of the probability of a given event can be performed with the help of the theorem of total probability. Consider a finite or countably infinite set of mutually exclusive (Ai ∩ Aj = ∅ for all i ≠ j) and exhaustive (\bigcup_i A_i = \Omega) events. The probability of an arbitrary event B is given by

\Pr(B) = \sum_{i} \Pr(A_i \cap B) = \sum_{i} \Pr(A_i) \Pr(B | A_i)    (2.32)

2.3.2 Random Variables

A deterministic signal is defined in accordance with an established mathematical formula. In order to deal with random signals, it is important to
  • 62. on Decline and Fall, i. 326; on Dr. Tissot's skill, ii. 77; her story of Gibbon and Madame de Montolieu, ii. 154 Geneva, threatened by French, ii. 317, 322; the Government at, ii. 318; treaty with France, ii. 325, 331, 345; new constitution of, ii. 370 Genoa, Gibbon at, i. 61 Gentleman's Magazine cited, ii. 289, 301, 302, 314, 349 Geoffrin, Madame, i. 29 *George II., ii. 321 George III., i. 45; grants pension to M. de Viry, i. 56; his intervention in Denmark, i. 143; Royal Marriage Bill, i. 154; reviews fleet at Spithead, i. 186; the King's speech and America, i. 238; negotiates for hire of Russian mercenaries, i. 270; and Sir H. Palliser's leg, i. 356; his behaviour to Fox, ii. 34; refuses to dismiss ministers, ii. 100; his illness and recovery, ii. 181, 191; and Lally, ii. 285; reviews troops at Bagshot, ii. 304; proclaims tumultuous meetings, etc., ii. 305; Lally's Plaidoyer, ii. 375. George IV. See Wales, Prince of Germain, Lady George, i. 328
  • 63. Germain, Lord George. See Sackville, Lord *Germain, Sir John, i. 198 Germanie, M. de, ii. 291 *Gibbon, Mrs., née Porten (Gibbon's mother), i. 2 Gibbon, Mrs., née Patton (Gibbon's stepmother), her opinion of Miss Catherine Porten, i. 2; marries Gibbon's father, i. 7; Gibbon's inquiries about, i. 8; subjects of Gibbon's letters to:— Dr. Turton, i. 16, 114, 150, 371; money troubles, i. 19, 352, 359; his own health, i. 83, 114, 150, 158, 246, 321, 322, 371, 377- 379, 399; ii. 12, 108, 129, 141, 166, 248; his father's accident, i. 26; Paris and the Parisians, i. 28-32, 315, 320; Duke of Bedford, i. 30, 32; M. d'Augny: Madame Bontemps, i. 31; Dr. Acton at Besançon, i. 36; his life at Lausanne, i. 39, 42, 49, 50; ii. 76, 88-141 passim, 177; Mdlle. Curchod, i. 40; Voltaire, i. 43, 91; Lady M. W. Montagu's Letters, i. 53; his tour in Italy, i. 63; English visitors at Lausanne, i. 65; Rome to Naples, i. 73; Venice, i. 75; Deyverdun and Miss Comarque, i. 83; the School of Vice, i. 84; Ranelagh Gardens, i. 89; his father's reproaches, i. 98;
  • 64. his father's illness and death, i. 97, 105, 106, 118; fall of the ministry, i. 112; the Remonstrance debate, i. 113; Lenborough, i. 126, 158, 182, 185, 187, 210, 289; Beriton, i. 128, 153; ii. 175, 206, 248; the formal Mr. Bricknall, i. 131-133, 141; Danish revolution, i. 143; Royal Marriage Bill, i. 154; house-hunting in London, i. 171, 172, 175, 179; James Scott's death, i. 177; the Townshend-Bellamont duel, i. 180, 182; his notions of London life, i. 188; his friend Deyverdun, i. 188, 210, 262; ii. 89 et seq., 177, 207; an approaching daughter-in-law, i. 197; Johann C. Bach, i. 204; masquerade at Pantheon, i. 215; Mrs. Gibbon of Northamptonshire, not of Bath, i. 216; Madame de Bavois, i. 220; offer of a seat in Parliament, i. 230, 231; M.P. for Liskeard, i. 234; Godfrey Clarke's illness and death, i. 238, 244; his Parliamentary life, i. 248, 253, 289, 325, 331, 365, 373; his History, see Decline and Fall; story of Essex's ring, i. 276; the Neckers, i. 283, 306, 320; ii. 122; Garrick, i. 289; two answers to his History, i. 295; Dr. Hunter's Anatomy Lectures, i. 304; her groundless fears, i. 305, 306; his Paris friends, i. 315; Duke of Richmond, i. 316; Madame de Genlis, i. 326; at Coxheath Camp, i. 346; his views on matrimony, i. 351; a Lord of Trade, i. 366, 378; Lord Eliot, i. 369, 374, 386, 391;
  • 65. his Mémoire Justificatif, i. 371; Mrs. Williams, i. 372, 374; Irish trade, i. 373; Lord Sheffield's first speech, i. 380; a dissolution expected, i. 380; the Gordon riots, i. 381, 382; Sheffield and the Northumberland Militia, i. 381; Sir Henry Clinton, i. 384; weary of political life, i. 391; George Scott's death, i. 393; M.P. for Lymington, ii. 1; at Brighthelmstone, ii. 3, 7; Hayley, the poet, ii. 8, 17; North's resignation, ii. 13; Board of Trade suppressed, ii. 14; Lady Elizabeth Foster, ii. 15; Rockingham's death, ii. 17; at Single-Speech Hamilton's house, ii. 21; Mrs. Ashby, ii. 22; Pitt, ii. 28; Mrs. Siddons, ii. 29; the Coalition Ministry, ii. 34; retires from Parliament, ii. 58; his Lausanne plans, ii. 58, 61, 64, 71; his propensity for happiness, ii. 88; society at Lausanne, ii. 89, 90, 122; climate at Lausanne, ii. 129; changes in English politics, ii. 131; a regimen of boiled milk, ii. 142; his house and garden, ii. 142, 248; a ministry of respectable boys, ii. 143; intention to visit England, ii. 155; the two Mr. Gibbons, ii. 159; Sheffield Place, ii. 160; Bath, ii. 161; his compliment to Lord North, ii. 170;
  • 66. Cadell's discretion, ii. 176; Hugonin's neglect, ii. 207; the French Revolution, ii. 249, 308; the Sheffields' visit to Lausanne, ii. 309; her illness and recovery, ii. 348; his return to England, ii. 381, 384; at Althorp, ii. 391; his illness, ii. 394, 398. Her letters to Gibbon, ii. 385, 399 Gibbon, Edward (father), subjects of his son's letters to:— First impressions of Lausanne, i. 1; Voltaire, i. 5; a stepmother, i. 10; studies under Pavillard, ibid.; proposed Swiss tour, i. 13; Holland, i. 15; Sir George Elkin's marriage, i. 16; the Lottery, i. 17; King's Scholars' play, i. 18; the Celesias, i. 18, 62; Dr. Maty: Mdlle. de Vaucluse and M. Celesia, i. 20; his London friends, i. 21; hopes of Parliament, i. 23, 45; paternal doubts and suspicions, i. 34; Taafe, i. 35; gambling losses, i. 36, 47; Dr. Acton and Besançon, i. 37; the Swiss Militia, i. 38; financial troubles, i. 45-48, 51, 52, 55, 69, 71, 73, 93-107 passim; Mont Cenis, i. 55; Turin, i. 56; Venice, i. 61; his friend Guise, i. 62; Rome, i. 66;
  • 67. Trajan's Pillar, i. 67; Barazzi the banker, i. 71; Sir T. Worsley, i. 78; a burgess of Newtown, i. 88; the Putney Writings, i. 93; Gosling's mortgage, i. 94, 95. His death, i. 117 Gibbon, Edward— 1753-1772. Under Pavillard's care at Lausanne, i. 1; a gambling scrape: his appeal to Aunt Catherine, i. 3, 4; Voltaire at Geneva, i. 5, 43; his father's second marriage, i. 7; his plans and studies, i. 9-11; his father's silence, i. 13; returns to England, i. 15; the Lottery, i. 17; the Celesias, i. 18, 20; distressed for money, i. 19; his quarrel with Dr. Maty, i. 21; a seat in Parliament—ambitions, hopes, and fears, i. 23, 45; in the Hants Militia, i. 25, 87; at Boulogne, i. 27; friends and acquaintances in Paris, i. 28, 33; Thomas Bradley's affair, i. 35; Dr. Acton at Besançon, i. 36; with his old acquaintance at Lausanne, i. 38 et seq.; Mdlle. Curchod, i. 40, 81; the fall of our tyrant, i. 44; unhappy circumstances of our estate, i. 47; a mixture of books and good company, i. 49; Lady M. W. Montagu's Letters, i. 53; proposed tour in Italy, i. 54; Turin, i. 55, 58; Borromean Islands, i. 57;
  • 68. his snuff box and the King of Sardinia's daughters, i. 58; Milan, i. 60; Genoa, i. 61; Florence, i. 63; Englishmen at Florence, i. 65; Rome, i. 67 et seq.; ways and means, i. 69, 100 et seq., 127, 136, 165-170; the very worst roads in the universe, i. 73; least satisfied with Venice, i. 75; Austrian etiquette, i. 80; separations increase daily, i. 82; the School of Vice, i. 84; Monsieur Olroy's marriage, i. 85; a burgess of Newtown, i. 88; Ranelagh Gardens, i. 89; Voltaire ruined, i. 91; the Putney Writings, i. 93, 105; paternal doubts and suspicions, i. 98; the deed of trust, i. 99, 101; Wentzel, the oculist, i. 105; the plain dish of friendship, i. 108; the Remonstrance debate, i. 113; his father's illness and death, i. 115, 117, 121, 122; Aunt Hester's kind letter, i. 121; detained by Ridottos, i. 124; the Soho masquerade, i. 131; the eternal Bricknall, i. 133; Farmer Gibbon of no use! i. 138; Quis tulerit Gracchos, i. 140; these Denmark affairs, i. 143, 149; Royal Marriage Bill, i. 146, 151, 154; the Pantheon, i. 147; Worthy Champions of the Church, i. 148; the business of Lord and Lady Grosvenor, i. 149; Dr. Nowell's sermon, i. 151; Sir R. Worsley, i. 153;
  • 69. Lord Sheffield's editorial methods, i. 155; Deyverdun's arrival, i. 158 (see also Deyverdun, George); Master Holroyd's death, i. 160; a sprained ankle, i. 161; the loud trumpet of advertisements, i. 163; a tenant for Beriton, i. 165; Lady Rous' house, i. 171-175; North's somnolence, i. 173; James Scott's death, i. 177 1773-1783. Bellamont-Townshend duel, i. 180; a due mixture of study and society, i. 183; the E. I. Co., i. 184, 186, 209, 308; ii. 85; sale of Lenborough, i. 186; ii. 83; Hume: W. Robertson, i. 190; Foote's Bankrupt, i. 192; the beauties of Cornwall, i. 194; declines publication of Chesterfield's Letters, i. 195; an approaching daughter-in-law, i. 197; Fox's debts, i. 198; Kelly's School of Wives, i. 199; a dinner at the Breetish Coffee House, i. 201; Colman's Man of Business, i. 202; heads of a convention, i. 205; Boston Port Bill, i. 206; Mrs. Horneck, i. 207; great news from India, i. 209; receiving one friend and comforting another, i. 210; Johnson and Gibbon—a contrast, i. 213; Boodle's triumph, i. 215; all the news of Versailles, i. 218; Lord Stanley's fête champêtre, i. 219; Madame de Bavois, i. 220; Godfrey Clarke's illness and death, i. 223, 238, 244; a new man for the county, i. 225; Romanzow's victory, i. 227;
  • 70. offer of a seat, i. 228; M.P. for Liskeard, i. 229; dissolution and election, i. 231; Wilkes at the Mansion House, i. 231; a visit to Bath, i. 231; his anxiety for Mrs. Holroyd, i. 237; deep in America, i. 243 (see also America); a party of foxhunters, i. 247; troops for America, i. 249; North's conciliatory scheme, i. 251; a silent member, i. 253; presentation at Court, i. 255; the march to Concord, i. 257; a great historical work, i. 259; his History going to press, i. 261; nothing new from America, i. 265; his dog the comfort of his life, i. 267; his stepmother's small-pox, i. 268; difficulty in raising troops, i. 271; at work on his History, i. 273; the book almost ready, i. 275; story of Essex's ring, i. 276; his History published, i. 279; the Neckers in London, i. 281, 282; poor Mallet, i. 283; Dr. Porteous, i. 285; an Irish edition of the Decline and Fall, i. 288; fears of French war, i. 289; Howe's proclamation, i. 291; Suard translates his History, i. 293; two answers to his book, i. 295; Septehênes' translation of Decline and Fall, i. 297; a war of posts, i. 299; John the Painter, i. 301; his uniform life, i. 302; Hunter's Lectures, i. 304;
  • 71. his stepmother's groundless fears, i. 306; starts for Paris, i. 309; pleasures and occupations in Paris, i. 311; his success in French society, i. 313; his friends and acquaintances, i. 315; no risk of war with France, i. 317; Duc de Choiseul, i. 318; a martyr to gout, i. 321; weary of the war, i. 323; Saratoga, i. 324; Madame de Genlis, i. 326; London a dead calm and delicious solitude, i. 327; conciliation for America, i. 329; suing for peace, i. 331; war with France, i. 333; his private affairs, i. 335; in attendance of my Mama, i. 336; d'Estaing's fleet, i. 337; Keppel and the French frigates, i. 339, 343; Coxheath Camp, i. 340, 346; Brighton unsuitable, i. 345; Paul Jones, i. 347; battle of Ushant, i. 349; an effort of friendship, i. 351; advice to his stepmother, i. 352, 362; prospect of a place, i. 355; Palliser and Keppel, i. 356; his plans of economy, i. 359; Parliament and the Roman Empire, i. 361; a crestfallen ministry, i. 363; at work on his second volume, i. 365; a Lord of Trade, i. 366, 373; disclaims the History of Opposition, i. 369; his Mémoire Justificatif, i. 371; Holroyd for Coventry, i. 375; Rodney's victory, i. 376;
  • 72. a mighty unrelenting tyrant, called the Gout, i. 377; Gordon Riots, i. 380; his two volumes in the press, i. 382; his seat uncertain, i. 385; another seat promised, i. 387; M.P. for Lymington, i. 387, 400; ii. 1; defends his conduct in Parliament, i. 389; weary of political life, i. 391; the Coventry election, i. 393; Holroyd created Lord Sheffield, i. 395; the reception given to his two volumes, i. 397; his annual Gout-tax, i. 399; his house at Brighton, ii. 3; French and Spanish ships in the Channel, ii. 5; Brighton in November, ii. 7; William Hayley, ii. 8, 17; his advice in a quarrel, ii. 9; noise and nonsense of Parliament, ii. 11; fall of North's ministry, ii. 13; his loss of office, ii. 14; Rockingham's death, ii. 17; Shelburne's ministry, ii. 19; immersed in the Roman Empire, ii. 21; his Hampton Court Villa, ii. 23; Lord Loughborough's marriage, ii. 24; relief of Gibraltar, ii. 25; enthusiasm for Sir George Eliott, ii. 27; Pitt, ii. 28; Mrs. Siddons, ii. 29; the dearth of news, ii. 31; Shelburne resigns, ii. 33; Coalition Ministry, ii. 34; his view of English politics, ii. 37; proposes to settle abroad, ii. 38; Deyverdun offers his house, ii. 41; Lausanne society, ii. 43;
  • 73. his gratitude to Deyverdun, ii. 45; his hesitation to accept, ii. 47; his friend and valet, ii. 49; hopes of a political place, ii. 51; social habits at Lausanne, ii. 52; decides to leave England, ii. 55; plan of joining Deyverdun, ii. 57; his departure necessary, ii. 58; his reasons, ii. 61; his preparations, ii. 63; farewell to Sheffield Place, ii. 65; the Peace of Versailles, ii. 67; his departure delayed, ii. 69; the Sheffields' kindness, ii. 71 1783-1794. His journey through France, ii. 73; the Abbé Raynal, ii. 75; the charms of Lausanne, ii. 77; a pension, for Miss Holroyd, ii. 79; proud of Fox, ii. 85; North's insignificance, ii. 87; his daily life, ii. 89; the zeal and diligence of Sheffield's pen, ii. 91; sale of his seat, ii. 93; a factious opposition, ii. 95; arrival of his books, ii. 97; a happy winter, ii. 99; Parliament dissolved, ii. 101; a free-spoken counsellor, ii. 103; English friends, ii. 105; the reign of sinecures over, ii. 107; his house and garden, ii. 108; his hospitalities, ii. 111; his pecuniary affairs, ii. 112; a list of his acquaintances, ii. 115; Prince Henry of Prussia and Mdlle. Necker, ii. 117;
  • 74. thoughts of marriage, ii. 118, 220; loses Caplin, ii. 119; invites the Sheffields, ii. 120; a temperate diet and an easy mind, ii. 123; his establishment at Lausanne, ii. 125; Pitt a favourite abroad, ii. 127; a young man at fifty, ii. 129; changes in English politics, ii. 131; his reported death, ii. 132; a curious question of philosophy, ii. 133; his countrymen at Lausanne, ii. 135; Achilles Pitt and Hector Fox, ii. 136; his History delayed, ii. 139; his health improved, ii. 141; glories of the landskip, ii. 142; Aunt Kitty's death, ii. 144; books longer in making than puddings, ii. 147; hopes to visit England, ii. 149, 155; building a great book, ii. 151; a citizen of the world, ii. 153; his arrival in London, ii. 157; the two Mr. Gibbons, ii. 159; visits his stepmother, ii. 161; a miserable cripple, ii. 163; an unlucky check, ii. 165; an act of duty at Bath, ii. 167; his work and friends, ii. 169; the horrors of shopping and packing, ii. 171; dines with Warren Hastings, ii. 173; sale of Beriton, ii. 175, 189; back at Lausanne, ii. 177; Deyverdun ill, ii. 179, 187; George III. insane, ii. 181; Hugonin dead, ii. 183; Hugonin's deceit, ii. 185; George III. recovers, ii. 191;
  • 75. the Saint ripe for heaven, ii. 193; Deyverdun's death, ii. 194, 207; fierce and erect, a free master, ii. 197; a defect in Beriton title, ii. 199; his idea of adopting Charlotte Porten, ii. 201; a life interest in Deyverdun's house, ii. 203; the authority of Blackstone, ii. 205; Deyverdun's loss irreparable, ii. 207; France's opportunity, ii. 209; French exiles at Lausanne, ii. 210; dirty land and vile money, ii. 213; legal forms benefit lawyers, ii. 215; Sheffield M.P. for Bristol. ii. 216; Aunt Hester's will, ii. 218, 225; a comfortless state, ii. 221; his Madeira almost exhausted, ii. 223; Bruce's Travels, ii. 226; M. Langer, ii. 227; history of the Guelphs, ii. 229; servitude to lawyers, ii. 231; seriously ill, ii. 233; an annuity for Newhaven, ii. 235, 240; Burke's Reflections, ii. 237; Corn Law and Slave Trade, ii. 239; a bargain with the Sheffields, ii. 243; snugness of his affairs, ii. 245; danger of Russian war, ii. 247; effects of French Revolution, ii. 249; Burke a rational madman, ii. 251; Sheffield an anti-democrat, ii. 253; flight and arrest of Louis XVI., ii. 255, 286; the crisis in Paris, ii. 257; Sheffield at the Jacobins, ii. 259; safe in the land of liberty, ii. 261; Switzerland's strange charm, ii. 263; Coblentz and white cockades, ii. 265;
  • 76. the sights of Brussels, ii. 267; military forces on French frontier, ii. 269; the Pilnitz meeting, ii. 271; a distressful voyage, ii. 273; Lally, ii. 274; the demon of procrastination, ii. 277; peace or war in Europe? ii. 279; an amazing push of remorse, ii. 281; Maria's capacity, ii. 283; Lally Tollendal, ii. 284; the hideous plague in France, ii. 287; Massa King Wilberforce, ii. 289; a month with the Neckers, ii. 291; Jacques Necker, ii. 292; the march of the Marseillais, ii. 293; an asylum at Berne, 295; democratic progress in England, ii. 297; Gallic wolves prowl round Geneva, ii. 299; the destiny of his library, ii. 301; his Tabby apprehensions, ii. 303; Opposition and Government, ii. 305; the attempted Pitt-Fox union, ii. 306; taint of democracy, ii. 309; Brunswick's march on Paris, ii. 311; every day more sedentary, ii. 313; French invasion of Savoy, ii. 314; Geneva threatened, ii. 316; prepared for flight, ii. 319; the Irish at their old tricks, ii. 321; the liberty of murdering defenceless prisoners, ii. 323; Sheffield's emigrants, ii. 324; Brunswick's strange retreat, ii. 326, 346; occupants of the hotel in Downing Street, ii. 329; the Geneva flea and the Leviathan France, ii. 331; the Gallic dogs' day, ii. 333; neither a monster, nor a statue, ii. 335;
  • 77. Severy's state hopeless, ii. 336; France's cruel fate, ii. 337; Archbishop of Arles' murder, ii. 339-342; common cause against the Disturbers of the World, ii. 343; Montesquieu's desertion, ii. 345; Necker's defence of the king, ii. 347; associations in London, ii. 349, 353; Is Fox mad? ii. 350; Sheffield's speech, ii. 353; the Egaliseurs, ii. 355; the great question of peace and war, ii. 358; the Memoirs must be postponed, ii. 359; a word or two of Parliamentary and pecuniary concerns, ii. 362; Duke of Portland and Fox, ii. 363, 367; Louis XVI. condemned to death, ii. 365; a miserable Frenchman, ii. 367; poor de Severy is no more, ii. 369; his letter of congratulations to Loughborough, ii. 372; the Pays de Vaud, ii. 373; Madame de Staël at Dorking, ii. 375; a pleasant dinner-party in Downing Street, ii. 377; Lady Sheffield's death, ii. 379; the cannon of the siege of Mayence, ii. 382; safe, well, and happy in London, ii. 384; intends to visit Bath, ii. 387, 389; Lord Hervey's Memorial, ii. 388; a tête-à-tête of eight or nine hours daily, ii. 390; at Althorpe, ii. 391; a serious complaint, ii. 393; hopes of a radical cure, ii. 395; in darkness about Lord Howe, ii. 397; reaches St. James's Street half-dead, ii. 400; account of his last moments, ii. 400, 401 Gibbon, Miss Hester (Gibbon's aunt), the Northamptonshire Saint, i. 7, 134, 244, 295, 398; ii. 91, 185, 187, 190, 193, 218, 222, 225;
  • 78. Gibbon's letters to, i. 15, 121 Gibbon, John, Bluemantle Pursuivant at Arms, ii. 162 Gibraltar, relieved by Rodney, i. 276; by Howe, ii. 19, 25, 27; defended by Lord Heathfield, ii. 25 Gideon, Sir Sampson (Lord Eardley), i. 225, 332; ii. 216 Gilbert, Mr., of Lewes, i. 244, 248, 295 Gilbert, Bett, i. 7 Gilliers, Baron de, ii. 330, 377 Glenbervie, Lord (Sylvester Douglas), ii. 180 Gloucester, Duchess of, i. 173 *Gloucester, Duke of, i. 131; his clandestine marriage, i. 146; on Decline and Fall, i. 396 Glynn, Serjeant, the advocate of Wilkes, i. 90 Godolphin, Lord, i. 172 Goldsmith, Oliver, Gibbon's friendship with, i. 191, 202; his Captain-in-Lace, i. 207; quotation from his Retaliation, i. 210 *Gonchon, M., ii. 352 Gordon, Duchess of, ii. 157, 164, 168
  • 79. Gordon, Lord George, i. 376; No Popery riots, i. 380; sent to the Tower, i. 382 Gordon Riots, the, i. 381 Gosling, the banker, i. 94, 126, 166-168, 332; ii. 110. 281 Gosling's mortgage, i. 94, 116, 126, 166, 187 Gould, Colonel. i. 114, 159, 274 Gould, Mrs., i. 114, 159, 272, 274; ii. 386 Gouvernet, Comte de la Tour-du-Pin, ii. 329 Gower, Lord, i. 148; ii. 86, 255, 311, 360 *Grafton, Duchess of, i. 27 Grafton, Duke of, i. 26, 90, 112, 278, 377; Lord Privy Seal, ii. 13 *Grammont, Duc de (de Guiche), i. 89; ii. 203, 265, 266 *Granby, Marquis of, i. 192 Grand, M., banker at Lausanne, i. 4, 61, 74, 81 Grand, Mdlle. Nanette. See Prevôt, Madame Grantham, Lord, ii. 19 *Grasse, Comte de, ii. 16 Graves, Admiral Lord, i. 384
  • 80. Gray, Booth, i. 254, 264 Grenville Act, the, i. 233 *Grenville Correspondence, i. 44 *Grenville, George, i. 45, 85, 233, 243 Grenville, James, ii. 19, 93 Grenville, Lord, ii. 362, 366 *Greville, Hon. Charles, i. 366 Grey, Mr., and the Friends of the People resolution, ii. 297, 305, 320 Grey, Sir Charles (afterwards 1st Earl), ii. 396 Grey, Sir W. de. See Walsingham, Lord *Grey, Thomas de, i. 366 *Grimaldi, Marquis Jeronymo, i. 30 Grimstone, Mrs., ii. 339 Grosvenor, Lady, i. 149 Grosvenor, Lord, i. 82, 149 Guiche, Duc de. See Grammont, Duc de Guilford, 1st Lord, ii. 86, 164, 238
  • 81. Guilford, 2nd Lord. See North, Lord Guines, Duc de, ii. 210 Guise, Sir William (Gibbon's intimate friend), i. 40, 50, 56, 61, 63, 79, 80, 82, 87, 195 Gunning, Sir Robert, British Envoy at Petersburg, i. 270 *Gustavus III., King of Sweden, ii. 279 H Hague, the, Gibbon at, i. 15 *Hailes, Daniel, ii. 86 *Hales, Sir Philip, i. 250 Hall, James, i. 26 *Hallifax, Sir Thomas, i. 393 *Hamilton, Emma, Lady, i. 74, 214 *Hamilton, Lord Archibald, i. 148 Hamilton, Sir William, British Minister at Naples, i. 74 Hamilton, William Gerard (Single-Speech), i. 343; ii. 21, 31, 396 Hammersley's Bank, ii. 303 Hamond, Sir Andrew Snape, R.N., ii. 81, 93
  • 82. Hampden, Lord, ii. 135 Hampshire Militia, i. 25, 109; Gibbon major in, i. 51; colonel, i. 87; father of, i. 346 Hanger, William (Lord Coleraine), i. 146, 148, 310 Hanley, Mrs., ii. 159 Harbord, Hon. Harbord (afterwards Lord Suffield), i. 250, 252 Harcourt, Earl of, i. 9 Harcourt, Mr., i. 232, 233 Hardy, Sir Charles, i. 347; ii. 72 Hare, James, politician and wit (the Hare and many Friends), i. 201 Harris, John, Lenborough Estate Agent, i. 95, 127, 165, 167, 170; ii. 104 Harrison, John Butler, Gibbon's opinion of, i. 27 Harrison, Mrs., i. 87 Hartley, David, M.P. for Kingston-upon-Hull, i. 240 Harvey, Stephen, i. 95 Hastings, Marquis of. ii. 396 Hastings, Warren, i. 209, 349; Governor-General of India, ii. 26, 85;
  • 83. Welcome to our website – the perfect destination for book lovers and knowledge seekers. We believe that every book holds a new world, offering opportunities for learning, discovery, and personal growth. That’s why we are dedicated to bringing you a diverse collection of books, ranging from classic literature and specialized publications to self-development guides and children's books. More than just a book-buying platform, we strive to be a bridge connecting you with timeless cultural and intellectual values. With an elegant, user-friendly interface and a smart search system, you can quickly find the books that best suit your interests. Additionally, our special promotions and home delivery services help you save time and fully enjoy the joy of reading. Join us on a journey of knowledge exploration, passion nurturing, and personal growth every day! ebookbell.com