Table of Contents
Cover
List of Authors
Preface
Acknowledgment
Notations
Acronyms
About the Companion Website
Part I: Prerequisites
Chapter 1: Introduction
1.1 Why are Source Separation and Speech Enhancement Needed?
1.2 What are the Goals of Source Separation and Speech Enhancement?
1.3 How can Source Separation and Speech Enhancement be Addressed?
1.4 Outline
Bibliography
Chapter 2: Time-Frequency Processing: Spectral Properties
2.1 Time-Frequency Analysis and Synthesis
2.2 Source Properties in the Time-Frequency Domain
2.3 Filtering in the Time-Frequency Domain
2.4 Summary
Bibliography
Chapter 3: Acoustics: Spatial Properties
3.1 Formalization of the Mixing Process
3.2 Microphone Recordings
3.3 Artificial Mixtures
3.4 Impulse Response Models
3.5 Summary
Bibliography
Chapter 4: Multichannel Source Activity Detection, Localization, and Tracking
4.1 Basic Notions in Multichannel Spatial Audio
4.2 Multi-Microphone Source Activity Detection
4.3 Source Localization
4.4 Summary
Bibliography
Part II: Single-Channel Separation and Enhancement
Chapter 5: Spectral Masking and Filtering
5.1 Time-Frequency Masking
5.2 Mask Estimation Given the Signal Statistics
5.3 Perceptual Improvements
5.4 Summary
Bibliography
Chapter 6: Single-Channel Speech Presence Probability Estimation and Noise
Tracking
6.1 Speech Presence Probability and its Estimation
6.2 Noise Power Spectrum Tracking
6.3 Evaluation Measures
6.4 Summary
Bibliography
Chapter 7: Single-Channel Classification and Clustering Approaches
7.1 Source Separation by Computational Auditory Scene Analysis
7.2 Source Separation by Factorial HMMs
7.3 Separation Based Training
7.4 Summary
Bibliography
Chapter 8: Nonnegative Matrix Factorization
8.1 NMF and Source Separation
8.2 NMF Theory and Algorithms
8.3 NMF Dictionary Learning Methods
8.4 Advanced NMF Models
8.5 Summary
Bibliography
Chapter 9: Temporal Extensions of Nonnegative Matrix Factorization
9.1 Convolutive NMF
9.2 Overview of Dynamical Models
9.3 Smooth NMF
9.4 Nonnegative State-Space Models
9.5 Discrete Dynamical Models
9.6 The Use of Dynamic Models in Source Separation
9.7 Which Model to Use?
9.8 Summary
9.9 Standard Distributions
Bibliography
Part III: Multichannel Separation and Enhancement
Chapter 10: Spatial Filtering
10.1 Fundamentals of Array Processing
10.2 Array Topologies
10.3 Data-Independent Beamforming
10.4 Data-Dependent Spatial Filters: Design Criteria
10.5 Generalized Sidelobe Canceler Implementation
10.6 Postfilters
10.7 Summary
Bibliography
Chapter 11: Multichannel Parameter Estimation
11.1 Multichannel Speech Presence Probability Estimators
11.2 Covariance Matrix Estimators Exploiting SPP
11.3 Methods for Weakly Guided and Strongly Guided RTF Estimation
11.4 Summary
Bibliography
Chapter 12: Multichannel Clustering and Classification Approaches
12.1 Two-Channel Clustering
12.2 Multichannel Clustering
12.3 Multichannel Classification
12.4 Spatial Filtering Based on Masks
12.5 Summary
Bibliography
Chapter 13: Independent Component and Vector Analysis
13.1 Convolutive Mixtures and their Time-Frequency Representations
13.2 Frequency-Domain Independent Component Analysis
13.3 Independent Vector Analysis
13.4 Example
13.5 Summary
Bibliography
Chapter 14: Gaussian Model Based Multichannel Separation
14.1 Gaussian Modeling
14.2 Library of Spectral and Spatial Models
14.3 Parameter Estimation Criteria and Algorithms
14.4 Detailed Presentation of Some Methods
14.5 Summary
Acknowledgment
Bibliography
Chapter 15: Dereverberation
15.1 Introduction to Dereverberation
15.2 Reverberation Cancellation Approaches
15.3 Reverberation Suppression Approaches
15.4 Direct Estimation
15.5 Evaluation of Dereverberation
15.6 Summary
Bibliography
Part IV: Application Scenarios and Perspectives
Chapter 16: Applying Source Separation to Music
16.1 Challenges and Opportunities
16.2 Nonnegative Matrix Factorization in the Case of Music
16.3 Taking Advantage of the Harmonic Structure of Music
16.4 Nonparametric Local Models: Taking Advantage of Redundancies in Music
16.5 Taking Advantage of Multiple Instances
16.6 Interactive Source Separation
16.7 Crowd-Based Evaluation
16.8 Some Examples of Applications
16.9 Summary
Bibliography
Chapter 17: Application of Source Separation to Robust Speech Analysis and
Recognition
17.1 Challenges and Opportunities
17.2 Applications
17.3 Robust Speech Analysis and Recognition
17.4 Integration of Front-End and Back-End
17.5 Use of Multimodal Information with Source Separation
17.6 Summary
Bibliography
Chapter 18: Binaural Speech Processing with Application to Hearing Devices
18.1 Introduction to Binaural Processing
18.2 Binaural Hearing
18.3 Binaural Noise Reduction Paradigms
18.4 The Binaural Noise Reduction Problem
18.5 Extensions for Diffuse Noise
18.6 Extensions for Interfering Sources
18.7 Summary
Bibliography
Chapter 19: Perspectives
19.1 Advancing Deep Learning
19.2 Exploiting Phase Relationships
19.3 Advancing Multichannel Processing
19.4 Addressing Multiple-Device Scenarios
19.5 Towards Widespread Commercial Use
Acknowledgment
Bibliography
Index
End User License Agreement
List of Tables
Chapter 1
Table 1.1 Evaluation software and metrics.
Chapter 3
Table 3.1 Range of RT60 reported in the literature for different environments (Ribas et
al., 2016).
Table 3.2 Example artificial mixing effects, from Sturmel et al. (2012).
Chapter 5
Table 5.1 Bayesian probability distributions for the observation and the searched
quantity .
Table 5.2 Criteria for the estimation of .
Table 5.3 Overview of the discussed estimation schemes.
Chapter 6
Table 6.1 Summary of noise estimation in the minima controlled recursive averaging
approach (Cohen, 2003).
Chapter 7
Table 7.1 SDR (dB) achieved on the CHiME-2 dataset using supervised training (2ch:
average of two input channels).
Chapter 14
Table 14.1 Categorization of existing approaches according to the underlying mixing
model, spectral model, spatial model, estimation criterion, and algorithm.
Chapter 17
Table 17.1 Recent noise robust speech recognition tasks: ASpIRE (Harper, 2015), AMI
(Hain et al., 2007), CHiME-1 (Barker et al., 2013), CHiME-2 (Vincent et al.,
2013), CHiME-3 (Barker et al., 2015), CHiME-4 (Vincent et al., 2017), and
REVERB (Kinoshita et al., 2016). See Le Roux and Vincent (2014) for a more
detailed list of robust speech processing datasets.
Table 17.2 ASR word accuracy achieved by a GMM-HMM acoustic model on
MFCCs with delta and double-delta features. The data are enhanced by FDICA
followed by time-frequency masking. We compare two different masking schemes,
based on phase or interference estimates (Kolossa et al., 2010). UD and MI stand for
uncertainty decoding and modified imputation with estimated uncertainties,
respectively, while UD* and MI* stand for uncertainty decoding and modified
imputation with ideal uncertainties, respectively. Bold font indicates the best results
achievable in practice, i.e. without the use of oracle knowledge.
Table 17.3 ASR performance on the AMI meeting task using a single distant
microphone and enhanced signals obtained by the DS beamformer and the joint
training-based beamforming network.
Table 17.4 Human recognition rates (%) of a listening test. Each score is based on
averaging about 260 unique utterances. "Ephraim–Malah" refers to the log-spectral
amplitude estimator of Ephraim and Malah (1985) in the implementation by Loizou
(2007).
Chapter 19
Table 19.1 Average signal-to-distortion ratio (SDR) achieved by the computational
auditory scene analysis (CASA) method of Hu and Wang (2013), a DRNN trained to
separate the foreground speaker, and two variants of deep clustering for the separation
of mixtures of two speakers with all gender combinations at random signal-to-noise
ratios (SNRs) between 0 and 10 dB (Hershey et al., 2016; Isik et al., 2016). The test
speakers are not in the training set.
List of Illustrations
Chapter 1
Figure 1.1 General mixing process, illustrated in the case of sources, including
three point sources and one diffuse source, and channels.
Figure 1.2 General processing scheme for single-channel and multichannel source
separation and speech enhancement.
Chapter 2
Figure 2.1 STFT analysis.
Figure 2.2 STFT synthesis.
Figure 2.3 STFT and Mel spectrograms of an example music signal. High energies are
illustrated with dark color and low energies with light color.
Figure 2.4 Set of triangular filter responses distributed uniformly on the Mel scale.
Figure 2.5 Independent sound sources are sparse in the time-frequency domain. The top
row illustrates the magnitude STFT spectrograms of two speech signals. The bottom
left panel illustrates the histogram of the magnitude STFT coefficients of the first
signal, and the bottom right panel the bivariate histogram of the coefficients of both
signals.
Figure 2.6 Magnitude spectrum of an exemplary harmonic sound. The fundamental
frequency is marked with a cross. The other harmonics are at integer multiples of the
fundamental frequency.
Figure 2.7 Example spectrograms of a stationary noise signal (top left), a note played
by a piano (top right), a sequence of drum hits (bottom left), and a speech signal
(bottom right).
Chapter 3
Figure 3.1 Schematic illustration of the shape of an acoustic impulse response
for a room of dimensions 8.00 × 5.00 × 3.10 m, an RT60 of 230 ms, and a source
distance of m. All reflections are depicted as Dirac impulses.
Figure 3.2 First 100 ms of a pair of real acoustic impulse responses from the
Aachen Impulse Response Database (Jeub et al., 2009) recorded in a meeting room
with the same characteristics as in Figure 3.1 and sampled at 48 kHz.
Figure 3.3 DRR as a function of the RT60 and the source distance based on Eyring's
formula (Gustafsson et al., 2003). These curves assume that there is no obstacle
between the source and the microphone, so that the direct path exists. The room
dimensions are the same as in Figure 3.1.
Figure 3.4 IC of the reverberant part of an acoustic impulse response as a
function of microphone distance and frequency .
Figure 3.5 ILD and IPD corresponding to the pair of real acoustic impulse responses in
Figure 3.2. Dashed lines denote the theoretical ILD and IPD in the free field, as defined
by the relative steering vector in (3.15).
Figure 3.6 Geometrical illustration of the position of a far-field source with respect
to a pair of microphones on the horizontal plane, showing the azimuth , the elevation
, the angle of arrival , the microphone distance , the source-to-microphone
distances and , and the unit-norm vector pointing to the source.
Chapter 4
Figure 4.1 Example of GCC-PHAT computed for a microphone pair and represented
with gray levels in the case of a speech utterance in a noisy and reverberant environment.
The left part highlights the GCC-PHAT at a single frame, with a clear peak at lag
.
Figure 4.2 Two examples of global coherence field acoustic maps. (a) 2D spatial
localization using a distributed network of three microphone pairs, represented by
circles. Observe the hyperbolic high correlation lines departing from the three pairs
and crossing in the source position. (b) DOA likelihood for upper hemisphere angles
using an array of five microphones in the ( , ) plane. High likelihood region exists
around azimuth and elevation , i.e. unit-norm vector
, see Figure 3.6.
Figure 4.3 1 position error distributions (68.3% confidence) resulting from TDOA
errors ( sample) for two microphone pairs in three different geometries.
Figure 4.4 Graphical example of the particle filter procedure.
Chapter 5
Figure 5.1 Separation of speech from cafe noise by binary vs. soft masking. The masks
shown in this example are oracle masks.
Figure 5.2 Illustration of the Rician posterior (5.45) for , ,
and . The red dashed line shows the mode of the posterior and thus the
MAP estimate of target spectral magnitudes . The purple dotted line corresponds to
the approximate MAP estimate (5.47), and the yellow dashdotted line corresponds to
the posterior mean (5.46) and thus the MMSE estimate of .
Figure 5.3 Histogram of the real part of complex speech coefficients (Gerkmann and
Martin, 2010).
Figure 5.4 Input-output characteristics of different spectral filtering masks. In this
example . "Wiener" refers to the Wiener filter, "Ephraim–Malah" to the
short-time spectral amplitude estimator of Ephraim and Malah (1984), and "approx.
MAP" to the approximate MAP amplitude estimator (5.47) of Wolfe and Godsill
(2003). While "Wiener", "Ephraim–Malah", and "approx. MAP" are based on a
Gaussian speech model, "Laplace prior" refers to an estimator of complex speech
coefficients with a super-Gaussian speech prior (Martin and Breithaupt, 2003).
Compared to the linear Wiener filter, amplitude estimators tend to apply less
attenuation for low inputs, while super-Gaussian estimators tend to apply less
attenuation for high inputs.
Figure 5.5 Examples of estimated filters for the noisy speech signal in Figure 5.1. The
filters were computed in the STFT domain but are displayed on a nonlinear frequency
scale for visualization purposes.
Chapter 6
Figure 6.1 State-of-the-art single-channel noise reduction system operating in the
STFT domain. The application of a spectral gain function to the output of the STFT
block results in an estimate of clean speech spectral coefficients . The
spectral gain is controlled by the estimated SNR, which in turn requires the noise
power tracker as a central component.
Figure 6.2 Spectrograms of (a) a clean speech sample, (b) clean speech mixed with
traffic noise at 5 dB SNR, (c) SPP based on the Gaussian model, and (d) SPP based on
the Gaussian model with a fixed a priori SNR prior. Single bins in the time-frequency
plane are considered. The fixed a priori SNR was set to 15 dB and .
Figure 6.3 Power of the noisy speech signal (thin solid line) and estimated noise power
(thick solid line) using the noise power tracking approach according to (6.19). The
slope parameter is set such that the maximum slope is 5 dB/s. Top: stationary white
Gaussian noise; bottom: nonstationary multiple-speaker babble noise.
Figure 6.4 Probability distribution of short-time power ( distribution with 10
degrees of freedom (DoF)) and the corresponding distribution of the minimum of
independent power values.
Figure 6.5 Optimal smoothing parameter as a function of the power ratio
.
Figure 6.6 Power of the noisy speech signal (thin solid line) and estimated noise power
(thick solid line) using the MMSE noise power tracking approach (Hendriks et al.,
2010). Top: stationary white Gaussian noise; bottom: nonstationary multiple-speaker
babble noise.
Figure 6.7 Spectrogram of (a) clean speech, (b) clean speech plus additive amplitude-modulated
white Gaussian noise, (c) minimum statistics noise power estimate, (d)
log-error for the minimum statistics estimator, (e) MMSE noise power estimate, and
(f) log-error for the MMSE estimator. For the computation of these noise power
estimates we concatenated three identical phrases of which only the last one is shown
in the figure. Note that in (d) and (f) light color indicates noise power overestimation
while dark color indicates noise power underestimation.
Chapter 7
Figure 7.1 An implementation of a feature-based CASA system for source separation.
Figure 7.2 Architecture of a GMM-HMM. The GMM models the typical spectral
patterns produced by each source, while the HMM models the spectral continuity.
Figure 7.3 Schematic depiction of an exemplary DNN with two hidden layers and three
neurons per layer.
Figure 7.4 DRNN and unfolded DRNN.
Figure 7.5 Architecture of regression DNN for speech enhancement.
Chapter 8
Figure 8.1 An example signal played by a piano consists of a sequence of notes C, E, and
G, followed by the three notes played simultaneously. The basic NMF models the
magnitude spectrogram of the signal (top left) as a sum of components having
fixed spectra (rightmost panels) and activation coefficients (lowest panels).
Each component represents parts of the spectrogram corresponding to an individual
note.
Figure 8.2 The NMF model can be used to generate time-frequency masks in order to
separate sources from a mixture. Top row: the spectrogram of the mixture
signal in Figure 8.1 is modeled with an NMF. Middle row: the model for an individual
source can be obtained using a specific set of components. In this case, only
component 1 is used to represent an individual note in the mixture. Bottom row: the
mixture spectrogram is elementwise multiplied by the time-frequency mask matrix
, resulting in a separated source spectrogram .
Figure 8.3 Illustration of the separation of two sources, where source-specific models
are obtained in the training stage. Dictionaries and consisting of source-specific
basis vectors are obtained for sources 1 and 2 using isolated samples of each
source. The mixture spectrogram is modeled as a weighted linear
combination of all the basis vectors. Matrices and contain activations of basis
vectors in all frames.
Figure 8.4 Illustration of the model where a basis vector matrix is obtained for the
target source at the training stage and kept fixed, and a basis vector matrix that
represents other sources in the mixture is estimated from the mixture. The two
activation matrices and are both estimated from the mixture.
Figure 8.5 Gaussian composite model (IS-NMF) by Févotte et al. (2009).
Figure 8.6 Harmonic NMF model by Vincent et al. (2010) and Bertin et al. (2010).
Figure 8.7 Illustration of the coupled factorization model, where basis vectors
acquired from training data are elementwise multiplied with an equalization filter
response to better model the observed data at test time.
Chapter 9
Figure 9.1 Learning temporal dependencies. The top right plot shows the input matrix
, which has a consistent left-right structure. The top left plot shows the learned
matrices and the bottom right plot shows the learned activations . The bottom
left plot shows the bases again, only this time we concatenate the corresponding
columns from each . We can clearly see that this sequence of columns learns bases
that extend over time.
Figure 9.2 Extraction of time-frequency sources. The input to this case is shown in the
top right plot. It is a drum pattern composed of four distinct drum sounds. The set of
four top left plots shows the extracted time-frequency templates using 2D convolutive
NMF. Their corresponding activations are shown in the step plots in the bottom right,
and the individual convolutions of each template with its activation as the lower left set
of plots. As one can see this model learns the time-frequency profile of the four drum
sounds and correctly identifies where they are located.
Figure 9.3 Convolutive NMF dictionary elements ( ) for a speech recording. Note
that each component has the form of a short phoneme-like speech inflection.
Figure 9.4 Convolutive NMF decomposition for a violin recording. Note how the one
extracted basis corresponds to a constant-Q spectrum that when 2D convolved with
the activation approximates the input. The peaks in produce a pitch transcription
of the recording by indicating energy at each pitch and time offset.
Figure 9.5 Effect of regularization for . A segment of one of the rows
of is displayed, corresponding to the activations of the accompaniment (piano and
double bass). A trumpet solo occurs in the middle of the displayed time interval, where
the accompaniment vanishes; the regularization smoothes out coefficients with small
energies that remain in unpenalized IS-NMF.
Figure 9.6 Dictionaries were learned from the speech data of a given speaker. Shown
are the dictionaries learned for 18 of the 40 states. Each dictionary is composed of 10
elements that are stacked next to each other. Each of these dictionaries roughly
corresponds to a subunit of speech, either a voiced or unvoiced phoneme.
Figure 9.7 Example of dynamic models for source separation. The four spectrograms
show the mixture and the extracted speech for three different approaches. DPLCA
denotes dynamic PLCA and NHMM denotes nonnegative HMM. The bar plot shows
a quantitative evaluation of the separation performance of each approach. Adapted
from Smaragdis et al. (2014).
Chapter 10
Figure 10.1 General block diagram of a beamformer with filters
.
Figure 10.2 Beampower (10.6) in polar coordinates of the DS beamformer for a
uniform linear array. The beampower is normalized to unity at the look direction. The
different behavior of the beampower as a function of the array look direction and the
signal wavelength is clearly demonstrated.
Figure 10.3 Harmonically nested linear arrays with for to 8 kHz.
Figure 10.4 Constant-beamwidth distortionless response beamformer design with
desired response towards broadside and constrained white noise gain after FIR
approximation of (Mabande et al., 2009) for the array of Figure 10.3.
Figure 10.5 GSC structure for implementing the LCMV beamformer.
Chapter 11
Figure 11.1 High-level block diagram of the common estimation framework for
constructing data-dependent spatial filters.
Figure 11.2 Example of single-channel and multichannel SPP in a noisy scenario.
Figure 11.3 Example of a DDR estimator in a noisy and reverberant scenario.
Figure 11.4 Postfilter incorporating spatial information (Cohen et al., 2003; Gannot and
Cohen, 2004).
Chapter 12
Figure 12.1 Left: spectrograms of a three-source mixture recorded on two directional
microphones spaced approximately 6 cm apart (Sawada et al., 2011). Right: ILD and
IPD of this recording.
Figure 12.2 Histogram of ITD and ILD features extracted by DUET from the recording
in Figure 12.1 along with separation masks estimated using manually selected
parameters.
Figure 12.3 Probabilistic masks estimated by MESSL from the mixture shown in Figure
12.1 using Markov random field mask smoothing (Mandel and Roman, 2015).
Figure 12.4 Multichannel MESSL algorithm. It computes masks for each pair of
microphones in the E-step, then combines masks across pairs, then re-estimates
parameters for each pair from the global masks.
Figure 12.5 Narrowband clustering followed by permutation alignment.
Figure 12.6 Example spectra and masks via a multichannel clustering system on
reverberant mixtures (Sawada et al., 2011).
Figure 12.7 System diagram of a multichannel classification system.
Figure 12.8 Block diagram of spatial filtering based on masks.
Chapter 13
Figure 13.1 Multichannel timefrequency representation of an observed signal (left)
and its slices: frequency-wise and microphone-wise (right). Methods discussed in
this chapter are shown in red.
Figure 13.2 The scatter plot on the left-hand side illustrates the joint probability
distribution of two normalized and uniformly distributed signals. The histograms
correspond to marginal distributions of the two variables. The right-hand side
corresponds to the same signals when mixed by matrix . In the latter
(mixed) case, the histograms are closer to a Gaussian.
Figure 13.3 (a) Examples of generalized Gaussian distributions for (gamma),
(Laplacian), (Gaussian), and . The latter case demonstrates the
fact that for the distribution is uniform. (b) Histogram of a female utterance,
which can be modeled as the Laplacian or gamma distribution.
Figure 13.4 Complex-valued source models. (a) and (b) are based on (13.16). (c) is a
complex Gaussian distribution.
Figure 13.5 A single-trial SIR achieved by EFICA and BARBI when separating 15
different signals (five i.i.d. sequences, three non-white AR Gaussian processes, three
piecewise Gaussian i.i.d. processes (nonstationary) and four speech signals).
Figure 13.6 Flow of FDICA for separating convolutive mixtures.
Figure 13.7 Comparison of two criteria for permutation alignment: magnitude spectrum
and power ratio. Permutations are aligned as each color (blue or green) corresponds to
the same source. Power ratios generally exhibit a higher correlation coefficient for the
same source and a more negative correlation coefficient for different sources.
Figure 13.8 TDOA estimation and permutation alignment. For a two-microphone
two-source situation (left), ICA is applied in each frequency bin and TDOAs for two
sources between the two microphones are estimated (right upper). Each color (navy or
orange) corresponds to the same source. Clustering for TDOAs aligns the permutation
ambiguities (right lower).
Figure 13.9 Experimental setups: source positions a and b for a two-source setup, a
and b and c for a three-source setup, and a through d for a four-source setup. The
number of microphones used is always the same as the number of sources.
Figure 13.10 Experimental results (dataset A): SDR averaged over separated spatial
images of the sources.
Figure 13.11 Spectrograms of the separated signals (dataset A, two sources, STFT
window size of 1024). Here, IVA failed to solve the permutation alignment. These two
sources are difficult to separate because the signals share the same silent period at
around 2.5 s.
Figure 13.12 Experimental results with less active sources (dataset B): SDR averaged
over separated spatial images of the sources.
Chapter 14
Figure 14.1 Illustration of multichannel NMF. , , and represent the complex-valued
spectrograms of the sources and the mixture channels, and the complex-valued
mixing coefficients, respectively. NMF factors the power spectrogram of each
source as (see Chapter 8 and Section 14.2.1). The mixing system is represented
by a rank-1 spatial model (see Section 14.2.2).
Figure 14.2 Block diagram of multichannel Gaussian model based source separation.
Figure 14.3 Illustration of the HMM and multichannel NMF spectral models.
Figure 14.4 Graphical illustration of the EM algorithm for MAP estimation.
Chapter 15
Figure 15.1 Example room acoustic impulse response.
Figure 15.2 Schematic view of the linear-predictive multiple-input equalization
method, after Habets (2016).
Figure 15.3 Data-dependent spatial filtering approach to perform reverberation
suppression.
Figure 15.4 Single-channel spectral enhancement approach to perform reverberation
suppression.
Chapter 16
Figure 16.1 STFT and constant-Q representation of a trumpet signal composed of
three musical notes of different pitch.
Figure 16.2 Schematic illustration of the filter part . Durrieu et al. (2011) defined
the dictionary of filter atomic elements as a set of 30 Hann functions, with 75%
overlap.
Figure 16.3 The source-filter model in the magnitude frequency domain.
Figure 16.4 Local regularities in the spectrograms of percussive (vertical) and
harmonic (horizontal) sounds.
Figure 16.5 REPET: building the repeating background model. In stage 1, we analyze
the mixture spectrogram and identify the repeating period. In stage 2, we split the
mixture into patches of the identified length and take the median of them. This allows
their common part and hence the repeating pattern to be extracted. In stage 3, we use
this repeating pattern in each segment to construct a mask for separation.
Figure 16.6 Examples of kernels to use in KAM for modeling (a) percussive,
(b) harmonic, (c) repetitive, and (d) spectrally smooth sounds.
Figure 16.7 Using the audio tracks of multiple related videos to perform source
separation. Each circle on the left represents a mixture containing music and vocals in
the language associated with the flag. The music is the same in all mixtures. Only the
language varies. Given multiple copies of a mixture where one element is fixed, one can
separate out this stable element (the music) from the varied elements (the speech in
various languages).
Figure 16.8 Interferences of different sources in real-world multitrack recordings.
Left: microphone setup. Right: interference pattern. (Courtesy of R. Bittner and T.
Prätzlich.)
Chapter 17
Figure 17.1 Block diagram of an ASR system.
Figure 17.2 Calculation of MFCC and log Mel-filterbank features. "DCT" is the
discrete cosine transform.
Figure 17.3 Flow chart of feature extraction in a typical speaker recognition system.
Figure 17.4 Flow chart of robust multichannel ASR with source separation, adaptation,
and testing blocks.
Figure 17.5 Comparison of the DS and MVDR beamformers on the CHiME-3 dataset.
The results are obtained with a DNN acoustic model applied to the feature pipeline in
Figure 17.6 and trained with the state-level minimum Bayes risk cost. Real-dev and
Simu-dev denote the real and simulation development sets, respectively. Real-test
and Simu-test denote the real and simulation evaluation sets, respectively.
Figure 17.6 Pipeline of state-of-the-art feature extraction, normalization, and
transformation procedure for noise robust speech analysis and recognition. CMN,
LDA, STC, and fMLLR stand for cepstral mean normalization, linear discriminant
analysis, semi-tied covariance transform, and feature-space ML linear regression,
respectively.
Figure 17.7 Observation uncertainty pipeline.
Figure 17.8 Joint training of a unified network for beamforming and acoustic modeling.
Adapted from Xiao et al. (2016).
Chapter 18
Figure 18.1 Block diagram for binaural spectral postfiltering based on a common
spectro-temporal gain: (a) direct gain computation (one microphone on each hearing
device) and (b) indirect gain computation (two microphones on each hearing device).
Figure 18.2 Block diagram for binaural spatial filtering: (a) incorporating constraints
into spatial filter design and (b) mixing with scaled reference signals.
Figure 18.3 Considered acoustic scenario, consisting of a desired source ( ), an
interfering source ( ), and background noise in a reverberant room. The signals are
received by the microphones on both hearing devices of the binaural hearing system.
Figure 18.4 Schematic overview of this chapter.
Figure 18.5 Psychoacoustically motivated lower and upper MSC boundaries: (a)
frequency range 0–8000 Hz and (b) frequency range 0–500 Hz. For frequencies below
500 Hz, the boundaries depend on the desired MSC while for frequencies above
500 Hz the boundaries are independent of the desired MSC.
Figure 18.6 MSC error of the noise component, intelligibility-weighted speech
distortion, and intelligibility-weighted output SNR for the MWF, MWF-N, and
MWF-IC.
Figure 18.7 Performance measures for the binaural MWF, MWF-RTF, MWF-IR0,
and MWF-IR0.2 for a desired speech source at 5° and different interfering source
positions. The global input SINR was equal to 3 dB.
Figure 18.8 Performance measures for the binaural MWF, MWF-RTF, MWF-IR0.2,
and MWF-IR0 for a desired speech source at 35° and different interfering
source positions. The global input SINR was equal to 3 dB.
Chapter 19
Figure 19.1 Short-term magnitude spectrum and various representations of the phase
spectrum of a speech signal for an STFT analysis window size of 64 ms. For easier
visualization, the deviation of the instantaneous frequency from the center frequency of
each band is shown rather than the instantaneous frequency itself.
Figure 19.2 Interchannel level difference (ILD) and IPD for two different source
positions (plain curve) and (dashed curve) 10 cm apart from each other at
1.70 m distance from the microphone pair. The source DOAs are and ,
respectively. The room size is m, the reverberation time is 230 ms,
and the microphone distance is 15 cm.
Figure 19.3 IPD between two microphones spaced by 15 cm belonging (a) to the same
device or (b) to two distinct devices with relative sampling rate
mismatch. For illustration purposes, the recorded sound scene consists of a single
speech source at a distance of 1.70 m and a DOA of in a room with a reverberation
time of 230 ms, without any interference or noise, and the two devices have zero
temporal offset at .
Audio Source Separation and Speech
Enhancement
Edited by
Emmanuel Vincent
Inria
France
Tuomas Virtanen
Tampere University of Technology
Finland
Sharon Gannot
Bar-Ilan University
Israel
List of Authors
Shoko Araki
NTT Communication Science Laboratories
Japan
Roland Badeau
Institut Mines-Télécom
France
Alessio Brutti
Fondazione Bruno Kessler
Italy
Israel Cohen
Technion
Israel
Simon Doclo
Carl von Ossietzky Universität Oldenburg
Germany
Jun Du
University of Science and Technology of China
China
Zhiyao Duan
University of Rochester
NY
USA
Cédric Févotte
CNRS
France
Sharon Gannot
Bar-Ilan University
Israel
Tian Gao
University of Science and Technology of China
China
Timo Gerkmann
Universität Hamburg
Germany
Emanuël A.P. Habets
International Audio Laboratories Erlangen
Germany
Elior Hadad
Bar-Ilan University
Israel
Hirokazu Kameoka
The University of Tokyo
Japan
Walter Kellermann
Friedrich-Alexander Universität Erlangen-Nürnberg
Germany
Zbyněk Koldovský
Technical University of Liberec
Czech Republic
Dorothea Kolossa
Ruhr-Universität Bochum
Germany
Antoine Liutkus
Inria
France
Michael I. Mandel
City University of New York
NY
USA
Erik Marchi
Technische Universität München
Germany
Shmulik Markovich-Golan
Bar-Ilan University
Israel
Daniel Marquardt
Carl von Ossietzky Universität Oldenburg
Germany
Rainer Martin
Ruhr-Universität Bochum
Germany
Nasser Mohammadiha
Chalmers University of Technology
Sweden
Gautham J. Mysore
Adobe Research
CA
USA
Tomohiro Nakatani
NTT Communication Science Laboratories
Japan
Patrick A. Naylor
Imperial College London
UK
Maurizio Omologo
Fondazione Bruno Kessler
Italy
Alexey Ozerov
Technicolor
France
Bryan Pardo
Northwestern University
IL
USA
Pasi Pertilä
Tampere University of Technology
Finland
Gaël Richard
Institut Mines-Télécom
France
Hiroshi Sawada
NTT Communication Science Laboratories
Japan
Paris Smaragdis
University of Illinois at Urbana-Champaign
IL
USA
Piergiorgio Svaizer
Fondazione Bruno Kessler
Italy
Emmanuel Vincent
Inria
France
Tuomas Virtanen
Tampere University of Technology
Finland
Shinji Watanabe
Preface
Source separation and speech enhancement are some of the most studied technologies in audio
signal processing. Their goal is to extract one or more source signals of interest from an audio
recording involving several sound sources. This problem arises in many everyday situations.
For instance, spoken communication is often obscured by concurrent speakers or by
background noise, outdoor recordings feature a variety of environmental sounds, and most
music recordings involve a group of instruments. When facing such scenes, humans are able to
perceive and listen to individual sources so as to communicate with other speakers, navigate in
a crowded street or memorize the melody of a song. Source separation and speech
enhancement technologies aim to empower machines with similar abilities.
These technologies are already present in our lives today. Beyond "clean" single-source
signals recorded with close microphones, they allow the industry to extend the applicability of
speech and audio processing systems to multi-source, reverberant, noisy signals recorded
with distant microphones. Some of the most striking examples include hearing aids, speech
enhancement for smartphones, and distant-microphone voice command systems. Current
technologies are expected to keep improving and spread to many other scenarios in the next
few years.
Traditionally, speech enhancement has referred to the problem of segregating speech and
background noise, while source separation has referred to the segregation of multiple speech
or audio sources. Most textbooks focus on one of these problems and on one of three historical
approaches, namely sensor array processing, computational auditory scene analysis, or
independent component analysis. These communities now routinely borrow ideas from each
other and other approaches have emerged, most notably based on deep learning.
This textbook is the first to provide a comprehensive overview of these problems and
approaches by presenting their shared foundations and their differences using common
language and notations. Starting with prerequisites (Part I), it proceeds with single-channel
separation and enhancement (Part II), multichannel separation and enhancement (Part III), and
applications and perspectives (Part IV). Each chapter provides both introductory and advanced
material.
We designed this textbook for people in academia and industry with basic knowledge of signal
processing and machine learning. Thanks to its comprehensiveness, we hope it will help
students select a promising research track, researchers leverage the acquired cross-domain
knowledge to design improved techniques, and engineers and developers choose the right
technology for their application scenario. We also hope that it will be useful for practitioners
from other fields (e.g., acoustics, multimedia, phonetics, musicology) willing to exploit audio
source separation or speech enhancement as a preprocessing tool for their own needs.
Emmanuel Vincent, Tuomas Virtanen, and Sharon Gannot
Acknowledgment
We would like to thank all the chapter authors, as well as the following people who helped
with proofreading: Sebastian Braun, Yaakov Buchris, Emre Cakir, Aleksandr Diment, Dylan
Fagot, Nico Gößling, Tomoki Hayashi, Jakub Janský, Ante Jukić, Václav Kautský, Martin
KrawczykBecker, Simon Leglaive, Bochen Li, Min Ma, Paul Magron, Zhong Meng, Gaurav
Naithani, Zhaoheng Ni, Aditya Arie Nugraha, Sanjeel Parekh, Robert Rehr, Lea Schönherr,
Georgina Tryfou, Ziteng Wang, and Mehdi Zohourian.
Emmanuel Vincent, Tuomas Virtanen, and Sharon Gannot
May 2017
Notations
Linear algebra
scalar
vector
vector with entries
th entry of vector
vector of zeros
vector of ones
matrix
matrix with entries
th entry of matrix
identity matrix
tensor/array (with three or more dimensions) or set
tensor with entries
diagonal matrix whose entries are those of vector
entrywise product of matrices and
trace of matrix
determinant of matrix
transpose of vector
conjugatetranspose of vector
conjugate of scalar
real part of scalar
imaginary unit
Statistics
probability distribution of continuous random variable
conditional probability distribution of given
probability value of discrete random variable
conditional probability value of given
expectation of random variable
conditional expectation of
entropy of random variable
real Gaussian distribution with mean and covariance
complex Gaussian distribution with mean and covariance
estimated value of random variable (e.g., first-order statistics)
variance of random variable
estimated second-order statistics of random variable
autocovariance of random vector
estimated second-order statistics of random vector
covariance of random vectors and
estimated second-order statistics of random vectors and
cost function to be minimized w.r.t. the vector of parameters
objective function to be maximized w.r.t. the vector of parameters
auxiliary function to be minimized or maximized, depending on the context
Common indexes
number of microphones or channels
microphone or channel index in
number of sources
source index in
number of time-domain samples
sample index in
time-domain filter length
tap index in
number of time frames
time frame index in
number of frequency bins
frequency bin index in
frequency in Hz corresponding to frequency bin
time-domain signal
complex-valued STFT coefficient of signal
Signals
input signal recorded at microphone
multichannel input signal, e.g.
matrix of input signals, e.g. or
input magnitude spectrogram, i.e.
tensor/array/set of input signals, e.g.
point source signal
vector of source signals, e.g.
matrix of source signals, e.g.
spatial image of source as recorded on microphone
spatial image of source on all microphones
tensor/array/set of spatial source image signals, e.g.
acoustic impulse response (or transfer function) from source to microphone
vector of acoustic impulse responses (or transfer functions) from source , mixing
vector
matrix of acoustic impulse responses (or transfer functions), mixing matrix
noise signal
Filters
convolution operator
single-output single-channel filter (mask), e.g.
single-output multichannel filter (beamformer), e.g.
multiple-output multichannel filter, e.g.
Nonnegative matrix factorization
th nonnegative basis spectrum
matrix of nonnegative basis spectra
th activation coefficient in time frame
vector of activation coefficients in time frame
matrix of activation coefficients
Deep learning
number of layers
layer index in
number of neurons in layer
neuron index in
matrix of weights and biases in layer
activation function in layer
multivariate nonlinear function encoded by the full DNN
Geometry
3D location of microphone with respect to the array origin
distance between microphones and
3D location of source with respect to the array origin
distance between source and microphone
azimuth of source
elevation of source
speed of sound in air
time difference of arrival of source between microphones and
Acronyms
AR autoregressive
ASR automatic speech recognition
BSS blind source separation
CASA computational auditory scene analysis
DDR direct-to-diffuse ratio
DFT discrete Fourier transform
DNN deep neural network
DOA direction of arrival
DRNN deep recurrent neural network
DRR direct-to-reverberant ratio
DS delay-and-sum
ERB equivalent rectangular bandwidth
EM expectation-maximization
EUC Euclidean
FDICA frequency-domain independent component analysis
FIR finite impulse response
GCC generalized cross-correlation
GCC-PHAT generalized cross-correlation with phase transform
GMM Gaussian mixture model
GSC generalized sidelobe canceler
HMM hidden Markov model
IC interchannel (or interaural) coherence
ICA independent component analysis
ILD interchannel (or interaural) level difference
IPD interchannel (or interaural) phase difference
ITD interchannel (or interaural) time difference
IVA independent vector analysis
IS Itakura–Saito
KL Kullback–Leibler
LCMV linearly constrained minimum variance
LSTM long short-term memory
MAP maximum a posteriori
MFCC Mel-frequency cepstral coefficient
ML maximum likelihood
MM majorization-minimization
MMSE minimum mean square error
MSC magnitude squared coherence
MSE mean square error
MVDR minimum variance distortionless response
MWF multichannel Wiener filter
NMF nonnegative matrix factorization
PLCA probabilistic latent component analysis
RNN recurrent neural network
RT60 reverberation time
RTF relative transfer function
SAR signal-to-artifacts ratio
SDR signal-to-distortion ratio
SINR signal-to-interference-plus-noise ratio
SIR signal-to-interference ratio
SNR signal-to-noise ratio
SPP speech presence probability
SRP steered response power
SRP-PHAT steered response power with phase transform
SRR signal-to-reverberation ratio
STFT short-time Fourier transform
TDOA time difference of arrival
VAD voice activity detection
VB variational Bayesian
About the Companion Website
This book is accompanied by a companion website:
https://project.inria.fr/ssse/
The website includes:
Implementations of algorithms
Audio samples
1
Introduction
Emmanuel Vincent, Sharon Gannot, and Tuomas Virtanen
Source separation and speech enhancement are core problems in the field of audio signal
processing, with applications to speech, music, and environmental audio. Research in this field
has accompanied technological trends, such as the move from landline to mobile or hands-free
phones, the gradual replacement of stereo by 3D audio, and the emergence of connected
devices equipped with one or more microphones that can execute audio processing tasks which
were previously regarded as impossible. In this short introductory chapter, after a brief
discussion of the application needs in Section 1.1, we define the problems of source separation
and speech enhancement and introduce relevant terminology regarding the scenarios and the
desired outcome in Section 1.2. We then present the general processing scheme followed by
most source separation and speech enhancement approaches and categorize these approaches
in Section 1.3. Finally, we provide an outline of the book in Section 1.4.
1.1 Why are Source Separation and Speech
Enhancement Needed?
The problems of source separation and speech enhancement arise from several application
needs in the context of speech, music, and environmental audio processing.
Real-world speech signals are often contaminated by interfering speakers, environmental
noise, and/or reverberation. These phenomena deteriorate speech quality and, in adverse
scenarios, speech intelligibility and automatic speech recognition (ASR) performance. Source
separation and speech enhancement are therefore required in such scenarios. For instance,
spoken communication over mobile phones or hands-free systems requires the separation or
enhancement of the near-end speaker's voice with respect to interfering speakers and
environmental noises before it is transmitted to the far-end listener. Conference call systems
or hearing aids face the same problem, except that several speakers may be considered as
targets. Source separation and speech enhancement are also crucial preprocessing steps for
robust distant-microphone ASR, as available in today's personal assistants, car navigation
systems, televisions, video game consoles, medical dictation devices, and meeting
transcription systems. Finally, they are necessary components in providing humanoid robots,
assistive listening devices, and surveillance systems with “superhearing” capabilities, which
may exceed the hearing capabilities of humans.
Besides speech, music and movie soundtracks are another important application area for
source separation. Indeed, music recordings typically involve several instruments playing
together live or mixed together in a studio, while movie soundtracks involve speech
overlapped with music and sound effects. Source separation has been successfully used to
upmix mono or stereo recordings to 3D sound formats and/or to remix them. It lies at the core
of object-based audio coders, which encode a given recording as the sum of several sound
objects that can then easily be rendered and manipulated. It is also useful for music information
retrieval purposes, e.g. to transcribe the melody or the lyrics of a song from the separated
singing voice.
The analysis of general sound scenes is an emerging research field with many real-life
applications, involving the detection of sound events, their localization and tracking,
and the inference of the acoustic environment properties.
1.2 What are the Goals of Source Separation and
Speech Enhancement?
The goal of source separation and speech enhancement can be defined in layman's terms as that
of recovering the signal of one or more sound sources from an observed signal involving other
sound sources and/or reverberation. This definition turns out to be ambiguous. In order to
address the ambiguity, the notion of source and the process leading to the observed signal must
be characterized more precisely. In this section and in the rest of this book we adopt the
general notations defined on p. xxv–xxvii.
1.2.1 Single-Channel vs. Multichannel
Let us assume that the observed signal has channels indexed by . By channel,
we mean the output of one microphone in the case when the observed signal has been recorded
by one or more microphones, or the input of one loudspeaker in the case when it is destined to
be played back on one or more loudspeakers.1 A signal with channels is called single-channel
and is represented by a scalar , while a signal with channels is called
multichannel and is represented by an vector . The explanation below employs
multichannel notation, but is also valid in the single-channel case.
1.2.2 Point vs. Diffuse Sources
Furthermore, let us assume that there are sound sources indexed by . The
word “source” can refer to two different concepts. A point source such as a human speaker, a
bird, or a loudspeaker is considered to emit sound from a single point in space. It can be
represented as a single-channel signal. A diffuse source such as a car, a piano, or rain
simultaneously emits sound from a whole region in space. The sounds emitted from different
points of that region are different but not always independent of each other. Therefore, a diffuse
source can be thought of as an infinite collection of point sources. The estimation of the
individual point sources in this collection can be important for the study of vibrating bodies,
but it is considered irrelevant for source separation or speech enhancement. A diffuse source is
therefore typically represented by the corresponding signal recorded at the microphone(s) and
it is processed as a whole.
1.2.3 Mixing Process
The mixing process leading to the observed signal can generally be expressed in two steps.
First, each single-channel point source signal is transformed into an source
spatial image signal (Vincent et al., 2012) by means of a possibly
nonlinear spatialization operation. This operation can describe the acoustic propagation from
the point source to the microphone(s), including reverberation, or some artificial mixing
effects. Diffuse sources are directly represented by their spatial images instead.
Second, the spatial images of all sources are summed to yield the observed signal,
called the mixture:

\[ \mathbf{x}(t) = \sum_{j=1}^{J} \mathbf{c}_j(t) \qquad (1.1) \]
This summation is due to the superposition of the sources in the case of microphone recording
or to explicit summation in the case of artificial mixing. This implies that the spatial image of
each source represents the contribution of the source to the mixture signal. A schematic
overview of the mixing process is depicted in Figure 1.1. More specific details are given in
Chapter 3.
Note that target sources, interfering sources, and noise are treated in the same way in this
formulation. All these signals can be either point or diffuse sources. The choice of target
sources depends on the use case. Also, the distinction between interfering sources and noise
may or may not be relevant depending on the use case. In the context of speech processing,
these terms typically refer to undesired speech vs. nonspeech sources, respectively. In the
context of music or environmental sound processing, this distinction is most often irrelevant
and the former term is preferred to the latter.
Figure 1.1 General mixing process, illustrated in the case of sources, including three
point sources and one diffuse source, and channels.
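To make the mixing model concrete, the following is a minimal simulation sketch, assuming Python with NumPy and SciPy and using random placeholder signals and impulse responses rather than real recordings: each point source is convolved with its acoustic impulse responses to form its spatial image, a diffuse source is represented directly by its spatial image, and the mixture is the sum of all spatial images as in (1.1).

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
fs = 16000                      # sampling rate (Hz)
T = fs                          # one second of audio
I, J_point = 2, 3               # channels, point sources

# Hypothetical point source signals and acoustic impulse responses
# (in practice these would be speech/music recordings and measured
#  or simulated impulse responses).
s = rng.standard_normal((J_point, T))
a = rng.standard_normal((J_point, I, 512)) * 0.01

# Spatial image of each point source: convolution with its impulse responses
c = np.stack([
    np.stack([fftconvolve(s[j], a[j, i])[:T] for i in range(I)])
    for j in range(J_point)
])                              # shape (J_point, I, T)

# A diffuse source (e.g. background noise) is represented directly
# by its spatial image at the microphones.
c_diffuse = 0.05 * rng.standard_normal((I, T))

# Mixture: sum of all spatial images, cf. (1.1)
x = c.sum(axis=0) + c_diffuse   # shape (I, T)
```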
In the following, we assume that all signals are digital, meaning that the time variable is
discrete. We also assume that quantization effects are negligible, so that we can operate on
continuous amplitudes. Regarding the conversion of acoustic signals to analog audio signals
and analog signals to digital, see, for example, Havelock et al. (2008, Part XII) and Pohlmann
(1995, pp. 22–49).
1.2.4 Separation vs. Enhancement
The above mixing process implies one or more distortions of the target signals: interfering
sources, noise, reverberation, and echo emitted by the loudspeakers (if any). In this context,
source separation refers to the problem of extracting one or more target sources while
suppressing interfering sources and noise. It explicitly excludes dereverberation and echo
cancellation. Enhancement is more general, in that it refers to the problem of extracting one or
more target sources while suppressing all types of distortion, including reverberation and
echo. In practice, though, this term is mostly used in the case when the target sources are
speech. In the audio processing literature, these two terms are often interchanged, especially
when referring to the problem of suppressing both interfering speakers and noise from a speech
signal. Note that, for either source separation or enhancement tasks, the extracted source(s) can
be either the spatial image of the source or its direct path component, namely the delayed and
attenuated version of the original source signal (Vincent et al., 2012; Gannot et al., 2001).
The problem of echo cancellation is out of the scope of this book. Please refer to Hänsler and
Schmidt (2004) for a comprehensive overview of this topic. The problem of source
localization and tracking cannot be viewed as a separation or enhancement task, but it is
sometimes used as a preprocessing step prior to separation or enhancement, hence it is
discussed in Chapter 4. Dereverberation is explored in Chapter 15. The remaining chapters
focus on separation and enhancement.
1.2.5 Typology of Scenarios
The general source separation literature has come up with a terminology to characterize the
mixing process (Hyvärinen et al., 2001; O'Grady et al., 2005; Comon and Jutten, 2010). A
given mixture signal is said to be
linear if the mixing process is linear, and nonlinear otherwise;
time-invariant if the mixing process is fixed over time, and time-varying otherwise;
instantaneous if the mixing process simply scales each source signal by a different factor
on each channel, anechoic if it also applies a different delay to each source on each
channel, and convolutive in the more general case when it results from summing multiple
scaled and delayed versions of the sources;
overdetermined if there is no diffuse source and the number of point sources is strictly
smaller than the number of channels, determined if there is no diffuse source and the
number of point sources is equal to the number of channels, and underdetermined
otherwise.
This categorization is relevant but has limited usefulness in the case of audio. As we shall see
in Chapter 3, virtually all audio mixtures are linear (or can be considered so) and convolutive.
The over- vs. underdetermined distinction was motivated by the fact that a determined or
overdetermined linear time-invariant mixture can be perfectly separated by inverting the
mixing system using a linear time-invariant inverse (see Chapter 13). In practice, however,
the majority of audio mixtures involve at least one diffuse source (e.g., background noise) or
more point sources than channels. Audio source separation and speech enhancement systems
are therefore generally faced with underdetermined linear (time-invariant or time-varying)
convolutive mixtures.2
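As an illustration of the instantaneous, anechoic, and convolutive categories above, the following sketch (Python with NumPy/SciPy; the gains, delays, and impulse responses are arbitrary placeholders, not taken from this book) generates the three kinds of two-channel, two-source mixtures:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
T = 16000
s = rng.standard_normal((2, T))          # two point sources

# Instantaneous: each channel is a weighted sum of the sources
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])               # arbitrary gains
x_inst = A @ s

# Anechoic: a gain *and* an integer delay per source/channel pair
delays = np.array([[0, 4],
                   [3, 0]])              # arbitrary delays in samples
x_anech = np.zeros((2, T))
for i in range(2):
    for j in range(2):
        d = delays[i, j]
        x_anech[i, d:] += A[i, j] * s[j, :T - d]

# Convolutive: a full impulse response per source/channel pair,
# i.e. a sum of many scaled and delayed copies of each source
h = rng.standard_normal((2, 2, 256)) * 0.02
x_conv = np.zeros((2, T))
for i in range(2):
    for j in range(2):
        x_conv[i] += fftconvolve(s[j], h[i, j])[:T]
```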
Recently, an alternative categorization has been proposed based on the amount of prior
information available about the mixture signal to be processed (Vincent et al., 2014). The
separation problem is said to be
blind when absolutely no information is given about the source signals, the mixing process
or the intended application;
weakly guided or semi-blind when general information is available about the context of
use, e.g. the nature of the sources (speech, music, environmental sounds), the microphone
positions, the recording scenario (domestic, outdoor, professional music), and the intended
application (hearing aid, speech recognition);
strongly guided when specific information is available about the signal to be processed,
e.g. the spatial location of the sources, their activity pattern, the identity of the speakers, or
a musical score;
informed when highly precise information about the sources and the mixing process is
encoded and transmitted along with the audio.
Although the term “blind” has been extensively used in source separation (see Chapters 4, 10,
11, and 13), strictly blind separation is inapplicable in the context of audio. As we shall see in
Chapter 13, certain assumptions about the probability distribution of the sources and/or the
mixing process must always be made in practice. Strictly speaking, the term “weakly guided”
would therefore be more appropriate. Informed separation is closer to audio coding than to
separation and will be briefly covered in Chapter 16. All other source separation and speech
enhancement methods reviewed in this book are therefore either weakly or strongly guided.
Finally, the separation or enhancement problem can be categorized depending on the order in
which the samples of the mixture signal are processed. It is called online when the mixture
signal is captured in real time in small blocks of a few tens or hundreds of samples and each
block must be processed given past blocks only, or given a few future blocks at the cost of a
tolerated latency. On the contrary, it is called offline or batch when the recording has been
completed and is processed as a whole, using both past and future samples to estimate a given
sample of the sources.
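The distinction can be illustrated by the following sketch (hypothetical code, not from the book). The `enhance` placeholder stands for any separation or enhancement routine; the only difference between the two modes is how much of the signal it is allowed to see when producing each output sample, and the amount of lookahead in the online case determines the latency.

```python
import numpy as np

def enhance(visible_signal, start, stop):
    """Hypothetical placeholder for an enhancement algorithm: it produces estimates
    for samples [start, stop) using only the samples it is allowed to see."""
    return visible_signal[start:stop]  # identity here, just to illustrate the data flow

def process_online(x, block_size=256, lookahead_blocks=1):
    """Online processing: each block is estimated from past samples plus a small,
    bounded lookahead; the lookahead sets the latency tolerated by the application."""
    out = np.zeros_like(x)
    for start in range(0, len(x) - block_size + 1, block_size):
        visible_end = min(len(x), start + (1 + lookahead_blocks) * block_size)
        out[start:start + block_size] = enhance(x[:visible_end], start, start + block_size)
    return out

def process_offline(x):
    """Offline (batch) processing: the whole recording is available at once, so every
    sample may be estimated from both past and future samples."""
    return enhance(x, 0, len(x))
```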
1.2.6 Evaluation
Using current technology, source separation and dereverberation are rarely perfect in real-life
scenarios. For each source, the estimated source or source spatial image signal can differ from
the true target signal in several ways, including (Vincent et al., 2006; Loizou, 2007)
distortion of the target signal, e.g. low-pass filtering or fluctuating intensity over time;
residual interference or noise from the other sources;
“musical noise” artifacts, i.e. isolated sounds in both frequency and time similar to those
generated by a lossy audio codec at a very low bitrate.
The assessment of these distortions is essential to compare the merits of different algorithms
and understand how to improve their performance.
Ideally, this assessment should be based on the performance of the tested source separation or
speech enhancement method for the desired application. Indeed, the importance of various
types of distortion depends on the specific application. For instance, some amount of distortion
of the target signal which is deemed acceptable when listening to the separated signals can
lead to a major drop in speech recognition performance. Artifacts are often greatly reduced
when the separated signals are remixed together in a different way, while they must be avoided
at all costs in hearing aids. Standard performance metrics are typically available for each task,
some of which will be mentioned later in this book.
When the desired application involves listening to the separated or enhanced signals or to a
remix, sound quality and, whenever relevant, speech intelligibility should ideally be assessed
by means of a subjective listening test (ITU-T, 2003; Emiya et al., 2011; ITU-T, 2016).
Contrary to a widespread belief, a number of subjects as low as ten can sometimes suffice to
obtain statistically significant results. However, data selection and subject screening are
time-consuming. Recent attempts with crowdsourcing are a promising way of making subjective
testing more convenient in the near future (Cartwright et al., 2016). An alternative approach is
to use objective separation or dereverberation metrics. Table 1.1 provides an overview of
some commonly used metrics. The so-called PESQ metric, the segmental signal-to-noise
ratio (SNR), and the signal-to-distortion ratio (SDR) measure the overall estimation error,
including the three types of distortion listed above. The so-called STOI index is more related
to speech intelligibility by humans, while the log-likelihood ratio and cepstrum distance relate
more to ASR by machines. The signal-to-interference ratio (SIR) and the signal-to-artifacts
ratio (SAR) aim to assess the latter two types of distortion listed above separately. The segmental
SNR, SDR, SIR, and SAR are expressed in decibels (dB), while PESQ and STOI are
expressed on a perceptual scale. More specific metrics will be reviewed later in the book.
Table 1.1 Evaluation software and metrics.
Software: Implemented metrics
ITU-T (2001): PESQ
Taal et al. (2011) [3]: STOI
Loizou (2007) [4]: Segmental SNR; log-likelihood ratio; cepstrum distance
BSS Eval (Vincent et al., 2006) [5]: SDR; SIR; SAR
Falk et al. (2010): Speech-to-reverberation modulation energy ratio
[3] http://amtoolbox.sourceforge.net/doc/speech/taal2011.php
[4] http://www.crcpress.com/product/isbn/9781466504219
[5] http://bassdb.gforge.inria.fr/bss_eval/
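As an example of how such energy-ratio metrics are computed, here is a minimal sketch of the segmental SNR between a clean reference signal and its estimate. It is not one of the reference implementations listed in Table 1.1, and the frame length and clamping bounds are common but not universal choices.

```python
import numpy as np

def segmental_snr(reference, estimate, frame_len=512, min_db=-10.0, max_db=35.0):
    """Average of per-frame SNRs (in dB) between a clean reference and its estimate."""
    n_frames = len(reference) // frame_len
    snrs = []
    for k in range(n_frames):
        ref = reference[k * frame_len:(k + 1) * frame_len]
        err = ref - estimate[k * frame_len:(k + 1) * frame_len]
        snr = 10.0 * np.log10((np.sum(ref ** 2) + 1e-12) / (np.sum(err ** 2) + 1e-12))
        snrs.append(np.clip(snr, min_db, max_db))  # clamp extreme frame values
    return float(np.mean(snrs))

# Toy usage: a noisy estimate of a clean sinusoid.
t = np.arange(16000) / 16000.0
clean = np.sin(2 * np.pi * 440.0 * t)
noisy = clean + 0.05 * np.random.default_rng(0).standard_normal(len(t))
print(f"Segmental SNR: {segmental_snr(clean, noisy):.1f} dB")
```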
A natural question that arises once the metrics have been defined is: what is the best
performance possibly achievable for a given mixture signal? This can be used to assess the
difficulty of solving the source separation or speech enhancement problem in a given scenario
and the room left for performance improvement as compared to current systems. This question
can be answered using oracle or ideal estimators based on the knowledge of the true source or
source spatial image signals (Vincent et al., 2007).
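One concrete instance of such an oracle estimator is the so-called ideal ratio mask, sketched below. This is an illustrative example rather than the benchmark software of Vincent et al. (2007), and the STFT parameters are arbitrary: the masks are computed from the true source signals and applied to the mixture, so the resulting score upper-bounds what any mask-based method of the same form could achieve on that mixture.

```python
import numpy as np
from scipy.signal import stft, istft

def oracle_ratio_masks(true_sources, nperseg=1024):
    """Oracle (ideal ratio) masks computed from the true source signals.
    true_sources: array of shape (n_sources, n_samples)."""
    specs = np.array([stft(s, nperseg=nperseg)[2] for s in true_sources])
    power = np.abs(specs) ** 2
    return power / (power.sum(axis=0, keepdims=True) + 1e-12)

def apply_masks(mixture, masks, nperseg=1024):
    """Apply time-frequency masks to the mixture and return time-domain source estimates."""
    _, _, mix_spec = stft(mixture, nperseg=nperseg)
    return np.array([istft(m * mix_spec, nperseg=nperseg)[1] for m in masks])

# Toy usage: two synthetic sources summed into a single-channel mixture.
rng = np.random.default_rng(0)
t = np.arange(32000) / 16000.0
sources = np.stack([np.sin(2 * np.pi * 220.0 * t), 0.5 * rng.standard_normal(len(t))])
mixture = sources.sum(axis=0)
estimates = apply_masks(mixture, oracle_ratio_masks(sources))
```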
1.3 How can Source Separation and Speech
Enhancement be Addressed?
Now that we have defined the goals of source separation and speech enhancement, let us turn
to how they can be addressed.
1.3.1 General Processing Scheme
Many different approaches to source separation and speech enhancement have been proposed
in the literature. The vast majority of approaches follow the general processing scheme
depicted in Figure 1.2, which applies to both single-channel and multichannel scenarios. The
time-domain mixture signal is represented in the time-frequency domain (see Chapter 2). A
model of the complex-valued time-frequency coefficients of the mixture and of the sources
(resp. the source spatial images) is built. The choice of model is motivated by the general
prior information about the scenario (see Section 1.2.5). The model parameters are estimated
from the mixture or from separate training data according to a certain criterion. Additional
specific prior information can be used to help parameter estimation whenever available. Given
these parameters, a time-varying single-output (resp. multiple-output) complex-valued filter is
derived and applied to the mixture in order to obtain an estimate of the complex-valued
time-frequency coefficients of the sources (resp. the source spatial images). Finally, the
time-frequency transform is inverted, yielding time-domain source estimates (resp. source
spatial image estimates).
Figure 1.2 General processing scheme for single-channel and multichannel source separation
and speech enhancement.
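The scheme of Figure 1.2 can be summarized by the following skeleton (a hypothetical sketch of the data flow rather than code from the book). Here `estimate_model_parameters` and `derive_filter` are placeholders for whichever model and estimation criterion a particular method adopts; they are filled in with a crude stationary-noise assumption purely for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

def estimate_model_parameters(mix_spec):
    """Placeholder model: a stationary noise power spectrum estimated from the first
    few frames. Real methods use the models and criteria of Chapters 5 to 15."""
    noise_psd = np.mean(np.abs(mix_spec[:, :10]) ** 2, axis=1, keepdims=True)
    return {"noise_psd": noise_psd}

def derive_filter(mix_spec, params):
    """Placeholder filter: a time-varying, single-output Wiener-like gain."""
    mix_psd = np.abs(mix_spec) ** 2
    return np.maximum(1.0 - params["noise_psd"] / (mix_psd + 1e-12), 0.0)

def separate(mixture, nperseg=1024):
    # 1. Time-frequency analysis of the mixture.
    _, _, mix_spec = stft(mixture, nperseg=nperseg)
    # 2. Model parameter estimation (from the mixture and/or separate training data).
    params = estimate_model_parameters(mix_spec)
    # 3. Derivation of a time-varying filter from the estimated parameters.
    gain = derive_filter(mix_spec, params)
    # 4. Filtering in the time-frequency domain and inverse transform.
    _, estimate = istft(gain * mix_spec, nperseg=nperseg)
    return estimate
```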
1.3.2 Converging Historical Trends
The various approaches proposed in the literature differ in the choice of model, the parameter
estimation algorithm, and the derivation of the separation or enhancement filter. Research has
followed three historical paths. First, microphone array processing emerged from the theory of
sensor array processing for telecommunications and focused mostly on the localization and
enhancement of speech in noisy or reverberant environments. Second, the concepts of
independent component analysis (ICA) and nonnegative matrix factorization (NMF) gave birth
to a stream of blind source separation (BSS) methods aiming to address “cocktail party”
scenarios (as coined by Cherry (1953)) involving several sound sources mixed together. Third,
attempts to implement the sound segregation properties of the human ear (Bregman, 1994) in a
computer gave rise to computational auditory scene analysis (CASA) methods. These paths
have converged in the last decade and they are hardly distinguishable anymore. As a matter of
fact, virtually all source separation and speech enhancement methods rely on modeling the
spectral properties of the sources, i.e. their distribution of energy over time and frequency,
and/or their spatial properties, i.e. the relations between channels over time.
Most books and surveys about audio source separation and speech enhancement so far have
focused on a single point of view, namely microphone array processing (Gay and Benesty,
2000; Brandstein and Ward, 2001; Loizou, 2007; Cohen et al., 2010), CASA (Divenyi, 2004;
Wang and Brown, 2006), BSS (O'Grady et al., 2005; Makino et al., 2007; Virtanen et al.,
2015), or machine learning (Vincent et al., 2010, 2014). These are complemented by books on
general sensor array processing and BSS (Hyvärinen et al., 2001; Van Trees, 2002; Cichocki
et al., 2009; Haykin and Liu, 2010; Comon and Jutten, 2010), which do not specifically focus
on speech and audio, and books on general speech processing (Benesty et al., 2007; Wölfel
and McDonough, 2009; Virtanen et al., 2012; Li et al., 2015), which do not specifically focus
on separation and enhancement. A few books and surveys have attempted to cross the
boundaries between these points of view (Benesty et al., 2005; Cohen et al., 2009; Gannot et
al., 2017; Makino, 2018), but they do not cover all state-of-the-art approaches and all
application scenarios. We designed this book to provide the most comprehensive, up-to-date
overview of the state of the art and allow readers to acquire a wide understanding of these
topics.
1.3.3 Typology of Approaches
With the merging of the three historical paths introduced above, a new categorization of source
separation and speech enhancement methods has become necessary. One of the most relevant
ones today is based on the use of training data to estimate the model parameters and on the
nature of this data. This categorization differs from the one in Section 1.2.5: it does not relate
to the problem posed, but to the way it is solved. Both categorizations are essentially
orthogonal. We distinguish four categories of approaches:
learning-free methods do not rely on any training data: all parameters are either fixed
manually by the user or estimated from the test mixture (e.g., frequency-domain
ICA in Section 13.2);
unsupervised source modeling methods train a model for each source from unannotated
isolated signals of that source type, i.e. without using any information about each training
signal besides the source type (e.g., so-called “supervised NMF” in Section 8.1.3);
supervised source modeling methods train a model for each source from annotated isolated
signals of that source type, i.e. using additional information about each training signal (e.g.,
isolated notes annotated with pitch information in the case of music, see Section 16.2.2.1);
separation-based training methods (e.g., deep neural network (DNN) based methods in
Section 7.3) train a separation mechanism or jointly train models for all sources from
mixture signals given the underlying true source signals.
In all cases, development data whose conditions are similar to the test mixture can be used to
tune a small number of hyperparameters. Certain methods borrow ideas from several
categories of approaches. For instance, “semi-supervised” NMF in Section 8.1.4 is halfway
between learning-free and unsupervised source modeling based separation.
Other terms have been used in the literature, such as generative vs. discriminative methods. We do
not use these terms in the following and prefer the finer-grained categories above, which are
specific to source separation and speech enhancement.
1.4 Outline
This book is structured in four parts.
Part I introduces the basic concepts of time-frequency processing in Chapter 2 and sound
propagation in Chapter 3, and highlights the spectral and spatial properties of the sources.
Chapter 4 provides additional background material on source activity detection and
localization. These chapters are mostly designed for beginners and can be skipped by
experienced readers.
Part II focuses on single-channel separation and enhancement based on the spectral properties
of the sources. We first define the concept of spectral filtering in Chapter 5. We then explain
how suitable spectral filters can be derived from various models and present algorithms to
estimate the model parameters in Chapters 6 to 9. Most of these algorithms are not restricted to
a given application area.
Part III addresses multichannel separation and enhancement based on spatial and/or spectral
properties. It follows a similar structure to Part II. We first define the concept of spatial
filtering in Chapter 10 and proceed with several models and algorithms in Chapters 11 to 14.
Chapter 15 focuses on dereverberation. Again, most of the algorithms reviewed in this part are
not restricted to a given application area.
Readers interested in single-channel audio should focus on Part II, while those interested in
multichannel audio are advised to read both Parts II and III since most single-channel
algorithms can be employed or extended in a multichannel context. In either case, Chapters 5
and 10 must be read first, since they are prerequisites to the other chapters. Chapters 6 to 9
and 11 to 15 are independent of each other and can be read separately, except Chapter 9 which
relies on Chapter 8. Reading all chapters in either part is strongly recommended, however.
This will provide the reader with a more complete view of the field and allow them to
select the most appropriate algorithm or develop a new algorithm for their own use case.
Part IV presents the challenges and opportunities associated with the use of these algorithms in
specific application areas: music in Chapter 16, speech in Chapter 17, and hearing instruments
in Chapter 18. These chapters are independent of each other and may be skipped or not
depending on the reader's interest. We conclude by discussing several research perspectives in
Chapter 19.
Bibliography
Benesty, J., Makino, S., and Chen, J. (eds) (2005) Speech Enhancement, Springer.
Benesty, J., Sondhi, M.M., and Huang, Y. (eds) (2007) Springer Handbook of Speech
Processing and Speech Communication, Springer.
Brandstein, M.S. and Ward, D.B. (eds) (2001) Microphone Arrays: Signal Processing
Techniques and Applications, Springer.
Bregman, A.S. (1994) Auditory Scene Analysis: The Perceptual Organization of Sound, MIT
Press.
Cartwright, M., Pardo, B., Mysore, G.J., and Hoffman, M. (2016) Fast and easy crowdsourced
perceptual audio evaluation, in Proceedings of the IEEE International Conference on Acoustics,
Speech and Signal Processing, pp. 619–623.
Cherry, E.C. (1953) Some experiments on the recognition of speech, with one and with two
ears. Journal of the Acoustical Society of America, 25 (5), 975–979.
Cichocki, A., Zdunek, R., Phan, A.H., and Amari, S. (2009) Nonnegative Matrix and Tensor
Factorizations: Applications to Exploratory Multiway Data Analysis and Blind Source
Separation, Wiley.
Cohen, I., Benesty, J., and Gannot, S. (2009) Speech Processing in Modern Communication:
Challenges and Perspectives, vol. 3, Springer.
Cohen, I., Benesty, J., and Gannot, S. (eds) (2010) Speech Processing in Modern
Communication: Challenges and Perspectives, Springer.
Comon, P. and Jutten, C. (eds) (2010) Handbook of Blind Source Separation, Independent
Component Analysis and Applications, Academic Press.
Divenyi, P. (ed.) (2004) Speech Separation by Humans and Machines, Springer.
Emiya, V., Vincent, E., Harlander, N., and Hohmann, V. (2011) Subjective and objective quality
assessment of audio source separation. IEEE Transactions on Audio, Speech, and Language
Processing, 19 (7), 2046–2057.
Falk, T.H., Zheng, C., and Chan, W.Y. (2010) A nonintrusive quality and intelligibility
measure of reverberant and dereverberated speech. IEEE Transactions on Audio, Speech, and
Language Processing, 18 (7), 1766–1774.
Gannot, S., Burshtein, D., and Weinstein, E. (2001) Signal enhancement using beamforming and
nonstationarity with applications to speech. IEEE Transactions on Signal Processing, 49 (8),
1614–1626.
Gannot, S., Vincent, E., Markovich-Golan, S., and Ozerov, A. (2017) A consolidated
perspective on multimicrophone speech enhancement and source separation. IEEE/ACM
Transactions on Audio, Speech, and Language Processing, 25 (4), 692–730.
Gay, S.L. and Benesty, J. (eds) (2000) Acoustic Signal Processing for Telecommunication,
Kluwer.
Hänsler, E. and Schmidt, G. (2004) Acoustic Echo and Noise Control: A Practical Approach,
Wiley.
Havelock, D., Kuwano, S., and Vorländer, M. (eds) (2008) Handbook of Signal Processing in
Acoustics, vol. 2, Springer.
Haykin, S. and Liu, K.R. (eds) (2010) Handbook on Array Processing and Sensor Networks,
Wiley.
Hyvärinen, A., Karhunen, J., and Oja, E. (2001) Independent Component Analysis, Wiley.
ITU-T (2001) Recommendation P.862: Perceptual evaluation of speech quality (PESQ): An
objective method for end-to-end speech quality assessment of narrowband telephone
networks and speech codecs.
ITU-T (2003) Recommendation P.835: Subjective test methodology for evaluating speech
communication systems that include noise suppression algorithm.
ITU-T (2016) Recommendation P.807: Subjective test methodology for assessing speech
intelligibility.
Li, J., Deng, L., Haeb-Umbach, R., and Gong, Y. (2015) Robust Automatic Speech
Recognition, Academic Press.
Loizou, P.C. (2007) Speech Enhancement: Theory and Practice, CRC Press.
Makino, S. (ed.) (2018) Audio Source Separation, Springer.
Makino, S., Lee, T.W., and Sawada, H. (eds) (2007) Blind Speech Separation, Springer.
O'Grady, P.D., Pearlmutter, B.A., and Rickard, S.T. (2005) Survey of sparse and nonsparse
methods in source separation. International Journal of Imaging Systems and Technology, 15,
18–33.
Pohlmann, K.C. (1995) Principles of Digital Audio, McGraw-Hill, 3rd edn.
Taal, C.H., Hendriks, R.C., Heusdens, R., and Jensen, J. (2011) An algorithm for intelligibility
prediction of time-frequency weighted noisy speech. IEEE Transactions on Audio, Speech,
and Language Processing, 19 (7), 2125–2136.
Van Trees, H.L. (2002) Optimum Array Processing, Wiley.
Vincent, E., Araki, S., Theis, F.J., Nolte, G., Bofill, P., Sawada, H., Ozerov, A., Gowreesunker,
B.V., Lutter, D., and Duong, N.Q.K. (2012) The Signal Separation Evaluation Campaign
(2007–2010): Achievements and remaining challenges. Signal Processing, 92, 1928–1936.
Vincent, E., Bertin, N., Gribonval, R., and Bimbot, F. (2014) From blind to guided audio
source separation: How models and side information can improve the separation of sound.
IEEE Signal Processing Magazine, 31 (3), 107–115.
Vincent, E., Gribonval, R., and Févotte, C. (2006) Performance measurement in blind audio
source separation. IEEE Transactions on Audio, Speech, and Language Processing, 14 (4),
1462–1469.
Vincent, E., Gribonval, R., and Plumbley, M.D. (2007) Oracle estimators for the benchmarking
of source separation algorithms. Signal Processing, 87 (8), 1933–1950.
Vincent, E., Jafari, M.G., Abdallah, S.A., Plumbley, M.D., and Davies, M.E. (2010)
Probabilistic modeling paradigms for audio source separation, in Machine Audition:
Principles, Algorithms and Systems, IGI Global, pp. 162–185.
Virtanen, T., Gemmeke, J.F., Raj, B., and Smaragdis, P. (2015) Compositional models for
audio processing: Uncovering the structure of sound mixtures. IEEE Signal Processing
Magazine, 32 (2), 125–144.
Virtanen, T., Singh, R., and Raj, B. (eds) (2012) Techniques for Noise Robustness in
Automatic Speech Recognition, Wiley.
Wang, D. and Brown, G.J. (eds) (2006) Computational Auditory Scene Analysis: Principles,
Algorithms, and Applications, Wiley.
Wölfel, M. and McDonough, J. (2009) Distant Speech Recognition, Wiley.
Notes
1 This is the usual meaning of “channel” in the field of professional and consumer audio. In the
field of telecommunications and, by extension, in some speech enhancement papers,
“channel” refers to the distortions (e.g., noise and reverberation) occurring when
transmitting a signal instead. The latter meaning will not be employed hereafter.
2 Certain authors refer to mixtures for which the number of point sources is equal to (resp. strictly
smaller than) the number of channels as determined (resp. overdetermined) even when there
is a diffuse noise source. Perfect separation of such mixtures cannot be achieved using
time-invariant filtering anymore: it requires a time-varying separation filter, similarly to
underdetermined mixtures. Indeed, a time-invariant filter can cancel the interfering sources
and reduce the noise, but it cannot cancel the noise perfectly. We prefer the above definition
of “determined” and “overdetermined”, which matches the mathematical definition of these
concepts for systems of linear equations and has a more direct implication on the separation
performance achievable by linear time-invariant filtering.
2
Time-Frequency Processing: Spectral Properties
Tuomas Virtanen, Emmanuel Vincent, and Sharon Gannot
Many audio signal processing algorithms do not operate on raw time-domain audio
signals, but rather on time-frequency representations. A raw audio signal encodes the
amplitude of a sound as a function of time. Its Fourier spectrum represents it as a function of
frequency, but does not represent variations over time. A time-frequency representation
presents the amplitude of a sound as a function of both time and frequency, and is able to
jointly account for its temporal and spectral characteristics (Gröchenig, 2001).
Time-frequency representations are appropriate for three reasons in our context. First,
separation and enhancement often require modeling the structure of sound sources. Natural
sound sources have a prominent structure both in time and frequency, which can be easily
modeled in the time-frequency domain. Second, the sound sources are often mixed
convolutively, and this convolutive mixing process can be approximated with simpler
operations in the time-frequency domain. Third, natural sounds are more sparsely distributed
and overlap less with each other in the time-frequency domain than in the time or frequency
domain, which facilitates their separation.
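The second point can be checked numerically. Under the usual narrowband assumption, convolution with an impulse response that is short relative to the STFT window is well approximated by a per-frequency multiplication of the STFT coefficients. The following sketch (an illustration, not code from the book) measures the approximation error on a toy example.

```python
import numpy as np
from scipy.signal import stft, fftconvolve

rng = np.random.default_rng(0)
nperseg = 1024

# A source signal and a short, decaying impulse response (much shorter than the window).
s = rng.standard_normal(16000)
h = rng.standard_normal(32) * np.exp(-np.arange(32) / 8.0)
x = fftconvolve(s, h)[:len(s)]      # convolutive "mixing" with a single filter

_, _, S = stft(s, nperseg=nperseg)
_, _, X = stft(x, nperseg=nperseg)
H = np.fft.rfft(h, nperseg)         # filter frequency response on the same grid

# Narrowband approximation: X(n, f) is roughly H(f) * S(n, f).
approx = H[:, None] * S
error = np.linalg.norm(X - approx) / np.linalg.norm(X)
print(f"Relative approximation error: {error:.3f}")  # small when len(h) << nperseg
```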
In this chapter we introduce the most common time-frequency representations used for source
separation and speech enhancement. Section 2.1 describes the procedure for calculating a
time-frequency representation and converting it back to the time domain, using the short-time
Fourier transform (STFT) as an example. It also presents other common time-frequency
representations and their relevance for separation and enhancement. Section 2.2 discusses the
properties of sound sources in the time-frequency domain, including sparsity, disjointness,
and more complex structures such as harmonicity. Section 2.3 explains how to achieve
separation by time-varying filtering in the time-frequency domain. We summarize the main
concepts and provide links to other chapters and more advanced topics in Section 2.4.
2.1 Time-Frequency Analysis and Synthesis
In order to operate in the time-frequency domain, there is a need for analysis methods that
convert a time-domain signal to the time-frequency domain, and synthesis methods that
convert the resulting time-frequency representation back to the time domain after separation
or enhancement. For simplicity, we consider the case of a single-channel signal and omit the
channel index. In the case of multichannel signals, the time-frequency representation is
simply obtained by applying the same procedure individually to each channel.
2.1.1 STFT Analysis
Our first example of a time-frequency representation is the STFT. It is the most commonly used
time-frequency representation in audio source separation and speech enhancement.
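Although the detailed treatment of the STFT follows in the original text, its analysis and synthesis steps can be sketched as follows (a minimal illustration assuming a Hann window and 50% overlap, which are common but by no means the only possible choices).

```python
import numpy as np

def stft_analysis(x, frame_len=1024, hop=512):
    """Split the signal into overlapping windowed frames and take the FFT of each frame.
    Returns complex coefficients of shape (n_freqs, n_frames)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * window for i in range(n_frames)], axis=1)
    return np.fft.rfft(frames, axis=0)

def stft_synthesis(spec, frame_len=1024, hop=512):
    """Invert the STFT by inverse FFT of each frame followed by weighted overlap-add."""
    window = np.hanning(frame_len)
    n_frames = spec.shape[1]
    out = np.zeros((n_frames - 1) * hop + frame_len)
    norm = np.zeros_like(out)
    frames = np.fft.irfft(spec, n=frame_len, axis=0)
    for i in range(n_frames):
        out[i * hop:i * hop + frame_len] += frames[:, i] * window
        norm[i * hop:i * hop + frame_len] += window ** 2
    return out / np.maximum(norm, 1e-12)

# Round trip on a toy signal: reconstruction is near-perfect away from the edges.
sig = np.sin(2 * np.pi * 440.0 * np.arange(16000) / 16000.0)
rec = stft_synthesis(stft_analysis(sig))
print(np.max(np.abs(rec[1024:-1024] - sig[1024:len(rec) - 1024])))
```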
56. [607] No doubt Galba’s personal appearance offered a striking
contrast to that of “the implacable, beautiful tyrant” Nero. See infra,
ch. 15, and Tac. Hist. i. 7
[608] ‘Tanquam innocentes,’ Tac. Hist. i. 6.
[609] More properly “rowers,” men employed to row in ships of war,
who regarded it as promotion to become legionary soldiers.
[610] Vinius had engaged to marry the daughter of Tigellinus, who
was a widow with a large dower.
[611] ‘Hordeonius Flaccus,’ Tac. Hist. i. 12, 53, etc.
[612] Tigellinus, we have learned from the last chapter but one, was
living at Rome. Moreover he was never in command of any legions;
and evidently some legions in the provinces are meant. Clough
conjectures that we should read Vitellius instead of Tigellinus; and this
I think very reasonable.
[613] This seems to be a mistake, as Asiaticus was a freedman of
Vitellius. See Tac. (Hist. ii. 57)
[614] Of sesterces.
[615] A.D. 69.
[616] The First Legion, in Lower Germany.
[617] At Cologne.
[618] Tac. (Hist. i. 62).
[619] Suetonius (Otho, 4) calls him Seleukus.
[620] So I have ventured to translate “speculator.” The speculatores
under the empire were employed as special adjutants, messengers,
and body-guards of a general.
[621] Counting inclusively in the Roman fashion.
[622] The Miliarium Aureum, or Golden Milestone. London Stone was
established by the Romans in Britain for the same purpose.
[623] This habit of the ancient Romans, of being carried about Rome
in litters, survives to the present day in the Pope’s “sedia gestatoria.”
[624] We learn from Tacitus that this man was the standard-bearer
(vexillarius) of a cohort which still accompanied Galba. Tac. (Hist. i.
57. 41).
[625] Galba before leaving the palace had put on a light, quilted tunic.
Suet. (Galba, ch. 19).
[626] She was obliged to pay for it. Tac. (Hist. i. 47).
[627] Patrobius was a freedman of Nero who had been punished by
Galba. The words “and Vitellius” are probably corrupt.
[628] Argius was Galba’s house-steward. He buried his master’s body
in his own private garden. Tac. (Hist. i. 49).
[629] This life must be read as the sequel to that of Galba.
[630] See Life of Galba, ch. viii., note.
[631] Tac. (Hist. i. 80, 82, s. 99).
[632] A body of troops, consisting of two centuriae (Polyb. ii. 23, 1),
and consequently commanded by two centurions.
[633] Tacitus (Hist. i. 83, 84) gives Otho’s speech at length.
[634] Almost literally translated by Plutarch from Tacitus (Hist. i. 71)
[635] Tac. (Hist. i. 86).
[636] Caius Julius Cæsar.
[637] Tac. (Hist. i. 86).
[638] These are more particularly described in Tac. (Hist. ii. 21).
[639] I imagine that Cæcina made himself disliked by using signs
instead of speaking, not that he had forgotten his language, but
because he did not choose to speak to the provincial magistrates.
Tacitus (Hist. ii. 20) says that he conducted himself modestly while in
Italy.
[640] We learn from Tacitus (Hist. ii. 20) that her name was Salonina.
He adds that she did no one any harm, but that people were offended
with her because she rode upon a fine horse and dressed in scarlet.
[641] “At every place where he halted his devouring legions, and at
every place which he was induced to pass without halting, this
rapacious chief required to be gratified with money, under threats of
plunder and conflagration.” Merivale (History of the Romans, ch. lvi.)
58. [642] Tacitus (Hist. i. 87) describes Julius Proculus as active in the
discharge of his duties at Rome, but ignorant of real war. He was,
Tacitus adds, a knave and a villain, who got himself preferred before
honest men by the unscrupulous accusations which he brought
against them.
[643] Tac. Hist. ii. 30.
[644] Tacitus, (Hist. ii. 39) says that Otho was not present, but sent
letters to the generals urging them to make haste. He adds that it is
not so easy to decide what ought to have been done as to condemn
what was actually done.
[645] Tac. (Hist. ii. 37).
[646] Tac. (Hist. ii. 43). The legions were the 21st “Rapax,” and the
1st “Adjutrix.”
[647] Their journey was, no doubt, back to Rome.
59. INDEX.
Abantes, i. Theseus, ch. 5.
Abantidas of Sikyon, iv. Aratus, ch. 2.
Abas, river, iii. Pompeius, ch. 35.
Abdera, iii. Alexander, ch. 52.
Abœokritus, iv. Aratus, ch. 16.
Abolus, river in Sicily, i. Timoleon, ch. 34.
Abra, iv. Cicero, ch. 28.
Abriorix the Gaul, iii. Cæsar, ch. 24.
Abrotonon, i. Themistokles, ch. 1.
Abouletes, iii. Alexander, ch. 68.
Abydos, i. Alkibiades, chs. 27, 29; iii. Cæsar, ch. 69.
Academia, a garden at Athens, i. Theseus, ch. 32; Solon, ch. 1; ii. Sulla,
ch. 12; Kimon, ch. 13.
——, a school of philosophy, ii. Philopœmen, ch. 1; Lucullus, ch. 42;
Comparison of Kimon and Lucullus, ch. 1; iii. Phokion, ch. 4; iv.
Cicero, ch. 4; Dion, chs. 14, 20, 22, 47, 52; Brutus, ch. 2.
Academus, i. Theseus, ch. 32.
Acerræ, ii. Marcellus, ch. 6.
Achæans of Phthiotis, i. Perikles, ch. 17; ii. Pelopidas, ch. 31;
Flamininus, ch. 10.
Achæan harbour, ii. Lucullus, ch. 12.
Achæa and Achæans, i. Perikles, chs. 17, 19; Cato Major, ch. 9;
Philopœmen, chs. 9, 12, 14, 16, and after; Flamininus, chs. 13, 17;
Agesilaus, ch. 22; iv. Agis, chs. 13, 15; Kleomenes, ch. 3, and after;
Demosthenes, ch. 17; Dion, ch. 23; Aratus, chs. 9, 11, and after.
Achaicus, surname of Mummius, ii. Marius, ch. 1.
Acharnæ, i. Themistokles, ch. 24; Perikles, ch. 33.
‘Acharnians,’ play of Aristophanes, i. Perikles, ch. 30.
Achelous, i. Perikles, ch. 19.
60. Achillas, an Egyptian, iii. Pompeius, chs. 77-80; Cæsar, ch. 49.
Achilles, i. Theseus, ch. 34; Camillus, ch. 13; Alkibiades, ch. 23; ii.
Aristeides, ch. 7; Philopœmen, ch. 1; Pyrrhus, chs. 1, 13, 22;
Comparison of Lysander and Sulla, ch. 4; iii. Pompeius, ch. 29;
Alexander, chs. 5, 15, 24.
——, a Macedonian, ii. Pyrrhus, ch. 2.
Achradina, in Syracuse, i. Timoleon, ch. 21; ii. Marcellus, ch. 18; iv. Dion,
chs. 29, 30, 35, 42.
Acilius, a historian, i. Romulus, ch. 21; ii. Cato Major, ch. 22.
——, Glabrio, Manius, ii. Sulla, ch. 12; Cato Major, chs. 12, 14.
——, a friend of Brutus, iv. Brutus, ch. 23.
——, a soldier of Cæsar, iii. Cæsar, ch. 16.
Aciris, river in Lucania, ii. Pyrrhus, ch. 16.
Acrillæ, ii. Marcellus, ch. 18.
Acrocorinthus, the citadel of Corinth, iv. Kleomenes, chs. 16, 19; Aratus,
ch. 16, and after.
Acron, king of the Ceninetes, killed by Romulus, i. Romulus, ch. 16;
Comparison, ch. 1.
Actium, iv. Antonius, chs. 62, 63, 71.
Ada, queen of Caria, iii. Alexander, ch. 22.
Adeimantus, an Archon, i. Themistokles, ch. 5; an Athenian general, i.
Alkibiades, ch. 36.
Adiabeni, ii. Lucullus, chs. 26, 27.
Admetus, king of the Molossians, i. Themistokles, ch. 24; king of Pheræ,
i. Numa, ch. 4.
Adonis, festival of, i. Alkibiades, ch. 18; iii. Nikias, ch. 13.
Adramyttium, iv. Cicero, ch. 4.
Adranum, i. Timoleon, chs. 12, 16.
Adranus, i. Timoleon, ch. 12.
Adrastean hills, ii. Lucullus, ch. 9.
Adrastus, i. Theseus, ch. 29.
Adria, a town of the Tyrrhenians, i. Camillus, ch. 16.
——, a corrupt reading in Aratus, iv. Aratus, ch. 12.
61. Adrianus, ii. Lucullus, ch. 17.
Adrumetum, iii. Cato Minor, ch. 59.
Æakides, son of Arybas, father of Pyrrhus, king of the Molossians, ii.
Pyrrhus, ch. 1.
——, ii. Pyrrhus, chs. 1, 2.
Æakus, i. Theseus, ch. 10.
Ædepsus, ii. Sulla, ch. 26.
Ædui, iii. Cæsar, ch. 26.
Ægæ, i. Themistokles, ch. 26.
Ægeis, Attic tribe, i. Alkibiades, ch. 21.
Ægeste, town in Sicily. _See_ Egesta.
Ægeus, father of Theseus, i. Theseus, chs. 3, 4, 12, 13, 17, 22;
Comparison, ch. 6.
Ægialia, iv. Kleomenes, chs. 31, 32.
Ægias, banker at Sikyon, iv. Aratus, chs. 18, 19.
Ægikoreis, Attic tribe, i. Solon, ch. 23. _See_ Aigikoreis.
Ægina, i. Themistokles, chs. 4, 15, 17, 19; Perikles, chs. 8, 34; ii.
Aristeides, ch. 8; Lysander, chs. 9, 14; iii. Comparison of Nikias and
Crassus, ch. 4; iv. Demosthenes, ch. 26.
Ægium, ii. Cato Major, ch. 12; iv. Kleomenes, chs. 17, 25; Aratus, ch. 42.
Ægle, daughter of Panopeus, i. Theseus, chs. 20, 29.
Ægospotami, i. Alkibiades, ch. 36; ii. Lysander, chs. 9-12; iv. Artaxerxes,
ch. 21.
Ælia, wife of Sulla, ii. Sulla, ch. 6.
Ælii, i. Æmilius, ch. 5.
Ælius, Sextus, ii. Flamininus, ch. 2.
Ælius Tubero, i. Æmilius, chs. 5, 27, 28.
Æmilia, daughter of Æneas, i. Romulus, ch. 2.
——, wife of Africanus, i. Æmilius, ch. 2.
——, stepdaughter of Sulla and wife of Pompeius, ii. Sulla, ch. 33; iii.
Pompeius, ch. 9.
Æmilii, i. Numa, ch. 8; Æmilius, ch. 2.
62. Æmilius, son of Pythagoras, _ibidem_.
——, Quintus, ii. Pyrrhus, ch. 21.
——, Lucius. _See_ Paulus.
——, Marcus (Lucius Æmilius Mamercinus), i. Camillus, ch. 42.
——, Marcus Lepidus, i. Æmilius, ch. 38.
——, a crier, i. Æmilius, ch. 38.
——, quaestor (censor?), i. Numa, ch. 9.
Ænaria (now Ischia), off the coast of Campania, ii. Marius, chs. 37. 40.
Æneas, i. Romulus, ch. 2; Comparison, ch. 5; Camillus, ch. 20.
Ænus, in Thrace, iii. Cato Minor, ch. 11.
Æolus, islands of, i. Camillus, ch. 8.
Æquians, i. Camillus, chs. 2, 33, 35; Coriolanus, ch. 39.
Æropus, a friend of Pyrrhus, ii. Pyrrhus, ch. 8; a king of Macedonia, iv.
Demetrius, ch. 20.
Æschines, orator, iv. Demosthenes, chs. 4, 9, 12, 15, 16, 22, 24.
Æschines of Lampra, ii. Aristeides, ch. 13.
——, scholar of Sokrates, i. Perikles, chs. 24, 32; ii. Aristeides, ch. 25.
Æschylus, an Argive, iv. Aratus, ch. 25.
——, kinsman of Timoleon, i. Timoleon, ch. 4.
——, the poet, i. Theseus, ch. 1; Romulus, ch. 9; Themistokles, ch. 14;
ii. Aristeides, ch. 3; Kimon, ch. 8; iii. Pompeius, ch. 1; Alexander, ch.
8; iv. Comparison of Demosthenes and Cicero, ch. 2; Demetrius, ch.
35.
Æsculapius, i. Numa, ch. 4; iii. Pompeius, ch. 24.
Æsion, iv. Demosthenes, ch. 11.
Æson, a river, i. Æmilius, ch. 16.
Æsopus, tragic poet, iv. Cicero, ch. 5.
——, the fabulist, i. Solon, chs. 6, 28; ii. Pelopidas, ch. 34; iii. Crassus,
ch. 32; iv. Aratus, chs. 30, 38.
Æsuvian meadow, i. Poplicola, ch. 9.
Æthra, i. Theseus, chs. 3, 4, 6, 7, 34.
63. Ætolia and Ætolians, ii. Cato Major, ch. 13; Philopœmen, chs. 7, 15;
Flamininus, chs. 7-10, 15; iii. Alexander, ch. 49; iv. Agis, ch. 13;
Kleomenes, chs. 10, 18, 34; Demetrius, ch. 40; Aratus, frequent.
Afidius, ii. Sulla, ch. 31.
Afranius, consul B.C. 60, iii. Sertorius, ch. 19; Pompeius, chs. 34, 36, 44,
67; Cæsar, chs. 36, 41, 53.
Agamemnon, i. Perikles, ch. 28; ii. Pelopidas, ch. 21; Lysander, ch. 15;
iii. Nikias, ch. 5; Sertorius ch. 1; Agesilaus, chs. 6, 9; Pompeius, ch.
67; Cæsar, ch. 41; Comparison, ch. 4.
Agariste, mother of Perikles, i. Perikles, ch. 3.
Agatharchus, a painter, i. Perikles, ch. 13.
Agathoklea, iv. Kleomenes, ch. 33.
Agathokles, son of Lysimachus, iv. Demetrius, chs. 31, 46, 47. Of
Syracuse, ii. Pyrrhus, chs. 9, 14; iv. Demetrius, ch. 25.
Agave, iii. Crassus, ch. 33, note.
Agesias of Acharnæ, ii. Aristeides, ch. 13.
Agesilaus I., king of Sparta, iii. ; Life and Comparison with Pompeius, i.
Lykurgus, chs. 12, 29; Timoleon, ch. 36; ii. Pelopidas, chs. 16, 21,
30; Flamininus, ch. 11; Lysander, chs. 22-27, 30; Kimon, chs. 10,
19; iii. Phokion, ch. 3; iv. Agis, chs. 3, 4, 14; Artaxerxes, ch. 20.
——, uncle of Agis IV., iv. Agis, chs. 6, 9, 12, 13, 16, 19; Comparison, ch.
194.
Agesipolis I., king of Sparta, son of Pausanias, ii. Pelopidas, ch. 4; iii.
Agesilaus, chs. 20, 24; iv. Agis, ch. 3.
——, II., king of Sparta, son of Kleombrotus, iv. Agis, ch. 3.
Agesistrata, mother of Agis IV., iv. Agis, chs. 4, 20.
Agiadæ, ii. Lysander, chs. 24, 30.
Agias, at Argos, iv. Aratus, ch. 29.
Agiatis, daughter of Gylippus, iv., Kleomenes, chs. 1, 22.
Agis I., king of Sparta, ii. Lysander, chs. 24, 30; iv. Agis. ch. 33.
——, II., king of Sparta, son of Archidamus II., i. Lykurgus, chs. 11, 18,
19, 28, 29; Alkibiades, chs. 24, 25, 34, 38; ii. Lysander, chs. 9, 14,
22; iii. Agesilaus, chs. 1-4.
64. ——, III., king of Sparta, son of Archidamus III., iii. Agesilaus, ch. 15; iv.
Agis, ch. 3; Demosthenes, ch. 24.
——, IV., king of Sparta, son of Eudamidas, iv. Life and Comparison with
the Gracchi; iii. Agesilaus, ch. 40; iv. Kleomenes, ch. 1; Aratus, ch.
31.
Agnus, Attic township, i. Theseus, ch. 13.
Agraulai, i. Themistokles, ch. 23.
Agraulos, i. Alkibiades, ch. 15.
Agrigentum, i. Timoleon, ch. 35; ii. Pyrrhus, ch. 22; iv. Dion, ch. 26.
Agrippa, Marcus Vipsanius, iv. Comparison of Demosthenes and Cicero,
ch. 33; Antonius, chs. 35, 65, 66, 73, 87; Brutus, ch. 27; Galba, ch.
25.
——, Menenius, i. Coriolanus, ch. 6.
Agrippina, iv. Antonius, ch. 87; Galba, ch. 14.
Agylæus, iv. Kleomenes, ch. 8.
Ahala, Servilius, iv. Brutus, ch. 1.
Ahenobarbus, the first of the name, i. Æmilius, ch. 26. _See_ Domitius.
Ajax, i. Theseus, ch. 29; Solon, ch. 10; Alkibiades, ch. 1; iii. Pompeius,
ch. 72.
Aidoneus, king of the Molossians, i. Theseus, chs. 31, 35.
Aiantis, Attic tribe, ii. Aristeides, ch. 19.
Aipeia, i. Solon, ch. 26.
Aithra. _See_ Æthra.
Aius Locutius, i. Camillus, ch. 30.
Akademus, i. Theseus, ch. 32. _See_ Academia.
Akamantis, Athenian tribe, i. Perikles, ch. 3.
Akanthians, ii. Lysander, chs. 1, 18.
Akestodorus, i. Themistokles, ch. 13.
Akontium, ii. Sulla, chs. 17, 19.
Akræ, iv. Dion, ch. 27.
Akrillæ. _See_ Acrillæ.
Akrotatus I., king of Sparta, iv. Agis, ch. 3.
65. ——, II., king of Sparta, grandson of Akrotatus I., ii. Pyrrhus, chs. 26,
28; iv. Agis, ch. 3.
Akrourian mountain, iii. Phokion, ch. 33.
Akte, iv. Aratus, ch. 40.
Alba, in Latium, i. Romulus, chs. 3, 7, 9, 27, 28; Comparison of Theseus
and Romulus, ch. 1; Pompeius, chs. 53, 80; Cæsar, ch. 60; iv.
Antonius, ch. 60.
Albans, i. Romulus, ch. 2; Camillus, ch. 17. Alban farm, ii. Sulla, ch. 31.
Alban hills, iv. Cicero, ch. 31. Alban mount, ii. Marcellus, ch. 22.
Albani of the Caucasus, ii. Lucullus, ch. 26; iii. Pompeius, chs. 34, 35,
38, 45; iv. Antonius, ch. 34.
Albinus, Decimus Brutus. _See_ under Brutus.
——, or Albinius, Lucius, i. Camillus, ch. 21.
——, Spurius Postumius, consul B.C. 110, ii. Marius, ch. 9.
Aleas, ii. Lysander, ch. 28.
Alesia, iii. Cæsar, ch. 27.
Alexander of Antioch, iv. Antonius, ch. 46.
——, son of Antony and Cleopatra, iv. Antonius, ch. 54.
——, son of Kassander, ii. Pyrrhus, chs. 6, 7; iv. Demetrius, ch. 36;
Comparison of Demetrius and Antonius, 5.
——, an Aristotelian philosopher, a teacher of Crassus, iii. Crassus, ch. 3.
——, grandson of Kraterus, iv. Aratus, ch. 17.
——, son of Demetrius, iv. Demetrius, ch. 53.
——, a freedman, iii. Pompeius, ch. 4.
——, a young Macedonian, iii. Alexander, ch. 58.
——, I., king of Macedon, ii. Aristeides, ch. 15; Kimon, ch. 14.
——, II., king of Macedon, ii. Pelopidas, chs. 26-28.
——, the Great, iii. Life; i. Theseus, ch. 5; Camillus, ch. 19; Æmilius, ch.
23; ii. Pelopidas, ch. 34; Aristeides, ch. 11; Philopœmen, ch. 4;
Flamininus, chs. 7, 21; Pyrrhus, chs. 8, 11, 19; iii. Comparison of
Nikias and Crassus, ch. 4; Eumenes, chs. 1, 6, 7; Agesilaus, ch. 15;
Pompeius, chs. 2, 34, 45; Comparison of Pompeius and Agesilaus,
ch. 2; Cæsar, ch. 11; Phokion, chs. 9, 17, 18, 22; iv. Demosthenes,
chs. 9, 20, 23, 24, 25, 27, &c.; Demetrius, chs. 10, 25, 27, 29, 37;
66. Antonius, chs. 6, 80; Comparison of Demetrius and Antonius, ch. 4;
Kleomenes, ch. 31.
Alexander, son of Priam, iv. Galba, ch. 19.
——, the Myndian, ii. Marius, ch. 17.
——, son of Perseus, i. Æmilius, ch. 37.
——, of Pheræ, ii. Pelopidas, chs. 26, 31, 32.
——, son of Polysperchon, iii. Phokion, ch. 33; iv. Demetrius, ch. 9.
——, son of Pyrrhus, ii. Pyrrhus, ch. 9.
——, son of Roxana, ii. Pyrrhus, ch. 4.
——, general of the Thracians, i. Æmilius, ch. 18.
Alexandria and Alexandrians, ii. Lucullus, ch. 2; iii. Pompeius, ch. 49;
Alexander, ch. 26; Cæsar, ch. 48; Cato Minor, ch. 35; iv. Kleomenes,
chs. 37, 39; Antonius, chs. 69, 71, and after.
Alexandropolis, iii. Alexander, ch. 9.
Alexandristes, ii. Alexander, ch. 24.
Alexas of Laodicea, iv. Antonius, ch. 72.
——, a Syrian, perhaps, same as preceding, iv. Antonius, ch. 66.
Alexikrates, ii. Pyrrhus, ch. 5.
Alexippus, iii. Alexander, ch. 41.
Alfenus Varus, general of Vitellius, iv. Otho, ch. 12. _See_ Alphenus.
Alkæus, an epigrammatist, ii. Flamininus, ch. 9.
——, of Sardis, iii. Pompeius, ch. 37.
Alkander, a Spartan, i. Lykurgus, ch. 10.
Alketas, king of the Molossians, ii. Pyrrhus, ch. 1.
——, iii. Eumenes, chs. 5, 8; Alexander, ch. 55.
Alkibiades, i. Life and Comparison with Coriolanus; Lykurgus, ch. 15;
Numa, ch. 8; Perikles, chs. 20, 37; ii. Pelopidas, ch. 4; Aristeides,
ch. 7; Flamininus, ch. 11; Lysander, chs. 3, 4, 10, 11; Comparison of
Lysander and Sulla ch. 4; iii. Nikias, chs. 9-15; Comparison of Nikias
and Crassus, chs. 2, 3; Agesilaus, ch. 3; iv. Demosthenes, chs. 1,
27; Comparison of Demosthenes and Cicero, ch. 4; Antonius, ch.
70.
Alkidamas, an orator, iv. Demosthenes, ch. 5.
67. Alkimenes, an Achæan, iv. Dion, ch. 23.
Alkimus, a promontory in Attica, i. Themistokles, ch. 32.
Alkimus, an Epirot, iv. Demetrius, ch. 21.
Alkmæon, in command of the Athenians, i. Solon, chs. 11, 30.
——, of Agraulæ, i. Themistokles, ch. 23; ii. Aristeides, ch. 25.
——, son of Amphiaraus, i. Alkibiades, ch. 1; iv. Aratus, ch. 3.
Alkmæonidæ, i. Perikles, ch. 33.
Alkman, a Lacedæmonian poet, i. Lykurgus, ch. 27; ii. Sulla, ch. 36.
Alkmena, mother of Herakles, i. Theseus, ch. 7; Romulus, ch. 28; ii.
Lysander, ch. 28.
Allia, river, i. Camillus, chs. 18, 19.
Allobroges, iv. Cicero, ch. 18.
Alopekæ, township in Attica, i. Themistokles, ch. 32; Perikles, ch. 11; ii.
Aristeides, ch. 1.
Alopekus, or Fox-hill, ii. Lysander, ch. 29.
Alphenus Varus, iv. Otho, ch. 12.
Alsæa, iv. Kleomenes, ch. 7.
Alykus, son of Skeiron, i. Theseus, ch. 32.
Amantius (Matius?), friend of Cæsar, iii. Cæsar, ch. 50.
Amanus, iii. Pompeius, ch. 39; iv. Cicero, ch. 36; Demetrius, ch. 48.
Amarsyas, i. Theseus, ch. 17.
Amathus, i. Theseus, ch. 20.
Amazons, i. Theseus, chs. 26-28; Comparison of Theseus and Romulus,
ch. 1; Perikles, ch. 31; ii. Lucullus, ch. 23; iii. Pompeius, ch. 35;
Alexander, ch. 46; iv. Demosthenes, ch. 19.
Amazoneum, at Athens, i. Theseus, ch. 27; at Chalkis, i. Theseus, ch.
27.
Ambiorix, or Abriorix, iii. Cæsar, ch. 24.
Ambrakia in Acarnania, i. Perikles, ch. 16; ii. Pyrrhus, ch. 6.
Ambrones, a Celtic tribe, ii. Marius, chs. 15. 19. 20.
Ambustus, Q. Fabius, i. Numa, ch. 12; Camillus, ch. 4.
68. Ameinias, of Dekeleia, i. Themistokles, ch. 14; Comparison of Aristeides
and Cato, ch. 2.
——, a Phokian, ii. Pyrrhus, ch. 29.
Ameria, in Umbria, ii. Marius, ch. 17.
Amestris, iv. Artaxerxes, ch. 2, 3, 27.
Amisus, a town in Pontus, i. Lucullus, chs. 14, 15, 19, 32, 33; iii.
Pompeius, ch. 38.
Ammon, ii. Lysander, chs. 20, 25; Kimon, ch. 8; iii. Nikias, ch. 13;
Alexander, chs. 26, 27, 47, 50.
——, son of Zeus and Pasiphæ, iv. Agis, ch. 9.
Ammonius, i. Themistokles, ch. 32.
Amnæus, iii. Cato Minor, ch. 19.
Amœbeus, iv. Aratus, ch. 17.
Amompharetus, i. Solon, ch. 10; ii. Aristeides, ch. 17.
Amorgos, iv. Demetrius, ch. 11.
Amphares, iv. Agis, chs. 18-21.
Amphiaraus, i. Aristeides, chs. 3, 19; iv. Aratus, ch. 3.
Amphikrates, ii. Lucullus, ch. 22.
Amphiktyons, i. Solon, ch. 11; Themistokles, ch. 20; ii. Cato Major, ch.
12; Sulla, ch. 12; Kimon, ch. 8.
Amphilochia, ii. Pyrrhus, ch. 6.
Amphipolis, i. Lykurgus, ch. 24; Æmilius, chs. 23, 24; ii. Kimon, ch. 8; iii.
Nikias, chs. 9, 10; Pompeius, ch. 74.
Amphissa, iv. Demosthenes, ch. 18; Antonius, ch. 28.
Amphitheus, ii. Lysander, ch. 27.
Amphitrope, i. Aristeides, ch. 26.
Amphitryon, ii. Lysander, ch. 28.
Amulius, i. Romulus, chs. 3, 6-9, 21; Comparison of Theseus and
Romulus, ch. 1.
Amykla, i. Alkibiades, ch. 1; Lykurgus, ch. 15.
Amyklas, iv. Agis, ch. 9.
Amyntas, a Macedonian, iii. Alexander, ch. 20.
69. ——, envoy of Philip, iv. Demosthenes, ch. 18.
——, king of Lycaonia and Galatia, iv. Antonius, chs. 61, 63.
Anaitis (Artemis), iv. Artaxerxes, ch. 27.
Anakes, i. Theseus, ch. 33; Numa, ch. 13.
Anacharsis, i. Solon, ch. 5.
Anakreon, i. Perikles, ch. 27.
Analius, Lucius, iii. Comparison of Nikias and Crassus, ch. 2.
Anaphlystus, ii. Kimon, ch. 17.
Anapus, i. Timoleon, ch. 21; iii. Nikias, 16; iv. Dion, ch. 27.
Anaxagorus, a philosopher, i. Themistokles, ch. 2; Perikles, chs. 4, 5, 6,
8, 16, 32; ii. Lysander, ch. 12; iii. Nikias, ch. 23.
Anaxandrides, of Delphi, ii. Lysander, ch. 18.
Anaxarchus, a philosopher, iii. Alexander, chs. 8, 28, 32.
Anaxenor, iv. Antonius, ch. 24.
Anaxidamus, of Chæronea, ii. Sulla, chs. 17, 1719.
Anaxilas, i. Solon, ch. 10.
Anaxilaus, i. Alkibiades, ch. 31.
Anaximenes, i. Poplicola, ch. 9; iv. Demosthenes, ch. 28; Comparison of
Demosthenes and Cicero, ch. 2.
Anaxo, i. Theseus, ch. 29; Comparison of Theseus and Romulus, ch. 6.
Ancharia, iv. Antonius, ch. 31.
Ancharius, ii. Marius, ch. 43.
Ancus Marcius, i. Coriolanus, ch. 1.
Andokides, i. Themistokles, ch. 32; Alkibiades, ch. 21; Nikias, ch. 13.
Androgeus, i. Theseus, ch. 15, 16; Comparison of Theseus and Romulus,
ch. 1.
Androkleon, ii. Pyrrhus, ch. 2.
Androkles, i. Alkibiades, ch. 19.
Androkleides, an Epirot, ii. Pyrrhus, ch. 2.
——, an author, ii. Lysander, ch. 8.
——, a Bœotian, ii. Lysander, ch. 27.
70. Androkottus, iii. Alexander, ch. 62.
Androkrates, i. Aristeides, ch. 11.
Androkydes, ii. Pelopidas, ch. 25.
Andromache, ii. Pelopidas, ch. 29; iii. Alexander, ch. 51; iv. Brutus, ch.
23.
Andromachus, of Carrhæ, iii. Crassus, ch. 29.
——, of Tauromenium, i. Timoleon, ch. 10.
Andron, i. Theseus, ch. 25.
Andronikus, ii. Sulla, ch. 26.
Andros, i. Themistokles, ch. 21; Perikles, ch. 11; Alkibiades, ch. 35; ii.
Pelopidas, ch. 2, and (?) iv. Aratus, ch. 12.
Androtion, a writer, i. Solon, ch. 15.
——, an Athenian, iv. Demosthenes, ch. 15.
Angelus, ii. Pyrrhus, ch. 2.
Anicius, Lucius, i. Æmilius, ch. 13.
Anienus, iii. Cæsar, ch. 58.
Anio, i. Poplicola, ch. 21; Camillus, ch. 41; Coriolanus, ch. 6.
Annalius. _See_ Analius.
Anius, a river in Epirus, iii. Cæsar, ch. 38.
Annius, Caius, iii. Sertorius, ch. 7.
——, who killed Antonius the orator, ii. Marius, ch. 44.
——, Milo. _See_ Milo.
——, Titus, iv. Tib. Gracchus, ch. 14.
Annius Gallus, iv. Otho, chs. 7, 8, 13.
Antæus, i. Theseus, ch. 11; iii. Sertorius, ch. 9.
Antagoras, i. Aristeides, ch. 23.
Antalkidas, i. Lykurgus, ch. 12; ii. Pelopidas, chs. 15, 30; iii. Agesilaus,
chs. 23, 26, 32; iv. Artaxerxes, chs. 21, 22.
Antemna, or Antemnæ, i. Romulus, ch. 17; ii. Sulla, ch. 30.
Antenor, i. Numa, ch. 8.
Anthedon, ii. Sulla, ch. 26.
71. Anthemion, i. Alkibiades, ch. 4; Coriolanus, ch. 14.
Anthemokritus, i. Perikles, ch. 30.
Antho, i. Romulus, ch. 3.
Anticato, iii. Cæsar, ch. 54; iv. Cicero, ch. 39.
Antikleides, iii. Alexander, ch. 46.
Antikrates, iii. Agesilaus, ch. 35.
Antikyra, iv. Demetrius, ch. 24.
——, a town in Phokis, iv. Antonius, ch. 68.
Antigenes, chief of the Asgyraspids, iii. Eumenes, chs. 13, 16; Alexander,
ch. 70.
——, a writer, iv. Alexander, ch. 46.
Antigenidas, iv. Demetrius, ch. 1.
Antigone, daughter of Philip and Berenike, ii. Pyrrhus, chs. 4, 5, 9.
——, of Pydna, iii. Alexander, ch. 48.
Antigonea, iv. Aratus, ch. 45.
Antigonis, Attic tribe, iv. Demetrius, ch. 10.
Antigonus, father of Demetrius Poliorketes, i. Romulus, ch. 17; Æmilius,
chs. 8, 33; ii. Pelopidas, chs. 1, 2; Pyrrhus, chs. 4, 8; iii. Sertorius,
ch. 1; Eumenes, chs. 3, 8, and following; Comparison of Eumenes
and Sertorius, ch. 2; Alexander, ch. 77; Phokion, chs. 29, 30; iv.
Demetrius throughout; Comparison of Demetrius and Antonius, ch.
1; Aratus, ch. 54.
——, Gonatas, son of Demetrius, i. Æmilius, ch. 8; ii. Pyrrhus, chs. 26,
29, 30, and following; iv. Demetrius, chs. 39, 40, 51, 53; Aratus,
chs. 4, 9, 12, 15, 17, 18, 23-25, 34, 41.
——, Doson, king of Macedonia, i. Coriolanus, ch. 11; Æmilius, ch. 8; ii.
Philopœmen, chs. 6, 7; iv. Kleomenes, chs. 16, 20, and following;
Aratus, ch. 38, and following.
Antigonus, king of the Jews, iv. Antonius, ch. 36.
Antilibanus, iii. Alexander, ch. 24.
Antimachus, poet of Kolophon, i. Timoleon, ch. 36; ii. Lysander, ch. 18.
——, poet of Teos, i. Romulus, ch. 12.
Antioch on the Orontes, near Daphne, capital of Syria, ii. Lucullus, ch.
21, and note; iii. Pompeius ch. 40; Cato Minor, ch. 13; iv. Demetrius,
72. ch. 32; Galba, ch. 13.
——, of Mygdonia, ii. Lucullus, ch. 32.
Antiochis, an Athenian tribe, ii. Aristeides, chs. 1, 5.
Antiochus of Askalon, ii. Lucullus, chs. 28, 42; iv. Cicero, ch. 4; Brutus,
ch. 2.
——, Athenian pilot, i. Alkibiades, chs. 10, 35; ii. Lysander, ch. 5;
Comparison of Lysander and Sulla, ch. 4.
——, of Commagene, iv. Antonius, ch. 34.
——, I., Soter, son of Seleukus, iv. Demetrius, chs. 20, 31, 38, 51.
——, III., the Great, i. Æmilius, chs. 4, 7; ii. Cato Major, chs. 12, 13, 14;
Comparison of Aristeides and Cato, chs. 2, 5; Philopœmen, ch. 17;
Flamininus, chs. 9, 15, 16, 17, 20; Sulla, ch. 12; Lucullus, chs. 11,
31; iii. Crassus, ch. 26.
Antiope, i. Theseus, chs. 26, 27; Comparison of Theseus and Romulus,
ch. 6.
Antiorus, i. Lykurgus, ch. 31.
Antipater, governor of Macedonia, i. Camillus, ch. 19; Comparison of
Alkibiades and Coriolanus, ch. 3; Comparison of Aristeides and Cato,
ch. 2; iii. Eumenes, chs. 3, 4, 6, 8, 12; Agesilaus, ch. 15; Alexander,
chs. 11, 39, 46, 47, 74, 77; Phokion, chs. 1, 17, 23, 25-31; iv. Agis,
ch. 2; Demosthenes, chs. 27-29; Comparison of Demosthenes and
Cicero, ch. 5; Demetrius, chs. 14, 47; Comparison of Antonius and
Demetrius, ch. 1.
Antipater, son of Kassander, ii. Pyrrhus, ch. 6; Demetrius, chs. 36, 37.
——, of Tarsus, ii. Marius, ch. 46; iv. Tib. Gracchus, ch. 8.
——, of Tyre, iii. Cato Minor, ch. 4.
Antiphanes, comic poet, iv. Demosthenes, ch. 9.
Antiphates, i. Themistokles, ch. 18.
Antiphilus, iii. Phokion, chs. 24, 25.
Antiphon, an orator, i. Alkibiades, ch. 3; iii. Nikias, ch. 6; iv. Antonius, ch.
28.
——, a criminal, iv. Demosthenes, ch. 14.
Antisthenes, i. Lykurgus, ch. 30; Perikles, ch. 1; Alkibiades, ch. 1.
Antistia, wife of Appius Claudius, iv. Tib. Gracchus, ch. 4.
73. ——, wife of Pompeius, iii. Pompeius, chs. 4, 9.
Antistius (Appuleius?), in command of ships, iv. Brutus, ch. 25.
——, father-in-law of Pompeius, iii. Pompeius, chs. 4, 9.
Antium, i. Romulus, ch. 14; Fabius, ch. 2; Coriolanus, chs. 9, 13, 39; iv.
Brutus, ch. 21.
Anton, son of Hercules, iv. Antonius, ch. 4.
Antonia, iv. Antonius, ch. 87.
Antonias, flagship of Cleopatra, iv. Antonius, ch. 60.
Antonius, Marcus, the orator, ii. Marius, ch. 44; iii. Pompeius, ch. 24; iv.
Antonius, ch. 1.
——, Creticus, father of the triumvir, iv. Antonius, ch. 1.
——, Caius, son of the orator, iii. Cicero, chs. 11, 12, 16; iv. Antonius, ch.
1.
——, Caius, brother of the triumvir, iv. Antonius, chs. 15, 22; Brutus, ch.
26, and after.
——, Lucius, brother of the triumvir, iv. Antonius, ch. 15.
——, Iulus, son of Marcus Antonius and Fulvia, iv. Antonius, ch. 87.
Antonius, Publius, more properly Caius, iii. Cæsar, ch. 4.
——, Lucius Antonius Saturninus, who rebelled against Domitian, i.
Æmilius, ch. 25.
——, murderer of Sertorius, iii. Sertorius, ch. 26.
——, Marcus, the triumvir, iv. Life and Comparison; i. Numa, ch. 20;
Æmilius, ch. 38; iii. Pompeius, chs. 58, 59; Cæsar, ch. 30, and after;
Cato Minor, ch. 73; iv. Cicero, ch. 41, and after; Demetrius, ch. 1;
Brutus, chs. 18-24, 38, 41, and after; Comparison of Brutus and
Dion, ch. 5.
——, Honoratus, iv. Galba, ch. 14.
Antyllius, Q., iv. C. Gracchus, chs. 13, 14; Comparison, ch. 5.
Antyllus, iv. Antonius, chs. 71, 81, 87.
Anytus, i. Alkibiades, ch. 3; Coriolanus, ch. 14.
Aollius, or Avillius. _See_ Avillius.
Aous. _See_ Anius.
Apama, wife of Seleukus, iv. Demetrius, ch. 31.
74. ——, daughter of Artaxerxes, iv. Artaxerxes, ch. 27.
——, daughter of Artabazus, wife of Ptolemy, iii. Eumenes, ch. 1.
Apellas, a Macedonian, iv. Aratus, ch. 48.
Apelles, the painter, iii. Alexander, ch. 4; iv. Demetrius, ch. 22; Aratus,
ch. 13.
Apellikon, of Teos, ii. Sulla, ch. 26.
Apemantes, iv. Antonius, ch. 70.
Aperantians, ii. Flamininus, ch. 15.
Aphetai, i. Themistokles, ch. 7.
Aphidnæ, i. Theseus, chs. 31-33; comparison, ch. 6.
Aphidnus, i. Theseus, ch. 33.
Aphytæ, ii. Lysander, ch. 20.
Aphrodite, i. Numa, ch. 19.
Aphepsion, an Archon, ii. Kimon, ch. 8.
Apis, iv. Kleomenes, ch. 34.
Apollodorus, governor of Babylon, iii. Alexander, ch. 73.
——, the Phalerian, iii. Cato Minor, ch. 46.
——, a Sicilian, iii. Cæsar, ch. 49.
——, a writer, i. Lykurgus, ch. 1.
——, an Athenian, iv. Demosthenes, ch. 15; comparison of Demosthenes
and Cicero, ch. 3.
Apollokrates, iv. Dion, chs. 37, 40, 41, 51, 56.
Apollonia, in Mysia, ii. Lucullus, ch. 11.
——, in Sicily, i. Timoleon, ch. 24.
——, in Epirus, ii. Sulla, ch. 27; iii. Cæsar, chs. 37, 38; iv. Cicero, ch.
4343; Antonius, ch. 16; Brutus, chs. 22, 25, 26.
Apollonides, iv. Demetrius, ch. 50.
——, a philosopher, iii. Cato Minor, chs. 65, 66, 69, 70.
Apollonius, son of Molon, iii. Cæsar, ch. 3; iv. Cicero, ch. 4.
——, despot of Zenodotia, iii. Crassus, ch. 17.
Apollophanes, iii. Agesilaus, ch. 12.
75. Apollothemis, i. Lykurgus, ch. 31.
Aponius, iv. Galba, ch. 8.
Apothetæ, or the “Exposure,” a chasm under Mount Taygetus, i.
Lykurgus, ch. 15.
Appian Road, iii. Cæsar, ch. 5.
Appius Claudius (Cæcus), ii. Pyrrhus, chs. 18, 19.
——, Claudius, consul B.C. 212, i. Comparison of Fabius and Perikles, ch.
2; ii. Marcellus, chs. 13, 14.
——, Claudius, consul B.C. 177, i. Poplicola, ch. 7.
——, Claudius, consul B.C. 143, i. Æmilius ch. 38; Tib. Gracchus, chs. 4,
9, 13.
——, Claudius, consul B.C. 54, iii. Pompeius, ch. 57.
——, Claudius, ii. Sulla, ch. 29.
——, Clodius, sent by Lucullus to Tigranes, ii. Lucullus, chs. 19, 21, 29.
——, Clausus, i. Poplicola, chs. 21, 22, same as Appius Claudius;
Coriolanus, ch. 19.
Appius, governor of Sardinia, iii. Cæsar, ch. 21.
——, Marcus, iv. Cicero, ch. 26.
Apsephion, in text Aphepsion, Archon at Athens, ii. Kimon, ch. 8.
Apsus, ii. Flamininus, ch. 3.
Aptera, ii. Pyrrhus, ch. 30.
Apuleius, Lucius, i. Camillus, ch. 12.
Apulia, ii. Marcellus, ch. 24.
Aquæ Sextiæ, ii. Marius, ch. 18.
Aquillii, i. Poplicola, chs. 3, 4, and after.
Aquillius, Manius, ii. Marius, ch. 14.
——, Gallus, P., tribune of the people, iii. Cato Minor, ch. 43.
Aquinius, Marcus, iv. Cicero, ch. 27.
Aquinum, iv. Otho, ch. 5.
Aquinus, iii. Sertorius, ch. 13.
Arabia and Arabians, i. Theseus, ch. 5; ii. Lucullus, ch. 21, and after; iii.
Crassus, chs. 28, 29, and after; Pompeius, ch. 44, and after;
76. Alexander, ch. 24; iv. Antonius, ch. 37, and after; Arabia Nabathea,
iv. Antonius ch. 2730.
Arachosia, iii, Eumenes, ch. 19.
Arakus, ii. Lysander, ch. 7.
Arar, iii. Caesar, ch. 18.
Araterion, i. Theseus, ch. 35.
Arateum, iv. Aratus, ch. 53.
Aratus of Sikyon, iv. Life and Comparison; ii. Philopœmen, chs. 1, 8; iv.
Agis, ch. 15; Kleomenes, chs. 3, 4, 6, 15-17, 20, 25.
——, son of the preceding, iv. Aratus, chs. 49-54.
Araxes, ii. Lucullus, ch. 26; iii. Pompeius, chs. 33, 34; iv. Antonius, chs.
49, 52.
Arbakes, iv. Artaxerxes, ch. 14.
Arbela, i. Camillus, ch. 19; iii. Pompeius, ch. 36; Alexander, ch. 31.
Arcadia and Arcadians, i. Theseus, ch. 32; Numa, ch. 18; the Arcadian
months, Coriolanus, ch. 3; ii. Pelopidas, chs. 4, 20, and after;
Philopœmen, ch. 13; Agesilaus, chs. 15, 22, 30, 32; iv. Kleomenes,
ch. 3, and after; Demosthenes, ch. 27; Demetrius, ch. 25; Aratus,
ch. 34.
Archedamus, an Ætolian, i. Æmilius, ch. 23.
Archedemus, an Ætolian, ii. Comparison of Titus and Philopœmen, ch. 2.
——, a friend of Archytas, iv. Dion, ch. 18.
Archelaus, general of Antigonus Gonatas, iv. Aratus, ch. 22.
——, of Delos, ii. Sulla, ch. 22.
——, general of Mithridates, ii. Marius, ch. 34; Sulla, chs. 11, 15-17, 19-
24; Comparison, ch. 4; Lucullus, chs. 3, 8, 9, 11.
——, king of Cappadocia, iv. Antonius, ch. 61.
——, an Egyptian general, son of the preceding, iv. Antonius, ch. 3.
——, a writer, ii. Kimon, ch. 4.
——, a poet, ii. Kimon, ch. 4.
——, king of Sparta, i. Lykurgus, ch. 5.
——, in Phokis, ii. Sulla, ch. 17.
Archeptolis, half-brother of Themistokles, i. Themistokles, ch. 32.
77. ——, son of Themistokles, i. Themistokles, ch. 32.
Archestratus, an Athenian, i. Alkibiades, ch. 16; ii. Lysander, ch. 19.
——, an Athenian, iii. Phokion, ch. 33.
——, a dramatic poet, i. Aristeides, ch. 1.
Archias, an Athenian, ii. Pelopidas, ch. 10.
——, a Theban, ii. Pelopidas, chs. 5, 7-11; iii. Agesilaus, ch. 23.
——, a Thurian, iv. Demosthenes, chs. 28, 29.
Archibiades, iii. Phokion, ch. 10.
Archibius, iv. Antonius, ch. 86.
Archidamia, grandmother of Agis, ii. Pyrrhus, ch. 27; iv. Agis, chs. 4, 20;
perhaps not both the same.
Archidamidas, i. Lykurgus, ch. 19.
Archidamus II., king of Sparta, i. Lykurgus, ch. 19; Perikles, chs. 8, 29,
33; ii. Kimon, ch. 16; iii. Crassus, ch. 2; Agesilaus, chs. 1, 2; iv.
Kleomenes, ch. 27.
——, III., king of Sparta, i. Camillus, ch. 19; iii. Agesilaus, ch. 25, 33, 39,
40; iv. Agis, ch. 3.
——, IV., king of Sparta, iv. Agis, ch. 3; Demetrius, ch. 35.
——, V., king of Sparta, iv. Kleomenes, chs. 1, 5; comparison, ch. 5.
‘Archilochi,’ play by Kratinus, ii. Kimon, ch. 10.
Archilochus, i. Theseus, ch. 5; Numa, ch. 4; Perikles, chs. 2, 27; ii.
Marius, ch. 21; iii. Phokion, ch. 7; Cato minor, ch. 7; Demetrius, ch.
35; Galba, ch. 27.
Archimedes, ii. Marcellus, chs. 14-19.
Archippe, i. Themistokles, ch. 32.
Archippus, i. Alkibiades, ch. 1.
Architeles, i. Themistokles, ch. 7.
Archonides, iv. Dion, ch. 42.
Archytas, ii. Marcellus, ch. 14; iv. Dion, chs. 18, 20.
Ardea, i. Camillus, chs. 17, 23, 24.
Ardettus, i. Theseus, ch. 27.
Areius or Arius, iv. Antonius, ch. 80.
78. Areopagus, i. Solon, chs. 19, 31; Themistokles, ch. 10; Perikles, chs. 7,
9; ii. Kimon, chs. 10, 15; iii. Phokion, ch. 16; iv. Demosthenes, chs.
14, 26; Cicero, ch. 24.
Aretæus, iv. Dion, ch. 31.
Arete, i. Timoleon, ch. 33; iv. Dion, chs. 6, 31, 51, 58.
Arethusa in Macedonia, i. Lykurgus, ch. 31.
——, iv. Antonius, ch. 37.
Areus I., king of Sparta, ii. Pyrrhus, chs. 26, 27, 29, 30, 32; iv. Agis, ch.
3.
Areus II., king of Sparta, iv. Agis, ch. 3.
Argas, iv. Demosthenes, ch. 4.
Argileonis, i. Lykurgus, ch. 24.
Arginusæ, i. Perikles, ch. 37; ii. Lysander, ch. 7.
Argo, i. Theseus, ch. 19.
Argos and Argives, i. Lykurgus, ch. 7; Alkibiades, chs. 14, 15; ii.
Pelopidas, ch. 24; Philopœmen, chs. 12, 18; Pyrrhus, ch. 29, and
after; iii. Nikias, ch. 10; Agesilaus, ch. 31; Pompeius, ch. 24; iv.
Kleomenes, ch. 17, and after; Demetrius, ch. 25; Aratus throughout.
Argius, Galba’s freedman, iv. Galba, chs. 11, 28.
Argyraspids, iii. Eumenes, chs. 13, 16, 17, 19.
Ariadne, i. Theseus, chs. 19-21; Comparison, ch. 1.
Ariæus, iv. Artaxerxes, ch. 11.
Ariamenes, i. Themistokles, ch. 14.
Ariamnes, iii. Crassus, ch. 21.
Ariarathes II., king of Cappadocia, iii. Eumenes, ch. 3.
——, son of Mithridates, ii. Sulla, ch. 11; Pompeius, ch. 37.
——, iii. Pompeius, ch. 42.
Ariaspes, iv. Artaxerxes, chs. 29, 30.
Arimanius, i. Themistokles, ch. 28.
Ariminum, ii. Marcellus, ch. 4; iii. Pompeius, ch. 60; Cæsar, chs. 32, 33;
Cato Minor, ch. 52.
Arimnestus, a Platæan, ii. Aristeides, ch. 11.
——, a Spartan, ii. Aristeides, ch. 19.
Ariobarzanes, ii. Sulla, chs. 5, 22, 24; iv. Cicero, ch. 36; Demetrius, ch.
4.
Ariomandes, ii. Kimon, ch. 12.
Ariovistus, iii. Cæsar, ch. 19.
Ariphron, i. Alkibiades, chs. 1, 3.
Aristænetus, Aristæus, or Aristænus, ii. Philopœmen, ch. 17.
Aristagoras, ii. Lucullus, ch. 10.
Aristander, iii. Alexander, chs. 2, 25, 33, 50, &c.
Aristeas of Argos, ii. Pyrrhus, chs. 30, 32.
——, of Prokonnesus, i. Romulus, ch. 28.
Aristeides, ii. Life and Comparison; i. Themistokles, chs. 3, 5, 11, 12, 16,
20; Perikles, ch. 7; Comparison of Alkibiades and Coriolanus, chs. 1,
3; ii. Pelopidas, ch. 4; Kimon, chs. 5, 6, 10; iii. Nikias, ch. 11;
Comparison, ch. 1; Phokion, chs. 3, 7; iv. Demosthenes, ch. 14.
——, a Lokrian, i. Timoleon, ch. 6.
——, author of Milesian Tales, iii. Crassus, ch. 32.
——, son of Xenophilus, ii. Aristeides, ch. 1.
Aristion, i. Numa, ch. 9; ii. Sulla, chs. 12-14, 23; Lucullus, ch. 19.
——, Corinthian pilot, iii. Nikias, chs. 20, 25.
Aristippus of Argos, ii. Pyrrhus, ch. 30; iv. Aratus, chs. 25, 30.
——, of Cyrene, iv. Dion, ch. 19.
Aristobulus, Alexander’s historian, iii. Alexander, chs. 15, 18, 46, 74; iv.
Demosthenes, ch. 23.
——, king of Judæa, iii. Pompeius, chs. 39, 44; iv. Antonius, ch. 3.
Aristodemus, of Miletus, iv. Demetrius, chs. 8, 17.
——, despot of Megalopolis, ii. Philopœmen, ch. 1; iv. Agis, ch. 3.
——, founder of the royal houses of Sparta, i. Lykurgus, ch. 1, and note;
iii. Agesilaus, ch. 19.
Aristodikus, i. Perikles, ch. 10.
Aristogeiton, companion of Harmodius, i. Aristeides, ch. 27.
——, an Athenian sycophant, iii. Phokion, ch. 10; iv. Demosthenes, ch.
15.
Aristokleitus, ii. Lysander, ch. 2.
Aristokrates, an Athenian, iv. Demosthenes, ch. 15.
——, son of Hipparchus, a Spartan writer, i. Lykurgus, chs. 4, 31; ii.
Philopœmen, ch. 16.
Aristokrates, a rhetorician, iv. Antonius, ch. 69.
Aristokritus, iii. Alexander, ch. 10.
Aristomache, i. Timoleon, ch. 33; iv. Dion, chs. 6, 7, 14, 34, 51, 58.
Aristomachus, Achæan general, iv. Kleomenes, ch. 4.
——, despot of Argos, iv. Aratus, chs. 25, 35, 44.
——, of Sikyon, iv. Aratus, ch. 5.
Aristomenes, i. Romulus, ch. 25; iv. Agis, ch. 21.
Ariston of Keos, i. Themistokles, ch. 3; ii. Aristeides, ch. 2.
——, of Chios, ii. Cato Major, ch. 18; iv. Demosthenes, ch. 10.
——, a Corinthian pilot, iii. Nikias, chs. 20, 25.
——, captain of the Pæonians, iii. Alexander, ch. 39.
——, friend of Peisistratus, i. Solon, ch. 30.
Aristonikus, admiral of Mithridates, ii. Lucullus, ch. 11.
——, of Marathon, iv. Demosthenes, ch. 28.
Aristonikus of Pergamus, ii. Flamininus, ch. 21; iv. Tib. Gracchus, ch. 20.
Aristonous, ii. Lysander, ch. 18.
Aristophanes, the comic poet, i. Themistokles, ch. 19; Perikles, ch. 30,
the verses; Alkibiades, ch. 16; ii. Kimon, ch. 16; iii. Nikias, chs. 4, 8;
iv. Demetrius, ch. 12; Antonius, ch. 70.
——, a Macedonian, iii. Alexander, ch. 51.
Aristophon, archon at Athens, iv. Demosthenes, ch. 24.
——, an Athenian, iii. Phokion, ch. 7.
——, a painter, i. Alkibiades, ch. 16.
Aristoteles, of Argos, iv. Kleomenes, ch. 20; Aratus, ch. 44.
——, a logician, iv. Aratus, ch. 3.
——, of Sikyon, iv. Aratus, ch. 33.
Aristotle, i. Theseus, chs. 3, 16, 25; Lykurgus, chs. 5, 6; Solon, chs. 11,
31; Themistokles, ch. 10; Camillus, ch. 22; Perikles, chs. 9, 10, 25;
Comparison of Alkibiades and Coriolanus, ch. 3; ii. Pelopidas, chs. 3,
18; Aristeides, ch. 27; Comparison, ch. 2; Lysander, ch. 2; Sulla, ch.
26; Kimon, ch. 10; iii. Nikias, chs. 1, 2; Crassus, ch. 3; Alexander,
chs. 7, 8, 17, 52, 54, 55, 74, 77; iv. Kleomenes, ch. 9; Cicero, ch.
24; Dion, ch. 22.
Aristoxenus, i. Lykurgus, ch. 31; Timoleon, ch. 15; ii. Aristeides, ch. 27;
iii. Alexander, ch. 4.
Aristratus, iv. Aratus, ch. 13.
Aristus, iv. Brutus, ch. 2.
Arkesilaus, philosopher, ii. Philopœmen, ch. 1; iv. Aratus, ch. 5.
——, a Spartan, iv. Agis, ch. 18.
Arkissus, ii. Pelopidas, ch. 13.
Armenia, and Armenians, i. Camillus, ch. 19; ii. Sulla, ch. 5; Kimon, ch.
3; Lucullus, chs. 9, 21, 24, 25, 27, 31, and after; iii. Eumenes, chs. 4,
5, 16; Crassus, chs. 18, 22, 32; Pompeius, chs. 31-34, 39, 44;
Cæsar, ch. 50; iv. Cicero, ch. 10; Demetrius, ch. 46; Antonius, chs.
34, 37-39, 41, 49, 50, 54, 56.
Armenian Carthage, ii. Lucullus, ch. 32.
Armilustrum, i. Romulus, ch. 23.
Arnakes, i. Themistokles, ch. 16; ii. Aristeides, ch. 9.
Arpates, iv. Artaxerxes, ch. 30.
Arpinum, ii. Marius, ch. 3; iv. Cicero, ch. 8.
Arrhenides, iv. Demosthenes, ch. 25.
Arrhidæus, son of Philip, and himself called Philip, iii. Alexander, chs. 10,
77; compare iii. Eumenes, ch. 12; and Phokion, ch. 33.
Arrius, Quintus, iv. Cicero, ch. 15.
Arruntius, iv. Antonius, ch. 66.
Arsakes, ii. Sulla, ch. 5; iii. Crassus, chs. 18, 27; Pompeius, ch. 76; iv.
Comparison of Demetrius and Antonius, ch. 1.
Arsakidæ, iii. Crassus, ch. 32.
Arsames, iv. Artaxerxes, ch. 30.
Arsanias, ii. Lucullus, ch. 31.
Arsian Grove, i. Poplicola, ch. 9.
Arsikas, iv. Artaxerxes, ch. 1.
Arsis, iii. Pompeius, ch. 7.
Artabanus, i. Themistokles, ch. 27.
Artabazes. _See_ Artavasdes.
Artabazus, father of Barsine, iii. Eumenes, ch. 1; Alexander, ch. 21.
——, a Persian, ii. Aristeides, ch. 19.
Artagerses, iv. Artaxerxes, ch. 9.
Artasyras, iv. Artaxerxes, ch. 12.
Artauktes, i. Themistokles, ch. 13.
Artavasdes, king of Armenia, same as Artabazes, iii. Crassus, chs. 19,
22, 23; iv. Antonius, chs. 37, 39, 50; Comparison, ch. 5.
Artaxas, ii. Lucullus, ch. 31.
Artaxata, ii. Lucullus, ch. 31.
Artaxerxes I., Longimanus, i. Alkibiades, ch. 37; iv. Artaxerxes, ch. 1.
——, II., Mnemon, iv. Life; ii. Pelopidas, ch. 30.
Artemidorus of Knidos, iii. Cæsar, ch. 65.
——, a Greek, ii. Lucullus, ch. 15.
Artemisia, i. Themistokles, ch. 14.
Artemisium, i. Themistokles, chs. 7, 8, 9; Alkibiades, ch. 1.
Artemius of Kolophon, iii. Alexander, ch. 51.
Artemon, i. Perikles, ch. 27.
Arthmiadas, i. Lykurgus, ch. 5.
Arthmias of Zelea, i. Themistokles, ch. 6.
Artorius, Marcus, iv. Brutus, ch. 41.
Aruns, son of Porsena, i. Poplicola, ch. 19.
——, a Tuscan, i. Camillus, ch. 15.
——, son of Tarquin, i. Poplicola, ch. 9.
Aruveni, iii. Cæsar, chs. 25, 26.
Arverni. _See_ Aruveni.
Arybas, ii. Pyrrhus, ch. 1.
Arymbas, iii. Alexander, ch. 2.
Asbolomeni, ii. Kimon, ch. 1.
Ascalis. _See_ Askalis.
Ascanius, i. Romulus, ch. 2.
Asculum in Apulia, ii. Pyrrhus, ch. 21.
Asculum, in Picenum, iii. Pompeius, ch. 4, and after.
Asea or Alsea, iv. Kleomenes, ch. 7.
Asia, frequent. The Asiatic orators, iv. Cicero, ch. 4. The Asiatic style of
speaking, iv. Antonius, ch. 2.
——, daughter of Themistokles, i. Themistokles, ch. 32.
Asiaticus, iv. Galba, ch. 20.
Asinarus and Asinaria, iii. Nikias, ch. 28.
Asinius Pollio, iii. Pompeius, ch. 72; Cæsar, chs. 32, 46, 52; Cato Minor,
ch. 53; iv. Antonius, ch. 9.
Askalis, iii. Sertorius, ch. 9.
Askalon, ii. Lucullus, ch. 42; iv. Cicero, ch. 4; Brutus, ch. 2.
Asklepiades, a grammarian, i. Solon, ch. 1.
——, son of Hipparinus, iii. Phokion, ch. 22.
Asopia, i. Solon, ch. 9.
Asopus, river in Bœotia, ii. Aristeides, chs. 11, 15.
——, father of Sinope, ii. Lucullus, ch. 23.
Aspasia, i. Perikles, chs. 24, 25, 30, 32.
——, or Milto, of Phokæa, i. Perikles, ch. 24; iv. Artaxerxes, chs. 26, 27,
28.
Aspendus, i. Alkibiades, ch. 26.
Aspetus, a name of Achilles, ii. Pyrrhus, ch. 1.
Aspis, at Argos, ii. Pyrrhus, ch. 32; iv. Kleomenes, chs. 17, 21.
Assus and Assia, ii. Sulla, chs. 16, 17.
Assyria, ii. Lucullus, ch. 26; iii. Crassus, ch. 22.
Astenius, of Kolophon, iii. Alexander, ch. 51.
Asterie, ii. Kimon, ch. 4.
Asteropus, iv. Kleomenes, ch. 10.
Astura, iv. Cicero, ch. 47.
Astyanax, iv. Brutus, ch. 23.
Astyochus, i. Alkibiades, ch. 25.
Astypalæa, i. Romulus, ch. 28.
Astyphilus, ii. Kimon, ch. 18.
Asylus, a god, i. Romulus, ch. 9.
Ateius, tribune of the people, iii. Crassus, ch. 16.
Ateius, Marcus, or Teius, ii. Sulla, ch. 14.
Atellius, iv. Brutus, ch. 39.
Athamania, and Athamanes, ii. Flamininus, ch. 15; iii. Pompeius, ch. 66.
Athanis, i. Timoleon, chs. 23, 37.
Athenodorus, surnamed Cordylio, a stoic philosopher, iii. Cato Minor, chs.
10, 16.
Athenodorus of Imbros, iii. Phokion, ch. 18.
——, son of Sandon, i. Poplicola, ch. 17.
Athenophanes, iii. Alexander, ch. 35.
Athens and the Athenians, frequent.
Athos, iii. Alexander, ch. 72.
Atilius. _See_ Attilius.
Atiso or Adige, ii. Marius, ch. 23.
Atlantic islands, iii. Sertorius, ch. 8.
——, ocean, i. Timoleon, ch. 20; iii. Sertorius, ch. 8; Eumenes, ch. 2;
Cæsar, ch. 23.
Atlantis, i. Solon, ch. 31.
Atossa, daughter of Artaxerxes II., iv. Artaxerxes, chs. 23, 26, 27, 30,
and after.
Atreus, ii. Kimon, ch. 7; iv. Cicero, ch. 5.
Atropatene and Atropatenians (Satrapenians), ii. Lucullus, ch. 31; iv.
Antonius, ch. 38.
Attaleia, iii. Pompeius, ch. 76.
Attalus, uncle of Kleopatra, wife of Philip, iii. Alexander, chs. 9, 10.
——, iii. Alexander, ch. 55.
——, I., king of Pergamus, ii. Flamininus, ch. 6; iv. Antonius, ch. 60.
——, III., Philometor, i. Camillus, ch. 19; iv. Tib. Gracchus, ch. 14;
Demetrius, ch. 20.
Attes or Attis, i. Numa, ch. 4; iii. Sertorius, ch. 1.
Attia, mother of Augustus, iv. Cicero, ch. 44; Antonius, ch. 31.
Attica, frequent. _See_ especially i. Theseus, first chapters.
Atticus, Cicero’s correspondent, iv. Cicero, ch. 45; Brutus, chs. 26, 29.
Atticus, Julius, iv. Galba, ch. 26.
Attilia, iii. Cato Minor, chs. 7, 9, 24.
Attiliis, a probable correction for Hostilii, ii. Comparison of Cato and
Aristeides, ch. 1.
Attilius, Vergilio, iv. Galba, ch. 26.
——, Marcus (more correctly Caius), i. Numa, ch. 20.
——, iv. Brutus, ch. 39.
Attis, i. Numa, ch. 4; iii. Sertorius, ch. 1.
Attius. _See_ Tullus and Varus.
Aufidius, Tullus, i. Coriolanus, ch. 22, and after.
——, a lieutenant of Sertorius, iii. Sertorius, chs. 26, 27.
Aufidus, i. Fabius, ch. 15.
Augustus. _See_ Cæsar.
Aulis, ii. Pelopidas, ch. 21; Lysander, ch. 27; iii. Agesilaus, ch. 6.
Aurelia, mother of Cæsar, iii. Cæsar, ch. 9, and after; iv. Cicero, ch. 28.
Aurelius, Caius (in text Onatius), iii. Crassus, ch. 12; Pompeius, ch. 23.
——, Quintus, ii. Sulla, ch. 31.
Autokleides, iii. Nikias, ch. 23.
Autocthones, i. Theseus, ch. 3.
Autoleon, ii. Pyrrhus, ch. 9.
Autolykus, an athlete, ii. Lysander, ch. 15.
——, founder of Sinope, ii. Lucullus, ch. 23.
Automatia, i. Timoleon, ch. 36.
Auximum, iii. Pompeius, ch. 6.
Aventine, i. Romulus, chs. 9, 20; Numa, ch. 15; iv. C. Gracchus, ch. 15.
Avillius, i. Romulus, ch. 14.
Axiochus, i. Perikles, ch. 24.
Axius, Crassus, iv. Cicero, ch. 25.
——, a river in Macedonia, iv. Demetrius, ch. 42.
Babyca, i. Lykurgus, ch. 6; ii. Pelopidas, ch. 17.
Babylon, Babylonia, Babylonians, ii. Lucullus, ch. 26; iii. Crassus, ch. 17;
comparison, ch. 4; Eumenes, ch. 3; Alexander, chs. 35, 57, 69, 73;
iv. Demetrius, ch. 7; Antonius, ch. 45; Artaxerxes, ch. 7.
Babylonian tapestry, ii. Cato Major, ch. 4.
Bacchæ of Euripides, iii. Crassus, ch. 33.
Bacchiadæ, ii. Lysander, ch. 1.
Bacchides, ii. Lucullus, ch. 18.
Bacchylides, i. Numa, ch. 4.
Bacillus, Lucius, ii. Sulla, ch. 9.
Bactria, Bactrians, iii. Crassus, ch. 16; Comparison, ch. 4; iv. Antonius,
ch. 37.
Bactrian horse, iii. Alexander, ch. 32.
Baebius, M., i. Numa, ch. 22.
Baetica, iii. Sertorius, chs. 8, note, 12.
Baetis, the Guadalquivir, ii. Cato Major, ch. 10; iii. Sertorius, chs. 8, 12.
Bagoas, iii. Alexander, ch. 49.
Baiæ, ii. Marius, ch. 34.
Balbus, ii. Sulla, ch. 29.
——, Cæsar’s friend, iii. Cæsar, ch. 50.
——, Postumius Balbus, probably Albus, i. Poplicola, ch. 22.
Balinus or Kebalinus, iii. Alexander, ch. 49.
Balissus, iii. Crassus, ch. 23.
Balte, i. Solon, ch. 12.
Bambyke, or Hierapolis, iv. Antonius, ch. 37.
Bandius, ii. Marcellus, chs. 10, 11.
Bantia, ii. Marcellus, ch. 29.
Barbius, iv. Galba, ch. 24.
Barca, a friend of Cato, iii. Cato Minor, ch. 37.
——, in Hannibal’s army, i. Fabius, ch. 17.
——, Hamilcar, ii. Cato Major, ch. 8.
Bardyæi, ii. Marius, chs. 43, 44.
Bardyllis, ii. Pyrrhus, ch. 9.
Bargylians, ii. Flamininus, ch. 12.
Barsine, daughter of Artabazus, wife of Alexander, iii. Eumenes, ch. 1;
Alexander, ch. 21.
Barsine, sister of preceding, wife of Eumenes, iii. Eumenes, ch. 1.
Barinus, Publius, iii. Crassus, ch. 9. Publius Varinius Glaber was his
name.
Basillus, Lucius, ii. Sulla, ch. 9.
Basilica Pauli. _See_ Paulus.
——, Porcia, ii. Cato Major, ch. 19.
Bastarnæ or Basternæ, i. Æmilius, chs. 9, 12.
Bataces, ii. Marius, ch. 17.
Batalus, iv. Demosthenes, ch. 4.
Batavians, iv. Otho, ch. 12.
Bathykles, i. Solon, ch. 4.
Batiates, Lentulus, iii. Crassus, ch. 8.
Baton, iv. Agis, ch. 15.
Battiadæ, i. Coriolanus, ch. 11.
Bedriacum, iv. Otho, chs. 11, 13.
Belaeus, ii. Marius, ch. 40.
Belbina, iv. Kleomenes, ch. 4.
Belgæ, iii. Pompeius, ch. 51; Cæsar, ch. 20.
Belitaras, iv. Artaxerxes, ch. 19.
Bellerophon, i. Coriolanus, ch. 32.
Bellinus, iii. Pompeius, ch. 24.
Bellona, ii. Sulla, chs. 7, 27, 30; iv. Cicero, ch. 13.
Beluris, iv. Artaxerxes, ch. 22.
Beneventum, ii. Pyrrhus, ch. 25.
Berenike of Chios, wife of Mithridates, ii. Lucullus, ch. 18.
Berenike, wife of Ptolemy, ii. Pyrrhus, chs. 4, 6.
Berenikis, ii. Pyrrhus, ch. 6.
Berœa, ii. Pyrrhus, ch. 11; iii. Pompeius, ch. 64; iv. Demetrius, ch. 44.
Berytus, iv. Antonius, ch. 51.
Bessus, iii. Alexander, ch. 42.
Bestia, Calpurnius, consul B.C. 111, ii. Marius, ch. 9.
——, a tribune, iv. Cicero, ch. 23.
Bias of Priene, i. Solon, ch. 4.
Bibulus, Calphurnius, consul B.C. 59, iii. Pompeius, chs. 47, 48, 54;
Cæsar, ch. 14; Cato Minor, chs. 25, 31, 32, 47, 54; iv. Antonius, ch.
5.
Bibulus, step-son of Brutus, iv. Brutus, chs. 13, 23.
——, Publicius, a tribune, ii. Marcellus, ch. 27.
Bion, i. Theseus, ch. 26.
Birkenna, ii. Pyrrhus, ch. 9.
Bisaltæ, i. Perikles, ch. 11.
Bisanthe, i. Alkibiades, ch. 36.
Bithynia and Bithynians, i. Numa, ch. 4; Alkibiades, chs. 29, 37; ii. Cato
Major, ch. 9; Flamininus, ch. 20; Sulla, chs. 11, 22; Comparison, ch.
5; Lucullus, ch. 6, and after; iii. Sertorius, chs. 23, 24; Pompeius,
ch. 30; Cæsar, chs. 1, 50; iv. Brutus, chs. 19, 28.
Bithys, iv. Aratus, ch. 34.
Biton, i. Solon, ch. 27.
Blossius, iv. Tib. Gracchus, chs. 8, 17, 20.
Bocchoris, iv. Demetrius, ch. 27.
Bocchus, king of Mauritania, ii. Marius, chs. 10, 32; Sulla, chs. 3, 5, 6.
——, king of Mauritania, iv. Antonius, ch. 61.
Bœdromia, i. Theseus, ch. 27.
Bœorix, ii. Marius, ch. 25.
Bœotia and Bœotians, frequent. _See_ particularly ii. Pelopidas, chs. 14-
24; some passages in Themistokles, Perikles, and Alkibiades; ii.
Aristeides, ch. 19, and after; Lysander, ch. 27, and after; Sulla, chs.
15-21; Kimon, chs. 1, 2; iii. Agesilaus, chs. 6, 26, and after; Phokion,
ch. 23, and after; iv. Demetrius, ch. 39; Aratus, chs. 16, 50.
Bœotian months, i. Camillus, ch. 19; ii. Pelopidas, ch. 25; Aristeides,
ch. 19.
Bola and the people of Bola, i. Coriolanus, ch. 28.
Bolla or Bovillæ, i. Coriolanus, ch. 29.
Bona Dea, iii. Cæsar, ch. 9; iv. Cicero, ch. 19.
Bononia, iv. Cicero, ch. 46.
Boutes, i. Romulus, ch. 21; ii. Kimon, ch. 7.
Bosporus, kingdom of, ii. Sulla, ch. 11; Lucullus, ch. 24; Comparison, ch.
3; iii. Pompeius, ch. 32; Kimmerian Bosporus, i. Theseus, ch. 27; iii.
Pompeius, ch. 38, &c.
Bottiæans, i. Theseus, ch. 16.
Boukephalus, iii. Alexander, chs. 6, 32, 44, 61.
Boukephalia, iii. Alexander, ch. 61.
Brachylles, ii. Flamininus, ch. 6.
Brasidas, i. Lykurgus, chs. 24, 30; ii. Lysander, chs. 1, 18; iii. Nikias, ch.
9.
Brauron, i. Solon, ch. 10.
Brennus, i. Camillus, chs. 17, 22, 28, 29.
Briges, iv. Brutus, ch. 45.
Britain and Britons, iii. Comparison of Nikias and Crassus, ch. 4;
Pompeius, ch. 51; Cæsar, chs. 16, 23; Cato Minor, ch. 51; but some
read _Germans_.
Britomartus or Viridomarus, i. Romulus, ch. 16; ii. Marcellus, chs. 6, 7, 8.
Brixellum, iv. Otho, chs. 5, 10, 18.
Brundusium or Brundisium, i. Æmilius, chs. 16, 36; ii. Cato Major, ch. 14;
Sulla, ch. 27; iii. Crassus, ch. 17; Pompeius, chs. 27, 62, 65; Cæsar,
chs. 35, 37, 38, 39; Cato Minor, ch. 15; iv. Cicero, chs. 32, 39;
Antonius, chs. 7, 35, 62; Brutus, ch. 47.
Bruti (Bruti and Cumæi), iii. Cæsar, ch. 61.
Bruttii and Bruttium, i. Fabius, chs. 21, 22; Timoleon, chs. 16, 20; iii.
Crassus, ch. 6; Cato Minor, ch. 52.
Bruttius Sura, ii. Sulla, chs. 11, 12.
Brutus, Lucius Junius, i. Poplicola, chs. 7, 9, 10, 16; iii. Cæsar, ch. 61;
iv. Brutus, chs. 1, 9.
——, Titus and Tiberius, sons of Lucius, i. Poplicola, ch. 6.
——, first tribune of the people, i. Coriolanus, chs. 7, 13.
——, consul B.C. 138, iv. Tib. Gracchus, ch. 21.
Brutus, prætor in the time of Marius, ii. Sulla, ch. 9.
——, father of the following, iii. Pompeius, chs. 7, 16; iv. Brutus, ch. 4.
——, Marcus, iv. Life and Comparison with Dion; iii. Pompeius, chs. 16,
64, 80; Cæsar, chs. 46, 54, 57, 64-69; Cato Minor, chs. 36, 73; iv.
Cicero, chs. 42, 43, 45, 47; Comparison, ch. 4; Antonius, chs. 11,
13-15, 21, 22; comparison, ch. 2; Dion, chs. 1, 2.
——, Decimus Albinus, iii. Cæsar, chs. 64, 66; iv. Antonius, ch. 11;
Brutus, chs. 12, 17 (note), 38.
——, a bailiff, iv. Brutus, ch. 1.
——, name of a book, iv. Brutus, chs. 2, 13.
Bubulci, i. Poplicola, ch. 11.
Bucephalus. _See_ Boukephalus.
Busiris, i. Theseus, ch. 11.
Butas, freedman of Cato, iii. Cato Minor, ch. 70.
——, a poet, i. Romulus, ch. 21.
Buteo, Fabius, i. Fabius, ch. 9.
Butes, more properly spelt Boutes, ii. Kimon, ch. 7.
Buthrotum, iv. Brutus, ch. 26.
Byllis, iv. Brutus, ch. 26.