Biomedical Signal Modeling 
Autoregressive Modeling Moving Average Modeling 
2013 
Valeriu Mihai 
MIEET
Index
• Introduction
• Biosignals
• Signal encoding
• Noise
• Parametric system modeling
• Linear Prediction Model
• Autoregressive modeling
    • The least-squares method
    • The autocorrelation method
    • The covariance method
    • Application to random signals
    • Computation of the gain factor
    • Computation of the model parameters
    • Spectral matching and parameterization
    • Optimal model order
• Moving-average modeling
    • Illustration of application
• Practical work
    • Identification of voiced/unvoiced parts of signal
    • The AR modeling of the segments using command lpc
• Bibliography
Introduction 
Modeling can contribute to understanding the physical or physiological aspects of the related systems. We use a mathematical model to represent the process or the system that generates the signal of interest.
Interaction or communication with a biological system is done through biosignals. 
FIGURE 1.1 A classical systems view of a physiological system that receives an external input, or stimulus, which then evokes an output, or response. 
Classic examples include the knee-jerk reflex, where the input is mechanical force and the output is mechanical motion. Systems that produce an output without the need for an input stimulus, like the electrical activity of the heart, can be considered biosignal sources. 
Biosignals 
Much of the activity in biomedical engineering, be it clinical or in research, involves the measurement, processing, analysis, display, and/or generation of signals. Signals are variations in energy that carry information. Table 1.1 summarizes the different energy types that can be used to carry information and the associated variables that encode this information. Table 1.1 also shows the physiological measurements that involve these energy forms. Biological signals are usually encoded as variations in electrical, chemical, or mechanical energy, and occasionally in thermal energy.
For communication within the body, signals are primarily encoded as variations in electrical or chemical energy. When chemical energy is used, the encoding is usually done by varying the concentration of the chemical within a physiological compartment: for example, the concentration of a hormone in the blood.
The conversion of physiological energy to an electric signal is an important step, and often the first step, in gathering information for clinical or research use. The energy
conversion task is done by a device termed a transducer. A transducer is a device that converts energy from one form to another. 
The energy that is converted by the input transducer may be generated by the physiological process itself, or it may be energy produced by an external source.
FIGURE 1.2 The typical biomedical measurement system 
Many physiological processes produce energy that can be detected directly. For example, cardiac internal pressures are usually measured using a pressure transducer. 
Signal encoding 
Most encoding strategies can be divided into two broad categories: continuous and discrete. 
The typical analog signal is one whose amplitude varies in time as follows: 
x(t) = f (t) 
A continuous analog signal is converted to the digital domain by slicing the signal in both amplitude and time: slicing in time is known as sampling, and slicing in amplitude as quantization. The result is a sequence of numbers:
x[n] = x[1], x[2], x[3], . . . , x[N]
Noise 
Noise is what you do not want and signal is what you do want. In biomedical measurements, noise has four possible origins: 1) physiological variability; 2) environmental noise or interference; 3) measurement or transducer artifact; 4) electronic noise. 
Amplitude slicing (quantization) adds noise to the signal, and the noise level depends on the size of the slice; typical slices are so small that this noise can usually be ignored.
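As a rough illustration (a standard result, stated here without derivation): uniform quantization with step size q adds noise of average power q²/12, and for a B-bit converter driven by a full-scale sinusoid this gives

SNR ≈ 6.02 B + 1.76 dB

so each additional bit buys about 6 dB of signal-to-noise ratio; at the 12 or more bits typical of modern data acquisition, quantization noise is indeed negligible next to the other noise sources listed above.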
Parametric system modeling 
Parametric modeling techniques find the parameters for a mathematical model describing a signal, system, or process. These techniques use known information about the system to determine the model. Applications for parametric modeling include speech, music synthesis, simulation, etc. 
The difference equation that gives the output of a general linear, time-invariant, discrete-time system is 
Equation 1.1 (ARMA system):  y(n) = -Σ_{k=1}^{P} a_k y(n-k) + G Σ_{l=0}^{Q} b_l x(n-l),  with b_0 = 1
x(n) – input; 
y(n) – output; 
The parameters b_l, where l = 0, 1, 2, ..., Q, indicate how the present and Q past samples of the input are combined, in a linear manner, to generate the present output sample; the parameters a_k, where k = 1, 2, ..., P, indicate how the past P samples of the output are linearly combined (in a feedback loop) to produce the current output; G is a gain factor; and P and Q determine the order of the system. The summation over x represents the moving-average or MA part of the system; the summation over y represents the autoregressive or AR part of the system; the entire system may be viewed as a combined autoregressive moving-average or ARMA system.
Linear Prediction Model 
The use of the past input and output samples in computing the present output sample represents the memory of the system. The model also indicates that the present output sample may be predicted as a linear combination of the present and a few past input samples, and a few past output samples. For this reason, the model is also known as the linear prediction or LP model. 
Autoregressive modeling 
Applications: 
• Speech recognition and coding;
• System identification;
• Modeling and recognition of sonar, radar, and geophysical signals;
• Spectral analysis.
Problem: How can we obtain an AR model when the input to the system that caused the given signal as its output is unknown? 
Solution: In the AR model, the output is modeled as a linear combination of P past values of the output and the present input sample as

Equation 2.1 (AR model):  y(n) = -Σ_{k=1}^{P} a_k y(n-k) + G x(n)

P – order of the AR model;
a_k – parameters of the AR model;
G – gain.
Between pitch pulses, Gx(n) is zero and y(n) can be predicted as a weighted summation of its past samples:
y(n) = -(a_1 y(n-1) + a_2 y(n-2) + ... + a_P y(n-P))
It should be noted that the model as in Equation 2.1 does not account for the presence of noise. 
The AR transfer function corresponding to Equation 2.1 is

Equation 2.2:  H(z) = Y(z)/X(z) = G / (1 + Σ_{k=1}^{P} a_k z^{-k})
In the case of biomedical signals such as the EEG (electroencephalogram) or the PCG (phonocardiogram), the input to the system is totally unknown. Then, we can only approximately predict the current sample of the output signal using its past values as 
Equation 2.3:  ỹ(n) = -Σ_{k=1}^{P} a_k y(n-k),  where the tilde (~) indicates that the predicted value is only approximate.
The error in the predicted value (also known as the residual) is

Equation 2.4:  e(n) = y(n) - ỹ(n) = y(n) + Σ_{k=1}^{P} a_k y(n-k)
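In MATLAB, the residual of Equation 2.4 is obtained by passing y(n) through the FIR filter whose coefficients are [1, a_1, ..., a_P]; a minimal sketch, assuming the parameter vector ak has already been estimated:

% ak: 1-by-P vector of AR parameters a_1 ... a_P (assumed available)
% e(n) = y(n) + a_1 y(n-1) + ... + a_P y(n-P), as in Equation 2.4
e = filter([1 ak], 1, y);   % prediction residual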
The general signal-flow diagram of the AR model viewed as a prediction or error filter is illustrated in Figure 2.1 
Figure 2.1 Signal-flow diagram of the AR model 
We estimate the best model by trying to minimize the mean squared error between the signal sample predicted by the model and the actual measured signal sample. 
• The least-squares method
The least-squares method is based upon the Yule-Walker equations.
In the least-squares method, the parameters ak are obtained by minimizing the MSE (mean-squared error) with respect to all of the parameters.
The total squared error (TSE) ε is given by (the following procedure is also applicable for minimization of the MSE)

Equation 2.5 (TSE):  ε = Σ_n e²(n) = Σ_n [ y(n) + Σ_{k=1}^{P} a_k y(n-k) ]²
Although the range of the summation in Equation 2.5 is important, we may minimize ε without specifying the range for the time being. Minimization of ε is performed by applying the conditions 
Equation 2.6:  ∂ε/∂a_i = 0,  i = 1, 2, ..., P
to Equation 2.5, which yields

2 Σ_n [ y(n) + Σ_{k=1}^{P} a_k y(n-k) ] y(n-i) = 0,

that is,

Σ_n y(n) y(n-i) + Σ_{k=1}^{P} a_k Σ_n y(n-k) y(n-i) = 0,

and finally

Equation 2.7:  Σ_{k=1}^{P} a_k Σ_n y(n-k) y(n-i) = -Σ_n y(n) y(n-i),  i = 1, 2, ..., P

where y(n-i) is the signal sample i time units prior to the current sample at n.
For a given signal y(n), Equation 2.7 provides a set of P equations in the P unknowns ak, k = 1,2,. . . ,P , known as the normal equations. 
By expanding Equation 2.5 and using the relationship in Equation 2.7, the minimum TSE ε_P for the model of order P is obtained as

Equation 2.8 (prediction error):  ε_P = Σ_n y²(n) + Σ_{k=1}^{P} a_k Σ_n y(n) y(n-k)
• The autocorrelation method
If the range of summation in Equations 2.5 and 2.7 is specified to be -∞ < n < ∞, the error is minimized over an infinite duration, and we have

Equation 2.9:  Σ_{n=-∞}^{∞} y(n-k) y(n-i) = φ_y(i-k)

where φ_y(i) is the ACF (autocorrelation function) of y(n).
In practice, the signal y(n) will be available only over a finite interval, say 0 ≤ n ≤ N-1; the given signal may then be assumed to be zero outside this range and treated as a windowed version of the true signal. Then, the ACF may be expressed as

Equation 2.10:  φ_y(i) = Σ_{n=i}^{N-1} y(n) y(n-i),  i ≥ 0,  with φ_y(-i) = φ_y(i)
Comparing Equation 2.9 with Equation 2.7, we find that the least-squares method is related to the autocorrelation method; the relation is given by Equation 2.11.
The normal equations then become

Equation 2.11:  Σ_{k=1}^{P} a_k φ_y(i-k) = -φ_y(i),  1 ≤ i ≤ P
We now see that an AR model may be derived for a signal with the knowledge of only its ACF; the signal samples themselves are not required. 
The minimum TSE is given by

Equation 2.12 (prediction error):  ε_P = φ_y(0) + Σ_{k=1}^{P} a_k φ_y(k)
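The autocorrelation method translates directly into a few lines of MATLAB; a minimal sketch, assuming a signal vector y and a chosen model order P (xcorr and toeplitz are standard functions):

phi = xcorr(y, P, 'biased');        % ACF estimates for lags -P..P
phi = phi(P+1:end);                 % keep lags 0..P (phi(1) = phi_y(0))
R = toeplitz(phi(1:P));             % P-by-P ACF matrix (cf. Equation 2.25)
ak = -(R \ phi(2:P+1));             % solve the normal equations, Equation 2.11
TSE = phi(1) + phi(2:P+1).' * ak;   % minimum TSE, Equation 2.12

The built-in function aryule performs the same estimation via the Levinson-Durbin recursion described later.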
• The covariance method
In deriving the autocorrelation method, the range of summation of the prediction error in Equations 2.5 and 2.7 was specified to be -∞ < n < ∞. If, instead, we specify the range of summation to be a finite interval, say 0 ≤ n ≤ N-1, we get

Equation 2.13:  Σ_{k=1}^{P} a_k C(i,k) = -C(i,0),  1 ≤ i ≤ P
instead of Equation 2.11 based upon the ACF (autocorrelation function), and the minimum TSE (total squared error) is given by

Equation 2.14:  ε_P = C(0,0) + Σ_{k=1}^{P} a_k C(0,k)
instead of Equation 2.12, where

Equation 2.15:  C(i,k) = Σ_{n=0}^{N-1} y(n-i) y(n-k)
is the covariance of the signal y(n) in the specified interval. The matrix formed by the covariance function is symmetric, as C(i,k) = C(k,i), similar to the ACF matrix in Equation 2.25; however, the elements along each diagonal will not be equal, as C(i+1, k+1) = C(i,k) + y(-i-1) y(-k-1) - y(N-1-i) y(N-1-k). Computation of the covariance coefficients also requires y(n) to be known for -P ≤ n ≤ N-1. The distinctions disappear as the specified interval of summation (error minimization) tends to infinity.
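Both methods are also available ready-made in MATLAB's Signal Processing Toolbox; a brief comparative sketch, assuming a signal vector y and order P:

[a_ac, e_ac] = aryule(y, P);   % autocorrelation (Yule-Walker) method
[a_cv, e_cv] = arcov(y, P);    % covariance method
% both return [1 a_1 ... a_P]; the two estimates approach each other as N grows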
• Application to random signals
If the signal y(n) is a sample of a random process, the error e(n) is also a sample of a random process. We then have to use the expectation operator to obtain the MSE as follows:

Equation 2.16:  ε̄ = E[e²(n)] = E{ [ y(n) + Σ_{k=1}^{P} a_k y(n-k) ]² }
Applying the condition for minimum error as in Equation 2.6, we get the normal equations as

Equation 2.17:  Σ_{k=1}^{P} a_k φ_y(i-k) = -φ_y(i),  1 ≤ i ≤ P,  with φ_y(i-k) = E[ y(n-k) y(n-i) ]
The minimum MSE is

Equation 2.18 (prediction error):  ε̄_P = φ_y(0) + Σ_{k=1}^{P} a_k φ_y(k)
• Computation of the gain factor
Since we assumed earlier that the input to the system being modeled is unknown, the gain parameter G is not important. Regardless, the derivation of G demonstrates a few important points. Equation 2.4 may be rewritten as 
Equation 2.19:  y(n) = -Σ_{k=1}^{P} a_k y(n-k) + e(n),  where G x(n) = e(n)
This condition indicates that the input signal is proportional to the error of prediction when the estimated model parameters are equal to the real system parameters. In the case when x(n) = δ(n), we have the impulse response h(n) at the output, and 
Equation 2.20:  h(n) = -Σ_{k=1}^{P} a_k h(n-k) + G δ(n)
Multiplying both sides of the expression above with h(n - i) and summing over all n, we get expressions in terms of the ACF(autocorrelation function) φh(i) of h(n) as 
Equation 2.21:  φ_h(i) = -Σ_{k=1}^{P} a_k φ_h(i-k),  i ≥ 1
and 
Equation 2.22:  φ_h(0) = -Σ_{k=1}^{P} a_k φ_h(k) + G²
For the energy of the output of the model to be equal to that of y(n), the condition φ_h(0) = φ_y(0) must be satisfied.
Comparing Equations 2.21 and 2.11, we then have 
Equation 2.23:  φ_h(i) = φ_y(i),  0 ≤ i ≤ P
Therefore, for a model of order P, the first (P+ 1) ACF terms of the impulse response h(n) must be equal to the corresponding ACF terms of the signal y(n) being modeled. 
It follows from Equations 2.19, 2.20, and 2.12 that 
Equation 2.24:  G² = ε_P = φ_y(0) + Σ_{k=1}^{P} a_k φ_y(k)
• Computation of the model parameters
For low orders of the model, Equation 2.11 may be solved directly. However, direct methods may not be feasible when P is large. 
The normal equations in Equation 2.11 may be written in matrix form as 
Equation 2.25:

[ φ_y(0)     φ_y(1)     ...  φ_y(P-1) ] [ a_1 ]       [ φ_y(1) ]
[ φ_y(1)     φ_y(0)     ...  φ_y(P-2) ] [ a_2 ]  =  - [ φ_y(2) ]
[   ...        ...      ...     ...   ] [ ... ]       [  ...   ]
[ φ_y(P-1)   φ_y(P-2)   ...  φ_y(0)   ] [ a_P ]       [ φ_y(P) ]
For real signals, the P × P ACF matrix is symmetric, and the elements along any diagonal are identical; that is, it is a Toeplitz matrix.
A procedure known as Durbin’s method or the Levinson-Durbin algorithm provides a recursive method to solve the normal equations in Equation 2.25. The procedure starts with a model order of 1; computes the model parameters, the error, and a secondary set of parameters known as the reflection coefficients; updates the model order and the parameters; and repeats the procedure until the model of the desired order is obtained. The Levinson-Durbin algorithm is summarized below. 
Initialize model order i = 0 and error ε0 = φy(0). Perform the following steps recursively for i = 1,2, . . . , P. 
1. Increment the model order i and compute the ith reflection coefficient γ_i as

Equation 2.26:  γ_i = -[ φ_y(i) + Σ_{j=1}^{i-1} a_{i-1,j} φ_y(i-j) ] / ε_{i-1}

where a_{i-1,j} denotes the jth model coefficient at iteration (i-1); the iteration index is also the recursively updated model order.

2. Let a_{i,i} = γ_i.

3. Update the predictor coefficients as

Equation 2.27:  a_{i,j} = a_{i-1,j} + γ_i a_{i-1,i-j},  1 ≤ j ≤ i-1

4. Compute the error value as

Equation 2.28:  ε_i = (1 - γ_i²) ε_{i-1}
The final model parameters are given as a_k = a_{P,k}, 1 ≤ k ≤ P. The Levinson-Durbin algorithm computes the model parameters for all orders up to the desired order P. As the order of the model is increased, the TSE reduces, and hence we have 0 ≤ ε_i ≤ ε_{i-1}. The reflection coefficients may also be used to test the stability of the model (filter) being designed: |γ_i| < 1, i = 1, 2, ..., P, is the required condition for stability of the model of order P.
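The four steps above translate almost line for line into MATLAB; a minimal sketch, with phi holding [φ_y(0), ..., φ_y(P)] in MATLAB's 1-based indexing:

function ak = levdur(phi, P)
% Levinson-Durbin recursion for the normal equations of Equation 2.25
a = zeros(P, P);                  % a(i,j) stores the coefficient a_{i,j}
eps_prev = phi(1);                % eps_0 = phi_y(0)
for i = 1:P
    acc = phi(i+1);               % phi_y(i)
    for j = 1:i-1
        acc = acc + a(i-1,j)*phi(i-j+1);       % + a_{i-1,j} phi_y(i-j)
    end
    gamma = -acc/eps_prev;        % reflection coefficient, Equation 2.26
    a(i,i) = gamma;
    for j = 1:i-1
        a(i,j) = a(i-1,j) + gamma*a(i-1,i-j);  % Equation 2.27
    end
    eps_prev = (1 - gamma^2)*eps_prev;         % Equation 2.28
end
ak = a(P, 1:P);                   % final parameters a_k = a_{P,k}
end

The built-in call [a, e, gamma] = levinson(phi, P) performs the same recursion and also returns the reflection coefficients.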
Example of the Levinson-Durbin algorithm for a model of order P = 2: given the ACF values φ_y(0), φ_y(1), and φ_y(2), calculate the parameters a_1 and a_2.

1) ε_0 = φ_y(0)
2) γ_1 = -φ_y(1) / φ_y(0)
3) a_{1,1} = γ_1
4) ε_1 = (1 - γ_1²) ε_0
5) γ_2 = -[ φ_y(2) + a_{1,1} φ_y(1) ] / ε_1
6) a_{2,2} = γ_2;  a_{2,1} = a_{1,1} + γ_2 a_{1,1}
7) ε_2 = (1 - γ_2²) ε_1

Solution: [a_1; a_2] = [a_{2,1}; a_{2,2}]
• Spectral matching and parameterization
The AR model was derived in the preceding section based upon time-domain formulations in the autocorrelation and covariance methods. We shall now see that equivalent formulations can be derived in the frequency domain, which can lead to a different interpretation of the model. Applying the z-transform to Equation 2.4, we get 
Equation 2.29:  E(z) = A(z) Y(z)

and

Equation 2.30:  H(z) = G / A(z)

where

Equation 2.31:  A(z) = 1 + Σ_{k=1}^{P} a_k z^{-k}
and E(z) is the z-transform of e(n). We can now view the error e(n) as the result of passing the signal being modeled, y(n), through the filter A(z), which may be considered to be an inverse filter. In the case of y(n) being a deterministic signal, applying Parseval's theorem (which tells us that the sum or integral of the square of a function is equal to the sum or integral of the square of its transform), the TSE (total squared error) to be minimized may be written as
Equation 2.32:  ε = Σ_n e²(n) = (1/2π) ∫_{-π}^{π} |E(ω)|² dω

where E(ω) is obtained by evaluating E(z) on the unit circle in the z-plane. Using S_y(ω) to represent the PSD (power spectrum) of y(n), we have

Equation 2.33:  ε = (1/2π) ∫_{-π}^{π} S_y(ω) |A(ω)|² dω
where A(ω) is the frequency response of the inverse filter, and is given by evaluating A(z) on the unit circle in the z-plane. 
From Equations 2.1, 2.2 and 2.30, we get 
Equation 2.34:  S̃_y(ω) = |H(ω)|² = G² / |A(ω)|²

Here, S̃_y(ω) represents the PSD (power spectrum) of the modeled signal ỹ(n) that is an approximation of y(n) as in Equation 2.3. From Equation 2.29 we have
Equation 2.35:  |E(ω)|² = |A(ω)|² S_y(ω)

Now, S̃_y(ω) is the model’s approximation of S_y(ω). Comparing Equations 2.34 and 2.35, we see that the error PSD |E(ω)|² is modeled by a uniform (or “flat” or “white”) PSD equal to G². For this reason, the filter A(z) is also known as a “whitening” filter.
From Equations 2.32, 2.34, and 2.35, we get the TSE as 
Equation 2.36:  ε = (G²/2π) ∫_{-π}^{π} [ S_y(ω) / S̃_y(ω) ] dω
As the model is derived by minimizing the TSE ε, we see now that the model is effectively minimizing the integrated ratio of the signal PSD S_y(ω) to its approximation S̃_y(ω). The equivalence of the model is described in the following terms:
• As the model order P → ∞, the TSE is minimized, that is, ε_P → 0.
• For a model of order P, the first (P + 1) ACF values of its impulse response are equal to those of the signal being modeled. Increasing P increases the range of the delay parameter (time) over which the model ACF is equal to the signal ACF.
• Given that the PSD and the ACF are Fourier-transform pairs, the preceding point leads to the frequency-domain statement that increasing P leads to a better fit of S̃_y(ω) to S_y(ω). As P → ∞, the model ACF and PSD become identical to the signal ACF and PSD, respectively. Thus any spectrum may be approximated by an all-pole model of an appropriate order.
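The spectral-matching interpretation is easy to visualize in MATLAB; a sketch, assuming a signal vector y, a model order P, and Signal Processing Toolbox functions (the two curves may differ by a constant scale factor):

[a, G2] = aryule(y, P);              % a = [1 a_1 ... a_P], G2 ~ G^2
[H, w] = freqz(sqrt(G2), a, 512);    % model spectrum G/A(omega)
[Sy, wp] = periodogram(y, [], 512);  % nonparametric estimate of S_y(omega)
plot(wp, 10*log10(Sy), w, 10*log10(abs(H).^2));
legend('Periodogram', 'AR model spectrum');
% increasing P makes the all-pole spectrum follow S_y(omega) more closely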
• Optimal model order
Problem: Given that the AR model performs better and better as the order P is increased, where do we stop? If the given signal is the output of a P-pole system, then an AR model of order P would be the optimal model with the minimum error. But how would one know in practice whether the given signal was indeed produced by a P-pole system?
Solution: One possibility to determine the optimal order for modeling a given signal is to follow the trend in the TSE as the model order P is increased. This is feasible in a recursive procedure such as the Levinson-Durbin algorithm, where models of all lower orders are computed in deriving a model of order P, and the error at each order is readily available. The procedure could be stopped when there is no significant reduction in the error as the model order is incremented. 
One may also use a normalized error measure ε̄_P, defined as

Equation 2.37:  ε̄_P = ε_P / φ_y(0)

As the model order P → ∞,

Equation 2.38:  ε̄_∞ = ε̄_min

ε̄_P is a monotonically decreasing function of P, with ε̄_0 = 1 and ε̄_∞ = ε̄_min.

The incremental reduction in the normalized error may be checked with a condition such as

Equation 2.39:  1 - ( ε̄_{P+1} / ε̄_P ) < Δ

where Δ is a small threshold. The optimal order may be considered to have been reached if the condition is satisfied for several consecutive model orders.
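This stopping rule can be sketched in MATLAB using the error sequence that the Levinson-Durbin recursion already yields (names illustrative; Delta is the threshold of Equation 2.39):

Pmax = 30; Delta = 0.01;
phi = xcorr(y, Pmax, 'biased'); phi = phi(Pmax+1:end);
[a, efinal, gamma] = levinson(phi, Pmax);  % gamma: reflection coefficients
eps_norm = cumprod(1 - gamma(:).^2);       % normalized errors, via Equation 2.28
reduction = 1 - eps_norm(2:end)./eps_norm(1:end-1);
Popt = find(reduction < Delta, 1);         % first order with negligible gain
% in practice, require the condition to hold for several consecutive orders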
Model parameters: The AR (all-pole) model H(z) and its inverse A(z) are uniquely characterized by any one of the following sets of parameters:
• The model parameters a_k, k = 1, 2, ..., P. The series {1, a_1, ..., a_P} is also the impulse response of the inverse filter.
• The impulse response h(n) of the AR model.
• The poles of H(z), which are also the roots (zeros) of A(z).
• The reflection coefficients γ_i, i = 1, 2, ..., P.
• The ACF (or PSD) of the a_k coefficients.
• The ACF (or PSD) of h(n).
• The cepstrum of a_k or h(n).
Moving-average modeling 
When an ensemble of several realizations of an event is not available, synchronized averaging will not be possible. We are then forced to consider temporal averaging for noise removal, with the assumption that the processes involved are ergodic, that is, temporal statistics may be used instead of ensemble statistics. As temporal statistics are computed using a few samples of the signal along the time axis and the temporal window of samples is moved to obtain the output at various points of time, such a filtering procedure is called a moving-window averaging filter in general; the term moving-average (MA) filter is commonly used. The filter itself is simple to implement; MA modeling, however, is used less often than AR modeling, because the b_k parameters cannot be estimated from a given signal by solving linear equations as in the AR case.
The general form of an MA model is 
Equation 3.1:  y(n) = Σ_{k=0}^{N} b_k x(n-k)
The parameters b_k, where k = 0, 1, 2, ..., N, indicate how the present and N past samples of the input are combined, in a linear manner, to generate the present output sample, and N determines the order of the system.
Figure 3.1 The signal-flow diagram of a generic MA filter of order N. 
Applying the z-transform, we get the transfer function H(z) of the filter as 
Equation 3.2:  H(z) = Y(z)/X(z) = Σ_{k=0}^{N} b_k z^{-k}

where X(z) and Y(z) are the z-transforms of x(n) and y(n), respectively.
An MA filter is a finite impulse response (FIR) filter with the following attributes and advantages: 
• The impulse response h(k) has a finite number of terms: h(k) = b_k, k = 0, 1, 2, ..., N.
• An FIR filter may be realized non-recursively, with no feedback.
• The output depends only on the present input sample and a few past input samples.
• The filter is merely a set of tap weights of the delay stages, as illustrated in Figure 3.1.
• The filter transfer function has no poles except at z = 0: the filter is inherently stable.
• The filter has linear phase if the series of tap weights is symmetric or antisymmetric.
Increased smoothing may be achieved by averaging signal samples over longer time windows, at the expense of increased filter delay. If the signal samples over a window of eight samples are averaged, we get the output as 
Equation 3.3:  y(n) = (1/8) Σ_{k=0}^{7} x(n-k)
The impulse response of the filter is 
Equation 3.4:  h(n) = 1/8 for n = 0, 1, ..., 7, and h(n) = 0 otherwise
The transfer function of the filter is 
Equation 3.5:  H(z) = (1/8) Σ_{k=0}^{7} z^{-k} = (1/8) (1 - z^{-8}) / (1 - z^{-1})
and the frequency response is given by 
Equation 3.6:  H(ω) = (1/8) [ sin(4ω) / sin(ω/2) ] e^{-j7ω/2}
The frequency response of the 8-point MA filter is shown in Figure 3.2; the pole-zero plot of the filter is depicted in Figure 3.3. It is seen that the filter has zeros at the roots of z⁸ = 1 other than z = 1, that is, at ω = ±π/4, ±π/2, ±3π/4, and π.
Figure 3.2 Magnitude and phase responses of the 8-point moving-average filter.
Figure 3.3 Pole-zero plot of the 8-point moving-average filter. 
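The 8-point MA filter takes only a few lines in MATLAB; a sketch of its response and of applying it to a noisy signal (here ecg stands for any sampled signal vector):

b = ones(1,8)/8;              % tap weights, Equations 3.3-3.4
freqz(b, 1, 512);             % magnitude and phase response, as in Figure 3.2
zplane(b, 1);                 % pole-zero plot, as in Figure 3.3
y_f = filter(b, 1, ecg);      % smoothed output (cf. Figures 3.4 and 3.5)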
• Illustration of application
Figure 3.4 shows a segment of an ECG signal with high-frequency noise. Figure 3.5 shows the result of filtering the signal with the 8-point MA filter described above. Although the noise level has been reduced, some noise is still present in the result. This is due to the fact that the attenuation of the simple 8-point MA filter is not more than -20 dB at most frequencies (except near the zeros of the filter). 
Figure 3.4 ECG signal with high-frequency noise.
Figure 3.5 The ECG signal with high-frequency noise in Figure 3.4 after filtering by the 8-point MA filter shown in Figure 3.2 
Practical work 
The file safety.wav contains the speech signal for the word “safety” uttered by a male speaker, sampled at 8 kHz. The signal has a significant amount of background noise (as it was recorded in a normal computer laboratory). 
• Problem 1: Identify the voiced and unvoiced parts of the signal.
In speech analysis, a voiced/unvoiced decision is usually made when extracting information from speech signals. In this work, two methods are used to identify the voiced and unvoiced parts of a speech signal: the zero-crossing rate (ZCR) and the root mean square (RMS) value.
The zero-crossing rate is a measure of the number of times, in a given time interval or frame, that the amplitude of the speech signal passes through zero. If the zero-crossing rate is high, the speech segment is unvoiced; if the zero-crossing rate is low, the segment is voiced. The RMS value of the speech signal provides a representation that reflects the amplitude variations; generally, the amplitude of unvoiced speech segments is much lower than the amplitude of voiced segments.
RMS = √( (1/N) Σ_{n=1}^{N} x²(n) )

ZCR = (1/2N) Σ_{n=2}^{N} | sign(x(n)) - sign(x(n-1)) |

where

sign(a) = 1 if a ≥ 0, and sign(a) = -1 if a < 0
Procedure to identify voiced/unvoiced parts of signal in MATLAB. 
Windowing means multiplying the signal by a window function in the time domain, which is equivalent to convolution in the frequency domain. The spectrum of a Hamming window has much lower sidelobes than that of a rectangular window, so convolving with it introduces less distortion: the Hamming window does not destroy or change the essential properties of the signal.
First step: remove the silent parts of the signal.
MATLAB code: 
[soundx,fs] = wavread('safety.wav'); % read the speech samples and sampling rate
t = [1:length(soundx)]/fs;           % time axis of the original signal
x1 = soundx(1:1568);                 % frame 1 of silence
x2 = soundx(2672:3145);              % frame 2 of silence
x3 = soundx(6301:8745);              % frame 3 of silence
x4 = soundx(9720:11430);             % frame 4 of silence
y1 = soundx(1569:2671);              % frame 1 without silence
y2 = soundx(3146:6300);              % frame 2 without silence
y3 = soundx(8744:9719);              % frame 3 without silence
final = [y1; y2; y3];                % array with all speech frames
tf = [1:length(final)]/fs;           % time axis of the silence-removed signal
subplot(2,1,1)
plot(t,soundx)
title('Original speech');
xlabel('Time in sec'); ylabel('Amplitude');
subplot(2,1,2)
plot(tf,final)
title('Speech without silenced part');
xlabel('Time in sec'); ylabel('Amplitude');
• Results
Figure 4.1 Original signal and signal without silenced part 
Second step: Identification of voiced and unvoiced parts of signal. 
MATLAB code:

soundx = wavread('safety.wav');
y1 = soundx(1569:2671);
y2 = soundx(3146:6300);
y3 = soundx(8744:9719);
final = [y1; y2; y3];               % speech frames without silence, as above
Fs = 8000;
len = 401;                          % analysis window length (samples)
overlap = 400;                      % window advances one sample at a time
ham = hamming(len);
t = [1:length(final)]/Fs;
% frame the signal and compute the windowed short-time energy (RMS measure)
framed_rms = buffer(final, len, overlap, 'nodelay');
windowed_rms = diag(sparse(ham)) * framed_rms;      % apply the Hamming window
rms = sum(windowed_rms.^2, 1);
% sign changes between consecutive samples mark zero crossings
sigdif = sign(final(2:end)) - sign(final(1:end-1));
sigdif = [0; sigdif];
framed_zcr = buffer(sigdif, len, overlap, 'nodelay');
windowed_zcr = diag(sparse(len)) * abs(framed_zcr); % scaled; normalized below
zcr = sum(windowed_zcr, 1) / (2*len);
subplot(1,1,1);
plot(t, final)
title('Speech "Safety"')
hold on;
delay = (len - 1)/2;                % align the frame measures with the signal
plot(t(delay+1:end-delay), rms, 'r')
plot(t(delay+1:end-delay), zcr/max(zcr), 'g');
xlabel('Time in sec');
legend('Original','RMS (voiced)','ZCR (unvoiced)');
hold off;
• Results
Figure 4.2 Identification of voiced and unvoiced parts of signal 
• Problem 2: Apply the AR modeling procedure to each segment using the command lpc (linear predictive coding) in MATLAB.
lpc determines the coefficients of a forward linear predictor by minimizing the prediction error in the least-squares sense. It has applications in filter design and speech coding.
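Its basic use can be sketched in a few lines before the full script below (x is a windowed signal segment and p the model order; the predictor form follows the lpc documentation):

[a, g] = lpc(x, p);                    % a = [1 a_1 ... a_p], g = error variance
x_est = filter([0 -a(2:end)], 1, x);   % predicted signal, as in Equation 2.3
e = x - x_est;                         % prediction residual, Equation 2.4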
Procedure to estimate signal in MATLAB. 
MATLAB code:

y = wavread('safety.wav');
fs = 8000;
frame_duration = 0.03;
frame_len = frame_duration*fs;        % 240 samples per analysis frame
frame_step = 80;                      % 10 ms hop between frames
p = 10;                               % order of the AR (LPC) model
V_UV = 0.3;                           % voiced/unvoiced decision threshold
Z = zeros(p,1);                       % filter state carried across frames
offset = 0;
% zero-pad so that an integer number of frames fits the signal
nwindow = (length(y)-frame_len)/frame_step;
padsize = round((ceil(nwindow)-nwindow)*frame_step);
y = padarray(y, padsize, 0, 'post');
for i = 1:(length(y)-frame_len)/frame_step
    frame = y((i-1)*frame_step + [1:frame_len]);
    [A,sigma2] = lpc(frame.*hamming(length(frame)), p); % AR model of the frame
    sigma = sqrt(sigma2);
    f0 = pitch(frame, 160, 20, V_UV); % pitch period in samples (0 if unvoiced)
    exc = zeros(frame_step,1);
    if f0 == 0                        % unvoiced: white-noise excitation
        exc = randn(frame_step,1);
        gain = sigma;
        offset = 0;
    else                              % voiced: impulse-train excitation
        if offset >= frame_step
            offset = offset - frame_step;
        else
            exc(offset+1:f0:end) = 1;
            if mod(frame_step-offset, f0) == 0
                offset = 0;
            else
                offset = f0 - mod(frame_step-offset, f0);
            end
        end
        gain = sigma/sqrt(1/f0);
    end
    [recon,Z] = filter(1, A, exc*gain, Z); % synthesis through the all-pole model
    if i == 1
        o = recon; f00 = f0;
    else
        o = [o; recon]; f00 = [f00, f0];
    end
end
hold on
plot(o,'r')
plot(y)
xlabel('Segments'); ylabel('Amplitude');
legend('Estimated Signal','Original Signal')
title('Speech "Safety"')

% Autocorrelation-based pitch detector (local functions go at the end of the
% script; in older MATLAB versions, save this in a separate file pitch.m)
function P = pitch(y, Pmax, Pmin, V_UV)
C = xcorr(y, Pmax, 'coeff');
C = C(Pmax+1:2*Pmax+1);              % keep the non-negative lags
[Cmax,i] = max(C(Pmin:end));
P = Pmin + i - 2;                    % lag of the correlation peak
if Cmax < V_UV                       % weak peak: declare unvoiced
    P = 0;
end
end
• Results
Figure 4.3 Estimated signal and original signal
Bibliography
• “A Speech/Music Discriminator Based on RMS and Zero-Crossings” – Costas Panagiotakis and George Tziritas
• “Biomedical Signal Analysis” – Jit Muthuswamy
• “Biomedical Signal Analysis” – Rangaraj M. Rangayyan
• “Circuits, Signals and Systems for Bioengineers” – John Semmlow
• http://www.ltrr.arizona.edu/~dmeko/notes_5.pdf
• http://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model
• http://paulbourke.net/miscellaneous/ar/
• http://dea.brunel.ac.uk/cmsp/Home_Saeed_Vaseghi/Chapter13-Speech%20Processing.pdf
• http://www-ee.uta.edu/eeweb/ip/Courses/SSP/Notes/HandOut1.pdf
