Hardware Architecture for Direction-of-Arrival Estimation Algorithms (IN792)
DOA Estimation Primer (Definition)
▷ Direction of Arrival (DOA) Estimation refers to the process of
determining the direction from which a received signal was
transmitted. This is typically achieved using an array of sensors or
antennas that capture the signal's wavefronts.
DOA Estimation Primer (Applications)
▷ Radar: Air traffic control and defense systems.
▷ Sonar: Underwater exploration, navigation and detection.
▷ Wireless Communications: Increase spatial diversity.
▷ Audio Processing: Hearing aids and home assistants.
DOA Estimation (Single Source Model)
▷ A far-field source emits a narrowband
signal s(t) with center wavelength λ.
▷ The signals received at the M-sensor
uniform linear array (ULA) can be written
in the form of a vector as:
x(t) = a(θ) s(t) + n(t),
where x(t) is the received signal vector, a(θ) the array steering
vector, and n(t) the noise vector.
DOA Estimation (Multiple Source Model)
▷ The single source model extends to a multiple source model in
which P (< M) far-field sources emit narrowband signals onto the
array. The corresponding received signal matrix is given as:
X = A(θ) S + N,
where X is the received signal matrix, A(θ) the array manifold
matrix, S the source signal matrix, and N the noise matrix.
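As a concrete reference for the model above, here is a minimal NumPy sketch of the received-signal matrix X = A S + N for a half-wavelength-spaced ULA. The function names, angles, and the 20 dB SNR are illustrative assumptions, not values from the slides:

```python
import numpy as np

def ula_steering(theta_deg, M, d_over_lambda=0.5):
    # a(theta) for an M-sensor ULA with spacing d = lambda/2 by default
    m = np.arange(M)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(theta_deg)))

def received_matrix(thetas_deg, M, N, snr_db=20.0, seed=0):
    # X = A S + N: P narrowband far-field sources over N snapshots
    rng = np.random.default_rng(seed)
    A = np.stack([ula_steering(t, M) for t in thetas_deg], axis=1)  # M x P
    S = (rng.standard_normal((len(thetas_deg), N))
         + 1j * rng.standard_normal((len(thetas_deg), N))) / np.sqrt(2)
    noise = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))
    noise *= 10.0 ** (-snr_db / 20.0) / np.sqrt(2)
    return A @ S + noise

X = received_matrix([-20.0, 35.0], M=8, N=256)
print(X.shape)
```

Each column of X is one snapshot across the M sensors; later blocks (covariance, EVD, spectrum search) consume this matrix.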
DOA Estimation Algorithms
MUSIC Algorithm (Basic Principles)
▷ Separation of subspaces-
○ The MUSIC algorithm separates the received signal's
subspace via eigenvalue decomposition (EVD) into two
components: the signal subspace and the noise subspace.
○ Signal Subspace: Contains the directions of arrival (DOAs)
of the incoming signals.
○ Noise Subspace: Orthogonal to the signal subspace and
contains no information about the signal.
▷ Orthogonality Property:
○ If 𝑎(𝜃) is the steering vector for a true source angle 𝜃, it is
orthogonal to the noise-subspace vectors, so the MUSIC
pseudo-spectrum peaks at the DOAs.
MUSIC Algorithm (Math. Framework)
▷ The covariance matrix of the received signal can be expressed
as:
R = E[x(t) xᴴ(t)] = A Rₛ Aᴴ + σ²I,
where Rₛ is the source covariance and σ² the noise power.
▷ The eigenvalue decomposition of the covariance matrix yields M
eigenvalues and eigenvectors, P of which correspond to the
signal subspace while the rest (M-P) correspond to the noise
subspace.
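The covariance estimate and the subspace split can be sketched as follows. The sizes M = 6, P = 2 and the random data are illustrative; note that `numpy.linalg.eigh` returns eigenvalues in ascending order, so the signal subspace is the last P columns:

```python
import numpy as np

rng = np.random.default_rng(1)
M, P, N = 6, 2, 500
# Synthetic snapshots: P rank-one components plus weak noise.
A = rng.standard_normal((M, P)) + 1j * rng.standard_normal((M, P))
S = rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N              # sample covariance, Hermitian M x M
evals, evecs = np.linalg.eigh(R)    # eigenvalues in ascending order
Us = evecs[:, -P:]                  # signal subspace: P largest eigenvalues
Un = evecs[:, :M - P]               # noise subspace: the remaining M - P
print(Us.shape, Un.shape)
```

Because the eigenvectors of a Hermitian matrix are orthonormal, Us and Un are exactly orthogonal, which is what the MUSIC spectrum exploits.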
▷ The array manifold matrix A spans the same subspace as the
signal eigenvector matrix Uₛ. Hence we have:
span(A) = span(Uₛ), so aᴴ(θ)Uₙ = 0 for every true DOA θ.
▷ Based on the above observation, the MUSIC spectrum can be
expressed as:
P_MUSIC(θ) = 1 / (aᴴ(θ) Uₙ Uₙᴴ a(θ))
▷ The DOAs are estimated directly from the P peaks of the MUSIC
spectrum (where the denominator approaches zero).
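Putting the pieces together, a minimal end-to-end MUSIC sketch (the angles, array size, and SNR are illustrative choices):

```python
import numpy as np

def steering(theta_deg, M, d=0.5):
    m = np.arange(M)
    return np.exp(-2j * np.pi * d * m * np.sin(np.deg2rad(theta_deg)))

# Two sources at -10 and 25 degrees on an 8-element half-wavelength ULA.
rng = np.random.default_rng(0)
M, P, N = 8, 2, 400
A = np.stack([steering(t, M) for t in (-10.0, 25.0)], axis=1)
S = rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

_, evecs = np.linalg.eigh(X @ X.conj().T / N)
Un = evecs[:, :M - P]                                  # noise subspace

grid = np.arange(-90.0, 90.0, 0.1)
spec = np.array([1.0 / np.abs(steering(t, M).conj() @ Un @ Un.conj().T
                              @ steering(t, M)) for t in grid])  # P_MUSIC

est, s = [], spec.copy()
for _ in range(P):                      # pick the two highest separated peaks
    k = int(np.argmax(s))
    est.append(grid[k])
    s[max(0, k - 20):k + 21] = 0.0      # blank +/- 2 degrees around the peak
print(sorted(np.round(est, 1).tolist()))
```

The two estimates land on the grid points nearest the true angles; all of the hardware blocks discussed next accelerate exactly this pipeline.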
MUSIC Algorithm (Hardware Approach)
▷ Existing hardware research aims to reduce computational
complexity and latency.
▷ Avenues for improvement:
○ Converting the complex covariance matrix into a real matrix
for easier calculation of eigenvalues and eigenvectors.
○ Improving CORDIC rotations (angle-related computation) or
parallelizing eigenvalue computation with Jacobi algorithms
for better speed.
○ Eliminating the EVD and obtaining the noise subspace directly
from a sub-matrix of the covariance matrix.
○ Splitting the final spectral search into coarse and fine stages.
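The coarse-then-fine idea in the last bullet can be illustrated with a toy pseudo-spectrum; the single-peak function below is a stand-in for the MUSIC spectrum, and the step sizes are assumed values:

```python
import numpy as np

def pseudo_spectrum(theta):
    # Stand-in spectrum with one sharp peak near 12.34 degrees.
    return 1.0 / ((theta - 12.34) ** 2 + 0.05)

# Coarse stage: 1-degree steps over the whole field of view (180 evaluations).
coarse = np.arange(-90.0, 90.0, 1.0)
theta_c = coarse[np.argmax(pseudo_spectrum(coarse))]

# Fine stage: 0.01-degree steps only near the coarse peak (200 evaluations),
# instead of 18000 evaluations for a full 0.01-degree sweep.
fine = np.arange(theta_c - 1.0, theta_c + 1.0, 0.01)
theta_hat = fine[np.argmax(pseudo_spectrum(fine))]
print(round(float(theta_hat), 2))
```

The two-stage search reaches fine-grid accuracy at a small fraction of the spectrum evaluations, which is the point of the hardware optimization.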
Core Literature (Paper 1)
▷ Implemented on a 6-element Uniform Circular Array (UCA).
▷ Runs at a clock speed of 100 MHz.
▷ Estimates DOA in 200 clock cycles (2 µs).
▷ FPGA used: ZedBoard (Zynq)
Block diagram of the proposed optimized MUSIC Algorithm
Key Components (Paper 1)
▷ Virtual Array Reduction Block:
○ Objective-
■ Reduces the number of antenna signals processed
■ Lowers the computational load
■ Improves FPGA resource utilization.
○ Mechanism-
■ Compares signals in pairs to identify the strongest
three signals.
■ Outputs the selected signals with their corresponding
indices.
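In software the selection step amounts to a top-3-by-magnitude pick with index bookkeeping. The hardware uses a tree of pairwise comparators; `argsort` below is just a convenient stand-in, and the sample values are made up:

```python
import numpy as np

def select_strongest(x, k=3):
    # Return the k strongest antenna samples (by magnitude) and their indices.
    idx = np.argsort(np.abs(x))[::-1][:k]
    return x[idx], idx

x = np.array([0.2 + 0.1j, 1.5 - 0.3j, 0.05j, -0.9 + 0.9j, 0.4 + 0j, 2.0 + 0j])
vals, idx = select_strongest(x)
print(sorted(idx.tolist()))
```

Keeping the indices matters: the downstream blocks need to know which physical antennas the reduced 3-element "virtual array" corresponds to.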
Virtual Array Reduction Block
▷ Covariance Matrix Calculator:
○ Objective-
■ Computes the reduced covariance matrix (3×3) in place
of the original (6×6)
○ Mechanism-
■ Compute variance using:
■ Compute covariance using:
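The slide's variance and covariance formulas did not survive extraction; they presumably correspond to the standard snapshot-averaged estimates, sketched here as an assumption:

```python
import numpy as np

def reduced_covariance(Xs):
    # 3x3 sample covariance of the three selected streams:
    #   diagonal:     variances   (1/N) * sum_n |x_i[n]|^2
    #   off-diagonal: covariances (1/N) * sum_n x_i[n] * conj(x_j[n])
    N = Xs.shape[1]
    return Xs @ Xs.conj().T / N

rng = np.random.default_rng(0)
Xs = rng.standard_normal((3, 100)) + 1j * rng.standard_normal((3, 100))
R = reduced_covariance(Xs)
print(R.shape, bool(np.allclose(R, R.conj().T)))
```

The result is a 3×3 Hermitian matrix, far cheaper to decompose than the original 6×6.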
Covariance Matrix Calculator
▷ Normalization Block:
○ Objective-
■ Normalizes the covariance matrix to manage resource
usage without compromising accuracy.
○ Mechanism-
■ Scales down the values to fit within the available
precision, reducing the required number of bits.
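A floating-point sketch of the idea (the 12-bit fractional width is an assumed figure, not from the paper): divide by the largest magnitude so every entry fits a fixed-point format. Scaling a matrix leaves its eigenvectors, and hence the DOA peaks, unchanged.

```python
import numpy as np

def normalize_for_fixed_point(R, frac_bits=12):
    # Scale so max |entry| == 1, then snap to a 2^-frac_bits grid,
    # modeling the reduced bit-width of the hardware datapath.
    scale = np.max(np.abs(R))
    q = 2.0 ** frac_bits
    Rq = np.round(R / scale * q) / q
    return Rq, scale

R = np.array([[40.0, -3.2], [-3.2, 17.5]])
Rq, scale = normalize_for_fixed_point(R)
print(scale, float(np.max(np.abs(Rq))))
```

Only the eigenvalues are scaled (by 1/scale), so subspace separation is unaffected.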
Covariance Matrix Normalization Block
▷ Eigenvalue Decomposition Block:
○ Objective-
■ Decomposes the covariance matrix into its eigenvalues
and eigenvectors.
○ Optimizations-
■ Uses an algebraic method for simplification.
■ Largest eigenvalue corresponds to the signal subspace;
smaller eigenvalues correspond to the noise subspace.
■ Eigenvector computation based on the reduced
row-echelon form.
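The paper's exact closed-form expressions are not reproduced on the slide; the sketch below illustrates the algebraic approach: eigenvalues as roots of the 3×3 characteristic polynomial, eigenvectors from the nullspace of (R − λI). The paper row-reduces for the nullspace; SVD is used here as a numerically safe stand-in.

```python
import numpy as np

def eig3_algebraic(R):
    # Characteristic polynomial of a 3x3 matrix:
    #   lambda^3 - tr(R) lambda^2 + c1 lambda - det(R) = 0
    c2 = -np.trace(R)
    c1 = 0.5 * (np.trace(R) ** 2 - np.trace(R @ R))
    c0 = -np.linalg.det(R)
    evals = np.sort(np.roots([1.0, c2, c1, c0]).real)[::-1]  # descending
    vecs = []
    for lam in evals:
        # Nullspace of (R - lam I): the row-reduction step in the paper.
        _, _, Vh = np.linalg.svd(R - lam * np.eye(3))
        vecs.append(Vh[-1])
    return evals, np.stack(vecs, axis=1)

evals, V = eig3_algebraic(np.diag([5.0, 2.0, 1.0]))
print(np.round(evals, 6).tolist())
```

With eigenvalues sorted in descending order, the first column of V spans the (single-source) signal subspace and the rest span the noise subspace, matching the slide's optimization.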
Eigenvalue and Eigenvector Computation Block
Optimized EVD Block
▷ Spectral Peak Search Block:
○ Objective-
■ Finds the direction of arrival (DOA) by searching for
peaks in the signal subspace spectrum.
○ Mechanism-
■ Searches within the 3 dB intercept points of the
maximum receiving antenna.
■ Calculates the pseudo-spectrum to identify peaks
corresponding to DOAs.
Spectral Search and Final Angle Calculator Block
Design Performance (Paper 1)
Core Literature (Paper 2)
▷ Implemented on 6- and 8-element Uniform Linear Arrays (ULA)
and a Nested Array (NA).
▷ FPGAs used: Virtex-6, Zynq-7000
Overall Structure of the proposed MUSIC Algorithm Hardware Acceleration
Key Components (Paper 2)
▷ Covariance Matrix Computation Block:
○ Objective-
■ Compute the covariance matrix for use in the MUSIC
algorithm
○ Mechanism-
■ Separate hardware structures for two designs: Design
1 (general purpose) and Design 2 (optimized for fixed
input sizes).
■ Design 1: Utilizes nested modules to calculate matrix
multiplication, addressing various input configurations,
and stores the real and imaginary parts of the input
data separately.
Overall structure for computing the Covariance Matrix (CM)
Sparse-cum-linear-array-compatible design for the CM
Parallelism-enabled design for the Uniform Linear Array CM
▷ Covariance Matrix Computation Block:
○ Mechanism-
■ Design 2: Employs parallel computation for fixed-size
inputs, calculating only the upper triangular matrix due
to its Hermitian property, enhancing efficiency.
■ Step-b: Vectorizes the matrix, calculates the sparse
array arrangement, and forms a new vector by
removing duplicated virtual arrays.
■ Step-c: Performs spatial smoothing by dividing the
virtual array into sub-arrays, obtaining a covariance
matrix that meets the full rank condition, crucial for
distinguishing signal and noise subspaces.
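Step-c can be sketched on the vectorized virtual-array data: overlapping length-K windows of the coarray vector are averaged as outer products, restoring the rank MUSIC needs. The lag values and K below are illustrative, not the paper's configuration:

```python
import numpy as np

def spatial_smoothing(z, K):
    # z: single "snapshot" over the virtual (difference co-)array.
    # Average the outer products of its L = len(z)-K+1 overlapping windows.
    L = z.size - K + 1
    R = np.zeros((K, K), dtype=complex)
    for i in range(L):
        w = z[i:i + K][:, None]
        R += w @ w.conj().T
    return R / L

# Two coherent exponentials: a single outer product would be rank 1,
# but smoothing restores rank 2, the full-rank condition for P = 2.
n = np.arange(9)
z = np.exp(1j * 0.5 * n) + np.exp(1j * 1.7 * n)
R = spatial_smoothing(z, K=5)
print(np.linalg.matrix_rank(R))
```

Without this step the coarray covariance stays rank-deficient and the signal/noise subspaces cannot be separated.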
CM vectorizer (step-b) block for sparse array
Spatial smoothing (step-c) block for sparse array
▷ Covariance Matrix Computation Block:
○ Optimization-
■ Design 1: Flexible configuration to support various
input data sizes and structures, using nested module
architecture for stepwise multiplication.
■ Design 2: High parallelism, with simultaneous output of
valid element values using multiple memory and
complex multiplication modules.
■ Step-b: Efficiently removes duplicate virtual arrays and
forms a new vector for further processing.
■ Step-c: Applies spatial smoothing to ensure the
covariance matrix of the virtual array maintains full
rank, enabling accurate signal subspace identification.
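Design 2's Hermitian shortcut, sketched in software: compute only the upper triangle of R = X Xᴴ/N and mirror it, roughly halving the complex multiplications relative to the full product.

```python
import numpy as np

def covariance_upper(X):
    # Compute only the upper triangle of R = X X^H / N, then mirror it,
    # exploiting the Hermitian property R[j,i] = conj(R[i,j]).
    M, N = X.shape
    R = np.zeros((M, M), dtype=complex)
    for i in range(M):
        for j in range(i, M):
            R[i, j] = X[i] @ X[j].conj() / N     # upper triangle only
            if i != j:
                R[j, i] = R[i, j].conjugate()    # Hermitian mirror
    return R

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 64)) + 1j * rng.standard_normal((4, 64))
print(bool(np.allclose(covariance_upper(X), X @ X.conj().T / 64)))
```

In hardware the same idea maps to computing M(M+1)/2 inner products in parallel instead of M².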
▷ Complex Jacobi Decomposition Block:
○ Objective-
■ Perform eigenvalue decomposition of the Hermitian
matrix to determine eigenvalues and eigenvectors.
○ Mechanism-
■ Utilizes the Jacobi algorithm for iterative construction
of plane rotation matrices to diagonalize the Hermitian
matrix.
■ Hardware implementation updates the eigenvalue and
eigenvector matrices in parallel, improving
computation speed.
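A minimal software model of the complex Jacobi iteration, with classical pivoting on the largest off-diagonal element: the rotation's phase comes from arg(A[p,q]) and its angle from tan 2θ = 2|A[p,q]| / (A[p,p] − A[q,q]). This is a sketch of the algorithm, not the paper's fixed-point datapath:

```python
import numpy as np

def jacobi_hermitian(A, tol=1e-12, max_iter=200):
    # Classical complex Jacobi EVD: repeatedly zero the largest
    # off-diagonal element of Hermitian A with a unitary plane rotation.
    A = A.astype(complex).copy()
    n = A.shape[0]
    V = np.eye(n, dtype=complex)
    for _ in range(max_iter):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)  # p < q
        if off[p, q] < tol:
            break
        phi = np.angle(A[p, q])                 # phase of the pivot element
        theta = 0.5 * np.arctan2(2.0 * np.abs(A[p, q]),
                                 (A[p, p] - A[q, q]).real)
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n, dtype=complex)
        J[p, p], J[q, q] = c, c
        J[p, q], J[q, p] = -s * np.exp(1j * phi), s * np.exp(-1j * phi)
        A = J.conj().T @ A @ J                  # zeroes A[p, q]
        V = V @ J                               # accumulate eigenvectors
    return np.diag(A).real, V

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (B + B.conj().T) / 2
evals, V = jacobi_hermitian(H)
print(bool(np.allclose(np.sort(evals), np.linalg.eigvalsh(H), atol=1e-8)))
```

The hardware parallelism in the paper corresponds to updating the rows/columns of A and V touched by J concurrently, rather than with full matrix products as here.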
Complex Jacobi Algorithm
Hardware architecture for the complex Jacobi algorithm
▷ Complex Jacobi Decomposition Block:
○ Optimization-
■ Focuses on the upper triangular matrix to find the
largest off-diagonal element, reducing computational
complexity.
■ Simplified trigonometric calculations derive sine and
cosine from tan(2θ), minimizing the need for complex
hardware functions.
■ Parallel updating of matrices to expedite iterations and
improve overall processing time.
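The sine/cosine simplification can be written with square roots only, valid for |2θ| ≤ π/2 (which the Jacobi angle choice guarantees). This sketches the idea, not the paper's exact circuit:

```python
import numpy as np

def cs_from_tan2theta(t2):
    # Recover (cos theta, sin theta) from tan(2*theta) via
    # cos(2t) = 1/sqrt(1 + tan^2(2t)) and the half-angle identities,
    # avoiding full trigonometric evaluation in hardware.
    cos2t = 1.0 / np.sqrt(1.0 + t2 * t2)
    c = np.sqrt((1.0 + cos2t) / 2.0)
    s = np.sign(t2) * np.sqrt((1.0 - cos2t) / 2.0)
    return c, s

theta = 0.3
c, s = cs_from_tan2theta(np.tan(2 * theta))
print(bool(np.isclose(c, np.cos(theta))), bool(np.isclose(s, np.sin(theta))))
```

Square roots map well onto existing FPGA primitives (or a short CORDIC in vectoring mode), which is what makes this cheaper than computing the trig functions directly.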
Unitary matrix computation block in the Jacobi architecture
cs module in the unitary matrix computation structure
▷ Peak Search Block:
○ Objective-
■ Identify the direction of arrival (DOA) by searching for
peaks in the pseudo-spectral function.
○ Mechanism-
■ Distinguishes signal and noise subspaces by comparing
eigenvalues of the covariance matrix.
■ Calculates the pseudo-spectral function using
pre-stored direction vector values to expedite
processing.
■ Employs a three-stage search process to accurately
pinpoint source angles by identifying troughs instead of
peaks, simplifying calculations.
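The trough-search idea, sketched: store a(θ) for each grid angle (the pre-stored "ROM"), then minimize d(θ) = ‖Uₙᴴ a(θ)‖² directly instead of maximizing 1/d(θ), so no divider is needed. A noise-free single-source model keeps the sketch exact; the array size and grid are assumed values:

```python
import numpy as np

def steering(theta_deg, M, d=0.5):
    m = np.arange(M)
    return np.exp(-2j * np.pi * d * m * np.sin(np.deg2rad(theta_deg)))

M, true_doa = 8, 20.0
# Noise-free single-source covariance, so the noise subspace is exact.
a0 = steering(true_doa, M)
R = np.outer(a0, a0.conj())
_, evecs = np.linalg.eigh(R)
Un = evecs[:, :M - 1]                            # noise subspace, M-1 columns

grid = np.arange(-90.0, 90.0, 0.5)
rom = np.stack([steering(t, M) for t in grid])   # pre-stored steering "ROM"
d = np.array([np.linalg.norm(Un.conj().T @ a) ** 2 for a in rom])
theta_hat = grid[np.argmin(d)]                   # trough, not peak: no division
print(theta_hat)
```

Since argmin over d(θ) and argmax over 1/d(θ) select the same angle, dropping the reciprocal changes nothing but the hardware cost.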
Hardware architecture for distinguishing the noise subspace
Hardware architecture for computing the pseudo-spectrum
▷ Peak Search Block:
○ Optimization-
■ Stores pre-calculated direction vector values in ROM
to reduce computational overhead.
■ Changes peak search to trough search, eliminating
high-delay division operations.
■ Parallel computation in matrix multiplications during
peak search for faster and more efficient processing.
Design Performance (Paper 2)
