Automatic Target Recognition Using Recurrent
Neural Networks
Bharat Sehgal
Electronics and Electrical Engineering Dept
Indian Institute of Technology
Guwahati, India
bhara174102015@iitg.ac.in
Hanumant Singh Shekhawat
Electronics and Electrical Engineering Dept
Indian Institute of Technology
Guwahati, India
h.s.shekhawat@iitg.ac.in
Sumit Kumar Jana
Zeus Numerix Private Limited
Pune, India
sumit@zeusnumerix.com
Abstract—Automatic target recognition (ATR) using recurrent neural networks (RNN) is proposed in this work. When electromagnetic waves from a radar illuminate a target, surface currents are induced, which result in scattering of the incident energy. The signal scattered back toward the radar is received as the radar signature of the target. The radar cross section (RCS) is an important feature extracted from the radar signature and is used in this work for target identification. The RCS values for each set of azimuth and elevation angles in a mono-static configuration serve as the dataset for the recurrent neural network (RNN)/long short-term memory (LSTM) model. A classification accuracy of 93 percent was achieved using the RNN/LSTM model.
Index Terms—Automatic Target Recognition, radar cross section, RNN, LSTM
I. INTRODUCTION
Radars are the mainstay surveillance equipment of every military in the world. Since World War II, electronic warfare has taken center stage, and with developments in aircraft, missiles, and UAVs, the capabilities of typical surveillance radars have been challenged. Ever since the advent of stealth technology, the size of targets as measured by radars has been decreasing, and it has become challenging to detect, differentiate, and recognize these targets correctly. These targets are difficult to detect because they exhibit a lower radar cross section (RCS). RCS is a measure of the size of a target as seen by the radar. In the field of radar, apart from detection and tracking, target recognition has gained importance for both military and civilian applications. Target recognition can be broadly classified as cooperative and non-cooperative. Cooperative targets are those which provide their information upon interrogation, while non-cooperative targets do not. In this work, we consider the case of non-cooperative target recognition (NCTR). There may be many reasons for a target to be non-cooperative, such as a hostile aircraft, transponder failure, or even the absence of a transponder. Failure to recognize the target in the first case will lead to own casualties, while in the second and third cases it may cause fratricide. There have been many incidents in the past where friendly aircraft were fired upon because of incorrect recognition. Hence, to avoid such incidents, target recognition assumes great importance in the modern era.
This paper aims to design, implement, and test an automatic target recognition (ATR) system based on recurrent neural networks (RNN), using the radar cross section (RCS) of the target as a function of aspect angles for a mono-static radar.
Automatic target recognition is a classification problem, which is addressed in this work using machine learning, in particular RNNs. RNNs are known to work well with temporal sequences. An RNN can retain the outputs of previous time steps, which are used to make a decision at the current step. The chain structure of RNNs intuitively shows that they can be used with sequences. In an RNN, the input at the current time step comprises the new information and the output of the network at the previous time step. Training an RNN with any gradient-based optimization algorithm may suffer from the vanishing gradient problem at some point in time. Moreover, when information from many time steps in the past is required to make a decision at the current time step, the network may not be able to make a correct prediction. Long short-term memory (LSTM) networks overcome this problem to a great extent. The LSTM mitigates the vanishing gradient issue by adding three gates, the forget, input, and output gates, through which the memory of past states can be efficiently controlled [1]. Classification in this paper aims at identifying simple shapes consisting of four plates with different orientations. The RNN/LSTM is well suited to this task because of the temporal nature of the radar signature.
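For reference, a standard formulation of the LSTM cell with forget, input, and output gates (a widely used variant of the architecture introduced in [1], given here for completeness rather than reproduced from this paper) updates its state at each time step t as

f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)            (forget gate)
i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)            (input gate)
o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)            (output gate)
\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)     (candidate cell state)
c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t      (cell state)
h_t = o_t \odot \tanh(c_t)                           (hidden state)

where x_t is the input, h_t the hidden state, \sigma the logistic sigmoid, and \odot element-wise multiplication. Because the cell state c_t is updated additively, gradients can propagate across many time steps without vanishing as quickly as in a plain RNN.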
In general, target recognition is desired when the target is located at a distance far from the observer. This means a far-field RCS estimate is our primary interest. A target must be illuminated with a high-frequency incident ray, such that scattering happens from all the major components. High-frequency illumination can bring out the maximum number of features, which in turn helps to distinguish between different types of targets. The term high frequency here refers not only to the absolute frequency but also to the electrical size of the target, i.e., the ratio of the target size to the wavelength. There are many methods for predicting the RCS of complex shapes in the high-frequency optical region, such as physical optics (PO), geometrical optics (GO), the physical theory of diffraction (PTD), the geometrical theory of diffraction (GTD), and the method of moments (MoM). However, physical optics (PO) combined with the physical theory of diffraction (PTD) is the most balanced in terms of generality and efficiency. It involves estimating surface currents based on the incident ray frequency, surface normals, and polarization. In the present class of analysis, we are interested in the bulk of the scattering to identify the shape of the target. Hence, the principal component of scattering, i.e., the specular reflection, is essential and sufficient for the estimation. Terms like edge diffraction (estimated using PTD) can be neglected, as we are less interested in the finer accuracy of the RCS for all possible aspect angles. Using PO, the RCS of a target object can be estimated for a range of azimuth and elevation angles in a single run with the desired angular resolution.
resolution. The RCS is estimated using the following equation:
\mathrm{RCS} = 4\pi R^{2} \, \frac{|E_{\mathrm{scat}}|^{2}}{|E_{\mathrm{inc}}|^{2}}     (1)

where R is the distance from the target to the radar receiver, and E_{\mathrm{scat}} (the scattered electric field responsible for detection) and E_{\mathrm{inc}} (the incident electric field) can be calculated using the method given in [2].
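As an illustration, a minimal Python sketch of equation (1) is given below. The complex field samples are assumed to come from a PO solver such as the one in [2] or POFACETS; the helper names and the conversion to dBsm are illustrative additions, not part of the paper.

import numpy as np

def rcs_from_fields(e_scat: complex, e_inc: complex, r: float) -> float:
    """Radar cross section in m^2 from equation (1):
    RCS = 4*pi*R^2 * |E_scat|^2 / |E_inc|^2."""
    return 4.0 * np.pi * r**2 * abs(e_scat) ** 2 / abs(e_inc) ** 2

def to_dbsm(rcs_m2: float) -> float:
    """Express an RCS value in decibels relative to one square metre (dBsm)."""
    return 10.0 * np.log10(rcs_m2)

# Example with hypothetical field samples at R = 10 km
sigma = rcs_from_fields(e_scat=1e-4 + 2e-4j, e_inc=1.0 + 0.0j, r=10e3)
print(f"RCS = {sigma:.6f} m^2 ({to_dbsm(sigma):.1f} dBsm)")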
A lot of research has been done on automatic target recognition. Most of this research is based on high range resolution profiles (HRRP) [3], synthetic aperture radar (SAR) [4], and jet engine modulation (JEM) [5]. Classification techniques that classify targets by using their RCS directly are based mostly on bi-static or multi-static configurations. In [6], Herman proposed an ATR system for passive radar. This system classifies targets by directly using their RCS. Numerical computation of the RCS is done using the method of moments as implemented in the Fast Illinois Solver Code (FISC) software. However, the technique is computationally expensive for tracking and classification purposes, as stated in [6]. In [7], [8], Ehrman et al. developed a computationally economical ATR system for passive radars in which the RCS is used directly for classification of targets. The RCS of various airplanes for simulated trajectories is computed using the FISC software, a coordinated flight model, and the NEC2 software. In [9], ATR using an RNN is proposed for ground targets with different shapes and orientations. The radar cross sections of these targets are measured at different ranges and aspect angles, and the RCS data is then used to extract features for RNN-based classification. In [10], Wengrowski et al. proposed an ATR approach using convolutional neural networks (CNN) for simple geometric shapes based on their RCS data. In [11], Kouba et al. proposed a radar target identification approach using RNNs, wherein the HRRP was used as the feature to identify the radar target. It involved dividing the target into small range bins and then identifying the major scattering centers within each bin by illuminating the target with a high-frequency radar signal.
Our work focuses on target recognition by using the RCS. Identifying a target's shape from its mono-static RCS is a complex problem. Radar properties such as wavelength and sampling rate, as well as target properties such as geometrical shape, surface material, and orientation, may have a pronounced impact on the RCS data. Environmental properties may also affect the radar signature. In this work, the aspect-angle dependence of the RCS has been exploited by creating different trajectory models of a fixed length. The open-source tool POFACETS has been used for creating the RCS database [12]. For a single trajectory, the aspect angles of the target continuously vary with respect to the radar. Hence, we obtain a sequence of RCS values for the set of aspect angles in a trajectory. This sequence is then fed to the RNN/LSTM network for training and testing purposes.
Fig. 1. Target Set
The rest of the paper is organized as follows: Section II describes the generation of the RCS database, Section III presents the proposed RNN model and the experiments, Section IV reports the results, and Section V concludes the work.
II. GENERATING RCS DATA
A classical target identification process is carried out in four
steps:
1) Data acquisition
2) Signal processing
3) Feature extraction
4) Decision making
Fig. 2. Classification Process.
In this paper, we concentrate on decision making by formulating a neural network model that is capable of classifying the target with acceptable accuracy.
Fig. 3. 3D RCS Maps of Target Set
The target set is shown in Fig. 1. Each test target consists of four identical perfect electrical conductor (PEC) metal plates, with a different orientation of one of the plates.
Once the target set has been generated, a 3D RCS map for each target is generated using POFACETS, as shown in Fig. 3. The database consists of RCS variations as a function of aspect angles (elevation and azimuth) at a frequency of 300 MHz. Aspect angles were varied from 10 to 70 degrees with an interval of 0.1 degree in azimuth and from 10 to 70 degrees with a 1-degree interval in elevation. The RCS value was recorded for each set of aspect angles, and these values serve as the database. RCS data for all four shapes were recorded in a similar manner. The RCS databases for all test targets are then combined, with a label added for each target.
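As a concrete sketch (not the tool's actual export format), the labelled point database described above might be assembled as follows in Python, assuming each POFACETS run has been saved as a 2-D array of RCS values over the 601 x 61 azimuth/elevation grid:

import numpy as np

# Aspect-angle grid described in Section II
azimuth = np.arange(10.0, 70.0 + 1e-9, 0.1)    # 10..70 deg, 0.1 deg step -> 601 points
elevation = np.arange(10.0, 70.0 + 1e-9, 1.0)  # 10..70 deg, 1 deg step   -> 61 points

def build_database(rcs_maps):
    """rcs_maps: one 2-D array per target, shaped (len(elevation), len(azimuth)).
    Returns point data: one row of (azimuth, elevation, RCS, label) per grid point."""
    rows = []
    for label, rcs_map in enumerate(rcs_maps):
        assert rcs_map.shape == (elevation.size, azimuth.size)
        for i, el in enumerate(elevation):
            for j, az in enumerate(azimuth):
                rows.append((az, el, rcs_map[i, j], label))
    return np.array(rows)

# Placeholder maps standing in for the four POFACETS outputs
maps = [np.random.rand(elevation.size, azimuth.size) for _ in range(4)]
database = build_database(maps)
print(database.shape)  # (4 * 61 * 601, 4)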
The database created using POFACETS is in the form of point data, i.e., for each pair of azimuth and elevation angles there is one RCS value. When an aerial target is detected, its motion will change the aspect angles according to its trajectory. To simulate the target trajectories, a sliding window of size 8 was used to create a dataset of trajectories and their RCS sequences. The trajectory dataset was then randomly shuffled before being divided into training and testing sets in a 4:1 ratio.
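The paper does not give the exact windowing code; the sketch below shows one plausible reading in Python, in which a window of 8 consecutive RCS samples taken along a simulated path through the aspect-angle grid forms one trajectory, and the labelled trajectories are shuffled and split 4:1 into training and test sets.

import numpy as np
from sklearn.model_selection import train_test_split

WINDOW = 8  # trajectory length (number of RCS samples per sequence)

def make_trajectories(rcs_sequence, label, window=WINDOW):
    """Slide a fixed-length window over one target's RCS sequence."""
    X, y = [], []
    for start in range(len(rcs_sequence) - window + 1):
        X.append(rcs_sequence[start:start + window])
        y.append(label)
    return np.array(X), np.array(y)

# Placeholder per-target RCS sequences (in practice, values taken from the POFACETS maps)
sequences = {label: np.random.rand(601) for label in range(4)}
parts = [make_trajectories(seq, label) for label, seq in sequences.items()]
X = np.concatenate([p[0] for p in parts]).reshape(-1, WINDOW, 1)  # [data points, time steps, features]
y = np.concatenate([p[1] for p in parts])

# Shuffle and split 4:1 (train:test)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=True, random_state=0)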
III. EXPERIMENTS
The trajectory dataset is evaluated using several RNN architectures to find the model that gives the most accurate results. A number of LSTM models were trained and tested. It was observed that, as the depth of the neural network is increased by adding layers, the classification accuracy first improves, and beyond a certain number of layers it starts to decrease because of over-fitting. The maximum accuracy was achieved using a stacked LSTM model comprising two LSTM layers and six fully connected layers.
The first LSTM layer has 200 memory units and returns sequences, so that the next LSTM layer also receives sequences rather than only the final output. This is followed by five fully connected hidden layers and, finally, a fully connected output layer with as many neurons as there are distinct labels. The rectified linear unit (RELU) was used as the activation function in all hidden layers. The model was trained for 1,000 epochs with a batch size of 512 using the Adam optimizer.
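A minimal Keras sketch of this architecture is given below. It is an illustration rather than the authors' exact code: the first LSTM layer has 200 units as stated above, and the five hidden-layer widths follow the best row of Table I (256, 128, 64, 32, 16), while the size of the second LSTM layer, the softmax output, and the cross-entropy loss are assumptions not stated in the paper.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

NUM_CLASSES = 4   # four plate targets
WINDOW = 8        # time steps per trajectory
FEATURES = 1      # one RCS value per time step

model = Sequential([
    # First LSTM layer returns full sequences so the second LSTM layer also sees a sequence
    LSTM(200, return_sequences=True, input_shape=(WINDOW, FEATURES)),
    LSTM(100),                      # size of the second LSTM layer is an assumption
    Dense(256, activation="relu"),  # five RELU hidden layers (widths from Table I, row 9)
    Dense(128, activation="relu"),
    Dense(64, activation="relu"),
    Dense(32, activation="relu"),
    Dense(16, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),  # one neuron per target label (assumed softmax)
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",  # integer labels assumed
              metrics=["accuracy"])

# Training as described in the text:
# model.fit(X_train, y_train, epochs=1000, batch_size=512, validation_data=(X_test, y_test))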
The proposed LSTM model is shown in Fig. 4.
Fig. 4. Proposed LSTM model architecture with two stacked LSTM layers and five hidden layers. An LSTM network requires the input in the form [data points, time steps, features], where data points is the number of trajectories, time steps is the number of observations in each trajectory, and features is the number of variables for the corresponding target label. The dimensions of each layer are described as [None, ]. 'None' here signifies that the layer is capable of accepting input of any size and is not bound by the batch size specified during training or testing of the model.
Other supervised machine learning classification techniques, such as the support vector machine (SVM), k-nearest neighbors (kNN), and deep neural networks (DNN), were also implemented on the same dataset to carry out a comparative study of the results.
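For the comparison, the baseline classifiers could be applied to the flattened trajectories, for example as in the sketch below (scikit-learn is assumed, and the hyperparameters are illustrative except for the DNN layer widths, which follow Table I, row 17; X_train, y_train, X_test, and y_test come from the windowing sketch above):

from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Classical classifiers operate on fixed-length vectors, so each
# 8-step trajectory is flattened from shape (8, 1) to a vector of length 8.
X_train_flat = X_train.reshape(len(X_train), -1)
X_test_flat = X_test.reshape(len(X_test), -1)

baselines = {
    "SVM": SVC(kernel="rbf"),
    "kNN": KNeighborsClassifier(n_neighbors=5),
    "DNN": MLPClassifier(hidden_layer_sizes=(1024, 512, 256, 128, 64, 32, 16),
                         activation="relu", max_iter=500),
}

for name, clf in baselines.items():
    clf.fit(X_train_flat, y_train)
    print(name, "test accuracy:", clf.score(X_test_flat, y_test))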
IV. RESULTS
Since ATR has been addressed as a classification problem, different machine learning classification techniques were employed to carry out a comparative analysis with respect to RNNs. The classification reports of the DNN and the stacked LSTM model are shown in Fig. 5.
TABLE I
ACCURACY COMPARISON CHART FOR DIFFERENT MODELS

S.No. | Model                  | Hidden Layers | Units per Hidden Layer                       | Accuracy
1     | LSTM                   | 1             | 16                                           | 82.74%
2     | LSTM                   | 2             | 32, 16                                       | 87.32%
3     | LSTM                   | 3             | 64, 32, 16                                   | 87.60%
4     | LSTM                   | 4             | 128, 64, 32, 16                              | 90.54%
5     | LSTM                   | 6             | 512, 256, 128, 64, 32, 16                    | 92.44%
6     | LSTM                   | 8             | 512, 256, 128, 64, 32, 32, 16, 16            | 92.99%
7     | LSTM                   | 10            | 1024, 512, 256, 128, 128, 64, 64, 32, 32, 16 | 92.08%
8     | Stacked LSTM, 2 layers | 4             | 128, 64, 32, 16                              | 92.66%
9     | Stacked LSTM, 2 layers | 5             | 256, 128, 64, 32, 16                         | 93.18%
10    | Stacked LSTM, 2 layers | 7             | 256, 256, 128, 64, 32, 16                    | 93.16%
11    | Stacked LSTM, 2 layers | 8             | 1024, 512, 256, 256, 128, 64, 32, 16         | 93.06%
12    | Stacked LSTM, 2 layers | 9             | 512, 256, 256, 128, 128, 64, 64, 32, 16      | 92.55%
13    | Stacked LSTM, 2 layers | 10            | 512, 512, 256, 256, 128, 128, 64, 64, 32, 16 | 92.37%
14    | Stacked LSTM, 3 layers | 3             | 64, 32, 16                                   | 83.70%
15    | Stacked LSTM, 3 layers | 6             | 512, 256, 128, 64, 32, 16                    | 90.52%
16    | LSTM CNN               | 4             | 128, 64, 32, 16                              | 86.88%
17    | DNN                    | 7             | 1024, 512, 256, 128, 64, 32, 16              | 85.12%
18    | Stacked GRU            | 5             | 256, 128, 64, 32, 16                         | 92.67%
TABLE II
ACCURACY COMPARISON CHART FOR DIFFERENT ACTIVATION FUNCTIONS

S.No. | Activation Function | Accuracy (%)
1     | Tanh                | 90.93
2     | ELU                 | 91.77
3     | RELU                | 93.18
4     | Softplus            | 92.16
Fig. 5. Classification Report of DNN and stacked LSTM models
The confusion matrices for the various classification techniques used are shown in Fig. 6.
Fig. 6. Confusion Matrix for Stacked LSTM, LSTM, SVM, and DNN
Classifier
The trajectory dataset was given as input to the LSTM model for training. The accuracy of the model is measured by comparing the predicted outputs with the actual labels of the training set. The aim is to maximize accuracy by adjusting the RNN's design parameters for this dataset. The model was trained for 1,000 epochs, where an epoch is one complete pass of the dataset through the neural network. The plots of accuracy versus the number of epochs for the DNN and the stacked LSTM model are shown in Fig. 7 and Fig. 8, respectively.
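Once trained, the model's predictions can be summarised in the same form as Figs. 5 and 6; a minimal sketch using scikit-learn's metrics (an assumption about tooling, with model, X_test, and y_test from the sketches above) is:

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

y_prob = model.predict(X_test)          # class probabilities from the trained Keras model
y_pred = np.argmax(y_prob, axis=1)      # predicted label per trajectory

print(classification_report(y_test, y_pred))  # per-class precision/recall/F1, as in Fig. 5
print(confusion_matrix(y_test, y_pred))       # confusion matrix, as in Fig. 6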
Fig. 7. Accuracy Vs Epochs - DNN.
A comparison of the results obtained by implementing different neural network models is shown in Table I. It can be seen from the comparison chart that the stacked LSTM model comprising two LSTM layers and five hidden layers produces the best results. Models with the same layer architecture but different activation functions were also trained and tested. It was observed that the rectified linear unit (RELU) activation function gives the best results in terms of accuracy. A comparison of the results for various activation functions is shown in Table II.
Fig. 8. Accuracy Vs Epochs - Stacked LSTM
V. CONCLUSION
The target classification technique proposed in this work has achieved a classification accuracy of 93% for the four test models on independent test data. By comparison, the accuracy achieved by a conventional DNN is 85%, and that of the SVM classifier is 87%. It is evident from the results that the classification technique proposed here performs well in terms of accuracy. Designing the RNN model required many experiments to find the best possible architecture for this dataset. Also, training the stacked LSTM model requires substantial computational resources, which can be met using high-performance computing (HPC) platforms. In applications such as military and civil aviation, even the accuracy reported above is far below par, because one needs to be certain before taking a decision that involves human life. Hence, more RNN architectures are being tried to improve upon the classification accuracy. Further research is now being carried out to classify aircraft models such as the MiG-21, MiG-29, Sukhoi, and F-16 under noisy conditions.
ACKNOWLEDGMENT
We would like to place on record our sincere thanks and gratitude to Prof. Gopal R. Shevare, IIT Bombay, Director, Zeus Numerix Private Limited, for all the guidance and support extended throughout the process.
REFERENCES
[1] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural
computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[2] S. Sefi, “Computational electromagnetics: Software development and
high frequency modelling of surface currents on perfect conductors,”
Ph.D. dissertation, KTH School of Computer Science and Communica-
tion, December 2005.
[3] A. K. Shaw, “Automatic target recognition using high-range resolution
data,” WRIGHT STATE UNIV DAYTON OH, Tech. Rep., 1998.
[4] S. Chen, H. Wang, F. Xu, and Y. Jin, “Target classification using the
deep convolutional networks for sar images,” IEEE Transactions on
Geoscience and Remote Sensing, vol. 54, no. 8, pp. 4806–4817, Aug
2016.
[5] M. R. Bell and R. A. Grubbs, “Jem modeling and measurement for radar
target identification,” IEEE Transactions on Aerospace and Electronic
Systems, vol. 29, no. 1, pp. 73–87, 1993.
[6] S. Herman and P. Moulin, “A particle filtering approach to fm-band
passive radar tracking and automatic target recognition,” in 2002 IEEE
Aerospace Conference Proceedings., vol. 4, 2002, pp. 4–1789.
[7] L. M. Ehrman and A. D. Lanterman, “Automated target recognition using passive radar and coordinated flight models,” Proc. SPIE, vol. 5094, 2003. [Online]. Available: https://doi.org/10.1117/12.488039
[8] A. Register, W. Blair, L. Ehrman, and P. K. Willett, “Using measured rcs
in a serial, decentralized fusion approach to radar-target classification,”
in Aerospace Conference, 2008 IEEE. IEEE, 2008, pp. 1–8.
[9] K. Bhattacharyya and K. Sarma, “Automatic target recognition (atr) sys-
tem using recurrent neural network (rnn) for pulse radar,” International
Journal of Computer Applications, vol. 50, 07 2012.
[10] E. Wengrowski, M. Purri, K. Dana, and A. Huston, “Deep convolutional neural networks as a method to classify rotating objects based on monostatic radar cross section.”
[11] E. T. Kouba, S. K. Rogers, D. W. Ruck, and K. W. Bauer, “Recurrent neural networks for radar target identification,” in Applications of Artificial Neural Networks IV, vol. 1965. International Society for Optics and Photonics, 1993, pp. 256–267.
[12] D. C. Jenn, “RCS calculations using the physical optics codes (POFACETS manual),” Naval Postgraduate School.