Archives of Computational Methods in Engineering (2021) 28:4425–4447
https://doi.org/10.1007/s11831-021-09540-7
ORIGINAL PAPER
Image Fusion Techniques: A Survey
Harpreet Kaur1 · Deepika Koundal2 · Virender Kadyan3
Received: 17 May 2020 / Accepted: 10 January 2021 / Published online: 24 January 2021
© CIMNE, Barcelona, Spain 2021
* Deepika Koundal, koundal@gmail.com; Harpreet Kaur, harpreet.kaur.045@gmail.com; Virender Kadyan, vkadyan@ddn.upes.ac.in
1 Department of Computer Science, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India
2 Department of Virtualization, School of Computer Science, University of Petroleum and Energy Studies, Bidholi, Dehradun, India
3 Department of Informatics, School of Computer Science, University of Petroleum and Energy Studies, Bidholi, Dehradun, India
Abstract
The necessity of image fusion is growing recently in image processing applications due to the tremendous number of acquisition systems. Fusion of images is defined as an alignment of noteworthy information from diverse sensors using various mathematical models to generate a single compound image. The fusion of images is used for integrating complementary multi-temporal, multi-view and multi-sensor information into a single image with improved image quality, while keeping the integrity of important features. It is considered a vital pre-processing phase for several applications such as robot vision, aerial and satellite imaging, medical imaging, and robot or vehicle guidance. In this paper, various state-of-the-art image fusion methods of diverse levels with their pros and cons, various spatial and transform based methods with quality metrics, and their applications in different domains are discussed. Finally, this review concludes with various future directions for different applications of image fusion.
1 Introduction
Image fusion (IF) is an emerging field for generating an informative image by integrating images obtained from different sensors to aid decision making [1]. The analytical and visual image quality can be improved by integrating different images. Effective image fusion preserves vital information by extracting all important information from the source images without producing inconsistencies in the output image. After fusion, the fused image is more suitable for machine and human perception. The first step of fusion is image registration (IR), in which a source image is mapped with respect to a reference image. This mapping is performed to match equivalent content on the basis of salient features for further analysis. IF and IR are perceived as vital assistants in producing valuable information in several domains [2]. According to the literature, the number of scientific papers has increased dramatically since 2011, reaching a peak of 21,672 in 2019, as illustrated in Fig. 1. This fast-rising trend can be attributed to the increased demand for high-performance, low-cost image fusion techniques. Recently, various techniques like multi-scale decomposition and sparse representation have been introduced that offer several ways to improve image fusion performance. An efficient fusion method is needed because of the variations between corresponding images in various applications. For instance, numerous satellites nowadays acquire aerial images with diverse spectral, spatial and temporal resolutions in the remote sensing domain. IF is basically a collection of image information obtained under several imaging parameters, such as aperture settings, dynamic range, spectral response, camera position, or the use of polarization filters. The information of interest is extracted from different images with the help of appropriate image fusion methods, which can further be used for traffic control, reconnaissance, driver assistance or quality assessment.
Various techniques of image fusion can be classified as pixel level, decision level and feature level. Pixel level techniques directly integrate the information from the input images for further computer processing tasks
[3]. Feature level techniques entail the extraction of relevant features, such as pixel intensities, textures or edges, which are combined to create supplementary merged features [4, 5]. In decision level fusion, the input images are processed one at a time for the extraction of information [4]. There are a variety of IF classifications based on the datasets, such as multi-focus, multi-spectral, multi-scale, multi-temporal, and multi-sensor. In multi-focus image fusion, information from several images of a similar scene is fused to get one composite image [6]. In addition, multi-source and multi-sensor IF methods offer superior features for representing information that is not visible to the human visual system and are utilized in medical diagnosis applications. The information generated from merged images can be employed to localize abnormalities accurately [7]. Temporal modeling gives details of all clinical variables and reduces the risk of information loss [8]. This fast-rising trend is a major driver of low-cost, high-performance image fusion techniques.
Currently, several techniques like sparse representation (SR) and multi-scale decomposition have been proposed that help in enhancing image fusion performance [3]. SR is a theory of image representation, which is employed in image processing tasks such as interpolation, denoising, and recognition [1]. Multi-spectral (MS) imagery is used in remote sensing, where features are merged to obtain an understandable image using the corresponding information and spatiotemporal correlation [9]. IF has grown into an influential solution by merging images captured through diverse sensors. Images of diverse types, such as infrared, visible, MRI and CT, are suitable inputs for multimodal fusion [1]. Currently, deep learning is a very active topic in image fusion. It has achieved great success in solving different types of problems in image processing and computer vision, and it is widely used for image fusion [10]. Due to recent technological advancements, image fusion techniques have been utilized in many applications including video surveillance, security, remote sensing, machine vision, and medical imaging.
Still, there are a number of challenges associated with image fusion that have to be explored. An appropriate, accurate and reliable fusion technique, easily interpretable so as to obtain better results, is required for the various types of images in different domains. Besides, image fusion techniques must be robust against uncontrollable acquisition conditions and offer inexpensive computation time in real-time systems, as mis-registration is a major error found while fusing images. This paper presents an outline of various IF techniques with their applications in diverse domains. Also, various challenges, shortcomings, and benefits of image fusion techniques are discussed.
The rest of the paper is organized as follows. Section 2 discusses the image fusion process. Section 3 presents the various image fusion techniques. Section 4 gives a taxonomical view of the image fusion methods. Section 5 explains various image fusion applications. In Sect. 6, the evaluation metrics of fusion are discussed. Section 7 delivers perceptive deliberations and prospects for future work. Finally, the paper concludes with an outline of the major ideas.
Fig. 1  The number of articles related to image fusion, according to the literature
2 Image Fusion Process
As discussed earlier, the goal of IF is to produce a merged image by integrating information from more than one image. Figure 2 demonstrates the major steps involved in the IF process. In general, registration is treated as an optimization problem that maximizes the similarity between images while reducing the cost. The image registration procedure aligns corresponding features of the various images with respect to a reference image: one source image is designated as the reference, and the remaining source images are aligned to it. In feature extraction, the significant features of the registered images are extracted to produce several feature maps.
By employing a decision operator, whose main objective is to label the registered images with respect to pixels or feature maps, a set of decision maps is produced. The decision or feature maps obtained might not refer to the same object; semantic equivalence is employed to connect these maps to a common object so that fusion can be performed. This step is redundant for sources obtained from the same kind of sensor. Then, radiometric calibration is applied to the spatially aligned images. Afterward, the feature maps are transformed to a common scale so that they end up in a similar representation format. Finally, IF merges the resulting images into one image containing an enhanced description of the scene. The main goal of fusion is to obtain a more informative fused image [2].
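To make the registration step concrete, the following is a minimal sketch, assuming OpenCV (cv2) and NumPy are available: it aligns a source image to a reference image using ORB features, descriptor matching, and a RANSAC-estimated homography. The function name and parameter values are illustrative, not taken from any surveyed method.

```python
import cv2
import numpy as np

def register_to_reference(source, reference):
    """Warp `source` so that it is spatially aligned with `reference`."""
    orb = cv2.ORB_create(5000)
    k1, d1 = orb.detectAndCompute(source, None)
    k2, d2 = orb.detectAndCompute(reference, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src_pts = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the homography (the optimization problem above).
    H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    h, w = reference.shape[:2]
    return cv2.warpPerspective(source, H, (w, h))
```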
3 Image Fusion Techniques
IF techniques can be classified into the spatial and frequency domains. Spatial techniques deal directly with the pixel values of the input images, which are manipulated to attain a suitable outcome. In frequency domain techniques, the fusion operations are evaluated on the Fourier Transform (FT) of the images, and the inverse FT is then evaluated to obtain the resulting image. Spatial IF techniques include PCA, IHS, high-pass filtering and the Brovey method [12].
Discrete transform fusion techniques are more extensively used in image fusion than pyramid based fusion techniques. The different types of IF techniques are shown in Fig. 3 [13].
3.1 Spatial Based Techniques
Spatial based techniques are simple image fusion methods, including the Max–Min, Minimum, Maximum, Simple Average and Simple Block Replace techniques [14, 15]. Table 1 shows the diverse spatial domain based methods with their pros and cons.
3.1.1 Simple Average
This fusion technique combines images by averaging their pixels. It treats all regions of the image equally and works well if the images are taken from the same type of sensor [16]. If the images have high brightness and high contrast, it produces good results.
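As a minimal sketch, simple-average fusion reduces to a pixel-wise mean over registered, same-sized images (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def fuse_average(images):
    """Pixel-wise mean of a list of registered, same-sized images."""
    stack = np.stack([im.astype(np.float64) for im in images])
    return stack.mean(axis=0).astype(images[0].dtype)
```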
3.1.2 Minimum Technique
It selects the lowest intensity value of the corresponding pixels from the source images to produce a fused image [14]. It is used for darker images [17].
3.1.3 Maximum Technique
It selects the highest-intensity pixel values from the source images to yield the fused image [12].
3.1.4 Max–Min Technique
It averages the smallest and largest pixel values across the entire set of source images to produce the resultant merged image.
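The three selection rules of Sects. 3.1.2–3.1.4 can be sketched together as follows (registered, same-sized images assumed; names illustrative):

```python
import numpy as np

def fuse_minimum(images):
    return np.min(np.stack(images), axis=0)   # darkest pixel wins

def fuse_maximum(images):
    return np.max(np.stack(images), axis=0)   # brightest pixel wins

def fuse_max_min(images):
    stack = np.stack([im.astype(np.float64) for im in images])
    # Average of the smallest and largest pixel values across all sources.
    fused = (stack.min(axis=0) + stack.max(axis=0)) / 2.0
    return fused.astype(images[0].dtype)
```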
Fig. 2  The main steps of IF procedure
3.1.5 Simple Block Replace Technique
It operates on neighboring pixel blocks: the pixel values of all images within a block are added, and the block is replaced by the block average.
3.1.6 Weighted Averaging Technique
It assigns a weight to every pixel in the source images, and the resultant image is produced as the weighted sum of the pixel values in the source images [18]. This method improves the detection reliability of the output image.
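A minimal weighted-averaging sketch is shown below; how the weights are chosen (fixed, or derived from local saliency) is application-specific and left open here:

```python
import numpy as np

def fuse_weighted(images, weights):
    """Weighted sum of registered images; `weights` is one scalar per image."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                                 # normalize to sum to 1
    stack = np.stack([im.astype(np.float64) for im in images])
    return np.tensordot(w, stack, axes=1).astype(images[0].dtype)
```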
Fig. 3  Image Fusion Techniques
Table 1  Diverse spatial domain based methods with their pros and cons, as per the literature review

- Averaging [14, 21]; Minimum pixel value [14, 21]; Simple block replacement [15]; Maximum pixel value [14, 21]; Max–min [15]
  Advantages: simple, easy to recognize and implement.
  Disadvantages: decrease the image quality and introduce noise into the final fused image; produce blurred images; not appropriate for real-time applications.
- Weighted averaging [18]
  Advantages: improves the detection reliability; enhances the SNR.
- Principal component based analysis [22, 23]
  Advantages: simple and more efficient; high spatial quality; lower computational time.
  Disadvantages: results in color distortion and spectral degradation.
- Hue intensity saturation [21]
  Advantages: efficient and simple; high sharpening ability; fast processing.
  Disadvantages: color distortion.
- Brovey [14]
  Advantages: extremely straightforward and more efficient; faster processing time; gives Red–Green–Blue images with a superior degree of contrast.
  Disadvantages: color distortion.
- Guided filtering [24]
  Advantages: performs well in image smoothing or enhancement, flash/no-flash imaging, matting or feathering, and joint upsampling.
  Disadvantages: not directly applicable to sparse inputs; like other explicit filters, it may produce halos near some edges.
3.1.7 Hue Intensity Saturation (HIS)
It is a basic color fusion technique that converts the Red–Green–Blue image into HIS components, after which the intensity component is replaced using the panchromatic (PAN) image. The intensity carries the spatial information, while hue and saturation carry the spectral information of the image. It operates on three low-resolution multispectral Red–Green–Blue (RGB) bands. In the end, the inverse transformation converts the HIS space back to the original RGB space to yield the fused image [12]. It is a very straightforward technique for combining image features and provides a high spatial quality image. It gives the best results on remote sensing images; its major drawback is that it involves only three bands [19].
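A minimal IHS-style pansharpening sketch is given below; it assumes the low-resolution RGB image has already been upsampled to the PAN size, and it uses OpenCV's HSV conversion as a stand-in for a true IHS transform, so it is an approximation of the scheme described above:

```python
import cv2
import numpy as np

def ihs_pansharpen(rgb_up, pan):
    """rgb_up: (H, W, 3) uint8 RGB upsampled to PAN size; pan: (H, W) uint8."""
    hsv = cv2.cvtColor(rgb_up, cv2.COLOR_RGB2HSV)
    v = hsv[:, :, 2].astype(np.float64)
    p = pan.astype(np.float64)
    # Match the PAN band's mean/variance to the intensity channel, then swap.
    p = (p - p.mean()) * (v.std() / (p.std() + 1e-9)) + v.mean()
    hsv[:, :, 2] = np.clip(p, 0, 255).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
```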
3.1.8 Brovey Transform Method
Gillespie et al. suggested the Brovey transform in 1987. It is a straightforward technique for merging data from more than one sensor, and it overcomes the three-band problem. It normalizes the three multispectral bands used for RGB and appends the intensity and brightness to the image [13]. It includes an RGB color transform known as the color normalization transform, which avoids the disadvantages of the multiplicative technique [12]. It is helpful for visual interpretation but generates spectral distortion [19].
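A minimal sketch of the Brovey (color normalization) transform: each multispectral band is scaled by the ratio of the PAN band to the mean of the MS bands (registered inputs in a common value range assumed):

```python
import numpy as np

def brovey(ms, pan):
    """ms: (H, W, 3) multispectral bands; pan: (H, W) panchromatic band."""
    ms = ms.astype(np.float64)
    pan = pan.astype(np.float64)
    intensity = ms.mean(axis=2) + 1e-9        # avoid division by zero
    return ms * (pan / intensity)[:, :, None]
```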
3.1.9 Principal Component Analysis (PCA)
It is a statistical method based on an orthogonal transformation that converts a set of observations of possibly correlated variables into principal components, a set of linearly uncorrelated variables. The main drawbacks of PCA are spectral degradation and color distortion [9].
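One common way to realize PCA-based fusion, sketched minimally below, is to weight the two source images by the components of the leading eigenvector of their joint covariance matrix (function name illustrative):

```python
import numpy as np

def pca_fuse(a, b):
    x = np.stack([a.ravel(), b.ravel()]).astype(np.float64)
    cov = np.cov(x)                            # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])       # principal eigenvector
    w = v / v.sum()                            # normalized weights
    fused = w[0] * a.astype(np.float64) + w[1] * b.astype(np.float64)
    return fused.astype(a.dtype)
```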
3.1.10 Guided Filtering
It works as an edge-preserving smoothing operator, similar to the popular bilateral filter, but with better behavior near edges. It has a theoretical link with the Laplacian matrix. It is a fast, exact linear-time algorithm whose complexity is independent of the mask size. The filter is efficient and effective in graphics and computer vision applications such as joint upsampling, haze removal, detail smoothing and noise reduction [20]. IF is also used in the medical domain to identify various diseases: in [161], the authors performed experiments on brain images and showed that the guided filter provides better results than principal component analysis and the multi-resolution singular value decomposition technique.
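For reference, the guided filter of He et al. [20] can be sketched compactly from box filters; the radius r and regularizer eps below are illustrative defaults, not values from the cited works:

```python
import cv2
import numpy as np

def guided_filter(I, p, r=8, eps=1e-3):
    """Edge-preserving smoothing of p, guided by image I (floats in [0, 1])."""
    box = lambda x: cv2.boxFilter(x, -1, (2 * r + 1, 2 * r + 1))
    mean_I, mean_p = box(I), box(p)
    cov_Ip = box(I * p) - mean_I * mean_p
    var_I = box(I * I) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                 # local linear coefficients
    b = mean_p - a * mean_I
    return box(a) * I + box(b)                 # q = mean_a * I + mean_b
```

In guided-filter fusion schemes such as [23], a filter of this form smooths per-pixel weight maps under the guidance of the source images before the weighted combination.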
3.2 Frequency Domain
These techniques decompose the input images into multiscale coefficients [25]. Spatial distortion can be handled by frequency domain methods. Table 2 lists the various frequency domain based methods with their pros and cons.
Table 2  Diverse frequency domain based methods with their pros and cons, as discussed by several authors

- Laplacian/Gaussian pyramid [27, 28]; Low pass pyramid ratio [29]; Morphological pyramid [28]; Gradient pyramid [30]; Filter subtract decimate [30]
  Advantages: provide a better-quality representation for multi-focus images.
  Disadvantages: provide almost the same results; the number of decomposition levels affects the IF results.
- Discrete cosine transform (DCT) [38]
  Advantages: decomposes images into a series of waveforms; used for real-time applications.
  Disadvantages: low-quality fused image.
- Discrete wavelet technique with Haar fusion [39]
  Advantages: produces a better-quality fused image with good SNR; reduces spectral distortion.
  Disadvantages: the merged image has lower spatial resolution.
- Kekre's wavelet transform fusion [43, 44]
  Advantages: usable for images of any size; the final fused result is more informative.
  Disadvantages: computationally complex.
- Kekre's hybrid wavelet based transform fusion [45, 46]
  Advantages: gives better results.
  Disadvantages: cannot be used for images whose size is an integer power of two.
- Stationary wavelet transform (SWT) [35–37]
  Advantages: gives better results at decomposition level 2.
  Disadvantages: time-consuming process.
- Stationary wavelet transform (SWT) and curvelet transform
  Advantages: suitable for real-time applications.
  Disadvantages: very time-consuming process.
3.2.1 Laplacian Pyramid Fusion Technique
It uses an interpolation sequence and a Gaussian pyramid for multi-resolution analysis in image fusion. Saleem et al. reported an improved IF technique using a contrast pyramid transform on multi-source images [151]. However, it suffers from limited extraction ability, which can be overcome by multi-scale decomposition. Further, Li et al. improved the gradient pyramid multi-source IF method, which attains the high-band coefficients with the help of a gradient direction operator [9].
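A minimal Laplacian-pyramid fusion sketch is given below, assuming OpenCV: band-pass levels are fused by absolute-maximum selection and the coarsest level by averaging; the dstsize argument keeps pyrUp output sizes consistent for odd dimensions.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    g, pyr = img.astype(np.float64), []
    for _ in range(levels):
        down = cv2.pyrDown(g)
        pyr.append(g - cv2.pyrUp(down, dstsize=g.shape[1::-1]))
        g = down
    pyr.append(g)                               # coarsest Gaussian level
    return pyr

def pyramid_fuse(a, b, levels=4):
    pa, pb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)   # abs-max rule
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2.0)                 # average base
    out = fused[-1]
    for lap in reversed(fused[:-1]):                      # collapse pyramid
        out = cv2.pyrUp(out, dstsize=lap.shape[1::-1]) + lap
    return out
```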
3.2.2 Discrete Transform Fusion Method [14]
Discrete transform based fusion operates on the source images. First, if the images are colored, the RGB (Red–Green–Blue) components of the multiple images are separated; subsequently, a discrete transformation is applied to the images, the average of the transformed images is computed, and an inverse transformation is applied at the end to obtain the fused image. DWT (discrete wavelet transform) is a better IF method compared to other fusion methods such as the Laplacian pyramid method and the curvelet transform method [26].
3.2.3 Discrete Cosine Transform (DCT)
In image fusion, DCT has various variants, such as DCTma (DCT magnitude), DCTcm (DCT contrast measure), DCTch (DCT contrast highest), DCTe (DCT energy) and DCTav (DCT average) [29]. The technique does not give good results with block sizes smaller than 8×8. In the DCT domain, DCTav is the most straightforward and basic method of image fusion, while the DCTe and DCTma methods perform well. The technique is straightforward and used in real-time applications.
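An illustrative block-DCT fusion sketch in the spirit of the energy-based variant (DCTe): for each 8×8 block, the coefficients of the source block with the higher AC energy are kept. SciPy is assumed available, and image sides are assumed to be multiples of the block size.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(a, b, bs=8):
    out = np.empty_like(a, dtype=np.float64)
    for i in range(0, a.shape[0], bs):
        for j in range(0, a.shape[1], bs):
            ca = dctn(a[i:i+bs, j:j+bs].astype(np.float64), norm='ortho')
            cb = dctn(b[i:i+bs, j:j+bs].astype(np.float64), norm='ortho')
            # AC energy = total energy minus the DC coefficient's share.
            ea = (ca ** 2).sum() - ca[0, 0] ** 2
            eb = (cb ** 2).sum() - cb[0, 0] ** 2
            out[i:i+bs, j:j+bs] = idctn(ca if ea >= eb else cb, norm='ortho')
    return out
```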
3.2.4 Discrete Wavelet Transform (DWT) Method
The DWT method decomposes two or more images into various high- and low-frequency bands [31]. It minimizes the spectral distortion in the resultant fused images, producing a good signal-to-noise ratio, though with lower spatial resolution, compared to pixel-based methods. Wavelet fusion performs better than spatial domain fusion with respect to minimizing color distortion [15].
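A minimal single-level DWT fusion sketch using the PyWavelets package (an assumption; any DWT implementation would do): approximation coefficients are averaged, and detail coefficients are fused by absolute-maximum selection.

```python
import numpy as np
import pywt

def dwt_fuse(a, b, wavelet='haar'):
    ca, (ha, va, da) = pywt.dwt2(a.astype(np.float64), wavelet)
    cb, (hb, vb, db) = pywt.dwt2(b.astype(np.float64), wavelet)
    absmax = lambda x, y: np.where(np.abs(x) >= np.abs(y), x, y)
    fused = ((ca + cb) / 2.0,                     # average approximation band
             (absmax(ha, hb), absmax(va, vb), absmax(da, db)))
    return pywt.idwt2(fused, wavelet)             # inverse transform
```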
3.2.5 Kekre's Wavelet Transform (KWT) Method
Kekre's wavelet transform is obtained from Kekre's transform [32]. KWT matrices of sizes ((2N)×(2N)), ((3N)×(3N)), …, ((N²)×(N²)) can be generated from the Kekre's transform matrix [33]. It can be used for more than one image, and the fused image is far better than with other methods.
3.2.6 Kekre's Hybrid Wavelet Transform (KHWT) Method
The KHWT method is derived from hybrid wavelet transforms. Many authors have suggested that the Kekre–Hadamard wavelet method gives more brightness, while the hybrid Kekre–DCT wavelet method gives good results. In this method, the best two matrices are combined into a hybrid wavelet transform. It cannot be used for images whose size is an integer power of two [45].
3.2.7 Stationary Wavelet Transform (SWT) Method
The DWT method lacks translation invariance, a disadvantage which the stationary wavelet transform overcomes [34]. The technique provides better output at decomposition level 2 but is a time-inefficient process [35–37]. SWT is derived from the DWT method; it is a newer type of wavelet transform with translation invariance, providing enhanced analysis of image facts. The second-generation curvelet transform method, discussed next, is additionally suitable for 2-D image edges.
3.2.8 Curvelet Transform Method
SWT has good time–frequency characteristics and achieves good results in smooth regions. The second-generation curvelet transform is a new multi-scale transform; it overcomes the disadvantage of wavelet methods in representing the directions of edges and boundaries in the image [11, 40–42].
3.3 Deep Learning
Another technique widely used for image fusion in various domains is deep learning. Several deep learning based image fusion methods have been presented for multi-focus, multi-exposure, multi-modal, multi-spectral (MS), and hyper-spectral (HS) image fusion, showing various advantages. Various recent advances related to deep learning based image fusion are discussed in [10]. In other work, deep learning and case-based reasoning techniques are used with image fusion to enhance the outcome of segmentation: the authors used artificial intelligence to improve segmentation results, with an implementation on kidney and tumour images. The process completes in three layers: a data layer, a fusion layer, and a segmentation layer [159]. A multi-view deep learning model has also been used on validation and testing sets of chest CT images for Covid-19, where it is helpful for the diagnosis problem; the data were collected from various hospitals in China [160]. The popularity of deep learning based methods for image fusion stems from the fact that deep learning models can extract the most effective features automatically from data, without any human intervention. These models can also characterize complex relationships between the target data and the input, and they provide potential image representation approaches which could be useful to the study of image fusion. Commonly used deep learning models in image fusion are the Convolutional Neural Network (CNN), Convolutional Sparse Representation (CSR) and Stacked Autoencoder (SAE). Table 3 lists the various advantages and disadvantages of deep learning based image fusion methods.
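As an illustration of the CNN route, the sketch below fuses feature maps from a shared convolutional encoder by element-wise maximum and decodes the result. It is an untrained toy model in PyTorch with assumed layer sizes, not the architecture of any method cited above.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # reconstructs fused image
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, a, b):
        fa, fb = self.encoder(a), self.encoder(b)
        return self.decoder(torch.maximum(fa, fb))  # element-wise max fusion

# Usage: fused = FusionCNN()(img_a, img_b) with (N, 1, H, W) float tensors.
```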
4 Image Fusion Categorization
It is the process which integrates the source image and the reference image into one image. Diverse techniques have been proposed by different authors to achieve the required fusion objective. Single-sensor, multi-sensor, multi-modal, multi-view, multi-focus, and multi-temporal fusion are the major classes of such methods, and they are discussed below.
4.1 Single Sensor
A number of images from one sensor are merged to produce a fused image with the best possible information. For instance, human operators cannot perceive desired objects in lighting-variant and noisy environments, but such objects can be highlighted in the final fused image. The inadequacy of this type of system lies in the imaging sensor itself, which is used across many sensing areas. The resolution of the images is limited by the sensor and by the conditions under which the system is operated within its dynamic range. For example, a visible-band sensor (digital camera) is appropriate for illuminated daylight scenes but not for badly illuminated nighttime environments or for unfavorable conditions such as fog or rain [47].
4.2 Multi Sensors
Multi-sensor IF overcomes the limitations of single-sensor IF by integrating images from several sensors to form a compound image. For example, an infrared (IR) camera accompanies a digital camera, and the final image is obtained from the individual images: the infrared camera is suitable in inadequately illuminated environments, while the digital camera is suitable for daylight views. Multi-sensor fusion is used in machine vision, military applications, medical imaging, robotics and object detection, and is mostly used to merge the information of numerous images [47]. According to the literature review, Table 4 shows the various multi-sensor techniques discussed by several authors.
4.3 Multi‑view Fusion
Multi-view fusion combines images with diverse views taken at the same time; it is also known as mono-modal fusion [47]. Existing methods do not achieve acceptable performance in all cases, especially when one of the estimations is of low quality, in which case they are unable to discard it [49, 86, 152]. According to the literature review, Table 5 shows the various multi-view techniques discussed by several authors.
4.4 Multi‑modal Fusion
It is the process of integrating multi-modal images from one or more imaging modalities to enhance image quality. The various modalities include multispectral, panchromatic, infrared, remote sensing and visible images. According to the literature review, Table 6 shows the various multi-modal techniques discussed by several authors.
4.5 Multi‑focus Fusion
It is an efficient method for integrating the information from several images of a similar scene into one wide-ranging image. The compound image is more informative than the input images [6] and gives better visual quality. According to the literature review, Table 7 shows the various multi-focus techniques discussed by several authors.
Table 3  Deep learning based image fusion methods

- CNN [153–156]
  Advantages: able to extract features; the representation can learn the most effective features from training data without any human intervention.
  Disadvantages: high computational cost.
- CSR [157]
  Advantages: computes a sparse representation of an entire image; shift-invariant representation approach; effective in detail preservation; less sensitive to mis-registration.
  Disadvantages: needs a lot of training data.
- SAE [158]
  Advantages: the two-phase training mechanism has high potential when the scale of labeled data for supervised learning is limited.
  Disadvantages: without a good GPU, quite slow to train (for complex tasks).
Table 4  Multi-sensor image fusion techniques reported in the literature, categorized into neural network, wavelet based, contourlet based, and other classification families. Works covered: Paramanandham [48], Ehlers [49], Yoonsuk Choi [50], Ross [51], Li [52], Egfin Nirmala [53], Pohl [54], Uttam Kumar [55], Meti [56], Makode [57], Chang [58], Jiang [15], Hall [59], Satish Kumar [60], Pawar [61], Lemeshewsky [62], Dengi [63], Li [64], Zheng [65], Li [66].
Abbreviations: NN, neural network; DWT, discrete wavelet transform; DCT, discrete cosine transform; PNN, pulse-coupled neural network; WT, wavelet transform; SWT, stationary wavelet transform; ANN, artificial neural network; SI-DWT, shift invariant discrete wavelet transform; PCNN, pulse-coupled neural network; PCA, principal component analysis; NSCT, non-subsampled contourlet transform; CT, contourlet transform; HIS, hue intensity saturation; DT-CWT, dual-tree complex wavelet transform; HVS, hue value saturation; RVS, regression variable substitution; HPF, high pass filter; SVR, synthetic variable ratio
Table 5  Multi-view image fusion techniques reported in the literature, categorized into neural network, wavelet, contourlet, and other classification families. Works covered: Petrazzuoli [67], Cheung [68], Gelman [69], Maugey [70], Artigas [71], Jose L [72], C. Guillemat [73], Guo [74], Wang [75], Rajpoot [76], Forre [77], Zhang [78], Yongpeng Li [79], Frederic Dufaux [80], Das [81], Swoger [82], Seng [83], Rahul Kavi [84], Kiska [85], Liu [81].
Abbreviations: CNN, convolutional neural network; KNN, K nearest neighbor; PSNR, peak signal to noise ratio; LSFM, light sheet fluorescence microscopy; AVG, averaging; SNR, signal to noise ratio; KPCA, kernel principal component analysis; TP, temporal projection; TMIP, temporal motion interpolation projection; SVM, support vector machines; DVC, distributed video coding; ICA, independent component analysis
Table 6  Multi-modal image fusion techniques reported in the literature, categorized into neural network, wavelet based, contourlet based, and other classification families. Works covered: Wei Li et al. [87], Yong et al. [88], Max et al. [89], Kor et al. [90], Deron et al. [91], Zhao et al. [92], Richa et al. [93], Guihong Qu et al. [94], Wang et al. [95], Sharmila et al. [96], Rajiv et al. [97], Bhavana et al. [98], Anitha et al. [99], Anjali et al. [100], Periyavattam et al. [101], Pradeep et al. [102], Guruprasad et al. [103], Kiran et al. [104], Anna et al. [95], Gaurav et al. [105], Amir et al. [106], Devanna et al. [107], Senthil et al. [105].
Abbreviations: CWT, complex wavelet transform; DDWD, dual-tree wavelet transform; PSO, particle swarm optimization; MWT, wavelet transform
Table 7  Multi-focus fusion techniques reported in the literature, categorized into neural network, wavelet based, contourlet based, and other classification families. Works covered: Maruthi [4], Wang [95], Bhatnagar [105], Anish [108], Li [109], Wang [110], Li [111], Huang [112], Garg [113], Kaur [114], Kaur [115], Liu [116], Malhotra [117], Sulaiman [118], Li [119], Haghigha [120], Tian [121], Yang [122], Malik [123], Chai [124], Qu [125], Maruthi [126].
Abbreviations: SPCNN, standard PCNN; LSW, lifting stationary wavelet; MWT, wavelet transform; DDWD, dual-tree wavelet transform; EOG, energy of image gradient; HMM, hidden Markov modeling; PSO, particle swarm optimization
Table 8  Multi-temporal fusion techniques reported in the literature, categorized into neural network, wavelet based, contourlet based, and other classification families. Works covered: Anitha [127], Pawar [128], Parthiban [129], Momeni [130], Han Pan [131], Silivana [132], Jain [133], Ferretti [134], Peijun Du [135], Wang [136], Mittal [137], Wisetphanichkij [138], Visalakshi [139], Bovalo [140], Liu [141], Celik [142], Xiaojun Yang [143], Bruzzone [144], Demir [145], Zhong [146].
Abbreviations: CSS, content selection strategy; CVA, change vector analysis; SVM, support vector machine; ICC, iterative compound classification; PCC, post classification comparison; CD, change detection; LDP, local derivative pattern; OCE, overall cross entropy; HPF, high pass filter
4.6 Multi‑temporal Fusion
Multi-temporal fusion captures the same scene at different times. Long- and short-term observations are required to estimate the occurrence of changes on the ground. Because observation satellites revisit, remote sensing images are obtained at diverse times for a given area. Multi-temporal images are vital for detecting land surface variations over broad geographical areas. According to the literature review, Table 8 shows the various multi-temporal techniques discussed by several authors.
5 Main Applications in Diverse Domains
In recent years, IF has been widely used in many different applications such as medical diagnosis, surveillance, photography and remote sensing. Here, the various challenges and issues related to these different fields are discussed [3].
5.1 Remote Sensing Applications
In addition to the modalities discussed above, numerous data sources such as synthetic aperture radar, light detection and ranging, and the moderate resolution imaging spectroradiometer have proven useful in IF applications. Byun et al. gave an area based IF scheme for combining panchromatic, multispectral and synthetic aperture radar images [1]. A high-spatial and temporal data fusion approach is used to produce synthetic Landsat imagery by combining Landsat and moderate resolution imaging spectroradiometer data [1]. Moreover, the synthesis of airborne hyper-spectral and Light Detection and Ranging (LiDAR) data has recently been researched by combining spectral information. Various datasets have been provided by Earth imaging satellites like Quickbird, Worldview-2 and IKONOS for pansharpening applications. Co-registered hyper-spectral and multispectral images are more complex to obtain than multispectral and panchromatic images, while airborne hyper-spectral and LiDAR data are accessible. For instance, the IEEE Geoscience and Remote Sensing Society Data Fusion 2013 and 2015 Contests have distributed numerous hyper-spectral, color, and light detection and ranging data for research purposes. In this application field, numbers of satellites are mounted to acquire remote sensing images with diverse spatial, temporal and spectral resolutions. Moreover, classification and change detection in this field have been effectively applied to construct the imagery seen in Google Maps or Google Earth products. This is a more difficult problem than pansharpening, because the multichannel multispectral image contains both spatial and spectral information; therefore, pansharpening is unsuitable or inadequate for the IF of hyperspectral and multispectral images. The foremost challenges in this domain are as follows:
(1) Spatial and spectral distortions The image datasets frequently reveal variations in spatial and spectral structures, which cause distortions with spatial or spectral artifacts during image fusion.
(2) Mis-registration The next most important challenge in this domain is how to decrease the misregistration rate. Remote sensing input images are regularly obtained at diverse times, from diverse acquisitions or spectral bands. Even for panchromatic and multispectral datasets provided by the same platform, the sensors may not point in exactly the same direction, and their acquisition moments may differ.
Fig. 4  Examples of IF in remote sensing domain. a PAN b MS c Fused image
Therefore, in order to resolve this, the images are required to be registered prior to IF. Conversely, registration is a challenging process because of the variations between input images provided by diverse acquisitions. Figure 4 shows the fusion of panchromatic and multi-spectral images, achieved by the Principal Component Analysis (PCA) transformation.
5.2 Medical Domain Applications
Harvard Medical School has provided a brain image dataset of registered Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI). Figure 5 shows an example of IF in medical diagnosis by fusing CT and MRI. CT is used for capturing bone structures with high spatial resolution, while MRI captures soft tissue structures like the heart, eyes, and brain. CT and MRI can be used together with IF techniques to enhance accuracy and practical medical applicability. The main challenges of this field are as follows.
(1) Lack of medical-crisis-oriented IF methods The main motive of IF is to assist improved clinical results. The clinical crisis is still a big challenge and a nontrivial task in the medical field.
(2) Objective image fusion performance estimation The main difficulty in this domain is how to evaluate the IF performance. For the diverse clinical issues of IF, the preferred fusion effect may be fairly dissimilar.
(3) Mis-registration Inaccurate registration of objects results in poor performance in the medical domain. Figure 5 illustrates the fusion of MRI and CT images; the fusion is achieved by the guided filtering based technique with image statistics.
Fig. 5  Examples of IF in medical diagnosis domain. a MRI b CT c Fused image
Fig. 6  Examples of IF in surveillance domain. a Visible image b Infrared image c Fused image
5.3 Surveillance Domain Applications
Figure 6 shows an example of IF in the surveillance domain, namely infrared and visible image fusion. An infrared sensor is sensitive to the heat of objects, making it able to "see in the night" even without illumination. Infrared images have poor spatial resolution, which can be overcome by fusing the visible and infrared images. Moreover, the fusion of visible and infrared images has been introduced for other surveillance domain problems in face recognition, image dehazing, and military reconnaissance. The main challenges in this domain are:
(1) Computing efficiency In this domain, effective IF algorithms should merge the information of the original images to get the final resultant image. More importantly, these domains usually engage continuous real-time monitoring.
(2) Imperfect environmental conditions The major difficulty in this field is that the images may be acquired under imperfect circumstances. Due to weather and illumination conditions, the input images may contain under-exposure and serious noise. The fusion of visible and infrared images is shown in Fig. 6a, b. In this outline, the fusion of both images is achieved by guided filtering and image statistics.
5.4 Photography Domain Applications
Figure 7 shows an example of IF in the photography domain: the fusion of multi-focus images. Due to the restricted depth of field of a camera, it is not possible for all objects at diverse distances from the camera to be in focus within a single shot. To overcome this, the multi-focus IF method merges several images of a similar scene having diverse focus points, generating an all-in-focus resultant image. This resultant compound image can well preserve the significant information from the source images, and it is desirable in several image processing and machine vision tasks. Figure 8 shows the data sources used in the photography domain. The various challenges faced in this domain are:
(1) Effect of moving target objects In this domain, multi-exposure and multi-focus images are constantly acquired at diverse times. In these circumstances, moving objects may appear in diverse locations during the capturing process and might introduce inconsistencies into the fused image.
(2) Relevance in consumer electronics Here, images are taken from numerous shots with diverse camera settings. The challenge is to integrate the multi-exposure and multi-focus IF methods into consumer electronics to produce a high-quality compound image in real time. IF of multi-focus images (a back-focus image and a fore-focus image) is shown in Fig. 7a, b. In this outline, the IF of multi-focus images is achieved by the guided filtering based technique with image statistics.
5.5 Applications in Other Domains
Fusion is used in many other applications, such as object recognition, tracking and object detection.
Fig. 7  Examples of IF in photography domain. a Back-focus Image b Fore-focus image c Fused image
5.6 Recognition Application
One or more objects are visible in an image, and the main aim of recognition is to recognize these objects clearly. Face recognition is a major application in this domain. Recognition algorithms use the infrared and visible IF method, and there are two ways to recognize the image: in the first, the images are fused first and the objects are then recognized; in the second, recognition is embedded within the fusion process, as proposed by Faten et al. This can help improve the recognition precision with the help of narrowband images and enhance the fusion results.
5.7 Detection and Tracking Application
Detection and tracking use infrared and visible IF and appear in real-life applications such as fruit detection and object detection, determining the accurate place of the objects at a given time. Detection fusion algorithms can be differentiated into two classes: the first class detects the objects before fusing, while the second fuses the images before detecting objects. He et al. introduced an algorithm with multilevel image fusion that enhanced the target detection of the object. Pixel and feature level image fusion are also considered in this method, and it exploits the relationship between high- and low-frequency information, which is ignored in wavelet transform IF. The main motive of this method is to enhance target visibility.
Target tracking algorithms are similar to detection algorithms; they must determine the relationship between the frames of the target objects in a particular time sequence. Stephen et al. introduced an enhanced target tracking approach through visible and infrared IF by using a PCA-weighted fusion rule. In most of the algorithms, detection, recognition and tracking are independent of each other and are designed to recover the features or visibility of the actual images [4].
5.8 Performance Evaluation Measures
A number of performance evaluation metrics have been proposed to evaluate the performance of diverse IF techniques. They can be categorized as subjective and objective assessment measures [52]. Subjective assessment measures play an important role in IF, as they evaluate the fused image quality based on human visual perception. They can compare various fusion techniques and methods according to standards like image details, image distortion, and object competence. In infrared and visible IF, subjective evaluation is popular and reliable; its disadvantages are high cost, time consumption, irreproducibility and human intervention. Objective assessment is carried out to quantify the fused image quality quantitatively. It is not biased by observers and is highly consistent with visual perception. Objective metrics come in diverse types, based on image gradient, structural similarity, information theory, human visual perception and statistics [1]. A number of metrics for quantifying the quality of fused images are presented in this survey. They are further categorized as reference and non-reference evaluation measures. Evaluation measures based on a reference image are given below.
i. The mean square error (MSE) computes the error as the real difference between the ideal (expected) result and the compound result [1, 147]. This metric is defined as:

MSE = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left(A_{ij} - B_{ij}\right)^{2}  (1)

where A and B are the ideal and compound images respectively, i and j are the pixel row and column indices, and m and n are the height and width of the image, i.e. the number of pixel rows and columns.

ii. The structural similarity index metric (SSIM) quantifies the similarity between two images. It is designed by modeling contrast and radiometric distortion, and it combines luminance distortion, contrast distortion, and loss of correlation (structure distortion) between the source images and the final image [1, 11, 147–150]. This metric is defined as:

SSIM(x, y) = \frac{\left(2\mu_{x}\mu_{y} + c_{1}\right)\left(2\sigma_{xy} + c_{2}\right)}{\left(\mu_{x}^{2} + \mu_{y}^{2} + c_{1}\right)\left(\sigma_{x}^{2} + \sigma_{y}^{2} + c_{2}\right)}  (2)

where \mu_{x} and \mu_{y} are the averages of x and y, \sigma_{x}^{2} and \sigma_{y}^{2} are the variances of x and y, \sigma_{xy} is the covariance of x and y, and c_{1} and c_{2} are two variables that stabilize the division when the denominator is weak.

iii. The peak signal to noise ratio (PSNR) computes the ratio of the peak signal power to the noise power [1, 11, 147, 149, 150]. This metric is defined as:

PSNR = 10\log_{10}\left\{\frac{r^{2}}{MSE}\right\}  (3)

Here r indicates the peak value of the fused image. A high PSNR value means the fused image is closer to the input image and there is less distortion in the fusion method.

iv. The Erreur Relative Globale Adimensionnelle de Synthese (ERGAS), introduced by Lucien Wald, is employed to quantify the image quality resulting from the fusion of high spatial resolution images [148, 151]. This metric is defined as:

ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{k=1}^{N} RMSE\left(B_{k}\right)^{2} / \left(M_{k}\right)^{2}}  (4)

where h and l denote the resolutions of the PAN and MS images, N indicates the number of bands of the fused image, and M_{k} is the mean radiance value of the MS image for the band B_{k} [151].

v. The overall cross entropy (OCE) determines the difference between the source images and the fused image; a smaller OCE value indicates better results [11, 12]. This metric is defined as:

OCE\left(f_{A}, f_{B}; F\right) = \left(CE\left(f_{A}; F\right) + CE\left(f_{B}; F\right)\right)/2  (5)

Here CE indicates the cross-entropy between a source image (f_{A} or f_{B}) and the fused image F.

vi. Visual information fidelity (VIF) measures the distortions of images, including blur, local or global contrast changes and additive noise [1, 150]. This metric is defined as:

VIF = \text{distorted image information} / \text{reference image information}  (6)

vii. Mutual information (MI) gives the quantity of information from the source images that is merged into the resultant image; the highest mutual information represents the most effective IF technique [1, 11, 141, 150, 151]. This is defined as:

MI_{AF} = \sum_{a,f} P_{A,F}(a, f)\log\left[\frac{P_{A,F}(a, f)}{P_{A}(a)P_{F}(f)}\right]  (7)

where P_{A}(a) and P_{F}(f) denote the marginal histograms of the input image A and the fused image F, and P_{A,F}(a, f) indicates their joint histogram. A high mutual information value means the fusion performance is good.

viii. The spectral angle mapper (SAM) calculates the spectral similarity between the original and the final fused image as the angle between two vectors [21]. This metric is defined as:

\alpha = \cos^{-1}\left(\frac{\sum_{i=1}^{b} t_{i} r_{i}}{\left(\sum_{i=1}^{b} t_{i}^{2}\right)^{1/2}\left(\sum_{i=1}^{b} r_{i}^{2}\right)^{1/2}}\right)  (8)

Here b denotes the number of bands, and t_{i} and r_{i} stand for the i-th band of the test and reference images.

ix. The signal to noise ratio (SNR) is used to determine the noise; the larger the signal to noise value, the better the resultant compound image [11, 12]. This metric is defined as:

SNR = 10\log_{10}\left(\frac{\sum_{x=1}^{P}\sum_{y=1}^{Q} I_{r}(x, y)^{2}}{\sum_{x=1}^{P}\sum_{y=1}^{Q}\left(I_{r}(x, y) - I_{f}(x, y)\right)^{2}}\right)  (9)

where I_{r}(x, y) indicates the pixel intensity of the estimated image and I_{f}(x, y) the pixel intensity of the source image. A high signal to noise value means the estimation error is small and the fusion performance is better.

Non-reference image quality evaluation measures, which do not need a reference image, are given below.

i. The standard deviation (SD) measures the spread of the data over the whole image [1, 151]. This metric is defined as:

SD = \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(h(i, j) - \bar{H}\right)^{2}}  (10)

where \bar{H} denotes the mean value of the fused image. A high SD value means the fused image achieves a good visibility effect.

ii. The spatial frequency error (SFE) is a quantitative measure to objectively assess the quality of the final fused image [149]. This is defined as:

SFE = \frac{SF_{f} - SF_{r}}{SF_{r}}  (11)

where SF_{f} is the spatial frequency of the fused image and SF_{r} that of the reference image.

iii. Entropy (EN) evaluates the information content of an image, though it is sensitive to noise; an image with large information content has low cross entropy [1, 11, 12, 149, 151]. This metric is defined as:

EN = -\sum_{l=0}^{L-1} p_{l}\log_{2} p_{l}  (12)

where L denotes the total number of gray levels and p_{l} denotes the normalized histogram of the corresponding gray level. A higher EN value means the image contains more information and the fusion performs better.

iv. The universal image quality index (UIQI) is motivated by the human visual system. It is based on the structural information of the final fused image, combining loss of correlation, contrast distortion and luminance distortion [11, 150]. This metric is defined as:

UIQI = \frac{\sigma_{I_{1}I_{F}}}{\sigma_{I_{1}}\sigma_{I_{F}}}\cdot\frac{2\mu_{I_{1}}\mu_{I_{F}}}{\left(\mu_{I_{1}}\right)^{2} + \left(\mu_{I_{F}}\right)^{2}}\cdot\frac{2\sigma_{I_{1}}\sigma_{I_{F}}}{\left(\sigma_{I_{1}}\right)^{2} + \left(\sigma_{I_{F}}\right)^{2}}  (13)

where \mu denotes the average, \sigma the standard deviation, and \sigma_{I_{1}I_{F}} the covariance of the two images.

v. The fusion mutual information (FMI) metric evaluates the degree of dependence between the images. It is based on mutual information (MI) and measures the feature information transferred from the input images to the fused image, reflecting the image quality [1, 11]. This metric is defined as:

FMI = MI_{A,F} + MI_{B,F}  (14)

where A and B are the input images and F is the fused image. A high FMI value indicates that considerable information is transferred from the inputs to the fused image.

vi. Spatial frequency (SF) is an image quality index built from the row frequency (RF) and column frequency (CF), based on the horizontal and vertical gradients. It can effectively calculate the gradient distribution of an image and gives more texture detail [1, 11, 150, 151]. This metric is defined as:

SF = \sqrt{RF^{2} + CF^{2}}  (15)

RF = \sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(F(i, j) - F(i, j-1)\right)^{2}}  (16)

CF = \sqrt{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(F(i, j) - F(i-1, j)\right)^{2}}  (17)

where F is the fused image. A fused image with high SF contains rich edge and texture information.

vii. A large mean gradient (MG) implies that the composite image captures rich edge and texture information, i.e. the fusion performance is better [1, 149]. This metric is defined as:

MG = \frac{1}{(M-1)(N-1)}\sum_{x=1}^{M-1}\sum_{y=1}^{N-1}\sqrt{\frac{\left(F(x, y) - F(x-1, y)\right)^{2} + \left(F(x, y) - F(x, y-1)\right)^{2}}{2}}  (18)

where F is the final fused image.

viii. The average difference (AD) is the proportional value of the difference between the actual (ideal) data and the fused data [147]. This metric is defined as:

AD = \frac{1}{mn}\sum_{i=1}^{m}\sum_{j=1}^{n}\left|A_{ij} - B_{ij}\right|  (19)

ix. The average gradient (AG) measures the gradient information of the composite image and provides its texture detail [1, 150, 151]. The AG metric is defined as:

AG = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\sqrt{\frac{\nabla F_{x}^{2}(i, j) + \nabla F_{y}^{2}(i, j)}{2}}  (20)

A high AG value means the image contains more gradient information and the fusion algorithm performs better.

x. Normalized cross correlation (NCC) is employed to determine the similarity between the input and fused images [147]. This metric is defined as:

NCC = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left(A_{ij} \cdot B_{ij}\right)}{\sum_{i=1}^{m}\sum_{j=1}^{n} A_{ij}^{2}}  (21)

xi. The mean absolute error (MAE) measures the mean absolute difference between the related pixels in the original and final fused images [11]. This is defined as:

MAE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left|s(i, j) - y(i, j)\right|  (22)

xii. The normalized absolute error (NAE) is a quality measure that normalizes the error value with respect to the expected (ideal) values: it is the dissimilarity between the actual and desired outcome, divided by the sum of the expected values [147]. This metric is defined as:

NAE = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left|A_{ij} - B_{ij}\right|}{\sum_{i=1}^{m}\sum_{j=1}^{n} A_{ij}}  (23)

xiii. Correlation determines the correlation between the referenced and resultant images. If the value is one, the two images are exactly the same; if it is less than one, the two images show more dissimilarity [11].
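For concreteness, minimal NumPy sketches of four of the above metrics (Eqs. 1, 3, 12 and 15–17) are given below for 8-bit grayscale arrays; the SF sketch normalizes the gradient sums by the number of pixels, a common practical convention rather than the literal unnormalized sums above.

```python
import numpy as np

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, r=255.0):
    # Small epsilon guards against log of zero for identical images.
    return 10.0 * np.log10(r ** 2 / (mse(a, b) + 1e-12))

def entropy(img):
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def spatial_frequency(f):
    f = f.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))   # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))   # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```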
6  Discussion
Despite the various constraints handled by several researchers, research and development in the field of image fusion is still growing day by day. Image fusion has several open-ended difficulties in different domains. The main aim here is to discuss the current challenges and future trends of image fusion arising in various domains, such as surveillance, photography, medical diagnosis and remote sensing, as analyzed in the fusion processes. This paper has discussed various spatial and frequency domain methods as well as their performance evaluation measures. Simple image fusion techniques cannot be used in actual applications. PCA, hue intensity saturation and Brovey methods are computationally proficient, high-speed and extremely straightforward, but result in color distortion. Images fused with principal component analysis have a spatial advantage but suffer spectral degradation. Guided filtering is an easy, computationally efficient method and is more suitable for real-world applications. The number of decomposition levels affects the outcome of pyramid decomposition in image fusion. Every algorithm has its own advantages and disadvantages. The main challenge faced in the remote sensing field is to reduce the visual distortions after fusing panchromatic (PAN), hyperspectral (HS) and multi-spectral (MS) images. This is because the source images are captured using different sensors on the same platform that do not focus in the same direction, and their acquisition moments are not exactly the same. The dataset and its accessibility represent a restriction faced by many researchers. The progress of image fusion has increased interest in colored images and their enhancement; the aim of color contrast enhancement is to produce an appealing image with bright color and clarity of the visual scene. Recently, researchers have used neutrosophy in image fusion to remove noise and to enhance the quality of single photon emission tomography (SPET), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) images. This integration of neutrosophy with image fusion results in noise reduction and better visibility of the fused image. Deep learning is the rising trend for developing automated applications and is extensively applied in areas such as face recognition, speech recognition, object detection and medical imaging. The integration of quantitative and qualitative measures is the accurate way to determine which particular fusion technique is better for a certain application. The challenges generally faced by researchers lie in designing image transformation and fusion strategies. Moreover, the lack of effective image representation approaches and of widely recognized fusion evaluation metrics for performance evaluation of image fusion techniques is also of great concern, whereas recent progress in machine learning and deep learning based image fusion shows a huge potential for future improvement.
7  Conclusion
Recently, the area of image fusion has been attracting more attention. In this paper, various image fusion techniques with their pros and cons and different state-of-the-art methods have been discussed. Different applications like medical imaging, remote sensing, photography and surveillance have been discussed with their challenges. Finally, the different evaluation metrics for image fusion techniques, with or without reference, have been discussed. It is concluded from the survey that each image fusion technique is meant for a specific application, and techniques can be used in various combinations to obtain better results. In future, new deep neural network based image fusion methods will be developed for various domains to improve the efficiency of the fusion procedure by implementing the algorithms on parallel computing units.
Compliance with Ethical Standards
Conflict of Interest There is no conflict of interest.
References
1. Ma J, Ma Y, Li C (2019) Infrared and visible image fusion meth-
ods and applications: a survey. Inf Fus 1(45):153–178
2. El-Gamal FE, Elmogy M, Atwan A (2016) Current trends in
medical image registration and fusion. Egyptian Inform J
17(1):99–124
3. Li S, Kang X, Fang L, Hu J, Yin H (2017) Pixel-level image fusion: a survey of the state of the art. Inf Fusion 33:100–112
4. Maruthi R, Lakshmi I (2017) Multi-focus image fusion methods–
a survey. Comput Eng 19(4):9–25
5. Meher B, Agrawal S, Panda R, Abraham A (2019) A survey on
region based image fusion methods. Inf Fusion 48:119–132
6. Liu Z, Chai Y, Yin H, Zhou J, Zhu Z (2017) A novel multi-focus
image fusion approach based on image decomposition. Inf Fusion 35:102–116
7. James AP, Dasarathy BV (2014) Medical image fusion: a survey
of the state of the art. Inf Fusion 19:4–19
8. Madkour M, Benhaddou D, Tao C (2016) Temporal data rep-
resentation, normalization, extraction, and reasoning: a review
from clinical domain. Comput Methods Programs Biomed 128:52–68
9. Bai L, Xu C, Wang C (2015) A review of fusion methods
of multi-spectral image. Optik-Int J Light Electron Optics
126(24):4804–4807
10. Liu Y, Chen X, Wang Z, Wang ZJ, Ward RK, Wang X (2018)
Deep learning for pixel-level image fusion: recent advances and
future prospects. Inf Fusion 42:158–173
11. Du J, Li W, Lu K, Xiao B (2016) An overview of multi-modal
medical image fusion. Neurocomputing 215:3–20
12. Morris C, Rajesh RS (2014) Survey of spatial domain image
fusion techniques. Int J Adv Res Comput Sci Eng Inf Technol
2(3):249–254
13. Mishra D, Palkar B (2015) Image fusion techniques: a review.
Int J Comput Appl 130(9):7–13
14. Jasiunas MD, Kearney DA, Hopf J, Wigley GB (2002) Image
fusion for uninhabited airborne vehicles. In: 2002 IEEE Inter-
national conference on field-programmable technology, 2002.
(FPT). Proceedings, p 348–351. IEEE
15. Dong J, Dafang Z, Yaohuan H, Jinying F (2011) Survey of
multispectral image fusion techniques in remote sensing appli-
cations. In: Zheng Y (ed) Image fusion and its applications.
Alcorn State University, USA
16. Banu RS (2011) Medical image fusion by the analysis of pixel
level multi-sensor using discrete wavelet Transform. In: Pro-
ceedings of the national conference on emerging trends in com-
puting science, p 291–297
17. Bavachan B, Krishnan DP (2014) A survey on image fusion
techniques. IJRCCT 3(3):049–052
18. Song L, Lin Y, Feng W, Zhao M (2009) A novel automatic
weighted image fusion algorithm. In: 2009. ISA 2009. Inter-
national Workshop on Intelligent Systems and Applications, p
1–4
19. Singh N, Tanwar P (2012) Image fusion using improved con-
tourlet transform technique. Int J Recent Technol Eng (IJRTE),
vol 1, no. 2
20. He K, Sun J, Tang X (2010) Guided image filtering. European
conference on computer vision. Springer, Berlin, pp 1–14
21. Harris JR, Murray R, Hirose T (1990) IHS transform for the
integration of radar imagery with other remotely sensed data.
Photogramm Eng Remote Sens 56(12):1631–1641
22. Smith LI (2002) A tutorial on principal components analysis.
Statistics 51(1):52
23. Li S, Kang X, Hu J (2013) Image fusion with guided filtering.
IEEE Trans Image Process 22(7):2864–2875
24. Sadjadi F (2005) Comparative image fusion analysis. In: 2005
Computer society conference on computer vision and pattern
recognition-workshops (CVPR’05)-Workshops, p 8–8. IEEE
25. Yang J, Ma Y, Yao W, Lu WT (2008) A spatial domain and
frequency domain integrated approach to fusion multifocus
images. The International archives of the photogrammetry,
remote sensing and spatial information sciences, 37(PART B7).
26. Wu D, Yang A, Zhu L, Zhang C (2014) Survey of multi-sensor
image fusion. International conference on life system modeling
and simulation. Springer, Berlin, pp 358–367
27. Olkkonen H, Pesola P (1996) Gaussian pyramid wavelet trans-
form for multiresolution analysis of images. Graphic Models
Image Process 58(4):394–398
28. Ramac LC, Uner MK, Varshney PK, Alford MG, Ferris DD
(1998) Morphological filters and wavelet-based image fusion
for concealed weapons detection. In Sensor Fusion: Architec-
tures, Algorithms, and Applications II vol 3376, p 110–120.
International Society for Optics and Photonics.
29. Toet A (1989) Image fusion by a ratio of low-pass pyramid.
Pattern Recogn Lett 9(4):245–253
30. Burt PJ (1992) A gradient pyramid basis for pattern-selective
image fusion. Proc SID 1992:467–470
31. Chandrasekhar C, Viswanath A, NarayanaReddy S (2013)
FPGA Implementation of image fusion technique using DWT
for micro air vehicle applications. 4(8): 307–315
32. Krishnamoorthy S, Soman KP (2010) Implementation and
comparative study of image fusion algorithms. Int J Comput
Appl 9(2):25–35
33. Kekre HB, Sarode T, Dhannawat R (2012) Kekre’s wavelet trans-
form for image fusion and comparison with other pixel based
image fusion techniques. Int J Comput Sci Inf Secur 10(3):23–31
34. Klein LA (1993) Society of Photo-Optical Instrumentation Engineers (SPIE), 405 Fieldston Road, Bellingham, WA, United States
35. Borwonwatanadelok P, Rattanapitak W, Udomhunsakul S
(2009) Multi-focus image fusion based on stationary wavelet
transform and extended spatial frequency measurement. In: 2009
International Conference on Electronic Computer Technology, p
77–81. IEEE
36. Udomhunsakul S, Yamsang P, Tumthong S, Borwonwatanadelok
P (2011) Multiresolution edge fusion using SWT and SFM. Proc
World Congr Eng 2:6–8
37. Kannan K, Perumal SA, Arulmozhi K (2010) Performance com-
parison of various levels of fusion of multi-focused images using
wavelet transform. Int J Comput Appl 1(6):71–78
38. Naidu VPS (2012) Discrete cosine transform based image fusion
techniques. J Commun, Navig Signal Process 1(1):35–45
39. Singh R, Khare A (2013) Multiscale medical image
fusion in wavelet domain. Sci World J 1–10. https://doi.org/10.1155/2013/521034
40. Mallat S (1999) A wavelet tour of signal processing. Academic
press, Elsevier
41. Pajares G, De La Cruz JM (2004) A wavelet-based image fusion
tutorial. Pattern Recogn 37(9):1855–1872
42. Burrus CS, Gopinath RA, Guo H, Odegard JE, Selesnick IW
(1998) Introduction to wavelets and wavelet transforms: a primer,
vol 1. Prentice hall, New Jersey
43. Kekre HB, Athawale A, Sadavarti D (2010) Algorithm to gener-
ate Kekre’s Wavelet transform from Kekre’s Transform. Int J Eng
Sci Technol 2(5):756–767
44. Kekre HB, Sarode T, Dhannawat R (2012) Implementa-
tion and comparison of different transform techniques using
Kekre’s wavelet transform for image fusion. Int J Comput Appl
44(10):41–48
45. Dhannawat R, Sarode T (2013) Kekre’s Hybrid wavelet trans-
form technique with DCT WALSH HARTLEY and kekre’s
transform for image fusion. Int J Comput Eng Technol (IJCET)
4(1):195–202
46. Kekre HB, Sarode T, Dhannawat R (2012) Image fusion using
Kekre’s hybrid wavelet transform. In: 2012 International Confer-
ence on Communication, Information & Computing Technology
(ICCICT), p 1–6
47. Sharma M (2016) A review: image fusion techniques and appli-
cations. Int J Comput Sci Inf Technol 7(3):1082–1085
48. Paramanandham N, Rajendiran K (2018) Infrared and visible
image fusion using discrete cosine transform and swarm intel-
ligence for surveillance applications. Infrared Phys Technol
88:13–22
49. Ehlers M, Klonus S, Astrand PJ (2008) Quality assessment for
multi-sensor multi-date image fusion. In: Proceedings of the
XXIth International Congress ISPRS, p 499–506.
50. Choi Y, Latifi S (2012) Contourlet based multi-sensor image
fusion. In: Proceedings of the 2012 International Conference on
Information and Knowledge Engineering IKE, vol 12, p 16–19
51. Ross WD, Waxman AM, Streilein WW, Aguilar M, Verly J, Liu
F, Rak S (2000) Multi-sensor 3D image fusion and interactive
search. In: 2000. FUSION 2000. Proceedings of the Third Inter-
national Conference on Information Fusion, vol 1, p TUC3–10
52. Li M, Cai W, Tan Z (2006) A region-based multi-sensor image
fusion scheme using pulse-coupled neural network. Pattern Rec-
ogn Lett 27(16):1948–1956
53. Nirmala DE, Vignesh RK, Vaidehi V (2013) Fusion of multi-
sensor images using nonsubsampled contourlet transform and
fuzzy logic. In: 2013 IEEE International conference on fuzzy
systems (FUZZ) p 1–8
54. Pohl C, Van Genderen JL (1998) Review article multisensor
image fusion in remote sensing: concepts, methods and appli-
cations. Int J Remote Sens 19(5):823–854
55. Kumar U, Mukhopadhyay C, Ramachandra TV (2009) Fusion
of multisensor data: review and comparative analysis. In: 2009.
GCIS’09. WRI Global Congress on Intelligent Systems. vol 2,
p 418–422, IEEE
56. Subhas AM Multi sensor data fusion for sensor validation.
International Journal of Advanced Computer Technology
(IJACT), Survey Paper ISSN:2319–7900.
57. Makode PN, Khan J (2017) A review on multi-focus digital
image pair fusion using multi-scale image. Wavelet Decom-
position 3(1):575–579
58. Chang NB, Bai K, Imen S, Chen CF, Gao W (2016) Multi-
sensor satellite image fusion and networking for all-weather
environmental monitoring. IEEE Syst J 12(2):1341–1357
59. Hall DL, Llinas J (1997) An introduction to multisensor data
fusion. Proc IEEE 85(1):6–23
60. Kumar NS, Shanthi C (2007) A survey and analysis of pixel
level multisensor medical image fusion using discrete wavelet
transform. IETE Tech Rev 24(2):113–125
61. Panwar SA, Malwadkar S (2015) A review: image fusion tech-
niques for multisensor images. Int J Adv Res Electr, Electr
Instrum Eng 4(1):406–410
62. Lemeshewsky GP (1999) Multispectral multisensor image
fusion using wavelet transforms. In Visual information pro-
cessing VIII, vol. 3716, p 214–223. International Society for
Optics and Photonics
63. Deng C, Cao H, Cao C, Wang S (2007) Multisensor image
fusion using fast discrete curvelet transform. In: MIPPR 2007:
Remote sensing and GIS data processing and applications and
innovative multispectral technology and applications, vol.
6790, p 679004. International Society for Optics and Photonics
64. Li H, Manjunath BS, Mitra SK (1995) Multisensor image
fusion using the wavelet transform. Graphic Models Image
Process 57(3):235–245
65. Zheng Y, Zheng P (2010) Multisensor image fusion using a
pulse coupled neural network. International conference on
artificial intelligence and computational intelligence. Springer,
Berlin, pp 79–87
66. Li Y, Song GH, Yang SC (2011) Multi-sensor image fusion by
NSCT-PCNN transform. Int Conf Comput Sci Automat Eng
(CSAE) 4:638–642
67. Petrazzuoli G, Cagnazzo M, Pesquet-Popescu B (2013) Novel
solutions for side information generation and fusion in multi-
view DVC. EURASIP J Adv Signal Process 2013(1):154
68. Cheung G, Ortega A, Cheung NM (2011) Interactive streaming
of stored multiview video using redundant frame structures.
IEEE Trans Image Process 20(3):744–761
69. Gelman A, Dragotti PL, Velisavljević V (2011) Interactive
multiview image coding. In: 2011 18th IEEE international
conference on image processing (ICIP), p 601–604
70. Maugey T, Miled W, Cagnazzo M, Pesquet-Popescu B (2009)
Fusion schemes for multiview distributed video coding. In:
2009 17th European signal processing conference. p 559–563
71. Artigas X, Angeli E, Torres L (2006) Side information gen-
eration for multiview distributed video coding using a fusion
approach. In: Proceedings of the 7th nordic signal processing
symposium-NORSIG 2006, p 250–253, IEEE
72. Rubio-Guivernau JL, Gurchenkov V, Luengo-Oroz MA, Dulo-
quin L, Bourgine P, Santos A, Ledesma-Carbayo MJ (2011)
Wavelet-based image fusion in multi-view three-dimensional
microscopy. Bioinformatics 28(2):238–245
73. Guillemot C, Pereira F, Torres L, Ebrahimi T, Leonardi R,
Ostermann J (2007) Distributed monoview and multiview
video coding. IEEE Signal Process Mag 24(5):67–76
74. Guo X, Lu Y, Wu F, Gao W, Li S (2006) Distributed multi-
view video coding. Vis Commun Image Process (VCIP)
6077:60770T
75. Wang RS, Wang Y (2000) Multiview video sequence analysis,
compression, and virtual viewpoint synthesis. IEEE Trans Cir-
cuits Syst Video Technol 10(3):397–410
76. Rajpoot K, Noble JA, Grau V, Szmigielski C, Becher H (2009)
Multiview RT3D echocardiography image fusion. International
conference on functional imaging and modeling of the heart.
Springer, Berlin, pp 134–143
77. Ferre P, Agrafiotis D, Bull D (2007) Fusion methods for side
information generation in multi-view distributed video cod-
ing systems. In: 2007 IEEE International conference on image
processing ICIP, vol 6, p VI-409. IEEE
78. Zhang ZG, Bian HY, Song ZQ, Xu H (2014) A multi-view
sonar image fusion method based on nonsubsampled contourlet
transform and morphological modification. Appl Mech Mater
530:567–570
79. Li Y, Liu H, Liu X, Ma S, Zhao D, Gao W (2009) Multi-
hypothesis based multi-view distributed video coding. In:
2009. PCS 2009 Picture coding symposium. p. 1–4
80. Dufaux F (2011) Support vector machine based fusion for
multi-view distributed video coding. In: 2011 17th Interna-
tional conference on digital signal processing (DSP), p 1–7
81. Das R, Thepade S, Ghosh S (2015) Content based image rec-
ognition by information fusion with multiview features. Int J
Inf Technol Comput Sci 7(10):61–73
82. Swoger J, Verveer P, Greger K, Huisken J, Stelzer EH (2007)
Multi-view image fusion improves resolution in three-dimen-
sional microscopy. Opt Express 15(13):8029–8042
83. Seng CH, Bouzerdoum A, Tivive FHC, Amin MG (2010)
Fuzzy logic-based image fusion for multi-view through-the-
wall radar. In: 2010 International conference on digital image
computing: techniques and applications (DICTA), p 423–428
84. Kavi R, Kulathumani V, Rohit F, Kecojevic V (2016) Mul-
tiview fusion for activity recognition using deep neural net-
works. J Electron Imaging 25(4):043010
85. Kisku DR, Mehrotra H, Rattani A, Sing JK, Gupta P (2009)
Multiview Gabor face recognition by fusion of PCA and
canonical covariate through feature weighting. In: Applica-
tions of Digital Image Processing XXXII (vol 7443, p 744308).
International Society for Optics and Photonics.
86. Liu K, Kang G (2017) Multiview convolutional neural net-
works for lung nodule classification. Int J Imaging Syst Tech-
nol 27(1):12–22
87. Li W, Zhu XF (2005) A new algorithm of multi-modality
medical image fusion based on pulse-coupled neural networks.
International conference on natural computation. Springer,
Berlin, pp 995–1001
88. Viergever MA, van den Elsen PA, Stokking R (1992) Inte-
grated presentation of multimodal brain images. Brain Topogr
5(2):135–145
89. Rodrigues D, Virani HA, Kutty S (2014) Multimodal image
fusion techniques for medical images using wavelets. Image
2(3):310–313
90. Yang Y, Que Y, Huang S, Lin P (2016) Multimodal sensor medi-
cal image fusion based on type-2 fuzzy logic in NSCT domain.
IEEE Sens J 16(10):3735–3745
91. Kor S, Tiwary U (2004) Feature level fusion of multimodal
medical images in lifting wavelet transform domain. In: 2004
26th Annual international conference of the IEEE engineering in
medicine and biology society IEMBS’04, vol. 1: p 1479–1482.
IEEE
92. Zhao Y, Zhao Q, Hao A (2014) Multimodal medical image
fusion using improved multi-channel PCNN. Bio-Med Mater
Eng 24(1):221–228
93. Singh R, Vatsa M, Noore A (2009) Multimodal medical image
fusion using redundant discrete wavelet transform. In: 2009
Seventh international conference on advances in pattern rec-
ognition ICAPR’09, p 232–235. IEEE
94. Qu G, Zhang D, Yan P (2001) Medical image fusion by wavelet
transform modulus maxima. Opt Expr 9(4):184–190
95. Wang A, Sun H, Guan Y (2006) The application of wavelet
transform to multi-modality medical image fusion. In: 2006.
Proceedings of the IEEE International conference on network-
ing, sensing and control ICNSC’06, p 270–274, IEEE
96. Sharmila K, Rajkumar S, Vijayarajan V (2013) Hybrid method
for multimodality medical image fusion using discrete wavelet
transform and entropy concepts with quantitative analysis. In:
2013 International conference on communications and signal
processing (ICCSP), p 489–493
97. Singh R, Khare A (2014) Fusion of multimodal medical images
using Daubechies complex wavelet transform–a multiresolu-
tion approach. Inf Fusion 19:49–60
98. Bhavana V, Krishnappa HK (2015) Multi-modality medical
image fusion using discrete wavelet transform. Procedia Com-
put Sci 70:625–631
99. Anitha S, Subhashini T, Kamaraju M (2015) A novel multi-
modal medical image fusion approach based on phase congru-
ency and directive contrast in NSCT domain. Int J Comput
Appl 129(10):30–35
100. Pure AA, Gupta N, Shrivastava M (2013) An overview of dif-
ferent image fusion methods for medical applications. Int J Sci
Eng Res 4(7):129
101. Gomathi PS, Kalaavathi B (2016) Multimodal medical image
fusion in non-subsampled contourlet transform domain. Cir-
cuits Syst 7(8):1598–1610
102. Patil MPP, Deshpande KB (2015) New technique for image
fusion using DDWT and PSO in medical field. Int J Rec Innov
Trends Comput Commun 3(4):2251–2254
103. Guruprasad S, Kurian MZ, Suma HN (2013) A medical multi-
modality image fusion of CT/PET with PCA, DWT methods.
J Dental Mater Tech 4(2):677–681
104. Parmar K, Kher RK, Thakkar FN (2012) Analysis of CT and
MRI image fusion using wavelet transform. In: 2012 Inter-
national conference on communication systems and network
technologies (CSNT), p 124–127
105. Bhatnagar G, Wu QJ, Liu Z (2013) Directive contrast based
multimodal medical image fusion in NSCT domain. IEEE
Trans Multimed 15(5):1014–1024
106. Al-Bakrei AFP (2012) Brain image fusion of MRI-CT multi-
modality systems using DWT and hybrid enhancement fusion
algorithms. J Babylon Univ/Eng Sci 20(1):258–269
107. Swathi PS, Sheethal MS, Paul V (2016) Survey on multimodal
medical image fusion techniques. Int J Sci, Eng Comput Tech-
nol 6(1):33
108. Anish A, Jebaseeli TJ (2012) A survey on multi-focus image fu-
sion methods. Int J Adv Res Comput Eng Technol (IJARCET)
1(8):319–324
109. Li H, Chai Y, Yin H, Liu G (2012) Multifocus image fusion
and denoising scheme based on homogeneity similarity. Optics
Commun 285(2):91–100
110. Wang Z, Ma Y, Gu J (2010) Multi-focus image fusion using
PCNN. Pattern Recogn 43(6):2003–2016
111. Li S, Kwok JT, Wang Y (2002) Multifocus image fusion using
artificial neural networks. Pattern Recogn Lett 23(8):985–997
112. Huang W, Jing Z (2007) Multi-focus image fusion using pulse
coupled neural network. Pattern Recogn Lett 28(9):1123–1132
113. Garg R, Gupta P, Kaur H (2014) Survey on multi-focus image
fusion algorithms. In: 2014 Recent Advances in Engineering and
Computational Sciences (RAECS). p 1–5
114. Kaur G, Kaur P (2016) Survey on multifocus image fusion tech-
niques. In: International conference on electrical, electronics, and
optimization techniques (ICEEOT). p 1420–1424
115. Kaur P, Sharma ER (2015) A study of various multi-focus
image fusion techniques. Int J Comput Sci Info Techonol
6(5):1139–1146
116. Liu L, Bian H, Shao G (2013) An effective wavelet-based scheme
for multi-focus image fusion. In 2013 IEEE International confer-
ence on mechatronics and automation (ICMA), p 1720–1725
117. Malhotra G, Chopra DV (2014) Improved multi-focus image
fusion using AC-DCT, edge preserving smoothing & DRSHE. In:
Proceedings of international conference on computer science,
cloud computing and applications, p 24–25
118. Sulaiman M (2016) A survey on various multifocus image fusion
techniques. Int J Sci Technol Eng IJSTE 3(5):107–111
119. Li Q, Du J, Song F, Wang C, Liu H, Lu C (2013) Region-based
multi-focus image fusion using the local spatial frequency. In:
2013 25th Chinese control and decision conference (CCDC), p
3792–3796
120. Haghighat MBA, Aghagolzadeh A, Seyedarabi H (2011) Multi-
focus image fusion for visual sensor networks in DCT domain.
Comput Electr Eng 37(5):789–797
121. Tian J, Chen L (2012) Adaptive multi-focus image fusion using
a wavelet-based statistical sharpness measure. Signal Process
92(9):2137–2146
122. Yang Y (2011) A novel DWT based multi-focus image fusion
method. Procedia Eng 24:177–181
123. Malik AS (ed) (2011) Depth map and 3D imaging applications:
algorithms and technologies: algorithms and technologies. Her-
shey, IGI Global
124. Chai Y, Li H, Li Z (2011) Multifocus image fusion scheme using
focused region detection and multiresolution. Optics Commun
284(19):4376–4389
125. Qu X, Yan J (2007) Multi-focus image fusion algorithm based on
regional firing characteristic of pulse coupled neural networks.
In: 2007 BIC-TA 2007 Second international conference on bio-
inspired computing: theories and applications, p 62–66
126. Maruthi R, Sankarasubramanian K (2007) Multi focus image
fusion based on the information level in the regions of the images. J Theor
Appl Inf Technol 3(4):80–85
127. Anitha AJ, Vijayasangeetha S (2016) Building change detec-
tion on multi-temporal VHR SAR image based on second level
decomposition and fuzzy rule. Int J 4(7)
128. Pawar TA (2014) Change detection approach for images
using image fusion and C-means clustering algorithm. Int J
2(10):303–307
129. Parthiban L (2014) Fusion of MRI and CT images with double
density dual tree discrete wavelet transform. Int J Comput Sci
Eng Technol 5(2):168–172
130. Momeni S, Pourghassem H (2014) An automatic fuzzy-based
multi-temporal brain digital subtraction angiography image
fusion algorithm using Curvelet transform and content selection
strategy. J Med Syst 38(8):70
131. Pan H, Jing Z, Liu R, Jin B (2012) Simultaneous spatial-temporal
image fusion using Kalman filtered compressed sensing. Opt Eng
51(5):057005
132. Dellepiane SG, Angiati E (2012) A new method for cross-nor-
malization and multitemporal visualization of SAR images for
the detection of flooded areas. IEEE Trans Geosci Remote Sens
50(7):2765–2779
133. Jan S (2012) Multi temporal image fusion of earthquake satellite
images. Int J Adv Res Comput Sci 3(5)
134. Ferretti R, Dellepiane S (2015) Color spaces in data fusion of
multi-temporal images. International conference on image analy-
sis and processing. Springer, Cham, pp 612–622
135. Du P, Liu S, Xia J, Zhao Y (2013) Information fusion techniques for
change detection from multi-temporal remote sensing images.
Inf Fusion 14(1):19–27
136. Wang B, Choi J, Choi S, Lee S, Wu P, Gao Y (2017) Image
fusion-based land cover change detection using multi-temporal
high-resolution satellite images. Remote Sens 9(8):804
137. Mittal M (2015) Hybrid image fusion using curvelet and wavelet
transform using PCA and SVM. Int J Sci Emerg Technol Latest
Trends 22(1):28–35
138. Wisetphanichkij S, Dejhan K, Cheevasuvit F, Mitatha S, Netbut
C (1999) Multi-temporal cloud removing based on image fusion
with additive wavelet decomposition. Faculty of Engineering and
Research Center for Communication and Information Technology.
139. Visalakshi S (2017) Multitemporal image fusion based on sta-
tionary wavelet transform and change detection using LDP analy-
sis. International Journal of Engineering Science and Comput-
ing, p 14082
140. Bovolo F (2009) A multilevel parcel-based approach to change
detection in very high resolution multitemporal images. IEEE
Geosci Remote Sens Lett 6(1):33–37
141. Liu S, Bruzzone L, Bovolo F, Du P (2015) Hierarchical unsuper-
vised change detection in multitemporal hyperspectral images.
IEEE Trans Geosci Remote Sens 53(1):244–260
142. Celik T, Ma KK (2011) Multitemporal image change detection
using undecimated discrete wavelet transform and active con-
tours. IEEE Trans Geosci Remote Sens 49(2):706–716
143. Yang X, Chen L (2010) Using multi-temporal remote sensor
imagery to detect earthquake-triggered landslides. Int J Appl
Earth Obs Geoinf 12(6):487–495
144. Bruzzone L, Serpico SB (1997) An iterative technique for the
detection of land-cover transitions in multitemporal remote-
sensing images. IEEE Trans Geosci Remote Sens 35(4):858–867
145. Demir B, Bovolo F, Bruzzone L (2012) Detection of land-cover
transitions in multitemporal remote sensing images with active-
learning-based compound classification. IEEE Trans Geosci
Remote Sens 50(5):1930–1941
146. Zhong J, Wang R (2006) Multi-temporal remote sensing change
detection based on independent component analysis. Int J Remote
Sens 27(10):2055–2061
147. Patil V, Sale D, Joshi MA (2013) Image fusion methods and qual-
ity assessment parameters. Asian J Eng Appl Technol 2(1):40–46
148. Kosesoy I, Cetin M, Tepecik A (2015) A toolbox for
teaching image fusion in matlab. Procedia-Soc Behav Sci
197:525–530
149. Paramanandham N, Rajendiran K (2018) Multi sensor image
fusion for surveillance applications using hybrid image fusion
algorithm. Multimedia Tools Appl 77(10):12405–12436
150. Jin X, Jiang Q, Yao S, Zhou D, Nie R, Hai J, He K (2017)
A survey of infrared and visual image fusion methods. Infrared
Phys Technol 85:478–501
151. Dogra A, Goyal B, Agrawal S (2017) From multi-scale decom-
position to non-multi-scale decomposition methods: a compre-
hensive survey of image fusion techniques and its applications.
IEEE Access 5:16040–16067
152. Saleem A, Beghdadi A, Boashash B (2012) Image fusion-
based contrast enhancement. EURASIP J Image Video Process
2012(1):10
153. Liu Y, Chen X, Peng H, Wang Z (2017) Multi-focus image fusion
with a deep convolutional neural network. Inf Fusion 36:191–207
154. Du C, Gao S (2017) Image segmentation-based multi-focus
image fusion through multiscale convolutional neural network.
IEEE Access 5:15750–15761
155. Liu Y, Chen X, Cheng J Peng H (2017) A medical image fusion
method based on convolutional neural networks. In: 2017 Pro-
ceedings of 20th International conference on information fusion,
p 1–7, IEEE
156. Masi G, Cozzolino D, Verdoliva L, Scarpa G (2016) Pansharpen-
ing by convolutional neural networks. Remote Sens 8(594):1–22
157. Liu Y, Chen X, Ward R, Wang Z (2016) Image fusion with
convolutional sparse representation. IEEE Signal Process Lett
23(12):1882–1886
158. Huang W, Xiao L, Wei Z, Liu H, Tang S (2015) A new pan-
sharpening method with deep neural networks. IEEE Geosci
Remote Sens Lett 12(5):1037–1041
159. Corbat L, Nauval M, Henriet J, Lapayre JC (2020) A fusion
method based on deep learning and case-based reasoning which
improves the resulting medical image segmentations. Expert Syst
Appl 147:113200
160. Wu X, Hui H, Niu M, Li L, Wang L, He B, Yang X, Li L, Li H,
Tian J, Zha Y (2020) Deep learning-based multi-view fusion
model for screening 2019 novel coronavirus pneumonia: a
multicentre study. Eur J Radiol. https://doi.org/10.1016/j.ejrad.2020.109041
161. Kaur H, Koundal D, Kadyan V (2019) Multi modal image fusion:
comparative analysis. In: 2019 International conference on com-
munication and signal processing (ICCSP), p 0758–0761. IEEE
Publisher’s Note Springer Nature remains neutral with regard to
jurisdictional claims in published maps and institutional affiliations.

More Related Content

PDF
IRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree
PDF
P045058186
PDF
IRJET - Review of Various Multi-Focus Image Fusion Methods
PDF
Medical Image Fusion Using Discrete Wavelet Transform
PDF
IRJET - Symmetric Image Registration based on Intensity and Spatial Informati...
PDF
IRJET- Copy-Move Forgery Detection using Discrete Wavelet Transform (DWT) Method
PDF
Property based fusion for multifocus images
PDF
06 17443 an neuro fuzzy...
IRJET- An Improvised Multi Focus Image Fusion Algorithm through Quadtree
P045058186
IRJET - Review of Various Multi-Focus Image Fusion Methods
Medical Image Fusion Using Discrete Wavelet Transform
IRJET - Symmetric Image Registration based on Intensity and Spatial Informati...
IRJET- Copy-Move Forgery Detection using Discrete Wavelet Transform (DWT) Method
Property based fusion for multifocus images
06 17443 an neuro fuzzy...

Similar to Image Fusion a survey Image Fusion a survey (20)

PDF
Fusion for medical image based on discrete wavelet transform coefficient
PDF
Efficient resampling features and convolution neural network model for image ...
PDF
Efficient resampling features and convolution neural network model for image ...
PDF
Improved Weighted Least Square Filter Based Pan Sharpening using Fuzzy Logic
PDF
IRJET- Saliency based Image Co-Segmentation
PDF
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
PDF
FACE PHOTO-SKETCH RECOGNITION USING DEEP LEARNING TECHNIQUES - A REVIEW
PDF
RADAR Image Fusion Using Wavelet Transform
PDF
40120140501006
PDF
Image super resolution using Generative Adversarial Network.
PDF
IRJET- Image Enhancement using Various Discrete Wavelet Transformation Fi...
PDF
Dd25624627
PDF
An Enhanced Method to Detect Copy Move Forgey in Digital Image Processing usi...
PDF
F44083035
PDF
International Journal of Engineering Research and Development
PDF
E1083237
PDF
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
PDF
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
PDF
Strong Image Alignment for Meddling Recognision Purpose
PDF
A novel approach to Image Fusion using combination of Wavelet Transform and C...
Fusion for medical image based on discrete wavelet transform coefficient
Efficient resampling features and convolution neural network model for image ...
Efficient resampling features and convolution neural network model for image ...
Improved Weighted Least Square Filter Based Pan Sharpening using Fuzzy Logic
IRJET- Saliency based Image Co-Segmentation
INFORMATION SATURATION IN MULTISPECTRAL PIXEL LEVEL IMAGE FUSION
FACE PHOTO-SKETCH RECOGNITION USING DEEP LEARNING TECHNIQUES - A REVIEW
RADAR Image Fusion Using Wavelet Transform
40120140501006
Image super resolution using Generative Adversarial Network.
IRJET- Image Enhancement using Various Discrete Wavelet Transformation Fi...
Dd25624627
An Enhanced Method to Detect Copy Move Forgey in Digital Image Processing usi...
F44083035
International Journal of Engineering Research and Development
E1083237
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
ADOPTING AND IMPLEMENTATION OF SELF ORGANIZING FEATURE MAP FOR IMAGE FUSION
Strong Image Alignment for Meddling Recognision Purpose
A novel approach to Image Fusion using combination of Wavelet Transform and C...
Ad

Recently uploaded (20)

PPTX
Cell Types and Its function , kingdom of life
PPTX
A powerpoint presentation on the Revised K-10 Science Shaping Paper
PDF
Trump Administration's workforce development strategy
PDF
Practical Manual AGRO-233 Principles and Practices of Natural Farming
PDF
SOIL: Factor, Horizon, Process, Classification, Degradation, Conservation
PPTX
Onco Emergencies - Spinal cord compression Superior vena cava syndrome Febr...
PDF
A systematic review of self-coping strategies used by university students to ...
PDF
What if we spent less time fighting change, and more time building what’s rig...
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
Computing-Curriculum for Schools in Ghana
PPTX
CHAPTER IV. MAN AND BIOSPHERE AND ITS TOTALITY.pptx
PPTX
Introduction to Building Materials
PPTX
Final Presentation General Medicine 03-08-2024.pptx
PDF
medical_surgical_nursing_10th_edition_ignatavicius_TEST_BANK_pdf.pdf
PDF
1_English_Language_Set_2.pdf probationary
PDF
LNK 2025 (2).pdf MWEHEHEHEHEHEHEHEHEHEHE
PDF
RTP_AR_KS1_Tutor's Guide_English [FOR REPRODUCTION].pdf
PDF
Paper A Mock Exam 9_ Attempt review.pdf.
PDF
advance database management system book.pdf
PPTX
Digestion and Absorption of Carbohydrates, Proteina and Fats
Cell Types and Its function , kingdom of life
A powerpoint presentation on the Revised K-10 Science Shaping Paper
Trump Administration's workforce development strategy
Practical Manual AGRO-233 Principles and Practices of Natural Farming
SOIL: Factor, Horizon, Process, Classification, Degradation, Conservation
Onco Emergencies - Spinal cord compression Superior vena cava syndrome Febr...
A systematic review of self-coping strategies used by university students to ...
What if we spent less time fighting change, and more time building what’s rig...
Supply Chain Operations Speaking Notes -ICLT Program
Computing-Curriculum for Schools in Ghana
CHAPTER IV. MAN AND BIOSPHERE AND ITS TOTALITY.pptx
Introduction to Building Materials
Final Presentation General Medicine 03-08-2024.pptx
medical_surgical_nursing_10th_edition_ignatavicius_TEST_BANK_pdf.pdf
1_English_Language_Set_2.pdf probationary
LNK 2025 (2).pdf MWEHEHEHEHEHEHEHEHEHEHE
RTP_AR_KS1_Tutor's Guide_English [FOR REPRODUCTION].pdf
Paper A Mock Exam 9_ Attempt review.pdf.
advance database management system book.pdf
Digestion and Absorption of Carbohydrates, Proteina and Fats
Ad

Image Fusion a survey Image Fusion a survey

  • 1. Vol.:(0123456789) 1 3 Archives of Computational Methods in Engineering (2021) 28:4425–4447 https://doi.org/10.1007/s11831-021-09540-7 ORIGINAL PAPER Image Fusion Techniques: A Survey Harpreet Kaur1 · Deepika Koundal2 · Virender Kadyan3 Received: 17 May 2020 / Accepted: 10 January 2021 / Published online: 24 January 2021 © CIMNE, Barcelona, Spain 2021 Abstract The necessity of image fusion is growing in recently in image processing applications due to the tremendous amount of acqui- sition systems. Fusion of images is defined as an alignment of noteworthy Information from diverse sensors using various mathematical models to generate a single compound image. The fusion of images is used for integrating the complementary multi-temporal, multi-view and multi-sensor Information into a single image with improved image quality and by keeping the integrity of important features. It is considered as a vital pre-processing phase for several applications such as robot vision, aerial, satellite imaging, medical imaging, and a robot or vehicle guidance. In this paper, various state-of-art image fusion methods of diverse levels with their pros and cons, various spatial and transform based method with quality metrics and their applications in different domains have been discussed. Finally, this review has concluded various future directions for different applications of image fusion. 1 Introduction Image fusion (IF) is an emerging field for generating an Infrmative image with the integration of images obtained by different sensors for decision making [1]. The analytical and visual image quality can be improved by integrating differ- ent images. Effective image fusion is capable of preserving vital Information by extracting all important Information from the images without producing any inconsistencies in the output image. After fusion, the fused image is more suit- able for the machine and human perception. The first step of fusion is Image Registration (IR) in which source image is mapped with respect to the reference image. This type of mapping is performed to match the equivalent image on the basis of confident features for further analysis. IF and IR are perceived as vital assistants to produce valuable Informa- tion in several domains [2]. According to the literature, the number of scientific papers has been increased dramatically since 2011 and reached to the peak 21,672 in 2019 which can be illustrated in Fig. 1. This fast-rising trend can be rec- ognized due to the increased demand for high-performance image fusion techniques with low cost. Recently, various techniques like multi-scale decomposition and sparse rep- resentation have been introduced that bring several ways for improving the image fusion performance. There is a need for efficient fusion method due to variations between cor- responding images in various applications. For instance, numerous satellites are increasing nowadays to acquire aerial images with diverse spectral, spatial and temporal resolu- tions in the domain of remote sensing. The IF is basically a collection of image Information achieved by several imag- ing parameters such as aperture settings or dynamic range, spectral response or position of the camera or the use of polarization filters. The Information of interest is extracted from different images with the help of appropriate image fusion methods which can further be used for traffic control, reconnaissance, driver assistance or quality assessment. 
Various techniques of image fusion can be classified as pixel level, decision level and feature level. Pixel level tech- niques for image fusion directly integrate the Information from input images for further computer processing tasks * Deepika Koundal koundal@gmail.com Harpreet Kaur harpreet.kaur.045@gmail.com Virender Kadyan vkadyan@ddn.upes.ac.in 1 Department of Computer Science, Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India 2 Department of Virtualization, School of Computer Science, University of Petroleum and Energy Studies, Bidholi, Dehradun, India 3 Department of Informatics, School of Computer Science, University of Petroleum and Energy Studies, Bidholi, Dehradun, India
  • 2. 4426 H. Kaur et al. 1 3 [3]. Feature level techniques for image fusion entails the extractions of relevant features that is pixel intensities, tex- tures or edges that are compounded to create supplementary merged features [4, 5]. In decision level fusion techniques for images, the input images are processed one at a time for the extraction of Information [4]. There are a variety of IF classifications based on the datasets such as multi-focus, multi-spectral, multi-scale, multi-temporal, and multi-sen- sor. In multi-focus techniques for image fusion, Informa- tion from several images of a similar scene is fused to get one composite image [6]. In addition, the multi-source and multi-sensor IF methods recommend superior features for representing Information which is not visible to a human visible system and are utilized for medical diagnosis applica- tions. The Information generated from merged images can be employed for localization of abnormalities accurately [7]. The temporal modeling will give details of all clinical vari- ables and reduce the risk of Information failure [8]. The fast- rising trend can be a major factor of image fusion techniques having low cost and high performance. Currently, several techniques like sparse representation (SR) and multi-scale decomposition have been anticipated that help in enhancing the image fusion performance [3]. SR is a theory of image representation, which is employed to an image processing tasks such as interpolation, denoising, and recog- nition [1]. The multi-spectral (MS) image is used for remote sensing that merges their features to get an understandable image using the corresponding Information and spatiotemporal correlation [9]. IF has grown to be an influential solution by merging the images captured through diverse sensors. Images of diverse types such as infrared, visible, MRI and CT are superior input images for multimodal fusion [1]. Currently, Deep learning is a very active topic in image fusion. It has gained great success in this area for solving different type of problems such as image processing and computer vision. It is widely used for image fusion [10]. Due to recent techno- logical advancements, various imaging fusion techniques have been utilized in many applications including video surveil- lance, security, remote sensing, machine vision, and medical imaging. Still there are number of challenges associated with image fusion which have to be explored. Moreover, an appropriate, accurate and reliable fusion technique is required for the vari- ous types of images for different domains that should be easily interpretable to obtain better results. Besides, image fusion techniques must be robust against uncontrollable acquisition conditions or inexpensive computation time in real-time sys- tems as mis-registration is a major error found while fusing images. This paper presents an outline of various IF techniques with their applications in diverse domains. Also, various chal- lenges, shortcomings, and benefits of image fusion techniques have been discussed. Rest of the paper is organized in different sections. Sec- tion 2 discusses the image fusion process. Section 3 gives the various image fusion techniques. Section 4 presents the taxo- nomical view of the image fusion methods. Section 5 explains various image fusion applications. In Sect. 6, the evaluation metrics of fusion are discussed. Section 7 delivered the percep- tive deliberations and prospects for future work. 
Finally, the papers with an outline of major ideas are concluded. Fig. 1  According to the lit- erature, the number of articles related to image fusion
  • 3. 4427 Image Fusion Techniques: A Survey 1 3 2  Image Fusion Process As discussed earlier, the goal of IF is to produce a merged image with the integration of Information from more than one image. Figure 2 demonstrates the major steps involved in IF process. In wide-ranging, the registration is meas- ured as an optimization issue which is used to exploit the similarity as well as to reduce the cost. The Image registra- tion procedure is used to align the subsequent features of various images with respect to a reference image. In this procedure, multiple source images are used for registration in which the original image is recognized as a reference image and the original images are aligned through refer- ence image. In feature extraction, the significant features of registered images are extracted to produce several fea- ture maps. By employing a decision operator whose main objective is to label the registered images with respect to pixel or feature maps, a set of decision maps are produced. Seman- tic equivalence obtained the decision or feature maps that might not pass on to a similar object. It is employed to connect these maps to a common object to perform fusion. This process is redundant for the source obtained from a similar kind of sensors. Then, radiometric calibration is employed on spatially aligned images. Afterward, the transformation of feature maps is performed on an ordi- nary scale to get end result in a similar representation for- mat. Finally, IF merge the consequential images into a one resultant image containing an enhanced explanation of the image. The main goal of fusion is getting more Infrmative fused image [2]. 3 Image Fusion Techniques IF techniques can be classify as spatial and frequency domain. The spatial technique deals with pixel values of the input images in which the pixels values are manipu- lated to attain a suitable outcome. The entire synthesis operations are evaluated using Fourier Transform (FT) of the image and then IFT is evaluated to obtain a resulting image. Other IF techniques are PCA, IHS and high pass filtering and brovey method [12]. Discrete transform fusion techniques are extensively used in image fusion as compared to pyramid based fusion technique. Different types of IF techniques are shown in Fig. 3 [13]. 3.1 Spatial Based Techniques The Spatial based technique is a simple image fusion method consist of Max–Min, Minimum, Maximum, Sim- ple Average and Simple Block Replace techniques [14] [15]. Table 1shows the diverse spatial domain based meth- ods with their pros and cons. 3.1.1 Simple Average It is a fusion technique used to combined images by aver- aging the pixels. This technique focused on all regions of the image and if the images are taken from the same type of sensor then it works well [16]. If the images have high brightness and high contrast then it will produce good results. 3.1.2 Minimum Technique It selects the lowest intensity value of the pixels from images andproduceda fused image [14]. It is used for darker images [17]. 3.1.3 Maximum Technique It selects the pixels values of high intensity from images to yield fused image [12]. 3.1.4 Max–Min Technique It selects the averaging values of the pixels smallest and largest from the entire source images and produced the resultant merged image. Fig. 2  The main steps of IF procedure
  • 4. 4428 H. Kaur et al. 1 3 3.1.5 Simple Block Replace Technique It adds all images of pixel values and takes the block aver- age for it. It is based on pixel neighboring block images. 3.1.6 Weighted Averaging Technique It assigned the weights to every pixel in the source images. The resultant image is produced by the weighted sum Fig. 3  Image Fusion Techniques Table 1  shows the diverse spatial domain based methods with their pros and cons as per the literature review Fusion Techniques Advantages Disadvantages Averaging [14, 21] Minimum pixel value [14, 21] Simple block replacement [15] Maximum pixel value [14, 21] Max- min [15] Simple, easy to recognize and implement Decreases the image quality and reduces noise into final fused resultant image Produced blurred images. Not appropriate for real time applications Weighted averaging [18] Improves the detection reliability Enhances the SNR Principal component based analysis [22, 23] Simple and more efficient, high spatial quality, lesser computational time Resulted in color distortion and spectral degradation Hue intensity saturation[21] Efficient and simple. high sharpening ability and Fast processing Color distortion Brovey [14] Extremely straightforward and more efficient method. Faster processing time. Gives Red–Green–Blue images with superior degree of contrast Color distortion Guided filtering[24] It performs well in image smoothing or enhancement, flash or no-flash imaging, matting or feathering and joint upsampling On the sparse inputs it is not directly applicable. It has a common drawback; it may have halos near some edges like other explicit filters
  • 5. 4429 Image Fusion Techniques: A Survey 1 3 of every pixel value in source images [18]. This method improves the detection reliability of the output image. 3.1.7 Hue Intensity Saturation (HIS) It is a basic fusion color technique that converted the Red–Green–Blue image into HIS components and then intensity levels are divided with panchromatic (PAN) image. Spatial contains intensity Information and spectral contains both hue and saturation Information of the image. It performs in the bands and has three multispectral bands Red–Green–Blue (RGB) of low resolution. In the end, the inverse transformation is performed to convert the HIS space to the original RGB space for yielding fused image [12]. It is a very straightforward technique to combine the images features and provides a high spatial quality image. In remote sensing images it gives the best result and major drawback is that it involved only three bands [19]. 3.1.8 Brovey Transform Method Gillespie et al. suggested Brovey Transform in 1987. It is a straightforward technique for merging data from more than one sensor. It overcomes the three band problems. It standardized the three multispectral bands used for RGB to append the intensity and brightness into the image [13]. It includes an RGB color transform technique that is known as color normalization transform to avoid disadvantages of the multiplicative technique [12]. It is helpful for visual Inter- pretation but generates spectral distortion [19]. 3.1.9 Principal Component Analysis (PCA) It is a statistical method on the basis of orthogonal trans- formation for converting a set of observations of a possibly correlated variable into principal components that are set of linearly uncorrelated variables. The main drawback of PCA is spectral degradation and color distortion [9]. 3.1.10 Guided filtering It works as a boundary smoothing and preserving opera- tor similar to the admired bilateral filter. It has enhanced performance near the boundaries. It has a hypothetical link with Laplacian matrix. It is a fast and non-estimated linear time algorithm, whose density is not dependent on the mask size. This filter is more efficient and effective in graphics and computer vision applications with joint upsampling, haze removal, detail smoothing and noise reduction [20]. IF is also used in medical domain to identity the various diseases. In which article, author perform experiment on brain images and prove that Guided filter provides better results as com- pared to Principal component analysis and multi-resolution singular value decomposition technique [161]. 3.2 Frequency Domain These techniques decomposed the multiscale coefficients from the input images [25]. Spatial distortion can be handled by the frequency method. Table 2 lists the various frequency domain based methods with their pros and cons. Table 2  shows the diverse frequency domain based methods with their pros and cons as discussed by several authors Fusion Techniques Advantages Disadvantages Laplacian/Gausian pyramid [27, 28] Low pass pyramid ratio [29] Morphological pyramid [28] Gradient pyramid [30] Filter subtract decimate [30] Provides better image quality of a representa- tion for multi focus images Provide almost same result. Number of break- down levels affects the IF results Discrete cosine transform (DCT) [38] Decomposed images into series of waveform and used for real time applications Low quality fused image Discrete wavelet technique with Haar fusion [39] Produced better quality of fused image with good SNR. 
Reduced the spectral distortion Merged image has fewer spatial resolutions Kekre’s wavelet transform fusion [43, 44] Used for any size of images and its final fused result is more Infrmative Computationally complex Kekre’s hybrid wavelet based transform fusion [45, 46] It gives better results It cannot be used images integer power of two Stationary wavelet transform (SWT) [35–37] Give better result at level 2 of decomposition Time consuming process Stationary wavelet transform (SWT) and Cur- velet Transform Suitable for real time applications Very time consuming process
  • 6. 4430 H. Kaur et al. 1 3 3.2.1 Laplacian Pyramid Fusion Technique It uses the interpolation sequence and Gaussian pyramid for multi-resolution analysis for image fusion. Saleem et al. have reported an improved IF technique using a contrast pyramid transform on multi-source images [151]. But it is suffered by the drawback of extraction ability which can be overcome by multi-scale decomposition. Further, Li et al. improved the gradient pyramid multi-source IF method which attains high band coefficient with the help of gradient direction opera- tor [9]. 3.2.2 Discrete Transform Fusion Method [14] Discrete transform based fusion take composite images. Firstly, if the images are colored then RGB (Red–Green–Blue) components of the multiple images are separated subsequently, discrete transformation on images is applied and then the average of multiple images is computed an inverse transformation is applied at the end to obtain a fused image. DWT (Discrete wavelets transform) is a better IF method as compared to other fusion methods like Lapla- cian pyramid method, Curvelet transforms method etc [26]. 3.2.3 Discrete Cosine Transform (DCT) In image fusion, DCT has various types like DCTma (DCT magnitude), DCTcm (DCT contrast measure), DCTch (DCT contrast highest), DCTe (DCT energy) and DCTav (DCT average) [29]. This technique does not give a better result with the size of the block less than 8×8. In the DCT domain, DCTav is straightforward and basic method of image fusion. DCTe and DCTma methods performed well in image fusion. This technique is straightforward and used in factual time applications. 3.2.4 Discrete Wavelet Transform (DWT) Method DWT method decomposes the two or more images into vari- ous high and low-frequency bands [31]. This method mini- mized the spectral distortion in the resultant fused images by producing the good signal to noise ratio with fewer spatial resolution as compared to the pixel-based method. Wave- let fusion performed superior to the spatial domain fusion method with respect to minimizing the color distortions [15]. 3.2.5 Kekre’s Wavelet Transform (KWT) Method Kekre’s Wavelet Transform method is obtained from Kekre’s transforms [32]. It can generate KWT matrices of ((2 N)*(2 N)), ((3 N)*(3 N)),…., ((N2)*(N2)) from Kekre’s transform method matrix [33]. It can be used for more than one images and the fused image is far good than other methods. 3.2.6 Kekre’s Hybrid Wavelet Transform (KHWT) Method KHWT method has been derived from hybrid wavelet transforms. Many authors suggested that kekre-hadamard wavelet method gives more brightness. Hybrid kekre-DCT wavelet method gives good results. In this method, the best two matrices are combined into a hybrid wavelet transforms method. It cannot be used images integer power of two [45]. 3.2.7 Stationary Wavelet Transform (SWT) Method DWT method has a disadvantage of translation invariance and Stationary Wavelet Transform overcome this problem [34]. This technique provides a better output at decompo- sition level 2 and time inefficient process [35] [36] [37]. SWT derived from DWT method. It is a new type of wavelet transform method with translation invariance. It provides enhanced analysis of image facts. The next second invention curvelet transform method is additionally suitable for 2-D image edges. 3.2.8 Curvelet Transform Method SWT has a better characteristic in time–frequency. It can achieve well result for devising in smooth. 
The second gen- eration Curvelet is a new multi-scale transform; it breaks the disadvantages of wavelet method in representing directions of boundaries in the image [11, 40–42]. 3.3 Deep Learning Another technique which is most widely used for image fusion is Deep Learning in various domains. Several deep learning based image fusion methods have been presented for multi-focus image fusion, multi-exposure image fusion, multi-modal image fusion, multi-spectral (MS) image fusion, and hyper-spectral (HS) image fusion, show- ing various advantages. Various recent advances related to deep learning based image are discussed in [10]. In another way, deep learning and case-based reasoning tech- niques are used with image fusion to enhance the out- come of segmentation. In this article, author used artificial intelligence to improve the results of segmentation and its implementation done on kidney and tumour images. This process complete in three layers: Fusion layer, Seg- mentation layer, Data layer [159]. The multi-view deep learning model is also used in Covid-19 for validation and testing sets of chest CT images. It is more helpful to iden- tify the diagnosis problem. Data is collected from various
  • 7. 4431 Image Fusion Techniques: A Survey 1 3 hospitals of china [160]. The reasons behind the popularity of deep learning based methods for image fusion are pre- sented as that deep learning model are able to extract the most elective features automatically from data without any human intervention. These models are also able to char- acterize various complex relationships between targeting data and input. Deep learning models are gaining popular- ity in providing potential image representation approaches which could be useful to the study of image fusion. Com- monly used deep learning models in image fusion are Con- volutional Neural Network (CNN), Convolutional Sparse Representation (CSR) and Stacked Autoencoder (SAE). Table 3.listed the various advantages and disadvantages of deep learning based image fusion methods. 4  Image Fusion categorization It is the process which integrates the source image and reference image into one image. Diverse techniques were anticipated by different authors to achieve the required fusion objective. A Single sensor, multi-sensor, multi- modal, multi-view, multi-focus, and multi-temporal illus- trates major classes of such methods which are discussed below. 4.1 Single Sensor A number of images are merged to produce a fused image with the best possible Information. For instance, human operators are not able to perceive desired objects in light- ing variant and noisy environment which can be high- lighted in the end fused image. The inadequacy of this type of system is due to imaging sensors that are used in many sensing areas. The resolution of images is limited by the sensors and the conditions in which the system is operated in its dynamic range. Such as visible band sensor (digital camera) is appropriate for illuminated day-light scenes but is not appropriate for the badly illuminated nighttime environment or under fog or rain that is unfa- vorable conditions [47]. 4.2 Multi Sensors Multi-sensor IF overpowers the confines of a one sensor IF by integration of images from a number of sensors to form a compound image. An infrared (IR) camera accompanies the digital camera to obtain the final image from individual images. The infrared camera is suitable in inadequate illu- minated environments and the digital camera is suitable for day-light views. It is used in machine vision, a military area such as in medical imaging, robotics and object detection. It is mostly used to resolve the merged Information of the numerous images [47]. According to the literature review, Table 4 shows the various Multi-sensors techniques dis- cussed by several authors. 4.3 Multi‑view Fusion Multi-view images have diverse views at the similar time. It is also known as Mono-modal fusion [47]. The existing methods didn’t achieve acceptable performances in all cases, especially when one of the estimations is not high-quality; in this case, they are unable to discard it [49, 86, 152]. Accord- ing to the literature review, Table 5 shows the various Multi- view techniques discussed by several authors. 4.4 Multi‑modal Fusion It is the process of integrating multi-modal images from one or more imaging modalities to enhance the quality of an image. The various models are multispectral, panchromatic, infrared, remote sensing and visible images. Table 6 evident the various Multi-modal techniques discussed by several authors according to the literature review. 
4.5 Multi-focus Fusion

Multi-focus fusion is an efficient method for integrating the information from several images of the same scene into one wide-ranging image. The compound image is more informative than the input images [6] and gives better visual quality. According to the literature review, Table 7 shows the various multi-focus techniques discussed by several authors.

Table 3 Deep learning based image fusion methods

Fusion technique | Advantages | Disadvantages
CNN [153–156] | Able to extract features automatically; the learned representation can capture the most effective features from training data without any human intervention | High computational cost
CSR [157] | Computes a sparse representation of the entire image; shift-invariant representation; effective in detail preservation; less sensitive to mis-registration | Needs a lot of training data
SAE [158] | Two-phase training mechanism; high potential when the amount of labeled data for supervised learning is limited | Quite slow to train for complex tasks without a good GPU
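As an illustration of how the CNN row of Table 3 is typically operationalized, the toy sketch below uses a small (untrained, hence hypothetical) convolutional network to score per-pixel focus and then selects, per pixel, the source with the higher score, broadly in the spirit of CNN-based multi-focus fusion such as [153]. It assumes PyTorch and a registered image pair; a practical system would train the network on focused/blurred patches and post-process the decision map.

```python
import torch
import torch.nn as nn

class FocusNet(nn.Module):
    """Toy CNN producing a per-pixel focus score map (untrained sketch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))   # one focus score per pixel

    def forward(self, x):
        return self.features(x)

def cnn_multifocus_fuse(net, img_a, img_b):
    """Keep, per pixel, the source whose CNN focus score is higher."""
    with torch.no_grad():
        score_a, score_b = net(img_a), net(img_b)
    decision = (score_a >= score_b).float()   # binary decision map
    return decision * img_a + (1 - decision) * img_b

# Usage sketch: img_a, img_b are 1x1xHxW float tensors of a registered pair
# net = FocusNet()   # in practice, trained on focused vs. blurred patches
# fused = cnn_multifocus_fuse(net, img_a, img_b)
```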
Table 4 Multi-sensor image fusion techniques reported in the literature. The table classifies each surveyed work under neural network based (NN, ANN, PNN, PCNN), wavelet based (WT, DWT, SWT, DCT, DT-CWT, SI-DWT), contourlet based (CT, NSCT), and other (PCA, IHS, entropy, fuzzy, HVS, RVS, HPF, SVR) categories. Surveyed works: Paramanandham [48], Ehlers [49], Yoonsuk Choi [50], Ross [51], Li [52], Egfin Nirmala [53], Pohl [54], Uttam Kumar [55], Meti [56], Makode [57], Chang [58], Jiang [15], Hall [59], Satish Kumar [60], Pawar [61], Lemeshewsky [62], Dengi [63], Li [64], Zheng [65], Li [66]. Abbreviations: NN, neural network; ANN, artificial neural network; PNN, pulse-coupled neural network; PCNN, pulse-coupled neural network; WT, wavelet transform; DWT, discrete wavelet transform; SWT, stationary wavelet transform; DCT, discrete cosine transform; DT-CWT, dual-tree complex wavelet transform; SI-DWT, shift-invariant discrete wavelet transform; CT, contourlet transform; NSCT, non-subsampled contourlet transform; PCA, principal component analysis; IHS, intensity hue saturation; HVS, hue value saturation; RVS, regression variable substitution; HPF, high pass filter; SVR, synthetic variable ratio.
Table 5 Multi-view image fusion techniques reported in the literature. The table classifies each surveyed work under neural network based (CNN, KNN), wavelet based (WT, DWT, DCT), contourlet based (CT, NSCT), and other (PSNR, DVC, LSFM, AVG, SNR, PCA, KPCA, SVM, fuzzy, ICA, TP, TMIP) categories. Surveyed works: Petrazzuoli [67], Cheung [68], Gelman [69], Maugey [70], Artigas [71], Jose L [72], Guillemot [73], Guo [74], Wang [75], Rajpoot [76], Ferre [77], Zhang [78], Yongpeng Li [79], Frederic Dufaux [80], Das [81], Swoger [82], Seng [83], Rahul Kavi [84], Kisku [85], Liu [86]. Abbreviations: CNN, convolutional neural network; KNN, k-nearest neighbor; PSNR, peak signal-to-noise ratio; DVC, distributed video coding; LSFM, light sheet fluorescence microscopy; AVG, averaging; SNR, signal-to-noise ratio; PCA, principal component analysis; KPCA, kernel principal component analysis; SVM, support vector machine; ICA, independent component analysis; TP, temporal projection; TMIP, temporal motion interpolation projection.
Table 6 Multi-modal image fusion techniques reported in the literature. The table classifies each surveyed work under neural network based (NN, PCNN, ANN), wavelet based (WT, DWT, SWT, CWT, MWT, DDWD, DT-CWT), contourlet based (CT, NSCT), and other (PSO, PCA, averaging, entropy, fuzzy logic) categories. Surveyed works: Wei Li et al. [87], Yong et al. [88], Max et al. [89], Kor et al. [90], Deron et al. [91], Zhao et al. [92], Richa et al. [93], Guihong Qu et al. [94], Wang et al. [95], Sharmila et al. [96], Rajiv et al. [97], Bhavana et al. [98], Anitha et al. [99], Anjali et al. [100], Periyavattam et al. [101], Pradeep et al. [102], Guruprasad et al. [103], Kiran et al. [104], Anna et al. [95], Gaurav et al. [105], Amir et al. [106], Devanna et al. [107], Senthil et al. [105]. Abbreviations: CWT, complex wavelet transform; MWT, wavelet transform; DDWD, dual-tree wavelet transform; PSO, particle swarm optimization.
Table 7 Multi-focus fusion techniques reported in the literature. The table classifies each surveyed work under neural network based (ANN, PNN, PCNN, SPCNN), wavelet based (DWT, SWT, CWT, LSW, DCT, DT-CWT), contourlet based (CT, NSCT, NSST), and other (PCA, IHS, entropy, fuzzy, EOG, HMM, averaging) categories. Surveyed works: Maruthi [4], Wang [95], Bhatnagar [105], Anish [108], Li [109], Wang [110], Li [111], Huang [112], Garg [113], Kaur [114], Kaur [115], Liu [116], Malhotra [117], Sulaiman [118], Li [119], Haghighat [120], Tian [121], Yang [122], Malik [123], Chai [124], Qu [125], Maruthi [126]. Abbreviations: SPCNN, standard PCNN; LSW, lifting stationary wavelet; NSST, non-subsampled shearlet transform; MWT, wavelet transform; DDWD, dual-tree wavelet transform; EOG, energy of image gradient; HMM, hidden Markov modeling; PSO, particle swarm optimization.
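Several spatial-domain entries in Table 7 rely on a local focus measure such as the energy of image gradient (EOG) or spatial frequency. The sketch below shows the basic block-based variant: for each block, keep the source block with the higher spatial frequency. This is a simplified illustration under assumed parameters (block size, no consistency verification), not a specific surveyed method; it assumes NumPy, two registered grayscale arrays, and image dimensions that are multiples of the block size.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of a block: combined row/column frequency."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def block_sf_fuse(img_a, img_b, bs=16):
    """Block-wise multi-focus fusion: keep the block with higher SF."""
    fused = img_a.astype(np.float64).copy()
    b = img_b.astype(np.float64)
    h, w = fused.shape
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            pa = fused[i:i + bs, j:j + bs]
            pb = b[i:i + bs, j:j + bs]
            if spatial_frequency(pb) > spatial_frequency(pa):
                fused[i:i + bs, j:j + bs] = pb
    return fused
```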
Table 8 Multi-temporal fusion techniques reported in the literature. The table classifies each surveyed work under neural network based (PNN), wavelet based (DWT, DT-DWT, SI-DWT), contourlet based (CT, NSCT), and other (PCA, IHS, CSS, CVA, SVM, ICC, PCC, CD, LDP, PSNR, OCE, fuzzy, entropy, HPF) categories. Surveyed works: Anitha [127], Pawar [128], Parthiban [129], Momeni [130], Han Pan [131], Silivana [132], Jain [133], Ferretti [134], Peijun Du [135], Wang [136], Mittal [137], Wisetphanichkij [138], Visalakshi [139], Bovolo [140], Liu [141], Celik [142], Xiaojun Yang [143], Bruzzone [144], Demir [145], Zhong [146]. Abbreviations: CSS, content selection strategy; CVA, change vector analysis; SVM, support vector machine; ICC, iterative compound classification; PCC, post classification comparison; CD, change detection; LDP, local derivative pattern; OCE, overall cross entropy; HPF, high pass filter.
4.6 Multi-temporal Fusion

Multi-temporal fusion combines images of the same scene captured at different times. Long- and short-term observations are required to estimate the occurrence of changes on the ground. Because observation satellites revisit a given area, remote sensing images are obtained at diverse times for that area. Multi-temporal images are vital for detecting land surface variations over broad geographical areas. According to the literature review, Table 8 shows the various multi-temporal techniques discussed by several authors.

5 Main Applications in Diverse Domains

In recent years, IF has been widely used in many different applications such as medical diagnosis, surveillance, photography, and remote sensing. Here, various challenges and issues related to these different fields are discussed [3].

5.1 Remote Sensing Applications

In addition to the modalities discussed above, numerous data sources such as synthetic aperture radar, light detection and ranging, and the moderate resolution imaging spectroradiometer have been useful in IF applications. Byun et al. proposed an area-based IF scheme for combining panchromatic, multispectral, and synthetic aperture radar images [1]. A high-spatial-resolution temporal data fusion approach has been used to produce synthetic Landsat imagery by combining Landsat and moderate resolution imaging spectroradiometer data [1]. Moreover, the synthesis of airborne hyper-spectral and Light Detection and Ranging (LiDAR) data has recently been researched through the combination of spectral information. Various datasets have been provided by Earth imaging satellites like Quickbird, Worldview-2, and IKONOS for pansharpening applications. Co-registered hyper-spectral and multispectral images are more complex to obtain than multispectral and panchromatic images, while airborne hyper-spectral data and LiDAR are accessible. For instance, the IEEE Geoscience and Remote Sensing Society Data Fusion 2013 and 2015 Contests distributed numerous hyper-spectral, color, and light detection and ranging datasets for research purposes. In this application field, numerous satellites have been deployed to acquire remote sensing images with diverse spatial, temporal, and spectral resolutions. Moreover, classification and change detection are provided by Google Maps or Earth products that are effectively applied to construct the imagery seen. This is a more difficult problem than pansharpening, since the multichannel multispectral image contains both spatial information and spectral information; pansharpening is therefore unsuitable or insufficient for the IF of hyperspectral and multispectral images. The foremost challenges in this domain are as follows:

(1) Spatial and spectral distortions. The image datasets frequently reveal variations in spatial and spectral structures, which cause distortions with spatial or spectral artifacts during image fusion.

(2) Mis-registration. The next most important challenge in this domain is how to decrease the mis-registration rate. The remote sensing input images are regularly obtained from diverse times, acquisitions, or spectral bands.
Even when the panchromatic and multispectral datasets are provided by the same platform, the sensors may not point in exactly the same direction, and their acquisition moments may differ. Therefore, prior to IF, the images are required to be registered. Registration is itself a challenging process because of the variations between input images provided by diverse acquisitions. Figure 4 shows the fusion of panchromatic and multi-spectral images achieved by the Principal Component Analysis (PCA) transformation.

Fig. 4 Examples of IF in the remote sensing domain: a PAN, b MS, c fused image
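A minimal sketch of the PCA component-substitution pansharpening illustrated in Fig. 4: project the multispectral bands onto their principal components, replace the first component with the statistics-matched panchromatic band, and invert the transform. This assumes the MS cube has already been resampled and registered to the PAN grid; the function and variable names are illustrative, not from any cited implementation.

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """PCA pansharpening sketch.
    ms:  HxWxB multispectral cube, already resampled to the PAN grid
    pan: HxW panchromatic band"""
    h, w, b = ms.shape
    x = ms.reshape(-1, b).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    # Principal components via eigen-decomposition of the band covariance
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    vecs = vecs[:, np.argsort(vals)[::-1]]       # descending eigenvalue order
    pcs = xc @ vecs
    # Match the PAN band's mean/std to the first principal component
    p = pan.reshape(-1).astype(np.float64)
    pc1 = pcs[:, 0]
    pcs[:, 0] = (p - p.mean()) / (p.std() + 1e-12) * pc1.std() + pc1.mean()
    # Invert the PCA (vecs is orthonormal) and restore the band means
    return (pcs @ vecs.T + mean).reshape(h, w, b)
```

The spatial detail of the PAN band is injected through the first component, which is why this family of methods gains spatial resolution but, as noted in Sect. 6, can suffer spectral degradation.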
5.2 Medical Domain Applications

Harvard Medical School has provided a brain image dataset of registered Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI). Figure 5 shows an example of IF in medical diagnosis by fusing CT and MRI: CT captures bone structures with high spatial resolution, while MRI captures soft tissue structures such as the heart, eyes, and brain. CT and MRI can be used together with IF techniques to enhance accuracy and practical medical applicability. The main challenges of this field are as follows.

(1) Lack of clinical-problem-oriented IF methods. The main motive of IF is to support improved clinical results; addressing the clinical problem is still a big challenge and a nontrivial task in the medical field.

(2) Objective image fusion performance estimation. The main difficulty in this domain is how to evaluate IF performance: for diverse clinical problems, the preferred IF effect may be quite dissimilar.

(3) Mis-registration. Inaccurate registration of objects results in poor performance in the medical domain.

Figure 5 illustrates the fusion of MRI and CT images. In this example, the fusion of images is achieved by the guided filtering based technique with image statistics.

Fig. 5 Examples of IF in the medical diagnosis domain: a MRI, b CT, c fused image

Fig. 6 Examples of IF in the surveillance domain: a visible image, b infrared image, c fused image
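A rough sketch of guided-filtering-based fusion in the spirit of [20, 23], as used for the MRI/CT example above: a binary saliency-comparison weight map is refined with a guided filter so that the weights follow image edges. This is a simplified single-scale illustration with assumed parameters (radius, eps, smoothing sigma) and 8-bit inputs, not the exact method behind Fig. 5; it assumes NumPy and SciPy.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def guided_filter(guide, src, radius=8, eps=0.01):
    """Classic guided filter (He et al. [20]) with a box window."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    corr_ii = uniform_filter(guide * guide, size)
    a = (corr_ip - mean_i * mean_p) / (corr_ii - mean_i ** 2 + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def guided_fuse(img_a, img_b):
    """Single-scale fusion with guided-filter-refined saliency weights."""
    a, b = img_a.astype(np.float64), img_b.astype(np.float64)
    # Saliency: local energy of the high-pass (detail) content
    sal_a = gaussian_filter((a - gaussian_filter(a, 5)) ** 2, 5)
    sal_b = gaussian_filter((b - gaussian_filter(b, 5)) ** 2, 5)
    w = (sal_a >= sal_b).astype(np.float64)   # binary weight map
    w = guided_filter(a / 255.0, w)           # edge-aware refinement
    return w * a + (1 - w) * b
```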
5.3 Surveillance Domain Applications

Figure 6 shows an example of IF in the surveillance domain: the fusion of infrared and visible images. Because an infrared sensor is sensitive to the thermal radiation of objects, it is able to "see in the night" even without illumination. Infrared images, however, have poor spatial resolution, which can be overcome by fusing the visible and infrared images. Moreover, the fusion of visible and infrared images has been introduced for other surveillance problems such as face recognition, image dehazing, and military reconnaissance. The main challenges in this domain are:

(1) Computing efficiency. Effective IF algorithms should merge the information of the original images into the final resultant image efficiently; more importantly, surveillance usually involves continuous real-time monitoring.

(2) Imperfect environmental conditions. The major difficulty in this field is that the images may be acquired under imperfect circumstances: due to weather and illumination conditions, the input images may contain under-exposure and serious noise.

The fusion of a visible and an infrared image is shown in Fig. 6a, b. In this example, the fusion of both images is achieved by guided filtering and image statistics.

5.4 Photography Domain Applications

Figure 7 shows an example of IF in the photography domain: the fusion of multi-focus images. Because of a camera's restricted depth of field, it is not possible for all objects at diverse distances from the camera to be in focus within a single shot. To overcome this, the multi-focus IF method merges several images of the same scene with diverse focus points to generate an all-in-focus resultant image. This compound image can well preserve the significant information from the source images and is desirable in several image processing and machine vision tasks. Figure 8 shows the data sources used in the photography domain. The various challenges faced in this domain are:

(1) Effect of moving target objects. In this domain, multi-exposure and multi-focus images are captured at diverse times. In these circumstances, moving objects may appear at diverse locations during the capturing process, and such moving objects can produce inconsistencies in the fused image.

(2) Relevance in consumer electronics. Here, images are taken from numerous shots with diverse camera settings. The challenge is to integrate multi-exposure and multi-focus IF methods into consumer electronics to produce a high-quality compound image in real time.

IF of multi-focus images (a back-focus image and a fore-focus image) is shown in Fig. 7a, b. In this example, the IF of the multi-focus images is achieved by the guided filtering based technique with image statistics.

Fig. 7 Examples of IF in the photography domain: a back-focus image, b fore-focus image, c fused image

5.5 Applications in Other Domains

Many other applications make use of fusion, such as object recognition, tracking, and object detection.
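The recognition, detection, and tracking applications discussed next all build on visible–infrared fusion. As a minimal illustration, the sketch below applies a global PCA-weighted rule, loosely inspired by the PCA-weighted fusion mentioned in Sect. 5.7: the fusion weights come from the leading eigenvector of the 2x2 covariance matrix of the two registered images. This is a toy sketch under assumptions noted in the comments, not the cited tracking method.

```python
import numpy as np

def pca_weighted_fuse(vis, ir):
    """Global PCA-weighted fusion of a registered visible/infrared pair."""
    vis = vis.astype(np.float64)
    ir = ir.astype(np.float64)
    cov = np.cov(np.stack([vis.ravel(), ir.ravel()]))   # 2x2 covariance
    vals, vecs = np.linalg.eigh(cov)
    lead = np.abs(vecs[:, np.argmax(vals)])  # leading eigenvector; abs()
                                             # assumes positively correlated inputs
    w = lead / lead.sum()                    # normalize weights to sum to 1
    return w[0] * vis + w[1] * ir
```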
5.6 Recognition Application

One or more objects are visible in an image, and the main aim of recognition is to identify those objects clearly. Face recognition is a major application in this domain. Recognition algorithms use the infrared and visible IF method in two ways: in the first, the images are fused first and the objects are then recognized from the fused result; in the second, proposed by Faten et al., fusion is embedded within the recognition process. This can help improve recognition precision with the help of narrowband images and enhance the fusion results.

5.7 Detection and Tracking Application

Detection and tracking also use infrared and visible IF, with real-life applications such as fruit detection and object detection, where the accurate position of objects must be determined at a given time. Detection fusion algorithms can be differentiated into two classes: the first detects the objects before fusing, and the second fuses the images before detecting objects. He et al. introduced an algorithm with multilevel image fusion that enhanced target detection. Pixel- and feature-level image fusion are also considered in this method, and it exploits the relationship between high- and low-frequency information, which is ignored in wavelet transform IF; the main motive of this method is to enhance target visibility. Target tracking algorithms are similar to detection algorithms but must additionally determine the relationship between frames of the target objects over a particular time sequence. Stephen et al. introduced an enhanced target tracking approach through visible and infrared IF using a PCA-weighted fusion rule. In most algorithms, detection, recognition, and tracking are independent of each other and are designed to recover the features or visibility of the actual images [4].

5.8 Performance Evaluation Measures

A number of performance evaluation metrics have been proposed to evaluate the performance of diverse IF techniques. They can be categorized as subjective and objective assessment measures [52]. Subjective assessment measures play an important role in IF as they evaluate the fused image quality based on human visual perception; they can compare various fusion techniques and methods according to standards like image details, image distortion, and object competence. In infrared and visible IF, subjective evaluation is popular and reliable, but its disadvantages are high cost, time consumption, irreproducibility, and the need for human intervention. Objective assessment quantifies the fused image quality quantitatively; it is not biased by observers and is highly consistent with visual perception. Objective metrics come in diverse types, based on image gradient, structural similarity, information theory, human visual perception, and statistics [1]. A number of metrics for quantifying the quality of fused images are presented in this survey, categorized as reference and non-reference evaluation measures. Evaluation measures based on a reference image are given below.

i. The mean square error (MSE) computes the error between the fused result and the ideal or expected result [1, 147].
This metric is defined as:

(1)  MSE = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left(A_{ij} - B_{ij}\right)^{2}

where A and B are the ideal and compound images respectively, i and j are the pixel row and column indices, and m and n are the image height and width, i.e., the number of pixel rows and columns.

ii. The structural similarity index metric (SSIM) quantifies the similarity between two images. It is designed by modeling contrast distortion and radiometric distortion, combining luminance distortion, contrast distortion, and the loss of correlation and structure distortion between the source images and the final image [1, 11, 147–150]. This metric is defined as:

(2)  SSIM(x, y) = \frac{\left(2\mu_x \mu_y + c_1\right)\left(2\sigma_{xy} + c_2\right)}{\left(\mu_x^2 + \mu_y^2 + c_1\right)\left(\sigma_x^2 + \sigma_y^2 + c_2\right)}

where \mu_x and \mu_y are the averages of x and y, \sigma_x^2 and \sigma_y^2 their variances, \sigma_{xy} the covariance of x and y, and c_1 and c_2 two variables that stabilize the division when the denominator is weak.

iii. The peak signal-to-noise ratio (PSNR) computes the ratio of the peak signal power to the noise power [1, 11, 147, 149, 150]. This metric is defined as:

(3)  PSNR = 10 \log_{10}\left(\frac{r^2}{MSE}\right)

Here, r indicates the peak value of the fused image. A high PSNR value means the fused image is closer to the input image and there is less distortion in the fusion method.

iv. The Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) is employed to quantify the image quality resulting from the fusion of high-spatial-resolution images.
This method was introduced by Lucien Wald [148, 151]. The metric is defined as:

(4)  ERGAS = 100 \frac{h}{l} \sqrt{\frac{1}{N} \sum_{k=1}^{N} \frac{RMSE\left(B_k\right)^2}{M_k^2}}

where h and l denote the spatial resolutions of the PAN and MS images, N is the number of bands, and M_k is the mean radiance value of the MS image for band B_k [151].

v. The overall cross entropy (OCE) determines the difference between the source images and the fused image; a smaller OCE value means better results [11, 12]. This metric is defined as:

(5)  OCE\left(f_A, f_B; F\right) = \frac{CE\left(f_A; F\right) + CE\left(f_B; F\right)}{2}

Here CE indicates the cross-entropy of the images; it reflects the entropies of the two source images f_A and f_B relative to the fused image F.

vi. Visual information fidelity (VIF) measures the distortions of images, including blur, local or global contrast changes, and additive noise [1, 150]. This metric is defined as:

(6)  VIF = \frac{\text{distorted image information}}{\text{reference image information}}

vii. Mutual information (MI) quantifies the amount of information from the source images that is merged into the resultant image; the higher the mutual information, the more effective the IF technique [1, 11, 141, 150, 151]. It is defined as:

(7)  MI_{AF} = \sum_{a,f} P_{A,F}(a, f) \log \frac{P_{A,F}(a, f)}{P_A(a)\, P_F(f)}

where P_A(a) and P_F(f) denote the marginal histograms of input image A and fused image F, and P_{A,F}(a, f) indicates their joint histogram. A high mutual information value means good fusion performance.

viii. The spectral angle mapper (SAM) calculates the spectral similarity between the original and the final fused image via the angle between two vectors [21]. This metric is defined as:

(8)  \alpha = \cos^{-1}\left(\frac{\sum_{i=1}^{b} t_i r_i}{\left(\sum_{i=1}^{b} t_i^2\right)^{1/2} \left(\sum_{i=1}^{b} r_i^2\right)^{1/2}}\right)
It is based on the structural (9) SNR = 10log10 � ∑P x=1 ∑Q y=1 � Ir(x, y � )2 ∑P x=1 ∑Q y=1 � Ir(x, y � − If (x, y))2 � (10) SD = √ √ √ √ m ∑ i=1 n ∑ j=1 ( h(i, j) − H )2 (11) SFE = SFf − SFr SFr (12) EN = − L−1 ∑ l=0 pllog2pl
It is based on the structural information of the final fused image, combining loss of correlation, contrast distortion, and luminance distortion [11, 150]. This metric is defined as:

(13)  UIQI = \frac{\sigma_{I_1 I_F}}{\sigma_{I_1} \sigma_{I_F}} \cdot \frac{2 \mu_{I_1} \mu_{I_F}}{\mu_{I_1}^2 + \mu_{I_F}^2} \cdot \frac{2 \sigma_{I_1} \sigma_{I_F}}{\sigma_{I_1}^2 + \sigma_{I_F}^2}

where \sigma denotes the standard deviation (and \sigma_{I_1 I_F} the covariance) and \mu the average.

v. The fusion mutual information (FMI) metric evaluates the degree of dependence between images. It is based on mutual information (MI) and measures the feature information transferred from the input images to the fused image [1, 11]. This metric is defined as:

(14)  FMI = MI_{A,F} + MI_{B,F}

where A and B are the input images and F is the fused image. A high FMI value indicates that considerable information is transferred from the inputs to the fused image.

vi. Spatial frequency (SF) is an image quality index built from the row frequency (RF) and column frequency (CF), based on horizontal and vertical gradients. It effectively measures the gradient distribution of an image and reflects its texture detail [1, 11, 150, 151]. This metric is defined as:

(15)  SF = \sqrt{RF^2 + CF^2}

(16)  RF = \sqrt{\sum_{i=1}^{M} \sum_{j=1}^{N} \left(F(i, j) - F(i, j-1)\right)^2}

(17)  CF = \sqrt{\sum_{i=1}^{M} \sum_{j=1}^{N} \left(F(i, j) - F(i-1, j)\right)^2}

where F is the fused image. A fused image with high SF contains rich edge and texture information.

vii. A large mean gradient (MG) value implies that the composite image captures rich edge and texture information and that the fusion performance is better [1, 149]. This metric is defined as:

(18)  MG = \frac{1}{(M-1)(N-1)} \sum_{x=1}^{M-1} \sum_{y=1}^{N-1} \sqrt{\frac{\left(F(x, y) - F(x-1, y)\right)^2 + \left(F(x, y) - F(x, y-1)\right)^2}{2}}

where F is the final fused image.

viii. The average difference (AD) is the mean absolute difference between the actual and ideal data [147]. This metric is defined as:

(19)  AD = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \left|A_{ij} - B_{ij}\right|

ix. The average gradient (AG) measures the gradient information of the composite image and reflects its texture detail [1, 150, 151]. The AG metric is defined as:

(20)  AG = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \sqrt{\frac{\nabla F_x^2(i, j) + \nabla F_y^2(i, j)}{2}}

A high AG value means the image contains more gradient information and the fusion algorithm performs better.

x. Normalized cross correlation (NCC) is employed to determine the similarity between the input and fused images [147]. This metric is defined as:

(21)  NCC = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij} B_{ij}}{\sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij}^2}

xi. The mean absolute error (MAE) is computed over the corresponding pixels of the original and final fused images [11]. It is defined as:

(22)  MAE = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left|s(i, j) - y(i, j)\right|

xii. The normalized absolute error (NAE) is a quality measure that normalizes the error with respect to the expected or ideal value: the difference between the actual and desired outcomes is divided by the sum of the expected values [147]. This metric is defined as:

(23)  NAE = \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} \left|A_{ij} - B_{ij}\right|}{\sum_{i=1}^{m} \sum_{j=1}^{n} A_{ij}}

xiii. Correlation determines the correlation between the reference and resultant images. If its value is one, both images are exactly the same; if it is less than one, the images are more dissimilar [11].
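For reference, a sketch implementing a few of the metrics above (MSE, PSNR, EN, AG) for 8-bit grayscale NumPy arrays. The summations follow Eqs. (1), (3), (12), and (20); where the printed formulas leave normalization implicit, the common mean-normalized variants are assumed.

```python
import numpy as np

def mse(a, b):
    """Mean square error, Eq. (1)."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    return np.mean((a - b) ** 2)

def psnr(ref, fused, peak=255.0):
    """Peak signal-to-noise ratio, Eq. (3), with r = peak gray value."""
    return 10.0 * np.log10(peak ** 2 / mse(ref, fused))

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, Eq. (12)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 = 0)
    return -np.sum(p * np.log2(p))

def average_gradient(f):
    """Average gradient, Eq. (20), using forward differences."""
    f = f.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]   # horizontal gradient, cropped to align
    gy = np.diff(f, axis=0)[:, :-1]   # vertical gradient, cropped to align
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))
```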
6 Discussion

Despite the various constraints handled by several researchers, research and development in the field of image fusion is still growing day by day, and image fusion has several open-ended difficulties in different domains. The current challenges and future trends of image fusion arising in various domains, such as surveillance, photography, medical diagnosis, and remote sensing, have been analyzed here. This paper has discussed various spatial and frequency domain methods as well as their performance evaluation measures. Simple image fusion techniques cannot be used in actual applications: PCA, hue intensity saturation, and Brovey methods are computationally efficient, fast, and extremely straightforward, but they result in color distortion. Images fused with principal component analysis have a spatial advantage but suffer spectral degradation. Guided filtering is an easy, computationally efficient method and is more suitable for real-world applications. The number of decomposition levels affects the outcome of pyramid decomposition in image fusion. Every algorithm has its own advantages and disadvantages. The main challenge faced in the remote sensing field is to reduce the visual distortions after fusing panchromatic (PAN), hyperspectral (HS), and multi-spectral (MS) images; this is because the source images are captured using different sensors on a similar platform that do not focus in the same direction, and their acquisition moments are not exactly the same. The availability of datasets represents another restriction faced by many researchers. Progress in image fusion has increased interest in color images and their enhancement; the aim of color contrast enhancement is to produce an appealing image with bright color and clarity of the visual scene. Recently, researchers have used neutrosophy in image fusion to remove noise and to enhance the quality of single photon emission tomography (SPET), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) images; this integration of neutrosophy with image fusion results in noise reduction and better visibility of the fused image. Deep learning is the rising trend for developing automated applications and is extensively applied in areas such as face recognition, speech recognition, object detection, and medical imaging. The integration of quantitative and qualitative measures is the accurate way to determine which particular fusion technique is better for a certain application. A general challenge faced by researchers is the design of image transformation and fusion strategies; moreover, the lack of effective image representation approaches and of widely recognized fusion evaluation metrics for performance evaluation is also of great concern. Meanwhile, the recent progress in machine learning and deep learning based image fusion shows huge potential for future improvement in image fusion.

7 Conclusion

Recently, the area of image fusion has been attracting more attention. In this paper, various image fusion techniques with their pros and cons and different state-of-the-art methods have been discussed. Different applications, such as medical imaging, remote sensing, photography, and surveillance, have been discussed along with their challenges.
Finally, the different evaluation metrics for image fusion techniques, with or without reference, have been discussed. It is concluded from this survey that each image fusion technique is meant for a specific application, and techniques can be used in various combinations to obtain better results. In future, new deep neural network based image fusion methods will be developed for various domains to improve the efficiency of the fusion procedure by implementing the algorithms on parallel computing units.

Compliance with Ethical Standards

Conflict of Interest There is no conflict of interest.

References

1. Ma J, Ma Y, Li C (2019) Infrared and visible image fusion methods and applications: a survey. Inf Fus 1(45):153–178
2. El-Gamal FE, Elmogy M, Atwan A (2016) Current trends in medical image registration and fusion. Egyptian Inform J 17(1):99–124
3. Li S, Kang X, Fang L, Hu J, Yin H (2017) Pixel-level image fusion: a survey of the state of the art. Inf Fus 1(33):100–112
4. Maruthi R, Lakshmi I (2017) Multi-focus image fusion methods: a survey. Comput Eng 19(4):9–25
5. Meher B, Agrawal S, Panda R, Abraham A (2019) A survey on region based image fusion methods. Inf Fus 1(48):119–132
6. Liu Z, Chai Y, Yin H, Zhou J, Zhu Z (2017) A novel multi-focus image fusion approach based on image decomposition. Inf Fus 1(35):102–116
7. James AP, Dasarathy BV (2014) Medical image fusion: a survey of the state of the art. Inf Fus 1(19):4–19
8. Madkour M, Benhaddou D, Tao C (2016) Temporal data representation, normalization, extraction, and reasoning: a review from clinical domain. Comput Methods Programs Biomed 1(128):52–68
9. Bai L, Xu C, Wang C (2015) A review of fusion methods of multi-spectral image. Optik-Int J Light Electron Optics 126(24):4804–4807
10. Liu Y, Chen X, Wang Z, Wang ZJ, Ward RK, Wang X (2018) Deep learning for pixel-level image fusion: recent advances and future prospects. Inf Fus 1(42):158–173
11. Du J, Li W, Lu K, Xiao B (2016) An overview of multi-modal medical image fusion. Neurocomputing 26(215):3–20
12. Morris C, Rajesh RS (2014) Survey of spatial domain image fusion techniques. Int J Adv Res Comput Sci Eng Inf Technol 2(3):249–254
13. Mishra D, Palkar B (2015) Image fusion techniques: a review. Int J Comput Appl 130(9):7–13
14. Jasiunas MD, Kearney DA, Hopf J, Wigley GB (2002) Image fusion for uninhabited airborne vehicles. In: 2002 IEEE international conference on field-programmable technology (FPT) proceedings, p 348–351. IEEE
15. Dong J, Dafang Z, Yaohuan H, Jinying F (2011) Survey of multispectral image fusion techniques in remote sensing applications. In: Zheng Y (ed) Image fusion and its applications. Alcorn State University, USA
16. Banu RS (2011) Medical image fusion by the analysis of pixel level multi-sensor using discrete wavelet transform. In: Proceedings of the national conference on emerging trends in computing science, p 291–297
17. Bavachan B, Krishnan DP (2014) A survey on image fusion techniques. IJRCCT 3(3):049–052
18. Song L, Lin Y, Feng W, Zhao M (2009) A novel automatic weighted image fusion algorithm. In: 2009 international workshop on intelligent systems and applications (ISA 2009), p 1–4
19. Singh N, Tanwar P (2012) Image fusion using improved contourlet transform technique. Int J Recent Technol Eng (IJRTE) 1(2)
20. He K, Sun J, Tang X (2010) Guided image filtering. European conference on computer vision. Springer, Berlin, pp 1–14
21. Harris JR, Murray R, Hirose T (1990) IHS transform for the integration of radar imagery with other remotely sensed data. Photogramm Eng Remote Sens 56(12):1631–1641
22. Smith LI (2002) A tutorial on principal components analysis. Statistics 51(1):52
23. Li S, Kang X, Hu J (2013) Image fusion with guided filtering. IEEE Trans Image Process 22(7):2864–2875
24. Sadjadi F (2005) Comparative image fusion analysis. In: 2005 IEEE computer society conference on computer vision and pattern recognition workshops (CVPR'05), p 8–8. IEEE
25. Yang J, Ma Y, Yao W, Lu WT (2008) A spatial domain and frequency domain integrated approach to fusion multifocus images. The international archives of the photogrammetry, remote sensing and spatial information sciences, 37(PART B7)
26. Wu D, Yang A, Zhu L, Zhang C (2014) Survey of multi-sensor image fusion. International conference on life system modeling and simulation. Springer, Berlin, pp 358–367
27. Olkkonen H, Pesola P (1996) Gaussian pyramid wavelet transform for multiresolution analysis of images. Graphic Models Image Process 58(4):394–398
28. Ramac LC, Uner MK, Varshney PK, Alford MG, Ferris DD (1998) Morphological filters and wavelet-based image fusion for concealed weapons detection. In: Sensor fusion: architectures, algorithms, and applications II, vol 3376, p 110–120. International Society for Optics and Photonics
29. Toet A (1989) Image fusion by a ratio of low-pass pyramid. Pattern Recogn Lett 9(4):245–253
30. Burt PJ (1992) A gradient pyramid basis for pattern-selective image fusion. Proc SID 1992:467–470
31. Chandrasekhar C, Viswanath A, NarayanaReddy S (2013) FPGA implementation of image fusion technique using DWT for micro air vehicle applications. 4(8):307–315
32. Krishnamoorthy S, Soman KP (2010) Implementation and comparative study of image fusion algorithms. Int J Comput Appl 9(2):25–35
33. Kekre HB, Sarode T, Dhannawat R (2012) Kekre's wavelet transform for image fusion and comparison with other pixel based image fusion techniques. Int J Comput Sci Inf Secur 10(3):23–31
34. Klein LA (1993) Society of Photo-Optical Instrumentation Engineers (SPIE), 405 Fieldston Road, Bellingham, WA, United States
35. Borwonwatanadelok P, Rattanapitak W, Udomhunsakul S (2009) Multi-focus image fusion based on stationary wavelet transform and extended spatial frequency measurement. In: 2009 international conference on electronic computer technology, p 77–81. IEEE
36. Udomhunsakul S, Yamsang P, Tumthong S, Borwonwatanadelok P (2011) Multiresolution edge fusion using SWT and SFM. Proc World Congr Eng 2:6–8
37. Kannan K, Perumal SA, Arulmozhi K (2010) Performance comparison of various levels of fusion of multi-focused images using wavelet transform. Int J Comput Appl 1(6):71–78
38. Naidu VPS (2012) Discrete cosine transform based image fusion techniques. J Commun Navig Signal Process 1(1):35–45
39. Singh R, Khare A (2013) Multiscale medical image fusion in wavelet domain. Sci World J 2013:1–10. https://doi.org/10.1155/2013/521034
40. Mallat S (1999) A wavelet tour of signal processing. Academic Press, Elsevier
41. Pajares G, De La Cruz JM (2004) A wavelet-based image fusion tutorial. Pattern Recogn 37(9):1855–1872
42. Burrus CS, Gopinath RA, Guo H, Odegard JE, Selesnick IW (1998) Introduction to wavelets and wavelet transforms: a primer, vol 1. Prentice Hall, New Jersey
43. Kekre HB, Athawale A, Sadavarti D (2010) Algorithm to generate Kekre's wavelet transform from Kekre's transform. Int J Eng Sci Technol 2(5):756–767
44. Kekre HB, Sarode T, Dhannawat R (2012) Implementation and comparison of different transform techniques using Kekre's wavelet transform for image fusion. Int J Comput Appl 44(10):41–48
45. Dhannawat R, Sarode T (2013) Kekre's hybrid wavelet transform technique with DCT, Walsh, Hartley and Kekre's transform for image fusion. Int J Comput Eng Technol (IJCET) 4(1):195–202
46. Kekre HB, Sarode T, Dhannawat R (2012) Image fusion using Kekre's hybrid wavelet transform. In: 2012 international conference on communication, information computing technology (ICCICT), p 1–6
47. Sharma M (2016) A review: image fusion techniques and applications. Int J Comput Sci Inf Technol 7(3):1082–1085
48. Paramanandham N, Rajendiran K (2018) Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications. Infrared Phys Technol 88:13–22
49. Ehlers M, Klonus S, Astrand PJ (2008) Quality assessment for multi-sensor multi-date image fusion. In: Proceedings of the XXIth international congress ISPRS, p 499–506
50. Choi Y, Latifi S (2012) Contourlet based multi-sensor image fusion. In: Proceedings of the 2012 international conference on information and knowledge engineering (IKE), vol 12, p 16–19
51. Ross WD, Waxman AM, Streilein WW, Aguiiar M, Verly J, Liu F, Rak S (2000) Multi-sensor 3D image fusion and interactive search. In: Proceedings of the third international conference on information fusion (FUSION 2000), vol 1, p TUC3–10
52. Li M, Cai W, Tan Z (2006) A region-based multi-sensor image fusion scheme using pulse-coupled neural network. Pattern Recogn Lett 27(16):1948–1956
53. Nirmala DE, Vignesh RK, Vaidehi V (2013) Fusion of multi-sensor images using nonsubsampled contourlet transform and fuzzy logic. In: 2013 IEEE international conference on fuzzy systems (FUZZ), p 1–8
54. Pohl C, Van Genderen JL (1998) Review article: multisensor image fusion in remote sensing: concepts, methods and applications. Int J Remote Sens 19(5):823–854
55. Kumar U, Mukhopadhyay C, Ramachandra TV (2009) Fusion of multisensor data: review and comparative analysis. In: WRI global congress on intelligent systems (GCIS'09), vol 2, p 418–422. IEEE
56. Subhas AM. Multi sensor data fusion for sensor validation. International Journal of Advanced Computer Technology (IJACT), survey paper, ISSN: 2319-7900
57. Makode PN, Khan J (2017) A review on multi-focus digital image pair fusion using multi-scale image. Wavelet Decomposition 3(1):575–579
58. Chang NB, Bai K, Imen S, Chen CF, Gao W (2016) Multi-sensor satellite image fusion and networking for all-weather environmental monitoring. IEEE Syst J 12(2):1341–1357
59. Hall DL, Llinas J (1997) An introduction to multisensor data fusion. Proc IEEE 85(1):6–23
60. Kumar NS, Shanthi C (2007) A survey and analysis of pixel level multisensor medical image fusion using discrete wavelet transform. IETE Tech Rev 24(2):113–125
61. Panwar SA, Malwadkar S (2015) A review: image fusion techniques for multisensor images. Int J Adv Res Electr Electron Instrum Eng 4(1):406–410
62. Lemeshewsky GP (1999) Multispectral multisensor image fusion using wavelet transforms. In: Visual information processing VIII, vol 3716, p 214–223. International Society for Optics and Photonics
63. Deng C, Cao H, Cao C, Wang S (2007) Multisensor image fusion using fast discrete curvelet transform. In: MIPPR 2007: remote sensing and GIS data processing and applications; and innovative multispectral technology and applications, vol 6790, p 679004. International Society for Optics and Photonics
64. Li H, Manjunath BS, Mitra SK (1995) Multisensor image fusion using the wavelet transform. Graphic Models Image Process 57(3):235–245
65. Zheng Y, Zheng P (2010) Multisensor image fusion using a pulse coupled neural network. International conference on artificial intelligence and computational intelligence. Springer, Berlin, pp 79–87
66. Li Y, Song GH, Yang SC (2011) Multi-sensor image fusion by NSCT-PCNN transform. Int Conf Comput Sci Automat Eng (CSAE) 4:638–642
67. Petrazzuoli G, Cagnazzo M, Pesquet-Popescu B (2013) Novel solutions for side information generation and fusion in multiview DVC. EURASIP J Adv Signal Process 2013(1):154
68. Cheung G, Ortega A, Cheung NM (2011) Interactive streaming of stored multiview video using redundant frame structures. IEEE Trans Image Process 20(3):744–761
69. Gelman A, Dragotti PL, Velisavljević V (2011) Interactive multiview image coding. In: 2011 18th IEEE international conference on image processing (ICIP), p 601–604
70. Maugey T, Miled W, Cagnazzo M, Pesquet-Popescu B (2009) Fusion schemes for multiview distributed video coding. In: 2009 17th European signal processing conference, p 559–563
71. Artigas X, Angeli E, Torres L (2006) Side information generation for multiview distributed video coding using a fusion approach. In: Proceedings of the 7th Nordic signal processing symposium (NORSIG 2006), p 250–253. IEEE
72. Rubio-Guivernau JL, Gurchenkov V, Luengo-Oroz MA, Duloquin L, Bourgine P, Santos A, Ledesma-Carbayo MJ (2011) Wavelet-based image fusion in multi-view three-dimensional microscopy. Bioinformatics 28(2):238–245
73. Guillemot C, Pereira F, Torres L, Ebrahimi T, Leonardi R, Ostermann J (2007) Distributed monoview and multiview video coding. IEEE Signal Process Mag 24(5):67–76
74. Guo X, Lu Y, Wu F, Gao W, Li S (2006) Distributed multi-view video coding. Vis Commun Image Process (VCIP) 6077:60770T
75. Wang RS, Wang Y (2000) Multiview video sequence analysis, compression, and virtual viewpoint synthesis. IEEE Trans Circuits Syst Video Technol 10(3):397–410
76. Rajpoot K, Noble JA, Grau V, Szmigielski C, Becher H (2009) Multiview RT3D echocardiography image fusion. International conference on functional imaging and modeling of the heart. Springer, Berlin, pp 134–143
77. Ferre P, Agrafiotis D, Bull D (2007) Fusion methods for side information generation in multi-view distributed video coding systems. In: 2007 IEEE international conference on image processing (ICIP), vol 6, p VI-409. IEEE
78. Zhang ZG, Bian HY, Song ZQ, Xu H (2014) A multi-view sonar image fusion method based on nonsubsampled contourlet transform and morphological modification. Appl Mech Mater 530:567–570
79. Li Y, Liu H, Liu X, Ma S, Zhao D, Gao W (2009) Multi-hypothesis based multi-view distributed video coding. In: Picture coding symposium (PCS 2009), p 1–4
80. Dufaux F (2011) Support vector machine based fusion for multi-view distributed video coding. In: 2011 17th international conference on digital signal processing (DSP), p 1–7
81. Das R, Thepade S, Ghosh S (2015) Content based image recognition by information fusion with multiview features. Int J Inf Technol Comput Sci 7(10):61–73
82. Swoger J, Verveer P, Greger K, Huisken J, Stelzer EH (2007) Multi-view image fusion improves resolution in three-dimensional microscopy. Opt Express 15(13):8029–8042
83. Seng CH, Bouzerdoum A, Tivive FHC, Amin MG (2010) Fuzzy logic-based image fusion for multi-view through-the-wall radar. In: 2010 international conference on digital image computing: techniques and applications (DICTA), p 423–428
84. Kavi R, Kulathumani V, Rohit F, Kecojevic V (2016) Multiview fusion for activity recognition using deep neural networks. J Electron Imaging 25(4):043010
85. Kisku DR, Mehrotra H, Rattani A, Sing JK, Gupta P (2009) Multiview Gabor face recognition by fusion of PCA and canonical covariate through feature weighting. In: Applications of digital image processing XXXII, vol 7443, p 744308. International Society for Optics and Photonics
86. Liu K, Kang G (2017) Multiview convolutional neural networks for lung nodule classification. Int J Imaging Syst Technol 27(1):12–22
87. Li W, Zhu XF (2005) A new algorithm of multi-modality medical image fusion based on pulse-coupled neural networks. International conference on natural computation. Springer, Berlin, pp 995–1001
88. Viergever MA, van den Elsen PA, Stokking R (1992) Integrated presentation of multimodal brain images. Brain Topogr 5(2):135–145
89. Rodrigues D, Virani HA, Kutty S (2014) Multimodal image fusion techniques for medical images using wavelets. Image 2(3):310–313
90. Yang Y, Que Y, Huang S, Lin P (2016) Multimodal sensor medical image fusion based on type-2 fuzzy logic in NSCT domain. IEEE Sens J 16(10):3735–3745
91. Kor S, Tiwary U (2004) Feature level fusion of multimodal medical images in lifting wavelet transform domain. In: 2004 26th annual international conference of the IEEE engineering in medicine and biology society (IEMBS'04), vol 1, p 1479–1482. IEEE
92. Zhao Y, Zhao Q, Hao A (2014) Multimodal medical image fusion using improved multi-channel PCNN. Bio-Med Mater Eng 24(1):221–228
93. Singh R, Vatsa M, Noore A (2009) Multimodal medical image fusion using redundant discrete wavelet transform. In: 2009 seventh international conference on advances in pattern recognition (ICAPR'09), p 232–235. IEEE
94. Qu G, Zhang D, Yan P (2001) Medical image fusion by wavelet transform modulus maxima. Opt Expr 9(4):184–190
95. Wang A, Sun H, Guan Y (2006) The application of wavelet transform to multi-modality medical image fusion. In: Proceedings of the IEEE international conference on networking, sensing and control (ICNSC'06), p 270–274. IEEE
96. Sharmila K, Rajkumar S, Vijayarajan V (2013) Hybrid method for multimodality medical image fusion using discrete wavelet transform and entropy concepts with quantitative analysis. In: 2013 international conference on communications and signal processing (ICCSP), p 489–493
97. Singh R, Khare A (2014) Fusion of multimodal medical images using Daubechies complex wavelet transform: a multiresolution approach. Inf Fus 19:49–60
98. Bhavana V, Krishnappa HK (2015) Multi-modality medical image fusion using discrete wavelet transform. Procedia Comput Sci 70:625–631
99. Anitha S, Subhashini T, Kamaraju M (2015) A novel multimodal medical image fusion approach based on phase congruency and directive contrast in NSCT domain. Int J Comput Appl 129(10):30–35
100. Pure AA, Gupta N, Shrivastava M (2013) An overview of different image fusion methods for medical applications. Int J Sci Eng Res 4(7):129
101. Gomathi PS, Kalaavathi B (2016) Multimodal medical image fusion in non-subsampled contourlet transform domain. Circuits Syst 7(8):1598–1610
102. Patil MPP, Deshpande KB (2015) New technique for image fusion using DDWT and PSO in medical field. Int J Rec Innov Trends Comput Commun 3(4):2251–2254
103. Guruprasad S, Kurian MZ, Suma HN (2013) A medical multi-modality image fusion of CT/PET with PCA, DWT methods. J Dental Mater Tech 4(2):677–681
104. Parmar K, Kher RK, Thakkar FN (2012) Analysis of CT and MRI image fusion using wavelet transform. In: 2012 international conference on communication systems and network technologies (CSNT), p 124–127
105. Bhatnagar G, Wu QJ, Liu Z (2013) Directive contrast based multimodal medical image fusion in NSCT domain. IEEE Trans Multimed 15(5):1014–1024
106. Al-Bakrei AFP (2012) Brain image fusion of MRI-CT multimodality systems using DWT and hybrid enhancement fusion algorithms. J Babylon Univ/Eng Sci 20(1):258–269
107. Swathi PS, Sheethal MS, Paul V (2016) Survey on multimodal medical image fusion techniques. Int J Sci Eng Comput Technol 6(1):33
108. Anish A, Jebaseeli TJ (2012) A survey on multi-focus image fusion methods. Int J Adv Res Comput Eng Technol (IJARCET) 1(8):319–324
109. Li H, Chai Y, Yin H, Liu G (2012) Multifocus image fusion and denoising scheme based on homogeneity similarity. Optics Commun 285(2):91–100
110. Wang Z, Ma Y, Gu J (2010) Multi-focus image fusion using PCNN. Pattern Recogn 43(6):2003–2016
111. Li S, Kwok JT, Wang Y (2002) Multifocus image fusion using artificial neural networks. Pattern Recogn Lett 23(8):985–997
112. Huang W, Jing Z (2007) Multi-focus image fusion using pulse coupled neural network. Pattern Recogn Lett 28(9):1123–1132
113. Garg R, Gupta P, Kaur H (2014) Survey on multi-focus image fusion algorithms. In: 2014 recent advances in engineering and computational sciences (RAECS), p 1–5
114. Kaur G, Kaur P (2016) Survey on multifocus image fusion techniques. In: International conference on electrical, electronics, and optimization techniques (ICEEOT), p 1420–1424
115. Kaur P, Sharma ER (2015) A study of various multi-focus image fusion techniques. Int J Comput Sci Inf Technol 6(5):1139–1146
116. Liu L, Bian H, Shao G (2013) An effective wavelet-based scheme for multi-focus image fusion. In: 2013 IEEE international conference on mechatronics and automation (ICMA), p 1720–1725
117. Malhotra G, Chopra DV (2014) Improved multi-focus image fusion using AC-DCT, edge preserving smoothing and DRSHE. In: Proceedings of the international conference on computer science, cloud computing and applications, p 24–25
118. Sulaiman M (2016) A survey on various multifocus image fusion techniques. Int J Sci Technol Eng (IJSTE) 3(5):107–111
119. Li Q, Du J, Song F, Wang C, Liu H, Lu C (2013) Region-based multi-focus image fusion using the local spatial frequency. In: 2013 25th Chinese control and decision conference (CCDC), p 3792–3796
120. Haghighat MBA, Aghagolzadeh A, Seyedarabi H (2011) Multi-focus image fusion for visual sensor networks in DCT domain. Comput Electr Eng 37(5):789–797
121. Tian J, Chen L (2012) Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure. Signal Process 92(9):2137–2146
122. Yang Y (2011) A novel DWT based multi-focus image fusion method. Procedia Eng 24:177–181
123. Malik AS (ed) (2011) Depth map and 3D imaging applications: algorithms and technologies. IGI Global, Hershey
124. Chai Y, Li H, Li Z (2011) Multifocus image fusion scheme using focused region detection and multiresolution. Optics Commun 284(19):4376–4389
125. Qu X, Yan J (2007) Multi-focus image fusion algorithm based on regional firing characteristic of pulse coupled neural networks. In: Second international conference on bio-inspired computing: theories and applications (BIC-TA 2007), p 62–66
126. Maruthi R, Sankarasubramanian K (2007) Multi focus image fusion based on the information level in the regions of the images. J Theor Appl Inf Technol 3(4):80–85
127. Anitha AJ, Vijayasangeetha S (2016) Building change detection on multi-temporal VHR SAR image based on second level decomposition and fuzzy rule. Int J 4(7)
128. Pawar TA (2014) Change detection approach for images using image fusion and C-means clustering algorithm. Int J 2(10):303–307
129. Parthiban L (2014) Fusion of MRI and CT images with double density dual tree discrete wavelet transform. Int J Comput Sci Eng Technol 5(2):168–172
130. Momeni S, Pourghassem H (2014) An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy. J Med Syst 38(8):70
131. Pan H, Jing Z, Liu R, Jin B (2012) Simultaneous spatial-temporal image fusion using Kalman filtered compressed sensing. Opt Eng 51(5):057005
132. Dellepiane SG, Angiati E (2012) A new method for cross-normalization and multitemporal visualization of SAR images for the detection of flooded areas. IEEE Trans Geosci Remote Sens 50(7):2765–2779
133. Jan S (2012) Multi temporal image fusion of earthquake satellite images. Int J Adv Res Comput Sci 3(5)
134. Ferretti R, Dellepiane S (2015) Color spaces in data fusion of multi-temporal images. International conference on image analysis and processing. Springer, Cham, pp 612–622
135. Du P, Liu S, Xia J, Zhao Y (2013) Information fusion techniques for change detection from multi-temporal remote sensing images. Inf Fusion 14(1):19–27
136. Wang B, Choi J, Choi S, Lee S, Wu P, Gao Y (2017) Image fusion-based land cover change detection using multi-temporal high-resolution satellite images. Remote Sens 9(8):804
137. Mittal M (2015) Hybrid image fusion using curvelet and wavelet transform using PCA and SVM. Int J Sci Emerg Technol Latest Trends 22(1):28–35
138. Wisetphanichkij S, Dejhan K, Cheevasuvit F, Mitatha S, Netbut C (1999) Multi-temporal cloud removing based on image fusion with additive wavelet decomposition. Faculty of Engineering and Research Center for Communication and Information Technology
139. Visalakshi S (2017) Multitemporal image fusion based on stationary wavelet transform and change detection using LDP analysis. International Journal of Engineering Science and Computing, p 14082
140. Bovolo F (2009) A multilevel parcel-based approach to change detection in very high resolution multitemporal images. IEEE Geosci Remote Sens Lett 6(1):33–37
141. Liu S, Bruzzone L, Bovolo F, Du P (2015) Hierarchical unsupervised change detection in multitemporal hyperspectral images. IEEE Trans Geosci Remote Sens 53(1):244–260
142. Celik T, Ma KK (2011) Multitemporal image change detection using undecimated discrete wavelet transform and active contours. IEEE Trans Geosci Remote Sens 49(2):706–716
143. Yang X, Chen L (2010) Using multi-temporal remote sensor imagery to detect earthquake-triggered landslides. Int J Appl Earth Obs Geoinf 12(6):487–495
144. Bruzzone L, Serpico SB (1997) An iterative technique for the detection of land-cover transitions in multitemporal remote-sensing images. IEEE Trans Geosci Remote Sens 35(4):858–867
145. Demir B, Bovolo F, Bruzzone L (2012) Detection of land-cover transitions in multitemporal remote sensing images with active-learning-based compound classification. IEEE Trans Geosci Remote Sens 50(5):1930–1941
146. Zhong J, Wang R (2006) Multi-temporal remote sensing change detection based on independent component analysis. Int J Remote Sens 27(10):2055–2061
147. Patil V, Sale D, Joshi MA (2013) Image fusion methods and quality assessment parameters. Asian J Eng Appl Technol 2(1):40–46
148. Kosesoy I, Cetin M, Tepecik A (2015) A toolbox for teaching image fusion in Matlab. Procedia-Soc Behav Sci 25(197):525–530
149. Paramanandham N, Rajendiran K (2018) Multi sensor image fusion for surveillance applications using hybrid image fusion algorithm. Multimedia Tools Appl 77(10):12405–12436
150. Jin X, Jiang Q, Yao S, Zhou D, Nie R, Hai J, He K (2017) A survey of infrared and visual image fusion methods. Infrared Phys Technol 1(85):478–501
151. Dogra A, Goyal B, Agrawal S (2017) From multi-scale decomposition to non-multi-scale decomposition methods: a comprehensive survey of image fusion techniques and its applications. IEEE Access 5:16040–16067
152. Saleem A, Beghdadi A, Boashash B (2012) Image fusion-based contrast enhancement. EURASIP J Image Video Process 2012(1):10
153. Liu Y, Chen X, Peng H, Wang Z (2017) Multi-focus image fusion with a deep convolutional neural network. Inf Fusion 36:191–207
154. Du C, Gao S (2017) Image segmentation-based multi-focus image fusion through multiscale convolutional neural network. IEEE Access 5:15750–15761
155. Liu Y, Chen X, Cheng J, Peng H (2017) A medical image fusion method based on convolutional neural networks. In: Proceedings of the 20th international conference on information fusion, p 1–7. IEEE
156. Masi G, Cozzolino D, Verdoliva L, Scarpa G (2016) Pansharpening by convolutional neural networks. Remote Sens 8(594):1–22
157. Liu Y, Chen X, Ward R, Wang Z (2016) Image fusion with convolutional sparse representation. IEEE Signal Process Lett 23(12):1882–1886
158. Huang W, Xiao L, Wei Z, Liu H, Tang S (2015) A new pan-sharpening method with deep neural networks. IEEE Geosci Remote Sens Lett 12(5):1037–1041
159. Corbat L, Nauval M, Henriet J, Lapayre JC (2020) A fusion method based on deep learning and case-based reasoning which improves the resulting medical image segmentations. Expert Syst Appl 147:113200
160. Wu X, Hui H, Niu M, Li L, Wang L, He B, Yang X, Li L, Li H, Tian J, Zha Y (2020) Deep learning-based multi-view fusion model for screening 2019 novel coronavirus pneumonia: a multicentre study. Eur J Radiol. https://doi.org/10.1016/j.ejrad.2020.109041
161. Kaur H, Koundal D, Kadyan V (2019) Multi modal image fusion: comparative analysis. In: 2019 international conference on communication and signal processing (ICCSP), p 0758–0761. IEEE

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.