International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 5, October 2022, pp. 4970~4977
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i5.pp4970-4977
Journal homepage: http://ijece.iaescore.com
Super resolution image reconstruction via dual dictionary
learning in sparse environment
Shashi Kiran Seetharamaswamy¹, Suresh Kaggere Veeranna²
¹Department of Telecommunication Engineering, Jawaharlal Nehru National College of Engineering, Shivamogga, Karnataka, India
²Department of Electronics and Communication Engineering, Siddaganga Institute of Technology, Tumakuru, Karnataka, India
Article Info

Article history:
Received Apr 24, 2021
Revised Mar 3, 2022
Accepted Apr 1, 2022

Keywords:
Dictionary learning
Sparse
Super-resolution

ABSTRACT

Patch-based super resolution is a method in which spatial features from a low-resolution (LR) patch are used as references for the reconstruction of high-resolution (HR) image patches. A sparse representation is extracted for each patch, and the obtained coefficients are used to recover the HR patch. One dictionary is trained for LR image patches and another for HR image patches, and both dictionaries are jointly trained. In the proposed method, the required high frequency (HF) details are treated as a combination of main high frequency (MHF) and residual high frequency (RHF). Hence, dual-dictionary learning is proposed, comprising main dictionary learning and residual dictionary learning, to recover MHF and RHF respectively and thereby recover finer image details. Experiments are carried out to test the proposed technique on different test images. The results illustrate the efficacy of the proposed algorithm.

This is an open access article under the CC BY-SA license.
Corresponding Author:
Shashi Kiran Seetharamaswamy
Department of Telecommunication Engineering, Jawaharlal Nehru National College of Engineering
Shivamogga, Karnataka, India
Email: shashikiran@jnnce.ac.in
1. INTRODUCTION
Super-resolution (SR) is the process of recovering a high-resolution (HR) image from one or more
low-resolution (LR) input images. Many areas like satellite imaging, high-definition television (HDTV),
microscopy, traffic surveillance, military, security monitoring, medical diagnosis, and remote sensing imaging
require good quality images for accurate analysis. The number of known variables in the LR images is smaller than the number of unknown variables in the HR image. Generally, a sufficient number of LR images is not available, and the blurring operators are unknown. Hence, SR reconstruction becomes an ill-posed problem. Many regularization techniques have been discussed for the solution of this ill-posed problem [1], [2].
The present work aims to recover the SR version of an image from an LR image. In conventional dictionary learning, one dictionary is used to train LR image patches, and another dictionary is used to train HR image patches; the HR image is then recovered using sparse representation. In this approach, it is difficult to completely recover the high-frequency details due to the limited size of the dictionary. To overcome this problem, the high frequency to be recovered can be considered as a combination of main high frequency (MHF) and residual high frequency (RHF).
The proposed method comprises dual dictionary learning levels and is a two-layer algorithm. High-frequency details are estimated by a step-by-step procedure using distinct dictionaries. First, MHF is recovered through main dictionary learning, which narrows the gap in the frequency spectrum. Afterwards, RHF is reconstructed through residual dictionary learning, which narrows this gap further. The method is analogous to coarse-to-fine recovery and yields better results. Orthogonal matching pursuit (OMP) is
used for generating the sparse representation coefficients for patches. The K-means singular value decomposition (K-SVD) algorithm is used for training the dictionaries.
This paper is arranged as follows. Section 2 revisits related work on dictionary learning. Section 3 introduces sparse coding and dictionary learning concepts. Section 4 presents the mathematical basics of dictionary learning. Section 5 discusses the proposed method of SR via dual dictionary learning. Section 6 describes the experimental evaluation and summarizes the results. Section 7 concludes the paper.
2. RELATED WORK
Dictionary learning is one of the important approaches to single-image super-resolution [3]. Dictionary learning for SR was introduced by Yang et al. [4], in which two dictionaries were jointly trained, one for LR image patches and the other for HR image patches. Zhang et al. [5] developed a computationally efficient method by replacing the sparse recovery step with a matrix multiplication. He et al. [6] used a Bayesian method employing a beta process prior for learning dictionaries that are more consistent between the two feature spaces. Bhatia et al. [7] proposed a technique that used coupled dictionary learning, utilizing example-based super-resolution for high-fidelity reconstruction. Yang et al. [8] presented regularized K-SVD for training the dictionary and employed regularized orthogonal matching pursuit (ROMP) to obtain the sparse representation coefficients of the patches. Ahmed et al. [9] discussed coupled dictionaries in which groups of clustered data are designed based on the correlation between data patches, which enables the recovery of fine details. Dictionary learning methods use a large number of image features for learning, and their performance degrades for complex images. This limitation was overcome by Zhao et al. [10] by combining deep learning features with the dictionary technique. It is difficult to represent different images with a single universal dictionary; hence, Yang et al. [11] introduced a fuzzy clustering and weighted method to overcome this limitation. Deeba et al. [12] proposed integrated dictionary learning in which residual image learning is combined with the K-SVD algorithm; wavelets are used, which yields better sparsity and structural details of the image. Huang and Dragotti [13] addressed single image super-resolution using a deep dictionary learning architecture: instead of multilayer dictionaries, $L$ dictionaries are used, divided into a synthesis model and an analysis model, where high-level features are extracted by the analysis dictionaries and the regression function is optimized by the synthesis dictionary. Each method aimed to take the reconstructed super-resolution image to the next level through different algorithms and various approaches.
3. SPARSE CODING AND DICTIONARY LEARNING
Sparse coding is a learning method for obtaining a sparse representation of the input. Any signal or image patch can be represented as a linear combination of only a few basic elements, each of which is known as an atom. A collection of many atoms forms a dictionary. A high-dimensional signal can be recovered from only a few linear measurements on the condition that the signal is sparse. Most natural images can be represented sparsely. If the image is not sparse, it can be made sparse using predefined dictionaries such as the discrete cosine transform (DCT), discrete Fourier transform (DFT), wavelets, contourlets, and curvelets. However, these dictionaries are suitable only for particular classes of images. Learning the dictionary instead of using predefined dictionaries greatly improves the performance [14]. In dictionary learning, the dictionary is tuned to the input images or signals.
Different types of dictionary learning algorithms are available, namely the method of optimal directions (MOD), K-SVD, stochastic gradient descent, the Lagrange dual method, and the least absolute shrinkage and selection operator (LASSO). The process of updating the dictionary is simple in MOD. The performance of K-SVD is better than MOD, but it has higher computational complexity for updating the atoms. Stochastic gradient descent is fast compared to MOD and K-SVD and, unlike K-SVD, works well with a smaller number of training samples. The advantage of the Lagrange dual method is its lower computational complexity. LASSO can solve the $l_1$ minimization more efficiently; it minimizes the least-squares error, which yields the globally optimal solution. Based on the sparsity-promoting function, sparse coding methods are classified into three types: a) $l_0$ norm methods, b) $l_1$ norm methods, and c) non-convex sparsity-promoting functions [15].
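Since OMP is the greedy $l_0$ sparse coder used later in this paper, a minimal textbook-style OMP sketch in Python/NumPy is given below. This is not the authors' MATLAB implementation; the function and variable names are illustrative.

```python
import numpy as np

def omp(D, y, sparsity):
    """Greedy orthogonal matching pursuit: select up to `sparsity` atoms
    (columns of D, assumed unit-norm) and least-squares fit their coefficients."""
    residual = y.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if idx not in support:
            support.append(idx)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

# Toy usage: 81-dimensional patches (9x9), 128 random atoms, 3 atoms per patch
rng = np.random.default_rng(0)
D = rng.standard_normal((81, 128))
D /= np.linalg.norm(D, axis=0)                          # unit-norm atoms
y = D[:, [5, 40, 99]] @ np.array([1.0, -0.5, 0.3])      # exactly 3-sparse signal
q = omp(D, y, sparsity=3)
print(np.nonzero(q)[0])                                 # typically recovers atoms 5, 40, 99
```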
4. MATHEMATICAL BASICS OF DICTIONARY LEARNING
Let $D \in \mathbb{R}^{n \times K}$ be an overcomplete dictionary of $K$ atoms ($K > n$). If a signal $x \in \mathbb{R}^{n}$ is represented as a sparse linear combination with respect to $D$, then $x$ can be written as $x = D\alpha_0$, where $\alpha_0 \in \mathbb{R}^{K}$ is a vector with very few non-zero elements. Usually, a few measurements $y$ are made from $x$ as in (1) [4]:

$$y = Lx = LD\alpha_0 \tag{1}$$
where $L \in \mathbb{R}^{k \times n}$ with $k < n$ is a projection matrix, $x$ is an HR image patch, and $y$ is its LR image patch. If $D$ is overcomplete, $x = D\alpha$ is underdetermined for the unknown coefficients $\alpha$, and $y = LD\alpha$ is even more underdetermined. It can be shown that the sparsest solution $\alpha_0$ of this equation is unique. Hence, the sparse representation of an HR image patch $x$ can be recovered from the LR image patch.
Two coupled dictionaries are utilized: $D_l$ for LR patches and $D_h$ for HR patches. The sparse representation of an LR patch is obtained with respect to $D_l$, and the resulting sparse coefficients are used with $D_h$ to recover the corresponding HR patch. For the SR of a test image, the learnt dictionaries are applied to the test image: the sparse coefficients of each LR patch are obtained and used to select the dictionary atoms most appropriate for that patch.
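A minimal sketch of this coupled-dictionary recovery step, reusing the `omp` helper sketched above and assuming jointly trained dictionaries `D_l` and `D_h` whose columns correspond atom-for-atom (the names are illustrative, not the authors' code):

```python
import numpy as np

def recover_hr_patch(D_l, D_h, y_lr, sparsity=3):
    """Sparse-code the LR (feature) patch over D_l, then synthesize the HR patch
    with the same coefficients over the coupled dictionary D_h."""
    q = omp(D_l, y_lr, sparsity)     # sparse representation w.r.t. the LR dictionary
    return D_h @ q                   # HR patch estimate: x_hat = D_h * q
```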
5. PROPOSED METHOD
The proposed method consists of two stages: a dictionary learning stage and an image synthesis stage. In the dictionary learning stage, dual dictionaries are trained, namely the main dictionary (MD) and the residual dictionary (RD). The image super-resolution stage takes the input image and performs super resolution using the model trained in the previous stage.
5.1. Dictionary learning stage
Two dictionaries, the main dictionary and the residual dictionary, are learnt using sparse representation [16]. Figure 1 depicts the training stage. Initially, a set of HR training images is collected. To derive an LR low-frequency image $L_{LF}$, an HR training image denoted by $H_{ORG}$ is blurred and then down-sampled. Bicubic interpolation is applied to $L_{LF}$, resulting in the HR low-frequency image denoted by $H_{LF}$. By subtracting $H_{LF}$ from $H_{ORG}$, the HR high-frequency image $H_{HF}$ is generated. Afterwards, the MD is constructed, which is made up of two coupled sub-dictionaries, called the low-frequency main dictionary (LMD) and the high-frequency main dictionary (HMD). Patches are extracted from $H_{LF}$ and $H_{HF}$ to build the training data $T = \{p_h^k, p_l^k\}_k$. The set of patches $\{p_h^k\}_k$ is derived from the HR high-frequency image $H_{HF}$, while the patches $\{p_l^k\}_k$ are extracted from the images obtained by filtering $H_{LF}$ with high-pass filters.
Figure 1. Process of dictionary learning stage
The next step is training the dictionary. The set of patches $\{p_l^k\}_k$ is trained by the K-SVD algorithm, resulting in the LMD as in (2):

$$LMD, \{q_k\} = \underset{LMD,\,\{q_k\}}{\arg\min} \sum_k \left\| p_l^k - LMD \cdot q_k \right\|_2^2 \quad \text{s.t.} \quad \|q_k\|_0 \le L \;\; \forall k \tag{2}$$
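The following is a minimal sketch of this training step, reusing the `omp` helper sketched earlier. It alternates OMP sparse coding with a rank-one SVD update per atom, in the spirit of K-SVD; it is a simplified illustration under my own assumptions, not the authors' exact implementation.

```python
import numpy as np

def ksvd(P, n_atoms=500, sparsity=3, n_iter=10, seed=0):
    """Simplified K-SVD. P is (patch_dim, n_patches); returns the dictionary D
    (patch_dim, n_atoms) and the sparse codes Q (n_atoms, n_patches)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((P.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Sparse coding step: column-wise OMP with the current dictionary
        Q = np.column_stack([omp(D, P[:, j], sparsity) for j in range(P.shape[1])])
        # Dictionary update: refine each atom with a rank-one SVD of its residual
        for a in range(n_atoms):
            users = np.nonzero(Q[a, :])[0]          # patches that use atom a
            if users.size == 0:
                continue
            E = P[:, users] - D @ Q[:, users] + np.outer(D[:, a], Q[a, users])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, a] = U[:, 0]                       # updated atom
            Q[a, users] = s[0] * Vt[0, :]           # updated coefficients for that atom
    return D, Q
```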
where $\{q_k\}_k$ are the sparse representation vectors [5]. Here, the assumption is made that the patch $p_h^k$ can be recovered by the approximation $p_h^k \approx HMD \cdot q_k$. Hence, HMD can be obtained by minimizing the mean error:

$$HMD = \underset{HMD}{\arg\min} \sum_k \left\| p_h^k - HMD \cdot q_k \right\|_2^2 \tag{3}$$

Let the matrix $P_h$ consist of $\{p_h^k\}_k$ and the matrix $Q$ consist of $\{q_k\}_k$ [5]. Therefore:

$$HMD = \underset{HMD}{\arg\min} \left\| P_h - HMD \cdot Q \right\|_2^2 \tag{4}$$
The solution for (4) is (5).
$$HMD = P_h Q^T (Q Q^T)^{-1} \tag{5}$$
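A one-line sketch of this closed-form update under the same notation (`Ph` holds the HR high-frequency patches as columns, `Q` the sparse codes from the LMD stage); the use of a pseudo-inverse is a numerical-robustness choice of mine, not stated in the paper:

```python
import numpy as np

def train_hmd(Ph, Q):
    """Equation (5): HMD = Ph Q^T (Q Q^T)^{-1}, computed via pinv for robustness."""
    return Ph @ Q.T @ np.linalg.pinv(Q @ Q.T)
```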
Next, the residual dictionary is trained as follows. Utilizing the main dictionary and $H_{LF}$, the HR MHF image, denoted $H_{MHF}$, is obtained. Using $H_{MHF}$, the HR temporary image $H_{TMP}$ is obtained, which contains more details than $H_{LF}$, together with the HR RHF image denoted $H_{RHF}$. The residual dictionary is then trained by utilizing $H_{TMP}$ and $H_{RHF}$. Together, the MD and RD are called the dual dictionaries.
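A minimal sketch of how the residual training pair could be formed from one training image, assuming hypothetical helpers `degrade` (blur + downsample), `bicubic_upscale`, and `synthesize_hf` (the MD-based patch synthesis of section 5.2); these names are illustrative and not the authors' code:

```python
# Build the residual-dictionary training pair for one HR training image H_org.
L_lf  = degrade(H_org)                   # blur + downsample, as in the training protocol
H_lf  = bicubic_upscale(L_lf)            # HR low-frequency image
H_mhf = synthesize_hf(H_lf, LMD, HMD)    # MHF recovered with the main dictionary
H_tmp = H_lf + H_mhf                     # temporary image, richer than H_lf
H_rhf = H_org - H_tmp                    # residual high frequency still missing
# RD (its coupled sub-dictionaries) is then trained on patches of
# (high-pass-filtered H_tmp, H_rhf), exactly as LMD/HMD were trained on (H_lf, H_hf).
```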
5.2. Image super-resolution stage
In this stage, an input LR image is converted into an estimated high-resolution image as in Figure 2. It is assumed that the input LR image was generated from an HR image with the same blur and down-sampling factor used in the learning stage. First, the input LR image, denoted $L_{input}$, is interpolated by the bicubic method, which results in the HR low-frequency image denoted by $H_{LF}$. The high-resolution MHF image, denoted $H_{MHF}$, is then obtained from $H_{LF}$ and the MD: $H_{LF}$ is filtered with the same high-pass filters used in the learning stage to form the patches $\{p_l^k\}_k$, and OMP is employed to obtain the sparse vectors $\{q_k\}_k$ used in (6).

$$\{\hat{p}_h^k\}_k = \{HMD \cdot q_k\}_k \tag{6}$$
The high-resolution patches $\{\hat{p}_h^k\}_k$ are generated as the product of HMD and the vectors $\{q_k\}_k$ as in (6). Let $S_k$ be an operator that extracts the patch at location $k$ from the HR image. The HR MHF image $H_{MHF}$ is constructed by solving the minimization problem

$$H_{MHF} = \underset{H_{MHF}}{\arg\min} \sum_k \left\| S_k H_{MHF} - \hat{p}_h^k \right\|_2^2 \tag{7}$$
The above optimization problem can be solved by a least-squares solution, which is given by (8).

$$H_{MHF} = \left[\sum_k S_k^T S_k\right]^{-1} \sum_k S_k^T \hat{p}_h^k \tag{8}$$
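Since each $S_k^T S_k$ is diagonal (it simply counts how often each pixel is covered by a patch), (8) reduces to averaging the overlapping patch estimates. A minimal sketch, assuming flattened square patches placed at known top-left corners (the names and grid convention are my assumptions):

```python
import numpy as np

def aggregate_patches(patches, corners, image_shape, patch_size=9):
    """Least-squares solution of (7)/(8): accumulate S_k^T p_hat_k and divide by
    the per-pixel coverage count sum_k S_k^T S_k (element-wise, since it is diagonal)."""
    acc = np.zeros(image_shape)
    count = np.zeros(image_shape)
    for p, (r, c) in zip(patches, corners):
        acc[r:r + patch_size, c:c + patch_size] += p.reshape(patch_size, patch_size)
        count[r:r + patch_size, c:c + patch_size] += 1.0
    return acc / np.maximum(count, 1.0)   # avoid division by zero for uncovered pixels
```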
Afterwards, the high-resolution temporary image $H_{TMP}$ is generated by summing $H_{LF}$ and $H_{MHF}$. Next, using the residual dictionary and $H_{TMP}$, a similar image reconstruction is carried out, resulting in the synthesis of $H_{RHF}$. Finally, the estimated HR image $H_{EST}$ is generated by adding $H_{TMP}$ and $H_{RHF}$. Figure 2 depicts the complete operation.
Figure 2. Process of image super-resolution stage
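Putting the two layers together, a high-level sketch of the synthesis stage (reusing the hypothetical helpers from the training sketch above; `LRD`/`HRD` denote the coupled sub-dictionaries of the residual dictionary, a naming assumption of mine):

```python
def super_resolve(L_input, LMD, HMD, LRD, HRD):
    """Two-layer synthesis: the main dictionary recovers MHF, the residual dictionary RHF."""
    H_lf  = bicubic_upscale(L_input)           # coarse HR estimate
    H_mhf = synthesize_hf(H_lf, LMD, HMD)      # layer 1: main high frequency
    H_tmp = H_lf + H_mhf
    H_rhf = synthesize_hf(H_tmp, LRD, HRD)     # layer 2: residual high frequency
    return H_tmp + H_rhf                       # H_est
```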
6. EXPERIMENTAL RESULTS
Results of the proposed method are discussed in this section. Based on [17], various dictionary sizes were tried, and it was observed after trial and error that a size of 500 atoms yielded better results. Hence, the number of atoms in the dictionary is set to 500 for both main dictionary learning and residual dictionary learning. The number of atoms used in the representation of each image patch is set to 3 [18], [19]. A patch size that is too large or too small tends to yield over-smoothing or unwanted artifacts [20]. Hence, the image patch size is taken as 9×9 with an overlap of one pixel between adjacent patches. The down-sampling scale factor is set to two, and a 5×5 Gaussian filter is used for blurring. A convolution function is used to extract features. Experiments are conducted on the MATLAB R2018a platform. The dictionaries are trained by the K-SVD dictionary training algorithm, and the trained main dictionary and residual dictionary are stored as .mat files. The experiments are carried out on the standard data sets Set 5 and Set 14, along with ten images from the B100 dataset. The test images of Set 5 are shown in Figure 3.
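A minimal sketch of the LR-image generation and patch-extraction settings described above (5×5 Gaussian blur, ×2 down-sampling, 9×9 patches); the Gaussian sigma and the grid step implied by a one-pixel overlap are my assumptions, and the experiments themselves were run in MATLAB:

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=5, sigma=1.0):
    """Separable 5x5 Gaussian kernel, normalized to sum to one."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def make_lr(hr, scale=2):
    """Blur with the 5x5 Gaussian, then down-sample by the scale factor."""
    blurred = convolve2d(hr, gaussian_kernel(5, 1.0), mode='same', boundary='symm')
    return blurred[::scale, ::scale]

def extract_patches(img, patch=9, overlap=1):
    """9x9 patches; a one-pixel overlap corresponds to a stride of patch - overlap = 8."""
    stride = patch - overlap
    return [img[r:r + patch, c:c + patch]
            for r in range(0, img.shape[0] - patch + 1, stride)
            for c in range(0, img.shape[1] - patch + 1, stride)]
```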
The different stages of obtaining the super-resolution image from the LR image are depicted in Figure 4, taking the 'man' image as an example. The input image of size 512×512 is shown in
Figure 4(a). The HR low-frequency image $H_{LF}$, obtained by interpolating the low-resolution image with the bicubic method, is shown in Figure 4(b). Utilizing the main dictionary and $H_{LF}$, the HR MHF image $H_{MHF}$ is obtained, as shown in Figure 4(c). The HR RHF image $H_{RHF}$ is shown in Figure 4(d), and the final super-resolution image is shown in Figure 4(e). It can be noticed that the SR image has fewer visual artifacts and sharper details.
Figure 3. Set 5 test images
Figure 4. Different stages of obtaining super-resolution image: (a) input image, (b) 𝐻𝐿𝐹, (c) 𝐻𝑀𝐻𝐹, (d) 𝐻𝑅𝐻𝐹,
and (e) super-resolution image
Table 1 tabulates the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values for the images of Set 5. Table 2 tabulates PSNR and SSIM values for ten images of Set 14. Table 3 tabulates PSNR and SSIM values for ten images of the B100 dataset. Results of the proposed method are compared with state-of-the-art SR algorithms. Table 4 tabulates PSNR values for various methods and the proposed method for scale factor ×2 on the Set 5 and Set 14 datasets, and Table 5 tabulates the corresponding SSIM values. From Tables 4 and 5, it can be observed that the proposed method yields the best PSNR values and SSIM values that are on par with the best competing methods.
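For reference, the two quantitative metrics reported in Tables 1-5 can be computed with scikit-image as sketched below (assuming 8-bit grayscale arrays; the library choice is mine, since the paper's experiments use MATLAB):

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(hr_reference, sr_estimate):
    """PSNR in dB and SSIM between the ground-truth HR image and the SR result."""
    psnr = peak_signal_noise_ratio(hr_reference, sr_estimate, data_range=255)
    ssim = structural_similarity(hr_reference, sr_estimate, data_range=255)
    return psnr, ssim
```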
Table 1. PSNR and SSIM for images of Set 5
Sl. No. Image PSNR SSIM
1 Baby 39.5 0.9628
2 Bird 39.76 0.9645
3 Butterfly 37.62 0.9588
4 Head 39.12 0.9614
5 Woman 37.2 0.9588
Table 2. PSNR and SSIM for images of Set 14
Sl. No. Image PSNR SSIM
1 Baboon 30.10 0.8612
2 Barbara 32.96 0.9113
3 Coastguard 32.07 0.9023
4 Face 36.90 0.9591
5 Flowers 33.29 0.9143
6 Foreman 35.10 0.9432
7 Lenna 35.61 0.9522
8 Man 33.76 0.9178
9 Monarch 34.21 0.9203
10 Pepper 37.31 0.9561
Table 3. PSNR and SSIM for ten images of B100 dataset
Sl. No. Image PSNR SSIM
1 189080 30.17 0.8312
2 227092 33.51 0.8640
3 14037 32.42 0.8166
4 45096 33.72 0.8359
5 106024 31.10 0.8642
6 143090 31.82 0.8590
7 241004 30.52 0.8551
8 253055 31.01 0.8945
9 260058 31.72 0.8561
10 296007 30.96 0.8518
Table 4. Benchmark results. Average PSNR for scale factor x2 on Set 5 and Set 14 datasets
Sl. No. Method Set5 Set14
1. Bicubic 33.66 30.23
2. Neighbor embedding with locally linear embedding (NE + LLE) [21] 35.77 31.76
3. Anchored neighborhood regression (ANR) [22] 35.83 31.80
4. KK [23] 36.20 32.11
5. SelfExSR [24] 36.49 32.44
6. A+ [25] 36.54 32.28
7. RFL [26] 36.55 32.36
8. Super-resolution convolutional neural network (SRCNN) [27] 36.65 32.29
9. Sparse coding based network (SCN) [28] 36.76 32.48
10. Very deep super-resolution convolutional networks (VDSR) [29] 37.53 32.97
11. Deeply-recursive convolutional network (DRCN) [30] 37.63 33.04
12. Unfolding super resolution network (USRNet) [31] 37.72 33.49
13. Deep recursive residual network (DRRN) [32] 37.74 33.23
14. Information distillation network (IDN) [33] 37.83 33.30
15. MADNet [34] 37.94 33.46
16. Enhanced deep super-resolution network (EDSR) [35] 38.20 34.02
17. Residual feature aggregation network (RFANet) [36] 38.26 34.16
18. Residual dense network (RDN) [37] 38.30 34.10
19. Proposed dual dictionary learning method 38.64 34.52
Table 5. Benchmark results. SSIM for scale factor x2 on Set 5 and Set 14 datasets
Sl. No. Method Set5 Set14
1. Bicubic 0.9299 0.8688
2. SRCNN [27] 0.9542 0.9063
3. SCN [28] 0.9590 0.9123
4. VDSR [29] 0.9587 0.9124
5. DRCN [30] 0.9588 0.9118
6. DRRN [32] 0.9591 0.9136
7. IDN [33] 0.9600 0.9148
8. MADNet [34] 0.9604 0.9167
9. EDSR [35] 0.9602 0.9195
10. RFANet [36] 0.9615 0.9220
11. RDN [37] 0.9614 0.9212
12. Proposed dual dictionary learning method 0.9614 0.9213
Visual results are evaluated for the Set 5 images in Figure 5. Figures 5(a) to 5(e) show the LR images of the baby, bird, butterfly, head, and woman images, and Figures 5(f) to 5(j) show the corresponding HR images, respectively. It can be observed that the proposed method results in higher image quality.
Figure 5. Low-resolution and high-resolution images of baby, bird, butterfly, head and woman; (a) LR image
of baby, (b) LR image of bird, (c) LR image of butterfly, (d) LR image of head, (e) LR image of woman,
(f) HR image of baby, (g) HR image of bird, (h) HR image of butterfly, (i) HR image of head,
and (j) HR image of woman
7. CONCLUSION
The paper presented a method for SR based on dual dictionary learning and sparse representation. The method reconstructs lost high-frequency details by utilizing main dictionary learning and residual dictionary learning. The qualitative results given in the experimental section demonstrate that the obtained SR image is of higher quality. The improved average PSNR of 38.64 dB on the Set 5 dataset and 34.52 dB on the Set 14 dataset, as compared to other methods, also confirms the quantitative improvement.
REFERENCES
[1] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Transactions on Image
Processing, vol. 13, no. 10, pp. 1327–1344, Oct. 2004, doi: 10.1109/TIP.2004.834669.
[2] K. V. Suresh, G. M. Kumar, and A. N. Rajagopalan, “Superresolution of license plates in real traffic videos,” IEEE Transactions
on Intelligent Transportation Systems, vol. 8, no. 2, pp. 321–331, Jun. 2007, doi: 10.1109/TITS.2007.895291.
[3] S. S. Kiran and K. V. Suresh, “Challenges in sparse image reconstruction,” International Journal of Image and Graphics, vol. 21,
no. 03, Jul. 2021, doi: 10.1142/S0219467821500261.
[4] J. Yang, J. Wright, T. S. Huang, and Yi Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image
Processing, vol. 19, no. 11, pp. 2861–2873, Nov. 2010, doi: 10.1109/TIP.2010.2050625.
[5] H. C. Zhang, Y. Zhang, and T. S. Huang, “Efficient sparse representation based image super resolution via dual dictionary
learning,” in 2011 IEEE International Conference on Multimedia and Expo, Jul. 2011, pp. 1–6, doi:
10.1109/ICME.2011.6011877.
[6] L. He, H. Qi, and R. Zaretzki, “Beta process joint dictionary learning for coupled feature spaces with application to single image
super-resolution,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2013, pp. 345–352, doi:
10.1109/CVPR.2013.51.
[7] K. K. Bhatia, A. N. Price, W. Shi, J. V. Hajnal, and D. Rueckert, “Super-resolution reconstruction of cardiac MRI using coupled
dictionary learning,” in 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), Apr. 2014, pp. 947–950, doi:
10.1109/ISBI.2014.6868028.
[8] J. Yang, X. Zhang, W. Peng, and Z. Liu, “A novel regularized K-SVD dictionary learning based medical image super-resolution
algorithm,” Multimedia Tools and Applications, vol. 75, no. 21, pp. 13107–13120, Nov. 2016, doi: 10.1007/s11042-015-2744-9.
[9] J. Ahmed and M. A. Shah, “Single image super-resolution by directionally structured coupled dictionary learning,” EURASIP
Journal on Image and Video Processing, vol. 2016, no. 1, Dec. 2016, doi: 10.1186/s13640-016-0141-6.
[10] L. Zhao, Q. Sun, and Z. Zhang, “Single image super-resolution based on deep learning features and dictionary model,” IEEE
Access, vol. 5, pp. 17126–17135, 2017, doi: 10.1109/ACCESS.2017.2736058.
[11] X. Yang, W. Wu, K. Liu, W. Chen, and Z. Zhou, “Multiple dictionary pairs learning and sparse representation-based infrared
image super-resolution with improved fuzzy clustering,” Soft Computing, vol. 22, no. 5, pp. 1385–1398, Mar. 2018, doi:
10.1007/s00500-017-2812-3.
[12] F. Deeba, S. Kun, W. Wang, J. Ahmed, and B. Qadir, “Wavelet integrated residual dictionary training for single image super-
resolution,” Multimedia Tools and Applications, vol. 78, no. 19, pp. 27683–27701, Oct. 2019, doi: 10.1007/s11042-019-07850-4.
[13] J.-J. Huang and P. L. Dragotti, “A deep dictionary model for image super-resolution,” in 2018 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP), Apr. 2018, pp. 6777–6781, doi: 10.1109/ICASSP.2018.8461651.
[14] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE
Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, Dec. 2006, doi: 10.1109/TIP.2006.881969.
[15] C. Bao, H. Ji, Y. Quan, and Z. Shen, “Dictionary learning for sparse coding: Algorithms and convergence analysis,” IEEE
Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 7, pp. 1356–1369, Jul. 2016, doi:
10.1109/TPAMI.2015.2487966.
[16] J. Zhang, C. Zhao, R. Xiong, S. Ma, and D. Zhao, “Image super-resolution via dual-dictionary learning and sparse
representation,” in 2012 IEEE International Symposium on Circuits and Systems, May 2012, pp. 1688–1691, doi:
10.1109/ISCAS.2012.6271583.
[17] B. Dumitrescu and P. Irofti, “Optimizing dictionary size,” in Dictionary Learning Algorithms and Applications, Cham: Springer
International Publishing, 2018, pp. 145–165.
[18] Y. Lu, J. Zhao, and G. Wang, “Few-view image reconstruction with dual dictionaries,” Physics in Medicine and Biology, vol. 57,
no. 1, pp. 173–189, Jan. 2012, doi: 10.1088/0031-9155/57/1/173.
[19] Q. Zhang, Y. Liu, R. S. Blum, J. Han, and D. Tao, “Sparse representation based multi-sensor image fusion for multi-focus and
multi-modality images: A review,” Information Fusion, vol. 40, pp. 57–75, Mar. 2018, doi: 10.1016/j.inffus.2017.05.006.
[20] Y. Han, Y. Zhao, and Q. Wang, “Dictionary learning based noisy image super-resolution via distance penalty weight model,”
PLOS ONE, vol. 12, no. 7, p. e0182165, Jul. 2017, doi: 10.1371/journal.pone.0182165.
[21] H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in Proceedings of the 2004 IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., 2004, vol. 1, pp. 275–282, doi:
10.1109/CVPR.2004.1315043.
[22] R. Timofte, V. De Smet, and L. van Gool, “Anchored neighborhood regression for fast example-based super-resolution,” in 2013 IEEE
International Conference on Computer Vision, Dec. 2013, pp. 1920–1927, doi: 10.1109/ICCV.2013.241.
[23] K. I. Kim and Y. Kwon, “Single-image super-resolution using sparse regression and natural image prior,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 32, no. 6, pp. 1127–1133, Jun. 2010, doi: 10.1109/TPAMI.2010.25.
[24] J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in 2015 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 5197–5206, doi: 10.1109/CVPR.2015.7299156.
[25] R. Timofte, V. De Smet, and L. van Gool, “A+: Adjusted anchored neighborhood regression for fast super-resolution,” in Lecture
Notes in Computer Science, 2015, pp. 111–126.
[26] S. Schulter, C. Leistner, and H. Bischof, “Fast and accurate image upscaling with super-resolution forests,” in 2015 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 3791–3799, doi: 10.1109/CVPR.2015.7299003.
[27] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, Feb. 2016, doi: 10.1109/TPAMI.2015.2439281.
[28] Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang, “Deep networks for image super-resolution with sparse prior,” in 2015 IEEE
International Conference on Computer Vision (ICCV), Dec. 2015, pp. 370–378, doi: 10.1109/ICCV.2015.50.
[29] J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in 2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 1646–1654, doi: 10.1109/CVPR.2016.182.
[30] J. Kim, J. K. Lee, and K. M. Lee, “Deeply-recursive convolutional network for image super-resolution,” in 2016 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 1637–1645, doi: 10.1109/CVPR.2016.181.
[31] K. Zhang, L. Van Gool, and R. Timofte, “Deep unfolding network for image super-resolution,” 2020, doi: 10.3929/ethz-b-
000460815.
[32] Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in 2017 IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), Jul. 2017, pp. 2790–2798, doi: 10.1109/CVPR.2017.298.
[33] Z. Hui, X. Wang, and X. Gao, “Fast and accurate single image super-resolution via information distillation network,” in 2018
IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun. 2018, pp. 723–731, doi: 10.1109/CVPR.2018.00082.
[34] R. Lan, L. Sun, Z. Liu, H. Lu, C. Pang, and X. Luo, “MADNet: A fast and lightweight network for single-image super
resolution,” IEEE Transactions on Cybernetics, vol. 51, no. 3, pp. 1443–1453, Mar. 2021, doi: 10.1109/TCYB.2020.2970104.
[35] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in 2017
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jul. 2017, pp. 1132–1140, doi:
10.1109/CVPRW.2017.151.
[36] J. Liu, W. Zhang, Y. Tang, J. Tang, and G. Wu, “Residual feature aggregation network for image super-resolution,” in 2020
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2020, pp. 2356–2365, doi:
10.1109/CVPR42600.2020.00243.
[37] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in 2018 IEEE/CVF
Conference on Computer Vision and Pattern Recognition, Jun. 2018, pp. 2472–2481, doi: 10.1109/CVPR.2018.00262.
BIOGRAPHIES OF AUTHORS
Shashi Kiran Seetharamaswamy received the B.E. degree in Electronics and Communication Engineering in 1990 and the M.Sc. (Engg.) degree in 2011 from Visvesvaraya Technological University, India. He is working as a faculty member in the Department of Electronics and Telecommunication Engineering, JNN College of Engineering, Shivamogga. He is currently pursuing the Ph.D. degree at Visvesvaraya Technological University, India. His research interests include character recognition, pattern recognition, and image processing. He can be contacted at email: shashikiran@jnnce.ac.in.
Suresh Kaggere Veeranna received the B.E. degree in Electronics and Communication Engineering in 1990 and the M.Tech. degree in Industrial Electronics in 1993, both from the University of Mysore, India. From March 1990 to September 1991, he served as a faculty member in the Department of Electronics and Communication Engineering, Kalpataru Institute of Technology, Tiptur, India. Since 1993, he has been working as a faculty member in the Department of Electronics and Communication Engineering, Siddaganga Institute of Technology, Tumkur, India. He completed the Ph.D. degree in the Department of Electrical Engineering, Indian Institute of Technology, Madras in 2007. His research interests include signal processing and computer vision. He can be contacted at email: sureshkvsit@sit.ac.in.

More Related Content

PDF
Image Super-Resolution Reconstruction Based On Multi-Dictionary Learning
PDF
Asilomar09 compressive superres
PDF
Image reconstruction through compressive sampling matching pursuit and curvel...
PDF
5 single image super resolution using
PDF
Image Super-Resolution using Single Image Semi Coupled Dictionary Learning
PDF
Learning Based Single Frame Image Super-resolution Using Fast Discrete Curvel...
PDF
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDY
PDF
Single Image Super Resolution using Interpolation and Discrete Wavelet Transform
Image Super-Resolution Reconstruction Based On Multi-Dictionary Learning
Asilomar09 compressive superres
Image reconstruction through compressive sampling matching pursuit and curvel...
5 single image super resolution using
Image Super-Resolution using Single Image Semi Coupled Dictionary Learning
Learning Based Single Frame Image Super-resolution Using Fast Discrete Curvel...
SINGLE IMAGE SUPER RESOLUTION: A COMPARATIVE STUDY
Single Image Super Resolution using Interpolation and Discrete Wavelet Transform

Similar to Super resolution image reconstruction via dual dictionary learning in sparse environment (20)

DOCX
Image super resolution based on
PDF
Single Image Super-Resolution Using Analytical Solution for L2-L2 Algorithm
PDF
OBTAINING SUPER-RESOLUTION IMAGES BY COMBINING LOW-RESOLUTION IMAGES WITH HIG...
PDF
Literature Review on Single Image Super Resolution
PDF
Image resolution enhancement via multi surface fitting
PDF
Image Restitution Using Non-Locally Centralized Sparse Representation Model
PDF
Survey on Single image Super Resolution Techniques
PDF
Survey on Single image Super Resolution Techniques
PDF
Gi2429352937
PDF
Tuto part2
PDF
Highly Adaptive Image Restoration In Compressive Sensing Applications Using S...
DOCX
Ieee transactions on image processing
PDF
A comprehensive study of different image super resolution reconstruction algo...
PDF
A comprehensive study of different image super resolution reconstruction algo...
PDF
Super resolution in deep learning era - Jaejun Yoo
PDF
Ijecet 06 10_002
PDF
Ijecet 06 10_002
PDF
PDF
Image Restoration UsingNonlocally Centralized Sparse Representation and histo...
PDF
Analysis of Various Single Frame Super Resolution Techniques for better PSNR
Image super resolution based on
Single Image Super-Resolution Using Analytical Solution for L2-L2 Algorithm
OBTAINING SUPER-RESOLUTION IMAGES BY COMBINING LOW-RESOLUTION IMAGES WITH HIG...
Literature Review on Single Image Super Resolution
Image resolution enhancement via multi surface fitting
Image Restitution Using Non-Locally Centralized Sparse Representation Model
Survey on Single image Super Resolution Techniques
Survey on Single image Super Resolution Techniques
Gi2429352937
Tuto part2
Highly Adaptive Image Restoration In Compressive Sensing Applications Using S...
Ieee transactions on image processing
A comprehensive study of different image super resolution reconstruction algo...
A comprehensive study of different image super resolution reconstruction algo...
Super resolution in deep learning era - Jaejun Yoo
Ijecet 06 10_002
Ijecet 06 10_002
Image Restoration UsingNonlocally Centralized Sparse Representation and histo...
Analysis of Various Single Frame Super Resolution Techniques for better PSNR
Ad

More from IJECEIAES (20)

PDF
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...
PDF
Embedded machine learning-based road conditions and driving behavior monitoring
PDF
Advanced control scheme of doubly fed induction generator for wind turbine us...
PDF
Neural network optimizer of proportional-integral-differential controller par...
PDF
An improved modulation technique suitable for a three level flying capacitor ...
PDF
A review on features and methods of potential fishing zone
PDF
Electrical signal interference minimization using appropriate core material f...
PDF
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
PDF
Bibliometric analysis highlighting the role of women in addressing climate ch...
PDF
Voltage and frequency control of microgrid in presence of micro-turbine inter...
PDF
Enhancing battery system identification: nonlinear autoregressive modeling fo...
PDF
Smart grid deployment: from a bibliometric analysis to a survey
PDF
Use of analytical hierarchy process for selecting and prioritizing islanding ...
PDF
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...
PDF
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...
PDF
Adaptive synchronous sliding control for a robot manipulator based on neural ...
PDF
Remote field-programmable gate array laboratory for signal acquisition and de...
PDF
Detecting and resolving feature envy through automated machine learning and m...
PDF
Smart monitoring technique for solar cell systems using internet of things ba...
PDF
An efficient security framework for intrusion detection and prevention in int...
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...
Embedded machine learning-based road conditions and driving behavior monitoring
Advanced control scheme of doubly fed induction generator for wind turbine us...
Neural network optimizer of proportional-integral-differential controller par...
An improved modulation technique suitable for a three level flying capacitor ...
A review on features and methods of potential fishing zone
Electrical signal interference minimization using appropriate core material f...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Bibliometric analysis highlighting the role of women in addressing climate ch...
Voltage and frequency control of microgrid in presence of micro-turbine inter...
Enhancing battery system identification: nonlinear autoregressive modeling fo...
Smart grid deployment: from a bibliometric analysis to a survey
Use of analytical hierarchy process for selecting and prioritizing islanding ...
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...
Adaptive synchronous sliding control for a robot manipulator based on neural ...
Remote field-programmable gate array laboratory for signal acquisition and de...
Detecting and resolving feature envy through automated machine learning and m...
Smart monitoring technique for solar cell systems using internet of things ba...
An efficient security framework for intrusion detection and prevention in int...
Ad

Recently uploaded (20)

PPTX
bas. eng. economics group 4 presentation 1.pptx
PPTX
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PPTX
additive manufacturing of ss316l using mig welding
PDF
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PDF
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PDF
PPT on Performance Review to get promotions
PPTX
CH1 Production IntroductoryConcepts.pptx
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PPTX
Geodesy 1.pptx...............................................
PPTX
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
PDF
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPTX
Lecture Notes Electrical Wiring System Components
PPTX
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
PPTX
OOP with Java - Java Introduction (Basics)
bas. eng. economics group 4 presentation 1.pptx
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
additive manufacturing of ss316l using mig welding
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
Automation-in-Manufacturing-Chapter-Introduction.pdf
PPT on Performance Review to get promotions
CH1 Production IntroductoryConcepts.pptx
Model Code of Practice - Construction Work - 21102022 .pdf
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
Geodesy 1.pptx...............................................
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
Lecture Notes Electrical Wiring System Components
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
OOP with Java - Java Introduction (Basics)

Super resolution image reconstruction via dual dictionary learning in sparse environment

  • 1. International Journal of Electrical and Computer Engineering (IJECE) Vol. 12, No. 5, October 2022, pp. 4970~4977 ISSN: 2088-8708, DOI: 10.11591/ijece.v12i5.pp4970-4977  4970 Journal homepage: http://ijece.iaescore.com Super resolution image reconstruction via dual dictionary learning in sparse environment Shashi Kiran Seetharamaswamy1 , Suresh Kaggere Veeranna2 1 Department of Telecommunication Engineering, Jawaharlal Nehru National College of Engineering, Shivamogga, Karnataka, India 2 Department of Electronics and Communication Engineering, Siddaganga Institute of Technology, Tumakuru, Karnataka, India Article Info ABSTRACT Article history: Received Apr 24, 2021 Revised Mar 3, 2022 Accepted Apr 1, 2022 Patch-based super resolution is a method in which spatial features from a low-resolution (LR) patch are used as references for the reconstruction of high-resolution (HR) image patches. Sparse representation for each patch is extracted. These coefficients obtained are used to recover HR patch. One dictionary is trained for LR image patches, and another dictionary is trained for HR image patches and both dictionaries are jointly trained. In the proposed method, high frequency (HF) details required are treated as combination of main high frequency (MHF) and residual high frequency (RHF). Hence, dual-dictionary learning is proposed for main dictionary learning and residual dictionary learning. This is required to recover MHF and RHF respectively for recovering finer image details. Experiments are carried out to test the proposed technique on different test images. The results illustrate the efficacy of the proposed algorithm. Keywords: Dictionary learning Sparse Super-resolution This is an open access article under the CC BY-SA license. Corresponding Author: Shashi Kiran Seetharamaswamy Department of Telecommunication Engineering, Jawaharlal Nehru National College of Engineering Shivamogga, Karnataka, India Email: shashikiran@jnnce.ac.in 1. INTRODUCTION Super-resolution (SR) is the process of recovering a high-resolution (HR) image from one or more low-resolution (LR) input images. Many areas like satellite imaging, high-definition television (HDTV), microscopy, traffic surveillance, military, security monitoring, medical diagnosis, and remote sensing imaging require good quality images for accurate analysis. Known variables in LR images are less than the unknown variables in HR images. Generally, sufficient number of LR images will not be available. Also, blurring operators are unknown. Hence, SR reconstruction becomes ill-posed problem. Many regularization techniques are discussed for the solution of ill-posed problem [1], [2]. The present work aims to recover the SR version of an image from a LR image. In conventional dictionary learning, one dictionary is used to train LR image patches, and another dictionary is used to train HR image patches. HR image is recovered using sparse representation. In this approach, it is difficult to completely recover high-frequency details due to the limitation of size of the dictionary. To overcome the above problem, high frequency to be recovered can be considered as a combination of main high frequency (MHF) and residual high frequency (RHF). The proposed method comprises of dual dictionary learning levels. It is a two-layer algorithm. High frequency details are estimated by step-by-step procedure using distinct dictionaries. 
Primarily, MHF is first recovered from main dictionary learning which reduces the gap of the frequency spectrum. Afterwards, RHF is reconstructed from residual dictionary learning which results in shorter gap of the frequency spectrum. The method is analogous to coarse to fine recovery and yields better results. Orthogonal matching pursuit (OMP) is
  • 2. Int J Elec & Comp Eng ISSN: 2088-8708  Super resolution image reconstruction via dual dictionary learning … (Shashi Kiran Seetharamaswamy) 4971 used for generating sparse representation coefficients for patches. K-means singular value decomposition (K-SVD) algorithm is used for training the dictionaries. This paper is arranged as follows. Section 2 revisits the related work regarding dictionary learning. Section 3 introduces sparse coding and dictionary learning concepts. Section 4 presents mathematical basics of dictionary learning. Section 5 discuss the proposed method of SR from dual dictionary learning. Section 6 depicts experimental evaluation and summarizes results. Conclusion is done in section 7. 2. RELATED WORK Dictionary learning is one of important approach of single-image super-resolution [3]. Dictionary learning for SR was introduced by Yang et al. [4] in which two dictionaries were jointly trained, one for LR image patches and the other for HR image patches. Zhang et al. [5] developed a computationally efficient method by replacing the sparse recovery step by matrix multiplication. He et al. [6] used Bayesian method employing a beta process prior for learning the dictionaries which was more consistent between the two feature spaces. Bhatia et al. [7] proposed a technique that used coupled dictionary learning by utilizing example-based super-resolution for high fidelity reconstruction. Yang et al. [8] presented regularized K-SVD for training dictionary and employed regularized orthogonal matching pursuit (ROMP) for sparse representation coefficients for patches. Ahmed et al. [9] discussed coupled dictionaries in which group of clustered data are designed based on correlation between data patches. By this, recovery of fine details is achieved. Dictionary learning methods use large number of image features for learning and also performance reduces for complex images. This limitation was overcome by Zhao et al. [10] by utilizing deep learning features with dictionary technique. It was difficult to represent different images with a single universal dictionary. Hence, Yang et al. [11] introduced the fuzzy clustering and weighted method to overcome this limitation. Deeba et al. [12] proposed integrated dictionary learning in which residual image learning is combined with K-SVD algorithm. In this, wavelets are used which yields better sparsity and structural details about the image. Huang and Dragotti [13] addressed the problem of single image super-resolution by using deep dictionary learning architecture. Instead of multilayer dictionaries, 𝐿 dictionaries are used which are divided into synthesis model and the analysis model. High level features are extracted from analysis dictionaries and regression function is optimized by the synthesis dictionary. Each method aimed to improve the reconstructed super-resolution image to the next level by using different algorithms and through various approaches. 3. SPARSE CODING AND DICTIONARY LEARNING Sparse coding is a learning method for obtaining sparse representation of the input. Any signal or an image patch can be represented as a linear combination of only few basic elements. Each basic element is known as atom. Many numbers of atoms form a dictionary. A high-dimensional signal can be recovered with only a few linear measurements with the condition that the signal is sparse. Most of the natural images can be represented in sparse representation. 
If the image is not sparse, the image can be converted into sparse by predefined dictionaries like discrete cosine transform (DCT), discrete Fourier transform (DFT), wavelets, contourlets, and curvelets. But these dictionaries are suitable only for particular images. Learning the dictionary instead of using predefined dictionaries will highly improve the performance [14]. In dictionary learning, dictionary is tuned to the input images or signals. Different types of dictionary learning algorithms are available, namely method of optimal directions (MOD), K-SVD, stochastic gradient descent, Lagrange dual method and least absolute shrinkage and selection operator (LASSO). The process of updating the dictionary is simple in MOD. Performance of K-SVD is better than MOD but it has higher computational complexity for updating the atoms. Stochastic gradient descent is fast compared to MOD and K-SVD. Unlike K-SVD, stochastic gradient descent works well with less number of training samples. The advantage of Lagrange dual method is that it has lesser computationally complexity. LASSO can solve the 𝑙1 minimization more efficiently. It minimizes the least square error which yields the globally optimal solution. Based on sparsity promoting function, sparse coding methods are classified into three types: a) 𝑙0 norm method, b) 𝑙1 norm method, and c) non-convex sparsity promoting function [15]. 4. MATHEMATICAL BASICS OF DICTIONARY LEARNING Let 𝐷 ∈ 𝑅𝑛𝑋𝐾 be an overcomplete dictionary of K atoms (K > n). If a signal 𝑥 ∈ 𝑅𝑛 is represented as a sparse linear combination with respect to 𝐷, then 𝑥 can be treated as 𝑥 = 𝐷 ∝0 where ∝0∈ 𝑅𝑘 is a vector with very few non-zero elements. Usually, few measurements 𝑦 are made from 𝑥 as in (1) [4]: 𝑦 = 𝐿𝑥 = 𝐿𝐷 ∝0 (1)
  • 3.  ISSN: 2088-8708 Int J Elec & Comp Eng, Vol. 12, No. 5, October 2022: 4970-4977 4972 where 𝐿 ∈ 𝑅𝐾𝑋𝑛 with k < n is a projection matrix. 𝑥 is a HR image patch and 𝑦 is its LR image patch. If 𝐷 is overcomplete, 𝑥 = 𝐷 ∝ is underdetermined for unknown coefficients ∝. Hence 𝑦 = 𝐿𝐷 ∝ is more underdetermined. It can be proved that the sparsest solution ∝0 to this equation will be unique. Hence, sparse representation of a HR image patch 𝑥 can be recovered from the LR image patch. Two coupled dictionaries are utilized. 𝐷𝑙 is used for LR patches and 𝐷ℎ is used for HR patches. Sparse representation of LR patch is obtained from 𝐷𝑙. These sparse coefficients are used to recover the corresponding HR patch in 𝐷ℎ. For the SR of test image, learnt dictionaries are applied to test image. Sparse coefficients of LR image are obtained and are used to select the more suitable patch in the dictionary which will be most appropriate for the patches. 5. PROPOSED METHOD The proposed method consists of two stages. First one is dictionary learning stage and second one is image synthesis stage. In dictionary learning stage, dual dictionaries are trained. They are main dictionary (MD) and residual dictionary (RD). Image super-resolution stage takes input image and performs super resolution using the trained model from the previous stage. 5.1. Dictionary learning stage Two dictionaries named as Main dictionary and Residual dictionary are learnt using sparse representation [16]. Figure 1 depicts training stage. Initially, a set of training HR images are collected. To derive a LR low-frequency image 𝐿𝐿𝐹, a HR training image denoted by 𝐻𝑂𝑅𝐺 is blurred and then down-sampled. Bicubic interpolation is done on 𝐿𝐿𝐹 resulting in HR low-frequency image denoted by 𝐻𝐿𝐹. By subtracting 𝐻𝐿𝐹 from 𝐻𝑂𝑅𝐺, HR high-frequency image 𝐻𝐻𝐹 is generated. Afterwards, MD is constructed which is made up of two coupled sub-dictionaries. They are called as low-frequency main dictionary (LMD) and high-frequency main dictionary (HMD). Patches are extracted from 𝐻𝐿𝐹 and 𝐻𝐻𝐹 to build the training data 𝑇 = {𝑝ℎ 𝑘 , 𝑝𝑙 𝑘 }𝑘 . Set of patches derived from the HR image 𝐻𝐻𝐹 is 𝑝ℎ 𝑘 . The patches are constructed by first extracting patches from images obtained by filtering 𝐻𝐿𝐹 with high-pass filters is 𝑝𝑙 𝑘 . Figure 1. Process of dictionary learning stage Next step is training the dictionary. The set of patches {𝑝𝑙 𝑘 }𝑘 are trained by the K-SVD algorithm resulting in LMD as (2): 𝐿𝑀𝐷, {𝑞𝑘} = 𝑎𝑟𝑔𝑚𝑖𝑛 𝐿𝑀𝐷, {𝑞𝑘} ∑ ‖𝑝𝑙 𝑘 − 𝐿𝑀𝐷. 𝑞𝑘 ‖2 2 𝑘 s.t ‖𝑞𝑘‖0 ≤ 𝐿 ∀𝑘, (2) where {𝑞𝑘}𝑘 are sparse representation vectors [5]. Here, assumption is made that patch 𝑝ℎ 𝑘 can be recovered by approximating 𝑝ℎ 𝑘 ≈ 𝐻𝑀𝐷. 𝑞𝑘 . Hence, HMD can be obtained by minimizing mean error. 𝐻𝑀𝐷 = 𝑎𝑟𝑔𝑚𝑖𝑛 𝐻𝑀𝐷 ∑ ‖𝑝ℎ 𝑘 − 𝐻𝑀𝐷. 𝑞𝑘 ‖2 2 𝑘 (3) Let the matrix 𝑃ℎ consist of {𝑝ℎ 𝑘 }𝑘 and matrix Q consist of {𝑞𝑘}𝑘 [5]. Therefore: 𝐻𝑀𝐷 = 𝑎𝑟𝑔𝑚𝑖𝑛 𝐻𝑀𝐷 ∑ ‖𝑃ℎ − 𝐻𝑀𝐷. 𝑄‖2 2 𝑘 (4) The solution for (4) is (5).
  • 4. Int J Elec & Comp Eng ISSN: 2088-8708  Super resolution image reconstruction via dual dictionary learning … (Shashi Kiran Seetharamaswamy) 4973 𝐻𝑀𝐷 = 𝑃ℎ𝑄𝑇 (𝑄𝑄𝑇 )−1 (5) Next, residual dictionary is trained as follows: utilizing the main dictionary and 𝐻𝐿𝐹, HR MHF image is obtained. It is denoted by 𝐻𝑀𝐻𝐹, and using 𝐻𝑀𝐻𝐹, HR temporary image (𝐻𝑇𝑀𝑃) is obtained which consists of more details than 𝐻𝐿𝐹 and HR RHF image denoted by 𝐻𝑅𝐻𝐹. Thus, residual dictionary is obtained by utilizing 𝐻𝑇𝑀𝑃and 𝐻𝑅𝐻𝐹. Both MD and RD are combinedly called as dual dictionaries. 5.2. Image super-resolution stage In this stage, an input LR image is converted into estimated high-resolution image as in Figure 2. It is assumed that input LR image is developed by HR image by the similar blur and down sampled by the same amount which is done in the learning stage. In the first stage, input LR image denoted by 𝐿𝑖𝑛𝑝𝑢𝑡 is interpolated by bicubic method which results in HR low frequency image denoted by 𝐻𝐿𝐹. High-resolution MHF image denoted by 𝐻𝑀𝐻𝐹 is obtained from 𝐻𝐿𝐹 and MD. OMP is employed to obtain {𝑝𝑙 𝑘 }𝑘 and the sparse vectors {𝑞𝑘}𝑘 as (6). Also, 𝐻𝐿𝐹 is filtered with the similar high pass filters used in the learning stage. {𝑝̂ℎ 𝑘 }𝑘 = {𝐻𝑀𝐷. 𝑞𝑘}𝑘 (6) High-resolution patches {𝑝̂ℎ 𝑘 }𝑘 are generated by the product of HMD and vectors {𝑞𝑘}𝑘 as in (5). Let 𝑆𝑘 be defined as an operator which extracts a patch from the HR image in location k. The HR MHF image, 𝐻𝑀𝐻𝐹 is constructed by solving the minimization problem. 𝐻𝑀𝐻𝐹 = 𝑎𝑟𝑔𝑚𝑖𝑛 𝐻𝑀𝐻𝐹 ∑ ‖𝑆𝑘𝐻𝑀𝐻𝐹 − 𝑝̂ℎ 𝑘 ‖2 2 𝑘 (7) The above optimization problem can be solved by least square solution, which is given by (8). 𝐻𝑀𝐻𝐹 = [∑ 𝑆𝑘 𝑇 𝑆𝑘 𝑘 ]−1 ∑ 𝑆𝑘 𝑇 𝑘 𝑝̂ℎ 𝑘 (8) Afterwards, the high-resolution temporary image, 𝐻𝑇𝑀𝑃 is generated by summing 𝐻𝐿𝐹 with 𝐻𝑀𝐻𝐹. Next, by using residual dictionary and 𝐻𝑇𝑀𝑃, similar image reconstruction is done resulting in synthesis of 𝐻𝑅𝐻𝐹. Finally, HR estimated image, 𝐻𝐸𝑆𝑇 is generated by adding 𝐻𝑇𝑀𝑃 and 𝐻𝑅𝐻𝐹. Figure 2 depicts the complete operation. Figure 2. Process of image super-resolution stage 6. EXPERIMENTAL RESULTS Results of proposed method are discussed in this section. Based on [17], various dictionary sizes are tried, and it was observed after trial and error that size of 500 atoms yielded better results. Hence, number of atoms in the dictionary in main dictionary learning and residual dictionary learning are set to 500. Number of atoms to use in the representation of each image patch is set to 3 [18], [19]. Too large or too small patch size tends to yield a smooth or unwanted artifact [20]. Hence image patch size is taken as 9×9 and is overlapped by one pixel between adjacent patches. The down-sampling is set to scale factor of two. 5×5 Gaussian filter is used for blurring. Convolution function is used to extract features. Experiments are conducted in MATLAB R2018a platform. The dictionary is trained by K-SVD dictionary training algorithm. The trained main dictionary and residual dictionary files are stored as .mat files. The experiments are carried out on two standard data sets, set 5 and set 14. The test images of set 5 are shown in Figure 3. The different stages of obtaining super-resolution image from the LR image is depicted in Figure 4 by taking an example of LR image such as ‘man’ image. The input image of size 512×512 is shown in
  • 5.  ISSN: 2088-8708 Int J Elec & Comp Eng, Vol. 12, No. 5, October 2022: 4970-4977 4974 Figure 4(a). HR low frequency image 𝐻𝐿𝐹 is obtained by interpolating low-resolution image by bicubic method which is shown in Figure 4(b). Utilizing the main dictionary and 𝐻𝐿𝐹, HR MHF image denoted by 𝐻𝑀𝐻𝐹 is obtained which is as shown in Figure 4(c). HR RHF image denoted by 𝐻𝑅𝐻𝐹 is shown in Figure 4(d). The final super-resolution image is shown in Figure 4(e). It can be noticed that the SR image has less visual artifacts and has sharper results. Figure 3. Set 5 test images (a) (b) (c) (d) (e) Figure 4. Different stages of obtaining super-resolution image: (a) input image, (b) 𝐻𝐿𝐹, (c) 𝐻𝑀𝐻𝐹, (d) 𝐻𝑅𝐻𝐹, and (e) super-resolution image Table 1 tabulates peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values for the images of set 5. Table 2 tabulates PSNR and SSIM values for ten images of set 14. Table 3 tabulates PSNR and SSIM values for ten images of B100 dataset. Results of proposed method are compared with state-of-the-art SR algorithms. Table 4 tabulates PSNR values for various methods and proposed methods for scale factor x2 on Set 5 and Set14 datasets. Table 5 tabulates SSIM values for different methods and proposed methods for scale factor x2 on Set 5 and Set 14 datasets. From Tables 4 and 5, it can be observed that the proposed method is superior when compared to other methods in terms of quantitative results. Table 1. PSNR and SSIM for images of Set 5 Sl. No. Image PSNR SSIM 1 Baby 39.5 0.9628 2 Bird 39.76 0.9645 3 Butterfly 37.62 0.9588 4 Head 39.12 0.9614 5 Woman 37.2 0.9588 Table 2. PSNR and SSIM for images of Set 14 Sl. No. Image PSNR SSIM 1 Baboon 30.10 0.8612 2 Barbara 32.96 0.9113 3 Coastguard 32.07 0.9023 4 Face 36.90 0.9591 5 Flowers 33.29 0.9143 6 Foreman 35.10 0.9432 7 Lenna 35.61 0.9522 8 Man 33.76 0.9178 9 Monarch 34.21 0.9203 10 Pepper 37.31 0.9561 Table 3. PSNR and SSIM for ten images of B100 dataset Sl. No. Image PSNR SSIM 1 189080 30.17 0.8312 2 227092 33.51 0.8640 3 14037 32.42 0.8166 4 45096 33.72 0.8359 5 106024 31.10 0.8642 6 143090 31.82 0.8590 7 241004 30.52 0.8551 8 253055 31.01 0.8945 9 260058 31.72 0.8561 10 296007 30.96 0.8518
Table 4. Benchmark results: average PSNR for scale factor x2 on the Set 5 and Set 14 datasets
Sl. No. | Method                                                          | Set5  | Set14
1       | Bicubic                                                         | 33.66 | 30.23
2       | Neighbor embedding with locally linear embedding (NE+LLE) [21]  | 35.77 | 31.76
3       | Anchored neighborhood regression (ANR) [22]                     | 35.83 | 31.80
4       | KK [23]                                                         | 36.20 | 32.11
5       | SelfExSR [24]                                                   | 36.49 | 32.44
6       | A+ [25]                                                         | 36.54 | 32.28
7       | RFL [26]                                                        | 36.55 | 32.36
8       | Super-resolution convolutional neural network (SRCNN) [27]      | 36.65 | 32.29
9       | Sparse coding based network (SCN) [28]                          | 36.76 | 32.48
10      | Very deep super-resolution convolutional network (VDSR) [29]    | 37.53 | 32.97
11      | Deeply-recursive convolutional network (DRCN) [30]              | 37.63 | 33.04
12      | Unfolding super-resolution network (USRNet) [31]                | 37.72 | 33.49
13      | Deep recursive residual network (DRRN) [32]                     | 37.74 | 33.23
14      | Information distillation network (IDN) [33]                     | 37.83 | 33.30
15      | MADNet [34]                                                     | 37.94 | 33.46
16      | Enhanced deep super-resolution network (EDSR) [35]              | 38.20 | 34.02
17      | Residual feature aggregation network (RFANet) [36]              | 38.26 | 34.16
18      | Residual dense network (RDN) [37]                               | 38.30 | 34.10
19      | Proposed dual dictionary learning method                        | 38.64 | 34.52

Table 5. Benchmark results: SSIM for scale factor x2 on the Set 5 and Set 14 datasets
Sl. No. | Method                                    | Set5   | Set14
1       | Bicubic                                   | 0.9299 | 0.8688
2       | SRCNN [27]                                | 0.9542 | 0.9063
3       | SCN [28]                                  | 0.9590 | 0.9123
4       | VDSR [29]                                 | 0.9587 | 0.9124
5       | DRCN [30]                                 | 0.9588 | 0.9118
6       | DRRN [32]                                 | 0.9591 | 0.9136
7       | IDN [33]                                  | 0.9600 | 0.9148
8       | MADNet [34]                               | 0.9604 | 0.9167
9       | EDSR [35]                                 | 0.9602 | 0.9195
10      | RFANet [36]                               | 0.9615 | 0.9220
11      | RDN [37]                                  | 0.9614 | 0.9212
12      | Proposed dual dictionary learning method  | 0.9614 | 0.9213

Visual results for the Set 5 images are presented in Figure 5. Figures 5(a) to 5(e) show the LR images of baby, bird, butterfly, head, and woman, and Figures 5(f) to 5(j) show the corresponding HR images. It can be observed that the proposed method yields higher image quality.

Figure 5. Low-resolution and high-resolution images of baby, bird, butterfly, head, and woman: (a) LR image of baby, (b) LR image of bird, (c) LR image of butterfly, (d) LR image of head, (e) LR image of woman, (f) HR image of baby, (g) HR image of bird, (h) HR image of butterfly, (i) HR image of head, and (j) HR image of woman
7. CONCLUSION
This paper presented a method for SR based on dual dictionary learning and sparse representation. The method reconstructs lost high-frequency details by combining main dictionary learning and residual dictionary learning. The qualitative results in the experimental section demonstrate that the SR images obtained are of higher quality, and the improved average PSNR of 38.64 dB on the Set 5 dataset and 34.52 dB on the Set 14 dataset, compared with the other methods, confirms the quantitative improvement.

REFERENCES
[1] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” IEEE Transactions on Image Processing, vol. 13, no. 10, pp. 1327–1344, Oct. 2004, doi: 10.1109/TIP.2004.834669.
[2] K. V. Suresh, G. M. Kumar, and A. N. Rajagopalan, “Superresolution of license plates in real traffic videos,” IEEE Transactions on Intelligent Transportation Systems, vol. 8, no. 2, pp. 321–331, Jun. 2007, doi: 10.1109/TITS.2007.895291.
[3] S. S. Kiran and K. V. Suresh, “Challenges in sparse image reconstruction,” International Journal of Image and Graphics, vol. 21, no. 3, Jul. 2021, doi: 10.1142/S0219467821500261.
[4] J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, Nov. 2010, doi: 10.1109/TIP.2010.2050625.
[5] H. C. Zhang, Y. Zhang, and T. S. Huang, “Efficient sparse representation based image super resolution via dual dictionary learning,” in 2011 IEEE International Conference on Multimedia and Expo, Jul. 2011, pp. 1–6, doi: 10.1109/ICME.2011.6011877.
[6] L. He, H. Qi, and R. Zaretzki, “Beta process joint dictionary learning for coupled feature spaces with application to single image super-resolution,” in 2013 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2013, pp. 345–352, doi: 10.1109/CVPR.2013.51.
[7] K. K. Bhatia, A. N. Price, W. Shi, J. V. Hajnal, and D. Rueckert, “Super-resolution reconstruction of cardiac MRI using coupled dictionary learning,” in 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI), Apr. 2014, pp. 947–950, doi: 10.1109/ISBI.2014.6868028.
[8] J. Yang, X. Zhang, W. Peng, and Z. Liu, “A novel regularized K-SVD dictionary learning based medical image super-resolution algorithm,” Multimedia Tools and Applications, vol. 75, no. 21, pp. 13107–13120, Nov. 2016, doi: 10.1007/s11042-015-2744-9.
[9] J. Ahmed and M. A. Shah, “Single image super-resolution by directionally structured coupled dictionary learning,” EURASIP Journal on Image and Video Processing, vol. 2016, no. 1, Dec. 2016, doi: 10.1186/s13640-016-0141-6.
[10] L. Zhao, Q. Sun, and Z. Zhang, “Single image super-resolution based on deep learning features and dictionary model,” IEEE Access, vol. 5, pp. 17126–17135, 2017, doi: 10.1109/ACCESS.2017.2736058.
[11] X. Yang, W. Wu, K. Liu, W. Chen, and Z. Zhou, “Multiple dictionary pairs learning and sparse representation-based infrared image super-resolution with improved fuzzy clustering,” Soft Computing, vol. 22, no. 5, pp. 1385–1398, Mar. 2018, doi: 10.1007/s00500-017-2812-3.
[12] F. Deeba, S. Kun, W. Wang, J. Ahmed, and B. Qadir, “Wavelet integrated residual dictionary training for single image super-resolution,” Multimedia Tools and Applications, vol. 78, no. 19, pp. 27683–27701, Oct. 2019, doi: 10.1007/s11042-019-07850-4.
[13] J.-J. Huang and P. L. Dragotti, “A deep dictionary model for image super-resolution,” in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Apr. 2018, pp. 6777–6781, doi: 10.1109/ICASSP.2018.8461651.
[14] M. Elad and M. Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” IEEE Transactions on Image Processing, vol. 15, no. 12, pp. 3736–3745, Dec. 2006, doi: 10.1109/TIP.2006.881969.
[15] C. Bao, H. Ji, Y. Quan, and Z. Shen, “Dictionary learning for sparse coding: Algorithms and convergence analysis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 7, pp. 1356–1369, Jul. 2016, doi: 10.1109/TPAMI.2015.2487966.
[16] J. Zhang, C. Zhao, R. Xiong, S. Ma, and D. Zhao, “Image super-resolution via dual-dictionary learning and sparse representation,” in 2012 IEEE International Symposium on Circuits and Systems, May 2012, pp. 1688–1691, doi: 10.1109/ISCAS.2012.6271583.
[17] B. Dumitrescu and P. Irofti, “Optimizing dictionary size,” in Dictionary Learning Algorithms and Applications, Cham: Springer International Publishing, 2018, pp. 145–165.
[18] Y. Lu, J. Zhao, and G. Wang, “Few-view image reconstruction with dual dictionaries,” Physics in Medicine and Biology, vol. 57, no. 1, pp. 173–189, Jan. 2012, doi: 10.1088/0031-9155/57/1/173.
[19] Q. Zhang, Y. Liu, R. S. Blum, J. Han, and D. Tao, “Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review,” Information Fusion, vol. 40, pp. 57–75, Mar. 2018, doi: 10.1016/j.inffus.2017.05.006.
[20] Y. Han, Y. Zhao, and Q. Wang, “Dictionary learning based noisy image super-resolution via distance penalty weight model,” PLOS ONE, vol. 12, no. 7, p. e0182165, Jul. 2017, doi: 10.1371/journal.pone.0182165.
[21] H. Chang, D.-Y. Yeung, and Y. Xiong, “Super-resolution through neighbor embedding,” in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), 2004, vol. 1, pp. 275–282, doi: 10.1109/CVPR.2004.1315043.
[22] R. Timofte, V. De Smet, and L. Van Gool, “Anchored neighborhood regression for fast example-based super-resolution,” in 2013 IEEE International Conference on Computer Vision, Dec. 2013, pp. 1920–1927, doi: 10.1109/ICCV.2013.241.
[23] K. I. Kim and Y. Kwon, “Single-image super-resolution using sparse regression and natural image prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 6, pp. 1127–1133, Jun. 2010, doi: 10.1109/TPAMI.2010.25.
[24] J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 5197–5206, doi: 10.1109/CVPR.2015.7299156.
[25] R. Timofte, V. De Smet, and L. Van Gool, “A+: Adjusted anchored neighborhood regression for fast super-resolution,” in Lecture Notes in Computer Science, 2015, pp. 111–126.
[26] S. Schulter, C. Leistner, and H. Bischof, “Fast and accurate image upscaling with super-resolution forests,” in 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2015, pp. 3791–3799, doi: 10.1109/CVPR.2015.7299003.
[27] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, Feb. 2016, doi: 10.1109/TPAMI.2015.2439281.
[28] Z. Wang, D. Liu, J. Yang, W. Han, and T. S. Huang, “Deep networks for image super-resolution with sparse prior,” in 2015 IEEE International Conference on Computer Vision (ICCV), Dec. 2015, pp. 370–378, doi: 10.1109/ICCV.2015.50.
[29] J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 1646–1654, doi: 10.1109/CVPR.2016.182.
[30] J. Kim, J. K. Lee, and K. M. Lee, “Deeply-recursive convolutional network for image super-resolution,” in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2016, pp. 1637–1645, doi: 10.1109/CVPR.2016.181.
[31] K. Zhang, L. Van Gool, and R. Timofte, “Deep unfolding network for image super-resolution,” 2020, doi: 10.3929/ethz-b-000460815.
[32] Y. Tai, J. Yang, and X. Liu, “Image super-resolution via deep recursive residual network,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul. 2017, pp. 2790–2798, doi: 10.1109/CVPR.2017.298.
[33] Z. Hui, X. Wang, and X. Gao, “Fast and accurate single image super-resolution via information distillation network,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun. 2018, pp. 723–731, doi: 10.1109/CVPR.2018.00082.
[34] R. Lan, L. Sun, Z. Liu, H. Lu, C. Pang, and X. Luo, “MADNet: A fast and lightweight network for single-image super resolution,” IEEE Transactions on Cybernetics, vol. 51, no. 3, pp. 1443–1453, Mar. 2021, doi: 10.1109/TCYB.2020.2970104.
[35] B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, “Enhanced deep residual networks for single image super-resolution,” in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Jul. 2017, pp. 1132–1140, doi: 10.1109/CVPRW.2017.151.
[36] J. Liu, W. Zhang, Y. Tang, J. Tang, and G. Wu, “Residual feature aggregation network for image super-resolution,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Jun. 2020, pp. 2356–2365, doi: 10.1109/CVPR42600.2020.00243.
[37] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image super-resolution,” in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun. 2018, pp. 2472–2481, doi: 10.1109/CVPR.2018.00262.

BIOGRAPHIES OF AUTHORS

Shashi Kiran Seetharamaswamy received the B.E. degree in Electronics and Communication Engineering in 1990 and the M.Sc. (Engg.) degree in 2011 from Visvesvaraya Technological University, India. He is a faculty member in the Department of Electronics and Telecommunication Engineering, JNN College of Engineering, Shivamogga, and is currently pursuing a Ph.D. at Visvesvaraya Technological University, India. His research interests include character recognition, pattern recognition, and image processing. He can be contacted at email: shashikiran@jnnce.ac.in.

Suresh Kaggere Veeranna received the B.E. degree in Electronics and Communication Engineering in 1990 and the M.Tech. degree in Industrial Electronics in 1993, both from the University of Mysore, India. From March 1990 to September 1991, he served as a faculty member in the Department of Electronics and Communication Engineering, Kalpataru Institute of Technology, Tiptur, India. Since 1993, he has been a faculty member in the Department of Electronics and Communication Engineering, Siddaganga Institute of Technology, Tumkur, India.
He completed his Ph.D. in the Department of Electrical Engineering, Indian Institute of Technology Madras, in 2007. His research interests include signal processing and computer vision. He can be contacted at email: sureshkvsit@sit.ac.in.