International Journal of Research in Engineering and Science (IJRES)
ISSN (Online): 2320-9364, ISSN (Print): 2320-9356
www.ijres.org Volume 5 Issue 5 ǁ May. 2017 ǁ PP. 06-14
Image Super-Resolution Reconstruction Based On
Multi-Dictionary Learning
Shi Min¹, Zhu Feng¹, Yi Qing-Ming¹
¹School of Information Science and Technology, Jinan University, Guangzhou, Guangdong 510632, China
ABSTRACT: To overcome the problems that a single dictionary cannot adapt to a variety of image types and that the reconstruction quality cannot meet application requirements, we propose a novel multi-dictionary learning algorithm based on feature classification. The algorithm uses the orientation information of the low-resolution image to guide the classification of the image patches in the database, and designs classification dictionaries that can effectively represent the reconstructed image patches. Considering the nonlocal similarity of the image, we construct a combined nonlocal mean (C-NLM) regularizer, adopt steering kernel regression (SKR) to formulate a local regularizer, and establish a unified reconstruction framework. Extensive single-image experiments validate that, compared with several other state-of-the-art learning-based algorithms, the proposed method improves image quality and recovers more details.
Keywords: directional feature; multi-dictionary learning; combined nonlocal mean (C-NLM); steering kernel regression (SKR)
I. INTRODUCTION
The aim of image super-resolution (SR) is to construct a high-resolution (HR) image from one or more low-resolution (LR) images and additional information, such as exemplar images or statistical priors, using software techniques. Because it is easy to implement and has a low cost, image super-resolution has been widely applied to surveillance, medical imaging, remote sensing, and so on.
In recent years, super-resolution algorithms based on sparse representation have become one of the hot topics in current research [1,2]. Yang et al. [3,4] first introduced sparse coding theory to estimate high-frequency details from an over-complete dictionary. The approach performs well in recovering some high-frequency information, but it often produces noticeable blurring artifacts along edges. Under the same assumption, Zeyde et al. [5] proposed a more efficient image super-resolution algorithm based on K-SVD dictionary learning. All of the above algorithms use a single universal over-complete dictionary to represent various image structures. However, sparse decomposition over a highly redundant dictionary is potentially unstable and tends to generate visual artifacts. To address this problem of the single dictionary, a large number of SR methods introduced the idea of classification to produce more compelling SR results. Yin et al. [6] proposed a texture-constrained sparse representation, which divides the image into different texture regions and trains multiple texture dictionaries. Dong et al. [7] proposed an image super-resolution method that combines sparse coding and regularized learning models into a unified SR framework. These methods can recover many high-frequency details, but they have high computational complexity and slow convergence.
In this paper, the whole training set is divided into groups so that the patches in each group are similar in appearance and can be considered to lie in the same subspace. How, then, can these subspaces be divided efficiently? The usual approach is to apply the K-means algorithm to cluster the image patches in the database without exploiting the structural information of the low-resolution image itself. To address the aforementioned problems, we propose a novel SR method based on a multi-dictionary learning algorithm for feature classification. First, we apply the curvelet transform to estimate sixteen directional feature images, whose patches are clustered into several groups. Then each database patch satisfying a certain condition is classified into one of these groups under the supervision of the clustering results, and Principal Component Analysis (PCA) is used to learn a compact dictionary for each group, by which we obtain a more accurate representation of the subspaces of these patches. Moreover, considering the spatial and directional information of the patches, we construct the combined nonlocal mean (C-NLM) regularization term, which is very helpful in preserving edge sharpness.
Single Image SR Model
So far, a variety of models have been proposed for SR recovery. The most widely used observation model relating X and Y can be expressed as

Y = SHX + v    (1)

where X and Y are the HR and LR images, respectively, S and H denote a downsampling operator and a blurring filter, respectively, and v represents Gaussian noise. From the observation model, we know that the low-quality observation Y is typically generated by blurring, downsampling, and noise corruption. The objective of SR reconstruction is to solve the inverse problem of estimating the underlying HR image X from the LR image Y, i.e., to solve the following least-squares problem:
 2
2
argmin ( )E  XX Y SHX X (2)
Where ( )E X represents the regularization term constructed from a prior knowledge, and  is a
regularization parameter. To obtain the additional information of image, this paper integrates the HR
reconstruction term, local and non-local regularization terms, and example-based hallucination term into the
unified SR framework like Eq. (2). Mathematically, it is defined as

2
12
2 3
( )
argmin
( ) ( )
sparse
nonlocal local
E
E E

 
   
  
   
X
Y SHX X
X
X X
(3)
where \|Y - SHX\|_2^2 is the reconstruction term, which ensures that the reconstructed HR image is consistent with the LR input via back-projection. The second term E_{sparse}(X) is the sparse hallucination regularization term, which requires that the estimated HR image have a sparse representation over a multi-dictionary learnt from the LR image itself and the database images. The third term E_{nonlocal}(X) is the non-local prior, which assumes that each HR pixel can be predicted as a weighted average over a large neighborhood. The last term is the local prior regularization term, which requires that each HR pixel be well estimated from a small local area around it.
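For intuition, the observation model of Eq. (1) can be simulated directly. The sketch below is a minimal illustration, not the authors' code: it assumes a Gaussian blur for H and plain decimation for S, since the paper does not specify these operators, and the kernel width and noise level are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(hr, scale=3, blur_sigma=1.6, noise_sigma=2.0, rng=None):
    """Simulate Eq. (1), Y = SHX + v: blur, downsample by `scale`, add Gaussian noise.

    The blur width and noise level are illustrative assumptions; the paper does not
    specify the exact operators H and S.
    """
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(hr.astype(np.float64), sigma=blur_sigma)   # H X
    lr = blurred[::scale, ::scale]                                       # S (H X)
    return lr + rng.normal(0.0, noise_sigma, size=lr.shape)              # + v
```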
Feature classification dictionary-based hallucination
2.1 Curvelet-Based Direction Extraction
The curvelet transform is an efficient representation for preserving edges, since it has very high directional sensitivity and is highly anisotropic [8]. Therefore, in this paper we apply the curvelet transform to extract directional features, which are then used for classified dictionary learning and weight estimation. Considering the different sizes of X and Y, we first apply bicubic interpolation to magnify Y and obtain an initial estimate X^0 of the high-resolution image, which has the same size as X. The curvelet coefficients of X^0 are then obtained by Q = \Psi(X^0), where \Psi represents the curvelet transform matrix and Q is the set of curvelet coefficients, expressed as Q = \{Q_{c,l} \mid c = 1,\ldots,C;\ l = 1,\ldots,L_c\}, in which C is the total number of scales and L_c is the number of directions at the c-th scale.
Because of the directional symmetry, we partition the curvelet coefficient matrices at the finest scale into 16 direction subsets, denoted as \{F_f\}_{f=1}^{16}. The directional features of the initial estimate in the 16 different directions are then defined as

A_f = \Psi^{-1}(H_f(Q))    (4)

where \Psi^{-1} stands for the inverse curvelet transform and

H_f(Q) = \begin{cases} Q_{c,l}, & \text{if } Q_{c,l} \in F_f \\ 0, & \text{otherwise.} \end{cases}

The directional feature images A_f (f = 1,\ldots,16) are thus obtained from Eq. (4).
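The directional split of Eq. (4) can be sketched as follows. This is only an illustration: it assumes a hypothetical forward/inverse curvelet pair `fdct2`/`ifdct2` with a `coeffs[scale][wedge]` layout (real curvelet packages expose different interfaces), and the wedge-to-group mapping shown is one plausible way to honor the directional symmetry mentioned above.

```python
import numpy as np

def directional_features(x0, fdct2, ifdct2, n_groups=16):
    """Build the 16 directional feature images A_f of Eq. (4).

    fdct2 / ifdct2 are user-supplied forward / inverse curvelet transforms with a
    coeffs[scale][wedge] layout -- a hypothetical interface used only for illustration.
    Assumes the finest scale has at least 2 * n_groups wedges.
    """
    coeffs = fdct2(x0)                       # Q = Psi(X0)
    finest = len(coeffs) - 1                 # index of the finest scale
    n_wedge = len(coeffs[finest])
    features = []
    for f in range(n_groups):
        masked = [[np.zeros_like(c) for c in scale] for scale in coeffs]  # H_f(Q)
        for l in range(n_wedge):
            # wedges l and l + n_wedge/2 point in opposite directions; by the
            # directional symmetry they are assigned to the same group
            group = (l % (n_wedge // 2)) * n_groups // (n_wedge // 2)
            if group == f:
                masked[finest][l] = coeffs[finest][l]
        features.append(ifdct2(masked))      # A_f = Psi^{-1}(H_f(Q))
    return features
```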
2.2 Learning Classification Dictionary
To make full use of the additional information of the LR image, we propose a multi-dictionary learning algorithm based on directional feature classification. First, we adopt the K-means algorithm to partition the patches of the feature images A_f obtained in Section 2.1 into K clusters C_k = \{s_i^k,\ i = 1,\ldots,m;\ k = 1,\ldots,K\}, where k and i denote the cluster index and the sample patch index, respectively, and K and m are the total numbers of clusters and sample patches. We then obtain the centroid \mu_k (k = 1,\ldots,K) of each cluster C_k and its radius r_k = \max_i \|s_i^k - \mu_k\|_2^2, i = 1,\ldots,m. To construct a more effective dictionary, we use a high-pass filter to extract the high-frequency information of the high-resolution database images as the feature, obtaining a set of high-frequency patches X_G. Let x_{iG} (i = 1,\ldots,g) be an image patch in X_G and compute the distance d_i^k between x_{iG} and \mu_k. The patch x_{iG} is added to the k_i-th class if the minimum of d_i^k over k, denoted d_i^{k_i}, is smaller than the threshold \delta r_{k_i}, where \delta is a constant that weights the similarity between the database patches and the cluster centroid. Let C_k = \{s_i^k,\ i = 1,\ldots,M;\ k = 1,\ldots,K\} denote the K clusters after augmentation, where M is the total number of sample patches. Applying PCA to each of the K sub-datasets, we obtain the K sub-dictionaries D_k (k = 1,\ldots,K).
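A minimal sketch of this step, assuming scikit-learn is available and that the directional-feature patches and the high-pass-filtered database patches are already extracted and vectorized (patch extraction and the high-pass filter are omitted); variable names such as `lr_feature_patches` are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def learn_classification_dictionaries(lr_feature_patches, hf_db_patches,
                                      n_clusters=16, delta=1.0, n_atoms=None):
    """Cluster LR directional-feature patches, add sufficiently close database
    patches to each cluster, and learn one PCA sub-dictionary per cluster."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(lr_feature_patches)
    centroids = km.cluster_centers_
    # squared radius r_k = max_i ||s_i^k - mu_k||^2 of each cluster
    radii = np.array([
        np.max(np.sum((lr_feature_patches[km.labels_ == k] - centroids[k]) ** 2, axis=1))
        for k in range(n_clusters)
    ])
    # assign each database patch to its nearest centroid if it is close enough
    groups = [list(lr_feature_patches[km.labels_ == k]) for k in range(n_clusters)]
    d2 = ((hf_db_patches[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    for i, k in enumerate(nearest):
        if d2[i, k] < delta * radii[k]:          # threshold delta * r_k
            groups[k].append(hf_db_patches[i])
    # one compact PCA sub-dictionary per augmented cluster
    dictionaries = []
    for k in range(n_clusters):
        data = np.asarray(groups[k])
        pca = PCA(n_components=n_atoms).fit(data)
        dictionaries.append(pca.components_.T)   # columns are dictionary atoms
    return centroids, dictionaries
```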
2.3 Reconstruction with sparse representation
Our method is based on the sparse representation of image patches, which has demonstrated effectiveness and robustness in regularizing the super-resolution problem. Suppose that X \in R^N and \hat{X} \in R^N represent the HR and reconstructed HR images, respectively, and that x_i \in R^n and \hat{x}_i \in R^n, i = 1,\ldots,p, represent the HR and reconstructed HR image patches, where p is the number of image patches. The image patches overlap with each other in our method to reduce blocking artifacts. An HR image patch can be written as x_i = R_i X, where R_i \in R^{n \times N} is the patch-extraction matrix. In the previous sections we have learnt the K sub-dictionaries D_k. Recall that the centroid \mu_k of each cluster is available; hence we can select the sub-dictionary best fitted to \hat{x}_i by finding the minimum distance between \hat{x}_i and \mu_k, i.e., k_i = \arg\min_k \|\hat{x}_i - \mu_k\|_2^2. The k_i-th sub-dictionary D_{k_i} is then selected and assigned to the patch \hat{x}_i. The sparse representation of \hat{x}_i over D_{k_i} is given by \hat{x}_i = D_{k_i}\alpha_i, s.t. \|\alpha_i\|_0 \le T, i = 1,\ldots,p, where \alpha_i is the sparse representation coefficient vector and T is a scalar controlling the sparsity [9]. The updated HR image Z is reconstructed by merging all the reconstructed patches and averaging the overlapping regions between adjacent patches [10], i.e.,

Z = \left( \sum_{i=1}^{p} R_i^T R_i \right)^{-1} \sum_{i=1}^{p} R_i^T D_{k_i} \alpha_i    (5)
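A sketch of the hallucination step and the patch merging of Eq. (5). The paper only requires \|\alpha_i\|_0 \le T, so the use of scikit-learn's orthogonal matching pursuit as the sparse coder is an assumption, as are the grid-based patch positions.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def hallucinate(patches, positions, centroids, dictionaries, image_shape,
                patch_size=7, sparsity=3):
    """Code each patch over the sub-dictionary of its nearest centroid and merge
    the reconstructions with overlap averaging, as in Eq. (5)."""
    acc = np.zeros(image_shape)
    weight = np.zeros(image_shape)
    for x_i, (r, c) in zip(patches, positions):
        k = np.argmin(np.sum((centroids - x_i) ** 2, axis=1))    # best-fit cluster
        D = dictionaries[k]
        alpha = orthogonal_mp(D, x_i, n_nonzero_coefs=sparsity)  # ||alpha||_0 <= T
        rec = (D @ alpha).reshape(patch_size, patch_size)
        acc[r:r + patch_size, c:c + patch_size] += rec           # sum R_i^T D_{k_i} alpha_i
        weight[r:r + patch_size, c:c + patch_size] += 1.0        # diagonal of sum R_i^T R_i
    return acc / np.maximum(weight, 1e-8)                        # Eq. (5)
```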
Construct the regularization terms and SR recovery
3.1 Sparse hallucination regularization term
The sparse hallucination regularization term enforces the constraint that the reconstructed HR image X should be consistent with the estimate Z hallucinated by the multi-dictionary learnt from the LR input itself and the database, i.e., \|X - Z\|_2^2 should be small. We therefore formulate the sparse hallucination regularization term as

E_{sparse}(X) = \|X - Z\|_2^2    (6)
3.2 Combined non-local mean regularization term
There are often many repetitive patterns throughout a natural image, and such nonlocal redundancy is very helpful for improving the quality of reconstructed images. Therefore, we introduce a nonlocal similarity regularization term into the SR model of Eq. (2) to enhance the visual quality of the image. Each local patch \hat{x}_i in the reconstructed HR image \hat{X} can be estimated as the weighted average of similar patches \hat{x}_i^j found in a sufficiently large area around \hat{x}_i. Let \chi_i be the central pixel of patch \hat{x}_i, \chi_i^j be the central pixel of patch \hat{x}_i^j, and \hat{\chi}_i be the estimate of the central pixel of the reconstructed patch \hat{x}_i. Then \hat{\chi}_i can be expressed as

\hat{\chi}_i = \sum_{j=1}^{J} w_{ij}\, \chi_i^j    (7)
The conventional NLM regularization term [11] calculates the similarity weights using only the pixel values of the patches, which may lead to inaccurate estimates because of noise and significantly changed local structures. Here, we exploit the directional information at each pixel position and the spatial location information between pixels to develop the combined NLM (C-NLM) regularization term, which enhances the weight estimation. Using the feature extraction of Section 2.1, we obtain the directional feature information of the target pixel in the 16 directions, and we further refine the similarity weight estimate using the spatial location information between pixels. The C-NLM similarity weights are defined by

w_{ij} = \exp\!\left( -\frac{\|\hat{x}_i - \hat{x}_i^j\|_2^2}{\sigma_O^2} - \frac{\|(u,v)_i - (u,v)_i^j\|_2^2}{\sigma_I^2} - \frac{\|V_i - V_i^j\|_2^2}{\sigma_V^2} \right)    (8)
where ( , )i u v is the two-dimensional coordinates of the center pixel i in the image, iV is the directional
feature vector of the center pixel i ,denoted as 1, 2, 16,, ,...,i i i i
T
A A A   
   V and here, ( 1,...,16)f f A is the
directional feature image obtained by the 3.1 section. O 、 I and V are the filter parameters to control the pixel
values of the image patches, the spatial coordinates of the pixels and the directional feature information,
respectively.The weight matrix obtained by Eq.(8) is substituted in (7), and then we reformulate Eq.(7) as the
matrix form by 
 2 2
22
argmin argmin ( )C NLM
i i ii


    X
X x w S I B X ,were C NLM
i

w represents the weight
matrix and iS represents a column vector consisting of the k neighbors with the largest weights related to ix ,
I is the identity matrix and ,
( , )
0,
C NLM
ij i
i
j
i j
j

 
 

w S
B
S .
In this way, we can write the non-local self similarity constraints term as:
2
2
( ) ( )nonlocalE  X I W X (9)
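The C-NLM weight of Eq. (8) for one pixel pair can be sketched as follows; it assumes vectorized patches, 2-D pixel coordinates, and 16-dimensional directional feature vectors sampled from the A_f images, and it omits the normalization of the weights over the J neighbors.

```python
import numpy as np

def cnlm_weight(patch_i, patch_j, coord_i, coord_j, dir_i, dir_j,
                sigma_o=50.0, sigma_i=40.0, sigma_v=30.0):
    """Combined non-local-means weight of Eq. (8): patch intensity, spatial
    distance, and 16-direction curvelet features each get their own bandwidth
    (defaults follow the experimental settings reported later)."""
    d_patch = np.sum((patch_i - patch_j) ** 2)       # ||x_i - x_i^j||^2
    d_space = np.sum((coord_i - coord_j) ** 2)       # ||(u,v)_i - (u,v)_i^j||^2
    d_dir = np.sum((dir_i - dir_j) ** 2)             # ||V_i - V_i^j||^2
    return np.exp(-d_patch / sigma_o ** 2
                  - d_space / sigma_i ** 2
                  - d_dir / sigma_v ** 2)
```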
3.3 Steering Kernel Regression Regularization Term
The C-NLM regularization term exploits the non-local redundant information in natural images. As a complementary regularization to C-NLM, we further introduce local prior information into the SR model of Eq. (2) to obtain a more robust super-resolution reconstruction. Each pixel in the reconstructed HR image can be predicted from a small neighborhood around it, i.e.,

\hat{x}_i = \arg\min_{x_i} \sum_{x_j \in N(x_i)} (x_i - x_j)^2\, w_{ij}^k    (10)
Where ( )ix stands for the neighbors of ix ,
k
ijw is the controllable kernel which assigns larger
weights to nearby similar pixel while smaller ones to farther non-similar pixels. In this paper, we employ the
steering kernel proposed in [12] to calculate the weights.Putting it into eq.(10),then the formulate can be
rewritten in matrix form: 
 2 2SKR
22
argmin argmin ( )i i ii
    X
X x w L I P X .where SKR
iw represents
the weight matrix and iL represents a column vector consisting of neighborhood pixels centered at ix , I is
the identity matrix and ,
( , )
0,
SKR
ij i
i
j
i j
j
 
 

c L
P
L .
In such a way, we form the local prior regularization term as:
2
2
( ) ( )localE  X I P X (11)
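The full steering kernel of [12] elongates an anisotropic Gaussian along local edges using the gradient covariance; the sketch below is a simplified, isotropic stand-in that only illustrates how a local weight vector of Eq. (10) can be formed over a small neighborhood.

```python
import numpy as np

def local_kernel_weights(image, center, radius=2, h_space=1.0, h_int=10.0):
    """Weights w_ij of Eq. (10) over a small neighborhood around `center`.

    A simplified, isotropic stand-in for the steering kernel of [12]; the true
    steering kernel additionally adapts its shape to the local gradient
    covariance. Weights are normalized to sum to one.
    """
    r0, c0 = center
    rows = np.arange(max(r0 - radius, 0), min(r0 + radius + 1, image.shape[0]))
    cols = np.arange(max(c0 - radius, 0), min(c0 + radius + 1, image.shape[1]))
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    d_space = (rr - r0) ** 2 + (cc - c0) ** 2               # spatial distance
    d_int = (image[rr, cc] - image[r0, c0]) ** 2            # intensity difference
    w = np.exp(-d_space / (2 * h_space ** 2) - d_int / (2 * h_int ** 2))
    w[(rr == r0) & (cc == c0)] = 0.0                        # exclude the center pixel
    w /= w.sum()
    return list(zip(rr.ravel(), cc.ravel())), w.ravel()
```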
Optimization of the algorithm
By incorporating the sparse hallucination regularization term of Eq. (6), the nonlocal C-NLM regularization term of Eq. (9), and the local prior regularization term of Eq. (11) into the unified reconstruction framework of Eq. (3), we obtain the energy function

E(X) = \|Y - SHX\|_2^2 + \lambda_1 \|X - Z\|_2^2 + \lambda_2 \|(I - W)X\|_2^2 + \lambda_3 \|(I - P)X\|_2^2    (12)
We employ the gradient descent method to optimize the solution: \hat{X}^{(t+1)} = \hat{X}^{(t)} - \tau \nabla E(\hat{X}^{(t)}), where t is the iteration index and \tau is the step size. The gradient of the energy function is written as

\nabla E(X) = -H^T S^T (Y - SHX) + \lambda_1 (X - Z) + \lambda_2 (I - W)^T (I - W) X + \lambda_3 (I - P)^T (I - P) X    (13)
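A sketch of the gradient-descent update of Eq. (13), assuming callables for the blur-plus-downsample operator and its adjoint, and precomputed (I - W) and (I - P) matrices over the vectorized image (dense arrays here; a practical implementation would use sparse matrices). The default parameter values follow the experimental settings reported below; the iteration count is illustrative.

```python
import numpy as np

def sr_gradient_descent(y, SH, SH_T, z, I_minus_W, I_minus_P, x0,
                        lam1=0.03, lam2=2.3, lam3=0.6, step=2.5, n_iter=100):
    """Minimize Eq. (12) by gradient descent using the gradient of Eq. (13).

    SH / SH_T apply the blur-plus-downsample operator and its adjoint to a
    vectorized image; I_minus_W and I_minus_P are the (I - W) and (I - P)
    matrices built from the C-NLM and local kernel weights.
    """
    x = x0.copy()
    for _ in range(n_iter):
        grad = (-SH_T(y - SH(x))                        # data-fidelity term
                + lam1 * (x - z)                        # sparse hallucination term
                + lam2 * I_minus_W.T @ (I_minus_W @ x)  # C-NLM term
                + lam3 * I_minus_P.T @ (I_minus_P @ x)) # local (SKR) term
        x = x - step * grad                             # update of Eq. (13)
    return x
```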
Experimental results
To validate the effectiveness of the proposed SR method, we conduct super-resolution reconstruction experiments on a variety of natural images. In our implementation, the magnification factor is 3, the training set consists of the natural images used by Zeyde's algorithm, the dictionary size is 256, and the training dataset is partitioned into 16 clusters. In the non-local constraint, the filter parameters \sigma_O, \sigma_I and \sigma_V are set to 50, 40, and 30, respectively. The step size \tau is set to 2.5 in the iterative process. The regularization parameters of the model are set to 0.03, 2.3, and 0.6, respectively, through experimental adjustment. We compare our method with three representative methods: SISR [5], ASDS [7], and bicubic interpolation. To comprehensively evaluate the performance of the compared SR approaches, we use both objective and subjective assessments in the experiments.
To demonstrate the visual quality of the various methods, the SR results with a magnification factor of 3 on the Girl, Butterfly, and Foreman images are presented in Figs. 1, 2, and 3, respectively. The region of interest (ROI) in each resultant image is magnified by bicubic interpolation with a factor of 2 and shown in the corner to illustrate the high-frequency details. From the visual comparison, we notice that bicubic interpolation tends to produce over-smoothing and noticeable ringing artifacts along edges. Compared with bicubic interpolation, the SISR and ASDS methods generate clearer details, but there are still many noticeable zigzagging effects along edges. The proposed method, which incorporates additional information from the LR image itself and the image database, produces sharper edges and more pleasing details. As shown in the local magnification of the ROI in Fig. 2, the texture of the butterfly is clearly visible in the result of our method, while it is blurred in the results of the other algorithms.
Fig. 1 Comparison of SR results (3x magnification) on the Girl image. The local magnification of the ROI in (a) is presented in the bottom-left corner of each resultant image. (a) Original HR image. (b) Bicubic interpolation. (c) SISR. (d) ASDS. (e) The proposed method.
Fig. 2 Comparison of SR results (3x magnification) on the Butterfly image. The local magnification of the ROI in (a) is presented in the bottom-left corner of each resultant image. (a) Original HR image. (b) Bicubic interpolation. (c) SISR. (d) ASDS. (e) The proposed method.
Fig. 3 Comparison of SR results (3x magnification) on the Foreman image. The local magnification of the ROI in (a) is presented in the bottom-left corner of each resultant image. (a) Original HR image. (b) Bicubic interpolation. (c) SISR. (d) ASDS. (e) The proposed method.
Objectively, we evaluate the quality of the resultant images via the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) metrics [13]. Table 1 reports the PSNR and SSIM results of the above four algorithms with a 3x magnification factor on the ten test images. As presented in Table 1, our method is superior to the other SR approaches in terms of average metric values. The average values of bicubic interpolation are the lowest, because it only considers the smoothness prior of the image. By taking into account the sparse prior and the local and nonlocal information of the image, the other methods show clear PSNR/SSIM improvements over bicubic interpolation. In addition, because the proposed algorithm exploits the test image itself to guide the multi-dictionary learning, it recovers HR images with higher accuracy, and the constructed combined NLM (C-NLM) regularization term further improves the representation accuracy of the image patches; thus the reconstruction shows a clear improvement over the other three algorithms.
Table 1
PSNR (dB) and SSIM results (3x magnification) on ten test images. For each test image, the first row is the PSNR and the second is the SSIM.

No.  Image       Bicubic   SISR[5]   ASDS[7]   Proposed
a    Peppers     31.20     33.02     33.26     33.43
                 0.9513    0.9604    0.9728    0.9746
b    Parthenon   26.03     26.67     26.92     27.21
                 0.7824    0.8091    0.8213    0.8307
c    Plants      30.02     32.56     32.87     33.05
                 0.8621    0.8993    0.9102    0.9186
d    Ship        29.26     30.24     30.46     30.78
                 0.8522    0.8647    0.8681    0.8793
e    Parrots     27.01     28.69     29.03     29.36
                 0.8018    0.8192    0.8237    0.8361
f    Hat         28.85     30.41     30.35     30.66
                 0.8324    0.8467    0.8439    0.8540
g    Girl        31.14     33.35     33.60     33.93
                 0.7881    0.8162    0.8247    0.8355
h    Foreman     32.94     34.43     33.84     34.17
                 0.8893    0.9116    0.9198    0.9304
i    Lena        30.09     32.67     32.98     33.20
                 0.9023    0.9182    0.9238    0.9287
j    Butterfly   23.17     26.18     26.31     26.72
                 0.8293    0.8792    0.8903    0.9094
     Avg. PSNR   28.97     30.82     30.96     31.25
     Avg. SSIM   0.8491    0.8725    0.8799    0.8897
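For reference, the objective metrics of Table 1 can be computed with scikit-image; the minimal sketch below assumes 8-bit grayscale inputs (the paper does not state whether color images are converted to luminance before measurement).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, reconstructed):
    """Return (PSNR in dB, SSIM) for an 8-bit reference / reconstruction pair."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=255)
    ssim = structural_similarity(reference, reconstructed, data_range=255)
    return psnr, ssim
```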
II. CONCLUSION
In this paper, we proposed a novel multi-dictionary learning method based on directional feature classification for single-image SR, addressing the classification dictionary learning problem. The method exploits extra information from both the low-resolution image itself and the image database. By extracting features of the low-resolution image as prior information, the image patches in the database that satisfy a certain condition are classified under the supervision of this prior, and the PCA method is then used to learn a compact classification dictionary for each class. In the reconstruction process, the proposed method employs geometric information to refine the non-local regularization and incorporates the complementary local steering kernel regression regularization term into the reconstruction-based SR framework, so as to maintain sharp edges and recover more high-frequency details. Experiments show that the proposed method improves the quality of the reconstructed image and provides more details.
REFERENCES
[1]. K. I. Kim and Y. Kwon. Single-image super-resolution using sparse regression and natural image prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(6): 1127-1133.
[2]. R. Timofte, V. De Smet, and L. Van Gool. Anchored neighborhood regression for fast example-based super-resolution. Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013.
[3]. J. Yang, J. Wright, T. Huang, et al. Image super-resolution as sparse representation of raw image patches. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008, pp. 1-8.
[4]. J. Yang, J. Wright, T. Huang, et al. Image super-resolution via sparse representation. IEEE Transactions on Image Processing, 2010, 19: 2861-2873.
[5]. R. Zeyde, M. Elad, and M. Protter. On single image scale-up using sparse-representations. International Conference on Curves and Surfaces, 2010, pp. 711-730.
[6]. H. Yin, S. Li, and J. Hu. Single image super resolution via texture constrained sparse representation. 2011, pp. 1161-1164.
[7]. W. Dong, L. Zhang, G. Shi, et al. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Transactions on Image Processing, 2011, 20(7): 1838-1857.
[8]. E. J. Candès and D. L. Donoho. Curvelets - a surprisingly effective nonadaptive representation for objects with edges. Curve and Surface Fitting, 2000.
[9]. A. M. Bruckstein, D. L. Donoho, and M. Elad. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Review, 2009, 51(1): 34-81.
[10]. M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 2006, 15(12): 3736-3745.
[11]. M. Protter, M. Elad, H. Takeda, et al. Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Transactions on Image Processing, 2009, 18(1): 36-51.
[12]. H. Takeda, S. Farsiu, and P. Milanfar. Kernel regression for image processing and reconstruction. IEEE Transactions on Image Processing, 2007, 16(2): 349-366.
[13]. X. Gao, W. Lu, D. Tao, et al. Image quality assessment based on multiscale geometric analysis. IEEE Transactions on Image Processing, 2009, 18(7): 1409-1423.