The International Journal of Multimedia & Its Applications (IJMA) Vol.8, No.2, April 2016
DOI: 10.5121/ijma.2016.8201
MULTILINEAR KERNEL MAPPING FOR FEATURE
DIMENSION REDUCTION IN CONTENT BASED
MULTIMEDIA RETRIEVAL SYSTEM
Vinoda Reddy¹, Dr. P. Suresh Varma² and Dr. A. Govardhan³
¹Associate Professor & HOD, CSE Dept., SIT, Gulbarga, Karnataka, India
²Professor, Dean & Principal, University College of Engineering, Adikavi Nannaya University, Rajahmundry, India
³Professor & Director, JNTU, Hyderabad, Telangana, India
ABSTRACT
In content-based multimedia retrieval, multimedia information is processed in order to obtain descriptive features. A descriptive representation of features results in a very large feature count, which in turn causes processing overhead. To reduce this overhead, various dimensionality reduction approaches have been applied, among which PCA and LDA are the most widely used. However, these methods do not reflect the significance of the feature content in terms of the inter-relation among all dataset features. By performing dimension reduction based on a histogram transformation, features of low significance can be eliminated. In this paper, we propose a feature dimensionality reduction approach based on multi-linear kernel (MLK) modeling. A benchmark dataset is used for the experimental work, and the proposed method is observed to improve on the conventional system in the analysis.
KEYWORDS
Dimensionality reduction, Multimedia retrieval, PCA, MLK-DR, Weizmann dataset
1. INTRODUCTION
In a multimedia retrieval approach, the representation of the feature set is of central importance. A statistical approach was suggested in [2],[3], which presents a pattern subspace method for automatic pattern recognition. It models a sequence of video representations as a set of individual patterns, represented in a subspace, with iterative principal component analysis (PCA) used for learning the principal components. In another major study outlined in [4],[5], PCA was used together with locally linear embedding (LLE) and orthogonal locality preserving projections (OLPP); three typical manifold-embedding dimensionality reduction methods were suggested. Based on the data distribution in the OLPP subspace, a locally adjusted robust regression (LARR) method that learns to predict more accurate information retrieval was suggested. To obtain a rough prediction over selective features, support vector machines and local support vector regression within a limited adjustment range were employed. The second category of information retrieval includes appearance-based approaches. Using appearance information, an intuitive method for analyzing the features of multimedia video was suggested. Young H. Kwon [6] used a visual representation of the model to produce anthropological features. In this approach, the primary features of the subject were used as representative elements, and the proportions of these features were calculated to distinguish the different pattern categories. A secondary feature analysis, using geographical mapping of the image information, was used to guide the measurement. June Da Xia [7] suggested an active appearance model (AAM) feature pattern recognition method used to extract patterns. Each pattern derives feature points, and the feature area is divided into ten subspaces. A patch-based model, named the kernel patch method, suggested by Shuicheng Yan et al. was presented in [8]. This method models global Gaussian mixture models (GMM) for a maximum of two videos, and creates a relational mapping with empirical coding using the Kullback-Leibler divergence. A weakening of the learning process called "inter-instrument synchronization" was suggested, and kernel regression is employed to assess the pattern. The third category includes frequency-based approaches. In video processing and pattern recognition, frequency-domain analysis features are the most popular. A biologically inspired, logically defined approach for feature extraction in pattern recognition was presented in [9],[10]. Unlike previous works, Guo [9] suggested a bio-inspired model based on imitation of the human visual process by applying Gabor filters. A Gabor filter is a linear edge-detection filter used for video processing. The frequency and orientation representations of the Gabor filter are similar to those of the human visual system, and its representation and discrimination have been found to be particularly suitable for structure. Although PCA-based coding is applied in many applications, it represents a selective approach to operations and feature selection whose feature reduction affects the accuracy of the system. To overcome the problems of the conventional PCA approach, a multi-linear kernel coding is proposed.
2. DIMENSIONAL REDUCTION IN CBMR
A general multimedia retrieval system operates in three phases: training, testing and classification. In the training phase, the features of the various multimedia video data are extracted and trained into the database. In the test phase, the extracted features of a sample given as a query are used for classification. The classifier operates by comparing the query sample with the database samples. A general multimedia retrieval system is shown in the figure below:
Fig.1 General multimedia retrieval system
2.1. Preprocessing
Depending on the application, multimedia pre-processing includes alignment (translation, rotation, scaling) and light normalization/correction. These pre-processed data are used for coarse multimedia detection so as to improve the robustness of feature extraction and retrieval.
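As an illustration of this step, the sketch below shows one possible pre-processing routine using OpenCV. The frame size, crop box and histogram equalization are illustrative assumptions made for the sketch, not parameters reported in this paper.

```python
import cv2
import numpy as np

def preprocess_frame(frame, out_size=(128, 128), crop_box=None):
    """Resize, optionally crop, convert to gray scale and normalize lighting.

    out_size and crop_box are hypothetical defaults, not values from the paper.
    """
    if crop_box is not None:                      # crop to a region of interest
        x, y, w, h = crop_box
        frame = frame[y:y + h, x:x + w]
    frame = cv2.resize(frame, out_size)           # align all samples to one size
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                 # simple light normalization
    return gray.astype(np.float32) / 255.0

def load_video_frames(path, max_frames=None):
    """Read and pre-process frames from a video file."""
    cap = cv2.VideoCapture(path)
    frames = []
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok or (max_frames is not None and len(frames) >= max_frames):
            break
        frames.append(preprocess_frame(frame))
    cap.release()
    return frames
```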
2.2. Feature Extraction
Multimedia content is represented by structural features that facilitate the recognition process. The aim is a compact feature set in which non-discriminative content is eliminated. Histogram features, dominant values, frequency features, color features, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), kernel PCA (KPCA), Local Binary Patterns (LBP) and Independent Component Analysis (ICA) are all used as features or as transforms over the pre-processed video samples, and the resulting feature sets are represented so as to reduce the dimensionality of the features extracted from the pre-processed sample. PCA, the principal component analysis, transforms the data into a new coordinate system in which the direction of largest variance becomes the first coordinate and the direction of second-largest variance the second, and so on. The projections onto the directions of largest variance are called the principal components and represent the principal features.
The process of PCA is outlined as follows (a minimal sketch of these steps is given after the list):
1. The mean of the data along each dimension is computed.
2. The mean is subtracted from each dimension so that the data is zero-centred.
3. The covariance matrix is calculated.
4. The covariance between all pairs of dimensions is computed.
5. The eigenvectors and eigenvalues of the (square) covariance matrix are calculated.
6. The eigenvalues and their eigenvectors are sorted from highest to lowest.
7. This orders the components by importance.
8. The original data is multiplied by the selected set of eigenvectors to obtain the new, reduced data set.
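The following NumPy sketch mirrors the steps listed above. The function name, variable names and the choice of k are illustrative assumptions, not part of the original method.

```python
import numpy as np

def pca_reduce(X, k):
    """Reduce an (M samples x N features) matrix X to k principal components."""
    mean = X.mean(axis=0)                 # steps 1-2: mean along each dimension
    Xc = X - mean                         # zero-centre the data
    C = np.cov(Xc, rowvar=False)          # steps 3-4: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # step 5: eigenpairs (C is symmetric)
    order = np.argsort(eigvals)[::-1]     # steps 6-7: sort from highest to lowest
    W = eigvecs[:, order[:k]]             # keep the k most important components
    return Xc @ W, W, mean                # step 8: project onto the new basis

# Usage with hypothetical sizes: 50 samples of 1000-dimensional features, reduced to 20
X = np.random.rand(50, 1000)
X_red, W, mu = pca_reduce(X, k=20)
print(X_red.shape)   # (50, 20)
```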
For the PCA operation, each N×N dataset sample I(x, y) is rearranged as a vector of dimension $N^2$. A database created with M samples is then mapped into this high-dimensional space as $\Gamma_1, \Gamma_2, \Gamma_3, \ldots, \Gamma_M$. The average of the sample dataset is defined as

$$\Psi = \frac{1}{M}\sum_{i=1}^{M}\Gamma_i \qquad (1)$$

Each sample is then expressed as its deviation from this average, represented as $\Phi_i = \Gamma_i - \Psi$. The covariance matrix, defined as the expected value of $\Phi\Phi^T$, is calculated as

$$C = \frac{1}{M}\sum_{n=1}^{M}\Phi_n\Phi_n^T \qquad (2)$$

Here each mean-subtracted sample $\Phi_n$ is considered independently. If all the samples of the dataset are perfectly normalized, every $\Phi_n$ has zero mean; cross terms between independent samples then have an expected value of zero, and only the variation of the time-aligned characteristics across samples contributes to the covariance.
The above illustration allows us to represent the covariance matrix in another form. Let $A = [\Phi_1, \Phi_2, \ldots, \Phi_M]$ be the matrix of mean-subtracted samples; the expression in Eq. (2) can then be written as
$$C = \frac{1}{M}AA^T \qquad (3)$$

As the factor 1/M affects only the scaling of the eigenvectors, we can leave it out of the calculation, giving

$$C = AA^T \qquad (4)$$

Given the covariance matrix C, the optimal set of eigenvectors and eigenvalues is computed to characterize the variation between the dataset samples. Consider an eigenvector $u_k$ of C satisfying the condition

$$Cu_k = \lambda_k u_k \qquad (5)$$

$$\lambda_k = u_k^T C u_k \qquad (6)$$

The eigenvectors are orthogonal and normalized, hence

$$u_l^T u_k = \begin{cases}1 & l = k\\ 0 & l \neq k\end{cases} \qquad (7)$$

Combining Eq. (2) and (7), Eq. (6) thus becomes

$$\lambda_k = \frac{1}{M}\sum_{n=1}^{M}\left(u_k^T\Phi_n\right)^2 \qquad (8)$$
Equation (8) shows that the eigenvalue associated with each eigenvector expresses the variance of the sample dataset along that direction. By selecting the eigenvectors with the largest eigenvalues as the basis, the major vectors that express the greatest variance are obtained. The PCA dimensionality reduction algorithm therefore reduces the dimensions of the feature set by considering only the internal variation of that particular sample; the inter-class features of the other frames in a sequence are not considered.
3. MULTI-LINEAR KERNEL (MLK) CODING
This coding is processed in two phases: 1) a training phase and 2) a testing phase. In the training phase, the training process builds a database that holds the various multimedia features extracted from the different video samples. In the test phase, the test video sample is processed for feature extraction, and the SVM classifier compares these features with the training features retrieved from the database; the matching result gives the multimedia feature set of the test video. The block diagram of the proposed work is shown below:
Fig.2 Block diagram of proposed approach
The proposed multimedia retrieval system is divided into four operational phases:
1) Pre-processing
2) Feature extraction using a 2D Gabor filter and histogram
3) Feature vector dimensionality reduction using MLK-DR
4) Multimedia retrieval using an SVM classifier
The proposed approach is a multimedia video retrieval system used to estimate multimedia information from the dataset. Multimedia video is acquired from the database sources, and the retrieval system processes the video input into a multimedia video feature set. For a given multimedia video sample, histogram features are estimated and multi-linear dimension reduction (ML-DR) is used in association with a 2D Gabor filter. In the pre-processing phase, the input video data is read from the database and processed frame by frame; the samples are resized, cropped and converted to gray scale for feature representation. After pre-processing, the multimedia video goes through the feature extraction process. At this stage, the Gabor filter is applied over all possible orientations so as to capture all possible variations; here the Gabor filter is applied at eight orientations and a histogram feature set is derived. In the next step, the extracted features are passed through the multi-linear approach to reduce the dimensionality of the feature vectors. An SVM classifier then uses these feature vectors to compare against the database and match the classes, classifying the multimedia information.
In the multi-linear dimensionality reduction (ML-DR) approach, the features are transformed into a multi-linear subspace that extracts the multi-dimensional features from the database. Conventional reduction operates in a single linear dimension, whereas ML-DR operates directly on the two-mode data, processing it through multiple functional objectives. Video filtering with the Gabor filter, followed by feature extraction and dimensionality reduction in ML-DR, yields the dimensionally reduced multimedia feature set, which is derived from the projection matrix. Figure 3(a), (b) shows the pictorial representation of conventional PCA and of multi-linear dimensional reduction coding; the proposed algorithm for the system is given after Fig. 3.
Fig.3(a). Principal Component analysis (b) Multi-Linear Dimension reduction
Algorithm
Input: multimedia database, multimedia video test sample
Output: the action class for the multimedia sample
Step 1: The multimedia video data is read from the database.
Step 2: The video is resized to uniform dimensions.
Step 3: The color video is converted to gray-scale video.
Step 4: The gray-scale multimedia video is cropped to the region of interest.
Step 5: 2D Gabor filters and the histogram method are applied to the cropped video to extract the multimedia features (a sketch of this step is given after the algorithm).
Step 6: ML-DR is applied to the extracted features to reduce the dimensionality.
Step 7: The training features available in the database are used for classification with the SVM classifier.
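The sketch below illustrates Step 5, a bank of 2D Gabor filters at eight orientations whose responses are pooled into histograms. The kernel size, sigma, wavelength and bin count are assumptions made for the sketch, not parameters reported in the paper.

```python
import cv2
import numpy as np

def gabor_histogram_features(gray, n_orientations=8, bins=32):
    """Apply a 2D Gabor filter bank at n_orientations and pool each response
    into a histogram, as in Step 5 of the algorithm above."""
    features = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations            # orientation of the filter
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(gray, cv2.CV_32F, kernel)
        hist, _ = np.histogram(response, bins=bins)   # histogram of responses
        features.append(hist.astype(np.float32))
    feat = np.concatenate(features)
    return feat / (np.linalg.norm(feat) + 1e-8)       # normalize the feature vector

# Usage on a hypothetical pre-processed gray-scale frame
gray = np.random.rand(128, 128).astype(np.float32)
print(gabor_histogram_features(gray).shape)           # (8 * 32,) = (256,)
```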
PCA operates in a one-dimensional mode, whereas ML-DR works in multi-mode operations. For a given feature space of a single class, PCA evaluates the key components individually, while the features of the remaining classes are not considered. In ML-DR, the dataset is processed across all the dimension variations. The pseudo code for the ML-DR coding is given below.
In machine learning, learning algorithms analyse the data and recognize patterns associated with the learning networks. Classification and regression analysis are used in the supervised learning model. Given a training set in which each sample is marked as belonging to one of two categories, the SVM learning algorithm builds a model that assigns a new instance to the class with the maximum correlation; it is a non-probabilistic binary linear classifier. The SVM model represents each sample as a point in space, mapped so that the distinct categories of features are separated by a gap that is as wide as possible; a new instance is mapped into the same space and its class is predicted according to the side of the boundary on which it falls. In addition to performing linear classification, the SVM can perform non-linear classification by using a kernel to map the information into a high-dimensional feature space. More formally, the SVM constructs a separating hyperplane in a high, possibly infinite, dimensional space, which determines the extent of the feature mapping. Intuitively, a good separation is achieved by the hyperplane with the largest distance to the nearest training data of any class (the functional margin), since in general a larger margin lowers the generalization error of the classifier. The original problem is often stated in a finite-dimensional space in which the sets to be discriminated are not linearly separable; for easier formulation, the original data space is therefore mapped into a high-dimensional space. The SVM mapping is computed through dot products in the original space, expressed in terms of a kernel function K(x, y) of the feature values used to make a decision. A hyperplane in the high-dimensional space is defined as the set of points whose dot product with a vector in that space is constant. The projection plane is a linear combination, with parameters $\alpha_i$, of the feature vectors of the video samples stored in the database. With this choice of projection plane, the features of a point x are mapped into the projection plane defined by the relation

$$\sum_i \alpha_i K(x_i, x) = \text{constant} \qquad (9)$$

Note that as K(x, y) becomes smaller with increasing distance, each term in the sum measures the degree of proximity of the test point x to the corresponding database point $x_i$.
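As an illustration of this kernel decision rule, the sketch below trains a kernel SVM on hypothetical MLK-DR feature vectors using scikit-learn. The RBF kernel, its parameters, the 50-dimensional feature size and the four action labels are assumptions for the sketch, not values taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical reduced feature vectors: 20 training samples over 4 action
# classes (bend, jump, run, walk), each described by 50 MLK-DR features.
rng = np.random.default_rng(0)
X_train = rng.random((20, 50))
y_train = np.repeat(["bend", "jump", "run", "walk"], 5)

# Kernel SVM: decision values are sums of alpha_i * K(x_i, x) as in Eq. (9).
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

X_query = rng.random((1, 50))          # reduced features of the query video
print(clf.predict(X_query))            # predicted action class
```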
Pseudo Code (an illustrative sketch follows the pseudo code):
Input: Set of features with larger dimensions
Output: Feature set with smaller dimensions
Step 1: A feature set in an M x N-dimensional space is taken.
Step 2: The mean along all 'n' dimensions is evaluated.
Step 3: A new matrix is created by subtracting the mean from each value of the dataset.
Step 4: The covariance matrix is evaluated.
Step 5: The histogram vectors and their respective histogram values are evaluated.
Step 6: The values are sorted in ascending/descending order and k sorted histogram values are chosen from the n x k dimensional set.
Step 7: The same procedure is carried out for each class feature set.
Step 8: Finally, the intra-class and inter-class histogram values are computed and taken as the new projection matrix.
Step 9: The original values are then transformed to the new subspace by multiplying with the reduced subspace matrix.
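The sketch below is one possible reading of the pseudo code above. In particular, the way the intra-class and inter-class histogram values are combined into a per-feature significance score and a projection matrix is our assumption for illustration, not the authors' exact rule.

```python
import numpy as np

def mlk_dr_sketch(X, y, k, bins=16):
    """Illustrative reading of the MLK-DR pseudo code.

    X : (M samples x N features), y : class label per sample, k : retained dims.
    """
    Xc = X - X.mean(axis=0)                            # steps 2-3: mean-subtract
    _cov = np.cov(Xc, rowvar=False)                    # step 4: covariance (computed, not reused here)

    # Steps 5-8: per-feature histogram energy within each class (intra-class)
    # and over all classes (inter-class), combined into one significance score.
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        inter_hist, _ = np.histogram(Xc[:, j], bins=bins, density=True)
        intra = 0.0
        for c in np.unique(y):
            h, _ = np.histogram(Xc[y == c, j], bins=bins, density=True)
            intra += np.square(h).sum()
        scores[j] = np.square(inter_hist).sum() + intra

    order = np.argsort(scores)[::-1][:k]               # step 6: sort, keep k best
    P = np.eye(X.shape[1])[:, order]                   # step 8: projection matrix
    return Xc @ P, order                               # step 9: transform to the subspace

# Usage on hypothetical data: 20 samples, 4 classes, 200 features reduced to 30
X = np.random.rand(20, 200)
y = np.repeat(np.arange(4), 5)
X_red, kept = mlk_dr_sketch(X, y, k=30)
print(X_red.shape)                                     # (20, 30)
```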
4. EXPERIMENTAL RESULTS
The proposed system is developed with Matlab tools and tested over the Weizmann dataset [7]. A simulation is carried out to evaluate the performance of the proposed MLK-DR through a comparative analysis against PCA-based feature dimension reduction. For the Weizmann dataset, features are extracted and the histogram is computed as MI-HIST [21]. Figure 4 illustrates the test dataset.
Fig 4: Dataset with (a) Bending (B), (b) Jumping (J), (c) Running (R) and (d) Walking (W) sample
The samples are captured at 180x144 resolution, using a static camera with a homogeneous outdoor background. The processing of a test sample containing a running action is illustrated in Fig. 5. The features obtained using MI-HIST [21] are presented in Table 1.
Table 1. Comparative analysis of HoG [22], HIST [23] and the proposed MI-HIST [21] for the running sample

Observation                            HoG [22]   HIST [23]   MI-HIST [21]
Original sample size                   478720     478720      478720
Redundant coefficients                 436582     457581      464260
HIST features for motion components    42138      21139       14460
The MI-HIST histogram features are used for dimensionality reduction, with PCA, LDA and MLK-DR applied over them. In the PCA-based dimensionality reduction technique, the extracted features are mean-normalized and the K principal features are selected. LDA coding is then applied to the selected features using the inter-class correlation to obtain the minimum feature count. However, PCA and LDA process the dimensionality reduction as two-dimensional coding. To further reduce the dimensions of the features, MLK-DR was implemented; the resulting feature counts from all three methods are shown in Tables 5 to 8 for the four different action models. For all four classes, MLK-DR was applied to the databases created for each category, and a generalized feature set is given in Table 2.
Table 2. Dimensionality reduced Feature set of total data base
where F_ij defines the feature for the i-th class and j-th sample. On testing, the feature set is selected; for the query test sample, the proposed approach yields a reduction to 4780 features. To evaluate the performance of the developed approach, the following parameters are used.
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (10)$$

Where,
TP = True positive (correctly identified)
FP = False positive (incorrectly identified)
TN = True negative (correctly rejected)
FN = False negative (incorrectly rejected)
The simulation model uses 5 subjects per class, a total of 20 subjects over the four categories, for training. During the testing process, the sample is processed and the extracted query histogram features are passed to the SVM classifier. The classification results obtained with the SVM classifier are described in Table 3.
Table 3. Classification Results
From Table 3, the confusion matrix can be written as
Fig.6. Confusion Matrix
From the confusion matrix, the accuracy can be calculated as
$$\text{Accuracy}(\%) = \frac{4 + 10}{4 + 1 + 5 + 10} \times 100 = 70$$
For its evaluation, the suggested approach uses the parameters of sensitivity, specificity, recall, precision and F-measure. The mathematical expressions used to compute these measured parameters are defined as follows.
The sensitivity is measured as the ratio of true positives (TP) to the sum of true positives (TP) and false negatives (FN):

$$\text{Sensitivity} = \frac{TP}{TP + FN} \qquad (11)$$

The specificity is measured as the ratio of true negatives (TN) to the sum of true negatives (TN) and false positives (FP):

$$\text{Specificity} = \frac{TN}{TN + FP} \qquad (12)$$

Precision is the ratio of TP to the sum of TP and FP, while recall is the ratio of TP to the sum of TP and FN. The following expressions give the precision and recall measurements:

$$\text{Recall} = \frac{TP}{TP + FN} \qquad (13)$$

$$\text{Precision} = \frac{TP}{TP + FP} \qquad (14)$$

F-measure is the combined measure of precision and recall. F-measure is also called the balanced F-score, and is expressed as

$$F\text{-measure} = \frac{2 \times \text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}} \qquad (15)$$
Table 4. Parametric evaluation of the developed system for processing efficiency
The retrieval observations obtained for the different test actions in the Weizmann dataset were analysed through the feature count and the overhead. For the obtained features, the overhead was measured as

$$\text{Overhead} = \frac{F_{dec}}{F_{org} - F_{dec}} \qquad (16)$$

where $F_{org}$ denotes the original feature count and $F_{dec}$ the decimated (reduced) feature count. The decimated feature count and the processing overhead for the running sample, for the proposed approach and for the conventional approaches, are shown in Table 5; a short numeric check of Eq. (16) against the table follows it.
Table 5. Feature count & overhead for the running sample
Approach   F_org   F_dec   Overhead
PCA        14460   6140    73.80%
LDA        14460   5380    59.25%
MLK-DR     14460   4780    49.38%
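The overhead column can be reproduced directly from Eq. (16); the following minimal check uses the running-sample rows of Table 5.

```python
def overhead(f_org, f_dec):
    """Overhead of Eq. (16): retained features relative to the removed ones, in percent."""
    return 100.0 * f_dec / (f_org - f_dec)

# Running-sample rows of Table 5
for name, f_dec in [("PCA", 6140), ("LDA", 5380), ("MLK-DR", 4780)]:
    print(name, round(overhead(14460, f_dec), 2), "%")
# PCA 73.8 %, LDA 59.25 %, MLK-DR 49.38 %
```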
Fig 7. Feature count of Running sample
Fig. 7 shows the feature count for the PCA and LDA techniques in comparison with the proposed MLK-DR approach for the running sample. Compared with PCA and LDA, the MLK-DR retrieval accuracy is improved while the feature count is reduced, and the computational time is also reduced. The retrieval accuracy and the overhead outlined in Table 5 are represented in Fig. 8, which shows the overhead of the proposed MLK-DR in comparison with PCA and LDA; the proposed approach is observed to have about 23% less overhead.
Fig 8. Overhead of Running sample
Table 6. Feature count & overhead for the walking sample
Approach   F_org   F_dec   Overhead
PCA        37794   15600   70.29%
LDA        37794   14560   62.71%
MLK-DR     37794   12748   50.20%
Fig 9. Feature count of Walking sample
Fig. 9 illustrates the feature count of the PCA and LDA techniques in comparison with the proposed MLK-DR for the walking sample. Compared with PCA and LDA, the MLK-DR retrieval accuracy is improved, while the feature count and the computational time are reduced. Compared with PCA, MLK-DR has 3252 fewer features.
Fig 10. Overhead of Walking sample
Fig. 10 illustrates the overhead details of the proposed MLK-DR; compared with PCA and LDA, the proposed approach has reduced overhead. Compared with PCA, MLK-DR has 20% less overhead, and 10% less when compared with LDA.
Table 7. Feature count & overhead for the jumping sample
Approach   F_org   F_dec   Overhead
PCA        20400   8500    71.43%
LDA        20400   7520    56.75%
MLK-DR     20400   6250    47.38%
Fig 11.Feature count of Jumping sample
Fig. 11 illustrates the feature count details of the jumping sample for the proposed MLK-DR along with the earlier PCA and LDA techniques. Compared with PCA and LDA, MLK-DR has a reduced feature count, which reduces the computational time while improving the retrieval accuracy.
Fig 12.Overhead of Jumping sample
Fig. 12 illustrates the overhead details of the proposed MLK-DR; compared with PCA and LDA, the proposed approach has reduced overhead. Compared with PCA, MLK-DR has 24% less overhead, and 15% less when compared with LDA.
Table 8. Feature count & overhead for the bending sample
Approach   F_org   F_dec   Overhead
PCA        18560   7150    62.56%
LDA        18560   6320    52.35%
MLK-DR     18560   5870    45.28%
Fig 13. Feature count of Bending sample
The reduced feature count for the bending sample using MLK-DR, along with PCA and LDA, is shown in Fig. 13. The proposed approach has a reduced feature count when compared with the earlier approaches.
Fig 14.Overhead of Bending sample
The overhead of the proposed MLK-DR is lower than that of PCA and LDA for the bending sample; the details are presented in Fig. 14. Compared with the earlier approaches, MLK-DR has 17% less overhead.
5. CONCLUSION
The proposed approach derives all possible variations of the given frame details. By applying the histogram over different orientations, features for the related frames are derived. In each orientation, only a few features are of a dominating nature. In the suggested process, the histogram is applied to the video sample and the dominant feature coefficients are derived. In the proposed approach, the histogram features are extracted only after obtaining all possible variations, instead of being extracted directly from the multimedia content as in the conventional approach. The proposed approach applies the multi-linear dimensionality reduction method, which processes the intra-group set while also considering the inter-class features in the dataset. Whereas the traditional approach reduces only the intra-class features, the suggested approach minimizes the dimension in consideration of the inter-class relation as well. This gives optimality of feature estimation in multiple directions, resulting in a higher dimension reduction.
6. REFERENCES
[1] Paul V., Jones M. J., "Robust Real-Time Pattern Detection," International Journal of Computer Vision, Vol. 57, pp. 137-154, 2004.
[2] Geng X., Zhou Z.-H., Zhang Y., Li G., Dai H., "Learning from multimedia representation patterns for automatic age recognition," in ACM Conf. on Multimedia, pp. 307-316, 2006.
[3] Guodong Guo, Guowang Mu, Yun Fu, Charles Dyer, "A Study on Automatic Age Estimation using a Large Database," Computer Vision, 12th International Conference, pp. 1986-1991, IEEE, 2009.
[4] Guo G., Fu Y., Dyer C. R., Huang T. S., "Video-Based Human Pattern Recognition by Manifold Learning and Locally Adjusted Robust Regression," IEEE Trans. on Video Processing, Vol. 17, pp. 1178-1188, 2008.
[5] Guo G., Fu Y., Huang T. S. and Dyer C. R., "Locally Adjusted Robust Regression for Human Age Recognition," IEEE Workshop on Applications of Computer Vision, pp. 1-6, 2008.
[6] Matthew Cooper, Ting Liu, and Eleanor Rieffel, "Video Segmentation via Temporal Pattern Classification," IEEE Transactions on Multimedia, Vol. 9, pp. 610-618, 2007.
[7] Asuman Günay and Vasif V. Nabiyev, "Age Estimation Based on AAM and 2D-DCT Features of Facial Images," International Journal of Advanced Computer Science and Applications, Vol. 6, No. 2, 2015.
[8] Hiroyuki Takeda, Sina Farsiu, and Peyman Milanfar, "Kernel Regression for Image Processing and Reconstruction," IEEE Transactions on Image Processing, Vol. 16, No. 2, 2007.
[9] Guodong Guo, Guowang Mu, Yun Fu, Thomas S. Huang, "Human Age Estimation Using Bio-inspired Features," Conference on Computer Vision and Pattern Recognition, pp. 112-119, 2009.
[10] Serre T., Wolf L., Bileschi S., Riesenhuber M. and Poggio T., "Robust Object Recognition with Cortex-Like Mechanisms," IEEE Trans. on PAMI, 29(3): 411-426, 2007.
[11] Feng Gao, Haizhou Ai, "Pattern Classification on Consumer Videos with Gabor Feature and Fuzzy LDA Method," Advances in Biometrics, Third International Conference, ICB, Alghero, Italy, Proceedings, pp. 132-141, 2009.
[12] Andreas Lanitis, "Multimedia Biometric Templates and Representation: Problems and Challenges," Artificial Intelligence Applications and Innovations, AIAI, 2009.
[13] Ramesha K. et al., "Feature Extraction based Pattern Recognition, Gender and Pattern Classification," International Journal on Computer Science and Engineering, Vol. 02, pp. 14-23, 2010.
[14] Stephen Reder, Kathryn Harris, Kristen Setzler, "The Multimedia Adult Learner Corpus," TESOL Quarterly, Vol. 37, pp. 546-557, 2003.
[15] Pettersson Olle, "Implementing LVQ for Pattern Classification," Master of Science Thesis, 2007.
[16] Petra Grd, "Introduction to Human Age Estimation Using Face Images," Faculty of Organization and Informatics, 2013.
[17] Tudor Barbu, "An Automatic Unsupervised Pattern Recognition Approach," Proceedings of the Romanian Academy, Series A, Vol. 7, 2006.
[18] Guodong Guo, Yun Fu, Thomas S. Huang, Charles R. Dyer, "Locally Adjusted Robust Regression for Human Age Estimation," IEEE Workshop on Computer Vision, pp. 1-8, 2008.
[19] Sheng Huang, Dan Yang, Dong Yang, Ahmed Elgammal, "Collaborative Discriminant Locality Preserving Projections With its Application to Face Recognition," Computer Vision and Pattern Recognition, 2014.
[20] Jinwei Wang, Xirong Ma, Jizhou Sun, Ziping Zhao and Yuanping Zhu, "Puzzlement Detection from Facial Expression Using Active Appearance Models and Support Vector Machines," International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 7, pp. 349-360, 2014.
[21] Vinoda Reddy, P. Suresh Varma, A. Govardhan, "Recurrent Energy Coding For Content Based Multimedia Retrieval System," International Journal of Multimedia and User Design & User Experience, Vol. 24, 2015.
[22] A. Kläser, M. Marszałek, and C. Schmid, "A spatio-temporal descriptor based on 3-D-gradients," Brit. Mach. Vision Conf., pp. 995-1004, 2008.
[23] Ling Shao, Simon Jones, and Xuelong Li, "Efficient Search and Localization of Human Actions in Video Databases," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 24, No. 3, pp. 504-512, March 2014.
Authors
Dr. A. Govardhan is presently the Principal at JNTUH College of Engineering
Hyderabad and Executive Council Member, Jawaharlal Nehru Technological
University Hyderabad (JNTUH), India. He did his B.E.(CSE) from Osmania
University College of Engineering, Hyderabad in 1992, M.Tech from Jawaharlal
Nehru University(JNU), New Delhi in 1994 and Ph.D from Jawaharlal Nehru
Technological University, Hyderabad in 2003. He served and held several
Academic and Administrative positions including Director of Evaluation,
Principal, Head of the Department, Chairman and Member of Boards of Studies
and Students’ Advisor. He is the recipient of 26 International, National and
State Awards including A.P. State Government Best Teacher Award, Bharat
Seva Ratna Puraskar, CSI Chapter Patron Award, Bharat Jyoti Award, International Intellectual
Development Award and Mother Teresa Award for Outstanding Services, Achievements, Contributions,
Meritorious Services, Outstanding Performance and Remarkable Role in the field of Education and Service
to the Nation. He is a Chairman and Member on several Boards of Studies of various Universities. He was
the Chairman of CSI Hyderabad Chapter during 2013-2014. He is a Member on the Editorial Boards for
Eleven International Journals. He is a Member of several Advisory Boards & Academic Boards, a
Committee Member for several International and National Conferences including ASUC-2014, Dubai
(UAE), ICT 2014, Singapore, CMIT-2014, Zurich (Switzerland), PAKDD2010, IIIT Hyd. (India) and
IKE2010, Las Vegas, Nevada (USA). He has 2 Monographs by Lambert Academic Publishing, Published
in USA. He has guided 56 Ph.D theses, 135 M.Tech projects and he has published 350 research papers at
International/National Journals/Conferences including IEEE, ACM, Springer, Elsevier and InderScience.
He has organized 1 International Conference, 20 Workshops and 1 Refresher Course. He has delivered
more than 100 Keynote addresses and invited lectures. He has 21 years of Teaching and Research
experience. He served as Co-Convener for EAMCET2009, Chief Regional Coordinator for EAMCET2010
and EAMCET2011. He also served as Vice-Chairman, CSI Hyderabad Chapter and IT Professional Forum,
A.P. He is a member in several Professional and Service Oriented Bodies including ACM, IEEE and CSI.
His areas of research include Databases, Data Warehousing & Mining and Information Retrieval Systems.
Dr. P. Suresh Varma is a Professor of Computer Science and Engineering and Dean, Faculty of Engineering and Technology, Adikavi Nannaya University. He is presently Principal, University College of Engineering, and was Founder Dean, College Development Council, from 14th June 2012 to 13th June 2015 at Adikavi Nannaya University. Professor Varma has been engaged in teaching, research and research supervision at Adikavi Nannaya University, Rajahmundry, since October 2008. He secured his M.Tech in Computer Science and Technology from Andhra University in 1999 and his Ph.D in Computer Science and Engineering from Acharya Nagarjuna University in 2008. He has supervised the research work of a number of Ph.D and M.Phil scholars and has published and lectured extensively on communication networks, data mining, cloud computing, big data and image processing. In 2010 the Government of Andhra Pradesh honoured him with the Best Teacher Award on the occasion of Teachers' Day. He has published over 125 papers in journals and conferences. Six Ph.D.s have been awarded and four submitted under his guidance, and over 100 M.Tech./MCA theses have been supervised.
Vinoda Reddy is Associate Professor & Head of the Computer Science Department, Shetty Institute of Technology, Kalaburgi. She completed her Bachelor of Engineering degree at Rural Engineering College Bhalki, Visvesvaraya Technological University, Belagavi, Karnataka (1996), her M.Tech degree at the National Institute of Technology Surathkal, a Deemed University (2003), and is presently pursuing a Ph.D at Jawaharlal Nehru Technological University, Kukatpally, Hyderabad. She has a total of 19 years of teaching experience and has published papers in international journals and national conferences. Her areas of research include multimedia data mining and information retrieval systems.

More Related Content

PDF
Comparision of Clustering Algorithms usingNeural Network Classifier for Satel...
PDF
Orientation Spectral Resolution Coding for Pattern Recognition
PDF
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...
PDF
Foliage Measurement Using Image Processing Techniques
PDF
An Improved Way of Segmentation and Classification of Remote Sensing Images U...
PDF
Density Driven Image Coding for Tumor Detection in mri Image
PDF
Q UANTUM C LUSTERING -B ASED F EATURE SUBSET S ELECTION FOR MAMMOGRAPHIC I...
PDF
IRJET- Surveillance for Leaf Detection using Hexacopter
Comparision of Clustering Algorithms usingNeural Network Classifier for Satel...
Orientation Spectral Resolution Coding for Pattern Recognition
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...
Foliage Measurement Using Image Processing Techniques
An Improved Way of Segmentation and Classification of Remote Sensing Images U...
Density Driven Image Coding for Tumor Detection in mri Image
Q UANTUM C LUSTERING -B ASED F EATURE SUBSET S ELECTION FOR MAMMOGRAPHIC I...
IRJET- Surveillance for Leaf Detection using Hexacopter

What's hot (19)

PDF
Texture Classification
PDF
An ensemble classification algorithm for hyperspectral images
PDF
OBJECT DETECTION, EXTRACTION AND CLASSIFICATION USING IMAGE PROCESSING TECHNIQUE
PDF
Multi Resolution features of Content Based Image Retrieval
PDF
Spectral Density Oriented Feature Coding For Pattern Recognition Application
PDF
National Flags Recognition Based on Principal Component Analysis
PDF
An Unsupervised Cluster-based Image Retrieval Algorithm using Relevance Feedback
PDF
Probabilistic model based image segmentation
PDF
Object-Oriented Approach of Information Extraction from High Resolution Satel...
PDF
Detection of leaf diseases and classification using digital image processing
PDF
International Journal of Engineering and Science Invention (IJESI)
PDF
I MAGE S UBSET S ELECTION U SING G ABOR F ILTERS A ND N EURAL N ETWORKS
PDF
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...
PDF
Facial recognition using modified local binary pattern and random forest
PDF
adaptive metric learning for saliency detection base paper
PDF
Image similarity using fourier transform
PDF
A divisive hierarchical clustering based method for indexing image information
PDF
F010224446
PDF
WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHES
Texture Classification
An ensemble classification algorithm for hyperspectral images
OBJECT DETECTION, EXTRACTION AND CLASSIFICATION USING IMAGE PROCESSING TECHNIQUE
Multi Resolution features of Content Based Image Retrieval
Spectral Density Oriented Feature Coding For Pattern Recognition Application
National Flags Recognition Based on Principal Component Analysis
An Unsupervised Cluster-based Image Retrieval Algorithm using Relevance Feedback
Probabilistic model based image segmentation
Object-Oriented Approach of Information Extraction from High Resolution Satel...
Detection of leaf diseases and classification using digital image processing
International Journal of Engineering and Science Invention (IJESI)
I MAGE S UBSET S ELECTION U SING G ABOR F ILTERS A ND N EURAL N ETWORKS
Content-based Image Retrieval Using The knowledge of Color, Texture in Binary...
Facial recognition using modified local binary pattern and random forest
adaptive metric learning for saliency detection base paper
Image similarity using fourier transform
A divisive hierarchical clustering based method for indexing image information
F010224446
WEB IMAGE RETRIEVAL USING CLUSTERING APPROACHES
Ad

Viewers also liked (10)

PPT
GABALL project
PDF
Review of black hole and grey hole attack
PDF
Leader follower formation control of ground vehicles using camshift based gui...
PDF
Error resilient for multiview video transmissions with gop analysis
PDF
An approach to improving edge
PPT
Gaball presentation final_bg
PDF
A MODEL TO CONVERT WAVE–FORM-TEXT TO LINEAR-FORM-TEXT FOR BETTER READABILITY ...
PDF
A N A LTERNATIVE G REEN S CREEN K EYING M ETHOD F OR F ILM V ISUAL E ...
PDF
A C OMPARATIVE S TUDY ON A DAPTIVE L IFTING B ASED S CHEME AND I NTERACT...
PDF
A NALYSIS OF P AIN H EMODYNAMIC R ESPONSE U SING N EAR -I NFRARED S PECTROSCOPY
GABALL project
Review of black hole and grey hole attack
Leader follower formation control of ground vehicles using camshift based gui...
Error resilient for multiview video transmissions with gop analysis
An approach to improving edge
Gaball presentation final_bg
A MODEL TO CONVERT WAVE–FORM-TEXT TO LINEAR-FORM-TEXT FOR BETTER READABILITY ...
A N A LTERNATIVE G REEN S CREEN K EYING M ETHOD F OR F ILM V ISUAL E ...
A C OMPARATIVE S TUDY ON A DAPTIVE L IFTING B ASED S CHEME AND I NTERACT...
A NALYSIS OF P AIN H EMODYNAMIC R ESPONSE U SING N EAR -I NFRARED S PECTROSCOPY
Ad

Similar to Multilinear Kernel Mapping for Feature Dimension Reduction in Content Based Multimedia Retrieval System (20)

PDF
Pca analysis
PPTX
Image recogonization
PPTX
DimensionalityReduction.pptx
PDF
pca.pdf polymer nanoparticles and sensors
PDF
Ijebea14 276
PDF
5 DimensionalityReduction.pdf
PDF
Unit_2_Feature Engineering.pdf
PDF
1376846406 14447221
PDF
A Novel Algorithm for Design Tree Classification with PCA
PDF
Survey on Supervised Method for Face Image Retrieval Based on Euclidean Dist...
PDF
Cs229 notes10
PPTX
Module-4_Part-II.pptx
PDF
Feature Engineering in Machine Learning
PDF
AIML_UNIT 2 _PPT_HAND NOTES_MPS.pdf
PDF
Comparison on PCA ICA and LDA in Face Recognition
PDF
Machine learning (11)
PDF
Principal component analysis and lda
PPTX
introduction to Statistical Theory.pptx
PPT
Fr pca lda
PDF
An Intelligent Approach for Effective Retrieval of Content from Large Data Se...
Pca analysis
Image recogonization
DimensionalityReduction.pptx
pca.pdf polymer nanoparticles and sensors
Ijebea14 276
5 DimensionalityReduction.pdf
Unit_2_Feature Engineering.pdf
1376846406 14447221
A Novel Algorithm for Design Tree Classification with PCA
Survey on Supervised Method for Face Image Retrieval Based on Euclidean Dist...
Cs229 notes10
Module-4_Part-II.pptx
Feature Engineering in Machine Learning
AIML_UNIT 2 _PPT_HAND NOTES_MPS.pdf
Comparison on PCA ICA and LDA in Face Recognition
Machine learning (11)
Principal component analysis and lda
introduction to Statistical Theory.pptx
Fr pca lda
An Intelligent Approach for Effective Retrieval of Content from Large Data Se...

Recently uploaded (20)

PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPTX
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
PPTX
web development for engineering and engineering
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
PDF
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
PPTX
Welding lecture in detail for understanding
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PPTX
Foundation to blockchain - A guide to Blockchain Tech
PPTX
Internet of Things (IOT) - A guide to understanding
PDF
Well-logging-methods_new................
PDF
Operating System & Kernel Study Guide-1 - converted.pdf
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PDF
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PDF
Digital Logic Computer Design lecture notes
PPTX
Lecture Notes Electrical Wiring System Components
PPTX
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
PPTX
OOP with Java - Java Introduction (Basics)
PPTX
UNIT 4 Total Quality Management .pptx
CYBER-CRIMES AND SECURITY A guide to understanding
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
web development for engineering and engineering
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
Welding lecture in detail for understanding
Embodied AI: Ushering in the Next Era of Intelligent Systems
Foundation to blockchain - A guide to Blockchain Tech
Internet of Things (IOT) - A guide to understanding
Well-logging-methods_new................
Operating System & Kernel Study Guide-1 - converted.pdf
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
Digital Logic Computer Design lecture notes
Lecture Notes Electrical Wiring System Components
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
Model Code of Practice - Construction Work - 21102022 .pdf
OOP with Java - Java Introduction (Basics)
UNIT 4 Total Quality Management .pptx

Multilinear Kernel Mapping for Feature Dimension Reduction in Content Based Multimedia Retrieval System

  • 1. The International Journal of Multimedia & Its Applications (IJMA) Vol.8, No.2, April 2016 DOI : 10.5121/ijma.2016.8201 1 MULTILINEAR KERNEL MAPPING FOR FEATURE DIMENSION REDUCTION IN CONTENT BASED MULTIMEDIA RETRIEVAL SYSTEM Vinoda Reddy1 , Dr.P.Suresh Varma2 and Dr.A.Govardhan3 1 Associate Professor & HOD, CSE Dept., SIT, Gulbarga, Karnataka, India, 2 Professor, Dean & Principal, University College of Engineering, Adikavi Nannaya University, Rajahmundry, India and 3 Professor & Director, JNTU, Hyderabad, Telangana, India. ABSTRACT In the process of content-based multimedia retrieval, multimedia information is processed in order to obtain descriptive features. Descriptive representation of features, results in a huge feature count, which results in processing overhead. To reduce this descriptive feature overhead, various approaches have been used to dimensional reduction, among which PCA and LDA are the most used methods. However, these methods do not reflect the significance of feature content in terms of inter-relation among all dataset features. To achieve a dimension reduction based on histogram transformation, features with low significance can be eliminated. In this paper, we propose a feature dimensional reduction approaches to achieve the dimension reduction approach based on a multi-linear kernel (MLK) modeling. A benchmark dataset for the experimental work is taken and the proposed work is observed to be improved in analysis in comparison to the conventional system. KEYWORDS Dimensional Reduction Multimedia retrieval, PCA,MLK-DR, Weizmann dataset 1.INTRODUCTION Multimedia retrieval approach, represents the feature set of more importance. A Statistical approach was suggested in [2],[3] which represents a pattern subspace method proposed for automatic pattern recognition. This pattern provides an idea of modeling a sequence of video representation which are set of individual patterns, represented in sub-space to provide an iterative principal component analysis (PCA) used for learning principal components. In another major study outlined in [4],[5], PCA, was used for locally linear embedding (LLE) and orthogonal Locality preserving projections (OLPP).Three typical manifold embedding dimensionality reduction methods were suggested. According to the data distribution using the subspace OLPP, a locally strong regression method (LARR) that learns to predict more accurate information retrieval was suggested. To get a rough prediction method of selective features using support vector machines and a local support vector regression, within a limited range of adjustment was focused. In the second category of information retrieval, the method includes presence-based approaches. Using presence information, intuitional method for analyzing the feature for multimedia video was suggested. Young H. Kwon [6] used a visual representation of the model to produce an anthropological features. In this approach, the primary features of the
  • 2. The International Journal of Multimedia & Its Applications (IJMA) Vol.8, No.2, April 2016 2 subject was used as a representative elements. The proportion of these features to distinguish different patterns categories were calculated. Secondary feature analysis, using geographical mapping of image information was used to guide the measurement. June Da Xia [7] suggested an active appearance models (AAM) feature pattern recognition method used to extract patterns. Each pattern derives feature point,where feature area is divided into ten subspace. A patch-based model name kernel patch method suggested by Shuicheng Yan et.al. was presented in [8]. This method models the global Gaussian mixture models (GMM) for a maximum of two videos, and created a relational mapping empirical coding using Kullback-Leibler divergence. A weakening of the learning process called “Inter-instrument synchronization” was suggested. Kernel regression is employed to assess the pattern. The third category includes frequency-based approach. In the process of video processing and pattern recognition, frequency domain analysis features are the most popular method. In Pattern recognition method to investigate a biologically inspired approach for feature extraction a logical approach was defined in [9, 10]. Unlike previous works, Guo [9] of the human visual process suggested bio-inspired model based on imitation by applying Gabor filters. A Gabor filter is a linear edge detection filter used for video processing. Gabor filter representation of frequency and orientation are similar to those of the human visual system approach, and representation and discrimination has been found to be particularly suitable for the structure. Although PCA-based coding is applied to many applications, this method represents, a more selective approach of operations and feature selection to reduce feature which effect the accuracy of the system. To overcome the problem of conventional PCA approach a multi-linear Kernel coding is proposed. 2.DIMENSIONAL REDUCTION IN CBMR General multimedia retrieval system operates in three phases, training, testing and classification. In the training phase, the process for various multimedia video datas trained into the database. In the test phase the extracts features of a sample is given as query for the classification. Classifier operates by comparison with samples given as query with database sample for classification. A general Multimedia retrieval system is as given in below figure; Fig.1 General multimedia retrieval system
  • 3. The International Journal of Multimedia & Its Applications (IJMA) Vol.8, No.2, April 2016 3 2.1. Preprocessing Depending on the application, multimedia pre-processing include: alignment (translation, rotation, scaling) and light normalization / correlation. These pre-processed data are used for coarse multimedia detection so as to improve the robustness in feature extraction and retrieval. 2.2. Feature Extraction Multimedia content represent structural feature or features to facilitate the recognition process. A compact set of interpersonal discrimination is focused to be eliminated. In a histogram feature extraction, outstanding values, frequency features, color features, the Principal Component Analysis (PCA), linear discriminant analysis (LDA), kernel PCA (KPCA), Local Binary Pattern (LBP), independent component analysis (ICA), include these as a feature, and these feature set are represented in order to reduce dimensionality of the feature extraction approach to implement the preprocessed video feature values from the pre-processed sample. PCA, the principle component analysis, process on the data and a new system feature values, to coordinate where the feature data of the second largest variance changes to the first variation. The major components are called for projection from the largest variance underlying to represent the principal features. The process of PCA is outlined as follows: 1. Differentiation of each data is over a mean value for reduced dimension is carried out. 2. Each dimension is minimized below the mean average value. 3. The covariance matrix is calculated. 4. All the different dimensions of covariance was computed between all possible values. 5. For the eigenvectors and covariance matrix, a square matrix of the eigen values is been calculated. 6. The eigen values and eigenvectors are sorted in highest to lower values. 7. This allows the components in order of importance. 8. The selected eigenvectors set by multiplying the original data with the new data set. PCA data set for the operation, for a N-to-N dataset sample of I(x, y) is a vector of dimension N to N which is defined as N2 . A database created with M samples is then mapped to a high dimensional space as Γ1,Γ2,Γ3,…. ΓM. the average of the sample dataset is defined as Ψ = ∑ Γ (1) Each dataset can be generalized to mean the average deviation from the dataset as Φ = Γ − Ψ represented. Covariance matrix ΦΦT , defined as the expected value which is calculated as, = ∑ Φ Φ (2) It set free each data sample is taken from each sample consisting of differently considered. If all the samples of dataset are normalized perfectly, a conclusion can be done that the Φt, the variances of every sample of dataset results in as sequentially aligned results.Φtzero covariance with the means of action lies.ΦThe implications of these beliefs is the idea that a With this observation, resulting in an expected value of zero can be said t an independent, combining theΦthat each person is multiplied across characteristics of the time alignment. The above illustration allows us to represent the covariance matrix in another form. Let A=[Φ1,Φ2,…ΦM], be the covariance matrix, the expression in Eq.(2) can be written as,
  • 4. The International Journal of Multimedia & Its Applications (IJMA) Vol.8, No.2, April 2016 4 = ( ) (3) As the factor 1/M affects only the scaling eigenvector, we can leave the calculation resulting in the scaling factor = ( ) (4) Given a covariance matrix C, a computation to the optimal set estimation for, Eigen dataset variations between the dataset samples and to characterize a set of eigenvectors and eigenvalues. Consider an eigenvector of C satisfying the condition, given as, = (5) = (6) The eigenvectors are orthogonal and normalized hence = 1 = 0 ≠ (7) Combining Eq. (2) and (7), Eq. (6) thus become = ∑ ( Γ ) (8) It is represented as a set ofdata taken from each sample consisting of differential samples. If all the samples of dataset are normalized perfectly, a conclusion can be done that the Φt, the variances of every sample of dataset results in as sequentially time aligned.Φtthe zero covariance with the means of action lies in the implications of these beliefs in the idea With the observation, resulting in an expected value of zero that can be setas an independent cross multiplication , combining theΦt that each sample is multiplied across characteristics of the time alignment resulting in an expectation of value zero.The above illustration allows us to represent the covariance matrix in another form. Let A=[Φ1,Φ2,…ΦM], be the covariance matrix, the expression in Eq.(2) can be written as = ( ) (3) As the factor 1/M affects only the scaling eigenvector, we can leave the calculation resulting in the scaling factor = ( ) (4) Given a covariance matrix C, a computation to the optimal set estimation for Eigen dataset variations between the dataset samples and to characterize a set of eigenvectors and eigenvalues. Consider an eigenvector of C satisfying the condition, given as, = (5) = (6) The eigenvectors are orthogonal and normalized hence
  • 5. The International Journal of Multimedia & Its Applications (IJMA) Vol.8, No.2, April 2016 5 = 1 = 0 ≠ (7) Combining Eq. (2) and (7), Eq. (6) thus become = ∑ ( Γ ) (8) Eqn. (8) eigenvector representing a representative sample data set corresponding to the variance shows the eigenvalue. Eigenvectors with the largest eigenvalues vector as the basis, the major vectors that express the greatest variance are obtained by selecting. PCA dimensionality reduction algorithm applied to only the internal changes in that particular sample obtained by considering the feature set reduces the dimensions. However, other frames in a sequence of inter-class features forms that are not considered. 3. MULTI-LINEAR KERNEL (MLK) CODING This coding is processed in two phases, 1) Training and 2) testing phase. The proposed multimedia retrieval system consists of two phases. In the training phase, the training processis developed for database which facilitates the updating of various multimedia featuresextracted from different video samples. In the test phase, the test video sample is processed for feature extraction and the SVM classifier using the training features are extracted from the database are compared with the features, matching the result of the test video multimedia feature set taken as multimedia features. The Block diagram of the proposed work is as shown below: Fig.2 Block diagram of proposed approach The Multimedia retrieval system proposed is divided into four operational phases: I) Pre-processing 2) feature extraction using 2D - Gabor filter and histogram 3) Feature vector dimensionality reduction MLK-DR. 4) multimedia retrieval system using SVM classifier. The proposed approach is a multimedia video retrieval system that is used to estimate multimedia information’s from the dataset. Multimedia video from database sources are acquired and the retrieval system process the video input which is then further processed as multimedia video feature set. For a given multimedia video sample using the histogram feature to estimate the feature estimate approach and multi-linear dimension reduction (ML-DR) is used in association with a 2D-Gabor filter. The Input multimedia video preprocessing phase, process on the input video data read from the database and process through the images. The samples are then resized,
After pre-processing, the multimedia video goes through the feature extraction process. At this stage, the Gabor filter is applied over all orientations of interest so as to capture the possible variations; here the Gabor filter is applied at eight orientations and a histogram feature set is derived from the responses. In the next step, the extracted features are passed through the multi-linear approach to reduce the dimensionality of the feature vectors. An SVM classifier then uses these feature vectors together with the database to compare and match the classes for classifying the multimedia information. In the multi-linear dimensionality reduction (ML-DR) approach, the features are transformed into a multi-linear subspace that extracts the multi-dimensional structure of the database features. Whereas conventional reduction expands along a single linear dimension, ML-DR operates directly in two-mode processing, so the data are handled with respect to multiple modes at once. Video filtering using the Gabor filter followed by ML-DR yields a dimensionally reduced multimedia feature set derived from the projection matrix. Figure 3(a) and (b) show pictorial representations of conventional PCA and of multi-linear dimension reduction coding.

Fig. 3. (a) Principal component analysis (b) Multi-linear dimension reduction

The proposed algorithm for the system is as follows (a sketch of the Gabor and histogram feature extraction of Step 5 is given after the listing).

Algorithm
Input: multimedia database, multimedia video test sample
Output: the action class for the multimedia sample
Step 1: Multimedia video data is read from the database.
Step 2: The video is resized to uniform dimensions.
Step 3: Colour video is converted to gray-scale video.
Step 4: The gray-scale multimedia video is cropped to the region of interest.
Step 5: 2D Gabor filters and the histogram method are applied to the cropped video to extract the multimedia features.
Step 6: Using the extracted features, ML-DR is applied to reduce the dimensionality.
Step 7: The training features available in the database are used for classification with the SVM classifier.
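The sketch below illustrates Step 5, assuming scikit-image's Gabor filter; the spatial frequency, the number of histogram bins and the normalisation are illustrative choices rather than values taken from the paper.

```python
import numpy as np
from skimage.filters import gabor

def gabor_histogram_features(gray_frame, n_orientations=8, frequency=0.25, bins=32):
    """Filter a gray frame with Gabor filters at eight orientations and
    describe each response by an intensity histogram."""
    features = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations                  # evenly spaced orientations
        real, imag = gabor(gray_frame, frequency=frequency, theta=theta)
        magnitude = np.hypot(real, imag)                    # Gabor response magnitude
        hist, _ = np.histogram(magnitude, bins=bins)        # histogram feature per orientation
        features.append(hist / max(hist.sum(), 1))          # normalised histogram
    return np.concatenate(features)                         # feature vector for the frame
```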
PCA operates in a one-dimensional mode, whereas ML-DR works in multi-mode operation. For a given feature space of a single class, PCA evaluates the principal components of that class individually, while the features of the remaining classes are not considered. In ML-DR, the dataset is processed across all of its dimension variations. The pseudo code for the ML-DR coding, together with an illustrative sketch, is given at the end of this section.

In machine learning, learning algorithms analyse the data and recognise patterns with learning networks; classification and regression analysis are used in the supervised learning model. Given a training set in which each example is marked as belonging to one of two categories, the SVM learning algorithm builds a model that assigns a new instance to the class with the maximum correlation value; it is a non-probabilistic binary linear classifier. The SVM model represents the examples as points in space, mapped so that the examples of the distinct categories are separated by as wide a gap as possible; a new instance is then mapped into the same space and its class is predicted according to the side of the boundary on which it falls. In addition to performing linear classification, the SVM can perform non-linear classification by using a kernel that maps the information into a high-dimensional feature space. More formally, the SVM constructs the separating boundary in a high-dimensional (possibly infinite-dimensional) space and, intuitively, a good separation is achieved by the boundary with the largest distance to the nearest training features of any class, since a larger margin generally leads to a lower generalisation error of the classifier. The original problem may be stated in a finite-dimensional space in which the classes are not linearly separable; for easier formulation, the original data space is therefore mapped into a higher-dimensional space. The SVM uses a kernel function K(x, y), defined through dot products of the original space, evaluated on the feature values to make a decision. The decision boundary in the high-dimensional space is the set of points whose dot product with a fixed vector of that space is constant. The projection plane is parameterised by coefficients α_i that define a linear combination of the feature vectors of the video samples stored in the database. With this choice of projection plane, the features of x are mapped onto the projection plane defined by the relation

\sum_{i} \alpha_i K(x_i, x) = \text{constant}    (9)

Note that, as K(x_i, x) becomes smaller, each term in the sum measures the degree of proximity of the test point x to the corresponding database point x_i.
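As an illustration of the decision rule in Eq. (9), the sketch below trains an RBF-kernel SVM with scikit-learn on the reduced feature vectors; the kernel choice, its parameters and the function name are assumptions, since the paper does not specify them.

```python
from sklearn.svm import SVC

def classify_action(train_feats, train_labels, test_feat):
    """Kernel-SVM classification of a reduced multimedia feature vector.

    train_feats : (N, d) reduced feature vectors from the training database
    train_labels: action class label of each training sample
    test_feat   : (d,) reduced feature vector of the query video
    """
    clf = SVC(kernel='rbf', gamma='scale')   # K(x_i, x) = exp(-gamma * ||x_i - x||^2)
    clf.fit(train_feats, train_labels)       # learns the support vectors and their weights alpha_i
    return clf.predict(test_feat.reshape(1, -1))[0]
```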
Pseudo Code
Input: a set of features with larger dimensions
Output: features with smaller dimensions
Step 1: A feature set of M × N-dimensional space is taken.
Step 2: Evaluate the mean along all N dimensions.
Step 3: Create a new matrix by subtracting the mean from each value of the dataset.
Step 4: Evaluate the covariance matrix.
Step 5: Evaluate the histogram vectors and their respective histogram values.
Step 6: Sort the values in ascending/descending order and choose the k strongest histogram values from the n × k dimensional set.
Step 7: The same procedure is carried out for each class feature set.
Step 8: The intra-class and inter-class histogram values are then combined to form the new projection matrix.
Step 9: The original values are transformed into the new subspace by multiplying them with the reduced projection matrix.
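A compact sketch of this pseudo code is given below. It is only one plausible reading: the "histogram vectors/values" of Steps 5–6 are interpreted as the basis vectors and their significance values obtained from the covariance matrix, the per-class combination of Steps 7–8 is collapsed into a single decomposition, and all identifiers are assumed names rather than the authors' code.

```python
import numpy as np

def reduce_dimensions(features, k):
    """Project M feature vectors of dimension N onto a k-dimensional subspace.

    features : (M, N) matrix, one N-dimensional feature vector per sample (Step 1)
    k        : number of reduced dimensions to keep
    """
    mean = features.mean(axis=0)             # Step 2: mean along all N dimensions
    centred = features - mean                # Step 3: subtract the mean from each value
    cov = np.cov(centred, rowvar=False)      # Step 4: covariance matrix (N x N)
    vals, vecs = np.linalg.eigh(cov)         # Step 5: basis vectors and their significance values
    order = np.argsort(vals)[::-1][:k]       # Step 6: sort and keep the k strongest values
    projection = vecs[:, order]              # Steps 7-8 (simplified): projection matrix (N x k)
    return centred @ projection, projection  # Step 9: transform to the new subspace
```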
4. EXPERIMENTAL RESULTS

The proposed system is developed with Matlab tools and is tested on the Weizmann dataset [7]. The simulation is carried out to evaluate the performance of the proposed approach, and a comparative analysis is made between PCA-based feature dimension reduction and the proposed MLK-DR. For the Weizmann dataset, the features are extracted and the histogram is computed as MI-HIST [21]. Figure 4 illustrates the test dataset.

Fig. 4. Dataset samples: (a) Bending (B), (b) Jumping (J), (c) Running (R) and (d) Walking (W)

The samples are captured at a resolution of 180×144 pixels using a static camera with a homogeneous outdoor background. The processing of a test sample containing a running action is illustrated in Fig. 5.

Fig. 5. Test sample (original sample) containing a running action

The features obtained using MI-HIST [21] are presented in Table 1.
Table 1. Comparative analysis of HoG [22], HIST [23] and the proposed MI-HIST [21] for the running sample

Observation                        HoG [22]    HIST [23]    MI-HIST [21]
Original sample size               478720      478720       478720
Redundant coefficients             436582      457581       464260
Features for motion components     42138       21139        14460

The MI-HIST histogram features are used for dimensionality reduction, where PCA, LDA and MLK-DR are applied over them. In the PCA-based dimensionality reduction technique, the extracted features are mean-normalised and the K principal features are selected. On the selected feature count, the inter-class correlation is used and LDA coding is applied to obtain the minimum feature count. However, PCA and LDA carry out the dimensionality reduction process through two-dimensional coding only. To further reduce the feature dimensions, MLK-DR is applied; the resulting feature counts of all three methods for the four different action models are given in Tables 5-8. Similarly, for all four classes, MLK-DR is applied to the databases created for each category, and a generalised feature set is given in Table 2.

Table 2. Dimensionality-reduced feature set of the total database, where F_ij denotes the features of the i-th class and j-th sample

During testing, the feature set of the query sample is selected and reduced with the proposed approach, and a reduction to 4780 features is observed. To evaluate the performance, the following parameters are used:

Accuracy = \frac{TP + TN}{TP + TN + FP + FN}    (10)

where
TP = true positives (correctly identified)
FP = false positives (incorrectly identified)
TN = true negatives (correctly rejected)
FN = false negatives (incorrectly rejected)

For the simulation, five subjects from each of the four categories, i.e. 20 samples in total, are used for training. During the testing process, the sample is processed, and the extracted query histogram features are passed to the SVM classifier. The classification results obtained with the SVM classifier are described in Table 3.
Table 3. Classification results

From Table 3, the confusion matrix can be formed as shown in Fig. 6.

Fig. 6. Confusion matrix

From the confusion matrix, the accuracy can be calculated as

Accuracy = \frac{4 + 10}{4 + 1 + 5 + 10} \times 100\% = 70\%

The suggested approach is further evaluated using the parameters of sensitivity, specificity, recall, precision and F-measure. The mathematical expressions used to compute these measures are defined as follows. The sensitivity is measured as the ratio of true positives (TP) to the sum of true positives and false negatives (FN):

Sensitivity = \frac{TP}{TP + FN}    (11)

The specificity is measured as the ratio of true negatives (TN) to the sum of true negatives and false positives (FP):

Specificity = \frac{TN}{TN + FP}    (12)

Recall is the ratio of TP to the sum of TP and FN, while precision is the ratio of TP to the sum of TP and FP:

Recall = \frac{TP}{TP + FN}    (13)

Precision = \frac{TP}{TP + FP}    (14)

The F-measure is the combined measure of precision and recall, also called the balanced F-score:

F\text{-}measure = \frac{2 \cdot Precision \cdot Recall}{Precision + Recall}    (15)
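The measures of Eqs. (10)–(15) follow directly from the confusion-matrix counts; the small sketch below computes them. The function name is an assumption, and the counts would be read from the confusion matrix of Fig. 6.

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Compute the measures defined in Eqs. (10)-(15) from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)        # Eq. (10)
    sensitivity = tp / (tp + fn)                         # Eq. (11), identical to recall, Eq. (13)
    specificity = tn / (tn + fp)                         # Eq. (12)
    precision   = tp / (tp + fp)                         # Eq. (14)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)  # Eq. (15)
    return accuracy, sensitivity, specificity, precision, f_measure
```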
Table 4. Parametric evaluation of the developed system for processing efficiency

The retrieval observations for the different test actions in the Weizmann dataset are reported through the feature count and the processing overhead. For the obtained features, the overhead is measured as

Overhead = \frac{F_{org} - F_{dec}}{F_{org}}    (16)

where F_{org} is the number of original features and F_{dec} is the number of decimated (reduced) features; a small sketch of this measure is given below.
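The helper below is a direct transcription of Eq. (16), expressed as a percentage; the function name is an assumption.

```python
def feature_overhead(f_org, f_dec):
    """Overhead of Eq. (16): relative reduction from f_org original to f_dec decimated features."""
    return 100.0 * (f_org - f_dec) / f_org
```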
The decimated features and the processing overhead obtained for the running sample with the proposed approach and with the conventional approaches are shown in Table 5.

Table 5. Feature count and overhead for the running sample

Approach    F_org     F_dec    Overhead
PCA         14460     6140     73.80%
LDA         14460     5380     59.25%
MLK-DR      14460     4780     49.38%

Fig. 7. Feature count for the running sample

Fig. 7 shows the feature count of the PCA and LDA techniques in comparison with the proposed MLK-DR approach for the running sample. Compared with PCA and LDA, the retrieval accuracy of MLK-DR is improved while the feature count, and with it the computational time, is reduced. The retrieval accuracy and the overhead outlined in Table 5 are represented in Fig. 8, which shows the overhead of the proposed MLK-DR in comparison with PCA and LDA; the proposed approach is observed to have 23% less overhead.

Fig. 8. Overhead for the running sample

Table 6. Feature count and overhead for the walking sample

Approach    F_org     F_dec     Overhead
PCA         37794     15600     70.29%
LDA         37794     14560     62.71%
MLK-DR      37794     12748     50.20%

Fig. 9. Feature count for the walking sample

Fig. 9 shows the feature count of the PCA and LDA techniques in comparison with the proposed MLK-DR for the walking sample. Compared with PCA and LDA, the retrieval accuracy of MLK-DR is improved while the feature count and the computational time are reduced; compared with PCA, MLK-DR uses 3252 fewer features.

Fig. 10. Overhead for the walking sample
Fig. 10 illustrates the overhead details of the proposed MLK-DR for the walking sample. Compared with PCA and LDA, the proposed approach has reduced overhead: 20% less than PCA and 10% less than LDA.

Table 7. Feature count and overhead for the jumping sample

Approach    F_org     F_dec    Overhead
PCA         20400     8500     71.43%
LDA         20400     7520     56.75%
MLK-DR      20400     6250     47.38%

Fig. 11. Feature count for the jumping sample

Fig. 11 illustrates the feature count of the jumping sample for the proposed MLK-DR along with the earlier PCA and LDA techniques. Compared with PCA and LDA, MLK-DR has a reduced feature count, which lowers the computational time while retaining the retrieval accuracy.

Fig. 12. Overhead for the jumping sample

Fig. 12 illustrates the overhead details of the proposed MLK-DR for the jumping sample. Compared with PCA and LDA, the proposed approach has reduced overhead: 24% less than PCA and 15% less than LDA.

Table 8. Feature count and overhead for the bending sample

Approach    F_org     F_dec    Overhead
PCA         18560     7150     62.56%
LDA         18560     6320     52.35%
MLK-DR      18560     5870     45.28%
Fig. 13. Feature count for the bending sample

The reduced feature count for the bending sample using MLK-DR, PCA and LDA is shown in Fig. 13. The proposed approach has a lower feature count than the earlier approaches.

Fig. 14. Overhead for the bending sample

The overhead of the proposed MLK-DR is lower than that of PCA and LDA for the bending sample; the details are represented in Fig. 14. Compared with the earlier approaches, MLK-DR has 17% less overhead.

5. CONCLUSION

The proposed approach derives all possible variations of the given frame details. By applying the histogram over different orientations, features for the related frames are derived; in each orientation, only a few features are of a dominating nature. In the suggested process, the histogram is applied to the video sample and the dominant feature coefficients are derived. In the proposed approach, the histogram features are thus extracted only after obtaining all possible variations, instead of being extracted directly from the multimedia content as in the conventional approach. The proposed approach then applies the multi-linear dimensionality reduction method, in which the dimension reduction is processed over the intra-class set while also considering the inter-class features of the dataset. Whereas the traditional approach performs the reduction considering only the intra-class features, the suggested approach minimises the dimensions with respect to the inter-class relations as well. This gives an optimal feature estimation in multiple directions, resulting in a higher dimension reduction.
6. REFERENCES

[1] Paul V., Jones M.J., "Robust Real-Time Pattern Detection," International Journal of Computer Vision, Vol. 57, pp. 137-154, 2004.
[2] Geng X., Zhou Z.-H., Zhang Y., Li G., Dai H., "Learning from multimedia representation patterns for automatic age recognition," in ACM Conf. on Multimedia, pp. 307-316, 2006.
[3] Guodong Guo, Guowang Mu, Yun Fu, Charles Dyer, "A Study on Automatic Age Estimation using a Large Database," Computer Vision, 12th International Conference, pp. 1986-1991, IEEE, 2009.
[4] Guo G., Fu Y., Dyer C.R., Huang T.S., "Video-Based Human Pattern Recognition by Manifold Learning and Locally Adjusted Robust Regression," IEEE Trans. on Video Processing, Vol. 17, pp. 1178-1188, 2008.
[5] Guo G., Fu Y., Huang T.S. and Dyer C.R., "Locally Adjusted Robust Regression for Human Age Recognition," IEEE Workshop on Applications of Computer Vision, pp. 1-6, 2008.
[6] Matthew Cooper, Ting Liu, and Eleanor Rieffel, "Video Segmentation via Temporal Pattern Classification," IEEE Transactions on Multimedia, Vol. 9, pp. 610-618, 2007.
[7] Asuman Günay and Vasif V. Nabiyev, "Age Estimation Based on AAM and 2D-DCT Features of Facial Images," International Journal of Advanced Computer Science and Applications, Vol. 6, No. 2, 2015.
[8] Hiroyuki Takeda, Sina Farsiu, and Peyman Milanfar, "Kernel Regression for Image Processing and Reconstruction," IEEE Transactions on Image Processing, Vol. 16, No. 2, 2007.
[9] Guodong Guo, Guowang Mu, Yun Fu, Thomas S. Huang, "Human Age Estimation Using Bio-inspired Features," Conference on Computer Vision and Pattern Recognition, pp. 112-119, 2009.
[10] Serre T., Wolf L., Bileschi S., Riesenhuber M. and Poggio T., "Robust Object Recognition with Cortex-Like Mechanisms," IEEE Trans. on PAMI, 29(3): 411-426, 2007.
[11] Feng Gao, Haizhou Ai, "Pattern Classification on Consumer Videos with Gabor Feature and Fuzzy LDA Method," Advances in Biometrics, Third International Conference, ICB, Alghero, Italy, Proceedings, pp. 132-141, 2009.
[12] Andreas Lanitis, "Multimedia Biometric Templates and Representation: Problems and Challenges," Artificial Intelligence (AIAI), 2009.
[13] Ramesha K. et al., "Feature Extraction based Pattern Recognition, Gender and Pattern Classification," International Journal on Computer Science and Engineering, Vol. 02, pp. 14-23, 2010.
[14] Stephen Reder, Kathryn Harris, Kristen Setzler, "The Multimedia Adult Learner Corpus," TESOL Quarterly, Vol. 37, pp. 546-557, 2003.
[15] Pettersson Olle, "Implementing LVQ for Pattern Classification," Master of Science Thesis, 2007.
[16] Petra Grd, "Introduction to Human Age Estimation Using Face Images," Faculty of Organization and Informatics, 2013.
[17] Tudor Barbu, "An Automatic Unsupervised Pattern Recognition Approach," Proceedings of the Romanian Academy, Series A, Vol. 7, 2006.
[18] Guodong Guo, Yun Fu, Thomas S. Huang, Charles R. Dyer, "Locally Adjusted Robust Regression for Human Age Estimation," IEEE Workshop on Computer Vision, pp. 1-8, 2008.
[19] Sheng Huang, Dan Yang, Dong Yang, Ahmed Elgammal, "Collaborative Discriminant Locality Preserving Projections With its Application to Face Recognition," Computer Vision and Pattern Recognition, 2014.
[20] Jinwei Wang, Xirong Ma, Jizhou Sun, Ziping Zhao and Yuanping Zhu, "Puzzlement Detection from Facial Expression Using Active Appearance Models and Support Vector Machines," International Journal of Signal Processing, Image Processing and Pattern Recognition, Vol. 7, pp. 349-360, 2014.
[21] Vinoda Reddy, P. Suresh Varma, A. Govardhan, "Recurrent Energy Coding For Content Based Multimedia Retrieval System," International Journal of Multimedia and User Design & User Experience, Vol. 24, 2015.
[22] A. Kläser, M. Marszałek, and C. Schmid, "A spatio-temporal descriptor based on 3-D-gradients," Brit. Mach. Vision Conf., 2008, pp. 995-1004.
[23] Ling Shao, Simon Jones, and Xuelong Li, "Efficient Search and Localization of Human Actions in Video Databases," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 24, No. 3, March 2014, pp. 504-512.
Authors

Dr. A. Govardhan is presently the Principal at JNTUH College of Engineering Hyderabad and Executive Council Member, Jawaharlal Nehru Technological University Hyderabad (JNTUH), India. He did his B.E. (CSE) from Osmania University College of Engineering, Hyderabad in 1992, M.Tech from Jawaharlal Nehru University (JNU), New Delhi in 1994 and Ph.D from Jawaharlal Nehru Technological University, Hyderabad in 2003. He has served in and held several academic and administrative positions, including Director of Evaluation, Principal, Head of the Department, Chairman and Member of Boards of Studies, and Students' Advisor. He is the recipient of 26 international, national and state awards, including the A.P. State Government Best Teacher Award, Bharat Seva Ratna Puraskar, CSI Chapter Patron Award, Bharat Jyoti Award, International Intellectual Development Award and Mother Teresa Award, for outstanding services, achievements, contributions, meritorious services, outstanding performance and remarkable role in the field of education and service to the nation. He is a Chairman and Member on several Boards of Studies of various universities. He was the Chairman of the CSI Hyderabad Chapter during 2013-2014. He is a Member on the Editorial Boards of eleven international journals. He is a member of several advisory and academic boards and a committee member for several international and national conferences, including ASUC-2014, Dubai (UAE), ICT 2014, Singapore, CMIT-2014, Zurich (Switzerland), PAKDD 2010, IIIT Hyderabad (India) and IKE 2010, Las Vegas, Nevada (USA). He has 2 monographs by Lambert Academic Publishing, published in the USA. He has guided 56 Ph.D theses and 135 M.Tech projects, and he has published 350 research papers in international/national journals and conferences including IEEE, ACM, Springer, Elsevier and Inderscience. He has organized 1 international conference, 20 workshops and 1 refresher course. He has delivered more than 100 keynote addresses and invited lectures. He has 21 years of teaching and research experience. He served as Co-Convener for EAMCET 2009 and Chief Regional Coordinator for EAMCET 2010 and EAMCET 2011. He also served as Vice-Chairman, CSI Hyderabad Chapter and IT Professional Forum, A.P. He is a member of several professional and service-oriented bodies including ACM, IEEE and CSI. His areas of research include Databases, Data Warehousing & Mining and Information Retrieval Systems.

Dr. P. Suresh Varma is a Professor of Computer Science and Engineering and Dean, Faculty of Engineering and Technology, Adikavi Nannaya University. Presently he is the Principal, University College of Engineering, and was the Founder Dean, College Development Council, from 14th June 2012 to 13th June 2015 at Adikavi Nannaya University. Professor Varma has been engaged in teaching, research and research supervision at Adikavi Nannaya University, Rajahmundry since October 2008. He secured his M.Tech in Computer Science and Technology from Andhra University in 1999 and his Ph.D in Computer Science and Engineering from Acharya Nagarjuna University in 2008. He has supervised the research work of a number of Ph.D and M.Phil scholars and has published and lectured extensively on Communication Networks, Data Mining, Cloud Computing, Big Data and Image Processing. In 2010, the Government of Andhra Pradesh honoured him with the Best Teacher Award on the occasion of Teachers' Day. He has published over 125 papers in journals and conferences.
Six Ph.Ds have been awarded and four submitted under his guidance, and over 100 M.Tech/MCA theses have been supervised.

Vinoda Reddy is Associate Professor & Head of the Computer Science Department, Shetty Institute of Technology, Kalaburgi. She completed her Bachelor of Engineering degree at Rural Engineering College, Bhalki, from Vishveshwariya Technological University, Belagavi, Karnataka (1996), her M.Tech degree at the National Institute of Technology, Surathkal, a Deemed University (2003), and is presently pursuing her Ph.D at Jawaharlal Nehru Technological University, Kukatpally, Hyderabad. She has a total of 19 years of teaching experience. She has published papers in international journals and national conferences. Her areas of research include Multimedia Data Mining and Information Retrieval Systems.