International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 04 Issue: 03 | March 2017 www.irjet.net p-ISSN: 2395-0072
Feature Subset Selection for High Dimensional Data Using Clustering
Techniques
Nilam Prakash Sonawale
Student of M.E. Computer Engineering, Bharati Vidyapeeth College of Engineering, Mumbai University, Navi
Mumbai, Maharashtra, India.
Prof. B. W. Balkhande
Assistant Professor, Dept. of Computer Engineering, Bharati Vidyapeeth College of Engineering, Mumbai
University, Navi Mumbai, Maharashtra, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Data mining is the process of analyzing data
from different perspectives and summarizing it into useful
information (information that can be used to increase
revenue, cut costs, or both). A database contains a large volume
of attributes or dimensions, which are further classified as
low-dimensional data and high-dimensional data. When
dimensionality increases, data in the irrelevant dimensions
may produce noise; to deal with this problem it is crucial to
have a feature selection mechanism that can find a subset of
features that meets the requirement and achieves high
relevance. The proposed algorithm FAST is evaluated in this
project. The FAST algorithm has three steps: irrelevant features
are removed; features are divided into clusters; and the most
representative feature is selected from each cluster [8]. The
clustering step can be performed with the DBSCAN (Density-Based
Spatial Clustering of Applications with Noise) algorithm, which can run
in a distributed environment using MapReduce and
Hadoop. The final result is a small number of
selected discriminative features.
Key Words: Data Mining, Feature subset selection,
FAST, DBSCAN, SU, Eps, MinPts
1. INTRODUCTION
Data mining is an interdisciplinary subfield of computer
science; it is the computational process of discovering
patterns in large data sets involving methods at the
intersection of artificial intelligence, machine learning,
statistics and database systems [4]. The overall goal of the
data mining process is to abstract information from a
dataset and transform it into an understandable structure
for further use. Cluster analysis or clustering is the task of
grouping a set of objects in such a way that objects in the
same group (called a cluster) are more similar (in some
sense or another) to each other than to those in other
groups (clusters). Clustering is an example of unsupervised
learning because there is no predefined class; the quality of a
clustering can be measured by high intra-cluster similarity and
low inter-cluster similarity [5]. The major clustering methods
can be categorized as follows.
1.1 Partitioning method:
These methods classify the data into k groups, which together satisfy the
following requirements: (1) each group must contain at least
one object, and (2) each object must belong to exactly one
group. Examples are the k-means and k-medoids algorithms. The
drawback is that most partitioning methods cluster objects based on
the distance between objects. Such methods can find only
spherical-shaped clusters and encounter difficulty in
discovering clusters of arbitrary shapes.
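As an illustration of the partitioning approach, the following minimal sketch runs k-means on two spherical blobs. scikit-learn and the toy data are assumptions made purely for this example; the paper does not prescribe a library.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two spherical blobs: the kind of structure k-means recovers well.
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),
               rng.normal(3.0, 0.5, (50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])        # cluster assignment of the first few objects
print(km.cluster_centers_)    # the two spherical cluster centers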
1.2 Hierarchical methods:
Hierarchical methods create a hierarchical decomposition of the given set of
data objects. They suffer from the fact that once a step (merge or
split) is done, it can never be undone.
1.3 Density-based methods:
These methods continue growing a given cluster as long as the density
(number of objects or data points) in the "neighborhood"
exceeds some threshold; that is, for each data point within a
given cluster, the neighborhood of a given radius has to
contain at least a minimum number of points [3]. This
method can be used to filter out noise (outliers) and to
discover clusters of arbitrary shapes.
1.4 Grid-based methods:
These methods quantize the object space into a finite
number of cells that form a grid structure. The primary
advantage of this approach is its fast processing time,
which is dependent only on the number of cells in each
dimension of the quantized space and independent of
the number of data objects.
International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395 -0056
Volume: 04 Issue: 03 | March -2017 www.irjet.net p-ISSN: 2395-0072
© 2017, IRJET | Impact Factor value: 5.181 | ISO 9001:2008 Certified Journal | Page 868
1.5 Model-based methods:
These methods hypothesize a model for each of the
clusters and find the best fit of the data to the given
model. Clusters may be located by model-based
algorithm by constructing a density function that
reflects the spatial distribution of the data points.
1.6 Other special methods:
(i) Clustering high-dimensional data, (ii) Constraint-
based clustering
1.7 DBSCAN:
DBSCAN (Density-Based Spatial Clustering of
Applications with Noise) is a density-based clustering
algorithm which can generate any number of clusters
and adapts to the distribution of spatial data [1]. The
purpose of a clustering algorithm is to convert a large
amount of raw data into separate clusters for
better and faster access. The algorithm grows regions
with sufficiently high density into clusters and
discovers clusters of arbitrary shape in spatial
databases with noise. It defines a cluster as a maximal
set of density-connected points. A density-based
cluster is a set of density-connected objects that is
maximal with respect to density-reachability. Every
object not contained in any cluster is considered to be
noise. This method is sensitive to its parameters Eps (ε) and
MinPts, and leaves the user with the responsibility of
selecting parameter values that will lead to the
discovery of acceptable clusters. If a spatial index is
used, the computational complexity of DBSCAN is
O(n log n), where n is the number of database objects.
Otherwise, it is O(n²).
DBSCAN does not need to know the number of classes
to be formed in advance. It can not only find free-form
classes but also identify noise points. In the DBSCAN
algorithm, a class is defined as a maximal collection of
data objects that are density-connected [1]. For each of
the unmarked objects in data set D, select an object P and mark P
as visited. Run a region query for P to determine whether it is a core object. If P
is not a core object, mark it as noise and select
another object that is not yet marked. If P is a core object,
establish a class C for the core object P and use
the objects within P's neighborhood as seed objects for further region queries,
expanding the class C until no new object joins class C, at which point the
clustering process is over. That is, clustering proceeds whenever the number of
objects in the region of the given radius (ε) is not less than the
density threshold (MinPts). Because the density
distribution of the data objects is taken into
account, DBSCAN can mine free-form datasets.
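The region-query and cluster-expansion procedure described above can be sketched directly. The following is a minimal, illustrative implementation, not the paper's code; the toy data and the Eps/MinPts values are assumptions.

import numpy as np

def region_query(X, p, eps):
    """Indices of all points within distance eps of point p (including p)."""
    return np.where(np.linalg.norm(X - X[p], axis=1) <= eps)[0].tolist()

def dbscan(X, eps, min_pts):
    NOISE, UNVISITED = -1, 0
    labels = np.full(len(X), UNVISITED)
    cluster_id = 0
    for p in range(len(X)):
        if labels[p] != UNVISITED:
            continue
        neighbours = region_query(X, p, eps)
        if len(neighbours) < min_pts:          # p is not a core object
            labels[p] = NOISE
            continue
        cluster_id += 1                        # establish a new class C
        labels[p] = cluster_id
        seeds = list(neighbours)
        while seeds:                           # expand C via seed objects
            q = seeds.pop()
            if labels[q] == NOISE:
                labels[q] = cluster_id         # border point reclaimed from noise
            if labels[q] != UNVISITED:
                continue
            labels[q] = cluster_id
            q_neighbours = region_query(X, q, eps)
            if len(q_neighbours) >= min_pts:   # q is also a core object
                seeds.extend(q_neighbours)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 0.3, (40, 2)),
                   rng.normal(4, 0.3, (40, 2)),
                   rng.uniform(-2, 6, (5, 2))])   # a few scattered noise points
    print(dbscan(X, eps=0.6, min_pts=5))          # labels: 1, 2 or -1 (noise)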
1.8 Clustering High-Dimensional Data:
Most clustering methods are designed for clustering
low-dimensional data and encounter challenges when
the dimensionality of the data grows really high. This is
because when the dimensionality increases, usually
only a small number of dimensions are relevant to
certain clusters, but data in the irrelevant dimensions
may produce much noise and mask the real clusters to
be discovered. Moreover, when dimensionality
increases, data usually become increasingly sparse
because the data points are likely located in different
dimensional subspaces. To overcome this difficulty, we
may consider using feature (or attribute)
transformation and feature (or attribute) selection
techniques. Feature transformation methods, such as
principal component analysis and singular value
decomposition, transform the data onto a smaller space
while generally preserving the original relative
distance between objects. They summarize data by
creating linear combinations of the attributes, and may
discover hidden structures in the data. However, such
techniques do not actually remove any of the original
attributes from analysis. This is problematic when
there are a large number of irrelevant attributes. The
irrelevant information may mask the real clusters, even
after transformation. Moreover, the transformed
features (attributes) are often difficult to interpret,
making the clustering results less useful. Thus, feature
transformation is only suited to data sets where most
of the dimensions are relevant to the clustering task.
Unfortunately, real-world data sets tend to have many
highly correlated, or redundant, dimensions. Another
way of tackling the curse of dimensionality is to try to
remove some of the dimensions. Attribute subset
selection (or feature subset selection) is commonly
used for data reduction by removing irrelevant or
redundant dimensions (or attributes). Given a set of
attributes, attribute subset selection finds the subset of
attributes that are most relevant to the data mining
task. Attribute subset selection involves searching
through various attribute subsets and evaluating these
subsets using certain criteria. It is most commonly
performed by supervised learning—the most relevant
set of attributes are found with respect to the given
class labels. It can also be performed by an
unsupervised process, such as entropy analysis, which
is based on the property that entropy tends to be low
for data that contain tight clusters. Other evaluation
functions, such as category utility, may also be used.
Subspace clustering is an extension to attribute subset
selection that has shown its strength at high-
dimensional clustering. It is based on the observation
that different subspaces may contain different,
meaningful clusters. Subspace clustering searches for
groups of clusters within different subspaces of the
same data set. The problem becomes how to find such
subspace clusters effectively and efficiently.
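The contrast between feature transformation and attribute subset selection can be made concrete with a small sketch. scikit-learn, the synthetic data, and the choice of mutual information as the selection score are assumptions made for illustration only.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=200, n_features=50,
                           n_informative=5, n_redundant=10, random_state=0)

# Transformation: new features are linear combinations of all originals.
X_pca = PCA(n_components=5, random_state=0).fit_transform(X)

# Selection: a subset of the original features, easier to interpret.
selector = SelectKBest(mutual_info_classif, k=5).fit(X, y)
X_sel = selector.transform(X)

print(X_pca.shape, X_sel.shape)
print("indices of the retained original features:",
      selector.get_support(indices=True))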
1.9 Constraint-Based Cluster Analysis:
Users often have a clear view of the application
requirements, which they would ideally like to use to
guide the clustering process and influence the
clustering results [7]. Thus, in many applications, it is
desirable to have the clustering process take user
preferences and constraints into consideration.
1.10 Requirements of Clustering in Data Mining
(Advantages):
(i) Scalability
(ii) Ability to deal with different kinds of attributes
(iii) Discovery of clusters with arbitrary shape
(iv) High Dimensionality
(v) Ability to deal with noisy data
(vi) Interpretability
2. LITERATURE SURVEY
A feature selection algorithm can be seen as the
combination of a search technique for proposing new
feature subsets, along with an evaluation measure
which scores the different feature subsets. The
simplest algorithm is to test each possible subset of
features, finding the one which minimizes the error
rate. This is an exhaustive search of the space, and is
computationally intractable for all but the smallest of
feature sets. The choice of evaluation metric heavily
influences the algorithm, and it is these evaluation
metrics which distinguish between the three main
categories of feature selection algorithms: wrappers,
filters and embedded methods.
2.1 Wrapper methods:
These methods use a predictive model to score feature subsets. Each
new subset is used to train a model, which is tested on
a hold-out set. Counting the number of mistakes made
on that hold-out set (the error rate of the model) gives
the score for that subset [3]. As wrapper methods train
a new model for each subset, they are very
computationally intensive, but they usually provide the
best-performing feature set for that particular type of
model.
Figure 1: Wrapper Method
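A minimal sketch of a wrapper is greedy forward selection scored by hold-out accuracy, as described above. The logistic regression model, the synthetic data, and the number of selected features are illustrative assumptions, not choices made in the paper.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=4, random_state=0)
X_tr, X_ho, y_tr, y_ho = train_test_split(X, y, test_size=0.3, random_state=0)

def holdout_score(features):
    """Train on the candidate subset, score on the hold-out set."""
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:, features], y_tr)
    return clf.score(X_ho[:, features], y_ho)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(5):                                   # greedily pick 5 features
    best = max(remaining, key=lambda f: holdout_score(selected + [f]))
    selected.append(best)
    remaining.remove(best)
print("selected features:", selected)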
2.2 Filter methods:
These methods use a proxy measure rather than the error rate to score
a feature subset. This measure is selected to be
fast to compute, while still capturing the
usefulness of the feature set. Common measures
include mutual information, pointwise mutual
information, the Pearson product-moment correlation
coefficient, inter/intra-class distance, and the scores of
significance tests for each class/feature combination.
Filters are usually less computationally intensive than
wrappers, but they produce a feature set which is not
tuned to a specific type of predictive model [3]. This
lack of tuning means a feature set from a filter is more
general than the set from a wrapper, usually giving
lower prediction performance than a wrapper.
However, the feature set doesn't contain the
assumptions of a prediction model, and hence is more
useful for exposing the relationships between the
features. Many filters provide a feature ranking rather
than an explicit best feature subset, and the cutoff point
in the ranking is chosen via cross-validation. Filtering
has also been used as a pre-processing step for
wrapper methods, allowing a wrapper to be used on
bigger problems.
Figure 2: Filter Method
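A filter can be sketched in a few lines by ranking features with a model-independent score such as mutual information. The data and library used here are assumptions made for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=4, random_state=0)
mi = mutual_info_classif(X, y, random_state=0)   # score each feature vs. the class
ranking = np.argsort(mi)[::-1]                   # best-scoring features first
print("top 5 features by mutual information:", ranking[:5])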
2.3 Embedded methods:
These are a catch-all group of techniques which implement
feature selection as part of the model construction
process. The exemplar of this approach is the LASSO method
for constructing a linear model, which penalizes the
regression coefficients with an L1 penalty, shrinking
many of them to zero. Any features which have non-
zero regression coefficients are 'selected' by the LASSO
algorithm. Improvements to the LASSO include Bolasso,
which bootstraps samples, and FeaLect, which scores
all the features based on combinatorial analysis of
regression coefficients. Another popular approach is the
Recursive Feature Elimination algorithm, commonly
used with Support Vector Machines to repeatedly
construct a model and remove features with minimum
weights. These approaches tend to fall between filters
and wrappers in terms of computational complexity.
In statistics, the most popular form of feature
selection is stepwise regression, which is a wrapper
technique. It is a greedy algorithm that adds the
best feature (or deletes the worst feature) at each
round. The main control issue is deciding when to
stop the algorithm. In machine learning, this is typically
done by cross-validation. In statistics, some criterion is
optimized. This leads to the inherent problem of
nesting.
Figure 3: Embedded Method
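A minimal sketch of an embedded method is L1-penalized (LASSO-style) logistic regression, where the non-zero coefficients identify the selected features as a by-product of fitting the model. The regularization strength and the synthetic data are assumptions made for this illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=4, random_state=0)
# L1 penalty drives most coefficients to exactly zero.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])
print("features with non-zero coefficients:", selected)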
3. PROBLEM STATEMENT
The quality of DBSCAN depends on the distance
measure used in the region query (P, ε). The most common distance metric used
is Euclidean distance. For high-dimensional data in particular, this metric can be
rendered almost useless ("the curse of dimensionality"),
making it difficult to find an appropriate value for ε. The
DBSCAN algorithm also fails in the case of varying-density
clusters.
4. PROPOSED SYSTEM
Feature subset selection is viewed as the process of
identifying and removing as many irrelevant and
redundant features as possible. This is because
irrelevant features do not contribute to the
predictive accuracy, and redundant features do not
help in obtaining a better predictor since they
provide mostly information that is already present in other
feature(s). Of the many feature subset selection
algorithms, some can effectively eliminate irrelevant
features but fail to handle redundant features, while
a number of others can eliminate the irrelevant features
while also taking care of the redundant ones [3]. The
proposed FAST algorithm falls under the filter method.
The filter method, in addition to its generality, is a good
choice when the number of features is very large.
Figure 4: System Architecture
Figure 5: DBSCAN Flowchart based on Map Reduce
Figure 6: Flow Diagram
4.1 Subset Selection Algorithm:
Irrelevant features, along with redundant features,
severely affect the accuracy of learning
machines. Thus, feature subset selection should be able to
identify and remove as much of the
irrelevant and redundant information as possible. Moreover,
"good feature subsets contain features highly correlated with
(predictive of) the class, yet uncorrelated with
(not predictive of) one another." Keeping these in mind, we
develop a novel algorithm
which can efficiently and effectively handle both
irrelevant and redundant features, and obtain a good
feature subset.
4.2 Time Complexity:
The major amount of work for Algorithm 1 involves
the computation of SU values for T-Relevance and F-
Correlation, which has linear complexity in terms of the
number of instances in a given data set. The
first part of the algorithm has a linear
time complexity in terms of the number of features m.
Assuming k features are selected as
relevant ones in the first part, when k = 1, only one feature
is chosen.
4.2.1 Removal of Irrelevant Features:
Feature selection is an efficient way of reducing dimensionality, removing
irrelevant data, increasing learning accuracy, and improving
result comprehensibility. If we take a dataset D with m
features F = {F1, F2, ..., Fm} and class C, the task is to identify the
features relevant to the target class. The generality of the
selected features is limited and the computational
complexity is high. Hybrid methods are a combination
of filter and wrapper methods, using a filter method to
reduce the search space that will be considered by the
subsequent wrapper.
4.2.2 T-Relevance, F-Correlation Calculation:
The T-Relevance between a feature and the target concept C, the
correlation F-Correlation between a pair of features, the
feature redundancy F-Redundancy, and the representative
feature R-Feature of a feature cluster can be defined.
According to the above definitions, feature subset selection is
the process that identifies and retains the strong T-
Relevance features and selects R-Features from feature
clusters [7]. The heuristics behind this are that (1) irrelevant
features have weak correlation with the target concept, and (2)
redundant features are grouped in a cluster and a
representative feature can be taken out of the cluster.
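Symmetric uncertainty, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), underlies both T-Relevance (feature versus class) and F-Correlation (feature versus feature). A minimal sketch for discrete (or discretized) features follows; the toy values are illustrative assumptions.

import numpy as np
from collections import Counter

def entropy(values):
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def symmetric_uncertainty(x, y):
    x, y = list(x), list(y)
    h_x, h_y = entropy(x), entropy(y)
    h_xy = entropy(list(zip(x, y)))        # joint entropy H(X, Y)
    mi = h_x + h_y - h_xy                  # mutual information I(X; Y)
    return 0.0 if h_x + h_y == 0 else 2.0 * mi / (h_x + h_y)

# toy example: f1 matches the class exactly, f2 is only weakly related
f1 = [0, 0, 1, 1, 0, 1, 0, 1]
f2 = [0, 1, 0, 1, 0, 1, 0, 1]
c  = [0, 0, 1, 1, 0, 1, 0, 1]
print(symmetric_uncertainty(f1, c), symmetric_uncertainty(f2, c))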
4.2.3 DBSCAN Based on MapReduce:
We can take advantage of the MapReduce programming model
to parallelize the whole process to save clustering time and
resources [2]. The basic idea of the DBSCAN algorithm based on
MapReduce is divided into four steps:
1. The data in the dataset is cut into small blocks of equal
size.
2. The blocks are distributed to the nodes in the cluster,
so that all nodes in the cluster can run their Map function
in parallel to calculate and process those blocks.
(i) Initial Cluster: For any unlabeled object p in the dataset,
use the MapReduce parallel programming model to calculate
the number of objects in its region and determine whether it is a core
object. If it is, p and all the objects in the region of p constitute
an initial class (C), and those objects are marked with the same
cluster identifier (Cid). Conversely, if p is not a core object
and there is no other object in its region, mark p as noise.
Otherwise, detect whether there is a core object q in the
region of the non-core object p. If there is, give the object p and the
objects in the region of q the same cluster identifier.
Repeat until all of the objects in the dataset are identified.
After the process is completed, we obtain a set of initial class clusters and
a noise set.
(ii) Merge Result: Class merging considers those objects
which exist in two or more classes. If there is a shared
core object, merge the two classes; otherwise classify the object
into the nearer class. A new class name is given whenever
different classes are combined. When no object exists in more
than one class, merging of the initial classes is complete.
3. Merge the results of each processor.
4. Output the clustering results.
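A highly simplified, single-machine sketch of this map/merge flow is given below. Python's multiprocessing stands in for Hadoop MapReduce, the overlapping one-dimensional split is an illustrative assumption rather than the paper's partitioning scheme, and the pairwise merge is far less efficient than a production implementation would be.

from multiprocessing import Pool
import numpy as np
from sklearn.cluster import DBSCAN

EPS, MIN_PTS = 0.6, 5

def map_block(args):
    """Map: cluster one block locally; return its clusters and core objects."""
    block_id, idx, pts = args
    model = DBSCAN(eps=EPS, min_samples=MIN_PTS).fit(pts)
    core = {int(i) for i in idx[model.core_sample_indices_]}
    clusters = {}
    for local_i, lbl in enumerate(model.labels_):
        if lbl != -1:
            clusters.setdefault((block_id, int(lbl)), set()).add(int(idx[local_i]))
    return clusters, core

def merge(mapped):
    """Merge: union local clusters that share a core object."""
    clusters = {k: v for part, _ in mapped for k, v in part.items()}
    cores = set().union(*(c for _, c in mapped))
    keys = list(clusters)
    merged = True
    while merged:
        merged = False
        for i in range(len(keys)):
            for j in range(i + 1, len(keys)):
                a, b = keys[i], keys[j]
                if a in clusters and b in clusters and clusters[a] & clusters[b] & cores:
                    clusters[a] |= clusters.pop(b)
                    merged = True
    return list(clusters.values())

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = np.sort(np.vstack([rng.normal(0, 0.2, (60, 1)),
                           rng.normal(3, 0.2, (60, 1))]), axis=0)
    ids = np.arange(len(X))
    # two equal-sized blocks with a small overlap so boundary clusters can merge
    blocks = [(0, ids[:70], X[:70]), (1, ids[60:], X[60:])]
    with Pool(2) as pool:
        mapped = pool.map(map_block, blocks)
    print([len(c) for c in merge(mapped)])   # sizes of the merged clusters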
5. FAST ALGORITHM
Inputs: D(F1, F2, ..., FM, C) - the given data set;
𝜃 - the T-Relevance threshold; Eps - the neighborhood radius;
MinPts - the density threshold.
Output: S - the selected feature subset.
//==== Part 1 : Irrelevant Feature Removal ====
1 for i = 1 to m do
2 T-Relevance = SU(Fi, C)
3 if T-Relevance > 𝜃 then
4 S = S ∪ {Fi};
//==== Part 2 : Density Based Clustering ====
5 G = NULL; //G is a complete graph
6 for each pair of features {F'i, F'j} ⊂ S do
7 F-Correlation = SU(F'i, F'j)
8 Add F'i and/or F'j to G with F-Correlation as the
weight of the corresponding edge;
//== Part 3: Representative Feature Selection ====
9 DBSCAN(D, Eps, MinPts)
10 Begin
11 init C = 0; //the number of classes is initialized to 0
12 for each unvisited point p in D
13 mark p as visited;
14 N = getNeighbours(p, Eps);
15 if sizeOf(N) < MinPts then
16 mark p as Noise; //p has too few neighbours
17 else
18 C = next cluster; //create a new class C
19 ExpandCluster(p, N, C, Eps, MinPts); //expand class C
20 end if
21 end for
22 End
23 for each cluster
24 add feature with max T-Relevance to final subset
25 end for
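A hedged end-to-end sketch of the three parts above follows: (1) drop features whose SU with the class falls below 𝜃, (2) compute pairwise F-Correlation, and (3) cluster the features with DBSCAN and keep the most T-Relevant feature per cluster. Treating 1 − SU as the DBSCAN distance and all parameter values are assumptions made for illustration; the paper does not fix these details. The SU helper repeats the earlier sketch so this listing stays self-contained.

import numpy as np
from collections import Counter
from sklearn.cluster import DBSCAN

def entropy(values):
    p = np.array(list(Counter(values).values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def su(x, y):
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))
    return 0.0 if hx + hy == 0 else 2.0 * (hx + hy - hxy) / (hx + hy)

def fast(X, c, theta=0.1, eps=0.4, min_pts=2):
    n_features = X.shape[1]
    # Part 1: irrelevant feature removal by T-Relevance threshold
    t_rel = {f: su(X[:, f], c) for f in range(n_features)}
    relevant = [f for f in range(n_features) if t_rel[f] > theta]
    # Part 2: F-Correlation between every pair of remaining features
    k = len(relevant)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            d = 1.0 - su(X[:, relevant[i]], X[:, relevant[j]])
            dist[i, j] = dist[j, i] = d
    # Part 3: cluster the features; keep the most T-Relevant feature per cluster
    labels = DBSCAN(eps=eps, min_samples=min_pts, metric="precomputed").fit_predict(dist)
    subset = []
    for lbl in set(labels):
        members = [relevant[i] for i in range(k) if labels[i] == lbl]
        if lbl == -1:
            subset.extend(members)          # unclustered features kept as-is
        else:
            subset.append(max(members, key=lambda f: t_rel[f]))
    return sorted(subset)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c = rng.integers(0, 2, 200)
    f0 = c.copy()                            # relevant feature
    f1 = c ^ (rng.random(200) < 0.05)        # noisy, redundant copy of f0
    f2 = rng.integers(0, 2, 200)             # irrelevant feature
    X = np.column_stack([f0, f1, f2])
    # typically prints [0]: f1 merges into f0's cluster, f2 is dropped as irrelevant
    print(fast(X, c))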
6. CONCLUSIONS
The DBSCAN algorithm based on MapReduce divides tasks
in such a way that allows their execution in parallel. Parallel
processing allows multiple processors to take on these
divided tasks, such that they run entire programs in less
time. Hadoop's MapReduce programming allows for the
processing of such large volumes of data in a completely safe
and cost-effective manner.
REFERENCES
[1] M. Noticewala and D. Vaghela, "MR-IDBSCAN: Efficient Parallel Incremental DBSCAN Algorithm using MapReduce," CSE Department, Parul Institute of Technology, Surat/Vadodara, India.
[2] X. Fu, S. Hu and Y. Wang, "Research of Parallel DBSCAN Clustering Algorithm Based on MapReduce," School of Computer, Guangdong University of Technology, P.R. China.
[3] B. Kumari and T. Swarnkar, "Filter versus Wrapper Feature Subset Selection in Large Dimensionality Micro array: A Review," Department of Computer Science / Department of Computer Applications, ITER, SOA University, Orissa, India.
[4] "Data Mining - Cluster Analysis," http://www.tutorialspoint.com/data_mining/dm_cluster_analysis.html, tutorialspoint.com.
[5] Wikipedia, "Cluster analysis."
[6] K. Latha, B. Bhargavi, C. Dharani and R. Rajaram, "A Dynamic Feature Selection Method for Document Ranking with Relevance Feedback Approach," Department of Computer Science and Engineering, Anna University of Technology, Tiruchirappalli, and Department of Information Technology, Thiagarajar College of Engineering, Madurai, Tamil Nadu, India.
[7] A. Kunja and Ch. Heyma Raju, "Bayes Classifier for Different Data Clustering-Based Extra Selection Methods," Department of Computer Science and Engineering, Gitam University, Visakhapatnam, AP, India.
[8] T. Liu, S. Liu, Z. Chen and W.-Y. Ma, "An Evaluation on Feature Selection for Text Clustering," Nankai University, Peking University, and Microsoft Research Asia, P.R. China.
BIOGRAPHIES
Nilam Prakash Sonawale is a student of M.E. Computer
Engineering, Bharati Vidyapeeth College of
Engineering, Mumbai University, Navi Mumbai,
Maharashtra, India, and a Senior Consultant at Capgemini Pvt. Ltd.
Her areas of interest are Data Warehousing and Data Mining.
Prof. B.W. Balkhande is a Professor in the Department
of Computer Engineering, Bharati Vidyapeeth College
of Engineering, Mumbai University, Navi Mumbai,
Maharashtra, India. His areas of interest are Data Mining,
algorithms, and programming.