IDL - International Digital Library of Technology & Research
Volume 1, Issue 5, May 2017. Available at: www.dbpublications.org
International e-Journal for Technology and Research, 2017. Copyright@IDL-2017
Web Oriented FIM for large scale dataset using
Hadoop
Mrs. Supriya C
PG Scholar
Department of Computer Science and Engineering
C.M.R.I.T, Bangalore, Karnataka, India
supriyakuppur@gmail.com
Abstract: In large-scale datasets, existing parallel frequent itemset mining algorithms balance the load by distributing the enormous data across a collection of computers. However, these existing mining algorithms suffer from performance problems [1]. To address this, we introduce a data-partitioning approach based on the MapReduce programming model. In the proposed system we also introduce a frequent itemset ultrametric tree (FIU-tree) in place of conventional FP-trees. Experimental results show that eliminating redundant transactions improves performance by reducing the computing load.
Keywords: frequent itemset mining, MapReduce, data partitioning, parallel computing, load balancing
1 INTRODUCTION
Big data is an emerging technology in the modern world. It refers to amounts of data too large to process with traditional data-processing techniques or software. The major challenges in big data are storing, distributing, searching, visualizing, querying, and updating such data. Data analysis is another big concern when dealing with big data, which is produced by many different types of applications such as social media and online auctions. Data is differentiated into three major types: structured, unstructured, and semi-structured. Big data is also defined by three major V's, Volume, Velocity, and Variety, which give a clear notion of what big data is.
Nowadays data is growing very fast. For example, many hospitals hold trillions of facets of ECG data, and Twitter alone collects around 170 million temporal data items and serves as many as 200 million queries per day. The most important limitations of existing systems concern handling larger datasets: traditional databases can handle only structured data, not varieties of data, and they struggle with fault tolerance and scalability. That is why big data plays such an important role today.
Bulky datasets cannot all be handled by a single machine, so the data must be distributed and processed in parallel among a cluster of nodes, which is a foremost challenge. To handle this scenario we need a distributed storage system. In the big data world this is provided by Hadoop, which stores and processes big data through two important components: HDFS (for storing big data) and the MapReduce framework (for processing it). Big data processing involves three different stages: data ingestion, data storage, and data analysis.
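To make the division of labour concrete, the following is a minimal in-memory simulation of the map/reduce contract, for intuition only; a real Hadoop job distributes the same two phases across HDFS blocks and cluster nodes.

```python
from collections import defaultdict

def run_mapreduce(records, mapper, reducer):
    grouped = defaultdict(list)
    # Map phase: each input record emits zero or more (key, value) pairs.
    for record in records:
        for key, value in mapper(record):
            grouped[key].append(value)  # the "shuffle" groups values by key
    # Reduce phase: fold each key's grouped values into a final result.
    return {key: reducer(key, values) for key, values in grouped.items()}

# Classic word count as a usage example:
counts = run_mapreduce(
    ["a b a", "b c"],
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda word, ones: sum(ones),
)
# counts == {'a': 2, 'b': 2, 'c': 1}
```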
When data is distributed, locating a given file among such large datasets is difficult. A better solution to this problem is the master-slave architecture, in which a single machine acts as the 'Master' and the remaining machines are treated as 'Slaves'. The master knows the location of every file stored on the different slave machines, so whenever a client sends a request, the master processes it by finding the requested file on the appropriate slave machine. Hadoop follows this same architecture.
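As a rough illustration of the lookup, the sketch below models the master as nothing more than a table from file names to the slaves holding them. The file and node names are made up, and real HDFS NameNode metadata is far richer (blocks, replicas, leases), but the lookup idea is the same.

```python
# Hypothetical master-side location table; not HDFS's actual metadata format.
file_locations = {
    "transactions.dat": ["slave-1", "slave-3"],  # replicas on two slaves
    "weblogs.dat": ["slave-2"],
}

def locate(filename):
    """The master answers a client's request by consulting its table;
    the client then reads the data directly from the slave machines."""
    slaves = file_locations.get(filename)
    if slaves is None:
        raise FileNotFoundError(filename)
    return slaves

print(locate("transactions.dat"))  # ['slave-1', 'slave-3']
```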
2 OBJECTIVES
The main goal of the project is to eliminate redundant transactions on Hadoop nodes to improve performance, which is achieved by reducing the computing and networking load. The key idea is to group highly related transactions into the same data partition. In the area of big data processing, the MapReduce framework has been used to develop parallel data-mining algorithms, including frequent itemset mining (FIM), FP-growth-based methods [3], and association rule mining (ARM).
Compared with traditional systems, modern distributed systems try to achieve high efficiency and scalability when distributed data is executed on large clusters. Many FIM algorithms built on Hadoop aim to balance the load by distributing the data equally among nodes [4]. However, when the data is divided into parts, the connections between related transactions must be maintained; naive splitting leads to poor data locality and, in parallel, increases data-shuffling costs and network overhead. To improve data locality, we introduce a parallel FIM technique in which the bulk of the data is distributed across the Hadoop cluster with locality in mind.
In this paper, FIM is implemented on Hadoop clusters [10] using the MapReduce framework. The project aims to boost the performance of parallel FIM on Hadoop clusters, achieved with the help of map and reduce jobs.
3 METHODOLOGY
Traditional mining algorithms [2] are not sufficient for handling large datasets, so we have introduced a new data-partitioning technique. We also employ parallel computing [7] to detect and eliminate redundant transactions in parallel, achieving better performance than traditional mining algorithms.
Fig 3.1 System Architecture: High Level View
The proposed system compares the old parallel mining algorithm with the new Hadoop-based mining algorithm, showing how much processing time each system requires. Hadoop provides the modules needed to achieve this, and the whole system is depicted in Fig 3.1.
4 IMPLEMENTATION
This project shows how to achieve a better performance measure by comparing an existing parallel mining algorithm with the data-partitioning system, using clustering algorithms. First, the large dataset is loaded into HDFS [6] and then uploaded to the main web server where the parallel FIM [5] application runs. Based on the minimum support, the application partitions the data between two different servers and runs two MapReduce jobs. Finally, the results are sent back to the main server, which runs another MapReduce job to mine further frequent itemsets. In total, three MapReduce jobs are run.
Step 1, Scan the transaction DB: First, scan the transaction database to retrieve the frequent items, called the frequent 1-itemsets. Each itemset is represented as a <key, value> pair.
Step 2, Organize the frequent 1-itemsets into the Flist: Sort the frequent 1-itemsets by frequency in decreasing order; the sorted list is called the Flist.
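A compact sketch of Steps 1 and 2 over an in-memory transaction list follows. In the actual MapReduce formulation, the mapper emits an (item, 1) pair for every item of a transaction and the reducer sums the counts; the result is the same table. The minimum support value here is illustrative.

```python
from collections import Counter

def build_flist(transactions, min_support):
    """Steps 1-2: count item frequencies (frequent 1-itemsets), drop items
    below the minimum support, and sort survivors by decreasing frequency."""
    counts = Counter(item for txn in transactions for item in txn)
    frequent = {item: n for item, n in counts.items() if n >= min_support}
    return sorted(frequent, key=lambda item: (-frequent[item], item))

transactions = [["a", "b", "c"], ["a", "b"], ["b", "d"], ["a", "c"]]
print(build_flist(transactions, min_support=2))  # ['a', 'b', 'c']
```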
Step 3, FIU-tree: This step is carried out by a MapReduce job with the following mapper and reducer.
 Mapper: Taking the Flist from Step 2 as input, the mappers process the transactions against it and output a set of <key, value> pairs, one per group-dependent sub-dataset.
 Reducer: Each reducer instance is assigned one or more group-dependent sub-datasets and processes them one by one. For each sub-dataset, the reducer builds a local FP-tree; during the recursive mining process it outputs the discovered patterns.
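The paper does not spell out how transactions are assigned to group-dependent sub-datasets, so the sketch below follows the PFP-style convention (emit each transaction prefix to the group of its last Flist item), and brute-force subset counting stands in for the recursive FIU-tree/FP-tree mining. The group_of table is a hypothetical item-to-group assignment, not part of the paper.

```python
from collections import defaultdict
from itertools import combinations

def map_to_groups(transaction, flist, group_of):
    """Step 3 mapper (sketch): keep only Flist items, order them by the
    Flist, and emit each prefix once, keyed by the group of its last item."""
    ordered = [item for item in flist if item in transaction]
    seen = set()
    for end in range(len(ordered), 0, -1):
        gid = group_of[ordered[end - 1]]
        if gid not in seen:
            seen.add(gid)
            yield gid, ordered[:end]

def reduce_group(shard, min_support):
    """Step 3 reducer (sketch): mine one group-dependent shard. Brute-force
    enumeration stands in for the recursive FIU-tree/FP-tree mining."""
    counts = defaultdict(int)
    for txn in shard:
        for size in range(1, len(txn) + 1):
            for itemset in combinations(txn, size):
                counts[itemset] += 1
    return {s: n for s, n in counts.items() if n >= min_support}

# Tiny usage example with two groups over the Flist ['a', 'b', 'c']:
flist = ["a", "b", "c"]
group_of = {"a": 0, "b": 0, "c": 1}  # hypothetical item -> group table
shards = defaultdict(list)
for txn in [["a", "b", "c"], ["a", "b"], ["a", "c"]]:
    for gid, prefix in map_to_groups(txn, flist, group_of):
        shards[gid].append(prefix)
patterns = {gid: reduce_group(shard, 2) for gid, shard in shards.items()}
```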
Step 4, Accumulating: The outputs generated in Step 3 are combined to produce the final result.
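A minimal merge for Step 4, assuming (as in the sketch above) that different shards may report the same pattern; taking the maximum reported support is then a safe rule, though the paper's exact accumulation logic may differ.

```python
def accumulate(partial_results):
    """Step 4 (sketch): merge the per-group pattern tables from Step 3.
    Each shard sees all transactions relevant to its group, so a reported
    support is already global; duplicates across shards keep the maximum."""
    final = {}
    for partial in partial_results:
        for itemset, support in partial.items():
            final[itemset] = max(final.get(itemset, 0), support)
    return final

merged = accumulate([{("a", "b"): 3}, {("a", "b"): 2, ("b", "c"): 4}])
# merged == {('a', 'b'): 3, ('b', 'c'): 4}
```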
5 OUTCOMES
Combining the new parallel mining algorithm with data partitioning yields better performance than traditional mining algorithms such as Apriori and MLFPT [9], as shown in the graphs below.
Fig 5.1 Effects of minimum support
Fig 5.2 Speed up performance
CONCLUSION AND FUTURE SCOPE
In almost any domain, huge volumes of records are generated within a fraction of a second. To process such data, Apache Hadoop provides frameworks such as MapReduce. With traditional parallel algorithms for frequent itemset mining, processing such data takes more time, and system performance and load balancing were major challenges. This experiment introduces a parallel mining algorithm called FIUT under the MapReduce programming paradigm; it divides the input data across multiple Hadoop nodes and mines it in parallel to generate frequent itemsets. This data-partitioning technique not only improves system performance but also balances the load.
In the future, this work can be validated on Apache Spark [6], another emerging Apache cluster-computing technology [8] that is faster than MapReduce. Spark's speed advantage comes mainly from in-memory computation, and it offers concise APIs in Scala, Java, Python, and R, whereas classic MapReduce jobs are typically written in Java; the Python API in particular requires far less code to express the same job.
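For reference, Spark's MLlib already ships a parallel FP-growth implementation. The sketch below is a minimal PySpark example; the dataset, application name, and threshold values are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth

spark = SparkSession.builder.appName("fim-on-spark").getOrCreate()

# One row per transaction, each holding an array of items.
df = spark.createDataFrame(
    [(["a", "b", "c"],), (["a", "b"],), (["b", "d"],)],
    ["items"],
)

# minSupport is a fraction of all transactions, not an absolute count.
fp = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fp.fit(df)

model.freqItemsets.show()      # frequent itemsets with their frequencies
model.associationRules.show()  # rules that meet the confidence threshold
spark.stop()
```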
ACKNOWLEDGEMENT
I would like to thank Mrs. Swathi, Associate Professor and HOD, Department of Computer Science and Engineering, CMRIT, Bangalore, who shared her opinions and experiences, through which I received information crucial for the project.
REFERENCES
[1] Osmar R. Zaïane, Mohammad El-Hajj, Paul Lu. Fast Parallel Association Rule Mining without Candidacy Generation. IEEE, Canada, 2001. ISBN 0-7695-1119-8.
[2] CH. Sekhar, S. Reshma Anjum. Cloud Data Mining based on Association Rule. International Journal of Computer Science and Information Technologies, Vol. 5(2), pp. 2091-2094, Andhra Pradesh, 2014. ISSN 0975-9646.
[3] Arkan A. G. Al-Hamodi, Songfeng Lu, Yahya E. A. Al-Salhi. An Enhanced FP-Growth Based on MapReduce for Mining Association Rules. IJDKP, Vol. 6, China, 2016.
[4] Vrushali Ubarhande, Alina-Madalina Popescu, Horacio González-Vélez. Novel Data-Distribution Technique for Hadoop in Heterogeneous Cloud Environments. International Conference on Complex, Intelligent, and Software Intensive Systems, Ireland, 2015. ISBN 978-1-4799-8870-9.
[5] Yaron Gonen, Ehud Gudes. An Improved MapReduce Algorithm for Mining Closed Frequent Itemsets. International Conference on Software Science, Technology and Engineering, Israel, 2016. ISBN 978-1-5090-1018-9.
[6] Ankush Verma, Ashik Hussain Mansuri, Neelesh Jain. Big Data Management Processing with Hadoop MapReduce and Spark Technology: A Comparison. CDAN, Rajasthan, 2016.
[7] Adetokunbo Makanju, Zahra Farzanyar, Aijun An, Nick Cercone, Zane Zhenhua Hu, Yonggang Hu. Deep Parallelization of Parallel FP-Growth Using Parent-Child MapReduce. IEEE, Canada, 2016.
[8] Feng Zhang, Yunlong Ma, Min Liu. A Distributed Frequent Itemset Mining Algorithm Using Spark for Big Data Analytics. Springer, New York, 2015.
[9] Bhagyashri Waghamare, Bharat Tidke. Review: Association Rule for Distributed Data. ISCSCN, India. ISSN 2249-5789.
[10] Hamoud Alshammari, Jeongkyu Lee, Hassan Bajwa. H2Hadoop: Improving Hadoop Performance Using the Metadata of Related Jobs. IEEE, 2015. TCC-2015-11-0399.