M. Manikyam Email: manikyam.m20@gmail.com
Mobile: +91-7075195124
Career Objective:
To work in a professional and solution-oriented environment that offers ample opportunity to continuously
innovate and improve software products as well as myself, and in the long run to learn and establish
processes and standards.
Professional Summary:
• 5+ years of experience, including 3+ years in Spark and Hadoop development.
• Good experience creating real-time data streaming solutions using Apache Spark Core, Spark SQL & DataFrames,
and Spark Streaming (a brief sketch follows this list).
• Hands-on experience with Big Data core components and the ecosystem (Spark, Spark SQL, Spark Streaming,
Hadoop, HDFS, MapReduce, YARN, ZooKeeper, Hive, HBase, Pig, Sqoop, Flume, Kafka, Storm, Oozie).
• Experience in importing and exporting data using Sqoop, from HDFS to relational database systems (RDBMS)
and from RDBMS to HDFS.
• Involved in integrating Hive with HBase, Pig with HBase, and Hive with Tez.
• Good knowledge of NoSQL databases such as HBase, Cassandra, and MongoDB.
• Experience in Hadoop administration on the Cloudera distribution, with working knowledge of the Hortonworks distribution.
• Ability to adapt to evolving technology, with a strong sense of responsibility and accomplishment.
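
By way of illustration, a minimal Scala sketch of the kind of Spark SQL / DataFrame work mentioned above (Spark 2.x API); the input path, column names, and aggregation are hypothetical placeholders, not drawn from any specific project:

```scala
import org.apache.spark.sql.SparkSession

object ProductCatalogStats {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("ProductCatalogStats").getOrCreate()

    // Hypothetical product catalog on HDFS; path and columns are placeholders.
    val products = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/catalog/products.csv")

    // DataFrame API: average price per category.
    products.groupBy("category").avg("price").show()

    // The same aggregation expressed in Spark SQL over a temporary view.
    products.createOrReplaceTempView("products")
    spark.sql("SELECT category, AVG(price) AS avg_price FROM products GROUP BY category").show()

    spark.stop()
  }
}
```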
Technical Skills:
Big Data Skills : Spark, Hadoop, HDFS, MapReduce, YARN, ZooKeeper, Hive, HBase,
Pig, Sqoop, Flume, Kafka, Storm, Oozie.
Languages : Scala, Java, C++, C
Scripting Languages : HTML, CSS, JavaScript.
Web Application Server : Apache Tomcat.
NoSQL Databases : HBase, Phoenix, Cassandra, MongoDB
Databases : MySQL, Oracle, Derby
IDEs : Eclipse, EditPlus
Java EE Technologies : Servlets, JSP.
Professional Experience:
• Working as a Software Engineer at Tata Consultancy Services (TCS), Hyderabad, from July 2013 till date.
• Worked as a Software Engineer at Wipro, Hyderabad, from June 2011 to June 2013.
Work Experience:
PROJECT # 1
Project Name : E-Commerce Data Pipeline
Role : Developer
Client Name : Obsessory
Company Name : TCS
Duration : Jan 2015 to present
Team Size : 10
Environment : Cassandra, Hive, Spark (Core, SQL, MLlib, Streaming), Hadoop,
MapReduce, Scala, Java
Project Description: Obsessory is a technology company that provides a web and mobile platform to assist shoppers in the
discovery, search, comparison, and tracking of items across the Internet. Obsessory's powerful search engine catalogs
millions of products from online stores on a daily basis and uses proprietary algorithms to enhance the depth and breadth of
the user's search. It employs adaptive and social learning to continuously refine the search results and present the user with the
most relevant selection of items.
Role & Responsibilities:
1. Crawled data from 100+ sites based on ontology maintenance.
2. Designed the schema and data model and wrote the algorithm to store all validated data in Cassandra using Spring
Data Cassandra REST.
3. Standardized the input merchant data, uploaded images, indexed the given data sets into HSearch, and persisted the
data in HBase tables.
4. Set up the Spark Streaming and Kafka cluster and developed a Spark Streaming Kafka application (sketched below).
5. Generated stock alerts, price alerts, popular-product alerts, and new arrivals for each user based on likes,
favorites, and share counts.
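
A minimal Scala sketch of the shape of such a Spark Streaming Kafka application, using the Spark 1.x direct-stream API; the broker address, topic, record layout, and alert threshold are assumptions for illustration, and the real per-user alerting logic was richer:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import kafka.serializer.StringDecoder

object PopularProductAlerts {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("PopularProductAlerts")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Kafka connection details are placeholders.
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val topics      = Set("user-activity")

    // Direct (receiver-less) stream from Kafka, Spark 1.x style.
    val lines = KafkaUtils
      .createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)
      .map { case (_, value) => value }

    // Assume CSV records of the form "productId,action"; count likes,
    // favorites, and shares per product in each batch and flag products
    // that cross a hypothetical popularity threshold.
    lines
      .map(_.split(","))
      .filter(f => f.length >= 2 && Set("like", "favorite", "share").contains(f(1)))
      .map(f => (f(0), 1L))
      .reduceByKey(_ + _)
      .filter { case (_, count) => count >= 100 }
      .print() // in the real pipeline, alerts were pushed per user

    ssc.start()
    ssc.awaitTermination()
  }
}
```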
PROJECT # 2:
Project Name : Truck Events Analysis
Role : Developer
Client Name : Hortonworks
Company Name : TCS
Duration : Jan 2014 to Dec 2014
Team Size : 10
Environment : Hadoop, HDFS, Hive, HBase, Kafka, Storm, RabbitMQ Web Stomp,
Google Maps, New York City truck routes from NYC DOT, truck
events data generated using a custom simulator.
Project Description: The trucking business is a high-risk business in which truck drivers venture into remote areas, often in
harsh weather conditions and chaotic traffic, on a daily basis. Using this solution, which illustrates a Modern Data Architecture with the
Hortonworks Data Platform, we developed a centralized management system that can help reduce risk and lower the
total cost of operations.
Responsibilities:
1. Developed a simulator to emit events based on the NYC DOT data file.
2. Built a Kafka producer to accept and send events to the Kafka cluster, which feeds a Storm spout.
3. Wrote a Storm topology to accept events from the Kafka spout and process them (sketched below).
4. Developed Storm bolts to emit data into HBase, HDFS, and RabbitMQ Web Stomp.
5. Wrote Hive queries to map truck events data, weather data, and traffic data.
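
A minimal Scala sketch of such a Kafka-fed Storm topology, using Storm 1.x package names; the ZooKeeper address, topic, field layout, and parallelism are illustrative assumptions, and the real topology also wrote to HBase, HDFS, and RabbitMQ Web Stomp:

```scala
import org.apache.storm.{Config, StormSubmitter}
import org.apache.storm.kafka.{KafkaSpout, SpoutConfig, StringScheme, ZkHosts}
import org.apache.storm.spout.SchemeAsMultiScheme
import org.apache.storm.topology.{BasicOutputCollector, OutputFieldsDeclarer, TopologyBuilder}
import org.apache.storm.topology.base.BaseBasicBolt
import org.apache.storm.tuple.{Fields, Tuple, Values}

// Parses raw truck events coming off the Kafka spout; the pipe-delimited
// record layout (driverId|eventType|...) is a simplified assumption.
class ParseTruckEventBolt extends BaseBasicBolt {
  override def execute(tuple: Tuple, collector: BasicOutputCollector): Unit = {
    val fields = tuple.getString(0).split("\\|")
    if (fields.length >= 2) collector.emit(new Values(fields(0), fields(1)))
  }
  override def declareOutputFields(declarer: OutputFieldsDeclarer): Unit =
    declarer.declare(new Fields("driverId", "eventType"))
}

object TruckEventsTopology {
  def main(args: Array[String]): Unit = {
    // ZooKeeper address, topic, and spout id are placeholders.
    val spoutConfig =
      new SpoutConfig(new ZkHosts("zk1:2181"), "truck-events", "/truck-events", "truck-events-spout")
    spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme())

    val builder = new TopologyBuilder
    builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1)
    builder.setBolt("parse-events", new ParseTruckEventBolt, 2).shuffleGrouping("kafka-spout")
    // Downstream bolts (HBase, HDFS, RabbitMQ Web Stomp) would attach here.

    StormSubmitter.submitTopology("truck-event-processor", new Config, builder.createTopology())
  }
}
```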
POC:
Title : Research and Analysis of Twitter Data
Role : Admin and Developer
Company Name : TCS
Duration : Aug-2013 to Dec-2013
Environment : Hadoop, Hive, Flume.
Operating Systems : Ubuntu.
Description: Social media has gained immense popularity with marketing teams, and Twitter is an
effective tool for a company to get people excited about its products. Twitter makes it easy to engage users and communicate
directly with them, and in turn, users can provide word-of-mouth marketing for companies by discussing the
products. The POC also involved analyzing the logs generated by an application to keep track of its status (health).
Responsibilities:
• Trained the team on Hadoop and ecosystem components.
• Installed an Apache Hadoop cluster.
• Installed a Flume agent on the source and retrieved the incremental data into HDFS.
• Loaded the Twitter data into HDFS using Flume.
• Installed Pig and Hive for analysis.
• Inserted data into Hive using the JSON SerDe.
• Analyzed the data to create various reports.
PROJECT # 3
Project Name : Cole Haan B2B Development & Upgrade
Role : Developer
Client Name : Cole Haan
Company Name : Wipro
Duration : July 2011 to June 2013
Team Size : 6
Environment : Java, NWDI, NWDS, Spring IoC
Project Description: An online business-to-business shopping portal developed in Java. Vendors raise a sales
order from the site, which in turn raises a sales order in the back-end SAP system. When the back-end SAP system was upgraded to a
newer version, the application stopped working. Fixing the application and developing it further were done as part of this project.
Responsibilities:
1. Responsible for all Java-related activities, including analysis, design, and development.
2. Participated in R&D for two months and provided a solution for making the application work with the upgraded back-end
SAP system. This solution is now widely used in the organization for SAP upgrade projects.
3. Carried out further development on this project using J2EE and related technologies.
4. Interacted with clients to give demos and gather requirements and suggestions.
Achievements:
• Received the Best Performer award for 2015.
• Consistently achieved high quality-assurance ratings.
• Resolved critical issues and received good feedback from clients.
Educational Qualification:
Bachelor of Engineering (B.Tech) in CSE from Intel Engineering College, JNTU Anantapuram, 2011.
Date :
Place:
(M.Manikyam)