RAMA PRASAD OWK
ETL/Hadoop Developer
Email: ramprasad1261@gmail.com
Cell: +1 248 525 3827
_______________________________________________________________________________________________
Professional Summary
Overall 6+ years of IT experience in system analysis, design, development and implementation of Data Warehouses
as an ETL Developer using IBM DataStage, along with experience as a Hadoop Developer.
▪ Involved in Designing, Developing, Documenting, Testing of ETL jobs and mappings in Server and Parallel
jobs using Data Stage to populate tables in Data Warehouse and Data marts.
▪ Expertise in using various Hadoop ecosystem components such as MapReduce, Pig, Hive, HBase, Sqoop, Spark and
Spark Streaming for data storage and analysis.
▪ Experienced in troubleshooting errors in HBase Shell/API, Pig, Hive and MapReduce.
▪ Highly experienced in importing and exporting data between HDFS and Relational Database Management
systems using Sqoop.
▪ Collected log data from various sources and integrated it into HDFS using Flume.
▪ Good experience in generating statistics, extracts and reports from Hadoop.
▪ Experience in writing Spark applications using spark-shell, pyspark and spark-submit (see the sketch at the end of
this summary).
▪ Developed prototype Spark applications using Spark Core, Spark SQL and the DataFrame API.
▪ Experience in writing Python scripts in the Hadoop environment.
▪ Capable of processing large sets of structured, semi-structured and unstructured data using Hadoop.
▪ Proficient in developing strategies for the Extraction, Transformation and Loading (ETL) process.
▪ Good Knowledge in designing Parallel jobs using various stages like Join, Merge, Lookup, Remove
duplicates, Filter, Dataset, Lookup file set, Complex flat file, Modify, Aggregator.
▪ Designed Server jobs using various types of stages like Sequential file, ODBC, Hashed file, Aggregator,
Transformer, Sort, Link Partitioner and Link Collector.
▪ Expert in working with Data Stage Designer and Director.
▪ Experience in analyzing the data generated by the business process, defining the granularity, source to target
mapping of the data elements, creating Indexes and Aggregate tables for the data warehouse design and
development.
▪ Good knowledge of studying the data dependencies using metadata stored in the repository and prepared
batches for the existing sessions to facilitate scheduling of multiple sessions.
▪ Proven track record in troubleshooting of Data Stage jobs and addressing production issues like performance
tuning and enhancement.
▪ Experienced in UNIX shell scripts for the automation of processes and scheduling of DataStage jobs using
wrappers.
▪ Involved in unit testing, system integration testing, implementation and maintenance of database jobs.
▪ Effective in cross-functional and global environments, managing multiple tasks and assignments concurrently,
with strong communication skills.
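The Spark work summarized above was typically packaged and launched through spark-submit on YARN. Below is a minimal, hypothetical launch-wrapper sketch; the application name, paths and resource settings are illustrative assumptions rather than values from any specific project.

    #!/bin/bash
    # Hypothetical wrapper: submits a PySpark application to YARN and logs the run.
    # All paths, names and resource figures below are placeholder assumptions.
    APP_HOME=/opt/apps/warehouse_etl           # assumed deployment directory
    APP_SCRIPT=$APP_HOME/daily_aggregates.py   # assumed PySpark driver script
    LOG_DIR=$APP_HOME/logs
    RUN_DATE=${1:-$(date +%Y-%m-%d)}           # business date, defaults to today

    mkdir -p "$LOG_DIR"

    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --name "daily_aggregates_${RUN_DATE}" \
      --num-executors 10 \
      --executor-cores 4 \
      --executor-memory 8g \
      "$APP_SCRIPT" --run-date "$RUN_DATE" \
      > "$LOG_DIR/daily_aggregates_${RUN_DATE}.log" 2>&1 || {
        echo "spark-submit failed for ${RUN_DATE}" >&2
        exit 1
      }

The same script can be handed to a scheduler; for exploratory work, spark-shell and pyspark launch the equivalent interactive sessions.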
Technical Skill Set
ETL Tools: IBM WebSphere DataStage and QualityStage 8.1, 8.5, 9.1 & 11.5; Ascential DataStage 7.5
Big Data Ecosystems: Hadoop, MapReduce, HDFS, Hive, Sqoop, Pig, Spark, HBase
Databases: Oracle, Greenplum, Teradata
Tools and Utilities: SQL Developer, Teradata SQL Assistant, pgAdmin III
FTP Tools: Tumbleweed, Sterling Commerce
Programming Languages: Python, SQL, UNIX Shell Script
Operating Systems: Windows XP, LINUX
Education
▪ Bachelor of Technology in Information Technology.
Professional Certification
▪ IBM Certified Solution Developer - InfoSphere DataStage v8.5, certified on 09/26/2013.
Experience Summary
Client: United Services Automobile Association July 2016 – Present
Role: DataStage/Hadoop Developer
The United Services Automobile Association (USAA) is a Texas-based Fortune 500 diversified financial services
group of companies including a Texas Department of Insurance regulated reciprocal inter-insurance exchange and
subsidiaries offering banking, investing, and insurance to people and families that serve, or served, in the United States
military. At the end of 2015, there were 11.4 million members.
USAA was founded in 1922 by a group of U.S. Army officers as a mechanism for mutual self-insurance when they
were unable to secure auto insurance because of the perception that they, as military officers, were a high-risk group.
USAA has since expanded to offer banking and insurance services to past and present members of the Armed Forces,
officers and enlisted, and their immediate families.
The Solvency project enables the team to build reports in Business Objects that are used for business
analysis.
Responsibilities:
 Worked with the data modeler to understand the data modeling documents.
 Developed Sqoop scripts to extract data from DB2 EDW source databases onto HDFS.
 Developed a custom MapReduce job to perform data cleanup, transform data from text to Avro and write the
output directly into Hive tables by generating dynamic partitions.
 Developed custom FTP and SFTP drivers to pull flat files from UNIX and Windows into Hadoop and tokenize
identified sensitive data from input records on the fly, in parallel.
 Developed custom InputFormat, RecordReader, Mapper, Reducer and Partitioner classes as part of developing
end-to-end Hadoop applications.
 Developed a custom Sqoop tool to import data residing in relational databases, tokenize identified sensitive
columns on the fly and store the data in Hadoop.
 Worked with the HBase Java API to populate operational HBase tables with key-value data.
 Wrote Spark applications using spark-shell, pyspark and spark-submit.
 Developed several custom user-defined functions in Hive and Pig using Java and Python.
 Developed Sqoop jobs to perform full and incremental imports of data from relational tables into Hadoop in
formats such as text, Avro and SequenceFile, and into Hive tables (a sketch follows this list).
 Developed Sqoop jobs to export data from Hadoop to relational tables for visualization and to generate
reports for the BI team.
 Developed ETL jobs as per business rules using ETL design document.
 Created the reusable jobs which can be used across the project.
 Enhanced the reusability of the jobs by making and deploying shared containers and multiple instances of the
jobs.
 Extensively used the Control-M tool to automate job scheduling on a daily, bi-weekly, weekly and monthly
basis with proper dependencies.
 Wrote complex SQL queries using joins, subqueries and correlated subqueries.
 Performed Unit testing and System Integration testing by developing and documenting test cases.
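Below is a minimal, hypothetical sketch of the kind of incremental Sqoop import and companion export described above; the connection string, credentials, table and column names are placeholder assumptions, not details of the actual EDW feeds.

    #!/bin/bash
    # Hypothetical saved Sqoop job: incremental append import from a DB2 table
    # into HDFS as Avro. Hostnames, credentials and identifiers are placeholders.
    sqoop job --create claims_incremental -- import \
      --connect jdbc:db2://edw-host:50000/EDWDB \
      --username etl_user --password-file /user/etl_user/.db2.pwd \
      --table CLAIMS \
      --target-dir /data/raw/claims \
      --as-avrodatafile \
      --incremental append \
      --check-column CLAIM_ID \
      --last-value 0 \
      --num-mappers 4

    # Each scheduled execution imports only rows added since the stored last-value.
    sqoop job --exec claims_incremental

    # Hypothetical companion export of aggregated results back to a relational
    # table for the BI/reporting layer.
    sqoop export \
      --connect jdbc:db2://edw-host:50000/EDWDB \
      --username etl_user --password-file /user/etl_user/.db2.pwd \
      --table CLAIM_SUMMARY \
      --export-dir /data/marts/claim_summary \
      --input-fields-terminated-by '|'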
Environment: IBM-DataStage-9.1 & 11.5, Oracle, Hadoop, Netezza Database, Control-M Scheduler
Client: Walmart Jan 2015 – June 2016
Role: DataStage/Hadoop Developer
Walmart is the largest retailer of consumer staples products in the world. It was founded by Sam Walton in 1962, who
started with the idea of a store that would offer its customers the lowest prices in the market. Walmart was
incorporated in 1969 following the success of Walton’s ‘everyday low prices’ pricing strategy. It currently operates in
27 countries around the world. The company drives growth by expansion in its retail area and investments in e-
commerce. Nonetheless, it has underperformed its competitor Costco over the last few years due to Costco's better
customer service record. Walmart has lost customers to Costco over the years, primarily because it does not pay its
employees well, which has led to demotivation in the workforce and poor customer service.
The EIM Innovations HANA project enables the Data Cafe team to build analytical reports in Tableau, which are
used for further analysis by the business. The data supplied contains tables such as Scan, Item and Comp_Hist, with
club- and store-wise details for every hour and on a daily basis.
Responsibilities:
 Worked with the data modeler to understand the data modeling documents.
 Used Sqoop to import data from DB2 into HDFS on a regular basis.
 Developed scripts and batch jobs to schedule various Hadoop programs.
 Wrote Hive queries for data analysis to meet the business requirements.
 Created Hive tables and worked on them using HiveQL.
 Imported and exported data into HDFS and Hive using Sqoop.
 Experienced in defining job flows.
 Involved in creating Hive tables, loading data and writing Hive queries.
 Developed a custom file system plug-in for Hadoop so that it can access files on the Data Platform.
 Extensively worked on DataStage jobs for splitting bulk data into subsets and to dynamically distribute to all
available processors to achieve best job performance.
 Developed ETL jobs as per business rules using ETL design document
 Converted complex job designs to different job segments and executed through job sequencer for better
performance and easy maintenance.
 Enhanced the reusability of the jobs by making and deploying shared containers and multiple instances of the
jobs.
 Imported the data residing in the host systems into the data mart developed in DB2, Greenplum, etc.
 Wrote scripts to upload data into Greenplum and SAP HANA, since the DataStage 8.5 version in use did not
have Greenplum or SAP HANA database plug-ins (a sketch follows this list).
 Extensively used the CA-7 tool, which resides on the mainframe, to automate job scheduling on a daily, bi-
weekly, weekly and monthly basis with proper dependencies.
 Wrote complex SQL queries using joins, subqueries and correlated subqueries.
 Performed Unit testing and System Integration testing by developing and documenting test cases.
 Processed large sets of structured, semi-structured and unstructured data.
 Performed pre-processing using Hive and Pig.
 Managed and deployed HBase.
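A minimal, hypothetical sketch of the kind of load script described above, written because the DataStage release in use had no native Greenplum stage; the host, database, table and file names and the delimiter are placeholder assumptions (gpload/gpfdist would be the higher-throughput alternative).

    #!/bin/bash
    # Hypothetical example: bulk-load a delimited extract into a Greenplum
    # staging table with psql. All connection details and names are placeholders.
    EXTRACT_FILE=/data/extracts/scan_daily.dat
    GP_HOST=gp-master.example.com
    GP_DB=analytics
    TARGET_TABLE=stage.scan_daily

    # Clear the staging table, then stream the extract through \copy.
    psql -h "$GP_HOST" -d "$GP_DB" -c "TRUNCATE TABLE ${TARGET_TABLE};" || exit 1

    psql -h "$GP_HOST" -d "$GP_DB" \
      -c "\copy ${TARGET_TABLE} FROM '${EXTRACT_FILE}' WITH DELIMITER '|' NULL ''" \
      || { echo "Greenplum load failed for ${EXTRACT_FILE}" >&2; exit 1; }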
Environment: IBM-DataStage-9.1, Hadoop, Greenplum, Teradata Database, SAP HANA, LINUX, CA-7 Scheduler
Client: General Motors Feb 2014 – Dec 2014
Role: DataStage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
This project, the Part Launch Activity Network (PLAN), is being developed to address a business gap in the Global
SAP solution for General Motors (GM) Customer Care and Aftersales (CCA) around the management process for the
release of new part numbers in General Motors.
Responsibilities:
 Involved in understanding of business processes and coordinated with business analysts to get specific user
requirements.
 Used Information Analyzer for Column Analysis, Primary Key Analysis and Foreign Key Analysis.
 Extensively worked on DataStage jobs for splitting bulk data into subsets and to dynamically distribute to all
available processors to achieve best job performance.
 Developed ETL jobs as per business rules using ETL design document
 Converted complex job designs to different job segments and executed through job sequencer for better
performance and easy maintenance.
 Used DataStage mappings to load data from source to target.
 Enhanced the reusability of the jobs by making and deploying shared containers and multiple instances of the
jobs.
 Imported the data residing in the host systems into the data mart developed in Oracle 10g.
 Extensively used Autosys to automate job scheduling on a daily, bi-weekly, weekly and monthly basis with
proper dependencies.
 Wrote complex SQL queries using joins, subqueries and correlated subqueries.
 Performed Unit testing and System Integration testing by developing and documenting test cases.
Environment: IBM-DataStage-8.5, Oracle 10g, LINUX
Client: General Motors April 2013– Jan 2014
Role: Data Stage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
The purpose of this project (Pricing Data) is to distribute pricing data to several recipients, either to support different
systems or to sell the data to agencies that offer it to their customers. These files were previously created manually in
Excel or Access and distributed via e-mail or FTP to the recipients. We developed interfaces to automate the process
of sharing data with multiple recipients.
Responsibilities:
 Worked on DataStage Designer, Manager and Director.
 Worked with the Business analysts and the DBAs for requirements gathering, analysis, testing, and metrics
and project coordination.
 Involved in extracting the data from different data sources like Oracle and flat files.
 Involved in creating and maintaining Sequencer and Batch jobs.
 Creating ETL Job flow design.
 Used ETL to load data into the Oracle warehouse.
 Created various standard/reusable jobs in DataStage using various active and passive stages like Sort, Lookup,
Filter, Join, Transformer, Aggregator, Change Capture, Sequential File and DataSet.
 Involved in development of Job Sequencing using the Sequencer.
 Used Remove Duplicates stage to remove the duplicates in the data.
 Used Designer and Director to schedule and monitor jobs and to collect performance statistics.
 Extensively worked with database objects including tables, views and triggers.
 Creating local and shared containers to facilitate ease and reuse of jobs.
 Implemented the underlying logic for Slowly Changing Dimensions
 Worked with Developers to troubleshoot and resolve issues in job logic as well as performance.
 Documented ETL test plans, test cases, test scripts, and validations based on design specifications for unit
testing, system testing, functional testing, prepared test data for testing, error handling and analysis.
Environment: IBM-DataStage-8.5, Oracle 10g, LINUX
Client: General Motors Sep 2012 – March 2013
Role: Datastage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
The GM ADP Payroll project outsources payroll processing to a third-party provider, ADP, which processes the
employee payrolls and sends back data that is used by various downstream payroll-related applications.
Currently, two legacy applications handle payroll processing - HPS (Hourly Payroll System) and SPS (Salaried
Payroll System) - which involve a lot of manual activity and need a very long time to process the complete payroll
data. With the increase in the number of employees, this process is expected to become more cumbersome.
General Motors therefore decided to decommission the existing HPS/SPS and carry out processing through a third-
party automated application called ADP.
Responsibilities:
 Involved as ETL Developer during the analysis, planning, design, development, and implementation stages of
projects
 Prepared data mapping documents and designed the ETL jobs based on the DMD with the required tables in the
Dev environment.
 Actively participated in decision-making and QA meetings, and regularly interacted with the Business Analysts
and development team to gain a better understanding of the business process, requirements and design.
 Used DataStage as an ETL tool to extract data from source systems and load the data into the
Oracle database.
 Designed and developed DataStage jobs to extract data from heterogeneous sources, applied transformation
logic to the extracted data and loaded it into data warehouse databases.
 Created DataStage jobs using different stages like Transformer, Aggregator, Sort, Join, Merge, Lookup, Data
Set, Funnel, Remove Duplicates, Copy, Modify, Filter, Change Data Capture, Sample, Surrogate Key, Column
Generator, Row Generator, etc.
 Extensively worked with Join, Look up (Normal and Sparse) and Merge stages.
 Extensively worked with sequential file, dataset, file set and look up file set stages.
 Extensively used Parallel stages like Row Generator, Column Generator, Head and Peek for development and
debugging purposes.
 Used the Data Stage Director and its run-time engine to schedule running the solution, testing and debugging
its components, and monitoring the resulting executable versions on ad hoc or scheduled basis.
 Converted complex job designs to different job segments and executed through job sequencer for better
performance and easy maintenance.
 Created job sequences.
 Maintained Data Warehouse by loading dimensions and facts as part of project. Also worked for different
enhancements in FACT tables.
 Created shell scripts to run DataStage jobs from UNIX and scheduled these scripts through the scheduling tool
(a sketch follows this list).
 Coordinated with team members and administered all onsite and offshore work packages.
 Analyzed performance and monitored work with capacity planning.
 Performed performance tuning of the jobs by interpreting performance statistics of the jobs developed.
 Documented ETL test plans, test cases, test scripts and validations based on design specifications for unit
testing, system testing and functional testing; prepared test data for testing, error handling and analysis.
 Participated in weekly status meetings.
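A minimal, hypothetical sketch of the kind of UNIX wrapper described above, built around the DataStage dsjob command-line interface; the project, job and parameter names, paths, and the success test are placeholder assumptions to be adapted to the installation.

    #!/bin/bash
    # Hypothetical wrapper so an external scheduler can run a DataStage job
    # and act on its completion status. Names and paths are placeholders.
    DSHOME=${DSHOME:-/opt/IBM/InformationServer/Server/DSEngine}
    . "$DSHOME/dsenv"                        # load the DataStage engine environment

    DS_PROJECT=PAYROLL_DEV
    DS_JOB=${1:?Usage: $0 <job_name> [run_date]}
    RUN_DATE=${2:-$(date +%Y%m%d)}

    # Run the job with a parameter, wait for completion and surface its status.
    dsjob -run -wait -jobstatus \
          -param pRunDate="$RUN_DATE" \
          "$DS_PROJECT" "$DS_JOB"
    RC=$?

    # With -jobstatus the exit code reflects the job status (commonly 1 = finished
    # OK, 2 = finished with warnings); verify the convention on your release.
    if [ "$RC" -le 2 ]; then
        echo "$DS_JOB completed with status $RC"
        exit 0
    else
        echo "$DS_JOB failed with status $RC" >&2
        dsjob -logsum "$DS_PROJECT" "$DS_JOB"   # print the job log summary
        exit 1
    fi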
Environment: IBM-DataStage-8.5, Oracle 10g, LINUX
Client: General Motors Feb 2011 – Aug 2012
Role: Team Member
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
This project maintains the Global Strategic Pricing interfaces in the production environment, which determine the
effective pricing structure, including part usage, selling price, demand, warranty, currency exchange rates, etc.
Responsibilities:
 Supported 350 interface specifications developed in different DataStage versions.
 Supported 30 interface specifications, mainly to/from SAP via the R/3 plug-in.
 Analyzed different architecture designs for batch transaction interfaces between
different systems (Mainframe, WebLogic, SAP).
 Strictly observed the GM GIF (Global Integration Foundation) EAI standard for error handling and
audit report generation using CSF (Common Services Framework).
 Developed and deployed MERS in the production environment.
 Met with clients for requirements gathering, addressing issues and change requests.
 Analyzed issues occurring in production, resolved them in the Development and
Test environments, and rolled the changes over to production.
 Involved in SOX Audit process.
 Involved in Change Management Process.
 Working on Production Support Calls.
 Coordinated between the onsite and offshore teams.
Environment: IBM-DataStage-8.1, IBM-DataStage-8.5, Ascential DataStage 7.5, Oracle 10g, LINUX