Hadoop: An Industry Perspective
Outline
- What is Hadoop?
- Overview of HDFS and MapReduce
- How Hadoop augments an RDBMS
- Industry business needs:
  - Data consolidation (structured or not)
  - Data schema agility (evolve schema fast)
  - Query language flexibility (data engineering)
  - Data economics (store more for longer)
- Conclusion
What is Hadoop?
- A scalable, fault-tolerant distributed system for data storage and processing.
- Its scalability comes from the marriage of:
  - HDFS: self-healing, high-bandwidth clustered storage
  - MapReduce: fault-tolerant distributed processing
- Operates on structured and complex data.
- A large and active ecosystem (many developers and additions such as HBase, Hive, Pig, ...).
- Open source under the Apache License: http://wiki.apache.org/hadoop/
Hadoop History
- 2002-2004: Doug Cutting and Mike Cafarella start working on Nutch.
- 2003-2004: Google publishes the GFS and MapReduce papers.
- 2004: Cutting adds DFS and MapReduce support to Nutch.
- 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch.
- 2007: The NY Times converts 4TB of archives over 100 EC2s.
- 2008: Web-scale deployments at Yahoo!, Facebook, and Last.fm.
- April 2008: Yahoo! does the fastest sort of a TB, 3.5 minutes over 910 nodes.
- May 2009: Yahoo! does the fastest sort of a TB, 62 seconds over 1,460 nodes; Yahoo! sorts a PB in 16.25 hours over 3,658 nodes.
- June 2009, October 2009: Hadoop Summit, Hadoop World.
- September 2009: Doug Cutting joins Cloudera.
Hadoop Design Axioms
- The system shall manage and heal itself.
- Performance shall scale linearly.
- Compute shall move to data.
- Simple core, modular and extensible.
HDFS: Hadoop Distributed File System
- Block size = 64MB
- Replication factor = 3
- Cost/GB is a few ¢/month vs. $/month
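The arithmetic behind these two defaults can be sketched in a few lines (a toy model for illustration, not HDFS code; the 64MB and 3x figures are the ones quoted on the slide):

```python
import math

BLOCK_SIZE_MB = 64   # default HDFS block size cited on the slide
REPLICATION = 3      # default replication factor

def hdfs_footprint(file_size_mb):
    """Return (block_count, raw_storage_mb) for a file of the given size.

    The last block may be partially filled; HDFS only consumes the bytes
    actually written, so raw usage is simply size * replication.
    """
    blocks = math.ceil(file_size_mb / BLOCK_SIZE_MB)
    raw_mb = file_size_mb * REPLICATION
    return blocks, raw_mb

# A 200 MB file occupies 4 blocks (3 full + 1 partial) and 600 MB raw.
print(hdfs_footprint(200))  # (4, 600)
```

The large block size amortizes seek time over long sequential reads, which is why the default is so much bigger than a local filesystem's.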
MapReduce: Distributed Processing
Apache Hadoop Ecosystem
- BI reporting and ETL tools, plus an RDBMS, sit on top of the stack.
- Hive (SQL), Pig (data flow), Sqoop (RDBMS import/export)
- MapReduce (job scheduling/execution system, with Streaming/Pipes APIs)
- HBase (key-value store)
- Avro (serialization), ZooKeeper (coordination)
- HDFS (Hadoop Distributed File System)
Use the Right Tool for the Right Job

Relational databases, when to use:
- Interactive reporting (<1 sec)
- Multistep transactions
- Lots of inserts/updates/deletes

Hadoop, when to use:
- Affordable storage/compute
- Structured or not (agility)
- Resilient auto-scalability

Typical Hadoop Architecture
[Diagram: data collection feeds Hadoop (storage and batch processing), operated by engineers; results flow to an OLAP data mart serving business intelligence for business users, and to an OLTP data store serving an interactive application for end customers.]
Complex Data is Growing Really Fast
- Gartner, 2009: enterprise data will grow 650% in the next 5 years; 80% of this data will be unstructured (complex) data.
- IDC, 2008: 85% of all corporate information is in unstructured (complex) forms; growth of unstructured data (61.7% CAGR) will far outpace that of transactional data.

Data Consolidation: One Place For All
- Complex data: documents, web feeds, system logs, online forums, SharePoint, sensor data, EMB archives, images/video.
- Structured ("relational") data: CRM, financials, logistics, data marts, inventory, sales records, HR records, web profiles.
- The goal: a single data system to enable processing across the universe of data types.
Data Agility: Schema-on-Read vs. Schema-on-Write

Schema-on-Write:
- The schema must be created before data is loaded.
- An explicit load operation has to take place, transforming the data to the internal structure of the database.
- New columns must be added explicitly before data for such columns can be loaded.
- Benefit: reads are fast, and standards/governance are enforced.

Schema-on-Read:
- Data is simply copied to the file store; no special transformation is needed.
- A SerDe (Serializer/Deserializer) is applied at read time to extract the required columns.
- New data can start flowing at any time and will appear retroactively once the SerDe is updated to parse it.
- Benefit: loads are fast.
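The schema-on-read idea can be sketched in Python (illustrative only; Hive's actual SerDe interface is Java). Raw lines are stored untouched, and a parser is applied only when a query reads them, so deploying a richer parser exposes new columns retroactively:

```python
import re

# Raw records are loaded as-is; no schema is imposed at write time.
raw_log = [
    "2009-10-02 GET /index.html 200",
    "2009-10-02 POST /login 302",
]

# Version 1 of the "SerDe": extracts only date and path.
serde_v1 = re.compile(r"(?P<date>\S+) \S+ (?P<path>\S+)")

# Version 2, deployed later, also extracts method and status --
# and it applies retroactively to data loaded before it existed.
serde_v2 = re.compile(r"(?P<date>\S+) (?P<method>\S+) (?P<path>\S+) (?P<status>\d+)")

def read_table(lines, serde):
    """Apply the deserializer at read time, yielding one dict per record."""
    return [m.groupdict() for line in lines if (m := serde.match(line))]

print(read_table(raw_log, serde_v1)[0])  # {'date': '2009-10-02', 'path': '/index.html'}
print(read_table(raw_log, serde_v2)[0]["status"])  # 200
```

Contrast with schema-on-write, where the second parser would require an ALTER TABLE plus a reload before old rows could expose the new columns.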

Query Language Flexibility
- Java MapReduce: gives the most flexibility and performance, but a potentially long development cycle (the "assembly language" of Hadoop).
- Streaming MapReduce: allows you to develop in any programming language of your choice, but with slightly lower performance and less flexibility.
- Pig: a relatively new language out of Yahoo!, suitable for batch dataflow workloads.
- Hive: a SQL interpreter on top of MapReduce, which also includes a metastore mapping files to their schemas and associated SerDes. Hive supports user-defined functions and pluggable MapReduce streaming functions in any language.

Hive Extensible Data Types
- STRUCTS
- JSON: SELECT get_json_object(mycolumn, objpath)

Data Economics (Return on Byte)
- Return on byte (ROB) = value to be extracted from that byte / cost of storing that byte.
- If ROB < 1, the data gets buried in the tape wasteland; thus we need cheaper active storage.
[Diagram: high-ROB vs. low-ROB data.]
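A rough Python stand-in for get_json_object shows the read-time extraction idea (a hypothetical sketch, not Hive's implementation; Hive's objpath syntax is `$.field.subfield`, and only dotted field access is modeled here):

```python
import json

def get_json_object(json_str, objpath):
    """Rough analogue of Hive's get_json_object: walk a $.a.b style path.

    Hive also supports array indexing in objpath, omitted here for brevity.
    """
    obj = json.loads(json_str)
    for key in objpath.lstrip("$.").split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None  # Hive returns NULL for a missing path
        obj = obj[key]
    return obj

row = '{"user": {"id": 42, "name": "amr"}, "action": "login"}'
print(get_json_object(row, "$.user.name"))  # amr
print(get_json_object(row, "$.missing"))    # None
```

Because extraction happens per query, the stored JSON never needs to be reshaped when a new field becomes interesting.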
Case Studies: Hadoop World '09
- VISA: Large-Scale Transaction Analysis
- JP Morgan Chase: Data Processing for Financial Services
- China Mobile: Data Mining Platform for the Telecom Industry
- Rackspace: Cross Data Center Log Processing
- Booz Allen Hamilton: Protein Alignment using Hadoop
- eHarmony: Matchmaking in the Hadoop Cloud
- General Sentiment: Understanding Natural Language
- Yahoo!: Social Graph Analysis
- Visible Technologies: Real-Time Business Intelligence
- Facebook: Rethinking the Data Warehouse with Hadoop and Hive
Slides and videos at http://www.cloudera.com/hadoop-world-nyc
Conclusion
Hadoop is a scalable distributed data processing system which enables:
- Consolidation (structured or not)
- Data agility (evolving schemas)
- Query flexibility (any language)
- Economical storage (ROB > 1)
Contact Information
Amr Awadallah
CTO, Cloudera Inc.
aaa@cloudera.com
http://twitter.com/awadallah
Online training videos and info:
http://cloudera.com/hadoop-training
http://cloudera.com/blog
http://twitter.com/cloudera
MapReduce: The Programming Model
- In SQL terms: SELECT word, COUNT(1) FROM docs GROUP BY word;
- In Unix terms: cat *.txt | mapper.pl | sort | reducer.pl > out.txt
[Diagram: input splits of (docid, text) records, e.g. "To Be Or Not To Be?", feed Map tasks, which emit (word, count) pairs such as (Be, 5), (Be, 12), (Be, 7), (Be, 6); the shuffle sorts the pairs by key and routes them to Reduce tasks, each writing an output file of (sorted word, sum of counts), e.g. (Be, 30).]
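The same word count can be simulated in-memory in a few lines of Python (a sketch of the model only; real Hadoop partitions the splits across machines and runs many mappers and reducers in parallel):

```python
from itertools import groupby
from operator import itemgetter

def map_phase(docs):
    """Map: (docid, text) -> stream of (word, 1) pairs."""
    for _, text in docs:
        for word in text.lower().replace("?", "").split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: sort by key so each reducer sees one word's pairs together."""
    return groupby(sorted(pairs), key=itemgetter(0))

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(c for _, c in pairs) for word, pairs in grouped}

docs = [(1, "To Be Or Not To Be?")]
print(reduce_phase(shuffle(map_phase(docs))))
# {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```

Note how `sorted` plays the role of the framework's shuffle, exactly as `sort` does in the Unix pipeline above.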
Hadoop High-Level Architecture
- Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs.
- Name Node: maintains the mapping of file blocks to Data Node slaves.
- Job Tracker: schedules jobs across Task Tracker slaves.
- Data Node: stores and serves blocks of data.
- Task Tracker: runs tasks (work units) within a job.
- The Data Node and Task Tracker share the same physical node.
Economics of Hadoop Storage
Typical hardware per node:
- Two quad-core Nehalems
- 24GB RAM
- 12 x 1TB SATA disks (JBOD mode, no need for RAID)
- 1 Gigabit Ethernet card
- Cost: $5K/node
Effective HDFS space:
- 1/4 reserved for temp shuffle space, which leaves 9TB/node
- 3-way replication leads to 3TB effective HDFS space/node
- Assuming 7x compression, that becomes ~20TB/node
Effective cost per user TB: $250/TB. Other solutions cost in the range of $5K to $100K per user TB.
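The slide's cost arithmetic checks out; a quick sketch using exactly the figures quoted above:

```python
# Figures quoted on the slide (2009-era commodity hardware).
raw_tb_per_node = 12 * 1    # 12 x 1 TB SATA disks
cost_per_node = 5000        # dollars

# 1/4 of raw space is reserved for temporary shuffle data.
after_shuffle_tb = raw_tb_per_node * (1 - 0.25)   # 9 TB
# 3-way replication divides usable logical space by 3.
after_replication_tb = after_shuffle_tb / 3       # 3 TB
# Assumed 7x compression multiplies effective user space.
effective_tb = after_replication_tb * 7           # 21 TB, rounded to ~20 on the slide

cost_per_user_tb = cost_per_node / 20             # using the slide's ~20 TB figure
print(effective_tb, cost_per_user_tb)  # 21.0 250.0
```

The dominant lever is the 7x compression assumption; without it the same node costs about $1,667 per user TB, still well under the alternatives cited.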
Data Engineering vs. Business Intelligence
- Business intelligence: the practice of extracting business numbers to monitor and evaluate the health of the business. Humans make decisions based on these numbers to improve revenues or reduce costs.
- Data engineering: the science of writing algorithms that convert data into money. Alternatively, how to automatically transform data into new features that increase revenues or reduce costs.

Editor's Notes

  • #4: The system is self-healing in the sense that it automatically routes around failure: if a node fails, its workload and data are transparently shifted somewhere else. The system is intelligent in the sense that the MapReduce scheduler optimizes for the processing to happen on the same node storing the associated data (or co-located on the same leaf Ethernet switch); it also speculatively executes redundant tasks if certain nodes are detected to be slow. One of the key benefits of Hadoop is the ability to just upload any unstructured files to it without having to "schematize" them first. You can dump any type of data into Hadoop, and the input record readers will abstract it out as if it were structured (i.e., schema on read vs. on write). Open source software allows for innovation by partners and customers; it also enables third-party inspection of source code, which provides assurances on security and product quality. 1 HDD = 75 MB/sec, 1,000 HDDs = 75 GB/sec: the "head of fileserver" bottleneck is eliminated.
  • #5: http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petabyte_in_162.html. 100s of deployments worldwide (http://wiki.apache.org/hadoop/PoweredBy).
  • #6: Speculative Execution, Data rebalancing, Background Checksumming, etc.
  • #7: Pool commodity servers in a single hierarchical namespace. Designed for large files that are written once and read many times. The example shows what happens with a replication factor of 3: each data block is present in at least 3 separate data nodes. A typical Hadoop node is eight cores with 16GB RAM and four 1TB SATA disks. The default block size is 64MB, though most folks now set it to 128MB.
  • #8: Differentiate between MapReduce the platform and MapReduce the programming model. The analogy is to the RDBMS, which executes the queries, versus SQL, which is the language for the queries. MapReduce can run on top of HDFS or a selection of other storage systems. Intelligent scheduling algorithms handle locality, sharing, and resource optimization.
  • #9: HBase: Low Latency Random-Access with per-row consistency for updates/inserts/deletes
  • #10: The sports car is refined, accelerates very fast, and has a lot of add-ons/features, but it is pricey on a per-byte basis and expensive to maintain. The cargo train is rough, missing a lot of "luxury", and slow to accelerate, but it can carry almost anything, and once it gets going it can move a lot of stuff very economically.
    Hadoop: a data grid operating system. Stores files (unstructured); stores 10s of petabytes; processes 10s of PB/job; weak consistency; scans all blocks in all files; queries and data processing; batch response (>1 sec).
    Relational databases: an ACID database system. Stores tables (schema); stores 100s of terabytes; processes 10s of TB/query; transactional consistency; looks up rows using an index; mostly queries; interactive response.
    Hadoop myths:
    - "Hadoop MapReduce requires rocket scientists": Hadoop has the benefit of both worlds, the simplicity of SQL and the power of Java (or any other language for that matter).
    - "Hadoop is not very efficient hardware-wise": Hadoop optimizes for scalability, stability, and flexibility rather than squeezing every tiny bit of hardware performance. It is more cost efficient to throw more "pizza box" servers at the problem than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software.
    - "Hadoop can't do quick random lookups": HBase enables low-latency key-value pair lookups (no fast joins).
    - "Hadoop doesn't support updates/inserts/deletes": not for multi-row transactions, but HBase enables transactions with row-level consistency semantics.
    - "Hadoop isn't highly available": though Hadoop rarely loses data, it can suffer from downtime if the master NameNode goes down. This issue is currently being addressed, and there are HW/OS/VM solutions for it.
    - "Hadoop can't be backed up/recovered quickly": HDFS, like other file systems, can copy files very quickly; it also has utilities to copy data between HDFS clusters.
    - "Hadoop doesn't have security": Hadoop has Unix-style user/group permissions, and the community is working on improving its security model.
    - "Hadoop can't talk to other systems": Hadoop can talk to BI tools using JDBC, to RDBMSes using Sqoop, and to other systems using FUSE, WebDAV & FTP.
  • #11: The solution is to *augment* the current RDBMSes with a “smart” storage/processing system. The original event level data is kept in this smart storage layer and can be mined as needed. The aggregate data is kept in the RDBMSes for interactive reporting and analytics.
  • #15: Hive features: a subset of SQL covering the most common statements; agile data types (Array, Map, Struct, and JSON objects); user-defined functions and aggregates; regular expression support; MapReduce streaming support; JDBC/ODBC support; partitions and buckets (for performance optimization). In the works: indices, columnar storage, views, Microstrategy compatibility, Explode/Collect. More details: http://wiki.apache.org/hadoop/Hive
    Query: SELECT, FROM, WHERE, JOIN, GROUP BY, SORT BY, LIMIT, DISTINCT, UNION ALL. Join: LEFT, RIGHT, FULL, OUTER, INNER. DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, DROP PARTITION, SHOW TABLES, SHOW PARTITIONS. DML: LOAD DATA INTO, FROM INSERT. Types: TINYINT, INT, BIGINT, BOOLEAN, DOUBLE, STRING, ARRAY, MAP, STRUCT, JSON OBJECT. Query features: subqueries in FROM, user-defined functions, user-defined aggregates, sampling (TABLESAMPLE). Relational operators: IS NULL, IS NOT NULL, LIKE, REGEXP. Built-in aggregates: COUNT, MAX, MIN, AVG, SUM. Built-in functions: CAST, IF, REGEXP_REPLACE, ... Other: EXPLAIN, MAP, REDUCE, DISTRIBUTE BY. List and map operators: array[i], map[k], struct.field.
  • #23: Think: SELECT word, count(*) FROM documents GROUP BY word. Check out ParBASH: http://cloud-dev.blogspot.com/2009/06/introduction-to-parbash.html
  • #24: The Data Node slave and the Task Tracker slave can, and should, share the same server instance to leverage data locality whenever possible. The NameNode and JobTracker are currently SPOFs, which can affect the availability of the system by around 15 minutes (no data loss, though, so the system is reliable but can occasionally suffer downtime). That issue is currently being addressed by the Apache Hadoop community using ZooKeeper.