THE FIRST CLASS INTEGRATION OF SOLR WITH HADOOP
Mark Miller (Cloudera)
WHO AM I?
• Cloudera employee, Lucene/Solr committer, Lucene PMC member, Apache member
• First job out of college was in the newspaper archiving business.
• First full-time employee at LucidWorks - a startup around Lucene/Solr.
• Spent a couple years as “Core” engineering manager, reporting to the VP of engineering.
• Very fast and feature-rich ‘core’ search engine library.
• Compact and powerful, Lucene is an extremely popular full-text search library.
• Provides low-level APIs for analyzing, indexing, and searching text, along with a myriad of related features.
• Just the core - either you write the ‘glue’ or use a higher-level search engine built with Lucene.
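The analyze/index/search loop that a core search library provides can be illustrated with a toy inverted index - a Python sketch of the concept only, not the Lucene API:

```python
from collections import defaultdict

def analyze(text):
    """Minimal 'analysis chain': lowercase and split on whitespace."""
    return text.lower().split()

class ToyIndex:
    """A toy inverted index: term -> set of doc ids."""
    def __init__(self):
        self.postings = defaultdict(set)

    def add(self, doc_id, text):
        for term in analyze(text):
            self.postings[term].add(doc_id)

    def search(self, query):
        """AND query: return docs containing every query term."""
        terms = analyze(query)
        if not terms:
            return set()
        result = self.postings[terms[0]].copy()
        for term in terms[1:]:
            result &= self.postings[term]
        return result

idx = ToyIndex()
idx.add(1, "Solr is built on Lucene")
idx.add(2, "Hadoop stores data in HDFS")
idx.add(3, "Lucene indexes text for search")
print(idx.search("lucene"))         # -> {1, 3}
print(idx.search("lucene search"))  # -> {3}
```

Lucene's real analysis chains, codecs, and scoring are far richer; this only shows the shape of the indexing and retrieval problem the library solves.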
• Solr (pronounced "solar") is an open source enterprise search platform from the Apache Lucene project. Its major features include full-text search, hit highlighting, faceted search, dynamic clustering, database integration, and rich document (e.g., Word, PDF) handling. Providing distributed search and index replication, Solr is highly scalable. Solr is the most popular enterprise search engine.

- Wikipedia
SEARCH ON HADOOP HISTORY
• Katta
• Blur
• SolBase
• HBASE-3529
• SOLR-1301
• SOLR-1045
• Ad-Hoc
• ...
THE PLAN: STRENGTHEN THE FAMILY BONDS
• No need to build something radically new - we have the pieces we need.
• Focus on integration points.
• Create high quality, first class integrations and contribute the work to the projects involved.
• Focus on integration and quality first - then performance and scale.
SOLRCLOUD
SOLR INTEGRATION
• Read and write directly to HDFS
• First class custom Directory support in Solr
• Support Solr replication on HDFS
• Other improvements around usability and configuration
READ AND WRITE DIRECTLY TO HDFS
• Lucene did not historically support append-only file systems
• “Flexible Indexing” brought along support for append-only filesystems
• Lucene has supported append-only filesystems by default since 4.2
LUCENE DIRECTORY ABSTRACTION
• It’s how Lucene interacts with index files.
• Solr uses the Lucene library and offers a DirectoryFactory

abstract class Directory {
  listAll();
  createOutput(file, context);
  openInput(file, context);
  deleteFile(file);
  makeLock(file);
  clearLock(file);
  …
}
PUTTING THE INDEX IN HDFS
• Solr relies on the filesystem cache to operate at full speed.
• HDFS is not known for its random access speed.
• Apache Blur has already solved this with an HdfsDirectory that works on top of a BlockDirectory.
• The “block cache” caches the hot blocks of the index off heap (direct byte array) and takes the place of the filesystem cache.
• We contributed back optional ‘write’ caching.
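The idea behind the block cache can be sketched as a fixed-capacity LRU cache of fixed-size blocks sitting in front of a slow store. All names here are hypothetical; the real block cache holds off-heap direct byte arrays and KB-sized blocks:

```python
from collections import OrderedDict

BLOCK_SIZE = 4  # tiny for illustration; real block caches use KB-sized blocks

class BlockCachedReader:
    """Reads from `data` (standing in for an HDFS file) through an LRU block cache."""
    def __init__(self, data, capacity_blocks):
        self.data = data
        self.cache = OrderedDict()  # block number -> bytes
        self.capacity = capacity_blocks
        self.misses = 0

    def _block(self, n):
        if n in self.cache:
            self.cache.move_to_end(n)           # mark as recently used
            return self.cache[n]
        self.misses += 1                         # stands in for a slow HDFS read
        block = self.data[n * BLOCK_SIZE:(n + 1) * BLOCK_SIZE]
        self.cache[n] = block
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)       # evict least recently used
        return block

    def read(self, offset, length):
        out = b""
        while length > 0:
            n, off = divmod(offset, BLOCK_SIZE)
            chunk = self._block(n)[off:off + length]
            out += chunk
            offset += len(chunk)
            length -= len(chunk)
        return out

r = BlockCachedReader(b"hello hdfs block cache", capacity_blocks=2)
assert r.read(0, 5) == b"hello"
r.read(0, 5)                  # hot blocks now served from cache
print(r.misses)               # -> 2 (blocks 0 and 1, each fetched once)
```

The point is that repeated reads of hot index regions never go back to HDFS, which is what lets Solr run at full speed without the local filesystem cache.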
PUTTING THE TRANSACTIONLOG IN HDFS
• HdfsUpdateLog added - extends UpdateLog
• Triggered by setting the UpdateLog dataDir to something that starts with hdfs:/ - no additional configuration necessary.
• Same extensive testing as used on UpdateLog
RUNNING SOLR ON HDFS
• Set DirectoryFactory to HdfsDirectoryFactory and set the dataDir to a location in HDFS.
• Set LockType to ‘hdfs’
• Use an UpdateLog dataDir location that begins with ‘hdfs:/’
• Or:

java -Dsolr.directoryFactory=HdfsDirectoryFactory \
     -Dsolr.lockType=solr.HdfsLockFactory \
     -Dsolr.updatelog=hdfs://host:port/path -jar start.jar
SOLR REPLICATION ON HDFS
• While Solr has exposed a pluggable DirectoryFactory for a long time now, it was really quite limited.
• Most glaring, only a local file system based Directory would work with replication.
• There were also other more minor areas that relied on a local filesystem Directory implementation.
FUTURE SOLR REPLICATION ON HDFS
• Take advantage of the “distributed filesystem” and allow for something similar to HBase regions.
• If a node goes down, the data is still available in HDFS - allow for that index to be automatically served by a node that is still up if it has the capacity.

[Diagram: three Solr Nodes sharing a common HDFS layer]

• Leader reads and writes index files to HDFS
• Replicas only read from HDFS, write to /dev/null

[Diagram: a Leader and two Replicas over a shared index in HDFS]
MAP REDUCE INDEX BUILDING
• Scalable index creation via map-reduce
• Many initial ‘homegrown’ implementations sent documents from the reducers to SolrCloud over HTTP
• To really scale, you want the reducers to create the indexes in HDFS and then load them up with Solr
• The ideal implementation will allow using as many reducers as are available in your Hadoop cluster, and then merge the indexes down to the correct number of ‘shards’
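The "many reducers, then merge down to the shard count" idea can be sketched with plain dicts standing in for segment indexes. The helper names are hypothetical, not the actual MapReduce job:

```python
from collections import defaultdict

def build_mini_index(docs):
    """Reducer step: index a slice of documents (term -> doc ids)."""
    index = defaultdict(set)
    for doc_id, text in docs:
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def merge_indexes(indexes):
    """Merge step: union many mini-indexes into one shard-sized index."""
    merged = defaultdict(set)
    for index in indexes:
        for term, ids in index.items():
            merged[term] |= ids
    return merged

# Four reducers' worth of mini-indexes, merged down to two 'shards'
docs = list(enumerate(
    ["solr on hdfs", "flume solr sink", "hbase indexer", "morphlines etl"]))
minis = [build_mini_index([d]) for d in docs]   # one mini-index per reducer
shards = [merge_indexes(minis[0:2]), merge_indexes(minis[2:4])]
print(sorted(shards[0]["solr"]))   # -> [0, 1]
```

Because merging is associative, the number of reducers can be whatever the cluster offers; arbitrary merge rounds then collapse the results to exactly the shard count the Solr cluster expects.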
MR INDEX BUILDING

[Diagram: several "Mapper: Parse input" tasks feed arbitrary reducing steps of indexing and merging, ending in End-Reducers that each produce an Index]
SOLRCLOUD AWARE
• Can ‘inspect’ ZooKeeper to learn about the Solr cluster.
• What URLs to GoLive to.
• The Schema to use when building indexes.
• Match hash -> shard assignments of a Solr cluster.
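Matching the hash -> shard assignments means the indexing job routes each document exactly the way SolrCloud would: hash the document id and map the hash into a shard's hash range. A simplified sketch - SolrCloud actually uses MurmurHash3 over the id, so crc32 here is only a deterministic stand-in and the exact assignments are illustrative:

```python
import zlib

def shard_for(doc_id, num_shards):
    """Map a doc id to a shard by hashing into equal 32-bit ranges.
    (SolrCloud uses MurmurHash3; crc32 is a stand-in for illustration.)"""
    h = zlib.crc32(doc_id.encode()) & 0xFFFFFFFF   # 32-bit hash
    range_size = (1 << 32) // num_shards
    return min(h // range_size, num_shards - 1)

# Every writer that uses the same hash function agrees on placement,
# so indexes built offline line up with the live cluster's shards:
for doc_id in ["doc-1", "doc-2", "doc-3"]:
    print(doc_id, "-> shard", shard_for(doc_id, 2))
```

As long as the map-reduce job and the live cluster agree on the function and the ranges, an index built offline can be dropped straight onto the matching shard.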
GOLIVE
• After building your indexes with map-reduce, how do you deploy them to your Solr cluster?
• We want it to be easy - so we built the GoLive option.
• GoLive allows you to easily merge the indexes you have created atomically into a live running Solr cluster.
• Paired with the ZooKeeper Aware ability, this allows you to simply point your map-reduce job to your Solr cluster and it will automatically discover how many shards to build and what locations to deliver the final indexes to in HDFS.
FLUME SOLR SYNC
• Flume is a distributed, reliable, and available service for
efficiently collecting, aggregating, and moving large amounts of
log data. It has a simple and flexible architecture based on
streaming data flows. It is robust and fault tolerant with tunable
reliability mechanisms and many failover and recovery
mechanisms. It uses a simple extensible data model that allows
for online analytic application.
FLUME SOLR SYNC

[Diagram: logs and other sources flow through Flume into both Solr and HDFS]
SOLRCLOUD AWARE
• Can ‘inspect’ ZooKeeper to learn about the Solr cluster.
• What URLs to send data to.
• The Schema for the collection being indexed to.
HBASE INTEGRATION
• Collaboration between NGData & Cloudera
• NGData are the creators of the Lily data management platform
• Lily HBase Indexer
• Service which acts as an HBase replication listener
• HBase replication features, such as filtering, supported
• Replication updates trigger indexing of updates (rows)
• Integrates the Morphlines library for ETL of rows
• AL2 licensed on github https://github.com/ngdata

[Diagram: interactive load writes to HBase (backed by HDFS); replication triggers the Indexer(s) on updates, which index into a bank of Solr servers]
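The flow above - row mutations replicated out of HBase and turned into index updates - can be sketched as a listener that receives row events and feeds an indexer. All names here are hypothetical; the real Lily indexer plugs into HBase's replication machinery:

```python
class ToyIndexer:
    """Stands in for Solr: keeps the latest indexed view of each row."""
    def __init__(self):
        self.docs = {}

    def index(self, row_key, fields):
        self.docs[row_key] = fields

class ReplicationListener:
    """Receives replicated row mutations and triggers (re)indexing."""
    def __init__(self, indexer, column_filter=None):
        self.indexer = indexer
        self.column_filter = column_filter  # like HBase replication filtering

    def on_mutation(self, row_key, columns):
        if self.column_filter:
            columns = {k: v for k, v in columns.items()
                       if k in self.column_filter}
        if columns:
            self.indexer.index(row_key, columns)

indexer = ToyIndexer()
listener = ReplicationListener(indexer, column_filter={"title", "body"})
listener.on_mutation("row1", {"title": "hello", "internal:ts": "123"})
listener.on_mutation("row2", {"internal:ts": "456"})   # filtered out entirely
print(indexer.docs)   # -> {'row1': {'title': 'hello'}}
```

Riding on replication means the indexer sees every committed write without touching the HBase write path, and filtering decides which column families are worth indexing.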
MORPHLINES
• A morphline is a configuration file that allows you to define ETL transformation pipelines
• Extract content from input files, transform content, load content (e.g. to Solr)
• Uses Tika to extract content from a large variety of input documents
• Part of the CDK (Cloudera Development Kit)
[Diagram: syslog feeds a Flume Agent whose Solr Sink runs the commands readLine, grok, and loadSolr before delivering to Solr]

• Open Source framework for simple ETL
• Ships as part of the Cloudera Developer Kit (CDK)
• It’s a Java library
• AL2 licensed on github https://github.com/cloudera/cdk
• Similar to Unix pipelines
• Configuration over coding
• Supports common Hadoop formats
  • Avro
  • Sequence file
  • Text
  • Etc…
• Integrate with and load into Apache Solr
• Flexible log file analysis
• Single-line records, multi-line records, CSV files
• Regex-based pattern matching and extraction
• Integration with Avro
• Integration with Apache Hadoop Sequence Files
• Integration with SolrCell and all Apache Tika parsers
• Auto-detection of MIME types from binary data using Apache Tika
• Scripting support for dynamic Java code
• Operations on fields for assignment and comparison
• Operations on fields with list and set semantics
• if-then-else conditionals
• A small rules engine (tryRules)
• String and timestamp conversions
• slf4j logging
• Yammer metrics and counters
• Decompression and unpacking of arbitrarily nested container file formats
• Etc…
MORPHLINES EXAMPLE CONFIG

Example Input
<164>Feb  4 10:46:14 syslog sshd[607]: listening on 0.0.0.0 po

Output Record
syslog_pri:164
syslog_timestamp:Feb  4 10:46:14
syslog_hostname:syslog
syslog_program:sshd
syslog_pid:607
syslog_message:listening on 0.0.0.0 port 22.

morphlines : [
  {
    id : morphline1
    importCommands : ["com.cloudera.**", "org.apache.solr.**"]
    commands : [
      { readLine {} }
      {
        grok {
          dictionaryFiles : [/tmp/grok-dictionaries]
          expressions : {
            message : """<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"""
          }
        }
      }
      { loadSolr {} }
    ]
  }
]
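A grok expression like the one above compiles down to a regular expression. As a sketch, the same syslog record can be pulled apart with Python's re, with named groups standing in for the grok patterns (the group patterns here are simplified approximations of POSINT, SYSLOGTIMESTAMP, SYSLOGHOST, DATA, and GREEDYDATA):

```python
import re

# Named groups approximate the grok patterns used in the morphline config.
SYSLOG = re.compile(
    r"<(?P<syslog_pri>\d+)>"
    r"(?P<syslog_timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[\w/.-]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)")

line = "<164>Feb  4 10:46:14 syslog sshd[607]: listening on 0.0.0.0 port 22."
record = SYSLOG.match(line).groupdict()
print(record["syslog_pri"])      # -> '164'
print(record["syslog_program"])  # -> 'sshd'
print(record["syslog_pid"])      # -> '607'
```

The morphline version buys you the same extraction without writing the regex by hand: the named grok patterns come from the dictionary files, and the resulting fields flow straight into loadSolr.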
HUE INTEGRATION
• Hue
• Simple UI
• Navigated, faceted drill down
• Customizable display
• Full text search, standard Solr API and query language
CLOUDERA SEARCH
• https://ccp.cloudera.com/display/SUPPORT/Downloads
• Or Google “cloudera search download”
Mark Miller, Cloudera

@heismark
