Portable Lucene index format
          Andrzej Białecki

           Oct 19, 2011
whoami
§  Started using Lucene in 2003 (1.2-dev...)
§  Created Luke – the Lucene Index Toolbox
§  Apache Nutch, Hadoop, Lucene/Solr committer,
    Lucene and Nutch PMC member
§  Developer at Lucid Imagination




                          3
Agenda
§    Data compatibility challenge
§    Portable format design & applications
§    Implementation details
§    Status & future




                            4
Lucene index format
§  Binary
   •  Not human-readable*!              NRMÿt|yzqq{x …
      * except for committer-humans
§  Highly optimized
   •  Efficient random I/O access
§  Compressed
   •  Data-specific compression
§  Strict, well-defined
   •  Details specific to a particular version of Lucene
   •  Both minor and major differences between versions


                                 5
Backward-compatibility
§  Compatibility of API
   •  Planned obsolescence across major releases
   •  Relatively easy (@deprecated APIs)
§  Compatibility of data
   •  Complex and costly
   •  Still required for upgrades to an N+1 release
   •  Caveats when re-using old data
      §  E.g. incompatible Analyzer-s, sub-optimal field types
§  Forward-compatibility in Lucene?
   •  Forget it – and for good reasons


                                  6
Backup and archival
§  Do we need to keep old data?
   •  Typically the index is not a System of Record
§  But index backups still occasionally needed
   •    If costly to build (time / manual effort / CPU / IO …)
   •    As a fallback copy during upgrades
   •    To validate claims or to prove compliance
   •    To test performance / quality across updates
§  Archival data should use a long-term viable format
   •  Timespan longer than shelf-life of a software version
   •  Keep all tools, platforms and environments?


                                              Ouch…



                                        7
MN
§  Lucene currently supports some
                                            2.9   3.x    4.x
    backward-compatibility
                                            +/+   +/-    -/-    2.9
   •  Only latest N <= 2 releases
                                            -/+   +/+?   +/-    3.x
   •  High maintenance and expertise
                                            -/-   -/+    +/+?   4.x
      costs
   •  Very fragile
§  Ever higher cost of broader back-
    compat
   •  M source formats * N target formats
   •  Forward-compat? Forget it


                            8
Conclusion
      If you need to read data
      from versions other than N
      or N-1 you will suffer greatly




      There should be a better
      solution for a broader data
      compatibility …

      9
Portable format idea
§  Can we come up with one simple,
    generic, flexible format that fits data
    from any version?
§  Who cares if it’s sub-optimal as long
    as it’s readable!



                      Use a format so simple
                      that it needs no
                      version-specific tools

                 10
Portable format: goals
§  Reduced costs of long-term back-compat
   •  M * portable
   •  Implement only one format right

               2.9    3.x    4.x
          P    +/+    +/+    +/+
§  Long-term readability
   •  Readable with simple tools (text editor, less)
   •  Easy to post-process with simple tools
§  Extensible
   •  Easy to add data and metadata on many levels
   •  Easy to add other index parts, or non-index data
§  Reducible
   •  Easy to ignore / skip unknown data (forward-compat)
                             11
Portable format: design
§  Focus on function and not form / implementation
   •  Follow the logical model of a Lucene index, which
      is more or less constant across releases
§  Relaxed and forgiving
   •  Arbitrary number and type of entries
   •  Not thwarted by unknown data (ignore / skip)
   •  Figure out best defaults if data is missing
§  Focus on clarity and not minimal size
   •  De-normalize data if it increases portability
§  Simple metadata (per item, per section)

                            12
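The "relaxed and forgiving" rules above can be sketched in plain Java: a reader that keeps the keys it knows, silently skips anything unrecognized (forward-compat), and falls back to a default when data is missing. The key names (`field`, `term`, `freq`) are only illustrative here, not a fixed schema:

```java
import java.util.*;

public class TolerantReader {
    // Keys this version of the reader understands; anything else is skipped.
    private static final Set<String> KNOWN =
            new HashSet<>(Arrays.asList("field", "term", "freq"));

    // Parse line-oriented "key value" records, ignoring unknown keys
    // (forward-compat) and supplying a default when a key is missing.
    public static Map<String, String> read(List<String> lines) {
        Map<String, String> rec = new LinkedHashMap<>();
        for (String line : lines) {
            String[] kv = line.trim().split("\\s+", 2);
            if (kv.length < 2 || !KNOWN.contains(kv[0])) {
                continue; // unknown or malformed entry: skip, don't fail
            }
            rec.put(kv[0], kv[1]);
        }
        rec.putIfAbsent("freq", "1"); // best-effort default for missing data
        return rec;
    }

    public static void main(String[] args) {
        Map<String, String> rec = read(Arrays.asList(
                "field f1",
                "term a",
                "fancyNewAttr 42",   // unknown to this reader: ignored
                "freq 2"));
        System.out.println(rec);     // {field=f1, term=a, freq=2}
    }
}
```

A newer writer can add `fancyNewAttr` lines without breaking this older reader, which is exactly the "reducible" property the format is after.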
Index model: sections
§  Global index data
   •  User data
   •  Source format details (informational)
   •  List of commit points (and the current commit)
   •  List of index sections (segments, other?)
§  Segment sections
   •  Segment info in metadata
   •  List of fields and their types (FieldInfos)
   •  Per-segment data

   [Diagram: index = Global data + Segment 1 + Segment 2 + Segment 3]



                                 13
Per-segment data
§  Logical model instead of implementation details
§  Per-segment sections
   •  Existing known sections
   •  Other future sections …
§  This is the standard model!
   •  Just be more relaxed
   •  And use plain-text formats
§  SimpleTextCodec?
   •  Not quite (yet?)

   [Diagram: each segment holds Segment & Field meta, stored, terms,
    term freqs, postings, payloads, term vectors, doc values, deletes]
                             14
PortableCodec
§  Codec API allows you to customize on-disk formats
§  API is not complete yet …
  •  Term dict + postings, docvalues, segment infos
§  … but it’s coming: LUCENE-2621
  •  Custom stored fields storage (e.g. NoSQL)
  •  More efficient term freq. vectors storage
  •  Eventually will make PortableCodec possible
     §  Write and read data in portable format
     §  Reuse / extend SimpleTextCodec
§  In the meantime … use codec-independent
    export / import utilities
                                 15
Pre-analyzed input
§  SOLR-1535 PreAnalyzedField
   •  Allows submitting pre-analyzed documents
   •  Too limited format
§  “Inverted document” in portable format
   •  Pre-analyzed, pre-inverted (e.g. with term vectors)
   •  Easy to construct using non-Lucene tools
§  Applications
   •  Integration with external pipelines
       §  Text analysis, segmentation, stemming, POS-tagging, other NLP –
           produces pre-analyzed input
   •  Other ideas
       §    Transaction log
       §    Fine-grained shard rebalancing (nano-sharding)
       §    Rolling updates of a cluster
       §    …
                                          16
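As an illustration of how an external, non-Lucene tool might construct such an "inverted document", here is a hypothetical sketch that whitespace-tokenizes one field and emits term/freq/pos records in a line-oriented style. The exact record layout is made up for this example; it is not part of SOLR-1535 or any fixed portable-format spec:

```java
import java.util.*;

public class InvertedDoc {
    // Build a line-oriented, pre-inverted record for one field of one document.
    // The layout (field / term / freq / pos lines) is illustrative only.
    public static String invert(String field, String text) {
        // term -> positions, in first-seen order
        Map<String, List<Integer>> postings = new LinkedHashMap<>();
        String[] tokens = text.toLowerCase().split("\\s+");
        for (int pos = 0; pos < tokens.length; pos++) {
            postings.computeIfAbsent(tokens[pos], t -> new ArrayList<>()).add(pos);
        }
        StringBuilder out = new StringBuilder("field " + field + "\n");
        for (Map.Entry<String, List<Integer>> e : postings.entrySet()) {
            out.append("\tterm ").append(e.getKey()).append("\n");
            out.append("\t\tfreq ").append(e.getValue().size()).append("\n");
            for (int p : e.getValue()) {
                out.append("\t\tpos ").append(p).append("\n");
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(invert("f1", "this is a test"));
    }
}
```

A real pipeline would do proper analysis (stemming, POS-tagging, etc.) before inverting, but the output shape stays this simple.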
Serialization format: goals
§  Generic enough:
   •  Flexible enough to express all known index parts
§  Specific enough
   •  Predefined sections common to all versions
   •  Well-defined record boundaries (easy to ignore / skip)
   •  Well-defined behavior on missing sections / attributes
§  Support for arbitrary sections
§  Easily parsed, preferably with no external libs
§  Only reasonably efficient


                             17
Serialization format: design
§  Container - could be a ZIP
   •  Simple, well-known, single file, streamable, fast
      seek, compressible
   •  But limited to 4GB (ZIP-64 only in Java 7)
   •  Needs un-zipping to process with text utilities
§  Or could be just a Lucene Directory
   •  That is, a bunch of files with predefined names
      §  You can ZIP them yourself if needed, or TAR, or whatever
§  Simple plain-text formats
   •  Line-oriented records, space-separated fields
      §  Or Avro?
   •  Per-section metadata in java.util.Properties format
                                18
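Because per-section metadata is plain java.util.Properties, it can be read on any JVM with no Lucene on the classpath. A minimal sketch, using keys from the index.meta example shown later in the deck:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class MetaReader {
    public static void main(String[] args) throws IOException {
        // Contents of an index.meta file, as exported by PortableIndexExporter.
        String indexMeta =
                "#Created with PortableIndexExporter\n" +
                "fieldNames=[f1, f3, f2]\n" +
                "numDeleted=0\n" +
                "numDocs=2\n" +
                "readerVersion=1318857574927\n";

        Properties meta = new Properties();
        meta.load(new StringReader(indexMeta));

        int numDocs = Integer.parseInt(meta.getProperty("numDocs", "0"));
        // A missing attribute falls back to a sensible default instead of failing.
        int readers = Integer.parseInt(meta.getProperty("readers", "1"));

        System.out.println(numDocs + " docs, " + readers + " reader(s)");
    }
}
```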
Implementation
§  See LUCENE-3491 for more details
§  PortableCodec proof-of-concept
   •  Uses extended Codec API in LUCENE-2621
   •  Reuses SimpleTextCodec
   •  Still some parts are not handled …
§  PortableIndexExporter tool
   •  Uses regular Lucene IndexReader API
   •  Traverses all data structures for a complete export
   •  Mimics the expected data from PortableCodec (to
      be completed in the future)
§  Serialization: plain-text files in Directory
                            19
Example data
§  Directory contents

     120  10 Oct 21:03  _0.docvals
       3  10 Oct 21:03  _0.f1.norms
       3  10 Oct 21:03  _0.f2.norms
     152  10 Oct 21:03  _0.fields
    1090  10 Oct 21:03  _0.postings
     244  10 Oct 21:03  _0.stored
     457  10 Oct 21:03  _0.terms
    1153  10 Oct 21:03  _0.vectors
     701  10 Oct 21:03  commits.1318273420000.meta
     342  10 Oct 21:03  index.meta
    1500  10 Oct 21:03  seginfos.meta
                                         20
Example data
§  Index.meta

#Mon Oct 17 15:19:35 CEST 2011
#Created with PortableIndexExporter
fieldNames=[f1, f3, f2]
numDeleted=0
numDocs=2
readers=1
readerVersion=1318857574927

                                21
Example data
§  Seginfos.meta (SegmentInfos)
version=1318857445991
size=1
files=[_0.nrm, _0.fnm, _0.tvx, _0.docvals, _0.tvd, _0.fdx, _0.tvf…
0.name=_0
0.version=4.0
0.files=[_0.nrm, _0.fnm, _0.tvx, _0.docvals, _0.tvd, _0.tvf, _0.f…
0.fields=3
0.usesCompond=false
0.field.f1.number=0
0.field.f2.number=1
0.field.f3.number=2
0.field.f1.flags=IdfpVopN-----
0.field.f2.flags=IdfpVopN-Df64
0.field.f3.flags=-------N-Dvin
0.diag.java.vendor=Apple Inc.
0.hasProx=true
0.hasVectors=true
0.docCount=2
0.delCount=0

                              22
  
Example data
§  Commits.1318273420000.meta (IndexCommit)

segment=segments_1
generation=1
timestamp=1318857575000
version=1318857574927
deleted=false
optimized=true
files=[_0.nrm, _0.fnm, _0_1.skp, _0_1.tii, _0.tvd, …




                              23
Example data
§  _0.fields (FieldInfos in Luke-like format)
field f1
	flags IdfpVopN-----
	codec Standard
field f2
	flags IdfpVopN-Df64
	codec MockRandom
field f3
	flags -------N-Dvin
	codec MockVariableIntBlock



                                      24
Example data
   §  _0.stored (stored fields – LUCENE-2621)
doc 2
	field f1
		str this is a test
	field f2
		str another field with test string
doc 2
	field f1
		str doc two this is a test
	field f2
		str doc two another field with test string



                                           25
Example data
§  _0.terms – terms dict (via SimpleTextCodec)
field f1
	term a
		freq 2
		totalFreq 2
	term doc
		freq 1
		totalFreq 1
	term is
		freq 2
		totalFreq 2
	term test
		freq 2
		totalFreq 2
	…
field f2
	term another
		freq 2
		totalFreq 2
	term doc
		freq 1
		totalFreq 1
	term field
		freq 2
		totalFreq 2
	term string
		freq 2
		totalFreq 2
	…
                                    26
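A few lines of stock Java suffice to read such a listing back, which is one reason plain text helps long-term readability. This sketch rebuilds field → term → freq from the leading keyword of each line, skipping entries (like totalFreq) it does not need:

```java
import java.util.*;

public class TermsDictReader {
    // Parse a plain-text terms dict (field / term / freq lines) into
    // field -> term -> document frequency. Indentation conveys nesting,
    // but the leading keyword alone is enough to rebuild it.
    public static Map<String, Map<String, Integer>> parse(List<String> lines) {
        Map<String, Map<String, Integer>> dict = new LinkedHashMap<>();
        String field = null, term = null;
        for (String raw : lines) {
            String[] kv = raw.trim().split("\\s+");
            if (kv.length < 2) continue;
            switch (kv[0]) {
                case "field": field = kv[1]; dict.put(field, new LinkedHashMap<>()); break;
                case "term":  term = kv[1]; break;
                case "freq":  dict.get(field).put(term, Integer.parseInt(kv[1])); break;
                default:      break; // e.g. totalFreq: not needed here, skipped
            }
        }
        return dict;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Integer>> d = parse(Arrays.asList(
                "field f1", "\tterm a", "\t\tfreq 2", "\t\ttotalFreq 2",
                "\tterm doc", "\t\tfreq 1", "\t\ttotalFreq 1"));
        System.out.println(d); // {f1={a=2, doc=1}}
    }
}
```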
Example data
§  _0.postings (via SimpleTextCodec)

field f1
	term a
		doc 0
			freq 1
			pos 2
		doc 1
			freq 1
			pos 4
	term doc
		doc 1
			freq 1
			pos 0
	…
field f2
	term another
		doc 0
			freq 1
			pos 0
		doc 1
			freq 1
			pos 2
	term doc
		doc 1
			freq 1
			pos 0
	…

                                               27
Example data
§  _0.vectors
doc 0
	field f1
		term a
			freq 1
			offs 8-9
			positions 2
		term is
			freq 1
			offs 5-7
			positions 1
		…
	field f2
		term another
			freq 1
			offs 0-7
			positions 0
		…
doc 1
	field f1
		term a
			freq 1
			offs 16-17
			positions 4
		term doc
			freq 1
			offs 0-3
			positions 0
		term is
			freq 1
			offs 13-15
			positions 3
	…
                                        28
Example data
§  _0.docvals

field f2
	type FLOAT_64
0 0.0
1 0.0010
	docs 2
	end
field f3
	type VAR_INTS
0 0
1 1234567890
	docs 2
	end
END

                                  29
Current status
§  PortableIndexExporter tool working
  •  Usable as an archival tool
  •  Exports complete index data
     §  Unlike XMLExporter in Luke
  •  No corresponding import tool yet …
§  Codec-based tools incomplete
  •  … because the Codec API is incomplete
§  More work needed to reach the cross-compat goals
§  Help is welcome! :)



                                 30
Summary
§  Portability of data is a challenge
   •  Binary back-compat difficult and fragile
§  Long-term archival is a challenge
§  Solution: use simple data formats
§  Portable index format:
   •    Uses plain text files
   •    Extensible – back-compat
   •    Reducible – forward-compat
   •    Enables “pre-analyzed input” pipelines
   •    PortableCodec
         §  for now PortableIndexExporter tool

                                31
Q&A

§  More details: LUCENE-3491
      http://issues.apache.org/jira/browse/LUCENE-3491


§  Contact:




                   Thank you!

                               32

More Related Content

PDF
Linuxのプロセススケジューラ(Reading the Linux process scheduler)
PDF
Openstack kolla 20171025 josug v3
PDF
2018 builderscon airflowを用いて、 複雑大規模なジョブフロー管理 に立ち向かう
PDF
負荷試験ツールlocustを使おう
PDF
Hinemos ver.6.0 機能紹介
PDF
Redmineの情報を自分好みに見える化した話
PDF
コンテナ時代にインフラエンジニアは何をするのか
PDF
Ad設計
Linuxのプロセススケジューラ(Reading the Linux process scheduler)
Openstack kolla 20171025 josug v3
2018 builderscon airflowを用いて、 複雑大規模なジョブフロー管理 に立ち向かう
負荷試験ツールlocustを使おう
Hinemos ver.6.0 機能紹介
Redmineの情報を自分好みに見える化した話
コンテナ時代にインフラエンジニアは何をするのか
Ad設計

What's hot (20)

PPTX
Using Rook to Manage Kubernetes Storage with Ceph
PDF
InnoDBのすゝめ(仮)
PDF
今さら聞けない!Active Directoryドメインサービス入門
PPTX
コンテナネットワーキング(CNI)最前線
PDF
爆速クエリエンジン”Presto”を使いたくなる話
PDF
Do Wide and Deep Networks Learn the Same Things: Uncovering How Neural Networ...
PPTX
さくっと理解するSpring bootの仕組み
PDF
継続使用と新規追加したRedmine Plugin
PDF
ホワイトボックス・スイッチの期待と現実
 
PPTX
Web App for Containers + MySQLでコンテナ対応したPHPアプリを作ろう!
PDF
30分でわかる! コンピュータネットワーク
PDF
ONIC2017 プログラマブル・データプレーン時代に向けた ネットワーク・オペレーションスタック
PPTX
Canonicalが支える、さくっと使えるUbuntu OpenStack - OpenStack Day in ITpro EXPO 2014
PDF
ゲームのインフラをAwsで実戦tips全て見せます
PDF
OpenStack Swift紹介
PDF
君にもできる! にゅーとろん君になってみよー!! 「Neutronになって理解するOpenStack Net - OpenStack最新情報セミナー ...
PDF
Hadoopの標準GUI HUEの最新情報
PDF
ドメイン駆動設計 の 実践 Part3 DDD
PDF
Docker Compose入門~今日から始めるComposeの初歩からswarm mode対応まで
PDF
systemd 再入門
Using Rook to Manage Kubernetes Storage with Ceph
InnoDBのすゝめ(仮)
今さら聞けない!Active Directoryドメインサービス入門
コンテナネットワーキング(CNI)最前線
爆速クエリエンジン”Presto”を使いたくなる話
Do Wide and Deep Networks Learn the Same Things: Uncovering How Neural Networ...
さくっと理解するSpring bootの仕組み
継続使用と新規追加したRedmine Plugin
ホワイトボックス・スイッチの期待と現実
 
Web App for Containers + MySQLでコンテナ対応したPHPアプリを作ろう!
30分でわかる! コンピュータネットワーク
ONIC2017 プログラマブル・データプレーン時代に向けた ネットワーク・オペレーションスタック
Canonicalが支える、さくっと使えるUbuntu OpenStack - OpenStack Day in ITpro EXPO 2014
ゲームのインフラをAwsで実戦tips全て見せます
OpenStack Swift紹介
君にもできる! にゅーとろん君になってみよー!! 「Neutronになって理解するOpenStack Net - OpenStack最新情報セミナー ...
Hadoopの標準GUI HUEの最新情報
ドメイン駆動設計 の 実践 Part3 DDD
Docker Compose入門~今日から始めるComposeの初歩からswarm mode対応まで
systemd 再入門
Ad

Viewers also liked (20)

PPT
Finite State Queries In Lucene
PDF
Lucene
PDF
Dawid Weiss- Finite state automata in lucene
ODP
Lucene And Solr Intro
PPTX
Introduction to Lucene and Solr - 1
PPTX
Apache lucene
PDF
Analytics in olap with lucene & hadoop
PDF
Beyond full-text searches with Lucene and Solr
PPT
Lucene and MySQL
PPT
Lucandra
PDF
DocValues aka. Column Stride Fields in Lucene 4.0 - By Willnauer Simon
PDF
The Evolution of Lucene & Solr Numerics from Strings to Points: Presented by ...
PDF
Building and Running Solr-as-a-Service: Presented by Shai Erera, IBM
PDF
Lucene for Solr Developers
PDF
Berlin Buzzwords 2013 - How does lucene store your data?
PDF
Architecture and Implementation of Apache Lucene: Marter's Thesis
PDF
Near Real Time Indexing: Presented by Umesh Prasad & Thejus V M, Flipkart
PPT
Lucene Introduction
PDF
Text categorization with Lucene and Solr
PPTX
Introduction to Lucene & Solr and Usecases
Finite State Queries In Lucene
Lucene
Dawid Weiss- Finite state automata in lucene
Lucene And Solr Intro
Introduction to Lucene and Solr - 1
Apache lucene
Analytics in olap with lucene & hadoop
Beyond full-text searches with Lucene and Solr
Lucene and MySQL
Lucandra
DocValues aka. Column Stride Fields in Lucene 4.0 - By Willnauer Simon
The Evolution of Lucene & Solr Numerics from Strings to Points: Presented by ...
Building and Running Solr-as-a-Service: Presented by Shai Erera, IBM
Lucene for Solr Developers
Berlin Buzzwords 2013 - How does lucene store your data?
Architecture and Implementation of Apache Lucene: Marter's Thesis
Near Real Time Indexing: Presented by Umesh Prasad & Thejus V M, Flipkart
Lucene Introduction
Text categorization with Lucene and Solr
Introduction to Lucene & Solr and Usecases
Ad

Similar to Portable Lucene Index Format & Applications - Andrzej Bialecki (20)

PDF
Flexible Indexing in Lucene 4.0
PPT
PDF
Drill architecture 20120913
PDF
Distributed Data processing in a Cloud
PDF
Is Your Index Reader Really Atomic or Maybe Slow?
PPTX
Intro to Big Data and NoSQL
PPTX
Oracle OpenWo2014 review part 03 three_paa_s_database
PDF
Improved Search With Lucene 4.0 - NOVA Lucene/Solr Meetup
PDF
What is in a Lucene index?
PDF
Still All on One Server: Perforce at Scale
PDF
What's brewing in the eZ Systems extensions kitchen
PDF
Fun with flexible indexing
PDF
Jay Kreps on Project Voldemort Scaling Simple Storage At LinkedIn
PPTX
Timesten Architecture
PPTX
What’s the Deal with Containers, Anyway?
PDF
Data Lake and the rise of the microservices
PPTX
Pune-Cocoa: Blocks and GCD
PPT
HDFS_architecture.ppt
PDF
From 0 to syncing
PPTX
Lessons learned from running Spark on Docker
Flexible Indexing in Lucene 4.0
Drill architecture 20120913
Distributed Data processing in a Cloud
Is Your Index Reader Really Atomic or Maybe Slow?
Intro to Big Data and NoSQL
Oracle OpenWo2014 review part 03 three_paa_s_database
Improved Search With Lucene 4.0 - NOVA Lucene/Solr Meetup
What is in a Lucene index?
Still All on One Server: Perforce at Scale
What's brewing in the eZ Systems extensions kitchen
Fun with flexible indexing
Jay Kreps on Project Voldemort Scaling Simple Storage At LinkedIn
Timesten Architecture
What’s the Deal with Containers, Anyway?
Data Lake and the rise of the microservices
Pune-Cocoa: Blocks and GCD
HDFS_architecture.ppt
From 0 to syncing
Lessons learned from running Spark on Docker

More from lucenerevolution (20)

PDF
Text Classification Powered by Apache Mahout and Lucene
PDF
State of the Art Logging. Kibana4Solr is Here!
PDF
Search at Twitter
PDF
Building Client-side Search Applications with Solr
PDF
Integrate Solr with real-time stream processing applications
PDF
Scaling Solr with SolrCloud
PDF
Administering and Monitoring SolrCloud Clusters
PDF
Implementing a Custom Search Syntax using Solr, Lucene, and Parboiled
PDF
Using Solr to Search and Analyze Logs
PDF
Enhancing relevancy through personalization & semantic search
PDF
Real-time Inverted Search in the Cloud Using Lucene and Storm
PDF
Solr's Admin UI - Where does the data come from?
PDF
Schemaless Solr and the Solr Schema REST API
PDF
High Performance JSON Search and Relational Faceted Browsing with Lucene
PDF
Text Classification with Lucene/Solr, Apache Hadoop and LibSVM
PDF
Faceted Search with Lucene
PDF
Recent Additions to Lucene Arsenal
PDF
Turning search upside down
PDF
Spellchecking in Trovit: Implementing a Contextual Multi-language Spellchecke...
PDF
Shrinking the haystack wes caldwell - final
Text Classification Powered by Apache Mahout and Lucene
State of the Art Logging. Kibana4Solr is Here!
Search at Twitter
Building Client-side Search Applications with Solr
Integrate Solr with real-time stream processing applications
Scaling Solr with SolrCloud
Administering and Monitoring SolrCloud Clusters
Implementing a Custom Search Syntax using Solr, Lucene, and Parboiled
Using Solr to Search and Analyze Logs
Enhancing relevancy through personalization & semantic search
Real-time Inverted Search in the Cloud Using Lucene and Storm
Solr's Admin UI - Where does the data come from?
Schemaless Solr and the Solr Schema REST API
High Performance JSON Search and Relational Faceted Browsing with Lucene
Text Classification with Lucene/Solr, Apache Hadoop and LibSVM
Faceted Search with Lucene
Recent Additions to Lucene Arsenal
Turning search upside down
Spellchecking in Trovit: Implementing a Contextual Multi-language Spellchecke...
Shrinking the haystack wes caldwell - final

Recently uploaded (20)

PPTX
Big Data Technologies - Introduction.pptx
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
NewMind AI Weekly Chronicles - August'25 Week I
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
Machine learning based COVID-19 study performance prediction
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PPTX
A Presentation on Artificial Intelligence
PDF
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PPTX
MYSQL Presentation for SQL database connectivity
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PDF
Modernizing your data center with Dell and AMD
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PDF
CIFDAQ's Market Insight: SEC Turns Pro Crypto
PDF
Per capita expenditure prediction using model stacking based on satellite ima...
Big Data Technologies - Introduction.pptx
Review of recent advances in non-invasive hemoglobin estimation
NewMind AI Weekly Chronicles - August'25 Week I
Mobile App Security Testing_ A Comprehensive Guide.pdf
Machine learning based COVID-19 study performance prediction
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
Agricultural_Statistics_at_a_Glance_2022_0.pdf
A Presentation on Artificial Intelligence
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Unlocking AI with Model Context Protocol (MCP)
The Rise and Fall of 3GPP – Time for a Sabbatical?
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
MYSQL Presentation for SQL database connectivity
Reach Out and Touch Someone: Haptics and Empathic Computing
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
Modernizing your data center with Dell and AMD
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
CIFDAQ's Market Insight: SEC Turns Pro Crypto
Per capita expenditure prediction using model stacking based on satellite ima...

Portable Lucene Index Format & Applications - Andrzej Bialecki

  • 1. Portable Lucene index format Andrzej Białecki Oct 19, 2011
  • 2. whoami §  Started using Lucene in 2003 (1.2-dev...) §  Created Luke – the Lucene Index Toolbox §  Apache Nutch, Hadoop, Lucene/Solr committer, Lucene and Nutch PMC member §  Developer at Lucid Imagination 3
  • 3. Agenda §  Data compatibility challenge §  Portable format design & applications §  Implementation details §  Status & future 4
  • 4. Lucene index format §  Binary •  Not human-readable*! NRMÿt|yzqq{x … * except for committer-humans  §  Highly optimized •  Efficient random I/O access §  Compressed •  Data-specific compression §  Strict, well-defined •  Details specific to a particular version of Lucene •  Both minor and major differences between versions 5
  • 5. Backward-compatibility §  Compatibility of API •  Planned obsolescence across major releases •  Relatively easy (@deprecated APIs) §  Compatibility of data •  Complex and costly •  Still required for upgrades to an N+1 release •  Caveats when re-using old data §  E.g. incompatible Analyzer-s, sub-optimal field types §  Forward-compatibility in Lucene? •  Forget it – and for good reasons 6
  • 6. Backup and archival §  Do we need to keep old data? •  Typically index is not a System Of Record §  But index backups still occasionally needed •  If costly to build (time / manual effort / CPU / IO …) •  As a fallback copy during upgrades •  To validate claims or to prove compliance •  To test performance / quality across updates §  Archival data should use a long-term viable format •  Timespan longer than shelf-life of a software version •  Keep all tools, platforms and environments? Ouch… 7
  • 7. MN §  Lucene currently supports some 2.9 3.x 4.x backward-compatibility +/+ +/- -/- 2.9 •  Only latest N <= 2 releases -/+ +/+? +/- 3.x •  High maintenance and expertise -/- -/+ +/+? 4.x costs •  Very fragile §  Ever higher cost of broader back- compat •  M source formats * N target formats •  Forward-compat? Forget it 8
  • 8. Conclusion If you need to read data from versions other than N or N-1 you will suffer greatly There should be a better solution for a broader data compatibility … 9
  • 9. Portable format idea §  Can we come up with one simple, generic, flexible format that fits data from any version? §  Who cares if it’s sub-optimal as long as it’s readable! Use a format so simple that it needs no version-specific tools 10
  • 10. Portable format: goals §  Reduced costs of long-term back-compat •  M * portable 2.9 3.x 4.x •  Implement only one format right +/+ +/+ +/+ P §  Long-term readability •  Readable with simple tools (text editor, less) •  Easy to post-process with simple tools §  Extensible •  Easy to add data and metadata on many levels •  Easy to add other index parts, or non-index data §  Reducible •  Easy to ignore / skip unknown data (forward-compat) 11
  • 11. Portable format: design §  Focus on function and not form / implementation •  Follow the logical model of a Lucene index, which is more or less constant across releases §  Relaxed and forgiving •  Arbitrary number and type of entries •  Not thwarted by unknown data (ignore / skip) •  Figure out best defaults if data is missing §  Focus on clarity and not minimal size •  De-normalize data if it increases portability §  Simple metadata (per item, per section) 12
  • 12. Index model: sections §  Global index data index •  User data Global data •  Source format details (informational) •  List of commit points (and the current Segment 1 commit) •  List of index sections (segments, other?) §  Segment sections Segment 2 •  Segment info in metadata •  List of fields and their types (FieldInfos) •  Per-segment data Segment 3 13
  • 13. Per-segment data §  Logical model instead of implementation details §  Per-segment sections Segment 1 Segment 2 •  Existing known sections Segment & Segment & Field meta Field meta •  Other future sections … stored stored terms terms term freqs term freqs postings postings §  This is the standard model! payloads payloads •  Just be more relaxed term vectors term vectors doc values doc values •  And use plain-text formats deletes deletes §  SimpleTextCodec? •  Not quite (yet?) 14
  • 14. PortableCodec §  Codec API allows to customize on-disk formats §  API is not complete yet … •  Term dict + postings, docvalues, segment infos §  … but it’s coming: LUCENE-2621 •  Custom stored fields storage (e.g. NoSQL) •  More efficient term freq. vectors storage •  Eventually will make PortableCodec possible §  Write and read data in portable format §  Reuse / extend SimpleTextCodec §  In the meantime … use codec-independent export / import utilities 15
  • 15. Pre-analyzed input §  SOLR-1535 PreAnalyzedField •  Allows submitting pre-analyzed documents •  Too limited format §  “Inverted document” in portable format •  Pre-analyzed, pre-inverted (e.g. with term vectors) •  Easy to construct using non-Lucene tools §  Applications •  Integration with external pipelines §  Text analysis, segmentation, stemming, POS-tagging, other NLP – produces pre-analyzed input •  Other ideas §  Transaction log §  Fine-grained shard rebalancing (nano-sharding) §  Rolling updates of a cluster §  … 16
  • 16. Serialization format: goals §  Generic enough: •  Flexible enough to express all known index parts §  Specific enough •  Predefined sections common to all versions •  Well-defined record boundaries (easy to ignore / skip) •  Well-defined behavior on missing sections / attributes §  Support for arbitrary sections §  Easily parsed, preferably with no external libs §  Only reasonably efficient 17
  • 17. Serialization format: design §  Container - could be a ZIP •  Simple, well-known, single file, streamable, fast seek, compressible •  But limited to 4GB (ZIP-64 only in Java 7) •  Needs un-zipping to process with text utilities §  Or could be just a Lucene Directory •  That is, a bunch of files with predefined names §  You can ZIP them yourself if needed, or TAR, or whatever §  Simple plain-text formats •  Line-oriented records, space-separated fields §  Or Avro? •  Per-section metadata in java.util.Properties format 18
• 18. Implementation
§  See LUCENE-3491 for more details
§  PortableCodec proof-of-concept
   •  Uses extended Codec API in LUCENE-2621
   •  Reuses SimpleTextCodec
   •  Some parts are still not handled …
§  PortableIndexExporter tool
   •  Uses regular Lucene IndexReader API
   •  Traverses all data structures for a complete export
   •  Mimics the expected output of PortableCodec (to be completed in the future)
§  Serialization: plain-text files in a Directory
• 19. Example data
§  Directory contents
       120  10 Oct 21:03  _0.docvals
         3  10 Oct 21:03  _0.f1.norms
         3  10 Oct 21:03  _0.f2.norms
       152  10 Oct 21:03  _0.fields
      1090  10 Oct 21:03  _0.postings
       244  10 Oct 21:03  _0.stored
       457  10 Oct 21:03  _0.terms
      1153  10 Oct 21:03  _0.vectors
       701  10 Oct 21:03  commits.1318273420000.meta
       342  10 Oct 21:03  index.meta
      1500  10 Oct 21:03  seginfos.meta
• 20. Example data
§  index.meta
   #Mon Oct 17 15:19:35 CEST 2011
   #Created with PortableIndexExporter
   fieldNames=[f1, f3, f2]
   numDeleted=0
   numDocs=2
   readers=1
   readerVersion=1318857574927
• 21. Example data
§  seginfos.meta (SegmentInfos)
   version=1318857445991
   size=1
   files=[_0.nrm, _0.fnm, _0.tvx, _0.docvals, _0.tvd, _0.fdx, _0.tvf …
   0.name=_0
   0.version=4.0
   0.files=[_0.nrm, _0.fnm, _0.tvx, _0.docvals, _0.tvd, _0.tvf, _0.f …
   0.fields=3
   0.usesCompond=false
   0.field.f1.number=0
   0.field.f2.number=1
   0.field.f3.number=2
   0.field.f1.flags=IdfpVopN-----
   0.field.f2.flags=IdfpVopN-Df64
   0.field.f3.flags=-------N-Dvin
   0.diag.java.vendor=Apple Inc.
   0.hasProx=true
   0.hasVectors=true
   0.docCount=2
   0.delCount=0
• 22. Example data
§  commits.1318273420000.meta (IndexCommit)
   segment=segments_1
   generation=1
   timestamp=1318857575000
   version=1318857574927
   deleted=false
   optimized=true
   files=[_0.nrm, _0.fnm, _0_1.skp, _0_1.tii, _0.tvd …
• 23. Example data
§  _0.fields (FieldInfos in Luke-like format)
   field f1
     flags IdfpVopN-----
     codec Standard
   field f2
     flags IdfpVopN-Df64
     codec MockRandom
   field f3
     flags -------N-Dvin
     codec MockVariableIntBlock
• 24. Example data
§  _0.stored (stored fields – LUCENE-2621)
   doc  2
     field f1
       str this is a test
     field f2
       str another field with test string
   doc  2
     field f1
       str doc two this is a test
     field f2
       str doc two another field with test string
• 25. Example data
§  _0.terms – terms dict (via SimpleTextCodec)
   field f1
     term a
       freq 2
       totalFreq 2
     term doc
       freq 1
       totalFreq 1
     term is
       freq 2
       totalFreq 2
     term test
       freq 2
       totalFreq 2
     …
   field f2
     term another
       freq 2
       totalFreq 2
     term doc
       freq 1
       totalFreq 1
     term field
       freq 2
       totalFreq 2
     term string
       freq 2
       totalFreq 2
     …
• 26. Example data
§  _0.postings (via SimpleTextCodec)
   field f1
     term a
       doc 0
         freq 1
         pos 2
       doc 1
         freq 1
         pos 4
     term doc
       doc 1
         freq 1
         pos 0
     …
   field f2
     term another
       doc 0
         freq 1
         pos 0
       doc 1
         freq 1
         pos 2
     term doc
       doc 1
         freq 1
         pos 0
     …
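A dump like the one above is trivial to post-process outside Lucene. As a sketch, a few lines of JDK-only Java can tally term occurrences per field; the keyword handling mirrors the slide’s example (real SimpleTextCodec output differs in detail), and freqPerField is a made-up name, not a Lucene API.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PostingsParser {
    // Sum the per-doc "freq" values under each "field" heading of a
    // SimpleText-style postings dump (illustrative keywords only).
    static Map<String, Integer> freqPerField(String dump) {
        Map<String, Integer> totals = new LinkedHashMap<>();
        String field = null;
        for (String line : dump.split("\n")) {
            String[] t = line.trim().split("\\s+");
            if (t[0].equals("field")) {
                field = t[1];
            } else if (t[0].equals("freq")) {
                totals.merge(field, Integer.parseInt(t[1]), Integer::sum);
            }
            // "term", "doc", "pos" lines carry no counts and are skipped
        }
        return totals;
    }

    public static void main(String[] args) {
        String dump = String.join("\n",
            "field f1",
            "  term a",
            "    doc 0",
            "      freq 1",
            "      pos 2",
            "    doc 1",
            "      freq 1",
            "      pos 4");
        System.out.println(freqPerField(dump)); // prints {f1=2}
    }
}
```

Because record boundaries are one-keyword-per-line, tools that only care about one section can skip everything else, which is exactly the “easy to ignore / skip” goal from the format design.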
• 27. Example data
§  _0.vectors
   doc 0
     field f1
       term a
         freq 1
         offs 8-9
         positions 2
       term is
         freq 1
         offs 5-7
         positions 1
       …
     field f2
       term another
         freq 1
         offs 0-7
         positions 0
       …
   doc 1
     field f1
       term a
         freq 1
         offs 16-17
         positions 4
       term doc
         freq 1
         offs 0-3
         positions 0
       term is
         freq 1
         offs 13-15
         positions 3
       …
• 28. Example data
§  _0.docvals
   field f2
     type FLOAT_64
     0 0.0
     1 0.0010
     docs 2
     end
   field f3
     type VAR_INTS
     0 0
     1 1234567890
     docs 2
     end
   END
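The same keyword-per-line approach recovers typed values from the doc values dump. This sketch handles only the two types shown in the example, and the parse helper is hypothetical, not part of the LUCENE-3491 code.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DocValuesParser {
    // Read a _0.docvals-style dump into per-field value lists.
    // Only FLOAT_64 and VAR_INTS from the slide's example are handled.
    static Map<String, List<Number>> parse(String dump) {
        Map<String, List<Number>> vals = new LinkedHashMap<>();
        String field = null;
        boolean floats = false;
        for (String line : dump.split("\n")) {
            String[] t = line.trim().split("\\s+");
            switch (t[0]) {
                case "field": field = t[1]; vals.put(field, new ArrayList<>()); break;
                case "type":  floats = t[1].equals("FLOAT_64"); break;
                case "docs": case "end": case "END": break; // structural markers
                default: // "<docID> <value>" rows
                    if (floats) vals.get(field).add(Double.valueOf(t[1]));
                    else        vals.get(field).add(Long.valueOf(t[1]));
            }
        }
        return vals;
    }

    public static void main(String[] args) {
        String dump = "field f3\n type VAR_INTS\n 0 0\n 1 1234567890\n docs 2\n end\nEND";
        System.out.println(parse(dump).get("f3")); // prints [0, 1234567890]
    }
}
```

Note the explicit if/else instead of a ternary: a `cond ? Double : Long` expression would silently promote both branches to double, corrupting large integer values.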
• 29. Current status
§  PortableIndexExporter tool working
   •  Usable as an archival tool
   •  Exports complete index data
      §  Unlike XMLExporter in Luke
   •  No corresponding import tool yet …
§  Codec-based tools incomplete
   •  … because the Codec API is incomplete
§  More work needed to reach the cross-compat goals
§  Help is welcome! :-)
• 30. Summary
§  Portability of data is a challenge
   •  Binary back-compat is difficult and fragile
§  Long-term archival is a challenge
§  Solution: use simple data formats
§  Portable index format:
   •  Uses plain text files
   •  Extensible – back-compat
   •  Reducible – forward-compat
   •  Enables “pre-analyzed input” pipelines
   •  PortableCodec
      §  For now: the PortableIndexExporter tool
• 31. Q&A
§  More details: LUCENE-3491
   http://issues.apache.org/jira/browse/LUCENE-3491
§  Contact:

Thank you!