Large Data Analysis with PyTables
Personal Profile:
● Ali Hallaji
● Parallel Processing and Large Data Analysis
● Senior Python Developer at innfinision Cloud Solutions
● Ali.Hallaji@innfinision.net
● innfinision.net
innfinision Cloud Solutions:
● Providing Cloud, Virtualization and Data Center Solutions
● Developing Software for Cloud Environments
● Providing Services to Telecom, Education, Broadcasting & Health Fields
● Supporting OpenStack Foundation as the First Iranian Company
● First Supporter of IRAN OpenStack Community
Large Data Analysis with PyTables (innfinision.net)

Agenda:
● Outline
● What is PyTables?
● Numexpr & NumPy
● Compressing Data
● What is HDF5?
● Querying your data in many different ways, fast
● Design goals
Outline
The Starving CPU Problem
● Getting the Most Out of Computers
● Caches and Data Locality
● Techniques For Fighting Data Starvation
High Performance Libraries
● Why Should You Use Them?
● In-Core High Performance
● Out-of-Core High Performance Libraries
Getting the Most Out of Computers
Computers nowadays are very powerful:
● Extremely fast CPUs (multi-core)
● Large amounts of RAM
● Huge disk capacities
But they face a pervasive problem: an ever-increasing mismatch between CPU, memory and disk speeds (the so-called "starving CPU" problem). This mismatch makes it tremendously difficult to get the most out of computers.
CPU vs Memory Cycle Trend
Cycle time is the time, usually measured in nanoseconds, between the start of one random access memory (RAM) access and the time when the next access can begin.
History:
● In the 1970s and 1980s, the memory subsystem was able to deliver all the data that processors required in time.
● In those days, the processor was the key bottleneck.
● But in the 1990s things started to change...
The CPU Starvation Problem
Known facts (in 2010):
● Memory latency is much higher (around 250x) than processor cycle time, and it has been an essential bottleneck for the past twenty years.
● Memory throughput is improving at a better rate than memory latency, but it is still much slower than processors (about 25x).
The result is that the CPUs in our current computers suffer from a serious data starvation problem: they could consume (much!) more data than the system can possibly deliver.
What Is the Industry Doing to Alleviate CPU Starvation?
● They are improving memory throughput: cheap to implement (more data is transmitted on each clock cycle).
● They are adding big caches in the CPU dies.
Why Is a Cache Useful?
● Caches are closer to the processor (normally on the same die), so both latency and throughput are improved.
● However, the faster they run, the smaller they must be.
● They are effective mainly in two scenarios:
● Time locality: when the dataset is reused.
● Spatial locality: when the dataset is accessed sequentially.
Time Locality
Parts of the dataset are reused.
Spatial Locality
The dataset is accessed sequentially.
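Spatial locality can be made concrete with a small NumPy sketch (the array name and sizes here are arbitrary): in a C-ordered array the elements of a row sit at consecutive addresses, so a row traversal streams through cache lines, while a column traversal jumps a whole row's worth of bytes between elements.

```python
import numpy as np

# A 1000x1000 float64 array stored in C (row-major) order.
a = np.arange(1_000_000, dtype=np.float64).reshape(1000, 1000)

# strides = bytes to step to the next element along each axis:
# 8000 bytes to the next row, 8 bytes to the next column.
print(a.strides)  # (8000, 8)

# Row traversal reads sequential addresses (good spatial locality).
row_sum = a[0].sum()

# Column traversal does the same arithmetic, but jumps 8000 bytes
# between elements, wasting most of every cache line it pulls in.
col_sum = a[:, 0].sum()
```

Both sums cost the same number of floating-point additions; only the memory access pattern differs, and on large arrays that difference dominates the run time.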
Why High Performance Libraries?
● High performance libraries are made by people who know the different optimization techniques very well.
● You may be tempted to create original algorithms that can be faster than these, but in general it is very difficult to beat them.
● In some cases it may take some time to get used to them, but the effort pays off in the long run.
Some In-Core High Performance Libraries
● ATLAS/MKL (Intel's Math Kernel Library): uses memory-efficient algorithms as well as SIMD and multi-core algorithms → linear algebra operations.
● VML (Intel's Vector Math Library): uses SIMD and multi-core to compute basic math functions (sin, cos, exp, log...) on vectors.
● Numexpr: performs potentially complex operations with NumPy arrays without the overhead of temporaries. Can make use of multiple cores.
● Blosc: a multi-threaded compressor that can transmit data from caches to memory, and back, at speeds that can be higher than an OS memcpy().
What is PyTables?
PyTables
PyTables is a package for managing hierarchical datasets, designed to cope efficiently and easily with extremely large amounts of data. You can download PyTables and use it for free; documentation, examples of use and presentations are available in the HowToUse section.
PyTables is built on top of the HDF5 library, using the Python language and the NumPy package. It features an object-oriented interface that, combined with C extensions for the performance-critical parts of the code (generated using Cython), makes it a fast yet extremely easy-to-use tool for interactively browsing, processing and searching very large amounts of data. One important feature of PyTables is that it optimizes memory and disk resources so that data takes up much less space (especially if on-the-fly compression is used) than with other solutions such as relational or object-oriented databases.
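A minimal sketch of what that looks like in practice, assuming PyTables is installed as the `tables` package (the file name, group, and column names below are invented for illustration):

```python
import os
import tempfile

import numpy as np
import tables

# Describe one table row: each attribute becomes an HDF5 column.
class Particle(tables.IsDescription):
    name = tables.StringCol(16)
    energy = tables.Float64Col()

path = os.path.join(tempfile.mkdtemp(), "demo.h5")

# Create an HDF5 file with a group and a table inside it.
with tables.open_file(path, mode="w") as h5:
    group = h5.create_group("/", "detector", "Detector data")
    table = h5.create_table(group, "readings", Particle, "Readings")
    row = table.row
    for i in range(10):
        row["name"] = f"particle-{i}"
        row["energy"] = float(i) ** 2
        row.append()
    table.flush()

# Read the data back by walking the natural-naming hierarchy.
with tables.open_file(path, mode="r") as h5:
    energies = [r["energy"] for r in h5.root.detector.readings]
print(energies[:3])
```

The hierarchy (`/detector/readings`) maps directly onto HDF5 groups and datasets in the file.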
Numexpr & NumPy
Numexpr: Dealing with Complex Expressions
● Uses a specialized virtual machine for evaluating expressions.
● Accelerates computations by using blocking and by avoiding temporaries.
● Multi-threaded: can use several cores automatically.
● Has support for Intel's VML (Vector Math Library), so you can accelerate the evaluation of transcendental functions (sin, cos, atanh, sqrt...) too.
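A minimal sketch, assuming the `numexpr` package is installed; the polynomial is the one used in the performance slide further on:

```python
import numpy as np
import numexpr as ne

x = np.linspace(0.0, 1.0, 1_000_000)

# NumPy evaluates this expression with several full-size temporary
# arrays; numexpr compiles it for its virtual machine and evaluates
# it block by block, potentially on several threads.
y_np = ((0.25 * x + 0.75) * x - 1.5) * x - 2
y_ne = ne.evaluate("((0.25*x + 0.75)*x - 1.5)*x - 2")
```

Both computations give the same result; the difference is how many times the million-element operands travel between memory and the CPU.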
NumPy: A Powerful Data Container for Python
NumPy provides a very powerful, object-oriented, multidimensional data container:
● array[index]: retrieves a portion of a data container
● (array1**3 / array2) - sin(array3): evaluates potentially complex expressions
● numpy.dot(array1, array2): access to optimized BLAS (*GEMM) functions
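The three access patterns above can be sketched as follows (the array names and values are just examples):

```python
import numpy as np

array1 = np.arange(1.0, 7.0).reshape(2, 3)  # [[1,2,3],[4,5,6]]
array2 = np.full((2, 3), 2.0)
array3 = np.zeros((2, 3))

# array[index]: retrieve a portion of the container
first_row = array1[0]

# Element-wise expression evaluated over whole arrays at once
result = (array1**3 / array2) - np.sin(array3)

# numpy.dot dispatches to optimized BLAS (*GEMM) routines
product = np.dot(array1, array2.T)  # (2x3) @ (3x2) -> (2x2)
```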
NumPy and Temporaries
Computing "a*b+c" with NumPy: the temporaries go to memory.
Numexpr Avoids (Big) Temporaries
Computing "a*b+c" with numexpr: temporaries in memory are avoided.
Numexpr Performance (Using Multiple Threads)
Time to evaluate the polynomial ((.25*x + .75)*x - 1.5)*x - 2
Compression
Why Compression?
● Lets you store more data in the same space.
● Uses more CPU, but CPU time is cheap compared with disk access.
● Different compressors for different uses: bzip2, zlib, LZO, Blosc.
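Blosc and LZO are third-party packages, but the basic trade-off can be sketched with the compressors in Python's standard library (the sample data here is invented and deliberately repetitive):

```python
import bz2
import zlib

# Repetitive data, like many real-world numeric datasets, compresses well.
data = b"sensor-reading:0.123456;" * 10_000  # 240,000 bytes

z = zlib.compress(data, level=6)   # fast, moderate ratio
b = bz2.compress(data)             # slower, usually a tighter ratio

print(len(data), len(z), len(b))
```

The CPU cycles spent compressing buy a large reduction in the bytes that must cross the disk and memory buses.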
Why Compression? (cont.)
● With compression, 3x more data fits in the same space.
● Less data needs to be transmitted to the CPU.
● Transmission + decompression can be faster than a direct transfer.
Blosc: Compressing Faster than Memory Speed
What is HDF5?
HDF5
HDF5 is a data model, library, and file format for storing and managing data. It supports an unlimited variety of datatypes, and is designed for flexible and efficient I/O and for high-volume, complex data. HDF5 is portable and extensible, allowing applications to evolve in their use of HDF5. The HDF5 technology suite includes tools and applications for managing, manipulating, viewing, and analyzing data in the HDF5 format.
The HDF5 technology suite includes:
● A versatile data model that can represent very complex data objects and a wide variety of metadata.
● A completely portable file format with no limit on the number or size of data objects in the collection.
● A software library that runs on a range of computational platforms, from laptops to massively parallel systems, and implements a high-level API with C, C++, Fortran 90, and Java interfaces.
● A rich set of integrated performance features that allow for access-time and storage-space optimizations.
● Tools and applications for managing, manipulating, viewing, and analyzing the data in the collection.
Data structures
High level of flexibility for structuring your data:
● Datatypes: scalars (numerical & strings), records, enumerated types, times...
● Tables support multidimensional cells and nested records
● Multidimensional arrays
● Variable-length arrays
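For instance, a table description combining a multidimensional cell with a nested record might look like this (a sketch assuming PyTables is installed; all names are illustrative):

```python
import os
import tempfile

import tables

# One row = a timestamp, a 2x3 float cell, and a nested "sensor" record.
class Reading(tables.IsDescription):
    timestamp = tables.Time64Col()
    grid = tables.Float32Col(shape=(2, 3))    # multidimensional cell

    class sensor(tables.IsDescription):       # nested record
        id = tables.Int32Col()
        label = tables.StringCol(8)

path = os.path.join(tempfile.mkdtemp(), "nested.h5")
with tables.open_file(path, mode="w") as h5:
    t = h5.create_table("/", "readings", Reading)
    cols = t.colnames        # top-level columns
    paths = t.colpathnames   # flattened view, includes nested fields
print(cols, paths)
```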
Attributes: metadata about your data
Dataset hierarchy
Querying your data in many different ways, fast
PyTables Query
One characteristic that sets PyTables apart from similar tools is its ability to perform extremely fast queries on your tables, in order to facilitate your main goal as much as possible: getting the important information *out* of your datasets.
PyTables achieves this via a very flexible and efficient query iterator, named Table.where(). This, in combination with OPSI, the powerful indexing engine that comes with PyTables, and the efficiency of underlying tools like NumPy, HDF5, Numexpr and Blosc, makes PyTables one of the fastest and most powerful query engines available.
Different query modes
Regular query:
● [ r['c1'] for r in table if r['c2'] > 2.1 and r['c3'] == True ]
In-kernel query:
● [ r['c1'] for r in table.where('(c2 > 2.1) & (c3 == True)') ]
Indexed query:
● table.cols.c2.createIndex()
● table.cols.c3.createIndex()
● [ r['c1'] for r in table.where('(c2 > 2.1) & (c3 == True)') ]
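The in-kernel and indexed modes can be exercised end to end; a sketch, assuming PyTables is installed (note that modern PyTables spells the indexing call create_index(); createIndex() is the older name shown on the slide, and the table contents here are invented):

```python
import os
import tempfile

import tables

class Row(tables.IsDescription):
    c1 = tables.Int32Col()
    c2 = tables.Float64Col()
    c3 = tables.BoolCol()

path = os.path.join(tempfile.mkdtemp(), "query.h5")
with tables.open_file(path, mode="w") as h5:
    t = h5.create_table("/", "t", Row)
    row = t.row
    for i in range(100):
        row["c1"] = i
        row["c2"] = i / 10.0
        row["c3"] = (i % 2 == 0)
        row.append()
    t.flush()

    # In-kernel query: the condition string is compiled by Numexpr
    # and evaluated inside PyTables, without materializing Python rows.
    hits = [r["c1"] for r in t.where("(c2 > 2.1) & (c3 == True)")]

    # Indexed query: the same call, but OPSI indexes on c2 and c3
    # let PyTables skip most of the table.
    t.cols.c2.create_index()
    t.cols.c3.create_index()
    hits_idx = [r["c1"] for r in t.where("(c2 > 2.1) & (c3 == True)")]
```

Both modes return the same rows; the index only changes how fast they are found.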
This presentation has been collected from several other presentations (PyTables presentations). For more presentations, refer to http://pytables.org/moin/HowToUse#Presentations.
Ali Hallaji
Ali.Hallaji@innfinision.net
innfinision.net
Thank you