A data intensive future: how
can biology best take
advantage of the coming data
deluge?
C. Titus Brown
ctbrown@ucdavis.edu
Associate Professor, UC Davis
Choose your own adventure:
Either you believe that all this “Big Data” stuff is nonsense
and/or overblown:
 Please help me out by identifying my
misconceptions!
Or, you are interested in strategies and techniques for working
with lots of data, in which case:
 I hope to make some useful technical and
social/cultural points.
The obligatory slide about
abundant sequencing data.
http://www.genome.gov/sequencingcosts/
Also see: https://biomickwatson.wordpress.com/2015/03/25/the-cost-of-sequencing-is-still-going-down/
Three general uses for
abundant sequencing data.
 Computational hypothesis falsification.
 Model comparison or evaluation of
sufficiency.
 Hypothesis generation.
http://ivory.idyll.org/blog/2015-what-to-do-with-sequencing-data.html
My lab’s goals re “data
intensive biology”
 Build open tools and evaluate approaches for
moving quickly from raw-ish data to
hypotheses.
 Work with collaborators to identify emerging
challenges that are preventing them from
doing their science.
 Train peers in data analysis techniques.
Investigating soil microbial
communities
 95% or more of soil microbes cannot be cultured in
the lab.
 Very little transport in soil and sediment =>
slow mixing rates.
 Estimates of immense diversity:
 Billions of microbial cells per gram of soil.
 Million+ microbial species per gram of soil (Gans
et al, 2005)
 One observed lower bound for genomic sequence
complexity => 26 Gbp (Amazon Rain Forest
Microbial Observatory)
N. A. Krasil'nikov, SOIL MICROORGANISMS AND HIGHER PLANTS
http://www.soilandhealth.org/01aglibrary/010112krasil/010112krasil.ptII.html
“By 'soil' we understand (Vil'yams, 1931) a loose surface
layer of earth capable of yielding plant crops. In the
physical sense the soil represents a complex disperse
system consisting of three phases: solid, liquid, and
gaseous.”
Microbes live in & on:
• Surfaces of aggregate
particles;
• Pores within
microaggregates;
Questions to address
 Role of soil microbes in nutrient cycling:
 How does agricultural soil differ from native soil?
 How do soil microbial communities respond to
climate perturbation?
 Genome-level questions:
 What kind of strain-level heterogeneity is present
in the population?
 What are the phage and viral populations &
dynamics?
 What species are where, and how much is shared
between different geographical locations?
Must use culture-independent
approaches
 Many reasons why you can’t or don’t want to
culture: cross-feeding, niche specificity, dormancy,
etc.
 If you want to get at underlying function, 16S
analysis alone is not sufficient.
Single-cell sequencing & shotgun metagenomics
are two common ways to investigate complex
microbial communities.
Shotgun metagenomics
 Collect samples;
 Extract DNA;
 Feed into sequencer;
 Computationally analyze.
(Image: Wikipedia, "Environmental shotgun sequencing.png")
“Sequence it all and let the
bioinformaticians sort it out”
Great Prairie Grand Challenge -
sampling locations (2008)
A “Grand Challenge” dataset
(DOE/JGI)
[Bar chart: basepairs of sequencing (Gbp), GAII vs. HiSeq, for eight sites: Iowa continuous corn, Iowa native prairie, Kansas cultivated corn, Kansas native prairie, Wisconsin continuous corn, Wisconsin native prairie, Wisconsin restored prairie, and Wisconsin switchgrass. Total: 1,846 Gbp of soil metagenome. For scale: rumen (Hess et al., 2011), 268 Gbp; rumen k-mer filtered, 111 Gbp; MetaHIT (Qin et al., 2011), 578 Gbp; NCBI nr database, 37 Gbp.]
Why do we need so much data?!
 20-40x coverage is necessary; 100x is ~sufficient.
 Mixed population sampling => sensitivity driven by
lowest abundance.
 For example, to reach ~100x coverage of a 5 Mbp E. coli
genome present at a 1/1000 dilution, you would need
roughly 500 Gbp of sequence! (Sketched below.)
(For soil, estimate is 50 Tbp)
 Sequencing is straightforward; data analysis is not.
“$1000 genome with $1m analysis”
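To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python; the function name and parameters are illustrative, not from the talk:

    # Raw sequence needed so that a community member at a given abundance
    # reaches a target coverage. All names here are illustrative.
    def required_sequencing_gbp(genome_size_bp, target_coverage, abundance_fraction):
        return genome_size_bp * target_coverage / abundance_fraction / 1e9

    # A 5 Mbp E. coli-sized genome at 1/1000 abundance, 100x target coverage:
    print(required_sequencing_gbp(5e6, 100, 1e-3))  # -> 500.0 (Gbp)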
Great Prairie Grand
Challenge - goals
 How much of the source metagenome can we reconstruct
from ~300-600 Gbp+ of shotgun sequencing? (Largest
data set ever sequenced, ~2010.)
 What can we learn about soil from looking at the
reconstructed metagenome? (See list of questions)
(For complex ecological and evolutionary systems, we’re just
starting to get past the first question. More on that later.)
Conway TC, Bromage AJ, Bioinformatics 2011;27:479-486.
De novo assembly scales with size of data, not
size of (meta)genome.
Why do assemblers scale
badly?
Memory usage ~ “real” variation + number of errors
Number of errors ~ size of data set
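One rough way to see why errors dominate is a crude count of unique k-mers, sketched below in Python; the error model and numbers are illustrative assumptions, not taken from the Conway & Bromage paper:

    # A fixed-size genome contributes a roughly constant number of true k-mers,
    # while each sequencing error spawns up to k novel k-mers -- so error k-mers
    # (and therefore graph memory) grow with the amount of data, not the genome.
    def estimated_unique_kmers(genome_size_bp, bases_sequenced, error_rate=0.01, k=31):
        genomic_kmers = genome_size_bp
        error_kmers = bases_sequenced * error_rate * k
        return genomic_kmers + error_kmers

    print(estimated_unique_kmers(5e6, 5e8))  # 100x of a 5 Mbp genome: ~1.6e8 k-mers
    print(estimated_unique_kmers(5e6, 5e9))  # 1000x: ~1.6e9 k-mers -- 10x the data, ~10x the graph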
Our problem, in a nutshell:
We had so much data that we couldn’t
compute on it.
(This was, and is, a common problem in non-
model systems.)
Our solution: abundance
normalization (diginorm)
Conway TC, Bromage AJ, Bioinformatics 2011;27:479-486.
Random sampling => deep sampling
needed
Typically 10-100x needed for robust recovery (30-300 Gbp for human)
Actual coverage varies widely from the average.
Low coverage introduces unavoidable breaks.
But! Shotgun sequencing is very redundant!
Lots of the high coverage simply isn’t needed.
(unnecessary data)
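A quick way to see both points - that coverage varies widely around the average and that low depth leaves unavoidable gaps - is the standard Poisson (Lander-Waterman-style) approximation to random shotgun sampling. The sketch below uses assumed, illustrative numbers:

    import math

    # Under idealized Poisson sampling, the chance that a given base receives
    # zero reads is exp(-coverage); multiply by genome size for expected gaps.
    def expected_uncovered_bases(mean_coverage, genome_size_bp=3e9):
        return genome_size_bp * math.exp(-mean_coverage)

    for c in (5, 10, 20, 40):
        print(f"{c:>2}x average coverage -> ~{expected_uncovered_bases(c):,.0f} uncovered bases")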
Digital normalization
Contig assembly now scales with richness (information), not
diversity (data).
Most samples can be assembled on commodity computers.
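The idea fits in a few lines. Below is a minimal Python sketch of digital normalization (the production implementation in the khmer software streams reads through a probabilistic count-min sketch rather than an exact in-memory table, but the logic is the same): estimate each read's coverage as the median abundance of its k-mers seen so far, and drop reads whose coverage already exceeds a cutoff.

    from collections import Counter

    def kmers(seq, k=20):
        return [seq[i:i + k] for i in range(len(seq) - k + 1)]

    def diginorm(reads, cutoff=20, k=20):
        counts = Counter()   # k-mer abundance table (exact here; approximate in practice)
        kept = []
        for read in reads:
            kms = kmers(read, k)
            if not kms:
                continue
            median_cov = sorted(counts[km] for km in kms)[len(kms) // 2]
            if median_cov < cutoff:      # read still adds information -> keep it
                kept.append(read)
                counts.update(kms)       # only kept reads contribute to the table
        return kept

Because redundant high-coverage reads are discarded before assembly, the downstream graph scales with the information in the sample rather than the raw data volume.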
Diginorm is widely useful:
1. Assembly of the H. contortus parasitic nematode
genome, a “high polymorphism/variable coverage”
problem.
(Schwarz et al., 2013; pmid 23985341)
2. Reference-free assembly of the lamprey (P. marinus)
transcriptome, a “big assembly” problem. (in prep)
3. Osedax symbiont metagenome, a “contaminated
metagenome” problem (Goffredi et al, 2013; pmid
24225886)
Changes the way analyses scale.
Conway TC, Bromage AJ, Bioinformatics 2011;27:479-486.
Question: does this approach
negatively affect results? (No.)
[Venn diagrams comparing transcripts recovered from diginorm vs. raw assemblies, for Velvet/Oases and Trinity.]
Evaluation of Molgula occulta transcriptome assembly approaches.
Lowe et al., 2014, https://peerj.com/preprints/505/
Putting it in perspective:
Total equivalent of ~1200 bacterial genomes
Human genome ~3 billion bp
Back to soil - what about the assembly results
for Iowa corn and prairie?

Total assembly   Total contigs (>300 bp)   % reads assembled   Predicted protein-coding genes
2.5 billion bp   4.5 million               19%                 5.3 million
3.5 billion bp   5.9 million               22%                 6.8 million

(Adina Howe)
Resulting contigs are low
coverage.
Figure: Coverage (median basepair) distribution of assembled contigs from soil metagenomes.
So, for soil:
 We really do need quite a bit more data to
comprehensively sample gene content of agricultural
soil;
 But at least now we can assemble what we already
have.
 Estimate required sequencing depth at 50 Tbp;
 Now also have 2-8 Tbp from Amazon Rain Forest
Microbial Observatory.
 …still not saturated coverage, but getting closer.
Biogeography: Iowa sample
overlap?
Corn and prairie De Bruijn graphs have 51% overlap.
Suggests that at greater depth, samples may have similar genomic content.
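For illustration, an overlap statistic like this can be computed directly from the k-mer content of the two samples. The sketch below is a simplified stand-in for the actual De Bruijn graph comparison; the helper names and the choice of k are assumptions:

    def kmer_set(sequences, k=31):
        kms = set()
        for seq in sequences:
            for i in range(len(seq) - k + 1):
                kms.add(seq[i:i + k])
        return kms

    def graph_overlap(sample_a, sample_b, k=31):
        a, b = kmer_set(sample_a, k), kmer_set(sample_b, k)
        shared = len(a & b)
        return shared / len(a), shared / len(b)   # overlap as a fraction of each sample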
Blocking problem: we don’t know what
most genes do!
Howe et al, 2014; pmid 24632729
Reminder: the real challenge
is understanding
 We have gotten distracted by shiny toys:
sequencing!! Data!!
 Data is now plentiful! But:
 We typically have no knowledge of what > 50%
of an environmental metagenome “means”,
functionally.
http://ivory.idyll.org/blog/2014-function-of-unknown-genes.html
Data integration as a next
challenge
In 5-10 years, we will have nigh-infinite data.
(Genomic, transcriptomic, proteomic,
metabolomic, …?)
How do we explore these data sets?
Registration, cross-validation, integration with
models…
Carbon cycling in the ocean -
“DeepDOM” cruise, Kujawinski & Longnecker et al.
Integrating many different data types
to build understanding.
Figure 2. Summary of challenges associated with data integration in the proposed project.
"DeepDOM" cruise: examination of dissolved organic matter & microbial metabolism
vs. physical parameters - potential collaboration.
Data/analysis lifecycle
A few thoughts on next
steps.
 Enable scientists with better tools.
 Train a bioinformatics “middle class.”
 Accelerate science via the open science
“network effect”.
That is… what now?
Once you have all this data, what do you do?
"Business as usual simply cannot work."
- David Haussler, 2014
Looking at millions to billions of (human)
genomes in the next 5-10 years.
Enabling scientists with
better tools -
Build robust, flexible computational frameworks
for data exploration, and make them open and
remixable.
Develop theory, algorithms, & software together,
and train people in its use.
(Oh, and stop pretending that we can develop
“black boxes” that will give you the right answer.)
Education and training - towards a
bioinformatics “middle class”
Biology is underprepared for data-intensive investigation.
We must teach and train the next generations.
=> Build a cohort of “data intensive biologists” who can use
data and tools as an intrinsic and unremarkable part of their
research.
~10-20 workshops / year, novice -> masterclass; open
materials.
dib-training.rtfd.org/
Can open science trigger a
“network effect”?
http://prasoondiwakar.com/wordpress/trivia/the-network-effect
The open science “network
effect”
If we have open tools, and trained users,
then what remains to hold us back?
Access to data.
The data deluge is here – it’s
just somewhat hidden.
I actually think this graph should be much steeper.
Tackling data availability…
In 5-10 years, we will have nigh-infinite data.
(Genomic, transcriptomic, proteomic,
metabolomic, …?)
We currently have no good way of querying,
exploring, investigating, or mining these data
sets, especially across multiple locations.
Moreover, most data is unavailable until after
publication, and often it must then be “curated”
to become useful.
Pre-publication data sharing?
There is no obvious reason to make data available prior
to publication of its analysis.
There is no immediate reward for doing so.
Neither is there much systematized reward for doing
so.
(Citations and kudos feel good, but are cold comfort.)
Worse, there are good reasons not to do so.
If you make your data available, others can take
advantage of it…
This bears some similarity to
the Prisoners’ Dilemma:
Where “confession” is not
sharing your data.
Note: I’m not a game theorist
(but some of my best friends
are).
(Leighton Pritchard modification of
http://www.acting-man.com/?p=34313)
So, how do we get academics to
share their data!?
Well, what are people doing now?
Two successful “systems” (send me more!!)
1. Oceanographic research
2. Biomedical research
1. Research cruises are
expensive!
In oceanography,
individual researchers cannot
afford to set up a cruise.
So, they form scientific consortia.
These consortia have data sharing
and preprint sharing agreements.
(I’m told it works pretty well (?))
2. Some data makes more sense
when you have more data
Omberg et al., Nature Genetics, 2013.
Sage Bionetworks et al.:
Organize a consortium to generate
data;
Standardize data generation;
Share via common platform;
Store results, provenance, analysis
descriptions, and source code;
Run a leaderboard for a subset of
analyses;
Win!
This “walled garden” model
is interesting!
“Compete” on analysis, not on data.
Some notes -
 Sage model requires ~similar data in
common format;
 Common analysis platform then becomes
immediately useful;
 Data is ~easily re-usable by participants;
 Publication of data becomes straightforward;
 Both models are centralized and
coordinated. :(
So: can we drive data sharing via a decentralized
model, e.g. a distributed graph database?
[Architecture sketch: raw data sets on public servers, "walled garden" servers, and private servers, tied together by a graph query layer; a compute server (Galaxy? Arvados?) behind a web interface + API; upload/submit to NCBI and KBase; import from MG-RAST, SRA, and EBI.]
ivory.idyll.org/blog/2014-moore-ddd-award.html
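As a purely illustrative sketch of the decentralized idea (every name below is hypothetical, and plain keyword matching stands in for a real graph query): many independently hosted servers expose the same small query interface, a thin client fans a query out to all of them and merges the results, and no central database is required.

    def make_server(records):
        """Stand-in for a remote public, 'walled garden', or private server."""
        def query(term):
            return [r for r in records if term in r["annotation"]]
        return query

    servers = [
        make_server([{"dataset": "soil-iowa-corn", "annotation": "nifH nitrogen fixation"}]),
        make_server([{"dataset": "soil-iowa-prairie", "annotation": "amoA ammonia oxidation"}]),
        make_server([{"dataset": "private-lab-run-42", "annotation": "nifH nitrogen fixation"}]),
    ]

    def federated_query(term):
        hits = []
        for server in servers:
            hits.extend(server(term))   # same interface everywhere; merge client-side
        return hits

    print(federated_query("nifH"))      # hits from both public and private servers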
My larger research vision:
100% buzzword compliant™
Enable and incentivize sharing by providing
immediate utility; frictionless sharing.
Permissionless innovation for e.g. new data
mining approaches.
Plan for poverty with federated infrastructure
built on open & cloud.
Solve people’s current problems, while
remaining agile for the future.
ivory.idyll.org/blog/2014-moore-ddd-award.html
Thanks!
Please contact me at ctbrown@ucdavis.edu!
Soil collaborators: Tiedje (MSU), Jansson (PNNL), Tringe (JGI/DOE)
Editor's Notes
  • #12: Fly-over country (that I live in)
  • #17: A sketch showing the relationship between the number of sequence reads and the number of edges in the graph. Because the underlying genome is fixed in size, as the number of sequence reads increases the number of edges in the graph due to the underlying genome that will plateau when every part of the genome is covered. Conversely, since errors tend to be random and more or less unique, their number scales linearly with the number of sequence reads. Once enough sequence reads are present to have enough coverage to clearly distinguish true edges (which come from the underlying genome), they will usually be outnumbered by spurious edges (which arise from errors) by a substantial factor.
  • #22: High coverage is essential.
  • #23: High coverage is essential.
  • #30: Goal is to do first stage data reduction/analysis in less time than it takes to generate the data. Compression => OLC assembly.
  • #47: Passionate about training; necessary for advancement of field; also deeply self-interested because I find out what the real problems are. (“Some people can do assembly” is not “everyone can do assembly”)
  • #59: Analyze data in cloud; import and export important; connect to other databases.
  • #60: Work with other Moore DDD folk on the data mining aspect. Start with cross validation, move to more sophisticated in-server implementations.