www.HyperionResearch.com
www.hpcuserforum.com
HPC Market Update and
Observations on Big Memory
December 10, 2020
Mark Nossokoff
Senior Analyst, Lead Storage Analyst
Visit Our Website: www.HyperionResearch.com
© Hyperion Research 2020
• Twitter: @HPC_Hyperion
HPC Market Update
Market Area ($M) 2019 2020 2021 2022 2023 2024 CAGR 19-24
Server $13,710 $11,846 $13,295 $15,817 $17,942 $19,044 6.8%
Storage $5,427 $4,772 $5,410 $6,519 $7,577 $8,099 8.3%
Middleware $1,613 $1,402 $1,576 $1,902 $2,171 $2,317 7.5%
Applications $4,689 $4,062 $4,455 $5,258 $5,862 $6,111 5.4%
Service $2,239 $1,899 $2,040 $2,366 $2,587 $2,643 3.4%
Total Revenue $27,678 $23,981 $26,774 $31,862 $36,138 $38,214 6.7%
Source: Hyperion Research, November 2020
[Chart: total on-prem HPC revenue ($M); ~$38B total by 2024, 6.7% CAGR (2019-2024)]
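The CAGR figures in the table follow directly from the endpoint revenues; a minimal sketch of the arithmetic, using the Server and Total rows from the table above:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given span."""
    return (end / start) ** (1 / years) - 1

# Server segment, 2019 -> 2024 ($M), from the table above
print(round(cagr(13_710, 19_044, 5) * 100, 1))  # 6.8

# Total revenue, 2019 -> 2024 ($M)
print(round(cagr(27_678, 38_214, 5) * 100, 1))  # 6.7
```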
On-prem Broader Market Forecast
• Forecast incorporates Covid-19’s impact
• Downside pressure
 Delayed product shipments
 Delayed revenues
 Delayed orders
 Decline of 11.5% in first half of 2020
 Forecasting Y/Y decline of 14% for 2020
• Upside momentum
 Demand to combat Covid-19
 Increase in HPC workloads running in the public cloud
 Expected recovery in mid-2021
Storage is expected to grow the fastest, at an 8.3% CAGR
Source: Hyperion Research, November 2020
HPC-enabled On-prem AI Server Forecast
HPC-Enabled AI Growth ~5x Overall HPC Server Growth, 2019-2024
[Chart: on-prem server revenue forecast ($M), ~$19B total in 2024; CAGR 2019-2024: HPC servers 6.8%, HPDA servers 15.8%, HPC-enabled AI servers (ML, DL and other AI in HPC) 31.1%]
HPC On-Prem Server Forecast By Application Area
Government, Academic, CAE/Manufacturing and Bio-sciences account for >50% of the market
Source: Hyperion Research, November 2020
$M 2019 2020 2021 2022 2023 2024 CAGR 19-24
Bio-Sciences $1,457 $1,239 $1,226 $1,536 $1,739 $1,850 4.9%
CAE $1,721 $1,468 $1,492 $1,859 $2,110 $2,242 5.4%
Chemical Engineering $170 $145 $154 $185 $209 $220 5.2%
DCC & Distribution $825 $696 $681 $857 $970 $1,017 4.3%
Economics/Financial $710 $608 $623 $818 $924 $972 6.5%
EDA / IT / ISV $822 $702 $696 $918 $1,037 $1,091 5.8%
Geosciences $969 $815 $843 $1,010 $1,151 $1,231 4.9%
Mechanical Design $52 $44 $49 $57 $65 $68 5.6%
Defense $1,472 $1,284 $1,317 $1,692 $1,916 $2,027 6.6%
Government Lab $2,418 $2,161 $3,352 $3,314 $3,759 $4,127 11.3%
University/Academic $2,301 $1,993 $2,141 $2,647 $2,981 $3,053 5.8%
Weather $639 $553 $570 $724 $819 $866 6.3%
Other $155 $139 $151 $202 $261 $279 12.5%
Total Revenue $13,710 $11,846 $13,295 $15,817 $17,942 $19,044 6.8%
Market Area ($M) 2019 2020 2021 2022 2023 2024 CAGR 19-24
Server $13,710 $11,846 $13,295 $15,817 $17,942 $19,044 6.8%
Storage $5,427 $4,772 $5,410 $6,519 $7,577 $8,099 8.3%
Middleware $1,613 $1,402 $1,576 $1,902 $2,171 $2,317 7.5%
Applications $4,689 $4,062 $4,455 $5,258 $5,862 $6,111 5.4%
Service $2,239 $1,899 $2,040 $2,366 $2,587 $2,643 3.4%
Public Cloud Spend $3,910 $4,300 $5,300 $4,600 $7,600 $8,800 17.6%
Total On- and Off-Prem Revenue $31,588 $28,281 $32,076 $36,462 $43,739 $47,014 8.3%
HPC Usage in the Cloud
Public cloud usage is expected to add an incremental $8.8B on top of on-prem HPC spend in 2024
Source: Hyperion Research, November 2020
2024 Broader Market Forecast: ~$47B
[Chart: 2024 revenue by segment ($M): Server $19,044; Storage $8,099; Middleware $2,317; Applications $6,111; Service $2,643; Public Cloud Spend $8,800]
Key Buying Requirements For On-prem HPC
Price/performance and overall performance on specific applications are the top items
Top Criteria For Next Purchase
Price 83%
Application Performance 61%
Security 25%
Faster CPUs 25%
AI-Big Data Capabilities 22%
Interconnect Performance 16%
Quality 15%
Accelerators 14%
Storage 11%
Memory Bandwidth 10%
Backwards Compatibility with Current Systems 10%
Source of Open Source Software 4%
Other 3%
Observations on Big Memory and HPC
What is Big Memory?
High capacity, performant, resilient data via memory footprint and accessibility
• Persistent Memory + Memory Virtualization Software

Historic perspective on memory vs Big Memory:
• Cost: Expensive → Less expensive
• Capacity: 100s of GB of memory per server → 100s of TB of memory per server
• Resiliency: Volatile → HA tier
• Relationship to storage: Extension of memory → Data is in memory

Data access tiers, historic view (type, form factor):
• Hot/Active: Integrated (n/a); DRAM (DIMM); SSD (AIC, U.2, M.2, EDSFF); dual-actuator HDD (3.5”)
• Warm: HDD (3.5”)
• Cold: Tape

Data access tiers, with Big Memory (type, form factor):
• Hot/Active: Integrated (n/a); DRAM (DIMM); Persistent Memory (DIMM)
• Warm: SSD (AIC, U.2, M.2, EDSFF); dual-actuator HDD (3.5”); HDD (3.5”)
• Cold: Tape
Application Revenue ($M) 2019 2024 CAGR 19-24
Bio-Sciences $1,457 $1,850 4.9%
CAE / Manufacturing $1,721 $2,242 5.4%
Chemical Engineering $170 $220 5.2%
DCC & Distribution $825 $1,017 4.3%
Economics/Financial $710 $972 6.5%
EDA / IT / ISV $822 $1,091 5.8%
Geosciences $969 $1,231 4.9%
Mechanical Design $52 $68 5.6%
Defense $1,472 $2,027 6.6%
Government Lab $2,418 $4,127 11.3%
University/Academic $2,301 $3,053 5.8%
Weather $639 $866 6.3%
Other $155 $279 12.5%
Total $13,710 $19,044 6.8%
HPC On-Prem Server Forecast By Application Area
Government, Academic, CAE/Manufacturing and Bio-sciences account for >50% of the market
Source: Hyperion Research, November 2020
• Core counts growing faster than memory capacities
• Memory amount per core decreasing
• Can memory be efficiently and effectively pooled and utilized?
[Table highlighting indicates application areas most and likely amenable to Big Memory]
Processors Shipped (estimated; rows correspond in order to the application areas above) 2019 2024 CAGR 19-24
425,956 534,882 4.7%
502,965 648,452 5.2%
49,796 63,591 5.0%
241,401 294,212 4.0%
206,904 281,127 6.3%
240,322 315,575 5.6%
283,098 355,851 4.7%
15,166 19,748 5.4%
430,349 586,136 6.4%
785,793 1,193,592 8.7%
672,908 882,790 5.6%
186,845 250,432 6.0%
45,191 80,660 12.3%
4,086,694 5,507,047 6.1%
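The memory-per-core squeeze described in the bullets above is simple arithmetic; a sketch with hypothetical node configurations (the core counts and capacities are illustrative assumptions, not Hyperion data):

```python
# Illustrative only: hypothetical dual-socket node configs, not Hyperion data.
nodes = {
    "older dual-socket node": {"cores": 2 * 24, "memory_gib": 384},
    "newer dual-socket node": {"cores": 2 * 64, "memory_gib": 512},
}
for name, n in nodes.items():
    print(f"{name}: {n['memory_gib'] / n['cores']:.1f} GiB/core")
# Cores grow ~2.7x while memory grows ~1.3x, so GiB/core falls from 8.0 to 4.0.
```

Pooling memory across nodes (the question the slide raises) is one way to claw back effective capacity per core without buying it per node.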
Key Buying Requirements For On-prem HPC
Price/performance and overall performance on specific applications are the top items
Potential areas Big Memory can address
Workload Use Case Description
Traditional
HPC
Project
• Sometimes referred to as home directories or user files
• Used to capture and share final results of the modelling and simulation
• Mixture of bandwidth and throughput needs, utilizing hybrid flash, HDD storage
solutions
Scratch
• Workspace capacity used to perform the modelling and simulation
• Includes metadata capacity (high throughput [IOs/sec], flash-based) and raw data capacity and checkpoint writes for protection against system component failure during long simulation runs (high bandwidth [GB/s], traditionally HDD-based but now largely hybrid flash and HDD)
Archive
• Long-term data retention
• Scalable storage without a critical latency requirement
• Largely near-line HDD-based systems with a growing cloud-based element.
• Typically file or object data types
HPDA/AI
Ingest
• Quickly loading large amounts of data from a variety of different sources such that the
data can be tagged, normalized, stored and swiftly retrieved for subsequent analysis
• Very high bandwidth (GB/s) performance at scale to sustain retrieving data rates,
typically object-based, high-capacity HDD-based and increasingly cloud-based.
Data
Preparation
• Oftentimes referred to as data classification or data tagging; requires a balanced mix of throughput and bandwidth (hybrid flash and HDD storage systems)
Training
• Utilizing Machine Learning (ML) and/or Deep Learning (DL) to build an accurate model
for researchers, engineers and business analysts to use for their research, design and
business needs
• Requires high throughput (IOs/sec) and low latency for continuous and repetitive
computational analysis of the data, typically flash-based storage.
Inference
• Utilizing the model for experimentation and analysis to derive and deliver the targeted
scientific or business insights
• Also requires high bandwidth and low latency; typically flash-based, often with a caching layer
Archive
• Long-term data retention
• Scalable storage without a critical latency requirement
• Largely near-line HDD-based systems with a growing cloud-based element
• Typically file or object data types
HPC and HPDA/AI Workloads
• Traditional HPC
• Metadata
 Small block, random
 Focus on latency, IOPS
• Simulation data
 Large block, sequential
 Focus on GB/s
• Historically separate data stores
• HPDA / AI
• Heterogeneous I/O profiles
• Interspersed transfer sizes, access patterns and performance focus
• Growing dataset sizes
HPDA/AI workloads are changing the status quo of data access
[Table highlighting indicates use cases most and likely amenable to Big Memory]
Closing Observations on Big Data, Big Memory and HPC
• Conventional thoughts on memory
• Limited amount, expensive, persistent
• Plentiful, less expensive, but not persistent
• Consistent feedback from HPC users for most new technologies
• Is there enough [insert resource] for my [insert task]?
• Is there enough memory for my working dataset size?
• How much will my “time to results” be improved?
• Will it simplify (at least not complicate) system management, data management and workflow?
• Do I need to change any code?
• Can I afford the amount of memory I need for my HPC workloads?
HPDA drives massive growth in data consumption and memory sizes
“Traditional” Memory
• Node-based
• Ephemeral
• Transient
• Byte addressable
• Lowest latencies
“Traditional” Storage
• Add-on
• Persistent
• Resilient
• Block addressable
• Longer latencies
Opportunity
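The byte-addressable vs block-addressable split above is the crux of the opportunity: persistent memory lets an application load and store durable data at byte granularity rather than issuing block I/O. A minimal sketch of that programming model, using an ordinary memory-mapped file as a stand-in for a DAX-mapped persistent-memory device (the file name is illustrative, and a real pmem setup would map a file on a pmem-aware filesystem):

```python
import mmap
import os
import struct

# Ordinary file standing in for a persistent-memory region (illustrative name).
path = "bigmem_demo.bin"
with open(path, "wb") as f:
    f.truncate(4096)  # reserve one page of backing store

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        struct.pack_into("<Q", m, 0, 42)   # byte-addressable store of a u64
        m.flush()                          # msync: make the write durable
        (value,) = struct.unpack_from("<Q", m, 0)

print(value)  # 42
os.remove(path)
```

With real persistent memory the pattern is the same, but durability comes from cache-line flushes to the media rather than an msync to disk, which is what collapses the latency gap between the "memory" and "storage" columns above.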
QUESTIONS?
Questions or comments are welcome.
mnossokoff@hyperionres.com

More Related Content

PPTX
How to Optimize Hortonworks Apache Spark ML Workloads on Modern Processors
PPT
Direct Bond Interconnect (DBI) Technology as an Alternative to Thermal Compre...
PDF
Scaling Beyond 100G With 400G and 800G
PDF
Besi - TSV Summit 2015 - Handout
PDF
Status of The Advanced Packaging Industry_Yole Développement report
PDF
Ayar Labs TeraPHY: A Chiplet Technology for Low-Power, High-Bandwidth In-Pack...
PDF
Growth of advanced packaging - What make it so special? Presentation by Rozal...
PPT
Data Center
How to Optimize Hortonworks Apache Spark ML Workloads on Modern Processors
Direct Bond Interconnect (DBI) Technology as an Alternative to Thermal Compre...
Scaling Beyond 100G With 400G and 800G
Besi - TSV Summit 2015 - Handout
Status of The Advanced Packaging Industry_Yole Développement report
Ayar Labs TeraPHY: A Chiplet Technology for Low-Power, High-Bandwidth In-Pack...
Growth of advanced packaging - What make it so special? Presentation by Rozal...
Data Center

Similar to HPC Market Update and Observations on Big Memory (20)

PDF
Driven by data - Why we need a Modern Enterprise Data Analytics Platform
PPTX
Modernizing the Legacy Data Warehouse – What, Why, and How 1.23.19
PDF
Benefits of Cloud Hosting and SaaS Solutions for IT Solution Providers and th...
PPT
PDF
Big Data in Oil and Gas: How to Tap Its Full Potential
PPTX
Supermicro and The Green Grid (TGG)
PPTX
How to use flash drives with Apache Hadoop 3.x: Real world use cases and proo...
PDF
Hadoop Tutorial | What is Hadoop | Hadoop Project on Reddit | Edureka
PDF
Guest Lecture: Introduction to Big Data at Indian Institute of Technology
PDF
Don't think DevOps think Compliant Database DevOps
PPTX
Big Data and Analytics
PPTX
Big Data and Analytics
PDF
Big Data for Product Managers
PDF
Digital Transformation Journey
PDF
Energy Tech Market View - Vaquero Capital
PPTX
Conflict in the Cloud – Issues & Solutions for Big Data
PPTX
Future of cloud up presentation m_dawson
PDF
Pivotal - Advanced Analytics for Telecommunications
PDF
Big data for product managers
PDF
The Benefits of Flash Storage for Virtualized Environments
Driven by data - Why we need a Modern Enterprise Data Analytics Platform
Modernizing the Legacy Data Warehouse – What, Why, and How 1.23.19
Benefits of Cloud Hosting and SaaS Solutions for IT Solution Providers and th...
Big Data in Oil and Gas: How to Tap Its Full Potential
Supermicro and The Green Grid (TGG)
How to use flash drives with Apache Hadoop 3.x: Real world use cases and proo...
Hadoop Tutorial | What is Hadoop | Hadoop Project on Reddit | Edureka
Guest Lecture: Introduction to Big Data at Indian Institute of Technology
Don't think DevOps think Compliant Database DevOps
Big Data and Analytics
Big Data and Analytics
Big Data for Product Managers
Digital Transformation Journey
Energy Tech Market View - Vaquero Capital
Conflict in the Cloud – Issues & Solutions for Big Data
Future of cloud up presentation m_dawson
Pivotal - Advanced Analytics for Telecommunications
Big data for product managers
The Benefits of Flash Storage for Virtualized Environments
Ad

More from MemVerge (9)

PPTX
Analytical Biosciences Accelerates Single Cell Sequencing with Big Memory
PPTX
Checkpointing the Uncheckpointable
PPTX
Impact of Intel Optane Technology on HPC
PPTX
Live Data: For When Data is Greater than Memory
PPTX
Big Memory for HPC
PDF
Tech Talk: Moneyball - Hitting real-time apps out of the park with Big Memory
PDF
MemVerge Company Overview
PDF
IDC Technology Spotlight: Big Memory Computing Emerges to Better Enable Dat...
PDF
Big Memory Webcast
Analytical Biosciences Accelerates Single Cell Sequencing with Big Memory
Checkpointing the Uncheckpointable
Impact of Intel Optane Technology on HPC
Live Data: For When Data is Greater than Memory
Big Memory for HPC
Tech Talk: Moneyball - Hitting real-time apps out of the park with Big Memory
MemVerge Company Overview
IDC Technology Spotlight: Big Memory Computing Emerges to Better Enable Dat...
Big Memory Webcast
Ad

Recently uploaded (20)

PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PPTX
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PDF
Approach and Philosophy of On baking technology
PDF
CIFDAQ's Market Insight: SEC Turns Pro Crypto
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
KodekX | Application Modernization Development
PPTX
breach-and-attack-simulation-cybersecurity-india-chennai-defenderrabbit-2025....
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PDF
NewMind AI Monthly Chronicles - July 2025
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PDF
NewMind AI Weekly Chronicles - August'25 Week I
PDF
Modernizing your data center with Dell and AMD
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Advanced Soft Computing BINUS July 2025.pdf
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
Electronic commerce courselecture one. Pdf
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
Approach and Philosophy of On baking technology
CIFDAQ's Market Insight: SEC Turns Pro Crypto
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
KodekX | Application Modernization Development
breach-and-attack-simulation-cybersecurity-india-chennai-defenderrabbit-2025....
The Rise and Fall of 3GPP – Time for a Sabbatical?
NewMind AI Monthly Chronicles - July 2025
“AI and Expert System Decision Support & Business Intelligence Systems”
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
NewMind AI Weekly Chronicles - August'25 Week I
Modernizing your data center with Dell and AMD
Unlocking AI with Model Context Protocol (MCP)
Advanced Soft Computing BINUS July 2025.pdf
Mobile App Security Testing_ A Comprehensive Guide.pdf
Electronic commerce courselecture one. Pdf
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf

HPC Market Update and Observations on Big Memory

  • 1. www.HyperionResearch.com www.hpcuserforum.com HPC Market Update and Observations on Big Memory December 10, 2020 Mark Nossokoff Senior Analyst, Lead Storage Analyst
  • 2. Visit Our Website: www.HyperionResearch.com © Hyperion Research 2020 2 • Twitter: @HPC_Hyperion
  • 3. 3 HPC Market Update © Hyperion Research 2020
  • 4. Market Area ($M) 2019 2020 2021 2022 2023 2024 CAGR 19-24 Server $13,710 $11,846 $13,295 $15,817 $17,942 $19,044 6.8% Storage $5,427 $4,772 $5,410 $6,519 $7,577 $8,099 8.3% Middleware $1,613 $1,402 $1,576 $1,902 $2,171 $2,317 7.5% Applications $4,689 $4,062 $4,455 $5,258 $5,862 $6,111 5.4% Service $2,239 $1,899 $2,040 $2,366 $2,587 $2,643 3.4% Total Revenue $27,678 $23,981 $26,774 $31,862 $36,138 $38,214 6.7% Source: Hyperion Research, November 2020 6.7% CAGR (2019-2024) ~$38B Total ($M) On-prem Broader Market Forecast • Forecast incorporates Covid-19’s impact • Downside pressure  Delayed product shipments  Delayed revenues  Delayed orders  Decline of 11.5% in first half of 2020  Forecasting Y/Y decline of 14% for 2020 • Upside momentum  Demand to combat Covid-19  Increase in HPC workloads running in the public cloud  Expected recovery in mid 2021 4© Hyperion Research 2020 Storage is expected to grow the most at 8.3% Source: Hyperion Research, November 2020
  • 5. Source: Hyperion Research, November 2020 HPC-enabled AI  HPC Servers  HPC-Enabled AI  ML in HPC  DL in HPC  Other AI in HPC ($M) 5© Hyperion Research 2020 HPC-enabled On-prem AI Server Forecast HPC-Enabled AI Growth ~ 5x Overall HPC Server Growth 2019-2024 ~$19B total • 6.8% growth • 15.8% growth • 31.1% growth HPC Servers HPC-enabled AI Servers HPDA Servers
  • 6. 6© Hyperion Research 2020 HPC On-Prem Server Forecast By Application Area Government, Academic, CAE/Manufacturing and Bio-sciences >50% of market Source: Hyperion Research, November 2020 $M 2019 2020 2021 2022 2023 2024 CAGR 19-24 Bio-Sciences $1,457 $1,239 $1,226 $1,536 $1,739 $1,850 4.9% CAE $1,721 $1,468 $1,492 $1,859 $2,110 $2,242 5.4% Chemical Engineering $170 $145 $154 $185 $209 $220 5.2% DCC & Distribution $825 $696 $681 $857 $970 $1,017 4.3% Economics/Financial $710 $608 $623 $818 $924 $972 6.5% EDA / IT / ISV $822 $702 $696 $918 $1,037 $1,091 5.8% Geosciences $969 $815 $843 $1,010 $1,151 $1,231 4.9% Mechanical Design $52 $044 $049 $057 $065 $068 5.6% Defense $1,472 $1,284 $1,317 $1,692 $1,916 $2,027 6.6% Government Lab $2,418 $2,161 $3,352 $3,314 $3,759 $4,127 11.3% University/Academic $2,301 $1,993 $2,141 $2,647 $2,981 $3,053 5.8% Weather $639 $553 $570 $724 $819 $866 6.3% Other $155 $139 $151 $202 $261 $279 12.5% Total Revenue $13,710 $11,846 $13,295 $15,817 $17,942 $19,044 6.8%
  • 7. Market Area ($M) 2019 2020 2021 2022 2023 2024 CAGR 19-24 Server $13,710 $11,846 $13,295 $15,817 $17,942 $19,044 6.8% Storage $5,427 $4,772 $5,410 $6,519 $7,577 $8,099 8.3% Middleware $1,613 $1,402 $1,576 $1,902 $2,171 $2,317 7.5% Applications $4,689 $4,062 $4,455 $5,258 $5,862 $6,111 5.4% Service $2,239 $1,899 $2,040 $2,366 $2,587 $2,643 3.4% Public Cloud Spend $3,910 $4,300 $5,300 $4,600 $7,600 $8,800 17.6% Total On and Off Prem Revenue $31,588 $28,281 $32,076 $36,462 $43,739 $47,014 8.3% HPC Usage in the Cloud 7© Hyperion Research 2020 Expected to incrementally add $8.8B to on-prem HPC spend in 2024 Source: Hyperion Research, November 2020 Source: Hyperion Research, November 2020 Server, $19,044 Storage, $8,099 Middleware, $2,317 Applications, $6,111 Service, $2,643 Public Cloud Spend, $8,800 Server Storage Middleware Applications Service Public Cloud Spend 2024 Broader Market Forecast - ~$47B
  • 8. 8© Hyperion Research 2020 Key Buying Requirements For On-prem HPC Price/performance and overall performance on specific applications the top items Top Criteria For Next Purchase Price 83% Application Performance 61% Security 25% Faster CPUs 25% AI-Big Data Capabilities 22% Interconnect Performance 16% Quality 15% Accelerators 14% Storage 11% Memory Bandwidth 10% Backwards Compatibility with Current Systems 10% Source of Open Source Software 4% Other 3%
  • 9. 9 Observations on Big Memory and HPC © Hyperion Research 2020
  • 10. Historic perspective on memory Cost Expensive Capacity 100s GB memory per server Resiliency Volatile Relationship to Storage Extension of memory Data Access Type Form Factor Hot/Active Integrated n/a DRAM DIMM SSD AIC, U.2, M.2, EDSFF HDD dual actuator 3.5” Warm HDD 3.5” Cold Tape • Persistent Memory + Memory Virtualization Software 10 What is Big Memory? High capacity, performant, resilient data via memory footprint and accessibility Data Access Type Form Factor Hot/Active Integrated n/a DRAM DIMM Persistent Memory DIMM Warm SSD AIC, U.2, M.2, EDSFF HDD dual actuator 3.5” HDD 3.5” Cold Tape Historic perspective on memory HPC Requirements Cost Expensive Capacity 100s GB memory per server Resiliency Volatile Relationship to Storage Extension of memory Historic perspective on memory HPC Requirements Big Memory Cost Expensive Less expensive Capacity 100s GB memory per server 100s TB memory per server Resiliency Volatile HA Tier Relationship to Storage Extension of memory Data is in memory
  • 11. Application Revenue ($M) 2019 2024 CAGR 19-24 Bio-Sciences $1,457 $1,850 4.9% CAE / Manufacturing $1,721 $2,242 5.4% Chemical Engineering $170 $220 5.2% DCC & Distribution $825 $1,017 4.3% Economics/Financial $710 $972 6.5% EDA / IT / ISV $822 $1,091 5.8% Geosciences $969 $1,231 4.9% Mechanical Design $52 $068 5.6% Defense $1,472 $2,027 6.6% Government Lab $2,418 $4,127 11.3% University/Academic $2,301 $3,053 5.8% Weather $639 $866 6.3% Other $155 $279 12.5% Total $13,710 $19,044 6.8% Application Revenue ($M) 2019 2024 CAGR 19-24 Bio-Sciences $1,457 $1,850 4.9% CAE / Manufacturing $1,721 $2,242 5.4% Chemical Engineering $170 $220 5.2% DCC & Distribution $825 $1,017 4.3% Economics/Financial $710 $972 6.5% EDA / IT / ISV $822 $1,091 5.8% Geosciences $969 $1,231 4.9% Mechanical Design $52 $068 5.6% Defense $1,472 $2,027 6.6% Government Lab $2,418 $4,127 11.3% University/Academic $2,301 $3,053 5.8% Weather $639 $866 6.3% Other $155 $279 12.5% Total $13,710 $19,044 6.8% Application Revenue ($M) 2019 2024 CAGR 19-24 Bio-Sciences $1,457 $1,850 4.9% CAE / Manufacturing $1,721 $2,242 5.4% Chemical Engineering $170 $220 5.2% DCC & Distribution $825 $1,017 4.3% Economics/Financial $710 $972 6.5% EDA / IT / ISV $822 $1,091 5.8% Geosciences $969 $1,231 4.9% Mechanical Design $52 $068 5.6% Defense $1,472 $2,027 6.6% Government Lab $2,418 $4,127 11.3% University/Academic $2,301 $3,053 5.8% Weather $639 $866 6.3% Other $155 $279 12.5% Total $13,710 $19,044 6.8% HPC On-Prem Server Forecast By Application Area 11© Hyperion Research 2020 Government, Academic, CAE/Manufacturing and Bio-sciences >50% of market Source: Hyperion Research, November 2020 • Core counts growing faster than memory capacities • Memory amount per core decreasing • Can memory be efficiently and effectively pooled and utilized? 
Most amenable to Big Memory Likely amenable to Big Memory Processors Shipped (estimated) 2019 2024 CAGR 19-24 425,956 534,882 4.7% 502,965 648,452 5.2% 49,796 63,591 5.0% 241,401 294,212 4.0% 206,904 281,127 6.3% 240,322 315,575 5.6% 283,098 355,851 4.7% 15,166 19,748 5.4% 430,349 586,136 6.4% 785,793 1,193,592 8.7% 672,908 882,790 5.6% 186,845 250,432 6.0% 45,191 80,660 12.3% 4,086,694 5,507,047 6.1%
  • 12. Top Criteria For Next Purchase Price 83% Application Performance 61% Security 25% Faster CPUs 25% AI-Big Data Capabilities 22% Interconnect Performance 16% Quality 15% Accelerators 14% Storage 11% Memory Bandwidth 10% Backwards Compatibility with Current Systems 10% Source of Open Source Software 4% Other 3% Top Criteria For Next Purchase Price 83% Application Performance 61% Security 25% Faster CPUs 25% AI-Big Data Capabilities 22% Interconnect Performance 16% Quality 15% Accelerators 14% Storage 11% Memory Bandwidth 10% Backwards Compatibility with Current Systems 10% Source of Open Source Software 4% Other 3% 12© Hyperion Research 2020 Key Buying Requirements For On-prem HPC Price/performance and overall performance on specific applications the top items Potential areas Big Memory can address
  • 13. HPC and HPDA/AI Workloads

Traditional HPC workload use cases:
 Project
  - Sometimes referred to as home directories or user files
  - Used to capture and share final results of the modelling and simulation
  - Mixture of bandwidth and throughput needs, utilizing hybrid flash/HDD storage solutions
 Scratch
  - Workspace capacity used to perform the modelling and simulation
  - Includes metadata capacity (high throughput [IOs/sec], flash-based) and raw data capacity plus checkpoint writes for protection against system component failure during long simulation runs (high bandwidth [GB/s], traditionally HDD-based but now largely hybrid flash and HDDs)
 Archive
  - Long-term data retention
  - Scalable storage without a critical latency requirement
  - Largely near-line HDD-based systems with a growing cloud-based element
  - Typically file or object data types

HPDA/AI workload use cases:
 Ingest
  - Quickly loading large amounts of data from a variety of sources so the data can be tagged, normalized, stored and swiftly retrieved for subsequent analysis
  - Very high bandwidth (GB/s) performance at scale to sustain data-retrieval rates; typically object-based, high-capacity HDD-based and increasingly cloud-based
 Data Preparation
  - Often referred to as data classification or data tagging; requires a balanced mix of throughput and bandwidth (hybrid flash and HDD storage systems)
 Training
  - Utilizing machine learning (ML) and/or deep learning (DL) to build an accurate model for researchers, engineers and business analysts to use for their research, design and business needs
  - Requires high throughput (IOs/sec) and low latency for continuous and repetitive computational analysis of the data; typically flash-based storage
 Inference
  - Utilizing the model for experimentation and analysis to derive and deliver the targeted scientific or business insights
  - Also requires high bandwidth and low latency; typically flash-based, often with a caching layer
 Archive
  - Long-term data retention; scalable storage without a critical latency requirement; largely near-line HDD-based systems with a growing cloud-based element; typically file or object data types

HPC and HPDA/AI I/O profiles:
• Traditional HPC
  Metadata: small block, random; focus on latency, IOPs
  Simulation data: large block, sequential; focus on GB/s
  Historically separate data stores
• HPDA/AI
  Heterogeneous I/O profiles
  Interspersed transfer sizes, access patterns and performance focus
  Growing dataset sizes

HPDA/AI workloads changing the status quo of data access
Most amenable to Big Memory
Likely amenable to Big Memory
13 © Hyperion Research 2020
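The throughput (IOs/sec) versus bandwidth (GB/s) distinction that runs through the workload table above can be made concrete with a quick back-of-the-envelope calculation. This is a sketch with illustrative numbers only; the IOPS and transfer sizes below are assumptions for demonstration, not figures from the deck:

```python
def effective_bandwidth_gbps(iops, transfer_size_bytes):
    """Bandwidth delivered by a device sustaining `iops` operations
    of `transfer_size_bytes` each, in decimal GB/s."""
    return iops * transfer_size_bytes / 1e9

# Metadata-style workload: many small random 4 KiB I/Os (IOPS-bound).
meta = effective_bandwidth_gbps(500_000, 4096)    # ~2.0 GB/s

# Checkpoint-style workload: fewer large sequential 1 MiB I/Os (GB/s-bound).
ckpt = effective_bandwidth_gbps(10_000, 1 << 20)  # ~10.5 GB/s

print(f"metadata workload:   {meta:.2f} GB/s from 500k IOPS")
print(f"checkpoint workload: {ckpt:.2f} GB/s from 10k IOPS")
```

The point of the sketch: a device can deliver far more IOPS yet far less bandwidth than another, which is why metadata tiers favor flash (latency/IOPs) while simulation and checkpoint tiers favor bandwidth-optimized HDD or hybrid systems.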
  • 14. Closing Observations on Big Data, Big Memory and HPC

• Conventional thoughts on memory
  Limited amount, expensive, persistent
  Plentiful, less expensive, but not persistent
• Consistent feedback from HPC users for most new technologies
  Is there enough [insert resource] for my [insert task]?
  Is there enough memory for my working dataset size?
  How much will my "time to results" be improved?
  Will it simplify (at least not complicate) system management, data management and workflow?
  Do I need to change any code?
  Can I afford the amount of memory I need for my HPC workloads?

HPDA requires massive growth in data consumption and memory sizes

"Traditional" Memory: node-based, ephemeral, transient, byte addressable, lowest latencies
"Traditional" Storage: add-on, persistent, resilient, block addressable, longer latencies
Opportunity: bridge the divide between the two
14 © Hyperion Research 2020
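The byte-addressable versus block-addressable contrast above can be sketched in a few lines. This is a minimal illustration using Python's `mmap` on an ordinary scratch file as a stand-in for persistent memory; the file name, sizes, and offsets are illustrative assumptions, not anything from the deck:

```python
import mmap
import os

path = "bigmem_demo.bin"          # hypothetical scratch file
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)       # one 4 KiB "block"

# Byte-addressable (memory-like): store to a single byte in place.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    mm[1234] = 0x42               # one-byte update via load/store semantics
    mm.flush()                    # loosely analogous to a pmem flush/fence
    mm.close()

# Block-addressable (storage-like): a read-modify-write of the whole block.
with open(path, "r+b") as f:
    block = bytearray(f.read(4096))
    readback = block[1234]        # the mmap store is visible here (0x42)
    block[1234] = 0x43
    f.seek(0)
    f.write(block)

os.remove(path)
```

The design point: with byte addressability the application touches exactly the data it needs at memory latencies, while the conventional storage path must move whole blocks through the I/O stack, which is the gap Big Memory aims to close.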
  • 15. QUESTIONS? Questions or comments are welcome. mnossokoff@hyperionres.com 15 © Hyperion Research 2020

Editor's Notes

  • #5: Delayed product shipments: worker illnesses and health precautions temporarily shut down HPC component suppliers in 1Q20. Delayed revenues: delayed shipments caused delayed revenues, and even when equipment could be shipped, workers weren't permitted on-site to do installations for a period of time; most vendors expect customers to fully spend their HPC budgets once the Covid threat subsides. Delayed orders: the inability to meet customers face-to-face or attend sales-supporting conferences and events temporarily constricted the new business pipeline.
  • #6: Growth in HPDA and HPC-enabled AI servers expected to far outpace the overall HPC server market by almost 5x. Overall HPC Server 2019-2024 CAGR: ~6.8% HPC-based AI Server 2019-2024 CAGR: ~31.1%
  • #8: Now to turn briefly from on-prem HPC spending to HPC spending in the cloud. It's important to note this forecast represents what users pay to use the cloud for HPC, as opposed to what CSPs are spending to build and support infrastructure to deliver HPC services to users. This past year exposed one of the benefits of moving workloads to the cloud: the ability to quickly and completely turn resources on and off based on business need. This was especially true in the Workgroup segment. HPC user spending in the cloud.
  • #9: Now to take a quick look at the primary motivations for users' on-prem HPC spending. Not surprisingly, "price" tops the list. It's important to highlight, though, that while raw cost/budget is a key factor, relative cost is even more critical; that is, metrics like price/capacity ($/GB) and price/performance (where performance is some kind of "time to result" metric).
  • #11: OK, so what is Big Memory? At the highest, macro level, Big Memory is "high-capacity, performant, resilient data via memory footprint and accessibility". So what does that mean?
  • #12: Let’s circle back now and look at HPC market data with Big Memory in mind. Big Memory aspires to provide the greatest benefits to memory-intensive and data-intensive workloads where the dataset can be fully contained in the memory footprint. Application areas that exhibit these characteristics include… Others that may benefit further include… Lastly, it’s not that processor count is a good proxy for memory and big memory, but it is instructive to comment on its growth and relationship to memory.
  • #13: So which of the aforementioned Key Buying Requirements can be addressed by Big Memory?
  • #14: Let’s take a slightly different workload perspective. Traditional HPC mod/sim use cases can loosely be grouped as Project, Scratch and Archive, each with its own separate and disparate data-access and I/O profiles. In contrast, HPDA and AI use cases display different, more heterogeneous access and I/O profiles, Training and Inference in particular. Real-world models require increased accuracy, which is a function of the size of the model and training set, and both will continue to grow substantially.
  • #15: Some closing thoughts on Big Data, Big Memory and HPC before moving on to our next speaker. Traditional memory… traditional storage… presenting a technology gap and a business opportunity to bridge the divide. Is there ever "enough"? Like closets or any general home storage space: if you have it, it will get utilized. Utilization is also a function of where it's located. The fridge in the kitchen is used more than the garage or basement fridge, and both are accessed more readily than making a run to the supermarket.