HPC Environments for
Leading Edge Simulations
1
Greg Clifford
Manufacturing Segment Manager
clifford@cray.com
Topics:
● Cray, Inc. and some customer examples
● Application scaling examples
● The Manufacturing segment is set for a significant step forward in performance
2
2
Cray Industry Solutions
Manufacturing | Earth Sciences | Energy | Life Sciences | Financial Services
Anything That Can Be Simulated Needs a Cray
Computation | Analysis | Storage/Data
Cray Inc. – 2013
3
Supercomputing Solutions
Supercomputing Solutions to Match the Needs of the Application
Capacity Focus (Highly Configurable Solutions): Cray CS300 Series – Flexible Performance
Capability Focus (Tightly Integrated Solutions): Cray XC30 Series – Scalable Performance
Cray Inc. – 2013
4
Cray Specializes in Large Systems…
Over 45 PF in XE6 and XK7 systems
5
New Clothes: NERSC - Edison
6
Running Large Jobs…
NERSC “Now Computing” Snapshot (taken Sept. 4th 2013)
7
WRF Hurricane Sandy Simulation on Blue Waters
8
●  Initial analysis of the WRF output is showing some very striking features of Hurricane Sandy. The difference in level of detail between a 3 km WRF simulation and the Blue Waters 500 m run is apparent in these radar reflectivity results.
[Images: 3 km WRF results vs. Blue Waters 500 m WRF results]
WRF Hurricane Sandy Simulation on Blue Waters
9
Cavity Flow Studies using HECToR (Cray XE6)
S. Lawson et al., University of Liverpool
10
●  1.1 billion grid point model
●  Scaling to 24,000 cores
●  Good agreement between experiments and CFD
* Ref: http://www.hector.ac.uk/casestudies/ucav.php
11
CTH Shock Physics
CTH is a multi-material, large-deformation, strong shock wave, solid mechanics code and is one of the most heavily used computational structural mechanics codes on DoD HPC platforms.
"For large models, CTH will show linear scaling to over 10,000 cores. We have not seen a limit to the scalability of the CTH application."
"A single parametric study can easily consume all of the ORNL Jaguar resources."
– CTH developer
Seismic processing compute requirements
12
[Chart: seismic algorithm complexity vs. compute requirements, 0.1 to 1,000 petaFLOPS, 1995–2020. In order of increasing complexity: asymptotic approximation imaging, paraxial isotropic/anisotropic imaging, isotropic/anisotropic modeling, isotropic/anisotropic RTM, elastic modeling/RTM, isotropic/anisotropic FWI, visco-elastic modeling, elastic FWI, petro-elastic inversion, visco-elastic FWI. The one-petaflop level is marked.]
A petaflop-scale system is required to deliver the capability to move to a new level of seismic imaging.
13
Compute requirements in CAE
[Chart: compute requirements as a function of simulation fidelity and number of runs – from a single run through design exploration, design optimization, and robust design with multiple runs – spanning a 16-core desktop, a 100-core departmental cluster, a 1,000-core central compute cluster, and a >2,000-core supercomputing environment.]
"Simulation allows engineers to know, not guess – but only if IT can deliver dramatically scaled up infrastructure for mega simulations…. 1000s of cores per mega simulation"
– CAE developer
CAE Application Workload
14
CFD (30%) | Structures (20%) | Impact/Crash (40%)
The vast majority of large simulations are MPI parallel.
Basically the same codes are used across all industries.
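To make "MPI parallel" concrete, the following is a minimal, hypothetical C sketch of the domain-decomposition pattern these codes rely on: the mesh is split across MPI ranks and each rank works on its own piece. It is illustrative only, not code from any ISV solver, and the element count and partitioning are placeholder values.

    #include <mpi.h>
    #include <stdio.h>

    /* Hypothetical sketch: split a mesh across MPI ranks (domain decomposition),
     * the pattern most large CAE/CFD solvers use.  Not code from any ISV product. */
    int main(int argc, char **argv)
    {
        const long n_elements = 70000000L;   /* placeholder: a 70M-element mesh */
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank owns a contiguous chunk of the mesh. */
        long chunk = n_elements / size;
        long first = rank * chunk;
        long last  = (rank == size - 1) ? n_elements : first + chunk;

        printf("rank %d of %d owns elements [%ld, %ld)\n", rank, size, first, last);

        /* A real solver would now iterate, exchanging halo data with neighbouring
         * ranks (e.g. MPI_Isend/MPI_Irecv) at every step. */
        MPI_Finalize();
        return 0;
    }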
CAE Workload status
●  ISV codes dominate the commercial CAE workload
●  Many large manufacturing companies have HPC systems with well over 10,000 cores
●  Even in large organizations, very few jobs use more than 256 MPI ranks
●  There is a huge discrepancy between the scalability in production at large HPC centers and in the commercial CAE environment
15
Why aren't commercial CAE environments leveraging scaling for better performance?
16
Often the full power available is not being leveraged.
17
Innovations in the field of Combustion
Propagation of HPC to commercial CAE
18
[Timeline, from early adoption to common in industry:]
●  c. 1978 – Cray-1, vector processing, serial
●  c. 1983 – Cray X-MP, SMP, 2–4 cores
●  c. 1983 – Cray X-MP, Convex: MSC/NASTRAN
●  c. 1988 – Cray Y-MP, SGI: Crash
●  c. 1998 – MPI parallel, "Linux cluster", low density, slow interconnect, ~100 MPI ranks
●  c. 2003 – High density, fast interconnect: Crash & CFD
●  c. 2007 – Extreme scalability, proprietary interconnect, 1000s of cores, requires "end-to-end parallel"
●  c. 2013 – Cray XE6; driving apps: CFD, CEM, ???
Obstacles to extreme scalability using ISV CAE codes
19
1.  Most CAE environments are configured for capacity computing
—  Difficult to schedule 1000s of cores for one simulation
—  Simulation size and complexity are driven by the available compute resources
—  This will change as compute environments evolve
2.  Application license fees are an issue
—  Application costs can be 2–5 times the hardware costs
—  ISVs are encouraging scalable computing and are adjusting their licensing models
3.  Applications must deliver "end-to-end" scalability (a worked Amdahl's Law example follows this list)
—  Amdahl's Law requires the vast majority of the code to be parallel
—  This includes all of the features in a general-purpose ISV code
—  This is an active area of development for CAE ISVs
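To see why "end-to-end" parallelism matters, the short calculation below applies Amdahl's Law, speedup = 1 / ((1 - p) + p/N), for parallel fraction p on N cores. The fractions and core counts are arbitrary illustrations: even with 99% of the code parallel, speedup saturates near 100x no matter how many cores are thrown at the job.

    #include <stdio.h>

    /* Illustrative Amdahl's Law table: speedup = 1 / ((1 - p) + p / N),
     * where p is the parallel fraction of the code and N the core count. */
    int main(void)
    {
        const double fractions[] = {0.90, 0.99, 0.999};   /* parallel fraction p */
        const int    cores[]     = {100, 1000, 10000};    /* core count N */

        for (int i = 0; i < 3; i++) {
            for (int j = 0; j < 3; j++) {
                double p = fractions[i];
                int    n = cores[j];
                double speedup = 1.0 / ((1.0 - p) + p / n);
                printf("p = %.3f, N = %5d cores -> speedup %7.1fx\n", p, n, speedup);
            }
        }
        return 0;
    }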
Collaboration with Cray
•  Fine-tuned AcuSolve for maximum efficiency on Cray hardware
•  Using Cray MPI libraries
•  Efficient core placement
•  AcuSolve package built specifically for Cray's Extreme Scalability Mode (ESM) – shipped for the first time in V12.0
•  Extensively tested the code on various Cray systems (XE6, XC30)
20
Version 12.0: More Scalable
•  Optimized domain decomposition for hybrid MPI/OpenMP (a generic sketch of this pattern follows below)
•  Added an MPI performance optimizer
•  Nearly perfect scalability seen down to ~4K nodes per subdomain
[Chart: parallel performance on a Linux cluster with an IB interconnect]
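The hybrid MPI/OpenMP decomposition mentioned above generally means MPI between subdomains and OpenMP threads within each subdomain. The fragment below is a generic sketch of that pattern under those assumptions, not AcuSolve's actual implementation; the per-subdomain loop is a stand-in for real element work.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Generic hybrid MPI/OpenMP pattern: one MPI rank per subdomain, OpenMP
     * threads over that subdomain's work.  Not AcuSolve source code. */
    int main(int argc, char **argv)
    {
        int provided, rank;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int local_nodes = 4000;            /* placeholder subdomain size */
        double local_sum = 0.0, global_sum = 0.0;

        /* Threads share the work inside this rank's subdomain. */
        #pragma omp parallel for reduction(+:local_sum)
        for (int i = 0; i < local_nodes; i++)
            local_sum += (double)i;              /* stand-in for per-node work */

        /* Ranks combine their subdomain results. */
        MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %.0f (threads per rank: %d)\n",
                   global_sum, omp_get_max_threads());

        MPI_Finalize();
        return 0;
    }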
21
Version 12.0: Larger Problems
•  In V12.0, the capacity of AcuSolve was increased to efficiently solve problems exceeding 1 billion elements
•  Example: a transient DDES simulation of F1 drafting on ~1 billion elements
22
Case Studies
Case studies are real-life engineering problems; all tests were performed in-house by Cray.
•  Case Study #1: Aerodynamics of a car model (ASMO), referred to as 70M
•  70 million elements
•  Transient incompressible flow (implicit solve)
23
Case Studies
•  Case Study #2: Cabin comfort model, referred to as 140M
•  140 million elements
•  Steady incompressible flow + heat transfer (implicit solve) + radiation
24
Performance Results (combined)
[Chart: scaling of 140M on Cray XC30 (Sandy Bridge + Aries), 70M on Cray XC30 (Sandy Bridge + Aries), 70M on Cray XE6 (AMD + Gemini), and 70M on a Linux cluster (Sandy Bridge + IB), plotted against ideal scaling from a small core count.]
25
Webinar Conclusions
•  Cray's XC30 demonstrated the best performance in terms of both scalability and throughput
•  Parallel performance of the XE6 and XC30 interconnects was superior to IB
•  For small core counts (roughly fewer than 750), AcuSolve parallel performance is satisfactory across multiple platforms
•  Throughput is mostly affected by core type (e.g. Sandy Bridge vs. Westmere)
26
Summary of Cray Value
1. Extreme-fidelity simulations require HPC performance, and extreme scalability is the only way to achieve that performance.
2. Cray systems are designed for large production HPC environments, whether that means a single simulation using 10,000 cores or 100 simulations each using 100 cores.
3. The technology is in place for CAE environments to leverage well over 1,000 cores per simulation, and we are overdue to see extreme scaling leveraged in commercial environments.
Questions?
28
