AcuSolve
Performance Benchmark and Profiling
The HPC Advisory Council

• World-wide HPC organization (240+ members)

• Bridges the gap between HPC usage and its potential

• Provides best practices and a support/development center

• Explores future technologies and future developments

• Working Groups – HPC|Cloud, HPC|Scale, HPC|GPU, HPC|Storage

• Leading edge solutions and technology demonstrations




HPC Advisory Council Members




HPC Advisory Council HPC Center


[Photos: Lustre storage system, GPU cluster, and clusters of 192, 528, and 456 cores]
2012 HPC Advisory Council Workshops



•   Germany Conference – June 17
•   Spain Conference – Sept 13
•   China Conference – October
•   US Stanford Conference – December

• For more information
  – www.hpcadvisorycouncil.com
  – info@hpcadvisorycouncil.com




AcuSolve

 • AcuSolve
  – AcuSolve™ is a leading general-purpose finite element-based
    Computational Fluid Dynamics (CFD) flow solver with superior robustness,
    speed, and accuracy
  – AcuSolve can be used by designers and research engineers with all levels
    of expertise, either as a standalone product or seamlessly integrated into a
    powerful design and analysis application
  – With AcuSolve, users can quickly obtain quality solutions without iterating
    on solution procedures or worrying about mesh quality or topology




Test Cluster Configuration
•   Dell™ PowerEdge™ M610 38-node (456-core) cluster
    – Six-core Intel Xeon X5670 CPUs @ 2.93 GHz (two per node)

    – Memory: 24GB memory, DDR3 1333 MHz

    – OS: RHEL 5.5, OFED 1.5.2 InfiniBand SW stack

•   Intel Cluster Ready certified cluster

•   Mellanox ConnectX-2 InfiniBand adapters and non-blocking switches

•   MPI: Intel MPI 3.0, MVAPICH2 1.0, Platform MPI 7.1

•   InfiniBand-based Lustre Storage: Lustre 1.8.5

•   Application: AcuSolve 1.8a

•   Benchmark datasets:
    – Pipe_fine (700 axial nodes, 3.04 million mesh points total, 17.8 million tetrahedral elements)

    – The test computes the steady state flow conditions for the turbulent flow (Re = 30000) of water in a
       pipe with heat transfer. The pipe is 1 meter in length and 150 cm in diameter. Water enters the inlet
       at room temperature conditions.
AcuSolve Performance – Interconnects
• InfiniBand QDR enables higher cluster productivity
   – Delivers more than 36% higher job productivity than 1GigE on this benchmark
   – The productivity advantage grows as the cluster size increases
• 1GigE still performs reasonably on this benchmark
   – Suggests the application is not highly sensitive to network latency
• The 1GigE test stops at 16 nodes due to a switch port limitation




[Chart: AcuSolve performance, 1GigE vs. InfiniBand QDR; higher is better; ~36% gap at scale]
AcuSolve Performance – MPI Implementations
• Intel MPI performs better than Platform MPI
   – Around 16% higher performance at 32 nodes
   – Reflects that Intel MPI handles MPI data transfers efficiently
• The MVAPICH2 executable is built only with ch3:sock support for TCP networks
   – It therefore does not reflect true InfiniBand verbs performance, unlike the other MPI implementations




[Chart: AcuSolve performance by MPI implementation, InfiniBand QDR; higher is better; Intel MPI ~16% ahead at 32 nodes]
AcuSolve Performance – MPI & OpenMP Hybrid
•     On a single node, the OpenMP hybrid mode performs better than pure MPI
      – OpenMP delivers faster results starting at 6 CPU cores (6 OpenMP threads)
      – OpenMP threads are a lighter-weight alternative to MPI processes
•     The hybrid approach enables scalability by minimizing the number of processes and their communication
      – MPI communication is handled by a single MPI-OpenMP hybrid process on each node
      – The hybrid process is responsible for communication and for spawning the worker threads
      – The OpenMP worker threads are then responsible for the computation (see the sketch after the chart below)
• The graphs below compare Platform MPI to the Platform MPI/OpenMP hybrid


[Charts: pure Platform MPI vs. Platform MPI/OpenMP hybrid, InfiniBand QDR; higher is better; ~16% difference]
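To make the hybrid layout concrete, here is a minimal MPI+OpenMP sketch in C of the pattern described above: one MPI rank per node makes the MPI calls (MPI_THREAD_FUNNELED), while OpenMP worker threads carry the computation. It is an illustrative skeleton, not AcuSolve source code; the array u, the problem size N, and the reduction are stand-ins for solver work.

/* Minimal MPI+OpenMP hybrid skeleton (illustrative only, not AcuSolve source):
 * one MPI rank per node does the communication, OpenMP threads do the work. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000   /* made-up problem size for the example */

int main(int argc, char **argv)
{
    int provided, rank, nranks;
    /* FUNNELED: only the master thread of each rank makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    static double u[N];
    double local_sum = 0.0, global_sum = 0.0;

    /* Worker threads carry the parallel computation */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N; i++) {
        u[i] = (rank + i) * 1e-6;      /* stand-in for solver work */
        local_sum += u[i] * u[i];
    }

    /* The single hybrid process per node handles the MPI communication */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f (ranks = %d, threads per rank = %d)\n",
               global_sum, nranks, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

Such a program would typically be launched with one rank per node and OMP_NUM_THREADS set to the per-node core count (6 or 12 on the test cluster); the exact launcher options differ between MPI implementations.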
AcuSolve Profiling – MPI/User Time Ratio
• Time spent in computation dominates over MPI communication
  – MPI time accounts for only around 40% of the run time at 32 nodes
  – The actual computation time decreases as the cluster scales
• The OpenMP hybrid mode reduces overhead and leaves more time for computation
  – Computation time: 60% of the run in pure MPI mode versus 77% in OpenMP hybrid mode (a simple way to estimate such a ratio is sketched below)




[Chart: MPI time vs. computation time ratio by node count, InfiniBand QDR]
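A minimal sketch of how such an MPI-time/total-time ratio could be estimated by hand, assuming one simply brackets the communication phase with MPI_Wtime; the figures quoted in this study come from an MPI profiling tool, and the loop body below is only a placeholder for solver work.

/* Minimal sketch of estimating an MPI-time / total-time ratio by bracketing
 * communication with MPI_Wtime. The loop body is a stand-in for solver work;
 * the ratios reported in this study come from an MPI profiling tool. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t_start = MPI_Wtime();
    double t_mpi = 0.0;

    for (int iter = 0; iter < 100; iter++) {
        /* "computation" phase: placeholder work */
        volatile double x = 0.0;
        for (int i = 0; i < 1000000; i++) x += i * 1e-9;

        /* communication phase: timed separately */
        double t0 = MPI_Wtime();
        double in = x, out = 0.0;
        MPI_Allreduce(&in, &out, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        t_mpi += MPI_Wtime() - t0;
    }

    double t_total = MPI_Wtime() - t_start;
    if (rank == 0)
        printf("MPI time fraction: %.1f%% of %.2f s\n",
               100.0 * t_mpi / t_total, t_total);

    MPI_Finalize();
    return 0;
}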
AcuSolve Profiling – MPI Calls
• MPI_Recv and MPI_Isend are the most frequently used MPI calls
  – Each accounts for roughly 42-43% of the MPI function calls in a 32-node job
• A large percentage of AcuSolve's MPI calls are non-blocking data transfers
  – The non-blocking API allows data transfer to overlap with computation (illustrated in the sketch below)
  – Communication is further minimized by the OpenMP hybrid mode
  – Together, these two measures let even a slower network maintain decent productivity




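The overlap described above can be illustrated with a short, hypothetical neighbor exchange in C that pairs MPI_Isend with MPI_Recv, the two calls seen in the profile: the send is posted first, independent work proceeds while the message is in flight, and the transfer is completed afterwards. This is not AcuSolve code; the buffer size, data values, and neighbor pattern are invented for the example.

/* Illustrative non-blocking exchange in C (a hypothetical neighbor swap, not
 * AcuSolve code): MPI_Isend is posted first, independent work overlaps the
 * transfer, and MPI_Recv/MPI_Wait complete it afterwards. */
#include <mpi.h>
#include <stdio.h>

#define HALO 1024   /* made-up message size for the example */

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    int right = (rank + 1) % nranks;            /* neighbor to send to   */
    int left  = (rank - 1 + nranks) % nranks;   /* neighbor to recv from */

    double send_buf[HALO], recv_buf[HALO];
    for (int i = 0; i < HALO; i++) send_buf[i] = rank + i * 1e-3;

    MPI_Request req;
    /* Post the non-blocking send first ... */
    MPI_Isend(send_buf, HALO, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &req);

    /* ... and overlap it with computation that does not need the incoming data */
    double interior = 0.0;
    for (int i = 0; i < HALO; i++) interior += send_buf[i] * send_buf[i];

    /* Blocking receive of the neighbor's message, then complete our send */
    MPI_Recv(recv_buf, HALO, MPI_DOUBLE, left, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("overlapped work result: %f, first received value: %f\n",
               interior, recv_buf[0]);

    MPI_Finalize();
    return 0;
}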
AcuSolve Profiling – Time Spent by MPI Calls
• The majority of the MPI time is spent in MPI_Barrier and MPI_Allreduce
  – MPI_Barrier (43%), MPI_Allreduce (40%), MPI_Waitall (14%) at 32 nodes (a typical MPI_Allreduce use is sketched below)
• MPI communication time drops as the cluster scales
  – The total runtime shrinks because more CPUs are working to complete the job faster
  – This reduces the communication time spent in each of the MPI calls




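As a hedged illustration of why MPI_Allreduce features so prominently, the sketch below shows its typical role in an iterative solver: every rank contributes a partial residual and all ranks need the same global norm to test convergence. The residual values are invented for the example; this is not taken from AcuSolve.

/* Illustrative MPI_Allreduce as it typically appears in an iterative flow
 * solver: every rank contributes a partial residual and all ranks need the
 * same global norm to test convergence. Values are invented; not AcuSolve code. */
#include <mpi.h>
#include <math.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each rank owns a piece of the residual vector */
    double local[4] = { 1e-3 * (rank + 1), 2e-3, 3e-3, 4e-3 };
    double local_sq = 0.0, global_sq = 0.0;
    for (int i = 0; i < 4; i++) local_sq += local[i] * local[i];

    /* All ranks receive the same sum, so each can apply the same
       convergence decision without further communication */
    MPI_Allreduce(&local_sq, &global_sq, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global residual norm = %e\n", sqrt(global_sq));

    MPI_Finalize();
    return 0;
}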
AcuSolve Profiling – MPI Message Sizes
• Most MPI messages are small to medium in size
  – The majority of message sizes are below 4KB
• The volume of MPI messages in pure MPI mode is significantly higher than in hybrid mode
  – While the messages stay concentrated in the same size range




AcuSolve Profiling – MPI Data Transfer

• As the cluster grows, substantially less data is transferred between MPI processes
  – Data communication drops from 20-30GB for a single-node simulation
  – To around 6GB for a 32-node simulation




AcuSolve Profiling – MPI Data Transfer
• Communication becomes more concentrated in hybrid mode
  – One hybrid process is launched per node and is responsible for the communication
  – Leaving the OpenMP worker threads to run the parallel computational routines
• As a result, the hybrid mode is more efficient at scale
  – Even though larger data transfers take place between the per-node MPI processes




AcuSolve Profiling – Aggregated Transfer

• Aggregated data transfer refers to:
  – The total amount of data transferred over the network between all MPI ranks collectively
• A large amount of data is transferred in AcuSolve
  – Around 2.5TB of data is exchanged between the nodes at 32 nodes in pure MPI mode
• The OpenMP hybrid mode reduces the overall traffic between MPI processes
  – Less than 870GB of data is transferred in hybrid mode, compared to 2.5TB for the pure MPI case




[Chart: aggregated data transfer, pure MPI vs. OpenMP hybrid, InfiniBand QDR]
AcuSolve – Summary

• Performance
  – AcuSolve is designed for superior performance and scalability
  – InfiniBand allows AcuSolve to run at its most efficient rate
  – Intel MPI produces higher parallel job efficiency than Platform MPI
  – The tested MVAPICH2 executable does not support communication over InfiniBand verbs
• MPI
  – By deploying non-blocking MPI calls, AcuSolve overlaps computation with in-flight communication
  – This allows it to achieve higher job performance while reducing the communication needed
• OpenMP hybrid mode
  – With the hybrid model, less data needs to be exchanged between the nodes in a cluster
  – This allows the job to finish faster, since more resources are available for computation
• Profiling
  – MPI_Isend and MPI_Recv are the most used MPI functions
  – The OpenMP hybrid mode reduces the amount of network data transfer that needs to take place




Thank You
                                                           HPC Advisory Council




     All trademarks are property of their respective owners. All information is provided “As-Is” without any kind of warranty. The HPC Advisory Council makes no representation as to the accuracy or completeness of the information contained herein, and undertakes no duty and assumes no obligation to update or correct any information presented herein.


