Bullx HPC eXtreme computing cluster references
Nov 18th 2013

Bull Extreme Factory
Remote Visualizer
3D Streaming Technology
Nov 18th 2013

Readers’ Choice best server product or technology:
Intel Xeon Processor
Editors’ Choice best server product or technology:
Intel Xeon Phi Coprocessor
Readers’ Choice top product or technology to watch:
Intel Xeon Phi Coprocessor
Nov 18th 2013


Readers’ Choice:
GENCI CURIE for DEUS
(Dark Energy Universe Simulation) project
Needs
Increase the capacity of University of Reims’ ROMEO HPC Center, an
NVIDIA CUDA® Research Center
Develop teaching activities on accelerator technologies
A system to drive research in mathematics and computer science, physics
and engineering sciences, and multiscale molecular modeling.

Solution
A large GPU-accelerated cluster:

260 NVIDIA Tesla K20X GPU accelerators housed in 130 bullx R421 E3 servers
Expected performance: 230 Tflops (Linpack); a rough check follows below
Free-cooling system based on Bull Cool Cabinet Doors
A joint scientific and technical collaboration between Bull and NVIDIA
The new “Romeo” system will be installed this summer
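As a rough sanity check on these figures (derived here, not stated on the slide): each Tesla K20X peaks at about 1.31 Tflops in double precision, so
260 x 1.31 Tflops ≈ 341 Tflops of GPU peak, before the host CPUs are counted.
A Linpack result of roughly 230 Tflops against that combined CPU+GPU peak implies an efficiency in the 60-65% range, typical for GPU-accelerated clusters of this generation.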
Needs
Create a world-class supercomputing center that will be
made available to the Czech science and industry community
Find accommodation for the supercomputer while the IT4I
supercomputing center is built

Solution
The Anselm supercomputer, an 82 Tflops bullx system housed in a
leased mobull container:
180 bullx B510 compute nodes
23 bullx B515 accelerator blades with NVIDIA M2090 GPUs
4 bullx B515 accelerator blades with Intel® Xeon Phi™ coprocessors
Lustre shared file system
Water-cooled rear doors
bullx supercomputer suite
Needs
A system that matches the University’s strong involvement in
sustainable development
A minimum performance of 45 Tflops

Solution
A bullx configuration that optimizes power consumption, footprint
and cooling, with Direct Liquid Cooling nodes and free cooling:
136 dual-socket bullx DLC B710 compute nodes
InfiniBand FDR
Lustre file system
bullx supercomputer suite
PUE 1.13
Free cooling installation
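For context (definition added here, not on the slide): PUE, Power Usage Effectiveness, is the ratio of total facility power to IT equipment power,
PUE = total facility power / IT equipment power,
so a PUE of 1.13 means roughly 13% overhead for cooling and power distribution, i.e. about 113 kW drawn at the facility level for every 100 kW consumed by the compute hardware.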
Needs
A solution focusing on green IT with a very innovative collaboration
and research program

Solution
A bullx supercomputer based on the bullx DLC series
Phase 1 (Q1 2013)
Throughput: 180 bullx B500 blades, IB QDR
HPC: 270 bullx DLC B710 nodes, 24 bullx B515 accelerator blades, IB FDR

Phase 2 (Q3 2014)
HPC: 630 bullx DLC B720 nodes, 4 SMP nodes with 2TB each
A total peak performance > 1.6 Petaflops at the end of phase 2
Needs
Replace the current 41.8 Tflops vector system with a scalar supercomputer
Two identical systems: one for research & one for production

Solution
A bullx configuration that optimizes power consumption, footprint and cooling,
with Direct Liquid Cooling nodes:
Phase 1 (2013): 2 x 475 Tflops peak
─ 2 x 990 dual-socket bullx B710 compute nodes with Intel® Xeon® ‘Ivy Bridge EP’

Phase 2 (2015): 2 x 2.85 Pflops peak
─ 2 x 1800 dual-socket bullx B710 compute nodes with Intel® Xeon® ‘Broadwell EP’

Fat tree InfiniBand FDR
Lustre file system
bullx supercomputer suite
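Background on the peak figures (formula and example added for clarity; the processor SKU is not named on the slide): theoretical peak is
Rpeak = nodes x sockets per node x cores per socket x clock (GHz) x double-precision flops per cycle.
For instance, a hypothetical 10-core, 3.0 GHz Ivy Bridge EP part with AVX (8 DP flops per cycle) gives 990 x 2 x 10 x 3.0 x 8 ≈ 475 Tflops per system, matching the Phase 1 figure.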
Needs
Replace the 65 TFlops Dutch National Supercomputer Huygens
Support a wide variety of scientific disciplines
A solution that can easily be extended
An HPC vendor who can also be a partner

Solution
A bullx supercomputer delivered in 3 phases:
Phase 1: 180 bullx Direct Liquid Cooling B710 nodes – Intel Sandy Bridge-based
+ 32 bullx R428 E3 fat nodes
Phase 2: 360 bullx Direct Liquid Cooling B710 nodes – Intel Ivy Bridge-based
Phase 3: 1080 bullx Direct Liquid Cooling B710 nodes – Intel Haswell-based

A total peak performance in excess of 1.3 Petaflops in phase 3
Needs
Provide high level computing resources for the R&D teams at
AREVA, Astrium, EDF, INERIS, Safran and CEA
Meet the requirements of a large variety of research topics

Solution
200 Tflops supercomputer “Airain” with a flexible architecture
594 bullx B510 compute nodes
InfiniBand QDR interconnect
Lustre file system
bullx supercomputer suite
+ extension used for genomics
180 “memory-rich” bullx B510 compute nodes (128GB of RAM)
Needs
Replacement of the bullx cluster installed in 2007
Support a diverse community of users, from experienced practitioners
to those just starting to consider HPC

Solution
A dedicated MPI compute node partition

─ 128 dual-socket bullx B510 compute nodes with Intel® Xeon® E5-2670
─ 16 “memory-rich” nodes for codes with large memory requirements
A dedicated HTC compute node partition

─ 72 refurbished bullx B500 blades
InfiniBand QDR
Lustre
bullx supercomputer suite
Needs
Upgrade the computing capacities dedicated to aerodynamics

Solution
A homogeneous cluster of 72 compute nodes
A few specialized nodes used either as “pure” compute nodes or as
hybrid nodes transferring part of the calculations to accelerators
bullx R424-E/F3 2U servers each housing 4 compute nodes
1 NVIDIA 1U system with 4 GPUs
InfiniBand QDR
Managed with the bullx supercomputer suite.
«The bullx cluster provides the ease of use and robustness that our
engineers are entitled to expect from an everyday tool for their work.»
Atomic Weapons Establishment

AWE confirms its trust in Bull with the upgrade
of its 3 bullx supercomputers
New blades in the existing infrastructure
Simple replacement of the initial blades with new bullx B510
blades featuring the latest Sandy Bridge EP CPUs
Willow 2x 35 Tflops → Whitebeam 2x 156 Tflops
Blackthorn 145 Tflops → Sycamore 398 Tflops
All existing bullx chassis re-used to house the new blades
Upgrade of the storage systems
Cluster software upgraded to bullx supercomputer suite 4
Bundesanstalt für Wasserbau
Needs
Replace one of their 2 compute clusters used for:
─ 2D and 3D modeling of rivers
─ 3D modeling of flows
─ Reliability analyses (Monte Carlo simulations)

Solution
126 bullx B510 compute nodes (2x Intel® Xeon® E5-2670)
Bull Cool Cabinet doors (water-cooled)
Full non-blocking InfiniBand QDR interconnect network
Panasas Storage System (110 TB)
Cluster software: Hpc.manage powered by scVENUS (a solution
from science + computing, a Bull Group company)
HPC Midlands Consortium

Needs
Make world-class HPC facilities accessible to both academic and
industrial researchers, and especially to smaller companies, to
facilitate innovation, growth and wealth creation
Encourage industrially relevant research to benefit the UK economy

Solution
A bullx supercomputer with a peak performance of 48 TF:
188 bullx B510 compute nodes (Intel® Xeon® E5-2600)
Lustre parallel file system (with LSI/Netapp HW)
Water-cooled racks
This research center active in the fields of energy &
transport wanted:
A 100 Tflops extension to their computing resources
To provide sustainable technologies to meet the challenges of climate
change, energy diversification & water resource management

Solution
A bullx supercomputer delivering 130 Tflops peak:
392 B510 compute nodes (Intel® Xeon® E5-2670)
new generation InfiniBand FDR interconnect
GPFS on LSI storage
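Applying the same peak formula with the stated E5-2670 parts (8 cores at 2.6 GHz, 8 DP flops per cycle with AVX), as a cross-check added here:
392 x 2 x 8 x 2.6 x 8 ≈ 130.5 Tflops, consistent with the 130 Tflops peak quoted above.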
Needs
Create a world-class manufacturing research
centre
Finite-element-based modeling of detailed 3D time-dependent manufacturing processes

Solution
72 bullx B510 compute nodes (Intel® Xeon® E5-2670)
1 bullx S6030 supernode
Needs
One of the newest public universities in Spain, it
needed a high density compute cluster:
For the Physical Chemistry Division
To design multifunctional nano-structured materials

Solution
A complete solution with:
36 bullx B500 compute blades (Intel® Xeon® 5640)
installation, training, 5-year maintenance
This innovative engineering company
specializing in design for the motor
racing industry wanted to:
Support the use of advanced virtual engineering
technologies, developed in-house, for complete
simulated vehicle design, development and testing

Solution
198 bullx B500 compute blades
2 memory-rich bullx S6010 compute nodes for pre- and post-meshing
Needs
“Keep content looking great wherever it’s played”
An ultra-dense HPC platform optimized for large scale
video processing

Solution
TITAN, built on bullx B510 blades: a scalable video
processing platform that enables massively parallel
content transcoding into multiple formats at a very high
degree of fidelity to the original
This Belgian research center working for
the aeronautics industry wanted to:
Double their HPC capacity
Find an easy way to extend their computer room
capacity

Solution
A bullx system delivering 40 Teraflops (bullx B500
compute nodes)
Installed in a mobull mobile data centre
Banco Bilbao Vizcaya Argentaria needed to
reduce run time for mathematical models to:
manage financial risks better
have a competitive advantage and get the best price for
complex financial products

Solution
A bullx cluster delivering 41 Teraflops, with:
80 bullx R424-E2 compute nodes
2 bullx R423-E2 service nodes
The Dutch national meteorological institute (KNMI) was looking for:
More computing power to be able to issue early warnings in case
of extreme weather and enhance capabilities for climate research

Solution
A system 40 times more powerful than KNMI’s previous system:
396 bullx B500 compute nodes, equipped with Intel® Xeon® Series
5600 processors
9.5 TB memory
peak performance 58.2 Tflop/s

“The hardware, combined with Bull's expert support, gives us
confidence in our cooperation”
300 Tflops peak
A massively parallel section (MPI) including
1,350 bullx B500 processing nodes with a total
of 16,200 Intel® Xeon® cores
An SMP (symmetric multiprocessing) section
including 11,456 Intel® Xeon® cores, grouped
into 181 bullx S6010/S6030 supernodes
Over 90 Terabytes of memory
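A quick consistency note added here: the 16,200-core total works out to 12 cores per dual-socket bullx B500 blade, i.e. two six-core Xeon processors per node,
1,350 nodes x 2 sockets x 6 cores = 16,200 cores.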
Join the Bull User group for eXtreme computing

www.bux-org.com
