Available HPC resources at CSUC
Adrián Macía
19 / 02 / 2020
Summary
• Who are we?
• High performance computing at CSUC
• Hardware facilities
• Working environment
• Development environment
What is the CSUC?
• CSUC is a public consortium born from the
merger of CESCA and CBUC
• Institutions part of the consortium:
• Associated institutions:
What do we do?
Scientific
computing
Communications
IT Infrastructures
Procurements
Scientific
documentation
management
Joint purchases
Electronic
administration
HPC matters
• Nowadays simulation is a fundamental tool to
solve and understand problems in science and
engineering
Theory
Simulation
Experiment
HPC role in science and engineering
• HPC allows researchers to solve problems
that would otherwise be out of reach
• Numerical simulations are used in a wide
variety of fields, such as:
Chemistry and
materials sciences
Life and health
sciences
Mathematics,
physics and
engineering
Astronomy, space
and Earth sciences
Main applications per knowledge area
Chemistry and materials science: VASP, Siesta, Gaussian, ADF, CP2K
Life and health sciences: Amber, GROMACS, NAMD, Schrödinger, VMD
Mathematics, physics and engineering: OpenFOAM, FDS, Code_Aster, ParaView
Astronomy and Earth sciences: WRF, WPS
Software available
• A detailed list of the installed software is
available at: https://confluence.csuc.cat/display/HPCKB/Installed+software
• If you don't find your application, ask the
support team and we will be happy to install it
for you or help you with the installation
process
Demography of the service: users
• 32 research projects from 14 different
institutions are using our HPC service.
• These projects are distributed as:
– 11 Large HPC projects (> 500.000 UC)
– 3 Medium HPC projects (250.000 UC)
– 13 Small HPC projects (100.000 UC)
– 1 XSmall HPC project (40.000 UC)
Demography of the service: jobs (I)
Jobs per # cores
Demography of the service: jobs (II)
% Jobs vs Memory/core
Top 10 apps per usage (2019)
Usage per knowledge area (2019)
Wait time of the jobs
% Jobs vs wait time
Wait time vs Job core count
Hardware facilities
Canigó (2018): Bull Sequana X800, 384 cores Intel SP Platinum 6148, 9 TB RAM, 33.18 TFlop/s
Pirineus II (2018): Bull Sequana X550, 2688 cores Intel SP Platinum 6148, 4 nodes with 2 GPUs + 4 Intel KNL nodes, 283.66 TFlop/s
Canigó
• Shared-memory machine (2 nodes)
• 33.18 TFlop/s peak performance (16.59 per node)
• 384 cores (8 Intel SP Platinum 8168 CPUs per node)
• Frequency of 2.7 GHz
• 4.6 TB main memory per node
• 20 TB disk storage
4 nodes with 2 x GPGPU
• 48 cores (2x Intel SP Platinum 8168,
2.7 GHz)
• 192 GB main memory
• 4.7 Tflop/s per GPGPU
4 Intel KNL nodes
• 1 x Xeon-Phi 7250 (68 cores @
1.5 GHz, 4 hw threads)
• 384 GB main memory per node
• 3.5 Tflop/s per node
Pirineus II
Standard nodes (44 nodes)
• 48 cores (2x Intel SP Platinum
6148, 2.7 GHz)
• 192 GB main memory (4 GB/core)
• 4 TB disk storage per node
High memory nodes (6 nodes)
• 48 cores (2x Intel SP Platinum 6148, 2.7 GHz)
• 384 GB main memory (8 GB/core)
• 4 TB disk storage per node
Pirineus II
High performance scratch system
• High performance storage based on BeeGFS
• 180 TB total space available
• Very high read/write speed
• InfiniBand HDR direct connection (100 Gbps)
between the BeeGFS cluster and the compute
nodes
HPC Service infrastructure at CSUC
Canigó Pirineus II
Summary of HW infrastructure
                         Canigó   Pirineus II   TOTAL
Cores                    384      2 688         3 072
Total Rpeak (TFlop/s)    33.18    283.66        317
Power consumption (kW)   5.24     32.80         38
Efficiency (TFlop/s/kW)  6.33     8.65          8.34
Evolution of the performance of HPC
at CSUC
10 x
Working environment
• The working environment is shared between
all the users of the service.
• Each machine runs a GNU/Linux operating
system (Red Hat).
• Computational resources are managed by the
Slurm workload manager.
• Compilers and development tools available:
Intel, GNU and PGI
Batch manager: Slurm
• Slurm manages the available resources in
order to achieve an optimal distribution between
all the jobs in the system
• Slurm assigns a different priority to each job
depending on several factors
… more on this after the coffee!
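Work is submitted to Slurm as a batch script. The sketch below writes a minimal, hypothetical script; the partition, task count, module name and `my_app` are placeholder assumptions, not service defaults:

```shell
# Write a minimal, hypothetical Slurm job script. `my_app` and the
# module name are placeholders; directive values are examples only.
cat > job.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=std        # standard nodes
#SBATCH --ntasks=48            # e.g. one full standard node
#SBATCH --time=01:00:00        # wall-clock limit
module load openmpi            # assumed module name
srun ./my_app                  # launch the MPI application
EOF
echo "submit with: sbatch job.sh"
```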
Storage units
Name                        Variable                    Availability  Quota          Time limit     Backup
/home/$USER                 $HOME                       Global        25-200 GB (*)  Unlimited      Yes
/scratch/$USER/             −                           Global        1 TB           30 days        No
/scratch/$USER/tmp/$JOBID   $SCRATCH / $SHAREDSCRATCH   Global        1 TB           7 days         No
/tmp/$USER/$JOBID           $SCRATCH / $LOCALSCRATCH    Local node    −              Job execution  No
(*) The limit per project depends on the project category: group I 200 GB, group II 100 GB, group III 50 GB, group IV 25 GB
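A common pattern is to compute in the fast shared scratch and copy results back before the purge window expires. A self-contained sketch of that staging pattern (temporary directories stand in for $SHAREDSCRATCH and $HOME so it runs anywhere):

```shell
# Staging sketch: run in scratch, copy results home before the purge.
# mktemp directories stand in for $SHAREDSCRATCH and $HOME in this sketch.
SHAREDSCRATCH=$(mktemp -d)            # stand-in for /scratch/$USER/tmp/$JOBID
RESULTS=$(mktemp -d)                  # stand-in for $HOME
cd "$SHAREDSCRATCH"
echo "intermediate data" > output.dat # the compute step writes to scratch
cp output.dat "$RESULTS"/             # stage results out before cleanup
echo "staged: $RESULTS/output.dat"
```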
How to access our services?
• If you have not been granted a RES project, or
you are not interested in applying for one, you
can still work with us. More info
at https://www.csuc.cat/ca/supercomputacio/sollicitud-d-us
HPC Service price
Academic project¹
Initial block
- Group I: 500.000 UC 8.333,33 €
- Group II: 250.000 UC 5.555,55 €
- Group III: 100.000 UC 3.333,33 €
- Group IV: 40.000 UC 1.666,66 €
Additional 50.000 UC block
- When you have paid for 500.000 UC 280 €/block
- When you have paid for 250.000 UC 1.100 €/block
- When you have paid for 100.000 UC 1.390 €/block
- When you have paid for 40.000 UC 2.000 €/block
DGR discount for Catalan academic groups
-10 %
Accounting HPC resources
• In order to quantify the resources used, we introduce the UC
as a unit.
• UC: computational unit, defined as UC =
HC (computational hour) x factor:
– For standard nodes, 1 HC = 1 UC. Factor = 1.
– For standard fat nodes, 1 HC = 1.5 UC. Factor = 1.5.
– For GPU nodes, 1 HC = 1 UC. Factor = 1. (*)
– For KNL nodes, 1 HC = 0.5 UC. Factor = 0.5. (**)
– For Canigó (SMP), 1 HC = 2 UC. Factor = 2.
(*) You need to allocate at least a full socket (24 cores)
(**) You need to allocate the full node (68 cores)
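Applying these factors, converting core-hours (HC) into UC is a simple multiplication; a small sketch using the factors from the list above (awk handles the fractional factors):

```shell
# Sketch: convert core-hours (HC) to accounting units (UC)
# using the per-node-type factors listed on this slide.
hc=1000   # example: 1000 core-hours
for entry in std:1 std-fat:1.5 gpu:1 knl:0.5 smp:2; do
  node=${entry%%:*}                  # node type
  factor=${entry#*:}                 # UC factor for that type
  uc=$(awk -v h="$hc" -v f="$factor" 'BEGIN { printf "%g", h * f }')
  echo "$node: $hc HC -> $uc UC"
done
```

For example, 1000 HC on KNL nodes is charged as 500 UC, while the same 1000 HC on Canigó (SMP) is charged as 2000 UC.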
Access through a RES project
• You can apply for a RES (Red Española de
Supercomputación) project requesting to work at
CSUC (on Pirineus II or Canigó). More
information at https://www.res.es/es/acceso-a-la-res
Choosing your architecture: HPC
partitions // queues
• We have 5 partitions available for users:
std, std-fat, gpu, knl and mem, running on the
standard, standard fat, GPU, KNL and shared-memory
nodes respectively.
• Initially a user can only use the std and std-fat
partitions; to use a different architecture,
simply request permission and it will be granted.
… more on this later...
Do you need help?
http://hpc.csuc.cat
Documentation: HPC Knowledge Base
http://hpc.csuc.cat > Documentation
Problems or requests? Service Desk
http://hpc.csuc.cat > Support
Development tools @ CSUC HPC
• Compilers available for the users:
– Intel compilers
– PGI compilers
– GNU compilers
• MPI libraries:
– Open MPI
– Intel MPI
– MPICH
– MVAPICH
Development tools @ CSUC HPC
• Intel Advisor, VTune, ITAC, Inspector
• Scalasca
• Mathematical libraries:
– Intel MKL
– LAPACK
– ScaLAPACK
– FFTW
• If you need anything that is not installed let us
know
Questions?
MOLTES GRÀCIES (thank you very much)
http://hpc.csuc.cat
Cristian Gomollón
Adrián Macía
Ricard de la Vega
Ismael Fernàndez
Víctor Pérez