Available HPC resources at CSUC
Adrián Macía
13 / 12 / 2022
Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
What is the CSUC?
What is the CSUC?
Institutions in the consortium
Associated institutions
What do we do?
Scientific computing
Communications
IT Infrastructures
Procurements
Scientific documentation management
Joint purchases
Electronic administration
Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
Science before scientific computing
Theory
Experiments
New paradigm: scientific computing
Science needs to solve problems that cannot be solved otherwise
Development of new theoretical and technological tools
Problem resolution that leads to new questions
But... what is Scientific Computing?
Scientific progress after Scientific Computing
Theory
Experiment
Simulations
New and powerful tools change the scientific process
Usage examples: Engineering simulations
Aerodynamics of a plane
Vibrations in structures
Thermal simulation of lighting systems
Thermal distribution in a brake disc
Usage examples: Simulations in life sciences
Interaction between SARS-CoV-2 spike protein and different surfaces
Prediction of protein structures using Artificial Intelligence
Usage examples: simulations in materials science
Emergent structures in ultracold materials
Graphene electronic structure
Adsorbed molecules on surfaces
Main applications per knowledge area
Chemistry and Materials Science
Life and Health Sciences
Mathematics, Physics and Engineering
Astronomy and Earth Sciences
Software available
• A detailed list of the installed software is available at: https://confluence.csuc.cat/display/HPCKB/Installed+software
• If you don't find your application, ask the support team and we will be happy to install it for you or help you with the installation process
Demography of the service: users
52 research projects from 22 different institutions are using our HPC service. These projects are distributed in:
• 13 Large HPC projects (> 500.000 UC)
• 7 Medium HPC projects (250.000 UC)
• 3 Small HPC projects (100.000 UC)
• 12 XSmall HPC projects (50.000 UC)
• 2 Industrial HPC projects
• 13 RES projects
• 2 Test projects
2021
Resource usage per month
Resource usage per institution
Top resource consuming applications
Top executed applications
Usage per architecture
Architecture | % of the total capacity | Usage (CH) | Occupation of the partition (% over the theoretical maximum)
Std    | 71 % | 16.790.160 | 89 %
Fat    |  8 % |     53.435 |  6 %
GPU    |  5 % |    545.469 | 18 %
KNL    |  6 % |     44.771 | 13 %
Canigo | 10 % |  1.303.276 | 54 %
Demography of the service: jobs (I)
Jobs per # cores
Demography of the service: jobs (II)
CPUtime per number of cores
% Jobs vs memory per core
% CPUTime vs memory per core
Wait time of the jobs
Wait time vs Job core count
Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
Hardware facilities
Canigo (2018): Bull Sequana X800, 384 cores Intel SP Platinum 8168, 9 TB RAM memory, 33 TFlop/s
Pirineus II (2018, 2021): Bull Sequana X550, 3504 cores Intel SP Platinum, 4 nodes with 2 GPUs + 4 Intel KNL nodes, 358 TFlop/s
More info: https://confluence.csuc.cat/display/HPCKB/Machine+specifications
Canigó
• Shared memory machines (2 nodes)
• 33.18 TFlop/s peak performance (16.59 per node)
• 384 cores (8 CPUs Intel SP Platinum 8168 per node)
• Frequency of 2.7 GHz
• 4.6 TB main memory per node
• 20 TB disk storage per node
Pirineus II
4 nodes with 2 x GPGPU
• 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
• 192 GB main memory
• Nvidia P100 GPGPU
• 4.7 TFlop/s per GPGPU
4 Intel KNL nodes
• 1 x Xeon Phi 7250 (68 cores @ 1.5 GHz, 4 hw threads)
• 384 GB main memory per node
• 3.5 TFlop/s per node
Pirineus II
Standard nodes (63 nodes)
• 44 nodes:
  - 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
  - 192 GB main memory (4 GB/core)
  - 4 TB disk storage per node
• 19 nodes:
  - 48 cores (2x Intel SP Platinum 8268, 2.9 GHz)
  - 192 GB main memory (4 GB/core)
  - 4 TB disk storage per node
High memory nodes (6 nodes)
• 48 cores (2x Intel SP Platinum 8168, 2.7 GHz)
• 384 GB main memory (8 GB/core)
• 4 TB disk storage per node
High performance scratch system
• High-performance storage based on BeeGFS
• 180 TB total space available
• Very high read / write speed
• InfiniBand HDR direct connection (100 Gbps) between the BeeGFS cluster and the compute nodes
HPC Service infrastructure at CSUC
Canigó Pirineus II
Summary of HW infrastructure
                          Canigó   Pirineus II   TOTAL
Cores                     384      3 504         3 888
Total Rpeak (TFlop/s)     33       358           391
Power consumption (kW)    5.24     31.93         37
Efficiency (TFlop/s/kW)   6.33     11.6          10.5
Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
Working environment
• The working environment is shared between all the users of the service.
• Each machine runs a GNU/Linux operating system (Red Hat).
• Computational resources are managed by the Slurm workload manager.
• Compilers and development tools available: Intel, GNU and PGI
Batch manager: Slurm
• Slurm manages the available resources in order to achieve an optimal distribution between all the jobs in the system
• Slurm assigns a different priority to each job depending on many factors
… more on this after the coffee!
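For orientation, a minimal job script might look like the sketch below (the module and application names are placeholders; the partition names are the ones described later in this presentation):

```bash
#!/bin/bash
#SBATCH --job-name=example      # name shown in the queue
#SBATCH --partition=std         # one of: std, std-fat, gpu, knl, mem
#SBATCH --ntasks=48             # number of MPI tasks
#SBATCH --time=24:00:00         # wall-clock limit
#SBATCH --output=job_%j.log     # %j expands to the job ID

module load myapp               # hypothetical module name; see the installed-software list

srun ./myapp input.dat          # srun launches the tasks on the allocated resources
```

Submit it with "sbatch job.sh" and follow its state with "squeue".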
Storage units
Name                        Variable                    Availability   Quota           Time limit      Backup
/home/$USER                 $HOME                       Global         25-200 GB (*)   Unlimited       Yes
/scratch/$USER/             −                           Global         1 TB            30 days         No
/scratch/$USER/tmp/$JOBID   $SCRATCH / $SHAREDSCRATCH   Global         1 TB            7 days          No
/tmp/$USER/$JOBID           $SCRATCH / $LOCALSCRATCH    Local node     −               Job execution   No
(*) There is a limit per project depending on the project category: Group I: 200 GB, Group II: 100 GB, Group III: 50 GB, Group IV: 25 GB.
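As an illustration of how these units are typically combined in a job (the application and file names are placeholders; the variables behave as summarised in the table above):

```bash
#!/bin/bash
#SBATCH --job-name=scratch_example
#SBATCH --partition=std
#SBATCH --ntasks=24
#SBATCH --time=12:00:00

# Work in the shared scratch created for this job: visible from all nodes,
# 1 TB quota, purged 7 days after the job finishes.
cd $SHAREDSCRATCH

# Copy the input from $HOME (small quota, but backed up).
cp $HOME/cases/input.dat .

srun ./myapp input.dat          # hypothetical application

# Copy back only the results worth keeping before the scratch is cleaned up.
cp results.dat $HOME/cases/
```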
Choosing your architecture: HPC partitions // queues
• We have 5 partitions available to users: std, std-fat, gpu, knl and mem, running on the standard, high-memory (fat), GPU, KNL and shared-memory nodes respectively.
• Each user can use any of them (except RES users, who are restricted to their own partitions) depending on their needs
… more on this later...
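As a quick sketch, the partition is chosen with Slurm's --partition (or -p) option, and sinfo shows which partitions your account can use:

```bash
# List the partitions you are allowed to use and their current state.
sinfo

# Submit the same job script to the high-memory (fat) nodes instead of the
# default standard nodes; inside a script this corresponds to the line
# "#SBATCH --partition=std-fat".
sbatch --partition=std-fat job.sh
```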
Do you need help?
http://hpc.csuc.cat
Documentation: HPC Knowledge Base
http://hpc.csuc.cat > Documentation
Problems or requests? Service Desk
http://hpc.csuc.cat > Support
Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
Development tools @ CSUC HPC
• Compilers available for the users:
o Intel compilers
o PGI compilers
o GNU compilers
• MPI libraries:
o Open MPI
o Intel MPI
o MPICH
o MVAPICH
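As a sketch of a typical build workflow (the module names are hypothetical; check module avail for the names actually installed at CSUC):

```bash
# See which compiler and MPI modules are installed.
module avail

# Load a compiler and an MPI stack (hypothetical module names).
module load intel openmpi

# mpicc is the compiler wrapper shipped with Open MPI / MPICH / MVAPICH;
# Intel MPI provides mpiicc for the Intel compilers.
mpicc -O2 -o hello_mpi hello_mpi.c

# Quick 4-process sanity check; production runs go through Slurm (sbatch/srun).
mpirun -np 4 ./hello_mpi
```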
Development tools @ CSUC HPC
• Intel Advisor, VTune, ITAC, Inspector
• Scalasca
• Mathematical libraries:
o Intel MKL
o LAPACK
o ScaLAPACK
o FFTW
• If you need anything that is not installed, let us know
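As a sketch of how these libraries are usually linked (module names and source files are placeholders; the exact flags depend on the compiler version):

```bash
# Intel compiler + MKL: recent Intel compilers accept a single MKL flag
# (-qmkl; older versions use -mkl).
module load intel
icc -O2 -qmkl -o solver solver.c

# GNU compiler + FFTW: link the FFTW library explicitly.
module load gcc fftw
gcc -O2 -o transform transform.c -lfftw3 -lm
```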
Summary
• Who are we?
• Scientific computing at CSUC
• Hardware facilities
• Working environment
• Development environment
• How to access our services?
How to access our services?
• If you have not been granted a RES project, or are not interested in applying for one, you can still work with us. More info at https://www.csuc.cat/ca/supercomputacio/sollicitud-d-us
HPC Service price
Accounting HPC resources
There are some considerations concerning the accounting of HPC resources:
• If you want to use the gpu partition you need to allocate at least a full socket (24 cores). This is because we don't want two different jobs sharing the same GPU.
• If you want to use the KNL nodes you need to allocate the full node (68 cores), for the same reason as in the previous case.
• Each partition has an associated default memory per core. If you need more than that you should ask for it, and the system will assign more cores (with their associated memory) to your job (see the sketch below).
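As a sketch of how these rules translate into a job request (the gres syntax follows the usual Slurm convention and the module name is a placeholder; check the service documentation for the exact local syntax):

```bash
#!/bin/bash
# GPU job: at least one full socket (24 cores) per GPU, so that two different
# jobs never share the same GPU.
#SBATCH --partition=gpu
#SBATCH --ntasks=24
#SBATCH --gres=gpu:1
#SBATCH --time=24:00:00

module load myapp               # hypothetical module name

srun ./myapp input.dat

# For the KNL partition the rule is analogous: request the whole node,
# e.g. --partition=knl --ntasks=68 --exclusive.
```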
Access through RES project
• You can apply for a RES (Red Española de Supercomputación) project asking to work at CSUC (on Pirineus II or Canigó). More information about this at https://www.res.es/es/acceso-a-la-res
National competence center in HPC: EuroCC Spain RES
EuroCC Spain testbed
• EuroCC is an H2020 European project that aims to establish a network of National Competence Centres (NCC) in HPC, HPDA and AI in each country involved in the project
• The aim of the project is to promote the usage of scientific computing, mainly for SMEs, but also in academia and public administration
• Our HPC resources are also offered through the Spanish National Competence Centre in HPC
https://www.eurocc-project.eu/
https://eurocc-spain.res.es/
Quantum Spain Project
Goals:
• To install and operate a quantum computer based on superconducting qubits (BSC) and two quantum emulators (CESGA and SCAYLE)
• Start a service of remote access to the quantum computer (cloud access) to facilitate access for researchers interested in working with this new technology
• Provide support to the users of these quantum infrastructures
• Development of new quantum algorithms applicable to real problems, mainly focused on Quantum Machine Learning
Alejandro Jaramillo joined us as a quantum computing expert to provide support to the users and promote the usage of the infrastructure
Questions?
MOLTES GRÀCIES (thank you very much)
http://hpc.csuc.cat
Cristian Gomollón, Adrián Macía, Ricard de la Vega, Ismael Fernàndez, Víctor Pérez, Alejandro Jaramillo