Helix Nebula – The Science Cloud (Grant Agreement 687614) is a Pre-Commercial Procurement Action funded by the H2020 Framework Programme
M-PIL-3.2 Public Session
Next steps
Andrea Chierici, 14/06/2018
Ramping-up of the IaaS Resources (Phase 3 timeline):
Feb’18: Start of Phase 3 (7k cores / 700 TB storage)
Apr’18: Scalability Testing (10k cores / 1 PB storage)
Jun’18: End User Access (20k cores / 2 PB storage; network aggregated speed 40 Gbps)
Dec’18: End of Phase 3
Combined capacity distributed across the 2 Pilots.
Quota management
Procurers shared resources in order to achieve large-scale tests.
Quota limitations may have prevented important results from being reached.
Optimized utilization of resources (80% reached in one case).
Consolidated WLCG approach
Single access point with shared resources.
New use-cases
Each Procurer is responsible for allocating some of its capacity to all the use-cases it proposes.
In some cases of high demand, the procurers voluntarily donate capacity for a limited time.
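To make the shared-quota approach concrete, here is a minimal, illustrative Python sketch (names such as Procurer and SharedPool are hypothetical, not part of the pilot platforms): each procurer keeps its own quota but can temporarily donate part of it to a shared pool, which a large-scale test then sees as a single quota through one access point.

```python
# Illustrative model of quota sharing between procurers (hypothetical names):
# each procurer has its own core quota and may temporarily lend part of it
# to a shared pool so that one large-scale test can exceed any single quota.

from dataclasses import dataclass, field

@dataclass
class Procurer:
    name: str
    quota_cores: int          # cores this procurer is entitled to
    donated_cores: int = 0    # cores currently lent to the shared pool

    def donate(self, cores: int) -> int:
        """Lend up to `cores` cores, capped by the remaining quota."""
        cores = min(cores, self.quota_cores - self.donated_cores)
        self.donated_cores += cores
        return cores

@dataclass
class SharedPool:
    procurers: list = field(default_factory=list)

    def available_cores(self) -> int:
        # A use-case sees the sum of all donated capacity as one quota.
        return sum(p.donated_cores for p in self.procurers)

# Example: two procurers lend capacity for a limited time so a WLCG-style test
# can run on 12,000 cores, more than either individual quota would allow.
cern, infn = Procurer("CERN", 8000), Procurer("INFN", 6000)
pool = SharedPool([cern, infn])
cern.donate(7000)
infn.donate(5000)
print(pool.available_cores())  # 12000
```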
Upcoming events
July 9: CHEP 2018, Sofia
August 28: GridKa School, Karlsruhe (hands-on session organised by KIT)
September 11: HNSciCloud meeting, Amsterdam (organised by SURFsara)
October 9-11: DI4R 2018, Lisbon
October 24: Hamburg (organised by DESY)
November 28-30: HNSciCloud meeting, CERN
December 4-6: ICT 2018, Vienna (demonstrations)
TCO Study: PanCancer
Up to 400 VMs, at least 250 VMs concurrently; 8-16 vCPUs, 16 GB RAM and 30 GB scratch each.
1 PB dataset: more than 4,000 files in the 5-30 GB range and more than 4,000 files in the 100-500 MB range; outputs ca. 92,000 files in the KB range.
A minimal set of resources will be in constant use, and during periods of incoming data resource usage will ramp up to a maximum.
Continuous deployment (running for the full 12 months):
Compute: 7 VMs, 50 CPUs, 50 GB RAM
Storage: 0.5 PB, not accessed frequently
Network: minimal
During bursts (24-48 hours duration, 1-2x per month):
Compute: ~250 VMs, 1,000 vCPUs, 8.5 TB RAM
Storage: 0.5 PB, random access with high I/O requirements
Network: 10 Gbit/s intra-node traffic, up to 4 Gbit/s ingress
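As a rough illustration of how these figures feed into a TCO estimate, the short Python sketch below compares the monthly core-hours of the continuous baseline with the worst-case burst load (the 30-day month is an assumption; all other numbers come from the slide above).

```python
# Back-of-the-envelope core-hour split implied by the PanCancer sizing above
# (assumes a 30-day month; all other figures are taken from the slide).

HOURS_PER_MONTH = 24 * 30

# Continuous baseline: 7 VMs / 50 CPUs running for the whole month.
baseline_core_hours = 50 * HOURS_PER_MONTH        # 36,000 core-hours

# Bursts: up to ~250 VMs / 1,000 vCPUs for 48 h, twice per month (worst case).
burst_core_hours = 1000 * 48 * 2                  # 96,000 core-hours

total = baseline_core_hours + burst_core_hours
print(f"baseline: {baseline_core_hours:,} core-hours/month")
print(f"bursts (worst case): {burst_core_hours:,} core-hours/month")
print(f"burst share of the total: {burst_core_hours / total:.0%}")  # ~73%
```

Even in the worst case, most of the core-hours come from the bursts, while the 0.5 PB of storage is needed all year round; this asymmetry between compute and storage demand is what the TCO study has to capture.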
TCO Study: ALICE
3 types of jobs (workloads) within the ALICE use-case (each job uses a single core):
Monte Carlo (detector simulation)
low priority, suitable for ‘backfilling’ unused capacity
duration: 6 hours; RAM: 2 GB/core; disk: 2 GB/core
Network: 200 MB down / 350 MB up / 0.3 Mbps
Raw Data reconstruction
processes recently recorded data or reprocesses older data
Analysis Trains
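The Monte Carlo figures above translate directly into a quick capacity estimate; the Python sketch below is illustrative only (the backfill_estimate helper is hypothetical, only the per-job numbers come from the slide) and shows what a day of backfilling idle cores with single-core MC jobs would require in RAM, scratch disk and network traffic.

```python
# Rough capacity estimate for backfilling idle cores with ALICE Monte Carlo jobs.
# Per-job figures are from the slide; the helper itself is only illustrative.

JOB_HOURS = 6                        # wall-clock time of one single-core MC job
RAM_GB_PER_CORE = 2
DISK_GB_PER_CORE = 2
DOWNLOAD_MB, UPLOAD_MB = 200, 350    # network traffic per job

def backfill_estimate(idle_cores: int, hours: int = 24) -> dict:
    """Jobs completed, in-flight RAM/disk footprint and traffic for a backfill window."""
    jobs = idle_cores * (hours // JOB_HOURS)
    return {
        "jobs": jobs,
        "ram_gb_in_flight": idle_cores * RAM_GB_PER_CORE,
        "disk_gb_in_flight": idle_cores * DISK_GB_PER_CORE,
        "ingress_gb": round(jobs * DOWNLOAD_MB / 1024),
        "egress_gb": round(jobs * UPLOAD_MB / 1024),
    }

# 1,000 idle cores for one day: 4,000 MC jobs, 2 TB RAM and 2 TB scratch in
# flight, roughly 780 GB downloaded and 1,370 GB uploaded.
print(backfill_estimate(1000))
```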
Introduction of Vouchers for short-term usage
Initial voucher emission: 10 per procurer
Value: 250 euro each
Available for any service (CPU, disk, GPU)
Consumption blocked once the value has been reached
Possibility to export data even if credit is exhausted
Feedback provided by early users
100 vouchers expected to be emitted in total
A means of paying for IaaS services consumed by the long tail of science users that will execute EGI Applications on Demand
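Below is a minimal sketch of the voucher behaviour described above, assuming a simple per-voucher accounting model (the Voucher class and its methods are hypothetical, not the providers' actual billing API): consumption is blocked once the 250-euro value is reached, while exporting data remains possible afterwards.

```python
# Hypothetical model of a 250-euro voucher: usage of any service (CPU, disk, GPU)
# is charged against the voucher, new consumption is refused once the value is
# reached, but data export stays available even with the credit exhausted.

class Voucher:
    VALUE_EUR = 250.0

    def __init__(self) -> None:
        self.consumed_eur = 0.0

    @property
    def exhausted(self) -> bool:
        return self.consumed_eur >= self.VALUE_EUR

    def charge(self, service: str, cost_eur: float) -> bool:
        """Record usage of a service; refuse new consumption once exhausted."""
        if self.exhausted:
            return False
        self.consumed_eur = min(self.VALUE_EUR, self.consumed_eur + cost_eur)
        return True

    def export_data(self) -> bool:
        # Export is always allowed, even when the credit is exhausted.
        return True

v = Voucher()
v.charge("cpu", 240.0)        # accepted
v.charge("gpu", 20.0)         # accepted, consumption capped at 250 euro
print(v.charge("disk", 1.0))  # False: consumption blocked
print(v.export_data())        # True: data export still possible
```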