#11:With the elasticity and scalability of AWS, customers can easily deploy multiple clusters to meet the needs of their users, with the additional capability to match cluster architectures to jobs, such as memory-optimized instances for large-memory jobs or storage-optimized instances for heavy-I/O jobs.
#19:AWS provides a broad range of services supporting compute-intensive workloads.
[Quickly walk through them]
EFA network interface for compute instances
AWS offers a variety of services to run these compute-intensive workloads in the cloud.
Shown here is one example, with EC2 at the core.
Recently,
#22:1/ We offer the choice of processor and architecture to build the applications you need, with the flexibility in choice that you want. We believe that by providing greater choice, customers can pick the right compute to power their application and workload.
2/ We have had a rich, long-term partnership with Intel, and the Skylake processors have been essential to powering our most powerful instances. Most recently, we have also released the latest-generation Intel instances, featuring second-generation Xeon Scalable processors, for our core workloads, including our largest C5 instances.
3/ A year ago, we announced our support for the AMD EPYC processor. We have 5 instance types across 15 regions, and we have seen a very positive reception for these new instances from customers who want to benefit from lower costs when they do not need the full compute power of an instance. We have also committed to delivering the latest-generation Rome architecture for higher performance.
4/ Lastly, last year here at re:Invent, we announced that AWS had released a new processor, the Graviton processor, based on the Arm architecture.
#25:The first-generation Graviton-powered instances were released as the A1 instance type.
Graviton2 instances follow the same naming convention as other EC2 instances: general-purpose M, compute-optimized C, and memory-optimized R, with a "g" appended to the generation designation to indicate Graviton.
Each family also offers variants with local NVMe SSDs (x6gd).
M6g reached GA on May 11, 2020.
(As of May 11) C6g: end of May; R6g: mid-June; the d variants (m6gd, r6gd, and c6gd) are targeting end of June.
6 instance types powered by Graviton2 processors in our popular C, M, and R instance families, with 2 GB, 4 GB, and 8 GB of DRAM per vCPU respectively
And the option for instance storage on each
M6g is in preview now, and the other instances will be coming soon
#32:Tightly-coupled parallel computing applications are typically based on MPI. Here is a notional diagram of how MPI applications work today on AWS. This is the "before" chart, without EFA. MPI is a standardized message-passing interface. There are a variety of MPI implementations, such as Open MPI and Intel MPI, both of which we will be talking about later in the webinar.
MPI is the networking library the application uses for point-to-point communication between the different cores on which it runs. MPI sits at the bottom of the user portion of the stack and talks to the kernel TCP/IP stack, which is at the top of the kernel stack. The kernel stack then talks to the ENA network driver, which communicates with the hardware.
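As a rough illustration of this "before" path, the sketch below passes a message between two processes through the kernel's networking stack using plain sockets. This is not MPI itself, only an analogy for the point-to-point send/receive that an MPI library performs over the kernel TCP/IP stack in the pre-EFA picture; the message contents and process roles are invented for the example.

```python
import os
import socket

# A connected pair of sockets: data written to one end travels through
# the kernel's networking stack to the other end, much like an MPI
# point-to-point message traverses the kernel TCP/IP stack (pre-EFA).
parent_sock, child_sock = socket.socketpair()

pid = os.fork()
if pid == 0:
    # Child process: plays the "receiving rank".
    parent_sock.close()
    msg = child_sock.recv(1024)            # analogous to MPI_Recv
    child_sock.sendall(b"ack:" + msg)      # reply, analogous to MPI_Send
    child_sock.close()
    os._exit(0)
else:
    # Parent process: plays the "sending rank".
    child_sock.close()
    parent_sock.sendall(b"halo-exchange")  # analogous to MPI_Send
    reply = parent_sock.recv(1024)         # analogous to MPI_Recv
    parent_sock.close()
    os.waitpid(pid, 0)
```

The point of the analogy is that every message crosses the user/kernel boundary and the full kernel stack; EFA's value is precisely that it bypasses this path.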
#49:All sorts of HPC workloads run on AWS, each architecturally optimized for performance and minimum cost: life sciences workloads such as genomics and neuroscience, financial services, oil and gas, weather and climate simulation, electronic design automation, many, many design and engineering cases, all of the major CFD flow solvers, structural tools, media and entertainment, molecular dynamics, and autonomous vehicles. And we have lots of machine learning services.