Showing posts with label supercomputers.

Thursday, September 21, 2023

Huawei and the US-China Chip War — Manifold #44

 

TP Huang is a computer scientist and analyst of global technology development. He posts often on X: https://twitter.com/tphuang 


Steve and TP discuss: 

0:00 Introduction: TP Huang and semiconductor technology 
5:40 Huawei’s new phone and SoC 
23:19 SMIC 7nm chip production in China: Yield and economics 
28:21 Impact on Qualcomm 
36:08 U.S. sanctions solved the coordination problem for Chinese semiconductor companies 
42:48 5G modem and RF chips: impact on Qualcomm, Broadcom, Apple, etc. 
47:14 5G and Huawei 
52:50 Satellite capabilities of Huawei phones 
56:46 Huawei vs Apple and Chinese consumers 
1:01:33 Chip War and AI model training

Thursday, May 17, 2018

Exponential growth in compute used for AI training


The chart shows the total amount of compute, in petaflop/s-days, used in training (i.e., optimizing an objective function in a high-dimensional space). This exponential trend is likely to continue for some time -- leading to qualitative advances in machine intelligence.
AI and Compute (OpenAI blog): ... since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.5-month doubling time (by comparison, Moore’s Law had an 18-month doubling period). Since 2012, this metric has grown by more than 300,000x (an 18-month doubling period would yield only a 12x increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.

... Three factors drive the advance of AI: algorithmic innovation, data (which can be either supervised data or interactive environments), and the amount of compute available for training. Algorithmic innovation and data are difficult to track, but compute is unusually quantifiable, providing an opportunity to measure one input to AI progress. Of course, the use of massive compute sometimes just exposes the shortcomings of our current algorithms. But at least within many current domains, more compute seems to lead predictably to better performance, and is often complementary to algorithmic advances.

...We see multiple reasons to believe that the trend in the graph could continue. Many hardware startups are developing AI-specific chips, some of which claim they will achieve a substantial increase in FLOPS/Watt (which is correlated to FLOPS/$) over the next 1-2 years. ...
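The arithmetic in the excerpt checks out. A minimal back-of-the-envelope sketch, using only the doubling times and growth factor quoted above:

    import math

    # How long does a 300,000x increase take at a 3.5-month doubling time,
    # and what would an 18-month (Moore's Law) doubling yield over that span?
    growth = 300_000
    fast_doubling = 3.5    # months per doubling (the trend OpenAI measures)
    moore_doubling = 18    # months per doubling (Moore's Law, for comparison)

    doublings = math.log2(growth)             # ~18.2 doublings
    span_months = doublings * fast_doubling   # ~64 months, i.e. roughly 2012 to late 2017

    moore_factor = 2 ** (span_months / moore_doubling)

    print(f"{span_months / 12:.1f} years; Moore's Law over the same span: {moore_factor:.0f}x")
    # -> 5.3 years; Moore's Law over the same span: 12x  (matching the "12x" in the excerpt)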

Monday, September 05, 2016

World's fastest supercomputer: Sunway TaihuLight (41k nodes, 11M cores)



Jack Dongarra, professor at UT Knoxville, discusses the strengths and weaknesses of the Sunway TaihuLight, currently the world's fastest supercomputer. The fastest US supercomputer, Titan (#3 in the world), is at Oak Ridge National Lab, near UTK. More here and here.

MSU's latest HPC cluster would be ranked ~150 in the world.
Top 500 Supercomputers in the world

Sunway TaihuLight, a system developed by China’s National Research Center of Parallel Computer Engineering & Technology (NRCPC) and installed at the National Supercomputing Center in Wuxi, in China's Jiangsu province, is the No. 1 system with 93 petaflop/s (Pflop/s) on the Linpack benchmark. The system has 40,960 nodes, each with one SW26010 processor, for a combined total of 10,649,600 computing cores. Each SW26010 processor is composed of 4 management processing elements (MPEs) and 4 clusters of 64 computing processing elements (CPEs), for 260 cores in total, plus 4 memory controllers (MC) and a network on chip (NoC) connected to the system interface (SI). Each of the four MPE/CPE/MC groups has access to 8 GB of DDR3 memory. The system is based on processors designed and built exclusively in China.

The Sunway TaihuLight is almost three times as fast and three times as efficient as Tianhe-2, the system it displaces from the No. 1 spot. Peak power consumption under load (running the HPL benchmark) is 15.371 MW, or about 6 Gflops/W, which puts TaihuLight in one of the top spots on the Green500 performance-per-watt ranking. [ IIRC, these processors are inspired by the old Digital Alpha chips that I used to use... ]
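The quoted figures are internally consistent. A quick sanity check (a sketch in Python; the 1-MPE-plus-64-CPE core-group breakdown follows the standard SW26010 description rather than being spelled out in the excerpt):

    # Consistency check of the TaihuLight numbers quoted above.
    nodes = 40_960
    cores_per_chip = 4 * (1 + 64)         # 4 core groups, each 1 MPE + 64 CPEs = 260 cores
    total_cores = nodes * cores_per_chip  # 10,649,600, as reported

    rmax_pflops = 93.0    # Linpack (HPL) performance
    power_mw = 15.371     # power draw during the HPL run

    # Pflop/s per MW is numerically the same as Gflop/s per watt (1e15 / 1e6 = 1e9).
    efficiency = rmax_pflops / power_mw   # ~6.05 Gflops/W, matching the quoted ~6

    print(f"{total_cores:,} cores, {efficiency:.2f} Gflops/W")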

...

The number of systems installed in China has increased dramatically to 167, from 109 on the last list. China is now the No. 1 user of HPC by system count, and also leads in performance share, thanks largely to the contributions of the No. 1 and No. 2 systems.

The number of systems installed in the USA has declined sharply to 165, down from 199 in the previous list. This is the lowest number of U.S. systems since the list began 23 years ago.

...

The U.S., the leading consumer of HPC systems since the inception of the TOP500 list, is now second behind China for the first time, with 165 of the 500 systems. China leads both the system and performance categories, thanks to the No. 1 and No. 2 systems and a surge in industrial and research installations over the last few years. The European share (105 systems, down from 107 last time) is now well below the dominant Asian share of 218 systems, up from 173 in November 2015.

Dominant countries in Asia are China with 167 systems (up from 109) and Japan with 29 systems (down from 37).

In Europe, Germany is the clear leader with 26 systems followed by France with 18 and the UK with 12 systems.

Monday, December 29, 2008

Globalization and supercomputing

From the NYTimes, 100 fastest supercomputers by location, and historical timeline of computing power.