AMD heterogeneous Uniform Memory Access
PHIL ROGERS, CORPORATE FELLOW
JOE MACRI, CORPORATE VICE PRESIDENT & PRODUCT CTO
SASA MARINKOVIC, SENIOR MANAGER, PRODUCT MARKETING
AMD Confidential, under embargo until Apr 30, 12:01 AM EST
ABOUT HSA
Slide 3
10 YEARS AGO…
AMD Opteron: memory controller on the chip, HyperTransport, 64-bit extensions
Slide 4
[Chart: CPU GFLOPS vs. GPU GFLOPS, 2002–2012]
HOW DO WE UNLOCK THIS PERFORMANCE?
GPU COMPUTE CAPABILITY IS MORE THAN 10X THAT OF THE CPU
See slide 24 for details (in 2012: 4301 GPU GFLOPS vs. 336 CPU GFLOPS, roughly 12.8x)
Slide 5
WHAT IS HSA?
APU – ACCELERATED PROCESSING UNIT
[Diagram: serial workloads, parallel workloads, hUMA (memory)]
An intelligent computing architecture that enables CPU, GPU and other processors to work in harmony
on a single piece of silicon by seamlessly moving the right tasks to the best suited processing element
Slide 6
HSA EVOLUTION
Capabilities: uniform memory access for CPU and GPU; GPU can access CPU memory; integrate CPU and GPU in silicon
Benefits: simplified data sharing; improved compute efficiency; unified power efficiency
Slide 7
WHAT IS hUMA?
heterogeneous UNIFORM MEMORY ACCESS
Slide 8
UNDERSTANDING UMA
The original meaning of UMA is Uniform Memory Access
• Refers to how processing cores in a system view and access memory
• All processing cores in a true UMA system share a single memory address space
The introduction of GPU compute created systems with Non-Uniform Memory Access (NUMA)
• Require data to be managed across multiple heaps with different address spaces
• Add programming complexity due to frequent copies, synchronization, and address translation (see the sketch below)
HSA restores the GPU to Uniform Memory Access
• Heterogeneous computing replaces GPU computing
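The copy-and-synchronize burden those NUMA bullets describe can be sketched in plain C++. The example below is illustrative only and uses no real GPU API: a second host allocation stands in for a hypothetical separate GPU heap, which makes the staging, kernel-on-the-copy, and copy-back steps visible.

```cpp
// Sketch of the copy/synchronize pattern a separate GPU heap forces on the
// programmer. No real GPU API is used: "device_mem" is just a second host
// allocation standing in for a hypothetical GPU address space.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<float> host_data(1024, 1.0f);        // lives in the CPU heap

    // 1. Explicitly stage the input into the "device" heap.
    std::vector<float> device_mem(host_data.size());
    std::copy(host_data.begin(), host_data.end(), device_mem.begin());

    // 2. The "kernel" can only work on the staged copy; a pointer into
    //    host_data would be meaningless in the other address space.
    for (float& x : device_mem) x *= 2.0f;

    // 3. Explicitly copy the result back (and, with a real API, synchronize)
    //    before the CPU may read it.
    std::copy(device_mem.begin(), device_mem.end(), host_data.begin());

    std::cout << host_data[0] << "\n";               // prints 2
}
```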
Slide 9
INTRODUCING hUMA
CPU
APU
APU
with
HSA
Memory
CPU CPU CPU CPU
UMA
CPU Memory
CPU CPU CPU CPU
NUMA
GPU
GPUGPU
GPU
GPU Memory
Memory
CPU CPU CPU CPU
hUMA
GPU
GPU
GPU
GPU
Slide 10
hUMA KEY FEATURES
BI-DIRECTIONAL COHERENT MEMORY
Any updates made by one processing element will be seen by all other processing elements – GPU or CPU
PAGEABLE MEMORY
GPU can take page faults, and is no longer restricted to page-locked memory
ENTIRE MEMORY SPACE
CPU and GPU processes can dynamically allocate memory from the entire memory space
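A minimal sketch of what bi-directional coherent memory buys the programmer, using two CPU threads as stand-ins for the CPU and GPU processing elements (no GPU API is involved; the thread names are purely illustrative): both sides touch one shared allocation, and an update made by one is observed by the other without any explicit copy.

```cpp
// Illustration of bi-directional coherent memory using two CPU threads as
// stand-ins for the CPU and GPU processing elements (no GPU API involved).
// One shared allocation; an update by either side is visible to the other
// without an explicit copy step.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<int> shared(16, 0);            // one allocation, one address space
    std::atomic<bool> gpu_done{false};

    std::thread gpu_like([&] {                 // "GPU" stand-in: updates in place
        for (int& v : shared) v += 7;
        gpu_done.store(true, std::memory_order_release);
    });

    while (!gpu_done.load(std::memory_order_acquire)) { /* spin */ }
    std::cout << "CPU sees " << shared[0] << "\n";   // prints 7, no copy-back
    gpu_like.join();
}
```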
Slide 11
hUMA KEY FEATURES
[Diagram: CPU and GPU caches, HW coherency, virtual memory, physical memory]
Entire memory space: both CPU and GPU can access and allocate any location in the system's virtual memory space
Coherent memory: ensures CPU and GPU caches both see an up-to-date view of data
Pageable memory: the GPU can seamlessly access virtual memory addresses that are not (yet) present in physical memory
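The pageable-memory callout above has a familiar CPU-side analogy, sketched below under the assumption of a POSIX system: mmap hands out virtual addresses with no physical pages behind them, and the first touch of each page faults it in on demand. hUMA extends that same demand-paging behavior to GPU accesses.

```cpp
// CPU-side analogy for pageable memory (POSIX-only sketch, an assumption):
// mmap reserves virtual addresses with no physical pages behind them, and the
// first touch of each page triggers a fault the OS services transparently.
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t len = 64UL * 1024 * 1024;   // 64 MiB of virtual address space
    void* raw = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (raw == MAP_FAILED) { std::perror("mmap"); return 1; }
    char* p = static_cast<char*>(raw);

    // Nothing is resident yet; each first write below page-faults and the
    // kernel maps a zeroed physical page on demand.
    for (std::size_t off = 0; off < len; off += 4096) p[off] = 1;

    std::printf("touched %zu pages\n", len / 4096);
    munmap(p, len);
}
```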
Slide 12
WITHOUT POINTERS* AND DATA SHARING
*A pointer is a named variable that holds a memory address. It makes it easy to reference data or code segments by name and eliminates the need for the developer to know the actual address in memory. Pointers can be manipulated by the same expressions used to operate on any other variable.
Without hUMA:
• CPU explicitly copies data to GPU memory
• GPU completes computation
• CPU explicitly copies the result back to CPU memory
Only the data array can be copied, since the GPU cannot follow embedded data-structure links (see the sketch below)
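A small, hypothetical C++ sketch (no GPU API) of why only the flat data array can be shipped: the embedded "next" pointers encode addresses in the CPU heap, so a byte-for-byte copy into a separate device heap cannot be walked as a list, and the structure has to be flattened first.

```cpp
// Hypothetical sketch (no GPU API): byte-copying linked nodes into a separate
// "device" buffer copies their embedded pointers too, and those pointers
// still name CPU-heap addresses, so only a flattened array is usable.
#include <cstring>
#include <iostream>
#include <vector>

struct Node { int value; Node* next; };

int main() {
    // A small list built in CPU memory.
    Node c{3, nullptr}, b{2, &c}, a{1, &b};

    // Staging the raw bytes: the copies' "next" fields still point at &b and
    // &c in CPU memory, so the copy cannot be walked on the other side.
    Node staged[3] = {a, b, c};
    std::vector<unsigned char> device_buf(sizeof(staged));
    std::memcpy(device_buf.data(), staged, sizeof(staged));

    // The workable option without hUMA: flatten to plain values and copy those.
    std::vector<int> flat;
    for (Node* n = &a; n != nullptr; n = n->next) flat.push_back(n->value);
    std::cout << "flattened " << flat.size() << " values for copying\n";
}
```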
Slide 13
WITH POINTERS* AND DATA SHARING
With hUMA:
• CPU simply passes a pointer to the GPU
• GPU completes computation
• CPU can read the result directly – no copying needed!
CPU can pass a pointer to the entire data structure, since the GPU can now follow embedded links (see the sketch below)
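The same linked structure under a single shared address space, again with a CPU thread standing in for the GPU purely for illustration: the CPU hands over one pointer, the "GPU" follows the embedded links in place, and the CPU reads the results directly.

```cpp
// With one shared address space, a "GPU" (a CPU thread stand-in here, no GPU
// API) can walk the same linked structure through the pointer the CPU built,
// and the CPU reads the results directly afterwards.
#include <iostream>
#include <thread>

struct Node { int value; Node* next; };

int main() {
    Node c{3, nullptr}, b{2, &c}, a{1, &b};

    std::thread gpu_like([&a] {                 // receives just a pointer
        for (Node* n = &a; n != nullptr; n = n->next)
            n->value *= 10;                     // follows embedded links in place
    });
    gpu_like.join();

    for (Node* n = &a; n != nullptr; n = n->next)
        std::cout << n->value << " ";           // prints 10 20 30
    std::cout << "\n";
}
```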
Slide 14
TOP 10 REASONS TO GO FULLY HARDWARE COHERENT ON GPU/APU
1. Much easier for programmers
2. No need for special APIs
3. Move CPU multi-core algorithms to the GPU without recoding for absence of coherency
4. Allow finer-grained data sharing than software coherency (see the sketch after this list)
5. Implement coherency once in hardware, rather than N times in different software stacks
6. Prevent hard-to-debug errors in application software
7. Operating systems prefer hardware coherency – they do not want the bug reports filed against the platform
8. Probe filters and directories will maintain power efficiency
9. Full coherency opens the door to single-source, native, and managed-code programming for heterogeneous platforms
10. Optimal architecture for heterogeneous computing on APUs and SoCs
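Reason 4 above, sketched with CPU threads standing in for the CPU and GPU (illustration only, no GPU API): with hardware coherency, sharing can be as fine-grained as publishing one element at a time behind an atomic flag, rather than flushing or copying whole buffers as coarse software-managed coherency would require.

```cpp
// Fine-grained sharing under hardware coherency, with CPU threads standing in
// for CPU and GPU (illustration only): single elements are published one at a
// time behind an atomic flag instead of flushing or copying whole buffers.
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    constexpr int kItems = 8;
    int items[kItems] = {};
    std::atomic<int> published{0};              // count of items made visible

    std::thread producer([&] {                  // "CPU" side
        for (int i = 0; i < kItems; ++i) {
            items[i] = i * i;
            published.store(i + 1, std::memory_order_release);  // publish one item
        }
    });

    std::thread consumer([&] {                  // "GPU" side
        int seen = 0;
        while (seen < kItems) {
            int avail = published.load(std::memory_order_acquire);
            for (; seen < avail; ++seen)
                std::cout << items[seen] << " ";                // consume item-by-item
        }
        std::cout << "\n";
    });

    producer.join();
    consumer.join();
}
```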
Slide 15
hUMA FEATURES
• Access to entire memory space
• Pageable memory
• Bi-directional coherency
• Fast GPU access to system memory
• Dynamic memory allocation
hUMA BENEFITS
Slide 17
BENEFITS OF HSA
Slide 18
UNIFORM MEMORY BENEFITS TO DEVELOPERS
EASE AND SIMPLICITY OF PROGRAMMING
A single, standard computing environment
LOWER DEVELOPMENT COST
A more efficient architecture enables fewer people to do the same work
SUPPORT FOR MAINSTREAM PROGRAMMING LANGUAGES
Python, C++, Java
Slide 19
BENEFITS TO CONSUMERS
BETTER EXPERIENCES
Radically different user experiences
LONGER BATTERY LIFE
Less power at the same performance
MORE PERFORMANCE
Getting more performance from the same form factor
Slide 20
SUPPORT FROM MAJOR INDUSTRY PLAYERS
For more information go to: http://hsafoundation.com/ (Source: http://pinterest.com/pin/193021534001931884/)
Slide 21
HSA
Nov 11–14, 2013, San Jose McEnery Convention Center
14 Different Tracks with over 140 Individual Presentations
THANK YOU
Slide 23
GFLOPS
Year CPU CPU GFLOPS GPU (RADEON) GPU GFLOPS
2002 Pentium 4 (Northwood) 12.24 9700 Pro 31.2
2003 Pentium 4 (Northwood) 12.8 9800 XT 36.48
2004 Pentium 4 (Prescott) 15.2 X850 XT 103.68
2005 15.2 X1800 XT 134.4
2006 Core 2 Duo 23.44 X1950 375
2007 Core 2 Quad 48 HD 2900 XT 473.6
2008 Q9650 96 HD 4870 1200
2009 Core i7 960 102.4 HD 5870 2720
2010 Core i7 970 153.6 HD 6970 2703
2011 Core i7 3960X 316.8 HD 7970 3789
2012 Core i7 3970X 336 HD 7970 GHz Edition 4301
Slide 24
POTENTIAL MARKET IS HUGE
Notebooks
Servers
Desktops
Embedded
Game Consoles
Tablets
Slide 25
DISCLAIMER
The information presented in this document is for informational purposes only and may contain technical inaccuracies, omissions and typographical errors.
The information contained herein is subject to change and may be rendered inaccurate for many reasons, including but not limited to product and roadmap changes, component
and motherboard version changes, new model and/or product releases, product differences between differing manufacturers, software changes, BIOS flashes, firmware
upgrades, or the like. AMD assumes no obligation to update or otherwise correct or revise this information. However, AMD reserves the right to revise this information and to
make changes from time to time to the content hereof without obligation of AMD to notify any person of such revisions or changes.
AMD MAKES NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE CONTENTS HEREOF AND ASSUMES NO RESPONSIBILITY FOR ANY
INACCURACIES, ERRORS OR OMISSIONS THAT MAY APPEAR IN THIS INFORMATION.
AMD SPECIFICALLY DISCLAIMS ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE. IN NO EVENT WILL AMD BE LIABLE TO
ANY PERSON FOR ANY DIRECT, INDIRECT, SPECIAL OR OTHER CONSEQUENTIAL DAMAGES ARISING FROM THE USE OF ANY INFORMATION CONTAINED HEREIN, EVEN IF
AMD IS EXPRESSLY ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
ATTRIBUTION
© 2013 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, Radeon, and combinations thereof are trademarks of
Advanced Micro Devices, Inc. Other names and logos are used for informational purposes only and may be trademarks of their respective
owners.
Editor's Notes
  • #4: Instead, it fell to AMD to take the logical step and simply extend the x86 architecture to include 64-bit instructions, resulting in what the firm called AMD64. At a stroke, AMD was offering businesses the chance to give a performance boost to existing applications, with a seamless upgrade path to 64-bit software when this became available. Until that point, multiple processor chips simply shared a bus connection with the rest of the system. This arrangement created a memory bandwidth bottleneck that effectively made PC servers with more than four sockets impractical without a specialist chipset. AMD solved this problem by giving each Opteron chip its own directly connected pool of memory, and introduced a high-speed point-to-point interconnect called HyperTransport to link the chips to each other and the rest of the system in a switched fabric. "By moving the memory controller onto the processor die, it runs at core frequency, and each processor added [to the system] adds another memory controller, so memory bandwidth scales with the number of processors," explained Mark Tellez, then AMD's server/workstation marketing manager.
  • #6: HSA will empower software developers to easily innovate and unleash new levels of performance and functionality on all your modern devices and lead to powerful new experiences such as visually rich, intuitive, human-like interactivity.   
  • #18: Power efficient – running parallel code on the GPU; eliminate copies. Easy to program – high-level languages, task-parallel runtimes. Ready for tomorrow's workloads, increasingly dominated by parallel processing. Built from established technology and operating principles: we take the architecture that has worked for SMP and multicore systems and extend it to the GPU. HSA is open. This is so important: multivendor architectures spawn large ecosystems. Broadly supported across multiple vendors. HSA is an architecture that spans from the smartphone to the supercomputer – phones, tablets, fanless notebooks, desktops, all-in-ones, workstations, cloud servers, HPC. Go to hsafoundation.com to see the list of industry leaders who have joined us already.
  • #21: In 2012, there were approximately 1.2 billion smart connected devices. HSA Foundation members made up approx. 800 million of those, Intel about 300 million, and NVIDIA around 100 million or less. If you assume similar member share by these companies in 2016, HSA Foundation member companies will be in over 1.6 billion devices.
  • #25: $37 billion for existing PC markets and $10 billion for new markets (game consoles represent $1 billion). Profit pool analysis highlights market attractiveness by contrasting processor revenue TAM with near-term estimated market growth rates. In the near term, existing profit pool markets are estimated at 2X as large as that of new entrants, but caution should be exercised as the new market entrants are enjoying fast adoption and growth, highlighting the need to investigate strategic choices and enhancements. Question: Even though ARM-based SOCs for tablets/smartphones are BSOs, should we make a trade-off of core market investments to deliver one? Why? If we had $50M-100M incremental R&D dollars, what would the next priority SOC be? ARM? Client x86? Server?