Our Concurrent Past;
Our Distributed Future
Joe Duffy
Co-founder/CEO Pulumi
InfoQ.com: News & Community Site
Watch the video with slide synchronization on InfoQ.com!
https://www.infoq.com/presentations/concurrency-distributed-computing
• Over 1,000,000 software developers, architects and CTOs read the site worldwide every month
• 250,000 senior developers subscribe to our weekly newsletter
• Published in 4 languages (English, Chinese, Japanese and Brazilian Portuguese)
• Post content from our QCon conferences
• 2 dedicated podcast channels: The InfoQ Podcast, with a focus on Architecture, and The Engineering Culture Podcast, with a focus on building
• 96 deep dives on innovative topics packed as downloadable emags and minibooks
• Over 40 new content items per week
Purpose of QCon
- to empower software development by facilitating the spread of
knowledge and innovation
Strategy
- practitioner-driven conference designed for YOU: influencers of
change and innovation in your teams
- speakers and topics driving the evolution and innovation
- connecting and catalyzing the influencers and innovators
Highlights
- attended by more than 12,000 delegates since 2007
- held in 9 cities worldwide
Presented at QCon London
www.qconlondon.com
The world is concurrent
File and network I/O.
Servers with many users.
Multicore.
Mobile devices.
GPUs.
Clouds of machines.
All of which communicate with one another.
2
Distributed is concurrent
Concurrency: two or more things happening at once.
Parallelism is a form of concurrency: multiple workers execute
concurrently, with the goal of getting a “single” job done faster.
So too is distributed: workers simply run further apart from one another:
Different processes. Different machines.
Different racks, data centers. Different planets.
Different distances imply different assumptions and capabilities.
3
Long lost brothers
Concurrency and distributed programming share deep roots in early CS.
Many of the early pioneers of our industry -- Knuth, Lampson, Lamport,
Hoare -- cut their teeth on distributed programming.
Concurrency and parallelism broke into the mainstream first.
Now, thanks to mobile + cloud, distributed will overtake them.
By revisiting those early roots, and reflecting on concurrency, we can learn
a lot about what the future of distributed programming holds for us.
4
How we got here
5
First there was “distributed”
Early OS work modeled concurrency as distributed agents.
Dijkstra semaphores (1962).
Hoare CSPs (1977).
Lamport time clocks (1978).
Butler Lampson on system design (1983).
Even I/O devices were thought of as distributed:
asynchronous with respect to the program, and
not sharing an address space.
6
Modern concurrency roots
Many abstractions core to our concurrent systems were invented
alongside those distributed programming concepts.
Hoare and Hansen monitors (1974).
From semaphores to mutual exclusion: Dijkstra/Dekker (1965),
Lamport (1974), Peterson (1981).
Multithreading (Lamport 1979).
Most of the pioneers of concurrency are also the pioneers of distributed.
7
Beyond synchronization: communication
Furthering the distributed bent was a focus on communication. Modern
synchronization is a special case that (ab)uses a shared memory space.
Message passing: communication is done by sending messages.
CSPs and Actors: independent processes and state machines.
Futures and Promises: dataflow-based asynchronous communication.
Memory interconnects are just a form of hardware message passing.
The focus on structured communication was lost along the way, but is
now coming back (Akka, Thrift, Go CSPs, gRPC, …); see the sketch below.
8
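To make the message-passing style concrete, here is a minimal Go sketch in the CSP spirit (the worker and its squaring step are illustrative, not from the talk): independent goroutines share no memory and communicate only over channels.

package main

import "fmt"

// worker is an independent process in the CSP sense: it owns no shared
// state and communicates only by receiving from in and sending to out.
func worker(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n // do some work on each message
	}
	close(out) // input exhausted: signal downstream and terminate
}

func main() {
	in := make(chan int)
	out := make(chan int)
	go worker(in, out)

	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in) // no more messages: the worker exits cleanly
	}()

	for result := range out {
		fmt.Println(result)
	}
}

Because the only coupling is the channel itself, the same structure survives as the workers move further apart: threads, processes, or machines.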
The parallel ’80s
Explosion of research into parallelism, largely driven by AI.
The Connection Machine: up to 65,536 processors.
From this was born a “supercomputing” culture that persists to this day.
9
Parallel programming
Substantial focus on programmability.
Task parallelism: computation drives parallel division.
Data parallelism: data drives parallel division (scales with data).
Eventually, scheduling algorithms.
Work stealing (Cilk).
Nested parallelism (Cilk, NESL).
10
Parallel programming (Ex1. Connection Machine)
Programming 65,536 processors is different!
Every processor is a ~CSP.
11
(DEFUN PATH-LENGTH (A B G)
α(SETF (LABEL •G) +INF)
(SETF (LABEL A) 0)
(LOOP UNTIL (< (LABEL B) +INF)
DO α(SETF (LABEL •(REMOVE A G))
(1+ (βMIN α(LABEL •(NEIGHBORS •G))))))
(LABEL B))
Parallel programming (Ex2. Cilk)
Bringing parallelism to imperative C-like languages.
Foreshadows modern data parallelism, including GPGPU and MapReduce.
12
// Task parallel:
cilk int fib(int n) {
if (n < 2) { return n; }
else {
int x = spawn fib(n - 1);
int y = spawn fib(n - 2);
sync;
return x + y;
}
}
// Data parallel:
cilk_for (int i = 0; i < n; i++) {
a[i] = f(a[i]);
}
cilk::reducer_opadd<int> result(0);
cilk_for (int i = 0; i < n; i++) {
result += foo(i);
}
Parallel programming (Ex3. OpenMP)
Eventual mainstream language adoption of these ideas (1997).
Notice similarity to Cilk; set the stage for the future (Java, .NET, CUDA, ...).
13
// Task parallel:
int fib(int n) {
  if (n < 2) { return n; }
  else {
    int x, y;
    // (called from within an omp parallel / single region)
    #pragma omp task shared(x)
    x = fib(n - 1);
    #pragma omp task shared(y)
    y = fib(n - 2);
    #pragma omp taskwait
    return x + y;
  }
}
// Data parallel:
#pragma omp parallel for
for (int i = 0; i < n; i++) {
a[i] = f(a[i]);
}
#pragma omp parallel for reduction(+: result)
for (int i = 0; i < n; i++) {
result += foo(i);
}
Enter multi-core
14
[Figure omitted: chart © 2017 ACM]
15
Moore’s Law: what happened?
Clock speed doubled roughly every 18 months until 2005:
“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly
over the short term this rate can be expected to continue, if not to increase. Over the longer term, ... there is no reason
to believe it will not remain nearly constant for at least 10 years.”
-- Gordon Moore, 1965
"It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens.”
-- Gordon Moore, 2005
Power wall: higher frequency demands higher voltage, and dynamic power grows
with voltage squared times frequency, yielding leakage and heat beyond what
can be cooled; further progress required new innovation.
Result: you don’t get faster processors, you just get more of them.
16
Moore’s Law: stuttering
17
The renaissance: we must program all those cores!
The 2000’s was the concurrent programming renaissance.
Java, .NET, C++ went from threads, thread pools, locks, and events
to tasks, work stealing, and structured concurrency.
GPGPU became a thing, fueled by CUDA and architecture co-design.
Borrowed the best, easiest to program, ideas from academia. These
building blocks now used as the foundation for asynchrony.
Somehow lost focus on CSPs, Actors, etc.; Erlang and Go got this right.
18
Modest progress on safety
Recognition of the importance of immutable data structures.
Short-lived industry-wide infatuation with software transactional memory.
Some success stories: Haskell, Clojure, Intel TSX instructions.
But overall, the wrong solution to the wrong problem.
Shared-nothing programming in some domains (Erlang), but mostly not.
A common theme: safety remains (mostly) elusive to this day.
19
Why did it matter, anyway?
Servers already used concurrency to serve many users.
But “typical” client code did not enjoy this (no parallelism).
It still largely doesn’t. Amdahl’s Law is cruel: speedup = 1/((1-P) + P/N). (A worked example follows below.)
The beginning of the end of the Wintel Era:
No more “Andy giveth and Bill taketh away” virtuous cycle
From this was born mobile, cloud computing, and GPGPU-style
“supercomputer on every desk”. Less PC, but multi-core still matters.
20
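To make that concrete with illustrative numbers: if P = 0.9 of a program parallelizes and N = 16 cores are available, speedup = 1/(0.1 + 0.9/16) = 6.4x; even with infinitely many cores it caps at 1/(1 - P) = 10x. The serial fraction always wins in the end.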
Post multi-core era
21
The return of distributed programming
We have come full circle:
Distributed ⇒ concurrency ⇒ distributed.
Thanks to cloud, proliferation of devices (mobile + IoT).
Increasingly fine-grained distributed systems (microservices) look more
and more like concurrent systems. We learned a lot about building these.
How much applies to our distributed future?
Thankfully, a lot. Let’s look at 7 key lessons.
22
1: Think about communication first, and always
Ad-hoc communication leads to reliability and state management woes.
CSP, actors, RPC, and queues are good patterns. Stateless designs can help but
are often difficult to achieve; more realistically, state is ephemeral (recreatable).
Thinking about dependencies between services as “capabilities” can help
add structure and security. Better than dynamic, weak discovery.
Queuing theory: imbalanced producers/consumers cause resource
consumption issues, stuttering, and scaling problems. Think about flow
control (see the sketch below).
23
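A minimal Go sketch of that flow-control point, under illustrative sizes and workloads: a bounded channel gives the producer backpressure, so imbalance degrades into blocking rather than unbounded memory growth.

package main

import (
	"fmt"
	"time"
)

func main() {
	// A bounded buffer: once 4 items are queued, sends block,
	// pushing back on the producer instead of buffering without
	// bound and exhausting memory.
	queue := make(chan int, 4)

	go func() {
		for i := 0; i < 20; i++ {
			queue <- i // blocks whenever the consumer falls behind
		}
		close(queue)
	}()

	for item := range queue {
		time.Sleep(10 * time.Millisecond) // deliberately slow consumer
		fmt.Println("processed", item)
	}
}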
2: Schema helps; but don’t blindly trust it
Many concurrent systems use process-level “plugins” (COM, EJB, etc).
Schema is a religious topic. It helps in the same way static typing helps; if
you find static typing useful, you will probably find schema useful.
But don’t blindly trust it. The Internet works here; copy it if possible.
The server revs at a different rate than the client; the sketch below shows a tolerant reader.
Trust but verify. Or, as Postel said:
“Be conservative in what you do, be liberal in what you accept.”
24
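A minimal Go sketch of the “trust but verify” tolerant reader; the Reply type, field names, and defaulting rule are hypothetical: decode liberally (unknown fields from a newer server are ignored), verify what must parse, and default what is missing.

package main

import (
	"encoding/json"
	"fmt"
)

// Reply models only the fields this client needs. encoding/json is
// liberal in what it accepts: unknown fields from a newer server are
// ignored, and missing fields arrive as zero values we can default.
type Reply struct {
	Name  string `json:"name"`
	Count int    `json:"count"`
}

func main() {
	// A newer server has revved and added a field; this client still works.
	payload := []byte(`{"name":"foo","count":3,"added_in_v2":true}`)

	var r Reply
	if err := json.Unmarshal(payload, &r); err != nil {
		panic(err) // verify: reject what we cannot parse at all
	}
	if r.Count == 0 {
		r.Count = 1 // default a missing field rather than failing
	}
	fmt.Printf("%+v\n", r)
}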
3: Safety is important, but elusive
Safety comes in many forms: isolated » immutable » synchronized.
Lack of safety creates hazards: races, deadlocks, undefined behavior.
Message passing suffers from races too, just not data races (see the sketch below).
State machine discipline can help, regardless of whether concurrent
(multithreading) or distributed (CSPs, Actors, etc).
Relaxed consistency can help, but is harder to reason about.
In a distributed system the “source of truth” is less clear (by design).
25
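A minimal Go sketch of a race without a data race; the counter protocol is illustrative. One goroutine owns the state, so memory is safe, yet a read-modify-write split across two messages still loses updates under interleaving.

package main

import (
	"fmt"
	"sync"
)

// op is either a get (reply on the get channel) or a set.
type op struct {
	get   chan int
	set   int
	isSet bool
}

// counter owns the state exclusively: no shared memory, no data races.
func counter(ops chan op) {
	n := 0
	for o := range ops {
		if o.isSet {
			n = o.set
		} else {
			o.get <- n
		}
	}
}

func main() {
	ops := make(chan op)
	go counter(ops)

	// Two clients each increment via two separate messages. Their
	// get/set pairs can interleave, losing an increment: a logical
	// race, even though no memory is shared.
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			reply := make(chan int)
			ops <- op{get: reply}                     // read...
			ops <- op{set: <-reply + 1, isSet: true}  // ...then write: not atomic!
		}()
	}
	wg.Wait()

	reply := make(chan int)
	ops <- op{get: reply}
	fmt.Println("count =", <-reply) // may print 1 or 2, depending on interleaving
}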
4: Design for failure
Things will fail; don’t fight it. Thankfully:
“The unavoidable price of reliability is simplicity.” (Tony Hoare)
Always rendezvous and deal with failures intentionally.
Avoid “fire and forget” concurrency: it implies mutable state, implicit
dependencies, and complex failure modes. Think about parent/child
relationships (see the sketch below).
Design for replication and restartability.
Error recovery is a requirement for a reliable concurrent system.
26
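A minimal Go sketch of intentional rendezvous, with an illustrative work function and failure: every forked child reports an outcome, and the parent joins with all of them, so no failure is silently dropped.

package main

import (
	"errors"
	"fmt"
)

// work simulates a child task that may fail (illustrative only).
func work(id int) error {
	if id == 2 {
		return errors.New("task 2 failed")
	}
	return nil
}

func main() {
	const n = 4
	errs := make(chan error, n)

	// Fork: every child has a known parent that will rendezvous with it.
	for i := 0; i < n; i++ {
		go func(id int) {
			errs <- work(id) // always report, success or failure
		}(i)
	}

	// Join: deal with every outcome intentionally; nothing is
	// fire-and-forget, so no failure goes silently missing.
	var failed []error
	for i := 0; i < n; i++ {
		if err := <-errs; err != nil {
			failed = append(failed, err)
		}
	}
	fmt.Println("failures:", failed)
}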
5: From causality follows structure
Causality is the cascade of events that leads to an
action being taken. Complex in a concurrent system.
When context flows along causality boundaries, management
becomes easier, but it isn’t automatic (see the sketch below):
Cancellation and deadlines.
Security, identity, secrets management.
Tracing and logging.
Performance profiling (critical path).
27
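Go’s context.Context is one mainstream embodiment of flowing context along causality. A minimal sketch, with an illustrative trace key and timeouts: deadlines, cancellation, and request-scoped metadata propagate down the call chain without per-call plumbing.

package main

import (
	"context"
	"fmt"
	"time"
)

type traceKey struct{}

func fetch(ctx context.Context) error {
	// The trace ID and deadline arrived along the causal chain,
	// with no explicit parameters for either at this call site.
	fmt.Println("trace:", ctx.Value(traceKey{}))
	select {
	case <-time.After(200 * time.Millisecond): // simulated slow work
		return nil
	case <-ctx.Done():
		return ctx.Err() // cancelled or deadline exceeded upstream
	}
}

func main() {
	ctx := context.WithValue(context.Background(), traceKey{}, "req-42")
	ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
	defer cancel()

	fmt.Println(fetch(ctx)) // prints: context deadline exceeded
}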
6: Encode structure using concurrency patterns
Easier to understand, debug, manage context, … everything, really.
A task without a consumer is a silent error or leak waiting to happen.
Structured concurrency instills structure: fork/join, pipeline. Let dataflow
determine dependence for maximum performance (whether push or pull).
28
6: Encode structure using concurrency patterns
For example, this:

results := make(chan *Result, 8)
for _, url := range … {
  go crawl(url, results) // send to results as ready
}
var merged []*Result
for result := range results { // (assumes the crawlers close(results) when done)
  merged = append(merged, process(result)) // do final processing
}

and not this (even though it is “easier”):

for _, url := range pages {
  go crawl(url) // fire and forget; crawler silently mutates some data structures
}

29
7: Say less, declare/react more
Declarative and reactive patterns delegate hard problems to compilers,
runtimes, and frameworks: data locality, work division, synchronization.
Reactive models the event-oriented nature of an inherently asynchronous
system. It is a declarative specification without impedance mismatch.
Push is often the best for performance, but the hardest to program.
Serverless is a specialization of this idea (single event / single action).
30
counts = text_file.flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .reduceByKey(lambda a, b: a + b)

var keyups = Rx.Observable.fromEvent($input, 'keyup')
    .pluck('target', 'value')
    .filter(text => text.length > 2);
Recap
1. Think about communication first, and always.
2. Schema helps; but don’t blindly trust it.
3. Safety is important, but elusive.
4. Design for failure.
5. From causality follows structure.
6. Encode structure using concurrency patterns.
7. Say less, declare/react more.
Future programming models will increasingly make these “built-in.”
31
In conclusion
32
Our concurrent past; our distributed future
Many design philosophies shared. Hopefully more in the future.
Expect to see more inspiration from the early distributed programming
pioneers. Shift from low-level cloud infrastructure to higher-level models.
Reading old papers seems “academic” but can lead to insights.
“Those who cannot remember the past are condemned to repeat it.”
Programming clouds of machines will become as easy as multi-core.
33
Thank you
Joe Duffy <joe@pulumi.com>
http://joeduffyblog.com
https://twitter.com/xjoeduffyx
34
Watch the video with slide synchronization on InfoQ.com!
https://www.infoq.com/presentations/concurrency-distributed-computing