C++ Concurrency in Action
Practical Multithreading
Anthony Williams
MANNING
brief contents
1 ■ Hello, world of concurrency in C++!
2 ■ Managing threads
3 ■ Sharing data between threads
4 ■ Synchronizing concurrent operations
5 ■ The C++ memory model and operations on atomic types
6 ■ Designing lock-based concurrent data structures
7 ■ Designing lock-free concurrent data structures
8 ■ Designing concurrent code
9 ■ Advanced thread management
10 ■ Testing and debugging multithreaded applications
contents
preface
acknowledgments
about this book
about the cover illustration

1 Hello, world of concurrency in C++!
1.1 What is concurrency?
Concurrency in computer systems ■ Approaches to concurrency
1.2 Why use concurrency?
Using concurrency for separation of concerns ■ Using concurrency for performance ■ When not to use concurrency
1.3 Concurrency and multithreading in C++
History of multithreading in C++ ■ Concurrency support in the new standard ■ Efficiency in the C++ Thread Library ■ Platform-specific facilities
1.4 Getting started
Hello, Concurrent World
1.5 Summary

2 Managing threads
2.1 Basic thread management
Launching a thread ■ Waiting for a thread to complete ■ Waiting in exceptional circumstances ■ Running threads in the background
2.2 Passing arguments to a thread function
2.3 Transferring ownership of a thread
2.4 Choosing the number of threads at runtime
2.5 Identifying threads
2.6 Summary

3 Sharing data between threads
3.1 Problems with sharing data between threads
Race conditions ■ Avoiding problematic race conditions
3.2 Protecting shared data with mutexes
Using mutexes in C++ ■ Structuring code for protecting shared data ■ Spotting race conditions inherent in interfaces ■ Deadlock: the problem and a solution ■ Further guidelines for avoiding deadlock ■ Flexible locking with std::unique_lock ■ Transferring mutex ownership between scopes ■ Locking at an appropriate granularity
3.3 Alternative facilities for protecting shared data
Protecting shared data during initialization ■ Protecting rarely updated data structures ■ Recursive locking
3.4 Summary

4 Synchronizing concurrent operations
4.1 Waiting for an event or other condition
Waiting for a condition with condition variables ■ Building a thread-safe queue with condition variables
4.2 Waiting for one-off events with futures
Returning values from background tasks ■ Associating a task with a future ■ Making (std::)promises ■ Saving an exception for the future ■ Waiting from multiple threads
4.3 Waiting with a time limit
Clocks ■ Durations ■ Time points ■ Functions that accept timeouts
4.4 Using synchronization of operations to simplify code
Functional programming with futures ■ Synchronizing operations with message passing
4.5 Summary

5 The C++ memory model and operations on atomic types
5.1 Memory model basics
Objects and memory locations ■ Objects, memory locations, and concurrency ■ Modification orders
5.2 Atomic operations and types in C++
The standard atomic types ■ Operations on std::atomic_flag ■ Operations on std::atomic<bool> ■ Operations on std::atomic<T*>: pointer arithmetic ■ Operations on standard atomic integral types ■ The std::atomic<> primary class template ■ Free functions for atomic operations
5.3 Synchronizing operations and enforcing ordering
The synchronizes-with relationship ■ The happens-before relationship ■ Memory ordering for atomic operations ■ Release sequences and synchronizes-with ■ Fences ■ Ordering nonatomic operations with atomics
5.4 Summary

6 Designing lock-based concurrent data structures
6.1 What does it mean to design for concurrency?
Guidelines for designing data structures for concurrency
6.2 Lock-based concurrent data structures
A thread-safe stack using locks ■ A thread-safe queue using locks and condition variables ■ A thread-safe queue using fine-grained locks and condition variables
6.3 Designing more complex lock-based data structures
Writing a thread-safe lookup table using locks ■ Writing a thread-safe list using locks
6.4 Summary

7 Designing lock-free concurrent data structures
7.1 Definitions and consequences
Types of nonblocking data structures ■ Lock-free data structures ■ Wait-free data structures ■ The pros and cons of lock-free data structures
7.2 Examples of lock-free data structures
Writing a thread-safe stack without locks ■ Stopping those pesky leaks: managing memory in lock-free data structures ■ Detecting nodes that can’t be reclaimed using hazard pointers ■ Detecting nodes in use with reference counting ■ Applying the memory model to the lock-free stack ■ Writing a thread-safe queue without locks
7.3 Guidelines for writing lock-free data structures
Guideline: use std::memory_order_seq_cst for prototyping ■ Guideline: use a lock-free memory reclamation scheme ■ Guideline: watch out for the ABA problem ■ Guideline: identify busy-wait loops and help the other thread
7.4 Summary

8 Designing concurrent code
8.1 Techniques for dividing work between threads
Dividing data between threads before processing begins ■ Dividing data recursively ■ Dividing work by task type
8.2 Factors affecting the performance of concurrent code
How many processors? ■ Data contention and cache ping-pong ■ False sharing ■ How close is your data? ■ Oversubscription and excessive task switching
8.3 Designing data structures for multithreaded performance
Dividing array elements for complex operations ■ Data access patterns in other data structures
8.4 Additional considerations when designing for concurrency
Exception safety in parallel algorithms ■ Scalability and Amdahl’s law ■ Hiding latency with multiple threads ■ Improving responsiveness with concurrency
8.5 Designing concurrent code in practice
A parallel implementation of std::for_each ■ A parallel implementation of std::find ■ A parallel implementation of std::partial_sum
8.6 Summary

9 Advanced thread management
9.1 Thread pools
The simplest possible thread pool ■ Waiting for tasks submitted to a thread pool ■ Tasks that wait for other tasks ■ Avoiding contention on the work queue ■ Work stealing
9.2 Interrupting threads
Launching and interrupting another thread ■ Detecting that a thread has been interrupted ■ Interrupting a condition variable wait ■ Interrupting a wait on std::condition_variable_any ■ Interrupting other blocking calls ■ Handling interruptions ■ Interrupting background tasks on application exit
9.3 Summary

10 Testing and debugging multithreaded applications
10.1 Types of concurrency-related bugs
Unwanted blocking ■ Race conditions
10.2 Techniques for locating concurrency-related bugs
Reviewing code to locate potential bugs ■ Locating concurrency-related bugs by testing ■ Designing for testability ■ Multithreaded testing techniques ■ Structuring multithreaded test code ■ Testing the performance of multithreaded code
10.3 Summary

appendix A Brief reference for some C++11 language features
appendix B Brief comparison of concurrency libraries
appendix C A message-passing framework and complete ATM example
appendix D C++ Thread Library reference
resources
index
preface
I encountered the concept of multithreaded code while working at my first job after I
left college. We were writing a data processing application that had to populate a data-
base with incoming data records. There was a lot of data, but each record was inde-
pendent and required a reasonable amount of processing before it could be inserted
into the database. To take full advantage of the power of our 10-CPU UltraSPARC, we
ran the code in multiple threads, each thread processing its own set of incoming
records. We wrote the code in C++, using POSIX threads, and made a fair number of
mistakes—multithreading was new to all of us—but we got there in the end. It was also
while working on this project that I first became aware of the C++ Standards Commit-
tee and the freshly published C++ Standard.
I have had a keen interest in multithreading and concurrency ever since. Where
others saw it as difficult, complex, and a source of problems, I saw it as a powerful tool
that could enable your code to take advantage of the available hardware to run faster.
Later on I would learn how it could be used to improve the responsiveness and perfor-
mance of applications even on single-core hardware, by using multiple threads to hide
the latency of time-consuming operations such as I/O. I also learned how it worked at
the OS level and how Intel CPUs handled task switching.
Meanwhile, my interest in C++ brought me in contact with the ACCU and then the
C++ Standards panel at BSI, as well as Boost. I followed the initial development of
the Boost Thread Library with interest, and when it was abandoned by the original
developer, I jumped at the chance to get involved. I have been the primary developer
and maintainer of the Boost Thread Library ever since.
As the work of the C++ Standards Committee shifted from fixing defects in the exist-
ing standard to writing proposals for the next standard (named C++0x in the hope
that it would be finished by 2009, and now officially C++11, because it was finally pub-
lished in 2011), I got more involved with BSI and started drafting proposals of my own.
Once it became clear that multithreading was on the agenda, I jumped in with both
feet and authored or coauthored many of the multithreading and concurrency-
related proposals that shaped this part of the new standard. I feel privileged to have
had the opportunity to combine two of my major computer-related interests—C++
and multithreading—in this way.
This book draws on all my experience with both C++ and multithreading and aims
to teach other C++ developers how to use the C++11 Thread Library safely and effi-
ciently. I also hope to impart some of my enthusiasm for the subject along the way.
acknowledgments
I will start by saying a big “Thank you” to my wife, Kim, for all the love and support she
has given me while writing this book. It has occupied a significant part of my spare
time for the last four years, and without her patience, support, and understanding, I
couldn’t have managed it.
Second, I would like to thank the team at Manning who have made this book possi-
ble: Marjan Bace, publisher; Michael Stephens, associate publisher; Cynthia Kane, my
development editor; Karen Tegtmeyer, review editor; Linda Recktenwald, my copy-
editor; Katie Tennant, my proofreader; and Mary Piergies, the production manager.
Without their efforts you would not be reading this book right now.
I would also like to thank the other members of the C++ Standards Committee
who wrote committee papers on the multithreading facilities: Andrei Alexandrescu,
Pete Becker, Bob Blainer, Hans Boehm, Beman Dawes, Lawrence Crowl, Peter Dimov,
Jeff Garland, Kevlin Henney, Howard Hinnant, Ben Hutchings, Jan Kristofferson, Doug
Lea, Paul McKenney, Nick McLaren, Clark Nelson, Bill Pugh, Raul Silvera, Herb Sutter,
Detlef Vollmann, and Michael Wong, plus all those who commented on the papers, dis-
cussed them at the committee meetings, and otherwise helped shape the multithread-
ing and concurrency support in C++11.
Finally, I would like to thank the following people, whose suggestions have greatly
improved this book: Dr. Jamie Allsop, Peter Dimov, Howard Hinnant, Rick Molloy,
Jonathan Wakely, and Dr. Russel Winder, with special thanks to Russel for his detailed
reviews and to Jonathan who, as technical proofreader, painstakingly checked all the
content for outright errors in the final manuscript during production. (Any remaining
mistakes are of course all mine.) In addition I’d like to thank my panel of reviewers:
Ryan Stephens, Neil Horlock, John Taylor Jr., Ezra Jivan, Joshua Heyer, Keith S. Kim,
Michele Galli, Mike Tian-Jian Jiang, David Strong, Roger Orr, Wagner Rick, Mike Buksas,
and Bas Vodde. Also, thanks to the readers of the MEAP edition who took the time to
point out errors or highlight areas that needed clarifying.
about this book
This book is an in-depth guide to the concurrency and multithreading facilities from the
new C++ Standard, from the basic usage of std::thread, std::mutex, and std::async,
to the complexities of atomic operations and the memory model.
Roadmap
The first four chapters introduce the facilities provided by the Thread Library
and show how they can be used.
Chapter 5 covers the low-level nitty-gritty of the memory model and atomic opera-
tions, including how atomic operations can be used to impose ordering constraints on
other code, and marks the end of the introductory chapters.
Chapters 6 and 7 start the coverage of higher-level topics, with some examples of
how to use the basic facilities to build more complex data structures—lock-based data
structures in chapter 6, and lock-free data structures in chapter 7.
Chapter 8 continues the higher-level topics, with guidelines for designing multi-
threaded code, coverage of the issues that affect performance, and example imple-
mentations of various parallel algorithms.
Chapter 9 covers thread management—thread pools, work queues, and interrupt-
ing operations.
Chapter 10 covers testing and debugging—types of bugs, techniques for locating
them, how to test for them, and so forth.
The appendixes include a brief description of some of the new language facili-
ties introduced with the new standard that are relevant to multithreading, the
implementation details of the message-passing library mentioned in chapter 4, and a
complete reference to the C++11 Thread Library.
Who should read this book
If you're writing multithreaded code in C++, you should read this book. If you're using
the new multithreading facilities from the C++ Standard Library, this book is an essen-
tial guide. If you’re using alternative thread libraries, the guidelines and techniques
from the later chapters should still prove useful.
A good working knowledge of C++ is assumed, though familiarity with the new lan-
guage features is not—these are covered in appendix A. Prior knowledge or experience
of multithreaded programming is not assumed, though it may be useful.
How to use this book
If you’ve never written multithreaded code before, I suggest reading this book sequen-
tially from beginning to end, though possibly skipping the more detailed parts of
chapter 5. Chapter 7 relies heavily on the material in chapter 5, so if you skipped chap-
ter 5, you should save chapter 7 until you’ve read it.
If you’ve not used the new C++11 language facilities before, it might be worth
skimming appendix A before you start to ensure that you’re up to speed with the
examples in the book. The uses of the new language facilities are highlighted in
the text, though, and you can always flip to the appendix if you encounter something
you’ve not seen before.
If you have extensive experience with writing multithreaded code in other environ-
ments, the beginning chapters are probably still worth skimming so you can see how
the facilities you know map onto the new standard C++ ones. If you’re going to be
doing any low-level work with atomic variables, chapter 5 is a must. Chapter 8 is worth
reviewing to ensure that you’re familiar with things like exception safety in multi-
threaded C++. If you have a particular task in mind, the index and table of contents
should help you find a relevant section quickly.
Once you’re up to speed on the use of the C++ Thread Library, appendix D should
continue to be useful, such as for looking up the exact details of each class and func-
tion call. You may also like to dip back into the main chapters from time to time to
refresh your use of a particular construct or look at the sample code.
Code conventions and downloads
All source code in listings or in text is in a fixed-width font like this to separate it
from ordinary text. Code annotations accompany many of the listings, highlighting
important concepts. In some cases, numbered bullets link to explanations that follow
the listing.
Source code for all working examples in this book is available for download from
the publisher’s website at www.manning.com/CPlusPlusConcurrencyinAction.
Software requirements
To use the code from this book unchanged, you’ll need a recent C++ compiler that
supports the new C++11 language features used in the examples (see appendix A),
and you’ll need a copy of the C++ Standard Thread Library.
At the time of writing, g++ is the only compiler I’m aware of that ships with an
implementation of the Standard Thread Library, although the Microsoft Visual Studio
2011 preview also includes an implementation. The g++ implementation of the
Thread Library was first introduced in a basic form in g++ 4.3 and extended in subse-
quent releases. g++ 4.3 also introduced the first support for some of the new C++11
language features; more of the new language features are supported in each subse-
quent release. See the g++ C++11 status page for details.1
Microsoft Visual Studio 2010 provides some of the new C++11 language features,
such as rvalue references and lambda functions, but doesn't ship with an implementa-
tion of the Thread Library.
My company, Just Software Solutions Ltd, sells a complete implementation of the
C++11 Standard Thread Library for Microsoft Visual Studio 2005, Microsoft Visual
Studio 2008, Microsoft Visual Studio 2010, and various versions of g++.2
This imple-
mentation has been used for testing the examples in this book.
The Boost Thread Library3
provides an API that’s based on the C++11 Standard
Thread Library proposals and is portable to many platforms. Most of the examples
from the book can be modified to work with the Boost Thread Library by judicious
replacement of std:: with boost:: and use of the appropriate #include directives.
There are a few facilities that are either not supported (such as std::async) or have
different names (such as boost::unique_future) in the Boost Thread Library.
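For instance, the Hello, Concurrent World program from chapter 1 might be recast for Boost roughly as follows (this sketch is mine, and assumes Boost.Thread is installed and linked):
#include <boost/thread.hpp>   // instead of <thread>
#include <iostream>

void hello()
{
    std::cout << "Hello Concurrent World\n";
}

int main()
{
    boost::thread t(hello);   // boost:: in place of std::
    t.join();
}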
Author Online
Purchase of C++ Concurrency in Action includes free access to a private web forum run by
Manning Publications where you can make comments about the book, ask technical ques-
tions, and receive help from the author and from other users. To access the forum and
subscribe to it, point your web browser to www.manning.com/CPlusPlusConcurrencyinAction.
This page provides information on how to get on the forum once you’re regis-
tered, what kind of help is available, and the rules of conduct on the forum.
Manning’s commitment to our readers is to provide a venue where a meaningful
dialogue between individual readers and between readers and the author can take
place. It’s not a commitment to any specific amount of participation on the part of the
author, whose contribution to the book’s forum remains voluntary (and unpaid). We
suggest you try asking the author some challenging questions, lest his interest stray!
The Author Online forum and the archives of previous discussions will be accessi-
ble from the publisher’s website as long as the book is in print.
1. GNU Compiler Collection C++0x/C++11 status page, http://gcc.gnu.org/projects/cxx0x.html.
2. The just::thread implementation of the C++ Standard Thread Library, http://www.stdthread.co.uk.
3. The Boost C++ library collection, http://www.boost.org.
about the cover illustration
The illustration on the cover of C++ Concurrency in Action is captioned “Habit of a
Lady of Japan.” The image is taken from the four-volume Collection of the Dress of
Different Nations by Thomas Jefferys, published in London between 1757 and 1772. The
collection includes beautiful hand-colored copperplate engravings of costumes from
around the world and has influenced theatrical costume design since its publication.
The diversity of the drawings in the compendium speaks vividly of the richness of the
costumes presented on the London stage over 200 years ago. The costumes, both his-
torical and contemporaneous, offered a glimpse into the dress customs of people liv-
ing in different times and in different countries, making them come alive for London
theater audiences.
Dress codes have changed in the last century and the diversity by region, so rich in
the past, has faded away. It’s now often hard to tell the inhabitant of one continent
from another. Perhaps, trying to view it optimistically, we’ve traded a cultural and
visual diversity for a more varied personal life—or a more varied and interesting intel-
lectual and technical life.
We at Manning celebrate the inventiveness, the initiative, and the fun of the com-
puter business with book covers based on the rich diversity of regional and theatrical
life of two centuries ago, brought back to life by the pictures from this collection.
1 Hello, world of concurrency in C++!

This chapter covers
■ What is meant by concurrency and multithreading
■ Why you might want to use concurrency and multithreading in your applications
■ Some of the history of the support for concurrency in C++
■ What a simple multithreaded C++ program looks like
These are exciting times for C++ users. Thirteen years after the original C++ Stan-
dard was published in 1998, the C++ Standards Committee is giving the language
and its supporting library a major overhaul. The new C++ Standard (referred to as
C++11 or C++0x) was published in 2011 and brings with it a whole swathe of
changes that will make working with C++ easier and more productive.
One of the most significant new features in the C++11 Standard is the support of
multithreaded programs. For the first time, the C++ Standard will acknowledge the
existence of multithreaded applications in the language and provide components in
the library for writing multithreaded applications. This will make it possible to write
multithreaded C++ programs without relying on platform-specific extensions and thus
allow writing portable multithreaded code with guaranteed behavior. It also comes at a
time when programmers are increasingly looking to concurrency in general, and multi-
threaded programming in particular, to improve application performance.
This book is about writing programs in C++ using multiple threads for concur-
rency and the C++ language features and library facilities that make that possible. I’ll
start by explaining what I mean by concurrency and multithreading and why you
would want to use concurrency in your applications. After a quick detour into why
you might not want to use it in your applications, I’ll give an overview of the concur-
rency support in C++, and I’ll round off this chapter with a simple example of C++
concurrency in action. Readers experienced with developing multithreaded applica-
tions may wish to skip the early sections. In subsequent chapters I’ll cover more
extensive examples and look at the library facilities in more depth. The book will fin-
ish with an in-depth reference to all the C++ Standard Library facilities for multi-
threading and concurrency.
So, what do I mean by concurrency and multithreading?
1.1 What is concurrency?
At the simplest and most basic level, concurrency is about two or more separate activi-
ties happening at the same time. We encounter concurrency as a natural part of life;
we can walk and talk at the same time or perform different actions with each hand,
and of course we each go about our lives independently of each other—you can watch
football while I go swimming, and so on.
1.1.1 Concurrency in computer systems
When we talk about concurrency in terms of computers, we mean a single system per-
forming multiple independent activities in parallel, rather than sequentially, or one
after the other. It isn’t a new phenomenon: multitasking operating systems that allow
a single computer to run multiple applications at the same time through task switch-
ing have been commonplace for many years, and high-end server machines with mul-
tiple processors that enable genuine concurrency have been available for even longer.
What is new is the increased prevalence of computers that can genuinely run multiple
tasks in parallel rather than just giving the illusion of doing so.
Historically, most computers have had one processor, with a single processing
unit or core, and this remains true for many desktop machines today. Such a
machine can really only perform one task at a time, but it can switch between tasks
many times per second. By doing a bit of one task and then a bit of another and so
on, it appears that the tasks are happening concurrently. This is called task switching.
We still talk about concurrency with such systems; because the task switches are so fast,
you can’t tell at which point a task may be suspended as the processor switches to
another one. The task switching provides an illusion of concurrency to both the user
and the applications themselves. Because there is only an illusion of concurrency, the
behavior of applications may be subtly different when executing in a single-processor
task-switching environment compared to when executing in an environment with
true concurrency. In particular, incorrect assumptions about the memory model
(covered in chapter 5) may not show up in such an environment. This is discussed
in more depth in chapter 10.
Computers containing multiple processors have been used for servers and high-
performance computing tasks for a number of years, and now computers based on
processors with more than one core on a single chip (multicore processors) are becom-
ing increasingly common as desktop machines too. Whether they have multiple proces-
sors or multiple cores within a processor (or both), these computers are capable of
genuinely running more than one task in parallel. We call this hardware concurrency.
Figure 1.1 shows an idealized scenario of a computer with precisely two tasks to do,
each divided into 10 equal-size chunks. On a dual-core machine (which has two pro-
cessing cores), each task can execute on its own core. On a single-core machine doing
task switching, the chunks from each task are interleaved. But they are also spaced out
a bit (in the diagram this is shown by the gray bars separating the chunks being
thicker than the separator bars shown for the dual-core machine); in order to do the
interleaving, the system has to perform a context switch every time it changes from one
task to another, and this takes time. In order to perform a context switch, the OS has
to save the CPU state and instruction pointer for the currently running task, work out
which task to switch to, and reload the CPU state for the task being switched to. The
CPU will then potentially have to load the memory for the instructions and data for
the new task into cache, which can prevent the CPU from executing any instructions,
causing further delay.
Though the availability of concurrency in the hardware is most obvious with multi-
processor or multicore systems, some processors can execute multiple threads on a
single core. The important factor to consider is really the number of hardware threads:
the measure of how many independent tasks the hardware can genuinely run concur-
rently. Even with a system that has genuine hardware concurrency, it’s easy to have
more tasks than the hardware can run in parallel, so task switching is still used in these
cases. For example, on a typical desktop computer there may be hundreds of tasks
Figure 1.1 Two approaches to concurrency: parallel execution on a dual-core
machine versus task switching on a single-core machine
31. 4 CHAPTER 1 Hello, world of concurrency in C++!
running, performing background operations, even when the computer is nominally
idle. It’s the task switching that allows these background tasks to run and allows you to
run your word processor, compiler, editor, and web browser (or any combination of
applications) all at once. Figure 1.2 shows task switching among four tasks on a dual-
core machine, again for an idealized scenario with the tasks divided neatly into equal-
size chunks. In practice, many issues will make the divisions uneven and the scheduling
irregular. Some of these issues are covered in chapter 8 when we look at factors affect-
ing the performance of concurrent code.
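As a small sketch (not one of the book's numbered listings), the Standard Thread Library exposes this count through std::thread::hardware_concurrency(), described fully in chapter 2; the value is only a hint and may be 0 if it can't be determined:
#include <iostream>
#include <thread>

int main()
{
    // A hint at the number of hardware threads; the standard allows 0
    // to be returned when the value can't be computed.
    unsigned int const hw_threads = std::thread::hardware_concurrency();
    std::cout << "hardware threads: " << hw_threads << "\n";
}
Chapter 8 shows how such a hint can feed into the decision of how many threads to launch.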
All the techniques, functions, and classes covered in this book can be used whether
your application is running on a machine with one single-core processor or on a
machine with many multicore processors and are not affected by whether the concur-
rency is achieved through task switching or by genuine hardware concurrency. But as
you may imagine, how you make use of concurrency in your application may well
depend on the amount of hardware concurrency available. This is covered in chapter 8,
where I cover the issues involved with designing concurrent code in C++.
1.1.2 Approaches to concurrency
Imagine for a moment a pair of programmers working together on a software project.
If your developers are in separate offices, they can go about their work peacefully,
without being disturbed by each other, and they each have their own set of reference
manuals. However, communication is not straightforward; rather than just turning
around and talking to each other, they have to use the phone or email or get up and
walk to each other’s office. Also, you have the overhead of two offices to manage and mul-
tiple copies of reference manuals to purchase.
Now imagine that you move your developers into the same office. They can now
talk to each other freely to discuss the design of the application, and they can easily
draw diagrams on paper or on a whiteboard to help with design ideas or explanations.
You now have only one office to manage, and one set of resources will often suffice.
On the negative side, they might find it harder to concentrate, and there may be
issues with sharing resources (“Where’s the reference manual gone now?”).
These two ways of organizing your developers illustrate the two basic approaches
to concurrency. Each developer represents a thread, and each office represents a pro-
cess. The first approach is to have multiple single-threaded processes, which is similar
to having each developer in their own office, and the second approach is to have mul-
tiple threads in a single process, which is like having two developers in the same office.
Figure 1.2 Task switching of four tasks on two cores
You can combine these in an arbitrary fashion and have multiple processes, some of
which are multithreaded and some of which are single-threaded, but the principles
are the same. Let’s now have a brief look at these two approaches to concurrency in
an application.
CONCURRENCY WITH MULTIPLE PROCESSES
The first way to make use of concurrency within an appli-
cation is to divide the application into multiple, separate,
single-threaded processes that are run at the same time,
much as you can run your web browser and word proces-
sor at the same time. These separate processes can then
pass messages to each other through all the normal inter-
process communication channels (signals, sockets, files,
pipes, and so on), as shown in figure 1.3. One downside is
that such communication between processes is often
either complicated to set up or slow or both, because
operating systems typically provide a lot of protection
between processes to avoid one process accidentally modi-
fying data belonging to another process. Another down-
side is that there’s an inherent overhead in running
multiple processes: it takes time to start a process, the
operating system must devote internal resources to man-
aging the process, and so forth.
Of course, it’s not all downside: the added protection operating systems typically
provide between processes and the higher-level communication mechanisms mean
that it can be easier to write safe concurrent code with processes rather than threads.
Indeed, environments such as that provided for the Erlang programming language
use processes as the fundamental building block of concurrency to great effect.
Using separate processes for concurrency also has an additional advantage—you can
run the separate processes on distinct machines connected over a network. Though this
increases the communication cost, on a carefully designed system it can be a cost-
effective way of increasing the available parallelism and improving performance.
CONCURRENCY WITH MULTIPLE THREADS
The alternative approach to concurrency is to run multiple threads in a single pro-
cess. Threads are much like lightweight processes: each thread runs independently of
the others, and each thread may run a different sequence of instructions. But all
threads in a process share the same address space, and most of the data can be
accessed directly from all threads—global variables remain global, and pointers or ref-
erences to objects or data can be passed around among threads. Although it’s often
possible to share memory among processes, this is complicated to set up and often
hard to manage, because memory addresses of the same data aren’t necessarily the
same in different processes. Figure 1.4 shows two threads within a process communi-
cating through shared memory.
Figure 1.3 Communication between a pair of processes running concurrently
The shared address space and lack of protection of data
between threads makes the overhead associated with using multi-
ple threads much smaller than that from using multiple pro-
cesses, because the operating system has less bookkeeping to do.
But the flexibility of shared memory also comes with a price: if
data is accessed by multiple threads, the application programmer
must ensure that the view of data seen by each thread is consistent
whenever it is accessed. The issues surrounding sharing data
between threads and the tools to use and guidelines to follow to
avoid problems are covered throughout this book, notably in
chapters 3, 4, 5, and 8. The problems are not insurmountable,
provided suitable care is taken when writing the code, but they do
mean that a great deal of thought must go into the communica-
tion between threads.
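To make the shared-address-space point concrete, here is a minimal sketch (not one of the book's numbered listings): a second thread writes to a global object, and because main() waits for it with join(), the subsequent read is safe:
#include <iostream>
#include <string>
#include <thread>

std::string shared_message;            // global data is visible to every thread

void producer()
{
    shared_message = "written by another thread";
}

int main()
{
    std::thread t(producer);           // the new thread writes the shared global
    t.join();                          // wait for it, so the read below is race free
    std::cout << shared_message << "\n";
}
Without the join() the two threads could access shared_message at the same time; the tools for protecting data in that situation are the subject of chapter 3.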
The low overhead associated with launching and communicat-
ing between multiple threads within a process compared to launching and communi-
cating between multiple single-threaded processes means that this is the favored
approach to concurrency in mainstream languages including C++, despite the poten-
tial problems arising from the shared memory. In addition, the C++ Standard doesn’t
provide any intrinsic support for communication between processes, so applications
that use multiple processes will have to rely on platform-specific APIs to do so. This book
therefore focuses exclusively on using multithreading for concurrency, and future refer-
ences to concurrency assume that this is achieved by using multiple threads.
Having clarified what we mean by concurrency, let’s now look at why you would use
concurrency in your applications.
1.2 Why use concurrency?
There are two main reasons to use concurrency in an application: separation of con-
cerns and performance. In fact, I’d go so far as to say that they’re pretty much the only
reasons to use concurrency; anything else boils down to one or the other (or maybe even
both) when you look hard enough (well, except for reasons like “because I want to”).
1.2.1 Using concurrency for separation of concerns
Separation of concerns is almost always a good idea when writing software; by group-
ing related bits of code together and keeping unrelated bits of code apart, you can
make your programs easier to understand and test, and thus less likely to contain
bugs. You can use concurrency to separate distinct areas of functionality, even when
the operations in these distinct areas need to happen at the same time; without the
explicit use of concurrency you either have to write a task-switching framework or
actively make calls to unrelated areas of code during an operation.
Figure 1.4 Communication between a pair of threads running concurrently in a single process
Consider a processing-intensive application with a user interface, such as a DVD
player application for a desktop computer. Such an application fundamentally has two
sets of responsibilities: not only does it have to read the data from the disk, decode the
images and sound, and send them to the graphics and sound hardware in a timely
fashion so the DVD plays without glitches, but it must also take input from the user,
such as when the user clicks Pause or Return To Menu, or even Quit. In a single
thread, the application has to check for user input at regular intervals during the play-
back, thus conflating the DVD playback code with the user interface code. By using
multithreading to separate these concerns, the user interface code and DVD playback
code no longer have to be so closely intertwined; one thread can handle the user
interface and another the DVD playback. There will have to be interaction between
them, such as when the user clicks Pause, but now these interactions are directly
related to the task at hand.
This gives the illusion of responsiveness, because the user interface thread can typ-
ically respond immediately to a user request, even if the response is simply to display a
busy cursor or Please Wait message while the request is conveyed to the thread doing
the work. Similarly, separate threads are often used to run tasks that must run contin-
uously in the background, such as monitoring the filesystem for changes in a desktop
search application. Using threads in this way generally makes the logic in each thread
much simpler, because the interactions between them can be limited to clearly identi-
fiable points, rather than having to intersperse the logic of the different tasks.
In this case, the number of threads is independent of the number of CPU cores
available, because the division into threads is based on the conceptual design rather
than an attempt to increase throughput.
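A stripped-down sketch of that structure might look like this (the function and variable names are invented for illustration, and the std::atomic<bool> flag used to signal the playback thread is covered in chapter 5):
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<bool> quit_requested(false);   // set by the UI thread, read by playback

void playback_loop()                        // stand-in for the DVD playback code
{
    while(!quit_requested.load())
    {
        // decode and display the next frame here
    }
}

int main()
{
    std::thread playback(playback_loop);    // playback gets its own thread
    std::cin.get();                         // "user interface": pressing Enter means Quit
    quit_requested.store(true);             // tell the playback thread to stop
    playback.join();
}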
1.2.2 Using concurrency for performance
Multiprocessor systems have existed for decades, but until recently they were mostly
found only in supercomputers, mainframes, and large server systems. But chip manu-
facturers have increasingly been favoring multicore designs with 2, 4, 16, or more pro-
cessors on a single chip over better performance with a single core. Consequently,
multicore desktop computers, and even multicore embedded devices, are now
increasingly prevalent. The increased computing power of these machines comes not
from running a single task faster but from running multiple tasks in parallel. In the
past, programmers have been able to sit back and watch their programs get faster with
each new generation of processors, without any effort on their part. But now, as Herb
Sutter put it, “The free lunch is over.”1
If software is to take advantage of this increased
computing power, it must be designed to run multiple tasks concurrently. Programmers must
therefore take heed, and those who have hitherto ignored concurrency must now
look to add it to their toolbox.
1. “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software,” Herb Sutter, Dr. Dobb’s Journal, 30(3), March 2005. http://www.gotw.ca/publications/concurrency-ddj.htm.
There are two ways to use concurrency for performance. The first, and most obvious, is to divide a single task into parts and run each in parallel, thus reducing the total runtime. This is task parallelism. Although this sounds straightforward, it can be
quite a complex process, because there may be many dependencies between the vari-
ous parts. The divisions may be either in terms of processing—one thread performs
one part of the algorithm while another thread performs a different part—or in terms
of data—each thread performs the same operation on different parts of the data. This
latter approach is called data parallelism.
Algorithms that are readily susceptible to such parallelism are frequently called
embarrassingly parallel. Despite the implications that you might be embarrassed to have
code so easy to parallelize, this is a good thing: other terms I’ve encountered for such
algorithms are naturally parallel and conveniently concurrent. Embarrassingly parallel algo-
rithms have good scalability properties—as the number of available hardware threads
goes up, the parallelism in the algorithm can be increased to match. Such an algo-
rithm is the perfect embodiment of the adage, “Many hands make light work.” For
those parts of the algorithm that aren’t embarrassingly parallel, you might be able to
divide the algorithm into a fixed (and therefore not scalable) number of parallel
tasks. Techniques for dividing tasks between threads are covered in chapter 8.
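As a minimal sketch of data parallelism (again, not one of the book's listings), two threads can each sum half of a container, with the partial results combined once both threads have been joined:
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main()
{
    std::vector<int> data(1000000, 1);
    long first_half = 0, second_half = 0;
    auto const mid = data.begin() + data.size() / 2;

    // Each thread performs the same operation on its own part of the data.
    std::thread t1([&]{ first_half  = std::accumulate(data.begin(), mid, 0L); });
    std::thread t2([&]{ second_half = std::accumulate(mid, data.end(), 0L); });
    t1.join();
    t2.join();
    std::cout << "total: " << (first_half + second_half) << "\n";
}
Chapter 2 covers passing work to std::thread, and chapter 8 discusses how to choose the split.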
The second way to use concurrency for performance is to use the available paral-
lelism to solve bigger problems; rather than processing one file at a time, process 2 or
10 or 20, as appropriate. Although this is really just an application of data parallelism,
by performing the same operation on multiple sets of data concurrently, there’s a dif-
ferent focus. It still takes the same amount of time to process one chunk of data, but
now more data can be processed in the same amount of time. Obviously, there are lim-
its to this approach too, and this won’t be beneficial in all cases, but the increase in
throughput that comes from such an approach can actually make new things possi-
ble—increased resolution in video processing, for example, if different areas of the
picture can be processed in parallel.
1.2.3 When not to use concurrency
It’s just as important to know when not to use concurrency as it is to know when to use
it. Fundamentally, the only reason not to use concurrency is when the benefit is not
worth the cost. Code using concurrency is harder to understand in many cases, so
there’s a direct intellectual cost to writing and maintaining multithreaded code, and
the additional complexity can also lead to more bugs. Unless the potential perfor-
mance gain is large enough or separation of concerns clear enough to justify the addi-
tional development time required to get it right and the additional costs associated
with maintaining multithreaded code, don’t use concurrency.
Also, the performance gain might not be as large as expected; there’s an inherent
overhead associated with launching a thread, because the OS has to allocate the associ-
ated kernel resources and stack space and then add the new thread to the scheduler,
all of which takes time. If the task being run on the thread is completed quickly, the
actual time taken by the task may be dwarfed by the overhead of launching the thread,
possibly making the overall performance of the application worse than if the task had
been executed directly by the spawning thread.
Furthermore, threads are a limited resource. If you have too many threads run-
ning at once, this consumes OS resources and may make the system as a whole run
slower. Not only that, but using too many threads can exhaust the available memory or
address space for a process, because each thread requires a separate stack space. This
is particularly a problem for 32-bit processes with a flat architecture where there’s a
4 GB limit in the available address space: if each thread has a 1 MB stack (as is typical on
many systems), then the address space would be all used up with 4096 threads, with-
out allowing for any space for code or static data or heap data. Although 64-bit (or
larger) systems don’t have this direct address-space limit, they still have finite resources:
if you run too many threads, this will eventually cause problems. Though thread pools
(see chapter 9) can be used to limit the number of threads, these are not a silver bul-
let, and they do have their own issues.
If the server side of a client/server application launches a separate thread for each
connection, this works fine for a small number of connections, but can quickly
exhaust system resources by launching too many threads if the same technique is used
for a high-demand server that has to handle many connections. In this scenario, care-
ful use of thread pools can provide optimal performance (see chapter 9).
Finally, the more threads you have running, the more context switching the oper-
ating system has to do. Each context switch takes time that could be spent doing use-
ful work, so at some point adding an extra thread will actually reduce the overall
application performance rather than increase it. For this reason, if you’re trying to
achieve the best possible performance of the system, it’s necessary to adjust the num-
ber of threads running to take account of the available hardware concurrency (or
lack of it).
Use of concurrency for performance is just like any other optimization strategy: it
has potential to greatly improve the performance of your application, but it can also
complicate the code, making it harder to understand and more prone to bugs. There-
fore it’s only worth doing for those performance-critical parts of the application
where there’s the potential for measurable gain. Of course, if the potential for perfor-
mance gains is only secondary to clarity of design or separation of concerns, it may
still be worth using a multithreaded design.
Assuming that you’ve decided you do want to use concurrency in your application,
whether for performance, separation of concerns, or because it’s “multithreading
Monday,” what does that mean for C++ programmers?
1.3 Concurrency and multithreading in C++
Standardized support for concurrency through multithreading is a new thing for C++.
It’s only with the upcoming C++11 Standard that you’ll be able to write multithreaded
code without resorting to platform-specific extensions. In order to understand the
rationale behind lots of the decisions in the new Standard C++ Thread Library, it’s
important to understand the history.
1.3.1 History of multithreading in C++
The 1998 C++ Standard doesn’t acknowledge the existence of threads, and the opera-
tional effects of the various language elements are written in terms of a sequential
abstract machine. Not only that, but the memory model isn’t formally defined, so you
can’t write multithreaded applications without compiler-specific extensions to the
1998 C++ Standard.
Of course, compiler vendors are free to add extensions to the language, and the
prevalence of C APIs for multithreading—such as those in the POSIX C standard and
the Microsoft Windows API—has led many C++ compiler vendors to support multi-
threading with various platform-specific extensions. This compiler support is gener-
ally limited to allowing the use of the corresponding C API for the platform and
ensuring that the C++ Runtime Library (such as the code for the exception-handling
mechanism) works in the presence of multiple threads. Although very few compiler
vendors have provided a formal multithreading-aware memory model, the actual
behavior of the compilers and processors has been sufficiently good that a large num-
ber of multithreaded C++ programs have been written.
Not content with using the platform-specific C APIs for handling multithread-
ing, C++ programmers have looked to their class libraries to provide object-oriented
multithreading facilities. Application frameworks such as MFC and general-purpose
C++ libraries such as Boost and ACE have accumulated sets of C++ classes that
wrap the underlying platform-specific APIs and provide higher-level facilities for
multithreading that simplify tasks. Although the precise details of the class librar-
ies have varied considerably, particularly in the area of launching new threads, the
overall shape of the classes has had a lot in common. One particularly important
design that’s common to many C++ class libraries, and that provides considerable
benefit to the programmer, has been the use of the Resource Acquisition Is Initializa-
tion (RAII) idiom with locks to ensure that mutexes are unlocked when the relevant
scope is exited.
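The C++11 library captures the same idiom in std::lock_guard (covered in chapter 3); a minimal sketch of the pattern:
#include <mutex>
#include <thread>

std::mutex m;
int shared_value = 0;

void update()
{
    std::lock_guard<std::mutex> guard(m);   // mutex locked when guard is constructed
    ++shared_value;
}                                           // mutex unlocked when guard goes out of
                                            // scope, even if an exception is thrown

int main()
{
    std::thread a(update), b(update);
    a.join();
    b.join();
}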
For many cases, the multithreading support of existing C++ compilers combined
with the availability of platform-specific APIs and platform-independent class libraries
such as Boost and ACE provide a solid foundation on which to write multithreaded
C++ code, and as a result there are probably millions of lines of C++ code written as
part of multithreaded applications. But the lack of standard support means that there
are occasions where the lack of a thread-aware memory model causes problems, par-
ticularly for those who try to gain higher performance by using knowledge of the pro-
cessor hardware or for those writing cross-platform code where the actual behavior of
the compilers varies between platforms.
1.3.2 Concurrency support in the new standard
All this changes with the release of the new C++11 Standard. Not only is there a brand-
new thread-aware memory model, but the C++ Standard Library has been extended to
include classes for managing threads (see chapter 2), protecting shared data (see
chapter 3), synchronizing operations between threads (see chapter 4), and low-level
atomic operations (see chapter 5).
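As a quick orientation (this summary is mine and isn't exhaustive), the new facilities live in a handful of standard headers:
#include <thread>               // std::thread and friends: managing threads (chapter 2)
#include <mutex>                // std::mutex and lock types: protecting shared data (chapter 3)
#include <condition_variable>   // waiting for conditions (chapter 4)
#include <future>               // std::future, std::promise, std::async (chapter 4)
#include <atomic>               // atomic types and memory ordering (chapter 5)

int main() {}                   // nothing to run; this just shows which headers to include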
The new C++ Thread Library is heavily based on the prior experience accumu-
lated through the use of the C++ class libraries mentioned previously. In particular,
the Boost Thread Library has been used as the primary model on which the new
library is based, with many of the classes sharing their names and structure with the
corresponding ones from Boost. As the new standard has evolved, this has been a
two-way flow, and the Boost Thread Library has itself changed to match the C++
Standard in many respects, so users transitioning from Boost should find themselves
very much at home.
Concurrency support is just one of the changes with the new C++ Standard—as
mentioned at the beginning of this chapter, there are many enhancements to the lan-
guage itself to make programmers’ lives easier. Although these are generally outside
the scope of this book, some of those changes have had a direct impact on the Thread
Library itself and the ways in which it can be used. Appendix A provides a brief intro-
duction to these language features.
The support for atomic operations directly in C++ enables programmers to write
efficient code with defined semantics without the need for platform-specific assembly
language. This is a real boon for those trying to write efficient, portable code; not only
does the compiler take care of the platform specifics, but the optimizer can be written
to take into account the semantics of the operations, thus enabling better optimiza-
tion of the program as a whole.
1.3.3 Efficiency in the C++ Thread Library
One of the concerns that developers involved in high-performance computing often
raise regarding C++ in general, and C++ classes that wrap low-level facilities in
particular (such as those in the new Standard C++ Thread Library), is that of efficiency. If
you’re after the utmost in performance, then it’s important to understand the imple-
mentation costs associated with using any high-level facilities, compared to using the
underlying low-level facilities directly. This cost is the abstraction penalty.
The C++ Standards Committee has been very aware of this when designing the C++
Standard Library in general and the Standard C++ Thread Library in particular; one
of the design goals has been that there should be little or no benefit to be gained from
using the lower-level APIs directly, where the same facility is to be provided. The
library has therefore been designed to allow for efficient implementation (with a very
low abstraction penalty) on most major platforms.
Another goal of the C++ Standards Committee has been to ensure that C++ pro-
vides sufficient low-level facilities for those wishing to work close to the metal for the
ultimate performance. To this end, along with the new memory model comes a com-
prehensive atomic operations library for direct control over individual bits and bytes
and the inter-thread synchronization and visibility of any changes. These atomic types
and the corresponding operations can now be used in many places where developers
would previously have chosen to drop down to platform-specific assembly language.
Code using the new standard types and operations is thus more portable and easier
to maintain.
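For example, a portable atomic counter needs nothing beyond the standard library; in this small sketch (the memory ordering used here is explained in chapter 5) the final value is always 200000:
#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> counter(0);     // no platform-specific assembly required

void bump()
{
    for(int i = 0; i < 100000; ++i)
        counter.fetch_add(1, std::memory_order_relaxed);
}

int main()
{
    std::thread a(bump), b(bump);
    a.join();
    b.join();
    std::cout << counter.load() << "\n";   // the increments never lose updates
}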
The C++ Standard Library also provides higher-level abstractions and facilities that
make writing multithreaded code easier and less error prone. Sometimes the use of
these facilities does come with a performance cost because of the additional code that
must be executed. But this performance cost doesn’t necessarily imply a higher
abstraction penalty; in general the cost is no higher than would be incurred by writing
equivalent functionality by hand, and the compiler may well inline much of the addi-
tional code anyway.
In some cases, the high-level facilities provide additional functionality beyond what
may be required for a specific use. Most of the time this is not an issue: you don’t pay
for what you don’t use. On rare occasions, this unused functionality will impact the
performance of other code. If you’re aiming for performance and the cost is too high,
you may be better off handcrafting the desired functionality from lower-level facilities.
In the vast majority of cases, the additional complexity and chance of errors far out-
weigh the potential benefits from a small performance gain. Even if profiling does
demonstrate that the bottleneck is in the C++ Standard Library facilities, it may be due
to poor application design rather than a poor library implementation. For example, if
too many threads are competing for a mutex, it will impact the performance signifi-
cantly. Rather than trying to shave a small fraction of time off the mutex operations, it
would probably be more beneficial to restructure the application so that there’s less
contention on the mutex. Designing applications to reduce contention is covered
in chapter 8.
In those very rare cases where the C++ Standard Library does not provide the perfor-
mance or behavior required, it might be necessary to use platform-specific facilities.
1.3.4 Platform-specific facilities
Although the C++ Thread Library provides reasonably comprehensive facilities for
multithreading and concurrency, on any given platform there will be platform-specific
facilities that go beyond what’s offered. In order to gain easy access to those facilities
without giving up the benefits of using the Standard C++ Thread Library, the types in
the C++ Thread Library may offer a native_handle() member function that allows
the underlying implementation to be directly manipulated using a platform-specific
API. By its very nature, any operations performed using the native_handle() are
entirely platform dependent and out of the scope of this book (and the Standard C++
Library itself).
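For example, on an implementation where native_handle() returns a pthread_t (as is common on POSIX platforms), you might use it to set the thread's name via the native API. This sketch assumes Linux's pthread_setname_np and is deliberately nonportable:

#include <thread>
#include <pthread.h>                         // platform-specific header; POSIX assumed

void background_work() {}                    // stand-in for real work

int main()
{
    std::thread t(background_work);
    // Assumes native_handle() yields a pthread_t and that pthread_setname_np
    // is available (glibc/Linux); purely illustrative.
    pthread_setname_np(t.native_handle(),"worker");
    t.join();
}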
Of course, before even considering using platform-specific facilities, it’s important to
understand what the Standard Library provides, so let’s get started with an example.
1.4 Getting started
OK, so you have a nice, shiny C++11-compatible compiler. What next? What does a
multithreaded C++ program look like? It looks pretty much like any other C++ pro-
gram, with the usual mix of variables, classes, and functions. The only real distinction
is that some functions might be running concurrently, so you need to ensure that
shared data is safe for concurrent access, as described in chapter 3. Of course, in
order to run functions concurrently, specific functions and objects must be used to
manage the different threads.
1.4.1 Hello, Concurrent World
Let’s start with a classic example: a program to print “Hello World.” A really simple
Hello, World program that runs in a single thread is shown here, to serve as a baseline
when we move to multiple threads:
#include <iostream>
int main()
{
std::cout<<"Hello Worldn";
}
All this program does is write “Hello World” to the standard output stream. Let’s com-
pare it to the simple Hello, Concurrent World program shown in the following listing,
which starts a separate thread to display the message.
Listing 1.1 A simple Hello, Concurrent World program

#include <iostream>
#include <thread>
void hello()
{
std::cout<<"Hello Concurrent World\n";
}
int main()
{
std::thread t(hello);
t.join();
}
The first difference is the extra #include <thread>. The declarations for the multi-
threading support in the Standard C++ Library are in new headers: the functions and
classes for managing threads are declared in <thread>, whereas those for protecting
shared data are declared in other headers.
Second, the code for writing the message has been moved to a separate function.
This is because every thread has to have an initial function, which is where the new
thread of execution begins. For the initial thread in an application, this is main(), but
for every other thread it's specified in the constructor of a std::thread object—in
this case, the std::thread object named t has the new function hello() as its initial function.
This is the next difference: rather than just writing directly to standard output or
calling hello() from main(), this program launches a whole new thread to do it,
bringing the thread count to two—the initial thread that starts at main() and the new
thread that starts at hello().
After the new thread has been launched, the initial thread continues execution.
If it didn't wait for the new thread to finish, it would merrily continue to the end of
main() and thus end the program—possibly before the new thread had had a chance
to run. This is why the call to join() is there—as described in chapter 2, this causes
the calling thread (in main()) to wait for the thread associated with the std::thread
object, in this case, t.
If this seems like a lot of work to go to just to write a message to standard output, it
is—as described previously in section 1.2.3, it’s generally not worth the effort to use
multiple threads for such a simple task, especially if the initial thread has nothing to
do in the meantime. Later in the book, we’ll work through examples that show scenar-
ios where there’s a clear gain to using multiple threads.
1.5 Summary
In this chapter, I covered what is meant by concurrency and multithreading and why
you’d choose to use it (or not) in your applications. I also covered the history of multi-
threading in C++ from the complete lack of support in the 1998 standard, through
various platform-specific extensions, to proper multithreading support in the new C++
Standard, C++11. This support is coming just in time to allow programmers to take
advantage of the greater hardware concurrency becoming available with newer CPUs,
as chip manufacturers choose to add more processing power in the form of multiple
cores that allow more tasks to be executed concurrently, rather than increasing the
execution speed of a single core.
I also showed how simple using the classes and functions from the C++ Standard
Library can be, in the examples in section 1.4. In C++, using multiple threads isn’t
complicated in and of itself; the complexity lies in designing the code so that it
behaves as intended.
After the taster examples of section 1.4, it’s time for something with a bit more
substance. In chapter 2 we’ll look at the classes and functions available for manag-
ing threads.
2 Managing threads
OK, so you’ve decided to use concurrency for your application. In particular, you’ve
decided to use multiple threads. What now? How do you launch these threads, how
do you check that they’ve finished, and how do you keep tabs on them? The C++
Standard Library makes most thread-management tasks relatively easy, with just
about everything managed through the std::thread object associated with a given
thread, as you’ll see. For those tasks that aren’t so straightforward, the library pro-
vides the flexibility to build what you need from the basic building blocks.
In this chapter, I’ll start by covering the basics: launching a thread, waiting for it
to finish, or running it in the background. We’ll then proceed to look at passing
additional parameters to the thread function when it’s launched and how to trans-
fer ownership of a thread from one std::thread object to another. Finally, we’ll
look at choosing the number of threads to use and identifying particular threads.
This chapter covers
■ Starting threads, and various ways of specifying
code to run on a new thread
■ Waiting for a thread to finish versus leaving it
to run
■ Uniquely identifying threads
2.1 Basic thread management
Every C++ program has at least one thread, which is started by the C++ runtime: the
thread running main(). Your program can then launch additional threads that have
another function as the entry point. These threads then run concurrently with each
other and with the initial thread. Just as the program exits when it returns from
main(), a thread exits when its specified entry-point function returns. As
you’ll see, if you have a std::thread object for a thread, you can wait for it to finish;
but first you have to start it, so let’s look at launching threads.
2.1.1 Launching a thread
As you saw in chapter 1, threads are started by constructing a std::thread object that
specifies the task to run on that thread. In the simplest case, that task is just a plain,
ordinary void-returning function that takes no parameters. This function runs on its
own thread until it returns, and then the thread stops. At the other extreme, the task
could be a function object that takes additional parameters and performs a series of
independent operations that are specified through some kind of messaging system
while it’s running, and the thread stops only when it’s signaled to do so, again via
some kind of messaging system. It doesn’t matter what the thread is going to do or
where it’s launched from, but starting a thread using the C++ Thread Library always
boils down to constructing a std::thread object:
void do_some_work();
std::thread my_thread(do_some_work);
This is just about as simple as it gets. Of course, you have to make sure that the
<thread> header is included so the compiler can see the definition of the std::
thread class. As with much of the C++ Standard Library, std::thread works with any
callable type, so you can pass an instance of a class with a function call operator to the
std::thread constructor instead:
class background_task
{
public:
void operator()() const
{
do_something();
do_something_else();
}
};
background_task f;
std::thread my_thread(f);
In this case, the supplied function object is copied into the storage belonging to the
newly created thread of execution and invoked from there. It’s therefore essential that
the copy behave equivalently to the original, or the result may not be what’s expected.
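To see why the copy matters, consider a callable that holds its state by value: the new thread works on its own copy, so changes it makes aren't visible through the original object. A minimal sketch with illustrative names:

#include <thread>
#include <iostream>

class counting_task
{
public:
    int count=0;
    void operator()()
    {
        for(int i=0;i<10;++i)
            ++count;               // increments the thread's own copy
    }
};

int main()
{
    counting_task task;
    std::thread t(task);           // task is copied into the new thread's storage
    t.join();
    std::cout<<task.count<<std::endl;   // prints 0: the original object is unchanged
}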
One thing to consider when passing a function object to the thread constructor is
to avoid what is dubbed “C++’s most vexing parse.” If you pass a temporary rather
than a named variable, then the syntax can be the same as that of a function declara-
tion, in which case the compiler interprets it as such, rather than an object definition.
For example,
std::thread my_thread(background_task());
declares a function my_thread that takes a single parameter (of type pointer to a func-
tion taking no parameters and returning a background_task object) and returns a
std::thread object, rather than launching a new thread. You can avoid this by nam-
ing your function object as shown previously, by using an extra set of parentheses, or
by using the new uniform initialization syntax, for example:
std::thread my_thread((background_task()));
std::thread my_thread{background_task()};
In the first example, the extra parentheses prevent interpretation as a function
declaration, thus allowing my_thread to be declared as a variable of type std::thread.
The second example uses the new uniform initialization syntax with braces rather
than parentheses, and thus would also declare a variable.
One type of callable object that avoids this problem is a lambda expression. This is a
new feature from C++11 that essentially allows you to write a local function, possibly
capturing some local variables and avoiding the need to pass additional arguments
(see section 2.2). For full details on lambda expressions, see appendix A, section A.5.
The previous example can be written using a lambda expression as follows:
std::thread my_thread([]{
do_something();
do_something_else();
});
Once you’ve started your thread, you need to explicitly decide whether to wait for it to
finish (by joining with it—see section 2.1.2) or leave it to run on its own (by detaching
it—see section 2.1.3). If you don’t decide before the std::thread object is destroyed,
then your program is terminated (the std::thread destructor calls std::terminate()).
It’s therefore imperative that you ensure that the thread is correctly joined or
detached, even in the presence of exceptions. See section 2.1.3 for a technique to han-
dle this scenario. Note that you only have to make this decision before the std::thread
object is destroyed—the thread itself may well have finished long before you join with
it or detach it, and if you detach it, then the thread may continue running long after
the std::thread object is destroyed.
If you don’t wait for your thread to finish, then you need to ensure that the data
accessed by the thread is valid until the thread has finished with it. This isn’t a new
problem—even in single-threaded code it is undefined behavior to access an object
after it’s been destroyed—but the use of threads provides an additional opportunity to
encounter such lifetime issues.
One situation in which you can encounter such problems is when the thread
function holds pointers or references to local variables and the thread hasn’t
finished when the function exits. The following listing shows an example of just
such a scenario.
Listing 2.1 A function that returns while a thread still has access to local variables

struct func
{
int& i;
func(int& i_):i(i_){}
void operator()()
{
for(unsigned j=0;j<1000000;++j)
{
do_something(i);                // potential access to dangling reference
}
}
};
void oops()
{
int some_local_state=0;
func my_func(some_local_state);
std::thread my_thread(my_func);
my_thread.detach();                 // don't wait for the thread to finish
}                                       // the new thread might still be running here
In this case, the new thread associated with my_thread will probably still be running
when oops exits, because you've explicitly decided not to wait for it by calling
detach(). If the thread is still running, then the next call to do_something(i)
will access an already destroyed variable. This is just like normal single-threaded
code—allowing a pointer or reference to a local variable to persist beyond the func-
tion exit is never a good idea—but it's easier to make the mistake with multithreaded
code, because it isn't necessarily immediately apparent that this has happened.
One common way to handle this scenario is to make the thread function self-
contained and copy the data into the thread rather than sharing the data. If you use a
callable object for your thread function, that object is itself copied into the thread, so
the original object can be destroyed immediately. But you still need to be wary of
objects containing pointers or references, such as that from listing 2.1. In particular,
it’s a bad idea to create a thread within a function that has access to the local variables
in that function, unless the thread is guaranteed to finish before the function exits.
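For example, a self-contained variant of the functor from listing 2.1 could hold a copy of the value instead of a reference, making the detached thread independent of the calling function's locals. This is a sketch of the idea, not the book's own listing:

#include <thread>

void do_something(int value) {}        // stand-in for the work done in listing 2.1

struct func_by_value
{
    int i;                             // a copy of the caller's value, not a reference
    explicit func_by_value(int i_):i(i_){}
    void operator()()
    {
        for(unsigned j=0;j<1000000;++j)
            do_something(i);           // uses the thread's own copy
    }
};

void no_oops()
{
    int some_local_state=0;
    std::thread my_thread{func_by_value(some_local_state)};
    my_thread.detach();                // safe: the thread holds no references to locals
}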
Alternatively, you can ensure that the thread has completed execution before the
function exits by joining with the thread.
2.1.2 Waiting for a thread to complete
If you need to wait for a thread to complete, you can do this by calling join() on the asso-
ciated std::thread instance. In the case of listing 2.1, replacing the call to my_thread
.detach() before the closing brace of the function body with a call to my_thread.join()
would therefore be sufficient to ensure that the thread was finished before the func-
tion was exited and thus before the local variables were destroyed. In this case, it
would mean there was little point running the function on a separate thread, because
the first thread wouldn’t be doing anything useful in the meantime, but in real code
the original thread would either have work to do itself or it would have launched sev-
eral threads to do useful work before waiting for all of them to complete.
join() is simple and brute force—either you wait for a thread to finish or you
don’t. If you need more fine-grained control over waiting for a thread, such as to
check whether a thread is finished, or to wait only a certain period of time, then you
have to use alternative mechanisms such as condition variables and futures, which
we’ll look at in chapter 4. The act of calling join() also cleans up any storage associ-
ated with the thread, so the std::thread object is no longer associated with the now-
finished thread; it isn’t associated with any thread. This means that you can call
join() only once for a given thread; once you’ve called join(), the std::thread
object is no longer joinable, and joinable() will return false.
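A minimal sketch of that behavior:

#include <thread>
#include <cassert>

void work() {}                // stand-in for real work

int main()
{
    std::thread t(work);
    assert(t.joinable());     // associated with a thread of execution
    t.join();                 // waits for it and releases the associated storage
    assert(!t.joinable());    // no longer associated with any thread
}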
2.1.3 Waiting in exceptional circumstances
As mentioned earlier, you need to ensure that you’ve called either join() or
detach() before a std::thread object is destroyed. If you’re detaching a thread, you
can usually call detach() immediately after the thread has been started, so this isn’t a
problem. But if you’re intending to wait for the thread, you need to pick carefully the
place in the code where you call join(). This means that the call to join() is liable to
be skipped if an exception is thrown after the thread has been started but before the
call to join().
To avoid your application being terminated when an exception is thrown, you
therefore need to make a decision on what to do in this case. In general, if you were
intending to call join() in the non-exceptional case, you also need to call join() in
the presence of an exception to avoid accidental lifetime problems. The next listing
shows some simple code that does just that.
Listing 2.2 Waiting for a thread to finish

struct func;                        // see definition in listing 2.1
void f()
{
int some_local_state=0;
func my_func(some_local_state);
std::thread t(my_func);
try
{
do_something_in_current_thread();
}
catch(...)
{
t.join();
throw;
}
t.join();
}
The code in listing 2.2 uses a try/catch block to ensure that a thread with access to
local state is finished before the function exits, whether the function exits normally
or by an exception. The use of try/catch blocks is verbose, and it's easy to get the
scope slightly wrong, so this isn’t an ideal scenario. If it’s important to ensure that
the thread must complete before the function exits—whether because it has a refer-
ence to other local variables or for any other reason—then it’s important to ensure
this is the case for all possible exit paths, whether normal or exceptional, and it’s
desirable to provide a simple, concise mechanism for doing so.
One way of doing this is to use the standard Resource Acquisition Is Initialization
(RAII) idiom and provide a class that does the join() in its destructor, as in the follow-
ing listing. See how it simplifies the function f().
Listing 2.3 Using RAII to wait for a thread to complete

class thread_guard
{
std::thread& t;
public:
explicit thread_guard(std::thread& t_):
t(t_)
{}
~thread_guard()
{
if(t.joinable())
{
t.join();
}
}
thread_guard(thread_guard const&)=delete;
thread_guard& operator=(thread_guard const&)=delete;
};
struct func;                        // see definition in listing 2.1
void f()
{
int some_local_state=0;
func my_func(some_local_state);
std::thread t(my_func);
thread_guard g(t);
do_something_in_current_thread();
}
When the execution of the current thread reaches the end of f, the local objects
are destroyed in reverse order of construction. Consequently, the thread_guard
object g is destroyed first, and the thread is joined with in the destructor. This
even happens if the function exits because do_something_in_current_thread throws
an exception.
The destructor of thread_guard in listing 2.3 first tests to see if the std::thread
object is joinable() before calling join(). This is important, because join()
can be called only once for a given thread of execution, so it would therefore be a
mistake to do so if the thread had already been joined.
The copy constructor and copy-assignment operator are marked =delete to
ensure that they’re not automatically provided by the compiler. Copying or assigning
such an object would be dangerous, because it might then outlive the scope of the
thread it was joining. By declaring them as deleted, any attempt to copy a thread_
guard object will generate a compilation error. See appendix A, section A.2, for more
about deleted functions.
If you don’t need to wait for a thread to finish, you can avoid this exception-safety
issue by detaching it. This breaks the association of the thread with the std::thread object
and ensures that std::terminate() won’t be called when the std::thread object is
destroyed, even though the thread is still running in the background.
2.1.4 Running threads in the background
Calling detach() on a std::thread object leaves the thread to run in the back-
ground, with no direct means of communicating with it. It’s no longer possible to wait
for that thread to complete; if a thread becomes detached, it isn’t possible to obtain a
std::thread object that references it, so it can no longer be joined. Detached threads
truly run in the background; ownership and control are passed over to the C++ Run-
time Library, which ensures that the resources associated with the thread are correctly
reclaimed when the thread exits.
Detached threads are often called daemon threads after the UNIX concept of a
daemon process that runs in the background without any explicit user interface. Such
threads are typically long-running; they may well run for almost the entire lifetime of
the application, performing a background task such as monitoring the filesystem,
clearing unused entries out of object caches, or optimizing data structures. At the
other extreme, it may make sense to use a detached thread where there’s another
mechanism for identifying when the thread has completed or where the thread is
used for a “fire and forget” task.
As you’ve already seen in section 2.1.2, you detach a thread by calling the detach()
member function of the std::thread object. After the call completes, the std::thread
object is no longer associated with the actual thread of execution and is therefore no
longer joinable:
std::thread t(do_background_work);
t.detach();
assert(!t.joinable());
In order to detach the thread from a std::thread object, there must be a thread to
detach: you can’t call detach() on a std::thread object with no associated thread of
execution. This is exactly the same requirement as for join(), and you can check it in
exactly the same way—you can only call t.detach() for a std::thread object t when
t.joinable() returns true.
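In other words, when a std::thread might already be empty, guard the call; this helper is just a sketch, not a standard facility:

#include <thread>

void detach_if_joinable(std::thread& t)
{
    if(t.joinable())    // detach() on a thread with no associated execution throws std::system_error
        t.detach();
}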
Consider an application such as a word processor that can edit multiple docu-
ments at once. There are many ways to handle this, both at the UI level and internally.
One way that seems to be increasingly common at the moment is to have multiple
independent top-level windows, one for each document being edited. Although these
windows appear to be completely independent, each with its own menus and so forth,
they’re running within the same instance of the application. One way to handle this
internally is to run each document-editing window in its own thread; each thread runs
the same code but with different data relating to the document being edited and the
corresponding window properties. Opening a new document therefore requires start-
ing a new thread. The thread handling the request isn’t going to care about waiting
for that other thread to finish, because it’s working on an unrelated document, so this
makes it a prime candidate for running a detached thread.
The following listing shows a simple code outline for this approach.
Listing 2.4 Detaching a thread to handle other documents

void edit_document(std::string const& filename)
{
open_document_and_display_gui(filename);
while(!done_editing())
{
user_command cmd=get_user_input();
if(cmd.type==open_new_document)
{
std::string const new_name=get_filename_from_user();
std::thread t(edit_document,new_name);
t.detach();
}
else
{
process_user_input(cmd);
}
}
}
If the user chooses to open a new document, you prompt them for the document to
open, start a new thread to open that document, and then detach it. Because
the new thread is doing the same operation as the current thread but on a different
file, you can reuse the same function (edit_document) with the newly chosen
filename as the supplied argument.
This example also shows a case where it's helpful to pass arguments to the function
used to start a thread: rather than just passing the name of the function to the
std::thread constructor, you also pass in the filename parameter. Although other
mechanisms could be used to do this, such as using a function object with member
data instead of an ordinary function with parameters, the Thread Library provides
you with an easy way of doing it.
2.2 Passing arguments to a thread function
As shown in listing 2.4, passing arguments to the callable object or function is funda-
mentally as simple as passing additional arguments to the std::thread constructor.
But it’s important to bear in mind that by default the arguments are copied into inter-
nal storage, where they can be accessed by the newly created thread of execution,
even if the corresponding parameter in the function is expecting a reference. Here’s a
simple example:
void f(int i,std::string const& s);
std::thread t(f,3,"hello");
This creates a new thread of execution associated with t, which calls f(3,"hello").
Note that even though f takes a std::string as the second parameter, the string lit-
eral is passed as a char const* and converted to a std::string only in the context of
the new thread. This is particularly important when the argument supplied is a
pointer to an automatic variable, as follows:
void f(int i,std::string const& s);
void oops(int some_param)
{
char buffer[1024];
sprintf(buffer, "%i",some_param);
std::thread t(f,3,buffer);
t.detach();
}
In this case, it's the pointer to the local variable buffer that's passed through to the
new thread, and there's a significant chance that the function oops will exit before
the buffer has been converted to a std::string on the new thread, thus leading to
undefined behavior. The solution is to cast to std::string before passing the buffer
to the std::thread constructor:
void f(int i,std::string const& s);
void not_oops(int some_param)
{
char buffer[1024];
sprintf(buffer,"%i",some_param);
std::thread t(f,3,std::string(buffer));    // using std::string avoids the dangling pointer
t.detach();
}
In this case, the problem is that you were relying on the implicit conversion of the
pointer to the buffer into the std::string object expected as a function parameter,
because the std::thread constructor copies the supplied values as is, without convert-
ing to the expected argument type.
It’s also possible to get the reverse scenario: the object is copied, and what you
wanted was a reference. This might happen if the thread is updating a data structure
that’s passed in by reference, for example:
void update_data_for_widget(widget_id w,widget_data& data);
void oops_again(widget_id w)
{
widget_data data;
std::thread t(update_data_for_widget,w,data);
display_status();
t.join();
process_widget_data(data);
}
Although update_data_for_widget expects the second parameter to be passed by
reference, the std::thread constructor doesn't know that; it's oblivious to the
types of the arguments expected by the function and blindly copies the supplied val-
ues. When it calls update_data_for_widget, it will end up passing a reference to
the internal copy of data and not a reference to data itself. Consequently, when the
thread finishes, these updates will be discarded as the internal copies of the supplied
arguments are destroyed, and process_widget_data will be passed an unchanged
data rather than a correctly updated version. For those of you familiar with
std::bind, the solution will be readily apparent: you need to wrap the arguments that
really need to be references in std::ref. In this case, if you change the thread invoca-
tion to
std::thread t(update_data_for_widget,w,std::ref(data));
then update_data_for_widget will be correctly passed a reference to data rather
than a reference to a copy of data.
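Putting the fix back into the earlier example (reusing the declarations from that snippet), the corrected function looks like this sketch:

#include <thread>
#include <functional>        // for std::ref

void update_data_for_widget(widget_id w,widget_data& data);

void no_longer_oops(widget_id w)
{
    widget_data data;
    std::thread t(update_data_for_widget,w,std::ref(data));  // pass a genuine reference
    display_status();
    t.join();
    process_widget_data(data);      // now sees the updates made on the other thread
}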
If you’re familiar with std::bind, the parameter-passing semantics will be unsur-
prising, because both the operation of the std::thread constructor and the opera-
tion of std::bind are defined in terms of the same mechanism. This means that, for
example, you can pass a member function pointer as the function, provided you sup-
ply a suitable object pointer as the first argument:
class X
{
public:
void do_lengthy_work();
};
X my_x;
std::thread t(&X::do_lengthy_work,&my_x);
This code will invoke my_x.do_lengthy_work() on the new thread, because the
address of my_x is supplied as the object pointer. You can also supply arguments to
such a member function call: the third argument to the std::thread constructor will
be the first argument to the member function and so forth.
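For example, with a hypothetical member function that takes an argument (not part of the book's class X), the extra argument is simply appended:

#include <thread>

class X
{
public:
    void do_lengthy_work(int priority) {}    // hypothetical overload taking an argument
};

int main()
{
    X my_x;
    std::thread t(&X::do_lengthy_work,&my_x,42);   // invokes my_x.do_lengthy_work(42)
    t.join();
}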
Another interesting scenario for supplying arguments is where the arguments
can’t be copied but can only be moved: the data held within one object is transferred
over to another, leaving the original object “empty.” An example of such a type is
std::unique_ptr, which provides automatic memory management for dynamically
allocated objects. Only one std::unique_ptr instance can point to a given object at a
time, and when that instance is destroyed, the pointed-to object is deleted. The move
constructor and move assignment operator allow the ownership of an object to be trans-
ferred around between std::unique_ptr instances (see appendix A, section A.1.1, for
more on move semantics). Such a transfer leaves the source object with a NULL
pointer. This moving of values allows objects of this type to be accepted as function
parameters or returned from functions. Where the source object is a temporary, the
move is automatic, but where the source is a named value, the transfer must be
requested directly by invoking std::move(). The following example shows the use of
std::move to transfer ownership of a dynamic object into a thread:
void process_big_object(std::unique_ptr<big_object>);
std::unique_ptr<big_object> p(new big_object);
p->prepare_data(42);
std::thread t(process_big_object,std::move(p));
By specifying std::move(p) in the std::thread constructor, the ownership of the
big_object is transferred first into internal storage for the newly created thread and
then into process_big_object.
Several of the classes in the Standard Thread Library exhibit the same ownership
semantics as std::unique_ptr, and std::thread is one of them. Though std::thread
instances don’t own a dynamic object in the same way as std::unique_ptr does, they
do own a resource: each instance is responsible for managing a thread of execution.
This ownership can be transferred between instances, because instances of std::thread
are movable, even though they aren’t copyable. This ensures that only one object is asso-
ciated with a particular thread of execution at any one time while allowing program-
mers the option of transferring that ownership between objects.
2.3 Transferring ownership of a thread
Suppose you want to write a function that creates a thread to run in the background
but passes back ownership of the new thread to the calling function rather than wait-
ing for it to complete, or maybe you want to do the reverse: create a thread and pass
ownership in to some function that should wait for it to complete. In either case, you
need to transfer ownership from one place to another.
This is where the move support of std::thread comes in. As described in the pre-
vious section, many resource-owning types in the C++ Standard Library such as
std::ifstream and std::unique_ptr are movable but not copyable, and std::thread is
one of them. This means that the ownership of a particular thread of execution can
be moved between std::thread instances, as in the following example. The example
shows the creation of two threads of execution and the transfer of ownership of those
threads among three std::thread instances, t1, t2, and t3:
void some_function();
void some_other_function();
std::thread t1(some_function);
std::thread t2=std::move(t1);
t1=std::thread(some_other_function);
std::thread t3;
t3=std::move(t2);
t1=std::move(t3);                   // this assignment will terminate the program!
First, a new thread is started and associated with t1. Ownership is then transferred
over to t2 when t2 is constructed, by invoking std::move() to explicitly move ownership.
At this point, t1 no longer has an associated thread of execution; the thread
running some_function is now associated with t2.
Then, a new thread is started and associated with a temporary std::thread
object. The subsequent transfer of ownership into t1 doesn't require a call to
std::move() to explicitly move ownership, because the owner is a temporary object—moving
from temporaries is automatic and implicit.
t3 is default constructed, which means that it's created without any associated
thread of execution. Ownership of the thread currently associated with t2 is transferred
into t3, again with an explicit call to std::move(), because t2 is a named object. After
all these moves, t1 is associated with the thread running some_other_function, t2 has no
associated thread, and t3 is associated with the thread running some_function.
The final move transfers ownership of the thread running some_function back
to t1 where it started. But in this case t1 already had an associated thread (which was
running some_other_function), so std::terminate() is called to terminate the
program. This is done for consistency with the std::thread destructor. You saw in
section 2.1.1 that you must explicitly wait for a thread to complete or detach it before
destruction, and the same applies to assignment: you can’t just “drop” a thread by
assigning a new value to the std::thread object that manages it.
The move support in std::thread means that ownership can readily be trans-
ferred out of a function, as shown in the following listing.
Listing 2.5 Returning a std::thread from a function

std::thread f()
{
void some_function();
return std::thread(some_function);
}
std::thread g()
{
void some_other_function(int);
std::thread t(some_other_function,42);
return t;
}
Likewise, if ownership should be transferred into a function, it can just accept an
instance of std::thread by value as one of the parameters, as shown here:
void f(std::thread t);
void g()
{
void some_function();
f(std::thread(some_function));
std::thread t(some_function);
f(std::move(t));
}
One benefit of the move support of std::thread is that you can build on the
thread_guard class from listing 2.3 and have it actually take ownership of the thread.
This avoids any unpleasant consequences should the thread_guard object outlive the
thread it was referencing, and it also means that no one else can join or detach
the thread once ownership has been transferred into the object. Because this would
primarily be aimed at ensuring threads are completed before a scope is exited, I named
this class scoped_thread. The implementation is shown in the following listing, along
with a simple example.
Listing 2.6 scoped_thread and example usage

class scoped_thread
{
std::thread t;
public:
explicit scoped_thread(std::thread t_):
t(std::move(t_))
{
if(!t.joinable())
throw std::logic_error("No thread");
}
~scoped_thread()
{
t.join();
}
scoped_thread(scoped_thread const&)=delete;
scoped_thread& operator=(scoped_thread const&)=delete;
};
struct func;                        // see definition in listing 2.1
void f()
{
int some_local_state;
scoped_thread t(std::thread(func(some_local_state)));
do_something_in_current_thread();
}
The example is similar to that from listing 2.3, but the new thread is passed in directly
to the scoped_thread rather than having to create a separate named variable for it.
and instruction association “Das Ahnenerbe”. Sievers had actual
knowledge of the criminal aspects of the Rascher experiments. He
was notified that Dachau inmates were to be used. He himself
inspected the experiments. Sievers admitted that Rascher told him
that several died as a result of the high-altitude experiments.
Under these facts Sievers is specially chargeable with the criminal
aspects of these experiments.
FREEZING EXPERIMENTS
Before the high-altitude experiments had actually been
completed, freezing experiments were ordered to be performed at
Dachau. They were conducted from August 1942 to the early part of
1943 by Holzloehner, Finke and Rascher, all of whom were officers in
the Medical Services of the Luftwaffe. Details of the freezing
experiments have been given elsewhere in this judgment.
In May 1943 Rascher was transferred to the Waffen SS and then
proceeded alone to conduct freezing experiments in Dachau until
May 1945. Rascher advised the defendant Rudolf Brandt that Poles
and Russians had been used as subjects.
The witness Neff testified that the defendant Sievers visited the
experimental station quite frequently during the freezing
experiments. He testified further that in September 1942 he received
orders to take the hearts and lungs of 5 experimental subjects killed
in the experiments to Professor Hirt in Strasbourg for further
scientific study; that the travel warrant for the trip was made out by
Sievers; and that the Ahnenerbe Society paid the expenses for the
transfer of the bodies. One of the 5 experimental subjects killed was
a Dutch citizen.
Neff’s testimony is corroborated in large part by the affidavits of
the defendants Rudolf Brandt and Becker-Freyseng, by the testimony
of the witnesses Lutz, Michalowsky and Vieweg, and by the
documentary evidence in the record. In the Sievers’ diary, there are
numerous instances of Sievers’ activities in the aid of Rascher. On 1
February 1943 Sievers noted efforts in obtaining apparatus,
implements and chemicals for Rascher’s experiments. On 6 and 21
January 1944 Sievers noted the problem of location. Rascher
reported to Sievers periodically concerning the status and details of
the freezing experiments.
It is plain from the record that the relationship of Sievers and
Rascher in the performance of freezing experiments required Sievers
to make the preliminary arrangements for the performance of the
experiments to familiarize himself with the progress of the
experiments by personal inspection, to furnish necessary equipment
and material, including human beings used during the freezing
experiments, to receive and make progress reports concerning
Rascher, and to handle the matter of evaluation and publication of
such reports. Basically, such activities constituted a performance of
his duties as defined by Sievers in his letter of 28 January 1943 to
Rudolf Brandt, in which he stated that he smoothed the way for
research workers and saw to it that Himmler’s orders were carried
out.
Under these facts Sievers is chargeable with the criminal
activities in these experiments.
MALARIA EXPERIMENTS
Details of these experiments are given elsewhere in this
judgment. These experiments were performed at Dachau by
Schilling and Ploetner. The evidence shows that Sievers had
knowledge of the nature and purpose of these criminal enterprises
and supported them in his official position.
LOST GAS EXPERIMENTS
These experiments were conducted in the Natzweiler
concentration camp under the supervision of Professor Hirt of the
University of Strasbourg. The Ahnenerbe Society and the defendant
Sievers supported this research on behalf of the SS. The
arrangement for the payment of the research subsidies of the
Ahnenerbe was made by Sievers. The defendant Sievers participated
in these experiments by actively collaborating with the defendants
Karl Brandt and Rudolf Brandt and with Hirt and his principal
assistant, Dr. Wimmer. The record shows that Sievers was in
correspondence with Hirt at least as early as January 1942, and that
he established contact between Himmler and Hirt.
In a letter of 11 September 1942 to Gluecks, Sievers wrote that
the necessary conditions existed in Natzweiler “for carrying out our
military scientific research work”. He requested that Gluecks issue
the necessary authorization for Hirt, Wimmer, and Kieselbach to
enter Natzweiler, and that provision be made for their board and
accommodations. The letter also stated:
“The experiments which are to be performed on
prisoners are to be carried out in four rooms of an already
existing medical barrack. Only slight changes in the
construction of the building are required, in particular the
installation of the hood which can be produced with very
little material. In accordance with attached plan of the
construction management at Natzweiler, I request that
necessary orders be issued to same to carry out the
reconstruction. All the expenses arising out of our activity
at Natzweiler will be covered by this office.”
In a memorandum of 3 November 1942 to the defendant Rudolf
Brandt, Sievers complained about certain difficulties which had
arisen in Natzweiler because of the lack of cooperation from the
camp officials. He seemed particularly outraged by the fact that the
camp officials were asking that the experimental prisoners be paid
for. A portion of the memorandum follows:
“When I think of our military research work conducted
at the concentration camp Dachau, I must praise and call
special attention to the generous and understanding way in
which our work was furthered there and to the cooperation
we were given. Payment of prisoners was never discussed.
It seems as if at Natzweiler they are trying to make as
much money as possible out of this matter. We are not
conducting these experiments, as a matter of fact, for the
sake of some fixed scientific idea, but to be of practical help
to the armed forces and beyond that, to the German people
in a possible emergency.”
Brandt was requested to give his help in a comradely fashion in
setting up the necessary conditions at Natzweiler. The defendant
Rudolf Brandt replied to this memorandum on 3 December 1942 and
told Sievers that he had had occasion to speak to Pohl concerning
these difficulties, and that they would be remedied.
The testimony of the witness Holl was that approximately 220
inmates of Russian, Polish, Czech, and German nationality were
experimented upon by Hirt and his collaborators, and that
approximately 50 died. None of the experimental subjects
volunteered. During the entire period of these experiments, Hirt was
associated with the Ahnenerbe Society.
In early 1944 Hirt and Wimmer summarized their findings from
the Lost experiments in a report entitled “Proposed Treatment of
Poisoning Caused by Lost.” The report was described as from the
Institute for Military Scientific Research, Department H of the
Ahnenerbe, located at the Strasbourg Anatomical Institute. Light,
medium, and heavy injuries due to Lost gas are mentioned. Sievers
received several copies of this report. On 31 March 1944, after Karl
Brandt had received a Fuehrer Decree giving him broad powers in
the field of chemical-warfare, Sievers informed Brandt about Hirt’s
work and gave him a copy of the report. This is proved by Sievers’
letter to Rudolf Brandt on 11 April 1944. Karl Brandt admitted that
the wording of the report made it clear that experiments had been
conducted on human beings.
Sievers testified that on 25 January 1943, he went to Natzweiler
concentration camp and consulted with the camp authorities
concerning the arrangements to be made for Hirt’s Lost experiments.
These arrangements included the obtaining of laboratories and
experimental subjects. Sievers testified that the Lost experiments
were harmful. On the visit of 25 January 1943, Sievers saw ten
persons who had been subjected to Lost experiments and watched
Hirt change the bandages on one of the persons. Sievers testified
that in March 1943 he asked Hirt whether any of the experimental
subjects had suffered harm from the experiments and was told by
Hirt that two of the experimental subjects had died due to other
causes.
It is evident that Sievers was criminally connected with these
experiments.
SEA-WATER EXPERIMENTS
These experiments were conducted at Dachau from July through
September 1944. Details of these experiments are explained
elsewhere in the judgment.
The function of the Ahnenerbe in the performance of sea-water
experiments conducted at Dachau from July through September
1944 was chiefly in connection with the furnishing of space and
equipment for the experiments. Sievers made these necessary
arrangements on behalf of the Ahnenerbe. As a result of Schroeder’s
request to Himmler through Grawitz for permission to perform the
sea-water experiments on inmates in Dachau, Himmler directed on 8
July 1944 that the experiments be made on gypsies and three other
persons with other racial qualities as control subjects. Sievers was
advised by Himmler’s office of the above authorization for
experiments at the Rascher station at Dachau.
On 27 June 1944, Rascher was replaced by Ploetner as head of
the Ahnenerbe Institute for Military Scientific Research at Dachau.
Sievers, on 20 July, went to Dachau and conferred with Ploetner of
the Ahnenerbe Institute and the defendant Beiglboeck, who was to
perform the experiments, concerning the execution of the sea-water
experiments and the availability of working space for them. Sievers
agreed to supply working space in Ploetner’s department and at the
Ahnenerbe Entomological Institute.
On 26 July 1944, Sievers made a written report to Grawitz
concerning details of his conference at Dachau. Sievers wrote that
40 experimental persons could be accommodated at “our” research
station, that the Ahnenerbe would supply a laboratory, and that Dr.
Ploetner would give his assistance, help, and advice to the Luftwaffe
physicians performing the experiments. Sievers also stated the
number and assignment of the personnel to be employed, estimating
that the work would cover a period of three weeks and designated
23 July 1944 as the date of commencement, provided that
experimental persons were available and the camp commander had
received the necessary order from Himmler. In conclusion, Sievers
expressed his hope that the arrangements which he had made
would permit a successful conduct of the experiments and requested
that acknowledgment be made to Himmler as a participant in the
experiments.
In his testimony Sievers admitted that he had written the above
letter and had conferred with Beiglboeck at Dachau. As the letter
indicates, Sievers knew that concentration camp inmates were to be
used.
Sievers had knowledge of and criminally participated in sea-water
experiments.
TYPHUS EXPERIMENTS
Detailed description of these experiments is contained elsewhere
in this judgment. Sievers participated in the criminal typhus
experiments conducted by Haagen on concentration camp inmates
at Natzweiler by making the necessary arrangements in connection
with securing experimental subjects, handling administrative
problems incident to the experiments, and by furnishing the
Ahnenerbe station with its equipment in Natzweiler for their
performance.
On 16 August 1943, when Haagen was preparing to transfer his
typhus experiments from Schirmeck to Natzweiler, he requested
Sievers to make available a hundred concentration camp inmates for
his research. This is seen from a letter of 30 September 1943 from
Sievers to Haagen in which he states that he will be glad to assist,
and that he is accordingly contacting the proper source to have the
“desired personnel” placed at Haagen’s disposal. As a result of
Sievers’ efforts, a hundred inmates were shipped from Auschwitz to
Natzweiler for Haagen’s experiments. These were found to be unfit
for experimentation because of their pitiful physical condition. A
second group of one hundred was then made available. Some of
these were used by Haagen as experimental subjects.
That the experiments were carried out in the Ahnenerbe
experimental station in Natzweiler is proved by excerpts from
monthly reports of the camp doctor in Natzweiler. A number of
deaths occurred among non-German experimental subjects as a
direct result of the treatment to which they were subjected.
POLYGAL EXPERIMENTS
Evidence has been introduced during the course of the trial to
show that experiments to test the efficacy of a blood coagulant
“polygal” were conducted on Dachau inmates by Rascher. The
Sievers’ diary shows that the defendant had knowledge of activities
concerning the production of polygal, and that he lent his support to
the conduct of the experiments.
JEWISH SKELETON COLLECTION
Sievers is charged under the indictment with participation in the
killing of 112 Jews who were selected to complete a skeleton
collection for the Reich University of Strasbourg.
Responding to a request by the defendant Rudolf Brandt, Sievers
submitted to him on 9 February 1942 a report by Dr. Hirt of the
University of Strasbourg on the desirability of securing a Jewish
skeleton collection. In this report, Hirt advocated outright murder of
“Jewish Bolshevik Commissars” for the procurement of such a
collection. On 27 February 1942, Rudolf Brandt informed Sievers that
Himmler would support Hirt’s work and would place everything
necessary at his disposal. Brandt asked Sievers to inform Hirt
accordingly and to report again on the subject. On 2 November 1942
Sievers requested Brandt to make the necessary arrangements with
the Reich Main Security Office for providing 150 Jewish inmates from
Auschwitz to carry out this plan. On 6 November, Brandt informed
Adolf Eichmann, the Chief of Office IV B/4 (Jewish Affairs) of the
Reich Main Security Office to put everything at Hirt’s disposal which
was necessary for the completion of the skeleton collection.
From Sievers’ letter to Eichmann of 21 June 1943, it is apparent
that SS Hauptsturmfuehrer Beger, a collaborator of the Ahnenerbe
Society, carried out the preliminary work for the assembling of the
skeleton collection in the Auschwitz concentration camp on 79 Jews,
30 Jewesses, 2 Poles, and 4 Asiatics. The corpses of the victims
were sent in three shipments to the Anatomical Institute of Hirt in
the Strasbourg University.
When the Allied Armies were threatening to overrun Strasbourg
early in September 1944, Sievers dispatched to Rudolf Brandt the
following teletype message:
“Subject: Collection of Jewish Skeletons.
“In conformity with the proposal of 9 February 1942 and
with the consent of 23 February 1942 * * * SS
Sturmbannfuehrer Professor Hirt planned the hitherto
missing collection of skeletons. Due to the extent of the
scientific work connected herewith, the preparation of the
skeletons is not yet concluded. Hirt asks with respect to the
time needed for 80 specimens, and in case the endangering
of Strasbourg has to be reckoned with, how to proceed with
the collection situated in the dissecting room of the
anatomical institute. He is able to carry out the maceration
and thus render them irrecognizable. Then, however, part
of the entire work would have been partly done in vain, and
it would be a great scientific loss for this unique collection,
because hominit casts could not be made afterwards. The
skeleton collection as such is not conspicuous. Viscera could
be declared as remnants of corpses, apparently left in the
anatomical institute by the French and ordered to be
cremated. Decision on the following proposals is requested:
“1. Collection can be preserved.
“2. Collection is to be partly dissolved.
“3. Entire collection is to be dissolved.
“Sievers”
The pictures of the corpses and the dissecting rooms of the
Institute, taken by the French authorities after the liberation of
Strasbourg, point up the grim story of these deliberate murders to
which Sievers was a party.
Sievers knew from the first moment he received Hirt’s report of 9
February 1942 that mass murder was planned for the procurement
of the skeleton collection. Nevertheless he actively collaborated in
the project, sent an employee of the Ahnenerbe to make the
preparatory selections in the concentration camp at Auschwitz, and
provided for the transfer of the victims from Auschwitz to Natzweiler.
He made arrangements that the collection be destroyed.
Sievers’ guilt under this specification is shown without question.
Sievers offers two purported defenses to the charges against him
(1) that he acted pursuant to superior orders; (2) that he was a
member of a resistance movement.
The first defense is wholly without merit. There is nothing to
show that in the commission of these ghastly crimes, Sievers acted
entirely pursuant to orders. True, the basic policies or projects which
he carried through were decided upon by his superiors, but in the
execution of the details Sievers had an unlimited power of discretion.
The defendant says that in his position he could not have refused an
assignment. The fact remains that the record shows the case of
several men who did, and who have lived to tell about it.
Sievers’ second matter of defense is equally untenable. In
support of the defense, Sievers offered evidence by which he hoped
to prove that as early as 1933 he became a member of a secret
resistance movement which plotted to overthrow the Nazi
Government and to assassinate Hitler and Himmler; that as a leading
member of the group, Sievers obtained the appointment as Reich
Business Manager of the Ahnenerbe so that he could be close to
Himmler and observe his movements; that in this position he
became enmeshed in the revolting crimes, the subject matter of this
indictment; that he remained as business manager upon advice of
his resistance leader to gain vital information which would hasten
the day of the overthrow of the Nazi Government and the liberation
of the helpless peoples coming under its domination.
Assuming all these things to be true, we cannot see how they
may be used as a defense for Sievers. The fact remains that murders
were committed with cooperation of the Ahnenerbe upon countless
thousands of wretched concentration camp inmates who had not the
slightest means of resistance. Sievers directed the program by which
these murders were committed.
It certainly is not the law that a resistance worker can commit no
crime, and least of all, against the very people he is supposed to be
protecting.
MEMBERSHIP IN A CRIMINAL ORGANIZATION
Under count four of the indictment, Wolfram Sievers is charged
with being a member of an organization declared criminal by the
judgment of the International Military Tribunal, namely, the SS. The
evidence shows that Wolfram Sievers became a member of the SS in
1935 and remained a member of that organization to the end of the
war. As a member of the SS he was criminally implicated in the
commission of war crimes and crimes against humanity, as charged
under counts two and three of the indictment.
CONCLUSION
Military Tribunal I finds and adjudges the defendant Wolfram
Sievers guilty under counts two, three and four of the indictment.
ROSE
The defendant Rose is charged under counts two and three of
the indictment with special responsibility for, and participation in
Typhus and Epidemic Jaundice Experiments.
The latter charge has been abandoned by the prosecution.
Evidence was offered concerning Rose’s criminal participation in
malaria experiments at Dachau, although he was not named in the
indictment as one of the defendants particularly charged with
criminal responsibility in connection with malaria experiments.
Questions presented by this situation will be discussed later.
The defendant Rose is a physician of large experience, for many
years recognized as an expert in tropical diseases. He studied
medicine at the Universities of Berlin and Breslau and was admitted
to practice in the fall of 1921. After serving as interne in several
medical institutes, he received an appointment on the staff of the
Robert Koch Institute in Berlin. Later he served on the staff of
Heidelberg University and for three years engaged in the private
practice of medicine in Heidelberg. In 1929 he went to China, where
he remained until 1936, occupying important positions as medical
adviser to the Chinese Government. In 1936 he returned to Germany
and became head of the Department for Tropical Medicine at the
Robert Koch Institute in Berlin. Late in August 1939 he joined the
Luftwaffe with the rank of first lieutenant in the Medical Corps. In
that service he was commissioned brigadier general in the reserve
and continued on active duty until the end of the war. He was
consultant on hygiene and tropical medicine to the Chief of the
Medical Service of the Luftwaffe. From 1944 he was also consultant
on the staff of defendant Handloser and was medical adviser to Dr.
Conti in matters pertaining to tropical diseases. During the war Rose
devoted practically all of his time to his duties as consultant to the
Chief of the Medical Service of the Luftwaffe, Hippke, and after 1
January 1944, the defendant Schroeder.
MALARIA EXPERIMENTS
Medical experiments in connection with malaria were carried on
at Dachau concentration camp from February 1942 until the end of
the war. These experiments were conducted under Dr. Klaus Schilling
for the purpose of discovering a method of establishing immunity
against malaria. During the course of the experiments probably as
many as 1,000 inmates of the concentration camp were used as
subjects of the experiments. Very many of these persons were
nationals of countries other than Germany who did not volunteer for
the experiments. By credible evidence it is established that
approximately 30 of the experimental subjects died as a direct result
of the experiments and that many more succumbed from causes
directly following the experiments; the victims included non-German nationals.
With reference to Rose’s participation in these experiments, the
record shows the following: The defendant Rose had been
acquainted with Schilling for a number of years, having been his
successor in a position once held by Schilling in the Robert Koch
Institute. Under date 3 February 1941, Rose, writing to Schilling,
then in Italy, referred to a letter received from Schilling, in which the
latter requested “malaria spleens” (spleens taken from the bodies of
persons who had died from malaria). Rose in reply asked for
information concerning the exact nature of the material desired.
Schilling wrote 4 April 1942 from Dachau to Rose at Berlin, stating
that he had inoculated a person intracutaneously with sporocoides
from the salivary glands of a female anopheles which Rose had sent
him. The letter continues:
“For the second inoculation I miss the sporocoides
material because I do not possess the ‘Strain Rose’ in the
anopheles yet. If you could find it possible to send me in
the next days a few anopheles infected with ‘Strain Rose’
(with the last consignment two out of ten mosquitoes were
infected) I would have the possibility to continue this
experiment and I would naturally be very thankful to you
for this new support of my work.
“The mosquito breeding and the experiments proceed
satisfactorily and I am working now on six tertiary strains.”
The letter bears the handwritten endorsement “finished 17 April
1942. L. g. RO 17/4,” which clearly reveals that Rose had complied
with Schilling’s request for material.
Schilling again wrote Rose from Dachau malaria station 5 July
1943, thanking Rose for his letter and “the consignment of
atroparvus eggs.” The letter continues:
“Five percent of them brought on water went down and
were therefore unfit for development; the rest of them
hatched almost 100 percent.
“Thanks to your solicitude, achieved again the
completion of my breed.
“Despite this fact I accept with great pleasure your offer
to send me your excess of eggs. How did you dispatch this
consignment? The result could not have been any better!
“Please tell Fraeulein Lange, who apparently takes care
of her breed with greater skill and better success than the
prisoner August, my best thanks for her trouble.
“Again my sincere thanks to you!”
The “prisoner August” mentioned in the letter was doubtless the
witness August Vieweg, who testified before this Tribunal concerning
the malaria experiments.
Rose wrote Schilling 27 July 1943 in answer to the latter’s letter
of 5 July 1943, stating he was glad the shipment of eggs had arrived
in good order and had proved useful. He also gave the information
that another shipment of anopheles eggs would follow.
In the fall of 1942 Rose was present at the “Cold Conference”
held at Nuernberg and heard Holzloehner deliver his lecture on the
freezing experiments which had taken place at Dachau. Rose
testified that after the conference he talked with Holzloehner, who
told him that the carrying out of physiological experiments on
human beings imposed upon him a tremendous mental burden,
adding that he hoped he never would receive another order to
conduct such experiments.
It is impossible to believe that during the years 1942 and 1943
Rose was unaware of malaria experiments on human beings which
were progressing at Dachau under Schilling, or to credit Rose with
innocence of knowledge that the malaria research was not confined
solely to vaccinations designed for the purpose of immunizing the
persons vaccinated. On the contrary, it is clear that Rose well knew
that human beings were being used in the concentration camp as
subjects for medical experimentation.
However, no adjudication either of guilt or innocence will be
entered against Rose for criminal participation in these experiments
for the following reason: In preparing counts two and three of its
indictment the prosecution elected to frame its pleading in such a
manner as to charge all defendants with the commission of war
crimes and crimes against humanity, generally, and at the same time
to name in each sub-paragraph dealing with medical experiments
only those defendants particularly charged with responsibility for
each particular item.
In our view this constituted, in effect, a bill of particulars and
was, in essence, a declaration to the defendants upon which they
were entitled to rely in preparing their defenses, that only such
persons as were actually named in the designated experiments
would be called upon to defend against the specific items. Included
in the list of names of those defendants specifically charged with
responsibility for the malaria experiments the name of Rose does not
appear. We think it would be manifestly unfair to the defendant to
find him guilty of an offense with which the indictment affirmatively
indicated he was not charged.
This does not mean that the evidence adduced by the
prosecution was inadmissible against the charges actually preferred
against Rose. We think it had probative value as proof of the fact of
Rose’s knowledge of human experimentation upon concentration
camp inmates.
TYPHUS EXPERIMENTS
These experiments were carried out at Buchenwald and
Natzweiler concentration camps, over a period extending from 1942
to 1945, in an attempt to procure a protective typhus vaccine.
In the experimental block at Buchenwald, with Dr. Ding in
charge, inmates of the camp were infected with typhus for the
purpose of procuring a continuing supply of fresh blood taken from
persons suffering from typhus. Other inmates, some previously
immunized and some not, were infected with typhus to demonstrate
the efficacy of the vaccines. Full particulars of these experiments
have been given elsewhere in the judgment.
Rose visited Buchenwald in company with Gildemeister of the
Robert Koch Institute in the spring of 1942. At this time Dr. Ding was
absent, suffering from typhus as the result of an accidental infection
received while infecting his experimental subjects. Rose inspected
the experimental block where he saw many persons suffering from
typhus. He passed through the wards and looked at the clinical
records “of * * * persons with severe cases in the control cases and
* * * lighter cases among those vaccinated.”
The Ding diary, under dates 19 August-4 September 1942,
referring to use of vaccines for immunization, states that 20 persons
were inoculated with vaccine from Bucharest, with a note “this
vaccine was made available by Professor Rose, who received it from
Navy Doctor Professor Ruegge from Bucharest.” Rose denied that he
had ever sent vaccine to Mrugowsky or Ding for use at Buchenwald.
Mrugowsky, from Berlin, under date 16 May 1942, wrote Rose as
follows:
“Dear Professor:
“The Reich Physician SS and Police has consented to the
execution of experiments to test typhus vaccines. May I
therefore ask you to let me have the vaccines.
“The other question which you raised, as to whether the
louse can be infected by a vaccinated typhus patient, will
also be dealt with. In principle, this also has been
approved. There are, however, still some difficulties at the
moment about the practical execution, since we have at
present no facilities for breeding lice.
“Your suggestion to use Olzscha has been passed on to
the personnel department of the SS medical office. It will
be given consideration in due course.”
From a note on the letter, it appears that Rose was absent from
Berlin and was not expected to return until June. The letter,
however, refers to previous contact with Rose and to some
suggestions made by him which evidently concern medical
experiments on human beings. Rose in effect admitted that he had
forwarded the Bucharest vaccine to be tested at Buchenwald.
At a meeting of consulting physicians of the Wehrmacht held in
May 1943, Ding made a report in which he described the typhus
experiments he had been performing at Buchenwald. Rose heard the
report at the meeting and then and there objected strongly to the
methods used by Ding in conducting the experiments. As may well
be imagined, this protest created considerable discussion among
those present.
The Ding diary shows that, subsequent to this meeting,
experiments were conducted at Buchenwald at the instigation of the
defendant Rose. The entry under date of 8 March 1944, which refers
to “typhus vaccine experimental series VIII”, appears as follows:
“Suggested by Colonel M. C. of the Air Corps, Professor
Rose (Oberstarzt), the vaccine ‘Kopenhagen’ (Ipsen-Murine-
vaccine) produced from mouse liver by the National Serum
Institute in Copenhagen was tested for its compatibility on
humans. 20 persons were vaccinated for immunization by
intramuscular injection * * *. 10 persons were
contemplated for control and comparison. 4 of the 30
persons were eliminated before the start of the artificial
injection because of intermittent sickness * * *. The
remaining experimental persons were infected on 16 April
44 by subcutaneous injection of 1/20 cc. typhus sick fresh
blood * * *. The following fell sick: 17 persons immunized:
9 medium, 8 seriously; 9 persons control: 2 medium, 7
seriously * * *. 2 June 44: The experimental series was
concluded 13 June 44: Chart and case history completed
and sent to Berlin. 6 deaths (3 Copenhagen) (3 control). Dr.
Ding.”
When on the witness stand Rose vigorously challenged the
correctness of this entry in the Ding diary and flatly denied that he
had sent a Copenhagen vaccine to Mrugowsky or Ding for use at
Buchenwald. The prosecution met this challenge by offering in
evidence a letter from Rose to Mrugowsky dated 2 December 1943,
in which Rose stated that he had at his disposal a number of
samples of a new murine virus typhus vaccine prepared from mice
livers, which in animal experiments had been much more effective
than the vaccine prepared from the lungs of mice. The letter
continued:
“To decide whether this first-rate murine vaccine should
be used for protective vaccination of human beings against
lice typhus, it would be desirable to know if this vaccine
showed in your and Ding’s experimental arrangement at
Buchenwald an effect similar to that of the classic virus
vaccines.
“Would you be able to have such an experimental series
carried out? Unfortunately I could not reach you over the
phone. Considering the slowness of postal communications
I would be grateful for an answer by telephone * * *.”
The letter shows on its face that it was forwarded by Mrugowsky to
Ding, who noted its receipt by him 21 February 1944.
On cross-examination, when Rose was confronted with the letter
he admitted its authorship, and that he had asked that experiments
be carried out by Mrugowsky and Ding at Buchenwald.
The fact that Rose contributed actively and materially to the
Mrugowsky-Ding experiments at Buchenwald clearly appears from
the evidence.
The evidence also shows that Rose actively collaborated in the
typhus experiments carried out by Haagen at the Natzweiler
concentration camp for the benefit of the Luftwaffe.
From the exhibits in the record, it appears that Rose and Haagen
corresponded during the month of June 1943 concerning the
production of a vaccine for typhus. Under date 5 June 1943 Haagen
wrote to Rose amplifying a telephone conversation between the two
and referring to a letter from a certain Giroud with reference to a
vaccine which had been used on rabbits. A few days later Rose
replied, thanking him for his letters of 4 and 5 June and for “the
prompt execution of my request.” The record makes it plain that the
phrase “the prompt execution of my request” referred to a request
made by Rose to the Chief of the Medical Service of the Wehrmacht
for an order to produce typhus vaccine to be used by the armed
forces in the eastern area.
Under date 4 October 1943 Haagen again wrote Rose concerning
his plans for vaccine production, making reference in the letter to a
report made by Rose on the Ipsen vaccine. Haagen stated that he
had already reported to Rose on the results of experiments with
human beings and expressed his regret that, up to the date of the
letter, he had been unable to “perform infection experiments on the
vaccinated persons.” He also stated that he had requested the
Ahnenerbe to provide suitable persons for vaccination but had
received no answer; that he was then vaccinating other human
beings and would report results later. He concluded by expressing
the wish and need for experimental subjects upon whom to test
vaccinations, and suggested that when subjects were procured,
parallel tests should be made between the vaccine referred to in the
letter and the Ipsen tests.
We think the only reasonable inference which can be drawn from
this letter is that Haagen was proposing to test the efficacy of the
vaccinations which he had completed, which could only be
accomplished by infecting the vaccinated subjects with a virulent
pathogenic virus.
In a letter dated “in the field, 29 September 1943” and directed to
the Behring Works at Marburg/Lahn, Rose states that he is enclosing
a memorandum regarding reports by Dr. Ipsen on his experience in
the production of typhus vaccine. A copy of the report which Rose
enclosed is in evidence, Rose stating therein that he had proposed,
and Ipsen had promised, that a number of Ipsen’s liver vaccine
samples should be sent to Rose with the object of testing their
protective efficacy on human beings whose lives were in special
danger.
several institutions, including that presided over by Haagen.
In November 1943, 100 prisoners were transported to Natzweiler,
of whom 18 had died during the journey. The remainder were in
such poor health that Haagen found them worthless for his
experiments and requested additional healthy prisoners through Dr.
Hirt, who was a member of the Ahnenerbe.
Rose wrote to Haagen 13 December 1943, saying among other
things “I request that in procuring persons for vaccination in your
experiment, you request a corresponding number of persons for
vaccination with Copenhagen vaccine. This has the advantage, as
also appeared in the Buchenwald experiments, that the test of
various vaccines simultaneously gives a clearer idea of their value
than the test of one vaccine alone.”
There is much other evidence connecting Rose with the series of
experiments conducted by Haagen but we shall not burden the
judgment further. It will be sufficient to say that the evidence proves
conclusively that Rose was directly connected with the criminal
experiments conducted by Haagen.
Doubtless, at the outset of the experimental program launched in
the concentration camps, Rose voiced some vigorous opposition. In
the end, however, he overcame what scruples he had and knowingly
took an active and consenting part in the program.
He attempts to justify his actions on the ground that a state may
validly order experiments to be carried out on persons condemned to
death without regard to the fact that such persons may refuse to
consent to submit themselves as experimental subjects. This defense
entirely misses the point of the dominant issue. As we have pointed
out in the case of Gebhardt, whatever may be the condition of the
law with reference to medical experiments conducted by or through
a state upon its own citizens, such a thing will not be sanctioned in
international law when practiced upon citizens or subjects of an
occupied territory.
We have indulged every presumption in favor of the defendant,
but his position lacks substance in the face of the overwhelming
evidence against him. His own consciousness of turpitude is clearly
disclosed by the statement made by him at the close of a vigorous
cross-examination in the following language:
“It was known to me that such experiments had earlier
been carried out, although I basically objected to these
experiments. This institution had been set up in Germany
and was approved by the state and covered by the state. At
that moment I was in a position which perhaps corresponds
to a lawyer who is, perhaps, a basic opponent of execution
or death sentence. On occasion when he is dealing with
leading members of the government, or with lawyers during
public congresses or meetings, he will do everything in his
power to maintain his opinion on the subject and have it
put into effect. If, however, he does not succeed, he stays
in his profession and in his environment in spite of this.
Under circumstances he may perhaps even be forced to
pronounce such a death sentence himself, although he is
basically an opponent of that set-up.”
The Tribunal finds that the defendant Rose was a principal in,
accessory to, ordered, abetted, took a consenting part in, and was
connected with plans and enterprises involving medical experiments
on non-German nationals without their consent, in the course of
which murders, brutalities, cruelties, tortures, atrocities, and other
inhuman acts were committed. To the extent that these crimes were
not war crimes they were crimes against humanity.
CONCLUSION
Military Tribunal I finds and adjudges the defendant Gerhard
Rose guilty under counts two and three of the indictment.
RUFF, ROMBERG, AND WELTZ
The defendants Ruff, Romberg, and Weltz are charged under
counts two and three of the indictment with special responsibility for,
and participation in, High-Altitude Experiments.
The defendant Weltz is also charged under counts two and three
with special responsibility for, and participation in, Freezing
Experiments.
To the extent that the evidence in the record relates to the high-
altitude experiments, the cases of the three defendants will be
considered together.
Defendant Ruff specialized in the field of aviation medicine from
the completion of his medical education at Berlin and Bonn in 1932.
In January 1934 he was assigned to the German Experimental
Institute for Aviation, a civilian agency, in order to establish a
department for aviation medicine. Later he became chief of the
department.
Defendant Romberg joined the NSDAP in May 1933. From April
1936 until 1938 he interned as an assistant physician at a Berlin
hospital. On 1 January 1938 he joined the staff of the German
Experimental Institute for Aviation as an associate assistant to the
defendant Ruff. He remained as a subordinate to Ruff until the end
of the war.
Defendant Weltz for many years was a specialist in X-ray work. In
the year 1935 he received an assignment as lecturer in the field of
aviation medicine at the University of Munich. At the same time he
instituted a small experimental department at the Physiological
Institute of the University of Munich. Weltz lectured at the University
until 1945; at the same time he did research work at the Institute.
In the summer of 1941 the experimental department at the
Physiological Institute, University of Munich, was taken over by the
Luftwaffe and renamed the “Institute for Aviation Medicine in
Munich.” Weltz was commissioned director of this Institute by
Hippke, then Chief of the Medical Inspectorate of the Luftwaffe. In
his capacity as director of this Institute, Weltz was subordinated to
Luftgau No. VII in Munich for disciplinary purposes. In scientific
matters he was subordinated directly to Anthony, Chief of the
Department for Aviation Medicine in the Office of the Medical
Inspectorate of the Luftwaffe.
HIGH-ALTITUDE EXPERIMENTS
The evidence is overwhelming and not contradicted that
experiments involving the effect of low air pressure on living human
beings were conducted at Dachau from the latter part of February
through May 1942. In some of these experiments great numbers of
human subjects were killed under the most brutal and senseless
conditions. A certain Dr. Sigmund Rascher, Luftwaffe officer, was the
prime mover in the experiments which resulted in the deaths of the
subjects. The prosecution maintains that Ruff, Romberg, and Weltz
were criminally implicated in these experiments.
The guilt of the defendant Weltz is said to arise by reason of the
fact that, according to the prosecution’s theory, Weltz, as the
dominant figure, proposed the experiments, arranged for their
conduct at Dachau, and brought the parties Ruff, Romberg, and
Rascher together. The guilt of Ruff and Romberg is charged by
reason of the fact that they are said to have collaborated with
Rascher in the conduct of the experiments. The evidence on the
details of the matter appears to be as follows:
In the late summer of 1941, soon after the Weltz Institute at
Munich was taken over by the Luftwaffe, Hippke, Chief of the
Medical Service of the Luftwaffe, approved, in principle, a research
assignment for Weltz in connection with the problem of rescue of
aviators at high altitudes. This required the use of human
experimental subjects. Weltz endeavored to secure volunteer
subjects for the research from various sources; however, he was
unsuccessful in his efforts.
Rascher, one of Himmler’s minor satellites, was at the time an
assistant at the Institute. He, Rascher, suggested the possibility of
securing Himmler’s consent to conducting the experiments at
Dachau. Weltz seized upon the suggestion, and thereafter
arrangements to that end were completed, Himmler giving his
consent for experiments to be conducted on concentration camp
inmates condemned to death, but only upon express condition that
Rascher be included as one of the collaborators in the research.
Rascher was not an expert in aviation medicine. Ruff was the
leading German scientist in this field, and Romberg was his principal
assistant. Weltz felt that before he could proceed with his research
these men should be persuaded to come into the undertaking. He
visited Ruff in Berlin and explained the proposition. Thereafter Ruff
and Romberg came to Munich, where a conference was held with
Weltz and Rascher to discuss the technical nature of the proposed
experiments.
According to the testimony of Weltz, Ruff, and Romberg, the
basic consideration which impelled them to agree to the use of
concentration camp inmates as subjects was the fact that the
inmates were to be criminals condemned to death who were to
receive some form of clemency in the event they survived the
experiments. Rascher, who was active in the conference, assured the
defendants that this also was one of the conditions under which
Himmler had authorized the use of camp inmates as experimental
subjects.
The decisions reached at the conference were then made known
to Hippke, who gave his approval to the institution of experiments at
Dachau and issued an order that a mobile low-pressure chamber
which was then in the possession of Ruff at the Department for
Aviation Medicine, Berlin, should be transferred to Dachau for use in
the project.
A second meeting was held at Dachau, attended by Ruff,
Romberg, Weltz, Rascher, and the camp commander, to make the
necessary arrangements for the conduct of the experiments. The
mobile low-pressure chamber was then brought to Dachau, and on
22 February 1942 the first series of experiments was instituted.
Weltz was Rascher’s superior; Romberg was subordinate to Ruff.
Rascher and Romberg were in personal charge of the conduct of the
experiments. There is no evidence to show that Weltz was ever
present at any of these experiments. Ruff visited Dachau one day
during the early part of the experiments, but thereafter remained in
Berlin and received information concerning the progress of the
experiments only through his subordinate, Romberg.
There is evidence from which it may reasonably be found that at
the outset of the program personal friction developed between Weltz
and his subordinate Rascher. The testimony of Weltz is that on
several occasions he asked Rascher for reports on the progress of
the experiments and each time Rascher told Weltz that nothing had
been started with reference to the research. Finally Weltz ordered
Rascher to make a report; whereupon Rascher showed his superior a
telegram from Himmler which stated, in substance, that the
experiments to be conducted by Rascher were to be treated as top
secret matter and that reports were to be given to none other than
Himmler. Because of this situation Weltz had Rascher transferred out
of his command to the DVL branch at Dachau. Defendant Romberg
stated that these experiments had been stopped soon after their
inception by the adjutant of the Reich War Ministry, because of
friction between Weltz and Rascher, and that the experiments were
resumed only after Rascher had been transferred out of the Weltz
Institute.
While the evidence is convincingly plain that Weltz participated in
the initial arrangements for the experiments and brought all parties
together, it is not so clear that illegal experiments were planned or
carried out while Rascher was under Weltz's command, or that Weltz
knew that experiments which Rascher might conduct in the future
would be illegal and criminal.
There appear to have been two distinct groups of prisoners used
in the experimental series. One was a group of 10 to 15 inmates
known in the camp as “exhibition patients” or “permanent
experimental subjects”. Most, if not all, of these were German
nationals who were confined in the camp as criminal prisoners.
These men were housed together and were well-fed and reasonably
contented. None of them suffered death or injury as a result of the
experiments. The other group consisted of 150 to 200 subjects
picked at random from the camp and used in the experiments
without their permission. Some 70 or 80 of these were killed during
the course of the experiments.
The defendants Ruff and Romberg maintain that two separate
and distinct experimental series were carried on at Dachau; one
conducted by them with the use of the “exhibition subjects”, relating
to the problems of rescue at high altitudes, in which no injuries
occurred; the other conducted by Rascher on the large group of
nonvolunteers picked from the camp at random, to test the limits of
human endurance at extremely high altitudes, in which experimental
subjects in large numbers were killed.
The prosecution submits that no such fine distinction may be
drawn between the experiments said to have been conducted by
Ruff and Romberg, on the one hand, and Rascher on the other, or in
the prisoners who were used as the subjects of these experiments;
that Romberg—and Ruff as his superior—share equal guilt with
Rascher for all experiments in which deaths to the human subjects
resulted.
In support of this submission the members of the prosecution
cite the fact that Rascher was always present when Romberg was
engaged in work at the altitude chamber; that on at least three
occasions Romberg was at the chamber when deaths occurred to
the so-called Rascher subjects, yet elected to continue the
experiments. They point likewise to the fact that, in a secret
preliminary report made by Rascher to Himmler which tells of
deaths, Rascher mentions the name of Romberg as being a
collaborator in the research. Finally they point to the fact that, after
the experiments were concluded, Romberg was recommended by
Rascher and Sievers for the War Merit Cross, because of the work
done by him at Dachau.
The issue on the question of the guilt or innocence of these
defendants is close; we would be less than fair were we not to
concede this fact. It cannot be denied that there is much in the
record to create at least a grave suspicion that the defendants Ruff
and Romberg were implicated in criminal experiments at Dachau.
However, virtually all of the evidence which points in this direction is
circumstantial in its nature. On the other hand, it cannot be gainsaid
that there is a certain consistency, a certain logic, in the story told by
the defendants. And some of the story is corroborated in significant
particulars by evidence offered by the prosecution.
The value of circumstantial evidence depends upon the
conclusive nature and tendency of the circumstances relied on to
establish any controverted fact. The circumstances must not only be
consistent with guilt, but they must be inconsistent with innocence.
Such evidence is insufficient when, assuming all to be true which the
evidence tends to prove, some other reasonable hypothesis of
innocence may still be true; for it is the actual exclusion of every
other reasonable hypothesis but that of guilt which invests mere
circumstances with the force of proof. Therefore, before a court will
be warranted in finding a defendant guilty on circumstantial
evidence alone, the evidence must show such a well-connected and
unbroken chain of circumstances as to exclude all other reasonable
hypotheses but that of the guilt of the defendant. What
circumstances can amount to proof can never be a matter of general
definition. In the final analysis the legal test is whether the evidence
is sufficient to satisfy beyond a reasonable doubt the understanding
and conscience of those who, under their solemn oaths as officers,
must assume the responsibility for finding the facts.
On this particular specification, it is the conviction of the Tribunal
that the defendants Ruff, Romberg, and Weltz must be found not
guilty.
FREEZING EXPERIMENTS
In addition to the high-altitude experiments, the defendant Weltz
is charged with freezing experiments, likewise conducted at Dachau
for the benefit of the German Luftwaffe. These began at the camp at
the conclusion of the high-altitude experiments and were performed
by Holzloehner, Finke, and Rascher, all of whom were officers in the
medical services of the Luftwaffe. Non-German nationals were killed
in these experiments.
We think it quite probable that Weltz had knowledge of these
experiments, but the evidence is not sufficient to prove that he
participated in them.
CONCLUSION
Military Tribunal I finds and adjudges that the defendant
Siegfried Ruff is not guilty under either count two or count three of
the indictment, and directs that he be released from custody under
the indictment when this Tribunal presently adjourns; and
Military Tribunal I finds and adjudges that the defendant Hans
Wolfgang Romberg is not guilty under either count two or count
three of the indictment, and directs that he be released from custody
under the indictment when this Tribunal presently adjourns; and
Military Tribunal I finds and adjudges that the defendant Georg
August Weltz is not guilty under either count two or count three of
the indictment, and directs that he be released from custody under
the indictment when this Tribunal presently adjourns.
BRACK
The defendant Brack is charged under counts two and three of
the indictment with personal responsibility for, and participation in,
Sterilization Experiments and the Euthanasia Program of the German
Reich. Under count four the defendant is charged with membership
in an organization declared criminal by the judgment of the
International Military Tribunal, namely, the SS.
The defendant Brack enlisted in an artillery unit of an SA
regiment in 1923, and became a member of the NSDAP and the SS
in 1929. Throughout his career in the Party he was quite active in
high official circles. He entered upon full-time service in the Braune
Haus, the Nazi headquarters at Munich, in the summer of 1932. The
following year he was appointed to the Staff of Bouhler, business
manager of the NSDAP in Munich. When in 1934 Bouhler became
Chief of the Chancellery of the Fuehrer of the NSDAP, Brack was
transferred from the Braune Haus to Bouhler’s Berlin office. In 1936
Brack was placed in charge of office 2 (Amt 2) in the Chancellery of
the Fuehrer in Berlin, that office being charged with the
examination of complaints received by the Fuehrer from all parts of
Germany. Later, he became Bouhler’s deputy in office 2. As such he
frequently journeyed to the different Gaue for the purpose of gaining
first-hand information concerning matters in which Bouhler was
interested.
Brack was promoted to the rank of Sturmbannfuehrer in the SS in
1935, and in April 1936 to the rank of Obersturmbannfuehrer. The
following September he became a Standartenfuehrer in the SS, and
was transferred to the staff of the Main Office of the SS in
November. In November 1940 he was promoted to the grade of
Oberfuehrer.
In 1942 Brack joined the Waffen SS, and during the late summer
of that year was ordered to active duty with a Waffen SS division. He
apparently remained on active duty until the close of the war.
STERILIZATION EXPERIMENTS
The persecution of the Jews had become a fixed Nazi policy very
soon after the outbreak of World War II. By 1941 that persecution
had reached the stage of the extermination of Jews, both in
Germany and in the occupied territories. This fact is confirmed by
Brack himself, who testified that he had been told by Himmler that
he, Himmler, had received a personal order to that effect from Hitler.
The record shows that the agencies organized for the so-called
euthanasia of incurables were used for this bloody pogrom. Later,
because of the urgent need for laborers in Germany, it was decided
not to kill Jews who were able to work but, as an alternative, to
sterilize them.
With this end in view Himmler instructed Brack to inquire of
physicians who were engaged in the Euthanasia Program about the
possibility of a method of sterilizing persons without the victims’
knowledge. Brack worked on the assignment, with the result that in
March 1941 he forwarded to Himmler his signed report on the
results of experiments concerning the sterilization of human beings
by means of X-rays. In the report a method was suggested by which
sterilization with X-ray could be effected on groups of persons
without their being aware of the operation.
On 23 June 1942 Brack wrote the following letter to Himmler:
“Dear Reichsfuehrer:
“* * * Among 10 millions of Jews in Europe, there are, I
figure, at least 2-3 millions of men and women who are fit
enough to work. Considering the extraordinary difficulties
the labor problem presents us with I hold the view that
those 2-3 millions should be specially selected and
preserved. This can however only be done if at the same
time they are rendered incapable to propagate. About a
year ago I reported to you that agents of mine have
completed the experiments necessary for this purpose. I
would like to recall these facts once more. Sterilization, as
normally performed on persons with hereditary diseases is
here out of the question, because it takes too long and is
too expensive. Castration by X-ray however is not only
relatively cheap, but can also be performed on many
thousands in the shortest time. I think, that at this time it is
already irrelevant whether the people in question become
aware of having been castrated after some weeks or
months, once they feel the effects.
“Should you, Reichsfuehrer, decide to choose this way in
the interest of the preservation of labor, then Reichsleiter
Bouhler would be prepared to place all physicians and other
personnel needed for this work at your disposal. Likewise
he requested me to inform you that then I would have to
order the apparatus so urgently needed with the greatest
speed.
“Heil Hitler!
“Yours
“Viktor Brack.”
Brack testified from the witness stand that at the time he wrote
this letter he had every confidence that Germany would win the war.
Brack’s letter was answered by Himmler on 11 August 1942. In
the reply Himmler directed that sterilization by means of X-rays be
tried in at least one concentration camp in a series of experiments,
and that Brack place at his disposal expert physicians to conduct the
operation.
Blankenburg, Brack’s deputy, replied to Himmler’s letter and
stated that Brack had been transferred to an SS division, but that he,
Blankenburg, as Brack’s permanent deputy would “immediately take
the necessary measures and get in touch with the chiefs of the main
offices of the concentration camps.”