Quantum Algorithms via Linear Algebra: A Primer, by Richard J. Lipton and Kenneth W. Regan
QUANTUM ALGORITHMS VIA LINEAR ALGEBRA
A Primer
Richard J. Lipton
Kenneth W. Regan
The MIT Press
Cambridge, Massachusetts
London, England
© 2014 Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form or by any electronic
or mechanical means (including photocopying, recording, or information storage and retrieval)
without permission in writing from the publisher.
MIT Press books may be purchased at special quantity discounts for business or sales promotional
use. For information, please email special_sales@mitpress.mit.edu.
This book was set in Times Roman and Mathtime Pro 2 by the authors, and was printed and bound
in the United States of America.
Library of Congress Cataloging-in-Publication Data
Lipton, Richard J., 1946–
Quantum algorithms via linear algebra: a primer / Richard J. Lipton and Kenneth W. Regan.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-262-02839-4 (hardcover : alk. paper)
1. Quantum computers. 2. Computer algorithms. 3. Algebra, Linear. I. Regan, Kenneth W., 1959–
II. Title
QA76.889.L57 2014
005.1–dc23
2014016946
10 9 8 7 6 5 4 3 2 1
We dedicate this book to all those who helped create and nourish the beautiful
area of quantum algorithms, and to our families who helped create and
nourish us.
RJL and KWR
Contents
Preface
Acknowledgements
1 Introduction
1.1 The Model
1.2 The Space and the States
1.3 The Operations
1.4 Where Is the Input?
1.5 What Exactly Is the Output?
1.6 Summary and Notes
2 Numbers and Strings
2.1 Asymptotic Notation
2.2 Problems
2.3 Summary and Notes
3 Basic Linear Algebra
3.1 Hilbert Spaces
3.2 Products and Tensor Products
3.3 Matrices
3.4 Complex Spaces and Inner Products
3.5 Matrices, Graphs, and Sums Over Paths
3.6 Problems
3.7 Summary and Notes
4 Boolean Functions, Quantum Bits, and Feasibility
4.1 Feasible Boolean Functions
4.2 An Example
4.3 Quantum Representation of Boolean Arguments
4.4 Quantum Feasibility
4.5 Problems
4.6 Summary and Notes
5 Special Matrices
5.1 Hadamard Matrices
5.2 Fourier Matrices
5.3 Reversible Computation and Permutation Matrices
5.4 Feasible Diagonal Matrices
5.5 Reflections
5.6 Problems
5.7 Summary and Notes
6 Tricks
6.1 Start Vectors
6.2 Controlling and Copying Base States
6.3 The Copy-Uncompute Trick
6.4 Superposition Tricks
6.5 Flipping a Switch
6.6 Measurement Tricks
6.7 Partial Transforms
6.8 Problems
6.9 Summary and Notes
7 Phil’s Algorithm
7.1 The Algorithm
7.2 The Analysis
7.3 An Example
7.4 A Two-Qubit Example
7.5 Phil Measures Up
7.6 Quantum Mazes versus Circuits versus Matrices
7.7 Problems
7.8 Summary and Notes
8 Deutsch’s Algorithm
8.1 The Algorithm
8.2 The Analysis
8.3 Superdense Coding and Teleportation
8.4 Problems
8.5 Summary and Notes
9 The Deutsch-Jozsa Algorithm
9.1 The Algorithm
9.2 The Analysis
9.3 Problems
9.4 Summary and Notes
10 Simon’s Algorithm
10.1 The Algorithm
10.2 The Analysis
10.3 Problems
10.4 Summary and Notes
11 Shor’s Algorithm
11.1 Strategy
11.2 Good Numbers
11.3 Quantum Part of the Algorithm
11.4 Analysis of the Quantum Part
11.5 Probability of a Good Number
11.6 Using a Good Number
11.7 Continued Fractions
11.8 Problems
11.9 Summary and Notes
12 Factoring Integers
12.1 Some Basic Number Theory
12.2 Periods Give the Order
12.3 Factoring
12.4 Problems
12.5 Summary and Notes
13 Grover’s Algorithm
13.1 Two Vectors
13.2 The Algorithm
13.3 The Analysis
13.4 The General Case, with k Unknown
13.5 Grover Approximate Counting
13.5.1 The Algorithm
13.5.2 The Analysis
13.6 Problems
13.7 Summary and Notes
14 Quantum Walks
14.1 Classical Random Walks
14.2 Random Walks and Matrices
14.3 An Encoding Nicety
14.4 Defining Quantum Walks
14.5 Interference and Diffusion
14.6 The Big Factor
14.7 Problems
14.8 Summary and Notes
15 Quantum Walk Search Algorithms
15.1 Search in Big Graphs
15.2 General Quantum Walk for Graph Search
15.3 Specifying the Generic Walk
15.4 Adding the Data
15.5 Toolkit Theorem for Quantum Walk Search
15.5.1 The Generic Algorithm
15.5.2 The Generic Analysis
15.6 Grover Search as Generic Walk
15.7 Element Distinctness
15.8 Subgraph Triangle Incidence
15.9 Finding a Triangle
15.10 Evaluating Formulas and Playing Chess
15.11 Problems
15.12 Summary and Notes
16 Quantum Computation and BQP
16.1 The Class BQP
16.2 Equations, Solutions, and Complexity
16.3 A Circuit Labeling Algorithm
16.4 Sum-Over-Paths and Polynomial Roots
16.5 The Additive Polynomial Simulation
16.6 Bounding BQP
16.7 Problems
16.8 Summary and Notes
17 Beyond
17.1 Reviewing the Algorithms
17.2 Some Further Topics
17.3 The “Quantum” in the Algorithms
Bibliography
Index
Preface
This book is an introduction to quantum algorithms unlike any other. It is short,
yet it is comprehensive and covers the most important and famous quantum
algorithms; it assumes minimal background, yet is mathematically rigorous;
it explains quantum algorithms, yet steers clear of the notorious philosophical
problems and issues of quantum mechanics.
We assume no background in quantum theory, quantum mechanics, or quan-
tum anything. None. Quantum computation can be described in terms of ele-
mentary linear algebra, so some familiarity with vectors, matrices, and their
basic properties is required. However, we will review all that we need from
linear algebra, which is surprisingly little. If you need a refresher, then our
material should be enough; if you are new to linear algebra, then we suggest
some places where you can find the required material. It is really not much, so
do not worry.
We do assume that you are comfortable with mathematical proofs; that is,
we assume “mathematical maturity” in a way that is hard to define. Our proofs
are short and straightforward, except for advanced topics in section 13.5 and
chapters 15 and 16. This may be another surprise: for all the excitement about
quantum algorithms, it is interesting that the mathematical tools and meth-
ods used are elementary. The proofs are neat, clever, and interesting, but you
should have little trouble following the arguments. If you do, it is our fault—
we hope that our explanations are always clear. Our idea of a standard course
runs through section 13.4, possibly including chapter 14.
We strive for mathematical precision. There is always a fine line between
being complete and clear and being pedantic—hopefully we stay on the right
side of this. We started with the principle of supplying all the details—all of
them—on all we present. We have compromised in three places, all having to
do with approximations. The first is our using the quantum Fourier transform
“as-is” rather than approximating it, and the others are in chapters 15 and 16.
For better focus on the algorithms, we chose to de-emphasize quantum cir-
cuits. In fact, we tried to avoid quantum circuits and particularities of quantum
gates altogether. However, they are excellent to illuminate linear algebra, so
we have provided a rich set of exercises in chapters 3 through 7, plus two pop-
ular applications in section 8.3. These can in fact be used to support coverage
of quantum circuits in a wider-scale course. The same goes for complexity
classes. We prefer to speak operationally in terms of feasible computation, and
we try to avoid being wedded to the “asymptotically polynomially bounded”
definition of it. We avoid naming any complexity class until chapter 16. Never-
theless, that chapter has ample complexity content anchored in computational
17. xii Preface
problems rather than machine models and is self-contained enough to support
a course that covers complexity theory. At the same stroke, it gives algebraic
tools for analyzing quantum circuits. We featured tricks we regard as algorith-
mic in the main text and delegated some tricks of implementation to exercises.
What makes an algorithm a quantum algorithm? The answer should have
nothing to do with how the algorithm is implemented in a physical quantum
system. We regard this as really a question about how programming notation—
mathematical notation—represents the feasibility of calculations in nature.
Quantum algorithms use algebraic units called qubits that are richer than bits,
by which they are allowed to count as feasible some operations that when writ-
ten out in simple linear algebra use exponentially long notation. The rules for
these allowed operations are specified in standard models of quantum compu-
tation, which are all equivalent to the one presented in this book. It might seem
ludicrous to believe that nature in any sense uses exponentially long notation,
but some facet of this appears at hand because quantum algorithms can quickly
solve problems that many researchers believe require exponential work by any
“classical” algorithm. In this book, classical means an algorithm written in the
notation for feasible operations used by every computer today.
This leads to a word about our presentation. Almost all summaries, notes,
and books on quantum algorithms use a special notation for vectors and matri-
ces. This is the famous Dirac notation that was invented by Paul Dirac—who
else. It has many advantages and is the de facto standard in the study of
quantum algorithms. It is a great notation for experts and instrumental to becom-
ing an expert, but we suspect it is a barrier for those starting out who are not
experts. Thus, we avoid using it, except for a few places toward the end to give
a second view of some complicated states. Our thesis is that we can explain
quantum algorithms without a single use of this notation. Essentially this book
is a testament to that belief: if you find this book more accessible than oth-
ers, then we believe it owes to this decision. Our notation follows certain ISO
recommendations, including boldface italics for vectors and heavy slant for
matrices and operators.
We hope you will enjoy this little book. It can be used to gain an understand-
ing of quantum algorithms by self-study, as a course or seminar text, or even
as additional material in a general course on algorithms.
Georgia Institute of Technology, Richard J. Lipton
University at Buffalo (SUNY), Kenneth W. Regan
Acknowledgements
We thank Aram Harrow, Gil Kalai, and John Preskill for contributions to a
debate since 2012 on themes reflected in the opening and last chapters, Andrew
Childs and Stacey Jeffery for suggestions on chapter 15, and several members
of the IBM Thomas J. Watson Research Center for suggestions and illumi-
nating discussions that stimulated extra coverage of quantum gates and circuit
simulations in the exercises. We thank Marie Lufkin Lee and Marc Lowenthal
and others at MIT Press for patiently shepherding this project to completion,
and the anonymous reviewers of the manuscript in previous stages. We also
thank colleagues and students for some helpful input. Quantum circuit dia-
grams were typeset using version 2 of the Qcircuit.tex package by Steve
Flammia and Bryan Eastin.
1 Introduction
One of the great scientific and engineering questions of our time is:
Are quantum computers possible?
We can build computers out of mechanical gears and levers, out of electric
relays, out of vacuum tubes, out of discrete transistors, and finally today out
of integrated circuits that contain thousands of millions of individual transis-
tors. In the future, it may be possible to build computers out of other types of
devices—who knows.
All of these computers, from mechanical to integrated-circuit-based ones,
are called classical. They are all classical in that they implement the same type
of computer, albeit as the technology gets more sophisticated the computers
become faster, smaller, and more reliable. But they all behave in the same way,
and they all operate in a non-quantum regime.
What distinguishes these devices is that information is manipulated as bits,
which already have determinate values of 0 or 1. Ironically, the key compo-
nents of today’s computers are quantum devices. Both the transistor and its
potential replacement, the Josephson junction, won a Nobel Prize for the quan-
tum theory of their operation. So why is their regime non-quantum? The reason
is that the regime reckons information as bits.
By contrast, quantum computation operates on qubits, which are based on
complex-number values, not just 0 and 1. They can be read only by measuring,
and the readout is in classical bits. To skirt the commonly bandied notion of
observers interfering with quantum systems and postpone the discussion of
measurement as an operation, we offer the metaphor that a bit is what you get
by “cooking” a qubit. From this standpoint, doing a classical computation on
bits is like cooking the ingredients of a pie individually before baking them
together in the pie. The quantum argument is that it’s more expedient to let
the filling bubble in its natural state while cooking everything at once. The
engineering problem is whether the filling can stay coherent long enough for
this to work.
The central question is whether it is possible to build computers that are
inherently quantum. Such computers would exploit the power and wonder of
nature to create systems that can effectively be in multiple states at once. They
open a world with apparent actions at a distance that the great Albert Ein-
stein never believed but that actually happen—a world with other strange and
counter-intuitive effects. To be sure, this is the world we live in, so the question
becomes how much of this world our computers can enact.
This question is yet to be resolved. Many believe that such machines will be
built one day. Some others have fundamental doubts and believe there are phys-
ical limits that make quantum computers impossible. It is currently unclear
who is right, but whatever happens will be interesting: a world with quantum
computers would allow us to solve hard problems, while a barrier to them
might shed light on deep questions of physics and information.
Happily this question does not pose a barrier to us. We plan to study quan-
tum algorithms, which are interesting whether quantum computers are built
soon, in the next ten years, in the next fifty years, or never. The area of quantum
algorithms contains some beautiful ideas that everyone interested in computa-
tion should know.
The rationale for this book is to supply a gentle introduction to quan-
tum algorithms. We will say nothing more about quantum computers—about
whether they will be built or how they may work—until the end. We will only
discuss algorithms.
Our goal is to explain quantum algorithms in a way that is accessible to
almost anyone. Curiously, while quantum algorithms are quite different from
classical ones, the mathematical tools needed to understand them are quite
modest. The mathematics that is required to understand them is linear algebra:
vectors, matrices, and their basic properties. That is all. So these are really
linear-algebraic algorithms.
1.1 The Model
The universe is complex and filled with strange and wonderful things. From
lifeforms like viruses, bacteria, and people; to inanimate objects like comput-
ers, airplanes, and bridges that span huge distances; from planets, to whole
galaxies. There is mystery and wonder in them all.
The goal of science in general, and physics specifically, is to explore and
understand the universe by discovering the simplest laws possible that explain
the multitude of phenomena. The method used by physics is the discovery of
models that predict the behavior of all from the smallest to the largest objects.
In ancient times, the models were crude: the earliest models “explained” all
by reducing everything to earth, water, wind, and fire. Today, the models are
much more refined—they replace earth and water by hundreds of particles and
wind and fire by the four fundamental forces. Mainly, the models are better at
predicting, that is, in channeling reproducible knowledge. Yet the full theory,
the long-desired theory of everything, still eludes us.
Happily, in this introduction to quantum algorithms, we need only a sim-
ple model of how part of the universe works. We can avoid relativity, special
and general; we can avoid the complexities of the Standard Model of particle
physics, with its hundreds of particles; we can even avoid gravity and electro-
magnetism. We cannot quite go back to earth, water, wind, and fire, but we can
avoid having to know and understand much of modern physics. This avowal
of independence from physical qualities does not prevent us from imagining
nature’s workings. Instead, it speaks to our belief that algorithmic considera-
tions in information processing run deeper.
1.2 The Space and the States
So what do we need to know? We need to understand that the state of our
quantum systems will always be described by a single unit vector a that lies in
some fixed vector space of dimension N = 2^n, for some n. That is, the state is
always a vector

    a = [a_0, ..., a_{N-1}]^T,

where each entry a_k is a real or complex number depending on whether the
space is real or complex. Each entry is called an amplitude. We will not need
to involve the full quantum theory of mixed states, which are formally the same
as classical probability distributions over states like a, which are called pure
states. That is, we consider only pure states in this text.
We must distinguish between general states and basis states, which form
a linear-algebra basis composed of configurations that we may observe. In the
standard basis, the basis states are denoted by the vectors e_k whose entries are
0 except for a 1 in place k. We identify e_k with the index k itself in [0, N − 1]
and then further identify k with the k-th string x in a fixed total ordering of
{0,1}^n. That the basis states correspond to all the length-n binary strings is
why we have N = 2^n. The interplays among basis vectors, numerical indices,
and binary strings encoding objects are drawn more formally in chapter 2.
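As a quick illustration of this identification (a Python sketch of ours, not the book's notation; the function names are hypothetical):

```python
# Identify each index k in [0, N-1] with the length-n binary string for k,
# in the standard ordering where 0 maps to the all-zeros string.
def index_to_string(k, n):
    return format(k, "0{}b".format(n))

def string_to_index(x):
    return int(x, 2)

n = 3
strings = [index_to_string(k, n) for k in range(2 ** n)]
print(strings)  # ['000', '001', '010', '011', '100', '101', '110', '111']
print(string_to_index("101"))  # 5
```

With n = 3 there are N = 2^3 = 8 basis states, one per binary string, exactly as the text describes.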
Any vector that is not a basis state is a superposition. Two or more basis
states have nonzero amplitude in any superposition, and only one of them can
be observed individually in any measurement. The amplitude a_k is not directly
meaningful for what we may expect to observe, but rather its squared absolute
value, |a_k|^2. This gives the probability of measuring the system to be in the
basis state e_k. The import of a being a unit vector is that these probabilities
sum to 1, namely,

    Σ_{k=0}^{N−1} |a_k|^2 = 1.
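For concreteness, a small NumPy sketch (ours, not the book's) of a state vector whose squared amplitudes sum to 1:

```python
import numpy as np

n = 2
N = 2 ** n  # dimension of the state space

# An arbitrary amplitude vector, normalized into a valid (unit-length) state.
a = np.array([3, 1j, 0, 1], dtype=complex)
a = a / np.linalg.norm(a)

probs = np.abs(a) ** 2  # probability of observing each basis state e_k
print(round(float(probs.sum()), 12))  # 1.0
```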
For N = 2, the idea that the length of a diagonal line defined by the origin
and a point in the plane involves the sum of two squares is older than Pythago-
ras. The length is 1 precisely when the point lies on the unit circle. We may
regard the basis state e_0 as lying on the x-axis, while e_1 lies on the y-axis. Then
measurement projects the state a either onto the “x leg” of the triangle it makes
with the x-axis or the “y leg” of the triangle along the y-axis.
It may still seem odd that the probabilities are proportional not to the lengths
of the legs but to their squares. But we know that if the angle is θ from the near
part of the x-axis (so 0 ≤ θ ≤ π/2), then the lengths are cos(θ) and sin(θ),
respectively, and it is cos^2(θ) + sin^2(θ), not cos(θ) + sin(θ), that sums to 1.
If we wanted to use points whose legs sum directly to 1, we’d have to use the
diamond that is inscribed inside the circle. In N-dimensional space, we’d have
to use the N-dimensional simplex rather than the sphere. Well, the simplex is
spiky and was not really studied until the 20th century, whereas the sphere is
smooth and nice and was appreciated by the ancients. Evidently nature agrees
with ancient aesthetics. We may not know why the world works this way, but
we can certainly say, why not?
Once we agree, all we really need to know about the space is that it sup-
ports the picture of Pythagoras, that is, ordinary Euclidean space. Both the real
vector spaces R^N and the complex ones C^N do so. That is, they agree on how
many components their vectors have and how distances are measured by taking
squares of values from the vector components. They differ only on what kind of
numbers these component values v can be, but the idea of the norm or absolute
value |v| quickly reconciles this difference. This aspect was first formalized by
Euclid’s great rigorizer of the late 19th and early 20th centuries, David Hilbert;
in his honor, the common concept is called a Hilbert Space. Hilbert’s concept
retains its vigor even if “N” can be infinite, or if the “vectors” and “compo-
nents” are strange objects, but in this book, we need not worry: N will always
be finite, and the space H_N will be R^N or C^N. Allowing the latter is the reason
that we say “Hilbert space” not “ordinary Euclidean space.”
1.3 The Operations
In fact, our model embraces the circle and the sphere even more tightly. It
throws out—it disallows—all other points. Every state a must be a point on
the unit sphere. If you have heard the term “projective space,” then that is what
we are restricting to. But happily we need not worry about restricting the points
so long as we restrict the operations in our model. The operations must map
any point on the unit sphere to some (other) point on the unit sphere.
We will also restrict the operations to be linear. Apart from measurements,
they must be onto the whole sphere—which makes them map all of H_N onto
all of H_N. By the theory of vector subspaces, this means the operations must
also be invertible. Operations with all these properties are called unitary.
We will represent the operations by matrices, and we give several equivalent
stipulations for unitary matrices in chapter 3, followed by examples in chap-
ter 5 and tricks for working with them in chapter 6. But we can already under-
stand that compositions of unitary operations are unitary, and their representa-
tions and actions can be figured by the familiar idea of multiplying matrices.
Thus, our model’s programs will simply be compositions of unitary matrices.
The one catch is that the matrices themselves will be huge, out of proportion to
the actual simplicity of the operation as we believe nature meters it. Hence, we
will devote time in chapters 4 and 5 to ensuring these operations are feasible
according to standards already well accepted in classical computation.
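As an illustrative aside (a NumPy sketch of ours, not from the text): a matrix U is unitary when its conjugate transpose satisfies U†U = I, compositions of unitaries are again unitary and are computed by matrix multiplication, and a unitary maps unit vectors to unit vectors.

```python
import numpy as np

# The 2x2 Hadamard matrix, a standard example of a unitary operation.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Unitary check: conjugate transpose times the matrix gives the identity.
assert np.allclose(H.conj().T @ H, np.eye(2))

# Compositions of unitary operations are unitary; here H composed with H.
U = H @ H
assert np.allclose(U.conj().T @ U, np.eye(2))

# A unitary operation preserves the unit norm of any state it is applied to.
a = np.array([0.6, 0.8j])
b = H @ a
print(abs(np.linalg.norm(a) - 1.0) < 1e-12,
      abs(np.linalg.norm(b) - 1.0) < 1e-12)  # True True
```

Incidentally, H composed with itself gives the identity, a fact the book develops in its treatment of Hadamard matrices.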
Thus, we can comply with the requirements for any computational model
that we must understand what state the computation starts in, how it moves
from one state to another, and how we get information out of the computation.
Start: We will ultimately be able to assume that the start state is always the
elementary vector

    e_0 = [1, 0, ..., 0]^T
of length N. Because the first binary string in our ordering of {0,1}^n will be 0^n,
our usual start state will denote the binary string of n zeros.
Move: If the system is in some state a, then we can move it by applying
a unitary transformation U. Thus, a will move to b where b = Ua. Not all
unitary transformations are allowed, but we will get to that later. Note that if a
is a unit vector, then so is b.
End: We get information out of the quantum computation by making a mea-
surement. If the final state is c, then k is seen with probability |c_k|^2. Note that
the output is just the index k, not the probability of the index. Often we will
have a distinguished set S of indices that stand for accept, with outcomes in
[0, N − 1] \ S standing for reject.
That is it.
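The Start-Move-End cycle above can be sketched in a few lines of NumPy (an illustration of ours; the Hadamard choice of U is ours, not prescribed by the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Start: the elementary vector e_0 in dimension N = 2.
N = 2
a = np.zeros(N, dtype=complex)
a[0] = 1.0

# Move: apply a unitary transformation U; b = Ua is again a unit vector.
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
c = U @ a

# End: measurement returns just an index k, seen with probability |c_k|^2.
probs = np.abs(c) ** 2
k = rng.choice(N, p=probs)
print(probs)        # [0.5 0.5]
print(k in (0, 1))  # True
```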
1.4 Where Is the Input?
In any model of computation, we expect that there is some way to input infor-
mation into the model’s devices. That seems, at first glance, to be missing from
our model. The start state is fixed, and the output method is fixed, so where do
we put the input? The answer is that the input can be encoded by the choice
of the unitary transformation U, in particular by the first several unitary opera-
tions in U, so that for different inputs we will apply different transformations.
This can easily be done in the case of classical computations. We can dis-
pense with explicit input provided we are allowed to change the program each
time we want to solve a different problem. Consider a program of this form:
M = 21;
x = Factor(M);
procedure Factor(z) { ... }
Clearly, if we can access the value of the variable x, then we can determine
the factors of 21. If we wanted to factor a more interesting number such as 35
or 1,001 or 11,234,143, then we can simply execute the same program with M
set to that number.
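The pseudocode above can be fleshed out, for instance, in Python (a hedged sketch; the body of the procedure is ours, since the book elides it):

```python
# To change the input, edit the program text (the constant M), not a runtime
# argument -- the "input baked into the program" idea from the text.
def factor(z):
    # Trial division: return the smallest nontrivial factor of z,
    # or z itself when z is prime.
    d = 2
    while d * d <= z:
        if z % d == 0:
            return d
        d += 1
    return z

M = 21
x = factor(M)
print(x, M // x)  # 3 7
```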
This integration of inputs with programs is more characteristic of quantum
than classical computation. Think of the transformation U as the program, so
varying U is exactly the same as varying the above program in the classical
case. We will show general ways of handling inputs in chapter 6, while for
several famous algorithms, in chapters 8–10, the input is expressly given as a
transformation that is dropped into a larger program. Not to worry—our chap-
ter 7 in the middle also provides support for the classical notion of feeding a
binary string x directly as the input, while also describing the ingredients of
the “quantum power” that distinguish these programs from classical ones.
1.5 What Exactly Is the Output?
The problematic words in this section title are the two short ones. With apolo-
gies to President Bill Clinton, the question is not so much what the meaning
of “is” is, as what the meaning of “the” is. In a classical deterministic model
of computation, every computation has one, definite, output. Even if the model
is randomized, every random input still determines a single computation path,
whose output is definite before its last step is executed.
A quantum computation, however, presents the user at the end with a slot
machine. Measuring is pulling the lever to see what output the wheels give
you. Unlike in some old casinos where slot machines were rigged, you control
how the machine is built, and hopefully you’ve designed it to make the prob-
abilities work in your favor. As with the input, however, the focus shifts from
a given binary string to the machinery itself, and to the action of sampling a
distribution that pulling the measurement lever gives you.
In chapter 6, we will also finesse the issue of having measurements in the
middle of computations that continue from states projected down to a lower-
dimensional space. This could be analogized to having a slot machine still spin-
ning after one wheel has fixed its value. We show why measurements may gen-
erally be postponed until the end, but the algorithms in this textbook already
behave that way. This helps focus the idea that the ultimate goal of quantum
computation is not a single output but rather a sampling device. Chapter 6 also
lays groundwork for how those devices can be re-used to improve one’s chance
of success. Chapter 7 lays out the format for how we present and analyze quan-
tum algorithms and gives further attention to entanglement, interference, and
measurement.
All of this remains mere philosophy, however, unless we can show how
the results of the sampling help solve concrete problems efficiently in distin-
guished ways. These solutions are the ultimate outputs, as exemplified in chap-
ters 8–10. In chapters 11 and 12, the outputs are factorizations of numbers, via
the algorithm famously discovered by Peter Shor. In chapters 13–15, they are
objects that we need to search for in a big search space. Chapter 16 branches
out to topics in quantum complexity theory, defining the class BQP formally
and proving upper and lower bounds on it in terms of classical complexity.
Chapter 17 summarizes the algorithms and discusses some further topics and
readings. Saying this completes the application of the model and the overview
of this book.
1.6 Summary and Notes
In old Platonic fashion, we have tried to idealize quantum computation by pure
thought apart from physical properties of the world around us. Of course one
can make errors that way, as Aristotle showed by failing to climb a tower and
drop equal-sized balls of unequal weights. It is vain to think that Pythagoras or
Euclid or Archimedes could have come up with such a computational model
but maybe not so vain to estimate it of Hilbert. Linear algebra and geometry
were both deepened by Hilbert. Meanwhile, the quantum theory emerged and
ideas of computation were extensively discussed long before Alan Turing gave
his definitive classical answer (Turing, 1936).
The main stumbling block may have been probability. Quantum physicists
were forced to embrace probability from the get-go, in a time when Newtonian
determinism was dominant and Einstein said “God”—that is, nature—“does
not play dice.” Some of our algorithms will be deterministic—that is, we will
encounter cases where the final points on the unit sphere coincide with standard
basis vectors, whereupon all the probabilities are 0 or 1. However, coping with
probabilistic output appears necessary to realize the full power of quantum
computation. Another block, even for the physicists happy with dice, may have
been the long time it took to realize the essence of computation. Turing’s paper
reached full flower only with the emergence of computing machines during and
after World War II.
Even then, it took Richard Feynman, arguably the main visionary in the area
after the passing of John von Neumann in 1956, until his last decade in the
1980s to set his vision down (Feynman, 1982, 1985). That is when it came to
the attention of David Deutsch (1985) and some others. The theory still had
several false starts—for instance, the second of us overlapped with Deutsch in
1984–1986 at Oxford’s Mathematical Institute and saw its fellows turn aside
Deutsch’s initial claims to be able to compute classically uncomputable func-
tions.
It can take a long time for a great theory to mature, but a great theorem
such as Peter Shor’s on quantum factoring can accelerate it a lot. We hope
this book helps make the ascent to understanding it easier. We chose M = 21
in section 1.4 because it is currently the highest integer on which practical
runs of Shor’s algorithm have been claimed, but even these are not definitively
established (Smolin et al., 2013).
2 Numbers and Strings
Before we can start to present quantum algorithms, we need to discuss the
interchangeable roles of natural numbers and Boolean strings. The set N of
natural numbers consists of
0,1,2,3,...
as usual.
A Boolean string is a string that consists solely of bits: a bit is either 0 or 1.
In computer science, such strings play a critical role, as you probably already
know, because our computational devices are all based on being either “on” or
“off,” “charged” or “uncharged,” “magnetized” or “unmagnetized,” and so on.
The operations we use on natural numbers are the usual ones. For example,
x + y is the sum of x and y, and x · y is their product. There is nothing new
or surprising here. The operations we use on Boolean strings are also quite
simple: The length of a string is the number of bits in the string, and if x and
y are Boolean strings, then xy is their concatenation. Thus, if x = 0101 and
y = 111, then we have xy = 0101111. This is a kind of “product” operation on
strings, but we find it convenient not to use an explicit operator symbol. If you
see xy and both x and y are strings, then xy is the result of concatenating them
together.
What we need to do is switch from numbers to Boolean strings and back.
Sometimes it is best to use the number representation and other times the string
representation. This kind of dual nature is basic in computer science and will
be used often in describing the quantum algorithms. There are, however, some
hitches that must be regarded to make it work properly. Let’s look and see why.
If m is a natural number, then it can be written uniquely as a binary number: let
m = 2^(n−1) x_(n−1) + ··· + 2x_1 + x_0,
where each x_i is a bit, and we insist that x_(n−1) is nonzero. Then we can use m to
denote the Boolean string x_(n−1) ··· x_1 x_0. For instance, 7 maps to the string 111.
In the reverse direction, we can use the string
x_(n−1), ..., x_0
to denote the natural number
2^(n−1) x_(n−1) + ··· + 2x_1 + x_0.
For example, the string 10010 is the number 16 + 2 = 18.
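The two directions of this correspondence can be sketched in Python (our own illustration; the book itself gives no code) using the built-ins `bin` and `int`:

```python
# Number -> binary string (leading bit 1, no padding) and back.
# Mirrors the text's examples: 7 -> "111" and "10010" -> 16 + 2 = 18.

def to_bits(m: int) -> str:
    """Binary string for m, with x_(n-1) = 1 (no leading zeros)."""
    return bin(m)[2:]          # bin(7) == '0b111'; strip the '0b' prefix

def to_number(s: str) -> int:
    """Natural number denoted by the Boolean string s."""
    return int(s, 2)           # parse s in base 2

assert to_bits(7) == "111"
assert to_number("10010") == 18
assert to_number(to_bits(18)) == 18   # the two maps invert each other
```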
Often we will be concerned with numbers in a fixed range 0,...,N − 1 where
N = 2^n. This range is denoted by [N]. Then it will be convenient to drop the
requirement of a leading 1 and use n bits for each number, so that zero is n-many 0s,
one is 0^(n−1)1, and so on up to N − 1 = 1^n, where 1^n means n-many 1's. We call this
the canonical numbering of {0,1}^n. For example, with n = 3:
000 = 0 100 = 4
001 = 1 101 = 5
010 = 2 110 = 6
011 = 3 111 = 7.
The small but important issue is that, for the representation from numbers to
strings to be unambiguous, we must know how long the strings are. Otherwise,
what does the number 0 represent? Does it represent 0 or 00 or 000, and so
on? This is why we said earlier that the mapping between numbers and strings
is not exact. To make it precise, we need to know the length of the strings. A
more technical way of saying this is that once we specify the mapping as being
between the natural numbers 0,1,...,2^n − 1 and the strings of length n (that
is, {0,1}^n), it is one-to-one. Note that 0 as a number now corresponds to the
unique string 0···0 consisting of a total of n zeros.
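A sketch of the canonical numbering in Python (our own helper, using a zero-padded binary format specification):

```python
# Canonical numbering of {0,1}^n: each m in [N], N = 2^n, maps to an n-bit
# string with leading zeros allowed, which makes the mapping one-to-one.

def canonical(m: int, n: int) -> str:
    return format(m, f"0{n}b")   # zero-padded binary of width n

assert canonical(0, 3) == "000"  # zero is n-many 0s
assert canonical(5, 3) == "101"
assert canonical(7, 3) == "111"  # N - 1 = 1^n
# Without fixing n, the number 0 could mean "0", "00", "000", and so on;
# specifying the length n is exactly what removes the ambiguity.
```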
There is one more operation that we use on Boolean strings. If x and y are
Boolean strings of length m, then x • y is their Boolean inner product, which
is defined to be
x_1y_1 ⊕ ··· ⊕ x_my_m.
Here ⊕ means exclusive-or, which is the same as addition modulo 2. Hence,
sometimes we may talk about Boolean strings as being members of an m-
dimensional space with addition modulo 2. We must warn that the name inner
product is also used when we talk about Hilbert spaces in chapter 3. Many
sources use x · y to mean concatenation of strings, but we reserve the lighter
dot for numerical multiplication. When x and y are single bits, x · y is the same
as x • y, but using the lighter dot still helps remind us that they are single bits.
Sometimes this type of overloading occurs in mathematics—we try to make
clear which is used when.
A further neat property of Boolean strings is that they can represent subsets
of a set. If the set is {1,2,3}, in that order, then 000 corresponds to the empty
set, 011 to {2,3}, 100 to {1}, 111 to the whole set {1,2,3}, and so on.
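Both the Boolean inner product and the subset encoding are easy to sketch in Python (the helper names are ours, not the book's):

```python
# Boolean inner product x • y = x_1 y_1 XOR ... XOR x_m y_m on bit strings,
# plus the reading of a Boolean string as a subset of {1, 2, 3}.

def bool_inner(x: str, y: str) -> int:
    assert len(x) == len(y)
    # AND the bits pairwise, then take the parity (addition mod 2).
    return sum(int(a) & int(b) for a, b in zip(x, y)) % 2

assert bool_inner("011", "111") == 0   # 0·1 + 1·1 + 1·1 = 2 ≡ 0 (mod 2)
assert bool_inner("100", "111") == 1

def subset(x: str) -> set:
    # Bit i (counting positions from 1) set means element i is in the subset.
    return {i + 1 for i, b in enumerate(x) if b == "1"}

assert subset("000") == set()
assert subset("011") == {2, 3}
assert subset("111") == {1, 2, 3}
```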
2.1 Asymptotic Notation
Suppose we run an algorithm that on problems of size n works in n “passes,”
where the i-th pass takes i “steps.” How long does the whole algorithm take?
If we want to be exact about the number s(n) of steps, then we can calculate:
s(n) = Σ_{i=1}^{n} i = n(n + 1)/2 = (1/2)n² + (1/2)n.
If we view the process pictorially, then we see the passes are tracing out a trian-
gular half of an n × n square along the main diagonal. This intuition says “(1/2)n²”
without worrying about whether the main diagonal is included or excluded or
“halved.” The difference is a term (1/2)n whose added size is relatively tiny as n
becomes moderately large, so we may ignore it. Formally, we define:
DEFINITION 2.1 Two functions s(n) and t(n) on N are asymptotically equiv-
alent, written s(n) ∼ t(n), if lim_{n→∞} s(n)/t(n) exists and equals 1.
So s(n) ∼ (1/2)n², which we can also encapsulate by saying s(n) is quadratic
with “principal constant” 1/2. But suppose now we don’t know or care about
the actual time units for a “step,” only that the algorithm’s cost scales as n².
Another way of saying this is that as the data size n doubles, the time for
the algorithm goes up by about a factor of 4. This idea doesn’t care what the
constant multiplying n² is, only that it is some constant. Hence, we define:
DEFINITION 2.2 Given two functions s(n), t(n) on N, write:
• s(n) = O(t(n)) if there are constants c, d such that for all n,
s(n) ≤ c · t(n) + d;
• s(n) = Ω(t(n)) if t(n) = O(s(n)); and
• s(n) = Θ(t(n)) if s(n) = O(t(n)) and s(n) = Ω(t(n)).
In the first case, we say s(n) is “order-of” t(n) or “Big-Oh-of” t(n), whereas in
the second, we might say s(n) is “asymptotically bounded below by” t(n), and
in the third, we say s(n) and t(n) have the same “asymptotic order.”
A sufficient condition for s(n) = Θ(t(n)) is that the limit lim_{n→∞} s(n)/t(n) exists
and is some positive number. If the limit is zero, then we write s(n) = o(t(n))
instead—this “little-oh” notation is stronger than writing s(n) = O(t(n)) here.
Thus, we can say about our s(n) example above:
• s(n) = Θ(n²);
• s(n) = o(n³);
• log(s(n)) = Θ(log n).
Indeed, the last gives log(s(n)) = Θ(log n), but it does not give log(s(n)) ∼
log(n) because the exponent 2 in s(n) becomes a multiplier of 2 on the loga-
rithm. The choice of base for the logarithm also affects the constant multiplier,
but not the Θ relation. The latter enables us not to care about what the base is
or even whether two logarithms have the same base. This kind of freedom is
important when analyzing the costs of algorithms and even in thinking what
the goals are of designing them.
One further important idea, which we will begin employing in chapter 13,
uses logarithms in a different way. Write f(n) = Õ(g(n)) if there is some finite
k such that f(n) = O(g(n)(logg(n))k). This is pronounced “f is Oh-tilde of g,”
and carries the idea that sometimes logarithmic as well as constant factors can
be effectively ignored.
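The asymptotic claims above can be checked empirically; this Python sketch (our own) tests the closed form for s(n) and its limiting ratios:

```python
# Empirical check: s(n) = 1 + 2 + ... + n equals n(n+1)/2, satisfies
# s(n) ~ n^2/2, and s(n) = o(n^3) (the ratio s(n)/n^3 tends to 0).

def s(n: int) -> int:
    return sum(range(1, n + 1))        # n "passes", pass i costs i steps

for n in (10, 100, 1000):
    assert s(n) == n * (n + 1) // 2    # closed form

n = 10**6
ratio = s(n) / (n * n / 2)
assert abs(ratio - 1) < 1e-5           # s(n) ~ (1/2) n^2
assert s(n) / n**3 < 1e-5              # s(n) = o(n^3)
```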
2.2 Problems
2.1. Let x be a Boolean string. What type of number does the Boolean string
x0 represent?
2.2. Let x be a Boolean string with exactly one bit a 1. What can you say
about the number it represents? Does this identification depend on using the
canonical numbering of {0,1}n, where n is the length of x?
2.3. Compute the 4 × 4 “times-table” of x • y for x,y ∈ {00,01,10,11}. Then
write the entries in the form (−1)x•y.
2.4. Let x be a Boolean string of even length. Can the Boolean string xxx ever
represent a prime number in binary notation?
2.5. Show that a function f : N → N is bounded by a constant if and only if
f(n) = O(1), and is linear if and only if f(n) = Θ(n).
2.6. Show that a function f : N → N is bounded by a polynomial in n, written
f(n) = n^O(1), if and only if there is a constant C such that for all sufficiently
large n, f(2n) ≤ C·f(n). How does C relate to the exponent k of the polynomial?
Thus, we can characterize algorithms that run in polynomial time as those
for which the amount of work scales up only linearly as the size of the data
grows linearly. Later we will use this criterion as a benchmark for feasible
computation.
2.7. Let M be an n-bit integer, and let a < M. Give a bound of the form O(s(n))
for the time needed to find the remainder when a² is divided by M.
2.8. Now suppose we want to compute a^75 modulo M. Give a concrete bound
on the number of squarings and divisions by M one needs to do, never allowing
any number to become bigger than M².
2.9. Use the ideas of problems 2.7 and 2.8 to show that given any a < M, the
function f_a defined for all integers x < M by
f_a(x) = a^x mod M
can be computed in n^O(1) time.
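Problems 2.7–2.9 revolve around repeated squaring. Here is a Python sketch of that idea (our own code; it outlines the technique rather than answering the problems' bound-counting questions, and it cross-checks against Python's built-in three-argument `pow`):

```python
# Repeated squaring: compute a^x mod M with one squaring per exponent bit,
# reducing mod M after every multiplication so no intermediate value
# exceeds M^2 before reduction.

def mod_pow(a: int, x: int, M: int) -> int:
    result = 1
    base = a % M
    while x > 0:
        if x & 1:                        # current low bit of the exponent
            result = (result * base) % M # product < M^2 before the mod
        base = (base * base) % M         # squaring step, also < M^2 first
        x >>= 1
    return result

assert mod_pow(2, 75, 1000) == pow(2, 75, 1000)
assert mod_pow(3, 75, 21) == pow(3, 75, 21)
```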
2.3 Summary and Notes
Numbers and strings are made out of the same “stuff,” which are characters
over a finite alphabet. With numbers we call them “digits,” whereas with binary
strings we call them “bits,” but to a computer they are really the same. Switch-
ing mentally from one to the other is often a powerful way to promote one’s
understanding of theoretical concepts. This is especially true in quantum com-
putations, where the bits of a Boolean string—or rather their indexed loca-
tions in the string—will be treated as quantum coordinates or qubits. The next
chapter shows how we use both numbers and strings as indices to vectors and
matrices.
It is interesting that while the notion of natural numbers is ancient, the notion
of Boolean strings is much more recent. Even more interesting is that it is only
with the rise of computing that the importance of using just the two Boolean
values 0,1 has become so clear. Asymptotic notation helped convince us that
whether we operate in base 10 or 2 or 16 or 64, the difference is secondary
compared to the top-level structure of the algorithm, which usually determines
the asymptotic order of the running time.
3 Basic Linear Algebra
A vector a of dimension N is an ordered list of values, just as usual. Thus, a of
dimension N stands for:
[ a_0 ]
[ a_1 ]
[ ... ]
[ a_(N−1) ]
Instead of the standard subscript notation to denote the k-th element, we will
use a(k) to denote it. This has the advantage of making some of our equations
a bit more readable, but just remember that
a(k) is the same as a_k.
One rationale for this notation is that a vector can often be best viewed as being
indexed by other sets rather than just the usual 0,...,N − 1. The functional
notation a(k) seems to be more flexible in this regard.
A fancier justification is that sometimes vector spaces are defined as func-
tions over sets, so this notation is consistent with that. In any event, a(k) is
just the element of the vector that is indexed by k. A concrete example is to
consider a vector a of dimension 4. In this case, we may use the notation
a(0),a(1),a(2),a(3),
for its elements, or we may employ the notation
a(00),a(01),a(10),a(11),
using Boolean strings to index the elements.
A philosophical justification for our functional notation is that vectors are
pieces of code. We believe that nature computes with code—not with the graph
of the code. For instance, each elementary standard basis vector e_k is 0 except
for the k-th coordinate, which is 1, and we use the subscript when thinking
of it as an object. When N = 2^n, we index the complex coordinates from 0
as 0,...,N − 1 and enumerate {0,1}^n as x_0,...,x_(N−1), but we index the binary
string places from 1 as 1,...,n. Doing so helps tell them apart. When thinking
of e_k as a piece of code, we get the function e_k(x) = 1 if x = x_k and e_k(x) = 0
otherwise. We can also replace k by a binary string as a subscript, for instance,
writing the four standard basis vectors when n = 2 as e_00, e_01, e_10, e_11.
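To make the "vectors as code" view concrete, here is a small Python sketch (our own illustration, assuming the canonical enumeration of {0,1}^n from chapter 2):

```python
# A standard basis vector e_k as a piece of code: a function on {0,1}^n
# that is 1 exactly on the k-th string x_k of the canonical enumeration.

from itertools import product

n = 2
strings = ["".join(bits) for bits in product("01", repeat=n)]
# strings == ["00", "01", "10", "11"]: the enumeration x_0, ..., x_{N-1}

def e(k: int):
    return lambda x: 1 if x == strings[k] else 0

e_01 = e(1)                    # the basis vector e_01
assert e_01("01") == 1
assert [e_01(x) for x in strings] == [0, 1, 0, 0]  # its column-vector form
```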
3.1 Hilbert Spaces
A real Hilbert space is nothing more than a fancy name for the usual Euclidean
space. We will use H_N to denote this space of dimension N. The elements are
real vectors a of dimension N. They are added and multiplied by scalars in the
usual manner:
• If a, b are vectors in this space, then so is a + b, which is defined
componentwise by
(a + b)(k) = a(k) + b(k) for k = 0,...,N − 1.
• If a is again a vector in this space and c is a real number, then b = ca is
defined by
b(k) = c·a(k) for k = 0,...,N − 1.
The abstract essence of a Hilbert space is that each vector has a norm: The
norm of a vector a, really just its length, is defined to be
||a|| = (Σ_k a(k)²)^(1/2).
Note that in the case of two dimensions, the norm of the vector a = (r, s)^T
is its usual length in the plane, √(r² + s²). A Hilbert space simply generalizes
this to many dimensions. A unit vector is just a vector of norm 1. Unit vectors
together comprise the unit sphere in any Hilbert space.
3.2 Products and Tensor Products
The ordinary Cartesian product of an m-dimensional Hilbert space H1 and
an n-dimensional Hilbert space H2 is the (m + n)-dimensional Hilbert space
obtained by concatenating vectors from the former with vectors from the latter.
Its vectors have the form a(i) with i ∈ [m + n].
Their tensor product H_1 ⊗ H_2, however, has vectors of the form a(k), where
k ∈ [mn]. Indeed, k is in 1–1 correspondence with pairs (i,j) of indices
where i ∈ [m] and j ∈ [n]. Because we regard indices as strings, we can write
them juxtaposed as a(ij).
The tensor product of two vectors a and b is the vector c = a ⊗ b defined by
c(ij) = a(i)b(j).
A vector denoting a pure quantum state is separable if it is the tensor product
of two other vectors; otherwise it is entangled. The vectors e_00 and e_11 are
separable, but their unit-scaled sum (1/√2)(e_00 + e_11) is entangled. The standard
basis vectors of H_1 ⊗ H_2 are separable, but this is not true of many of their
linear combinations, all of which still belong to H_1 ⊗ H_2 because it is a Hilbert
space.
Often our quantum algorithms will operate on the product of two Hilbert
spaces, each using binary strings as indices, which gives us the space of vectors
of the form a(xy). Here x ranges over the indices of the first space and y over
those of the second space. Writing a(xy) does not entail that a is separable.
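The tensor product of vectors can be sketched in plain Python (our own helper names; the product-vector test used on the last line is an aside of ours, not from the text, and relies on the identity c(00)c(11) = c(01)c(10) holding for every product vector):

```python
# Tensor product of vectors indexed by strings: c(ij) = a(i) b(j).
# For dimensions 2 and 2 the result has dimension 4, indexed 00,01,10,11.

def tensor(a: list, b: list) -> list:
    # position of index ij is i*len(b) + j
    return [ai * bj for ai in a for bj in b]

e0, e1 = [1.0, 0.0], [0.0, 1.0]
assert tensor(e0, e0) == [1.0, 0.0, 0.0, 0.0]   # e_00 is separable

bell = [0.5 ** 0.5, 0.0, 0.0, 0.5 ** 0.5]       # (e_00 + e_11)/sqrt(2)
# Any product vector c = a ⊗ b satisfies c(00)*c(11) == c(01)*c(10);
# the Bell vector violates this, so it has no factorization: entangled.
assert abs(bell[0] * bell[3] - bell[1] * bell[2]) > 0
```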
3.3 Matrices
Matrices represent linear operators on Hilbert spaces. We can add them
together, we can multiply them, and of course we can use them to operate on
vectors. We assume these notions are familiar to you; if not, please see sources
in this chapter’s end notes. A typical example is
UVa = b.
This means: apply the V transform to the vector a, then apply the U transform
to the resulting vector, and the answer is the vector b. The matrix I_N denotes
the N × N identity matrix. We use square brackets for matrix entries to distinguish
them further from vector amplitudes, so that the identity matrix has
entries I_N[r,c] = 1 if r = c, and I_N[r,c] = 0 otherwise. One of the key properties
of matrices is that they define linear operations, namely:
U(a + b) = Ua + Ub.
The fact that all our transformations are linear is what makes quantum algo-
rithms so different from classical ones. This will be clearer as we give exam-
ples; the linearity restriction at once grants quantum algorithms their great
power and sets them apart from classical computation.
If U is a matrix, then we use U^k to denote the k-th power of the matrix,
U^k = U·U···U (k copies).
DEFINITION 3.1 The transpose of a matrix U is the matrix V such that
V[r,c] = U[c,r].
We use U^T as usual to denote the transpose of a matrix. We also use transpose
for vectors but only when writing them after a matrix in lines of text. Generally,
we minimize the distinction between row and column vectors, using the
latter as standard. The inner product of two real vectors a and b is given by
⟨a,b⟩ = Σ_k a(k)b(k).
DEFINITION 3.2 A real matrix U is unitary provided U^T U = I.
Here are three unitary 2 × 2 real matrices. Note that the last, called the
Hadamard matrix, requires a constant multiplier to divide out a factor of
2 that comes from squaring it.
I = [ 1 0 ]    X = [ 0 1 ]    H = (1/√2) [ 1  1 ]
    [ 0 1 ]        [ 1 0 ]               [ 1 −1 ]
The first two are also permutation matrices, meaning square matrices each
of whose rows and columns has all zeros except for a single 1. All permutation
matrices are unitary.
Another definition of a unitary matrix is based on the notion of orthogonal-
ity. Call two vectors a and b orthogonal if their inner product is 0. A matrix
U is unitary provided each row is a unit vector and any two distinct rows are
orthogonal. The reason these matrices are so important is that they preserve
the Euclidean length.
LEMMA 3.3 If U is a unitary matrix and a is a vector, then ||Ua|| = ||a||.
Proof. Direct calculation gives:
||Ua||² = Σ_x |Ua(x)|²
        = Σ_x (Σ_y U[x,y]a(y)) (Σ_z U[x,z]a(z))
        = Σ_x Σ_y Σ_z U[x,y]U[x,z]a(y)a(z)
        = Σ_y Σ_z (Σ_x U[x,y]U[x,z]) a(y)a(z)
        = Σ_y a(y)a(y) = ||a||²
because the inner product of the columns U[−,y] and U[−,z] is 1 or 0 according as y = z.
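A numerical spot-check of Lemma 3.3 in Python (our own sketch, using the Hadamard matrix H given after Definition 3.2; no external libraries):

```python
# Check that the Hadamard matrix H preserves Euclidean length: ||Ha|| = ||a||.

import math

s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]

def mat_vec(U, a):
    return [sum(U[r][c] * a[c] for c in range(len(a))) for r in range(len(U))]

def norm(a):
    return math.sqrt(sum(x * x for x in a))

a = [0.6, 0.8]                 # a unit vector: 0.36 + 0.64 = 1
b = mat_vec(H, a)
assert abs(norm(a) - 1.0) < 1e-12
assert abs(norm(b) - norm(a)) < 1e-12   # length is preserved
```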
As far as we can, we try not to care about whether a Hilbert space is real or
complex. However, we do need notation for complex spaces.
3.4 Complex Spaces and Inner Products
To describe some algorithms, mainly Shor’s, we need to use complex vectors
and matrices. In this case, all definitions are the same except for the notions of
transpose and inner product. Both now need to use the conjugation operation:
if z = x + iy, where x, y are real numbers and i = √−1 as usual, then the conju-
gate of z is x − iy and is denoted by z̄. We now define the adjoint of a matrix
U to be the matrix V = U* such that
V[r,c] = Ū[c,r], the conjugate of U[c,r].
A complex matrix U is unitary provided U*U = I. Furthermore, the inner
product of two complex vectors a and b is defined to be
⟨a,b⟩ = Σ_k ā(k)b(k).
Note that because r̄ is the same as r for a real number r, these concepts
are the same as what we defined before when the entries of the vectors and
matrices are all real numbers. Rather than use special notation for the complex
case, we will use the same as in the real case—the context should make it clear
which is being used. The one caveat is that in the few cases where the entries
of a or U are complex, one needs to conjugate them. Upon observing this, the
proof of the following is much the same as for Lemma 3.3 above:
LEMMA 3.4 If U is a unitary matrix and a is a vector, then the length of Ua
is the same as the length of a.
We can also form tensor products of matrices having any dimensions. If U
is m × n and V is r × s, then W = U ⊗ V is the mr × ns matrix whose action
on product vectors c(ij) = a(i)b(j) is as follows:
(Wc)(ij) = (Ua)(i)(Vb)(j).
Because every vector d of dimension ns (whether entangled or not) can be
written as a linear combination of basis vectors, each of which is a product
vector, the action Wd is well defined via the same linear combination of the
outputs on the basis vectors. That is, if
d = Σ_{i=1}^{n} Σ_{j=1}^{s} d_{i,j} e_i ⊗ e_j,
then
Wd = Σ_{i=1}^{n} Σ_{j=1}^{s} d_{i,j} (Ue_i) ⊗ (Ve_j).
Note that it does not matter whether the scalars d_{i,j} are regarded as multiplying
e_i, e_j, or the whole thing. This fact matters later in section 6.5. We mainly
use tensor products to combine operations that work on separate halves of an
overall index ij.
3.5 Matrices, Graphs, and Sums Over Paths
One rich source of real-valued matrices is graphs. A graph G consists of a
set V of vertices, also called nodes, together with a binary relation E on V
whose members are called edges. The adjacency matrix A = A(G) of a graph
G = (V,E) is defined for all u,v ∈ V by:
A[u,v] = 1 if (u,v) ∈ E, and A[u,v] = 0 otherwise.
If E is a symmetric relation, then A is a symmetric matrix. In that case, G is an
undirected graph; otherwise it is directed.
The degree of a vertex u is the number of edges incident to u, which is
the same as the number of 1s in row u of A. A graph is regular of degree d if
every vertex has degree d. In that case, consider the matrix A′ = (1/d)A. Figure 3.1
exemplifies this for a graph called the four-cycle, C4. This graph is bipartite,
meaning that V can be partitioned into V1, V2 such that every edge connects a
vertex in V1 and a vertex in V2.
Figure 3.1
Four-cycle graph G = C4 (drawing omitted: vertices 1, 2, 3, 4), stochastic adjacency matrix A′_G, and unitary matrix U_G:

A′_G = (1/2) ·
[ 0 1 0 1 ]
[ 1 0 1 0 ]
[ 0 1 0 1 ]
[ 1 0 1 0 ]

U_G = (1/√2) ·
[ 0 1  0  1 ]
[ 1 0  1  0 ]
[ 0 1  0 −1 ]
[ 1 0 −1  0 ]
Because every row has non-negative numbers summing to 1, A′ is a stochas-
tic matrix. Because the columns also sum to 1, A′ is doubly stochastic.
However, A′ is not unitary for two reasons. First, the squared Euclidean norm
of each row and column is (1/2)²(1 + 1) = 1/2, not 1. Second, not all pairs of
distinct rows or columns are orthogonal. To meet the first criterion, we multiply
by 1/√2 instead of 1/2. To meet the second, we can change the entries for the edge
between nodes 3 and 4 from 1 to −1, creating the matrix U_G, which is also
shown in figure 3.1. Then U_G is unitary—in fact, it is the tensor product H ⊗ X
of two 2 × 2 unitary matrices given after definition 3.2.
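This identity is easy to verify numerically; the following Python sketch (our own code, with a hand-rolled `kron` helper) checks both that U_G = H ⊗ X and that U_G is unitary:

```python
# Verify U_G == H ⊗ X for the modified four-cycle matrix of figure 3.1,
# and check unitarity: distinct columns orthogonal, each column a unit vector.

import math

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
X = [[0, 1], [1, 0]]

def kron(U, V):
    m, n, r, q = len(U), len(U[0]), len(V), len(V[0])
    # row index (i,k), column index (j,l), entry U[i][j] * V[k][l]
    return [[U[i][j] * V[k][l] for j in range(n) for l in range(q)]
            for i in range(m) for k in range(r)]

UG = [[0, s, 0, s],
      [s, 0, s, 0],
      [0, s, 0, -s],
      [s, 0, -s, 0]]

HX = kron(H, X)
assert all(abs(HX[i][j] - UG[i][j]) < 1e-12
           for i in range(4) for j in range(4))

for i in range(4):                     # U^T U = I, entry by entry
    for j in range(4):
        dot = sum(UG[k][i] * UG[k][j] for k in range(4))
        assert abs(dot - (1 if i == j else 0)) < 1e-12
```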
This fact is peculiar to the four-cycle. An example of a d-regular graph
whose adjacency matrix cannot similarly be converted into a unitary matrix
is the 3-regular prism graph shown in figure 3.2. Rows 1 and 3 have a nonzero
dot product in a single column, namely, column 5, so there is no substitution
of nonzero values for nonzero entries that will make them orthogonal.
Figure 3.2
3-regular prism graph G (drawing omitted: vertices 1–6) and stochastic adjacency matrix A′:

A′ = (1/3) ·
[ 0 1 1 0 1 0 ]
[ 1 0 0 1 0 1 ]
[ 1 0 0 1 1 0 ]
[ 0 1 1 0 0 1 ]
[ 1 0 1 0 0 1 ]
[ 0 1 0 1 1 0 ]
However, in chapter 14, we will see a general technique involving a tensor
product of AG with another matrix to create a unitary matrix, one representing
a quantum rather than a classical random walk on the graph G.
The square of an adjacency matrix has entries given by
A²[i,j] = Σ_k A[i,k]A[k,j].
The sum counts the number of ways to go from vertex i through some vertex k
and end up at vertex j. That is, A²[i,j] counts the number of paths of length 2
from i to j. Likewise, A³[i,j] counts paths of length exactly 3, A⁴[i,j] those of
length 4, and so on.
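A Python sketch of this path-counting fact for the four-cycle C4 (our own code):

```python
# A^2[i][j] counts length-2 paths from i to j; illustrated on C4,
# whose adjacency matrix appears in figure 3.1 (before scaling by 1/2).

A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]

def mat_mul(U, V):
    n = len(U)
    return [[sum(U[i][k] * V[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = mat_mul(A, A)
# From vertex 1 back to itself there are two length-2 paths (via its
# two neighbors):
assert A2[0][0] == 2
# There is no length-2 path between adjacent vertices: C4 is bipartite.
assert A2[0][1] == 0
```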
For directed graphs in which each edge goes from one of n sources to one
of m sinks, the adjacency matrices A_{n,m} lend themselves to a path-counting
composition. Consider
U = A_{n,m_1} A_{m_1,m_2} A_{m_2,m_3} ··· A_{m_k,r}.
Note how the m_i dimensions match like dominoes. This makes U a legal matrix
product. The corresponding graph identifies the sinks of the graph G_ℓ repre-
sented by A_{m_{ℓ−1},m_ℓ} (here m_0 = n) with the sources of G_{ℓ+1}. Then U[i,j] counts
the number of paths in the product graph that begin at source node i of G_1 and
end at sink node j of the last graph G_{k+1}.
What links quantum and classical concepts here is that weights can be put on
the edges of these graphs, even complex-number weights. When these are sub-
stituted for the “1” entries of the adjacency matrices, the value U[i,j] becomes
a sum of products over the possible paths. In the classical case, these weights
can be probabilities of individual choices along the paths. In the quantum case,
they can be complex amplitudes, and some of the products in the sum may can-
cel. Either way, the leading algorithmic structure is that of a sum over paths.
As a fundamental idea in quantum mechanics, it was advanced particularly by
Richard Feynman, and yet it needs no more than this much about graphs and
linear algebra to appreciate.
This discussion already conveys the basic flavor of quantum operations. The
matrices are unitary rather than stochastic; the entries are square-roots of prob-
abilities rather than probabilities; negative entries—and later imaginary num-
ber entries—are used to achieve cancellations. After defining feasible compu-
tations, we will examine some important unitary matrices in greater detail.
3.6 Problems
3.1. Show that the product of unitary matrices is unitary.
3.2. If U is a matrix, then what is Uek?
3.3. Show that the columns of a unitary matrix are unit vectors. Also show that
distinct columns are orthogonal.
3.4. Consider the matrix
[ w  w ]
[ w −w ].
For what real values of w is it a unitary matrix?
3.5. Consider the matrix U equal to:
(1/√2) [ I_N  I_N ]
       [ I_N −I_N ].
Show that for any N, the matrix U is unitary.
3.6. For real vectors a and b, show that the inner product can be defined from
the norm as:
(a,b) = (1/2)(||a + b||² − ||a||² − ||b||²).
3.7. For any complex N × N matrix U, we can uniquely write U = R + iQ,
where Q and R have real entries. Show that if U is unitary, then so is the
2N × 2N matrix U′ given in block form by
U′ = [  R  Q ]
     [ −Q  R ].
Thus, by doubling the dimension, we can remove the need for complex-number
entries.
3.8. Apply the construction of the last problem to the matrix
Y = [ 0 −i ]
    [ i  0 ].
This is the second of the so-called Pauli matrices, along with X above and Z
defined below.
3.9. Consider the following matrix:
V = (1/√2) [ e^(iπ/4)   e^(−iπ/4) ]  =  (1/2) [ 1 + i  1 − i ]
           [ e^(−iπ/4)  e^(iπ/4)  ]           [ 1 − i  1 + i ].
What is V²?
3.10. Let T_α denote the 2 × 2 “twist” matrix
[ 1  0       ]
[ 0  e^(iα)  ].
Show that it is unitary. Also find a complex scalar c such that cT_α
has determinant 1, and write out the resulting matrix, which often also goes
under the name T_α.
3.11. The following cases of T_α for α = π, π/2, π/4 have special names as shown:
Z = [ 1  0 ]    S = [ 1 0 ]    T = [ 1  0        ]
    [ 0 −1 ]        [ 0 i ]        [ 0  e^(iπ/4) ].
How are they related? Also find an equation relating the Pauli matrices X,Y,Z
along with an appropriate “phase scalar” c as in the last problem.
3.12. In this and the next problem, consider the following commutators of the
three matrices of the last problem with the Hadamard matrix H (noting also
that H* = H⁻¹ = H, i.e., the Hadamard is self-adjoint as a unitary matrix):
Z′ = HZHZ*
S′ = HSHS*
T′ = HTHT*
Show that Z′ and some multiple cS′ have nonzero entries that are powers of i.
3.13. Show, however, that no multiple cT′ has entries of this form by consid-
ering the mutual angles of its entries. What is the lowest power 2^r such that
these angles are multiples of π/2^r?
3.14. Define a matrix to be balanced if all of its nonzero entries have the same
magnitude. Of all the 2 × 2 matrices in problem 3.12, say which are balanced.
Is the property of being balanced closed under multiplication?
3.15. Show that the rotation matrix by an angle θ (in the real plane),
R_x(θ) = [  cos(θ/2)  sin(θ/2) ]
         [ −sin(θ/2)  cos(θ/2) ]
is unitary. Also, what does R_x² represent?
3.16. Show that for every 2 × 2 unitary matrix U, there are real numbers
θ, α, β, δ such that
U = e^(iδ) T_α R_θ T_β.
Thus, every 2 × 2 unitary operation can be decomposed into a rotation flanked
by two twists, multiplied by an arbitrary phase shift by δ. Write out the decom-
position for the matrix V in problem 3.9. (It doesn’t matter which definition of
“T_α” you use from problem 3.10.)
3.17. Show how to write V as a composition of Hadamard and T matrices.
(All of these problems point out the special nature of the T-matrix and its
close relative, V.) Either of these matrices is often called the “π/8 gate”; the
confusing difference from π/4 owes to the constant c in problem 3.10.
3.18. Show that the four Pauli matrices, I, X, Y, and Z, form an orthonormal
basis for the space of 2 × 2 matrices, regarded as a 4-dimensional complex
Hilbert space.
3.19. Define G to be the complete undirected graph on 4 vertices, whose 6
edges connect every pair (i,j) with i ≠ j. Convert its adjacency matrix A_G into
a unitary matrix by making some entries negative and multiplying by an appro-
priate constant. Can you do this in a way that preserves the symmetry of the
matrix?
3.20. Show that a simple undirected graph G has a triangle if and only if
A² ∘ A has a nonzero entry, where A = A_G and ∘ here means multiplying entry-
wise. This means that triangle-detection in an n-vertex graph has complexity no
higher than that of squaring a matrix, which is equivalent to that of multiplying
two n × n matrices.
Surprisingly, the obvious matrix multiplication algorithm and its O(n3) run-
ning time are far from optimal, and the current best exponent on the “n” for
matrix multiplication is about 2.372. This certainly improves on the O(n3) run-
ning time of the simple algorithm that tries every set of three vertices to see if
it forms a triangle.
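A sketch of the criterion in problem 3.20, in Python (our own code, with entrywise multiplication playing the role of ∘):

```python
# Triangle test: G has a triangle iff (A^2 entrywise-times A) has a
# nonzero entry. Tried on the 3-cycle (has a triangle) and on C4
# (bipartite, hence triangle-free).

def has_triangle(A):
    n = len(A)
    A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    # A2[i][j] * A[i][j] > 0 means: i,j adjacent AND a length-2 path i->j,
    # which closes a triangle.
    return any(A2[i][j] * A[i][j] > 0 for i in range(n) for j in range(n))

triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]          # the 3-cycle
square = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]         # C4

assert has_triangle(triangle)
assert not has_triangle(square)
```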
3.7 Summary and Notes
There are many good linear algebra texts and online materials. If this material
is new or you took a class on it once and need a refresher, here are some
suggested places to go:
• The textbook Elementary Linear Algebra by Kuttler (2012).
• Linear algebra video lectures by Gilbert Strang which are maintained
at MITOPENCOURSEWARE: http://ocw.mit.edu/courses/mathematics/18-
06-linear-algebra-spring-2010/video-lectures/
• The textbook Graph Algorithms in the Language of Linear Algebra by Kep-
ner and Gilbert (2011).
The famous text by Nielsen and Chuang (2000) includes a full treatment of
linear algebra needed for quantum computation. Feynman (1982, 1985) wrote
the first two classic papers on quantum computation.
4 Boolean Functions, Quantum Bits, and Feasibility
A Boolean function f is a mapping from {0,1}^n to {0,1}^m, for some numbers
n and m. When we define a Boolean function f(x_1,...,x_n) = (y_1,...,y_m), we
think of the x_i as inputs and the y_j as outputs. We also regard the x_i together
as a binary string x and similarly write y for the output. When m = 1, there is
some ambiguity between the output as a string or a single bit because we write
just “y” not “(y)” in the latter case as well, but the difference does not matter
in context. When m = 1, you can also think of f as a predicate: x satisfies the
predicate if and only if f(x) = 1.
Thus, Boolean functions give us all of the following: the basic truth values,
binary strings, and, as seen in chapter 2, also numbers and other objects. The
most basic have n = 1 or 2, such as the unary NOT function, and binary AND,
OR, and XOR. We can also regard the following higher-arity versions as basic:
• AND: This is the function f(x_1,...,x_n) defined as 1 if and only if every
argument is 1. Thus,
f(1,1,1) = 1 and f(1,0,1,1) = 0.
• OR: This is the function f(x_1,...,x_n) defined as 1 if the number of 1’s in
x_1,...,x_n is nonzero. Thus,
f(0,1,1) = 1 and f(0,0,0,0) = 0.
• XOR: This is the function f(x_1,...,x_n) defined as 1 if the number of 1’s in
x_1,...,x_n is odd. Thus,
f(0,1,1) = 0 and f(1,1,1,1,1) = 1.
The latter is true because there are five 1’s.
The binary operations can also be applied on pairs of strings bitwise. For
instance, if x and y are both Boolean strings of length n, then x ⊕ y is equal to
z = (x1 ⊕ y1,...,xn ⊕ yn).
We could similarly define the bitwise-AND and the bitwise-OR of two equal-
length binary strings. These are not the same as the above n-ary operations but
are instead n applications of binary operations. Each operation connects the i-th
bit of x with the i-th bit of y, for each i, and they intuitively run “in parallel.”
The Boolean inner product, which we defined in chapter 2, is computed by
feeding the bitwise binary AND into the n-ary XOR, that is:
x • y = XOR(x1 ∧ y1,...,xn ∧ yn).
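As a concrete check, the n-ary operations and the bitwise/inner-product constructions above can be sketched in Python; the function names here are ours, for illustration only:

```python
def xor_n(bits):
    """n-ary XOR: 1 iff the number of 1's among the arguments is odd."""
    return sum(bits) % 2

def bitwise_xor(x, y):
    """Bitwise XOR of equal-length bit tuples: n binary XORs run 'in parallel'."""
    return tuple(a ^ b for a, b in zip(x, y))

def inner_product(x, y):
    """Boolean inner product: the bitwise AND fed into the n-ary XOR."""
    return xor_n([a & b for a, b in zip(x, y)])
```

For instance, inner_product((1,0,1), (1,1,1)) computes XOR(1, 0, 1) = 0.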
In cases like x ⊕ y, we say we have a circuit of n Boolean gates that collec-
tively compute the Boolean function f : {0,1}^r → {0,1}^n, with r = 2n, defined
by f(x,y) = x ⊕ y. Technically, we need to specify whether x and y are given
sequentially as (x1,...,xn,y1,...,yn) or shuffled as (x1,y1,...,xn,yn), and this
matters to how we would draw the circuit as a picture. But either way we have
a 2n-input function that represents the same function of two n-bit strings.
The number of gates is identified with the amount of work or effort expended
by the circuit, and this in turn is regarded as the sequential time for the circuit to
execute. It does not matter too much whether one counts gates or wires between
gates. What is critical is that only basic operations can be used, and that they
can only apply to previously computed values. In the following sketch, the
NOT of a ∨ b is allowed because a ∨ b has already been computed:
...(a ∨ b)... ...¬(a ∨ b)...
Two Boolean functions that we should not regard as basic are:
• PRIME: This is the function f(x1,...,xn) defined as 1 if the Boolean string
x = x1,...,xn represents a number that is a prime number. Recall a prime
number is a natural number p greater than 1 with only 1 and p as divisors.
• FACTOR: This is the function f(x1,...,xn,w1,...,wn) regarded as having
two integers x and w as arguments—note that we can pad w as well as x by
leading 0’s. It returns 1 if and only if x has no divisor greater than w, aside
from x itself.
The game is, how efficiently can we build a circuit to compute these func-
tions? They are related by PRIME(x) = FACTOR(x,1) for all x. This implies
that a circuit for FACTOR immediately gives one for solving the predicate
PRIME because one can simply fix the “w” inputs to be the padded version of
the number 1. This does not imply the converse relation, however. Although
both of these functions have been studied for 3,000 years, PRIME was shown
only a dozen years ago to be feasible in a sense we describe next, while many
believe that FACTOR is not feasible at all. Unless you are allowed a quantum
circuit, that is.
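Brute-force implementations make the relationship concrete. These definitions follow the text's (reading PRIME on inputs x > 1), and the exponential-time trial-division search is exactly what a small circuit would have to avoid:

```python
def FACTOR(x, w):
    """1 iff x has no divisor greater than w, aside from x itself."""
    return 0 if any(x % d == 0 for d in range(w + 1, x)) else 1

def PRIME(x):
    """PRIME(x) = FACTOR(x, 1): no divisor above 1 except x itself (take x > 1)."""
    return FACTOR(x, 1)
```

Fixing w = 1 is the padding trick described above; note that trial division takes time exponential in the bit-length n of x, so this code says nothing about feasibility.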
4.1 Feasible Boolean Functions
Not all Boolean functions are created equal; some are more complex than oth-
ers. In the above examples, which n-bit function would you like to compute if
you had to? I think we all will agree that the easiest is the OR function: just
glance at the input bits and check whether one is a 1. If you only see 0’s, then
clearly the OR function is 0. AND is similar.
Next harder it would seem is the XOR function. The intuitive reason is that
now you have to count the number of bits, and this count has to be exact. If
there are 45 inputs that are 1 and you miscount and think there are 44, then
you will get the wrong value for the function. Indeed, one can argue that n-ary
XOR is harder than the bitwise-XOR function because each of the n binary
XOR operations is “local” on its own pair of bits.
More difficult is the PRIME function. There is no known algorithm that we
can use and just glance at the bits. Is
101010110110101101011111
a prime number or not? Of course you first might convert it to a decimal
number: it represents the number 11234143. This still requires some work to
see if it has any nontrivial divisors, but it does: 23 and 488441.
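The conversion and the claimed factorization are easy to verify mechanically; a quick sketch:

```python
n = 11234143                     # the number discussed in the text

# its binary representation
print(format(n, "b"))            # 101010110110101101011111

# smallest nontrivial divisor, found by trial division up to sqrt(n)
d = next(d for d in range(2, int(n ** 0.5) + 1) if n % d == 0)
print(d, n // d)                 # 23 488441
```

Of course this search is exponential in the bit-length of n, which is the whole point of the feasibility discussion that follows.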
One of the achievements of computer science is that we can define the clas-
sical complexity of a Boolean function. Thus, AND and XOR are computable
in a linear number of steps, that is, O(n). Known circuits for PRIME take
more than linearly many steps, but the time is still polynomial, that is, n^O(1).
But there are also Boolean functions that require time exponential in n. Many
people believe that FACTOR is one of them, but nobody knows for sure.
To see the issue, consider that any Boolean function can be defined by its
truth table. Here is the truth table for the exclusive-or function XOR:
x y x ⊕ y
0 0 0
0 1 1
1 0 1
1 1 0
Each row of the table tells you what the function, in this case, the exclusive-
or function, does on the inputs of that row. In general, a Boolean function
f(x1,...,xn) is defined by a truth table that has 2^n rows—one for each possible
input. Thus, if n = 3, there are eight possible rows:
000,001,010,011,100,101,110, and 111.
The difficulty is that as the number of inputs grows, the truth table increases
exponentially in size.
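The truth-table representation and its exponential growth are easy to express in code; this small enumerator is a sketch, with itertools supplying the 2^n rows in lexicographic order:

```python
from itertools import product

def truth_table(f, n):
    """All 2**n rows (input tuple, output) of an n-input Boolean function."""
    return [(x, f(x)) for x in product((0, 1), repeat=n)]

xor2 = lambda x: x[0] ^ x[1]
for row in truth_table(xor2, 2):
    print(row)        # ((0, 0), 0) through ((1, 1), 0), matching the table above
```

Doubling n squares the number of rows, which is why tables with thirty inputs are large and tables with over 100 inputs cannot be written down.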
Thus, representing Boolean functions by their truth tables is always possible,
but is not always feasible. The tables will be large when there are thirty inputs,
and when there are over 100, the table would be impossible to write down.
The final technical concept we need is having not just a single Boolean func-
tion but rather a family [fn] of Boolean functions, each fn taking n inputs,
that are conceptually related. That is, the [fn] constitute a single function f
on strings of all lengths, so we write f : {0,1}∗ → {0,1}, or for general rather
than one-bit outputs, f : {0,1}∗ → {0,1}∗. Maybe it is confusing to write “f”
also for this kind of function with an infinite domain, but the intent is usually
transparent—as when letting AND, OR, and XOR above apply to any n. Now
we can finally define “feasible”:
DEFINITION 4.1 A Boolean function f = [fn] is feasible provided the indi-
vidual fn are computed by circuits of size n^O(1).
4.2 An Example
Consider the Boolean function MAJ(x1,x2,x3,x4,x5), which takes the major-
ity of five Boolean inputs. A first idea is to compute it using applications
of OR and AND as follows: For every three-element subset S = {i,j,k} of
{1,2,3,4,5}, we compute yS = OR(xi,xj,xk). Define y to be the AND of the
ten values yS. Then y = 1 ⇐⇒ no more than 2 bits of x1,...,x5 are 0
⇐⇒ MAJ(x1,x2,x3,x4,x5) is true. The complexity is counted as 11 operations
and, importantly, 40 total arguments of those operations: 3 for each of the
ten ORs plus 10 for the AND.
On second thought, we can find a program of slightly lower complexity.
Consider the Boolean circuit diagram in figure 4.1.
Expressed as a sequence of operations, in one of many possible orders, the
circuit is equivalent to the following straight-line program:
v1 = OR(x1,x2,x3), v2 = OR(x4,x5),
w1 = AND(x1,x2), w2 = AND(x1,x3), w3 = AND(x2,x3),
w4 = AND(w1,w2), w5 = AND(v1,x4,x5),
u = OR(w1,w2,w3), t = AND(u,v2), y = OR(w4,t,w5).
This program has 10 operations and only 24 applications to arguments. To see
that it is correct, note that w4 is true if and only if x1 = x2 = x3 = 1, and w5 is
true if and only if x4 = x5 = 1 and one of x1,x2,x3 is 1. Finally, t is true if and
Figure 4.1
Monotone circuit computing MAJ(x1,x2,x3,x4,x5). [The diagram wires the
inputs x1,...,x5 through the gates v1, v2, w1,...,w5, u, and t to the output y,
as in the straight-line program above.]
only if two of x1,x2,x3 and one of x4,x5 are true, which handles the remaining
six true cases.
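The straight-line program can also be checked exhaustively against the definition of majority over all 32 inputs; a sketch (the gate helpers and names are ours):

```python
from itertools import product

def OR(*a):  return int(any(a))
def AND(*a): return int(all(a))

def maj5(x1, x2, x3, x4, x5):
    """The 10-operation straight-line program from the text."""
    v1 = OR(x1, x2, x3); v2 = OR(x4, x5)
    w1 = AND(x1, x2); w2 = AND(x1, x3); w3 = AND(x2, x3)
    w4 = AND(w1, w2); w5 = AND(v1, x4, x5)
    u = OR(w1, w2, w3); t = AND(u, v2)
    return OR(w4, t, w5)

# exhaustive check: agrees with "at least 3 of the 5 bits are 1" on all inputs
assert all(maj5(*x) == int(sum(x) >= 3) for x in product((0, 1), repeat=5))
```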
Now clearly MAJ generalizes into a Boolean function MAJ of strings x of
any length n, such that MAJ(x) returns 1 if more than n/2 of the bits of x are
1. We ask the important question:
Is MAJ feasible?
The operational question about MAJ(x1,x2,x3,x4,x5) is, do the above pro-
grams scale when “5” is replaced by “n”? Technically, scalable is the same
idea as feasible but with the idea mentioned in chapter 2 that a polynomial
bound is the same as having a linear bound each time the size of the input
doubles.
The first idea, when generalized from “5” to “n,” says to take the AND
of every r-sized subset of [n], where r = n/2 + 1, and feed that to an OR.
However, there are (n choose r) such subsets, which is exponential when r ∼ n/2. So
the first idea definitely does not scale.
The trouble with the second, shorter program is its being rather ad hoc for
n = 5. The question of whether there are programs like it with only AND and
OR gates that scale for all n is a famous historical problem in complexity the-
ory. The answer is known to be yes, but no convenient recipe for constructing
the programs for each n is known, and their size O(n^5.3) is comparatively high.
However, if we think in terms of numbers, we can build circuits that easily
scale. Take k = ⌈log2(n + 1)⌉. We can hold the total count of 1's in an x of
length n in a k-bit register. So let us mentally draw x going down rather than
across and draw to its right an n × k grid, whose rows will represent successive
values of this register. It is nicest to put the least significant bit leftmost, i.e.,
closest to x, but this is not critical. The row to the right of x1 has value 0 (that
is, 0^k as a Boolean string) if x1 = 0 and value 1 (that is, 10^(k−1)) if x1 = 1. As
we scan x downward, if xi = 0, then the row of k bits has the same value as the
previous one, but if xi = 1, then the register is incremented by 1. We might not
regard the increment operation as basic—instead, we might use extra “helper
bits” and gates to compute binary addition with carries. But we can certainly
tell that we will get a Boolean circuit of size O(kn) = O(n log n), which is
certainly feasible. At the end, we need only compare the final value v with
n/2, and this is also feasible.
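The counting construction can be simulated directly: scan the bits, ripple-increment a k-bit register, and compare the final count with n/2. A sketch, assuming as above that k = ⌈log2(n+1)⌉ bits suffice for counts 0 through n:

```python
def maj_by_counting(x):
    """Majority via the n-by-k grid idea: a k-bit register, least significant
    bit first, is incremented for each 1 scanned; circuit size is O(kn)."""
    n = len(x)
    k = n.bit_length()          # equals ceil(log2(n+1)): enough for counts 0..n
    reg = [0] * k
    for xi in x:
        if xi == 1:             # increment with ripple carries (XOR/AND gates)
            carry = 1
            for i in range(k):
                reg[i], carry = reg[i] ^ carry, reg[i] & carry
    count = sum(b << i for i, b in enumerate(reg))
    return int(2 * count > n)   # "more than n/2 of the bits are 1"
```

Each register update uses only a constant number of basic gates per bit, which is what makes the whole circuit scale.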
This idea broadens to any computation that we can imagine doing with paper
and pencil and some kind of grid, such as multiplication, long division, and
other arithmetic. To avoid the scratchwork and carries impinging on our basic
grid, we can insist that they occupy h-many “helper rows” below the row with
xn, stretching wires down to those rows as needed. The final idea that helps
in progressing to quantum computation is instead of saying we have k(n + h)
bits, say that we have n + h bits that “evolve” going left to right. This also fits
the classical picture articulated by Turing. The cells directly below xn in the
column with the input can be regarded as a segment of “tape” for scratchwork.
The change to this tape at each step of a Turing Machine computation on
x, including changes to the scratchwork, can be recorded in an adjacent new
column. If the machine computation lasts t steps, then with s = n + h as the
measure of “space,” we have an s × t grid for the whole computation.
To finish touring machine and circuit models, we can next imagine that every
cell in this grid depends on its neighbors in the preceding column by a fixed
finite combination of basic Boolean operations. This gives us a circuit of size
O(st) = O(t^2) because we do at most one bit of scratchwork at each step. If the
machine time t(n) is polynomial, then so is t(n)^2.
This relation also goes in the opposite direction if you think of using a
machine to verify the computation by the circuit—provided the circuits are
uniform in a technical sense that captures the conceptual sense we applied
above to Boolean function families.
Hence, the criterion of feasible is broad and is the same for any of the classi-
cal models of computation by machines, programs, or (uniform) circuits. There
is a huge literature on which functions are feasible and which are not. One
can encode anything via Boolean strings, including circuits themselves. The
problem of whether a Boolean circuit can ever output 1—even when it allows
applying NOT only to the original arguments xi and then has just one level of
ternary OR gates feeding into a big AND—is not known to be feasible. That is,
no feasible function is known to give the correct answer for every encoding X
of such a program: this is called the satisfiability problem and has a property
called NP-hardness. We define this with regard to an equivalent problem about
solving equations in chapter 16. For now we are happy with not only defining
classical feasible computation in detail but also showing that equivalent cri-
teria are reached from different models. Now we are ready for the quantum
challenge to this standard.
4.3 Quantum Representation of Boolean Arguments
Let N = 2^n. Every coordinate in N-dimensional Hilbert space corresponds to
a binary string of length n. The standard encoding scheme assigns to each
index j ∈ {0,...,N − 1} the n-bit binary string that denotes j in binary nota-
tion, with leading 0's if necessary. This produces the standard lexicographic
ordering on strings. For instance, with n = 2 and N = 4, we show the indexing
applied to a permutation matrix:
00 01 10 11
00 1 0 0 0
01 0 1 0 0
10 0 0 0 1
11 0 0 1 0
53. 34 Chapter 4 Boolean Functions, Quantum Bits, and Feasibility
The mapping is f(00) = 00, f(01) = 01, f(10) = 11, f(11) = 10, and in gen-
eral f(x1,x2) = (x1,x1 ⊕ x2). Thus, the operator writes the XOR into the sec-
ond bit while leaving the first the same. One can also say that it negates the
second bit if-and-only-if the first bit is 1. This negation itself is represented on
one bit by a matrix we have seen before—now with the indexing scheme, it is:
X =
  0 1
0 0 1
1 1 0
Thus, the negation is controlled by the first bit, which explains the name
“Controlled-NOT” (CNOT) for the whole 4 × 4 operation.
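Under the indexing scheme just described, the 4 × 4 CNOT matrix can be generated from f(x1, x2) = (x1, x1 ⊕ x2) rather than written down; a sketch:

```python
def cnot_matrix():
    """Permutation matrix for f(x1, x2) = (x1, x1 XOR x2): the lone 1 in
    row x1x2 sits in column x1(x1 XOR x2); rows indexed 00, 01, 10, 11."""
    P = [[0] * 4 for _ in range(4)]
    for row in range(4):
        x1, x2 = row >> 1, row & 1           # decode the row index into bits
        col = (x1 << 1) | (x1 ^ x2)          # f flips the second bit iff x1 = 1
        P[row][col] = 1
    return P

assert cnot_matrix() == [[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]]       # the matrix displayed above
```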
To get a general Boolean function y = f(x1,...,xn), we need n + 1 Boolean
coordinates, which entails 2N = 2^(n+1) matrix coordinates. What we really com-
pute is the function
F(x1,...,xn,z) = (x1,...,xn,z ⊕ f(x1,...,xn)).
Formally, F is a Boolean function with outputs in {0,1}n+1 rather than just
{0,1}. Its first virtue, which is necessary to the underlying quantum physics, is
that it is invertible—in fact, F is its own inverse:
F(F(x1,...,xn,z)) = F(x1,...,xn,z ⊕ y)
= (x1,...,xn,(z ⊕ y) ⊕ y) = (x1,...,xn,z).
Its second virtue is having a 2N × 2N permutation matrix Pf that is easy to
describe: the lone 1 in each row x1x2 ···xnz is in column x1x2 ···xnb, where
b = z ⊕ f(x1,...,xn).
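The rule "the lone 1 in row x1···xnz is in column x1···xnb, where b = z ⊕ f(x)" translates directly into code. This sketch builds Pf for a single-output f given as a function on n-bit integers (our encoding), and checks the self-inverse property by squaring the matrix:

```python
def perm_matrix_for(f, n):
    """2^(n+1) x 2^(n+1) permutation matrix P_f for F(x, z) = (x, z XOR f(x)),
    indices read as the n bits of x followed by z."""
    size = 2 ** (n + 1)
    P = [[0] * size for _ in range(size)]
    for row in range(size):
        x, z = row >> 1, row & 1
        col = (x << 1) | (z ^ f(x))
        P[row][col] = 1
    return P

AND2 = lambda x: int(x == 3)       # AND of two bits, x encoded as an integer
P = perm_matrix_for(AND2, 2)

# F is its own inverse, so P times P is the identity
size = len(P)
PP = [[sum(P[i][k] * P[k][j] for k in range(size)) for j in range(size)]
      for i in range(size)]
assert PP == [[int(i == j) for j in range(size)] for i in range(size)]
```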
If f is a Boolean function with m outputs (y1,...,ym) rather than a single
bit, then we have the same idea with
F(x1,...,xn,z1,...,zm) = (x1,...,xn,z1 ⊕ y1,...,zm ⊕ ym)
instead. The matrix Pf is still a permutation matrix, although of even larger
dimensions 2^(n+m) × 2^(n+m). Often left unsaid is what happens if we need h-many
“helper bits” to compute the original f. The simple answer is that we can treat
them all as extra outputs of the function, allocating extra zj variables as dummy
inputs so that the ⊕ trick preserves invertibility. Because h is generally poly-
nomial in n, this does not upset feasibility.
In this scheme, adopting the “rows-and-columns” picture we gave for classi-
cal computation above, everything is laid out in n + m + h rows (a total that
we again call n), with the
input x laid out in the first column. Each row is said to represent a qubit, which
is short for quantum bit. In order to distinguish the row from the idea of a qubit
as a physically observable object, we often prefer to say qubit line for the row
itself in the circuit. The h-many helper rows even have their own fancy name
as ancilla qubits, using the Latin word for “chambermaid” or, more simply,
helper.
Writing out a big 2^n × 2^n matrix, just for a permutation, is of course not
feasible. This is a chief reason we prefer to think of operators Pf as pieces
of code. The qubit lines are really coordinates of binary strings that represent
indices to these programs. These strings have size n, and their own indices
1,...,n are what we call quantum coordinates, when trying to be more care-
ful than saying “qubits.” As long as we confine ourselves to linear algebra
operations that are efficiently expressible via these n quantum indices, we can
hope to keep things feasible. The rest of the game with quantum computation
is, which operations are feasible?
4.4 Quantum Feasibility
A quantum algorithm applies a series of unitary matrices to its start vector. Can
we apply any unitary matrix we wish? The answer is no, of course not. If the
quantum algorithms are to be efficient, then there must be a restriction on the
matrices allowed.
If we look at the matrices Pf in section 4.3, we see several issues. First, the
design of Pf seems to take no heed of the complexity of the Boolean func-
tion f but merely creates a permutation out of its exponential-sized truth table.
Because infeasible (families of) Boolean functions exist, there is no way this
alone could scale. Second, even for simple functions like AND(x1,x2,...,xn),
the matrix still has to be huge—even larger than 2^n on a side. How do we
distinguish “basic feasible operations”? Third, what do we use for variables?
If we have a 2^n-sized vector, do we need exponentially many variables?
The answer is to note that if we keep the number k of arguments for any
operation to a constant, then 2^k stays constant. We can therefore use 2^k × 2^k
matrices that apply to just a few arguments. But what are the arguments? They
are not the same as the Hilbert space coordinates 0,...,N − 1, which would
involve us in exponentially many. The quantum coordinates start off being
labeled x1,x2,...,xn as for Boolean input strings and extend to places for out-
puts and for ancillae, which is the plural of ancilla.
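To see why keeping k constant rescues feasibility, here is a sketch of a classical simulation that applies a 2^k × 2^k gate to k chosen qubit lines of an n-qubit state vector without ever forming a 2^n × 2^n matrix. The names and conventions are ours; we take line 0 to be the most significant bit of the index:

```python
def apply_gate(state, gate, lines, n):
    """Apply a 2^k x 2^k matrix `gate` to the qubit lines in `lines`
    (0-based, line 0 = most significant index bit) of a 2^n-entry vector.
    Work is 2^n * 2^k per gate: the 2^k factor stays constant when k does."""
    k = len(lines)
    shifts = [n - 1 - q for q in lines]       # bit position of each line
    out = [0] * (1 << n)
    for idx in range(1 << n):
        # the row label: bits of idx on the chosen lines, in line order
        r = 0
        for s in shifts:
            r = (r << 1) | ((idx >> s) & 1)
        # matrix-vector product restricted to those k coordinates
        for c in range(1 << k):
            src = idx
            for pos, s in enumerate(shifts):
                bit = (c >> (k - 1 - pos)) & 1
                src = (src & ~(1 << s)) | (bit << s)
            out[idx] += gate[r][c] * state[src]
    return out

X = [[0, 1], [1, 0]]                          # the NOT matrix from above
assert apply_gate([1, 0, 0, 0], X, [0], 2) == [0, 0, 1, 0]   # 00 -> 10
```

This is only a classical simulation (the vector itself is exponentially long); the point it illustrates is that each basic operation touches only k of the n quantum coordinates.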
68. *** END OF THE PROJECT GUTENBERG EBOOK THE WOMAN AND
THE CAR ***
Updated editions will replace the previous one—the old editions
will be renamed.
Creating the works from print editions not protected by U.S.
copyright law means that no one owns a United States
copyright in these works, so the Foundation (and you!) can copy
and distribute it in the United States without permission and
without paying copyright royalties. Special rules, set forth in the
General Terms of Use part of this license, apply to copying and
distributing Project Gutenberg™ electronic works to protect the
PROJECT GUTENBERG™ concept and trademark. Project
Gutenberg is a registered trademark, and may not be used if
you charge for an eBook, except by following the terms of the
trademark license, including paying royalties for use of the
Project Gutenberg trademark. If you do not charge anything for
copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such
as creation of derivative works, reports, performances and
research. Project Gutenberg eBooks may be modified and
printed and given away—you may do practically ANYTHING in
the United States with eBooks not protected by U.S. copyright
law. Redistribution is subject to the trademark license, especially
commercial redistribution.
START: FULL LICENSE
70. PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK
To protect the Project Gutenberg™ mission of promoting the
free distribution of electronic works, by using or distributing this
work (or any other work associated in any way with the phrase
“Project Gutenberg”), you agree to comply with all the terms of
the Full Project Gutenberg™ License available with this file or
online at www.gutenberg.org/license.
Section 1. General Terms of Use and
Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand,
agree to and accept all the terms of this license and intellectual
property (trademark/copyright) agreement. If you do not agree
to abide by all the terms of this agreement, you must cease
using and return or destroy all copies of Project Gutenberg™
electronic works in your possession. If you paid a fee for
obtaining a copy of or access to a Project Gutenberg™
electronic work and you do not agree to be bound by the terms
of this agreement, you may obtain a refund from the person or
entity to whom you paid the fee as set forth in paragraph 1.E.8.
1.B. “Project Gutenberg” is a registered trademark. It may only
be used on or associated in any way with an electronic work by
people who agree to be bound by the terms of this agreement.
There are a few things that you can do with most Project
Gutenberg™ electronic works even without complying with the
full terms of this agreement. See paragraph 1.C below. There
are a lot of things you can do with Project Gutenberg™
electronic works if you follow the terms of this agreement and
help preserve free future access to Project Gutenberg™
electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright
law in the United States and you are located in the United
States, we do not claim a right to prevent you from copying,
distributing, performing, displaying or creating derivative works
based on the work as long as all references to Project
Gutenberg are removed. Of course, we hope that you will
support the Project Gutenberg™ mission of promoting free
access to electronic works by freely sharing Project Gutenberg™
works in compliance with the terms of this agreement for
keeping the Project Gutenberg™ name associated with the
work. You can easily comply with the terms of this agreement
by keeping this work in the same format with its attached full
Project Gutenberg™ License when you share it without charge
with others.
1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside
the United States, check the laws of your country in addition to
the terms of this agreement before downloading, copying,
displaying, performing, distributing or creating derivative works
based on this work or any other Project Gutenberg™ work. The
Foundation makes no representations concerning the copyright
status of any work in any country other than the United States.
1.E. Unless you have removed all references to Project
Gutenberg:
1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project
Gutenberg™ work (any work on which the phrase “Project
Gutenberg” appears, or with which the phrase “Project
Gutenberg” is associated) is accessed, displayed, performed,
viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and
with almost no restrictions whatsoever. You may copy it,
give it away or re-use it under the terms of the Project
Gutenberg License included with this eBook or online at
www.gutenberg.org. If you are not located in the United
States, you will have to check the laws of the country
where you are located before using this eBook.
1.E.2. If an individual Project Gutenberg™ electronic work is
derived from texts not protected by U.S. copyright law (does not
contain a notice indicating that it is posted with permission of
the copyright holder), the work can be copied and distributed to
anyone in the United States without paying any fees or charges.
If you are redistributing or providing access to a work with the
phrase “Project Gutenberg” associated with or appearing on the
work, you must comply either with the requirements of
paragraphs 1.E.1 through 1.E.7 or obtain permission for the use
of the work and the Project Gutenberg™ trademark as set forth
in paragraphs 1.E.8 or 1.E.9.
1.E.3. If an individual Project Gutenberg™ electronic work is
posted with the permission of the copyright holder, your use and
distribution must comply with both paragraphs 1.E.1 through
1.E.7 and any additional terms imposed by the copyright holder.
Additional terms will be linked to the Project Gutenberg™
License for all works posted with the permission of the copyright
holder found at the beginning of this work.
1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files
containing a part of this work or any other work associated with
Project Gutenberg™.
1.E.5. Do not copy, display, perform, distribute or redistribute
this electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the
Project Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if
you provide access to or distribute copies of a Project
Gutenberg™ work in a format other than “Plain Vanilla ASCII” or
other format used in the official version posted on the official
Project Gutenberg™ website (www.gutenberg.org), you must,
at no additional cost, fee or expense to the user, provide a copy,
a means of exporting a copy, or a means of obtaining a copy
upon request, of the work in its original “Plain Vanilla ASCII” or
other form. Any alternate format must include the full Project
Gutenberg™ License as specified in paragraph 1.E.1.
1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™
works unless you comply with paragraph 1.E.8 or 1.E.9.
1.E.8. You may charge a reasonable fee for copies of or
providing access to or distributing Project Gutenberg™
electronic works provided that:
• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”
• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.
• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.
• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.
1.E.9. If you wish to charge a fee or distribute a Project
Gutenberg™ electronic work or group of works on different
terms than are set forth in this agreement, you must obtain
permission in writing from the Project Gutenberg Literary
Archive Foundation, the manager of the Project Gutenberg™
trademark. Contact the Foundation as set forth in Section 3
below.
1.F.
1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on,
transcribe and proofread works not protected by U.S. copyright
law in creating the Project Gutenberg™ collection. Despite these
efforts, Project Gutenberg™ electronic works, and the medium
on which they may be stored, may contain “Defects,” such as,
but not limited to, incomplete, inaccurate or corrupt data,
transcription errors, a copyright or other intellectual property
infringement, a defective or damaged disk or other medium, a
computer virus, or computer codes that damage or cannot be
read by your equipment.
1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except
for the “Right of Replacement or Refund” described in
paragraph 1.F.3, the Project Gutenberg Literary Archive
Foundation, the owner of the Project Gutenberg™ trademark,
and any other party distributing a Project Gutenberg™ electronic
work under this agreement, disclaim all liability to you for
damages, costs and expenses, including legal fees. YOU AGREE
THAT YOU HAVE NO REMEDIES FOR NEGLIGENCE, STRICT
LIABILITY, BREACH OF WARRANTY OR BREACH OF CONTRACT
EXCEPT THOSE PROVIDED IN PARAGRAPH 1.F.3. YOU AGREE
THAT THE FOUNDATION, THE TRADEMARK OWNER, AND ANY
DISTRIBUTOR UNDER THIS AGREEMENT WILL NOT BE LIABLE
TO YOU FOR ACTUAL, DIRECT, INDIRECT, CONSEQUENTIAL,
PUNITIVE OR INCIDENTAL DAMAGES EVEN IF YOU GIVE
NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.
1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of
receiving it, you can receive a refund of the money (if any) you
paid for it by sending a written explanation to the person you
received the work from. If you received the work on a physical
medium, you must return the medium with your written
explanation. The person or entity that provided you with the
defective work may elect to provide a replacement copy in lieu
of a refund. If you received the work electronically, the person
or entity providing it to you may choose to give you a second
opportunity to receive the work electronically in lieu of a refund.
If the second copy is also defective, you may demand a refund
in writing without further opportunities to fix the problem.
1.F.4. Except for the limited right of replacement or refund set
forth in paragraph 1.F.3, this work is provided to you ‘AS-IS’,
WITH NO OTHER WARRANTIES OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of
damages. If any disclaimer or limitation set forth in this
agreement violates the law of the state applicable to this
agreement, the agreement shall be interpreted to make the
maximum disclaimer or limitation permitted by the applicable
state law. The invalidity or unenforceability of any provision of
this agreement shall not void the remaining provisions.
1.F.6. INDEMNITY - You agree to indemnify and hold the
Foundation, the trademark owner, any agent or employee of the
Foundation, anyone providing copies of Project Gutenberg™
electronic works in accordance with this agreement, and any
volunteers associated with the production, promotion and
distribution of Project Gutenberg™ electronic works, harmless
from all liability, costs and expenses, including legal fees, that
arise directly or indirectly from any of the following which you
do or cause to occur: (a) distribution of this or any Project
Gutenberg™ work, (b) alteration, modification, or additions or
deletions to any Project Gutenberg™ work, and (c) any Defect
you cause.
Section 2. Information about the Mission
of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new
computers. It exists because of the efforts of hundreds of
volunteers and donations from people in all walks of life.
Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project
Gutenberg™’s goals and ensuring that the Project Gutenberg™
collection will remain freely available for generations to come. In
2001, the Project Gutenberg Literary Archive Foundation was
created to provide a secure and permanent future for Project
Gutenberg™ and future generations. To learn more about the
Project Gutenberg Literary Archive Foundation and how your
efforts and donations can help, see Sections 3 and 4 and the
Foundation information page at www.gutenberg.org.
Section 3. Information about the Project
Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-
profit 501(c)(3) educational corporation organized under the
laws of the state of Mississippi and granted tax exempt status
by the Internal Revenue Service. The Foundation’s EIN or
federal tax identification number is 64-6221541. Contributions
to the Project Gutenberg Literary Archive Foundation are tax
deductible to the full extent permitted by U.S. federal laws and
your state’s laws.
The Foundation’s business office is located at 809 North 1500
West, Salt Lake City, UT 84116, (801) 596-1887. Email contact
links and up to date contact information can be found at the
Foundation’s website and official page at
www.gutenberg.org/contact
Section 4. Information about Donations to
the Project Gutenberg Literary Archive
Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission
of increasing the number of public domain and licensed works
that can be freely distributed in machine-readable form
accessible by the widest array of equipment including outdated
equipment. Many small donations ($1 to $5,000) are particularly
important to maintaining tax exempt status with the IRS.
The Foundation is committed to complying with the laws
regulating charities and charitable donations in all 50 states of
the United States. Compliance requirements are not uniform
and it takes a considerable effort, much paperwork and many
fees to meet and keep up with these requirements. We do not
solicit donations in locations where we have not received written
confirmation of compliance. To SEND DONATIONS or determine
the status of compliance for any particular state visit
www.gutenberg.org/donate.
While we cannot and do not solicit contributions from states
where we have not met the solicitation requirements, we know
of no prohibition against accepting unsolicited donations from
donors in such states who approach us with offers to donate.
International donations are gratefully accepted, but we cannot
make any statements concerning tax treatment of donations
received from outside the United States. U.S. laws alone swamp
our small staff.
Please check the Project Gutenberg web pages for current
donation methods and addresses. Donations are accepted in a
number of other ways including checks, online payments and
credit card donations. To donate, please visit:
www.gutenberg.org/donate.
Section 5. General Information About
Project Gutenberg™ electronic works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could
be freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose
network of volunteer support.
Project Gutenberg™ eBooks are often created from several
printed editions, all of which are confirmed as not protected by
copyright in the U.S. unless a copyright notice is included. Thus,
we do not necessarily keep eBooks in compliance with any
particular paper edition.
Most people start at our website which has the main PG search
facility: www.gutenberg.org.
This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg
Literary Archive Foundation, how to help produce our new
eBooks, and how to subscribe to our email newsletter to hear
about new eBooks.