1. UNIT III
Quantum computing and machine learning: Clustering Structure and Quantum
Computing - Quantum Pattern Recognition - Quantum Classification - Boosting and
Adiabatic Quantum Computing.
2. Clustering Structure and Quantum Computing
● Unsupervised Learning finds a natural fit with quantum computing.
● Quantum states are also becoming capable of self-analysis.
● In a QRAM, the address and output registers are composed of qubits.
● The address register contains a superposition of addresses, and the
output register will then contain a superposition of information, correlated with the address
register:
Σj αj |j>a → Σj αj |j>a |Dj>d
● QRAM uses a unique architecture that significantly reduces the requirements for a
memory call.
● Instead of needing N switches for N memory cells, QRAM requires only O(log₂ N)
switches. This makes the system more robust and reduces the power needed for
addressing.
3. ● A qutrit is a three-level quantum system. Let us label the three levels |wait>,
|left>, and |right>.
● At the start of each memory call, every qutrit is in the |wait> state.
● When an address qubit reaches a qutrit in |wait>, the qutrit is transformed into
|left> or |right> depending on the value of that qubit. A qutrit that is no longer in
|wait> simply routes the incoming qubit along the direction already set.
4. ● The result is a superposition of routes.
● Once the routes are thus carved out, a bus qubit is sent through to interact
with the memory cells at the end of the routes.
● Then it is sent back to write the result to the output register. Finally, a reverse
evolution on the states is performed to reset all of them to |wait>.
● The advantage of the bucket-brigade approach is the low number of qutrits
involved in the retrieval: in each route of the final superposition, only log N
qutrits are not in the |wait> state.
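The routing logic above can be sketched classically; the `route` helper and its path-string encoding are illustrative inventions, and a real bucket brigade would of course route a superposition of addresses rather than a single one:

```python
# Classical sketch of bucket-brigade routing for ONE address (illustrative;
# a real QRAM routes a superposition of addresses through qutrit switches).

def route(address_bits):
    """Carve a route through a binary tree of 'trits'.

    Each trit starts in 'wait'; the address bit arriving at a waiting trit
    sets it to 'left' (bit 0) or 'right' (bit 1).  Returns the trit states
    keyed by tree-node path prefix, plus the leaf (memory cell) reached.
    """
    trits = {}          # path prefix -> 'left' / 'right'; absent = 'wait'
    path = ""
    for bit in address_bits:
        trits[path] = "left" if bit == 0 else "right"
        path += str(bit)
    return trits, path

trits, leaf = route([1, 0, 1])      # address 101 -> memory cell 5
active = len(trits)                 # only log2(N) = 3 trits leave 'wait'
```

Note that for an 8-cell memory only three trits are activated on the route, matching the log N claim above.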
5. Calculating Dot Products
● The training instances are presented as quantum states |xi>.
● To reconstruct a state from the QRAM, we need to query the memory O(log
N) times.
● To evaluate the dot product of two training instances, we need to do the
following:
• Generate two states, |ψ> and |φ>, with an ancilla variable;
• Estimate the parameter Z = ||xi||² + ||xj||², the sum of the squared norms
of the two instances;
• Perform a projective measurement on the ancilla alone, comparing the two
states.
6. ● We calculate the dot product xi^T xj that appears in the linear kernel.
● The state |ψ> = 1/√2 (|0>|xi> + |1>|xj>) is easy to construct by querying the
QRAM.
● We estimate the other state |φ> = 1/√Z (||xi|| |0> − ||xj|| |1>), where
Z = ||xi||² + ||xj||².
7. ● By an appropriate choice of t (||xi||t, ||xj||t ≪ 1), and by measuring the ancilla
bit, we get the state |φ> with probability Z²t²/2, which in turn allows the
estimation of Z.
● If the desired accuracy of the estimation is ϵ, then the complexity of
constructing |φ> and Z is O(ϵ−1).
● Once we have |ψ> and |φ>, we perform a swap test on the ancilla alone.
● A swap test is a sequence of a Hadamard gate, a Fredkin (controlled-SWAP) gate,
and another Hadamard gate, which checks the equivalence of two states |f> and
|f′> using an ancilla state |a>.
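The swap test's measurement statistics can be checked numerically: measuring the ancilla gives P(0) = (1 + |<f|f′>|²)/2, so equal states always yield 0. The example states |f> = |0> and |f′> = |+> below are assumed for illustration:

```python
import numpy as np

# Numerical check of the swap test: Hadamard, controlled-SWAP (Fredkin),
# Hadamard on the ancilla gives P(ancilla = 0) = (1 + |<f|f'>|^2) / 2.
f = np.array([1.0, 0.0])                       # |f>  = |0>  (example state)
g = np.array([1.0, 1.0]) / np.sqrt(2)          # |f'> = |+>  (example state)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# Ancilla |0> through Hadamard, tensored with the two data registers:
state = np.kron(H @ np.array([1.0, 0.0]), np.kron(f, g))

# Controlled-SWAP of the two data qubits (ancilla is the top qubit):
CSWAP = np.eye(8)
CSWAP[[5, 6]] = CSWAP[[6, 5]]                  # swap |101> <-> |110>
state = np.kron(H, np.eye(4)) @ CSWAP @ state  # second Hadamard on ancilla

p0 = np.sum(np.abs(state[:4]) ** 2)            # (1 + |<f|f'>|^2)/2 = 0.75
```

Here |<f|f′>|² = 1/2, so the ancilla reads 0 with probability 0.75; for identical states the probability would be exactly 1.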
9. ● With this algebraic manipulation, we conclude that if |ψ> and |φ> are equal,
then the measurement at the end will always give us zero.
● With the QRAM accesses and the estimations, the overall complexity of
evaluating a single dot product xi^T xj is O(ϵ−1 log N). Calculating the kernel
matrix is straightforward.
● As the inner product in this formulation derives from the Euclidean distance,
the kernel matrix is also easily calculated with this distance function.
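A classical sketch of this construction (with small assumed example vectors) shows how projecting the ancilla of |ψ> onto |φ> recovers the Euclidean distance, and from it the linear-kernel dot product:

```python
import numpy as np

# Classical sketch of the distance / dot-product estimation;
# xi and xj are assumed example instances.
xi = np.array([1.0, 2.0, 2.0])
xj = np.array([2.0, 1.0, 2.0])

Z = xi @ xi + xj @ xj                    # Z = ||xi||^2 + ||xj||^2
ni, nj = np.linalg.norm(xi), np.linalg.norm(xj)

# |psi> = (|0>|xi/||xi||> + |1>|xj/||xj||>)/sqrt(2)   (ancilla + data)
psi = np.concatenate([xi / ni, xj / nj]) / np.sqrt(2)
# |phi> = (||xi|| |0> - ||xj|| |1>)/sqrt(Z)           (ancilla only)
phi = np.array([ni, -nj]) / np.sqrt(Z)

# Projecting the ancilla of |psi> onto |phi> leaves (xi - xj)/sqrt(2Z),
# so the squared overlap recovers the squared Euclidean distance:
proj = phi[0] * psi[:3] + phi[1] * psi[3:]
dist_sq = 2 * Z * (proj @ proj)          # ||xi - xj||^2 = 2
dot = 0.5 * (Z - dist_sq)                # xi . xj = 8
```

The last line is the polarization identity xi·xj = (Z − ||xi − xj||²)/2, which is how the distance estimate yields the kernel entry.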
10. Quantum Random Access Memory (QRAM)
● QRAM is a proposed quantum counterpart to classical Random
Access Memory (RAM).
● It aims to enable quantum computers to store and access large
datasets efficiently.
● QRAM can accelerate machine learning algorithms by enabling
efficient access to training data and performing quantum
operations on that data.
● This could lead to breakthroughs in areas such as image
recognition, natural language processing, and drug discovery.
11. ● qRAM works by using quantum superposition to perform
memory access.
● In order to access a superposition of the memory cells, the
address register, a, must contain a superposition of the addresses.
● The qRAM returns a superposition of data in a data register, d,
correlated to the address register.
● Let Dj be the content of the jth memory cell.
● This is shown in the formula below:
Σj αj |j>a → Σj αj |j>a |Dj>d
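The address–data correlation can be tabulated classically for a small memory; the four one-bit cell contents below are assumed toy data:

```python
import numpy as np

# Minimal sketch of the qRAM map for N = 4 one-bit memory cells:
# a uniform superposition of addresses becomes a joint state in which
# the data register is perfectly correlated with the address register.
D = [0, 1, 1, 0]                      # D_j = content of cell j (toy data)
N = len(D)

addr = np.ones(N) / np.sqrt(N)        # uniform superposition of addresses

# Amplitudes over the joint (address, data) basis |j>|d>:
joint = np.zeros((N, 2))
for j in range(N):
    joint[j, D[j]] = addr[j]          # only |j>|D_j> terms appear
```

Each address j carries amplitude only on |j>|Dj>, which is exactly the correlation the formula above expresses.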
12. qRAM Architectures
● Quantum Bus:
○ a signal that is sent back and forth in a binary tree that
represents the RAM.
○ The bus can be routed through every possible path of the
binary tree.
● Fanout:
○ In the Fanout scheme [9], the index register gives the
direction to reach the memory cell needed.
○ The index register is a binary string; each bit tells which
direction to take at a bifurcation of the tree. If a bit in
the string is a 0, then all the switches at that level will point up.
13. ● Bucket Brigade:
○ lowers the complexity to O(log N), exponentially decreasing
access complexity
○ Instead of encountering binary switches of “0” or “1” going
down a path to memory locations, trits are used.
○ These trits can take the values of “wait”, “left”, and “right”.
Initially, each trit is in the “wait” state.
14. Quantum Phase Estimation
● QPE is one of the essential algorithms in Quantum Computing.
● It is key to many quantum algorithms such as Shor’s algorithm.
● Consider a unitary operator U with eigenvector |u⟩ and eigenvalue e^(2πiθ), i.e. U|u⟩ = e^(2πiθ)|u⟩.
● The goal of QPE is to estimate the angle θ.
● To perform this algorithm, we assume that the state |u⟩ can be
prepared. QPE uses two registers.
15. ● The first register contains t qubits, where t naturally depends on
two things: the number of digits of accuracy we want in the estimation of
θ, and the probability with which we wish to succeed in
the phase estimation procedure.
● The second register is initialized in the state |u⟩ and contains
as many qubits as are needed to store |u⟩.
● The circuit is initialized by applying a Hadamard gate on all
qubits in the first register, followed by the application of
controlled-U operations on the second register, with U raised to
successive powers of two.
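For a phase that is exactly representable in t bits, this procedure can be simulated with plain linear algebra; the value θ = 5/8 below is an assumed example:

```python
import numpy as np

# Numerical sketch of quantum phase estimation for a single-qubit U with
# eigenvalue exp(2*pi*i*theta); theta = 5/8 = 0.101 in binary, so t = 3
# counting qubits recover it exactly.
t = 3
theta = 5 / 2**t

# After the Hadamards and controlled powers of U, the counting register
# holds amplitudes exp(2*pi*i*theta*k)/sqrt(2^t), k = 0..2^t - 1.
k = np.arange(2**t)
reg = np.exp(2j * np.pi * theta * k) / np.sqrt(2**t)

# The inverse quantum Fourier transform concentrates the amplitude
# on the integer 2^t * theta.
F = np.exp(2j * np.pi * np.outer(k, k) / 2**t) / np.sqrt(2**t)
reg = F.conj().T @ reg

probs = np.abs(reg) ** 2
estimate = probs.argmax() / 2**t      # -> 0.625
```

Because θ fits in t bits, all the probability lands on a single outcome; for a general θ the distribution is instead peaked around the nearest t-bit value, which is where the success-probability consideration above comes in.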
18. Variational Quantum Eigensolver (VQE)
● VQE is an algorithm that searches for ground states using quantum
states that can be efficiently described by a quantum computer.
● The algorithm is as follows: prepare a parametrized trial state |ψ(θ)⟩ on the
quantum computer, measure the expectation value ⟨ψ(θ)|H|ψ(θ)⟩ of the
Hamiltonian H, update θ with a classical optimizer, and repeat until the
energy converges.
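This loop can be sketched numerically; the single-qubit Hamiltonian H = Z, the Ry ansatz, and the brute-force grid optimizer below are all illustrative choices:

```python
import numpy as np

# Toy VQE loop for one qubit with Hamiltonian H = Z (illustrative).
H = np.array([[1.0, 0.0], [0.0, -1.0]])          # Pauli-Z

def energy(theta):
    # Trial state |psi(theta)> = Ry(theta)|0>
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi                          # <psi|H|psi> = cos(theta)

# Classical outer loop: pick the parameter minimizing the energy.
grid = np.linspace(0, 2 * np.pi, 721)
best = min(grid, key=energy)                      # theta ~ pi
ground = energy(best)                             # ~ -1, the true minimum
```

A real VQE would use a gradient-based or gradient-free optimizer instead of a grid, and would estimate the energy from repeated measurements rather than exact state-vector algebra.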
20. Quantum Principal Component Analysis
● Principal component analysis relies on the eigen decomposition of a
matrix
● In a quantum context, this translates to simulating a Hamiltonian: copies of the density matrix ρ are used to apply e^(−iρt), and phase estimation then extracts the eigenvalues and eigenvectors of ρ.
● QPCA utilizes quantum superposition to represent vast information in
parallel, leading to faster identification of relevant features in datasets.
● Quantum entanglement plays an essential role in enhancing QPCA
performance, although challenges like decoherence exist.
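A classical analogue of this pipeline, on an assumed toy data matrix, builds the unit-trace density matrix and reads off its eigendecomposition directly (where qPCA would instead use e^(−iρt) and phase estimation):

```python
import numpy as np

# Classical analogue of qPCA: diagonalize a density-matrix-like
# covariance object (the data below is an assumed toy example whose
# second feature is an exact multiple of the first).
X = np.array([[2.0, 0.1], [-2.0, -0.1], [1.0, 0.05], [-1.0, -0.05]])

C = X.T @ X
rho = C / np.trace(C)            # unit-trace "density matrix"

# qPCA would apply exp(-i*rho*t) and phase estimation; classically we
# read off the eigenvalues (principal-component weights) directly.
evals, evecs = np.linalg.eigh(rho)
top = evals[-1]                  # dominant principal-component weight
```

Since the toy data is rank one, the entire weight sits on a single principal component (top ≈ 1), which is exactly the regime where qPCA is most efficient.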
23. Q-Means
- An unsupervised learning algorithm.
- It follows the steps of the k-means algorithm.
- q-means can be viewed as a δ-k-means algorithm, i.e. k-means plus the
noise and non-deterministic character of the quantum subroutines.
- It uses quantum subroutines for distance estimation, for finding the minimum
value among a set of elements, for matrix multiplication to obtain the
new centroids as quantum states, and for efficient tomography.
24. ctd..
- First, pick some random initial points, using the quantum version of a
classical technique (like the k-means)
- Then, in Steps 1 and 2 all data points are assigned to a cluster.
- In Steps 3 and 4, update the centroids of the clusters and retrieve the
information classically.
- The process is repeated until convergence.
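The loop above can be sketched classically; the tiny dataset and the initial centroids are assumed for illustration, and each commented step marks where a quantum subroutine would be substituted:

```python
import numpy as np

# Classical skeleton of the q-means loop on a tiny toy dataset.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
k = 2
centroids = X[[0, 2]].copy()               # initial points (illustrative)

for _ in range(5):
    # Steps 1-2: distance estimation + minimum finding -> cluster labels
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    labels = d.argmin(axis=1)
    # Steps 3-4: new centroids (as quantum states in q-means),
    # retrieved classically via tomography
    centroids = np.array([X[labels == c].mean(axis=0) for c in range(k)])
```

On this data the loop converges immediately to the two obvious clusters; q-means follows the same structure, but each marked step runs on quantum states and therefore returns noisy, non-deterministic answers.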
25. Quantum Tomography
● Roughly speaking, tomography is the problem in which we are given an
unknown mixed state ρ ∈ ℂ^(d×d) and the goal is to “learn” what ρ is
● In quantum tomography we are given repeated copies of an unknown
quantum state (or quantum channel) and the goal is to find a full classical
description of the quantum state (or quantum channel) by extracting
information by means of repeated measurements.
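For a single qubit, this reconstruction reduces to inverting the Bloch decomposition ρ = (I + ⟨X⟩X + ⟨Y⟩Y + ⟨Z⟩Z)/2; the state |+> below is an assumed example, and the expectation values are computed exactly rather than estimated from repeated measurements as an experiment would:

```python
import numpy as np

# Sketch of single-qubit state tomography: reconstruct rho from the
# three Pauli expectation values via the Bloch decomposition.
I = np.eye(2)
Xp = np.array([[0, 1], [1, 0]], dtype=complex)
Yp = np.array([[0, -1j], [1j, 0]])
Zp = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # "unknown" state |+> (assumed)
rho_true = np.outer(psi, psi.conj())

# "Measure" <X>, <Y>, <Z>, then invert the Bloch decomposition.
x, y, z = (np.trace(rho_true @ P).real for P in (Xp, Yp, Zp))
rho_est = (I + x * Xp + y * Yp + z * Zp) / 2
```

With exact expectations the reconstruction is perfect; with finitely many measurement shots, each of ⟨X⟩, ⟨Y⟩, ⟨Z⟩ carries statistical error, which is the cost that makes tomography expensive in general.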
30. Quantum Neural Networks (QNN)
● Quantum machine learning (QML) integrates notions from quantum
computing and classical machine learning to open the way for new and
improved learning schemes.
● Basic principles: connect multiple isolated quantum systems with a
particular topology, process quantum data using quantum unitary
transformations, and extract and identify critical information.
31. ● QNNs apply this generic principle by combining classical neural networks and
parametrized quantum circuits.
● QNNs can be viewed from two perspectives:
○ From a machine learning perspective - the goal is to find patterns in data. These models can load
classical data (inputs) into a quantum state, and later process it with quantum gates
parametrized by trainable weights.
○ From a quantum computing perspective - parametrized quantum circuits that can be trained in
a variational manner using classical optimizers. These circuits contain a feature map (with
input parameters) and an ansatz (with trainable weights).
33. ● Since all quantum gate operations are reversible linear operations, we can
replace a neural network’s activation functions with entanglement layers,
giving them a multilayer structure.
● The steps for VQC quantum neural networks to process data are as follows:
they first encode their input values into an appropriate qubit state, then
transform this qubit state through parametrized rotation gates and
entangling gates, and finally measure the transformed qubit state by
calculating the expected value of the system’s Hamiltonian operator. The
output expected value is then decoded into a more appropriate
output, and the parameters of the quantum neural network are updated
using an appropriate optimizer.
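These steps can be sketched numerically on two qubits; the gate choices below (angle encoding via Ry, one entangling CNOT, and Z on the first qubit as the measured Hamiltonian) are illustrative, not a prescribed architecture:

```python
import numpy as np

# Minimal numerical sketch of the VQC steps on two qubits.
def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])
Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))  # Hamiltonian: Z on qubit 0

def forward(x, w):
    state = np.zeros(4); state[0] = 1.0            # |00>
    state = np.kron(ry(x[0]), ry(x[1])) @ state    # 1) encode inputs
    state = np.kron(ry(w[0]), ry(w[1])) @ state    # 2) trainable rotations
    state = CNOT @ state                           #    entangling layer
    return state @ Z0 @ state                      # 3) expectation value

out = forward(x=[0.3, 1.1], w=[0.5, -0.2])
```

The returned expectation lies in [−1, 1]; a training loop would treat it as the model output and adjust w with a classical optimizer, exactly as the bullet above describes.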
35. Implementation in qiskit-machine-learning
● The QNNs in qiskit-machine-learning are meant as application-agnostic
computational units that can be used for different use cases, and their setup
will depend on the application they are needed for.
● The module contains an interface for the QNNs and two specific
implementations:
○ NeuralNetwork: The interface for neural networks. This is an abstract class all QNNs inherit
from.
○ EstimatorQNN: A network based on the evaluation of quantum mechanical observables.
○ SamplerQNN: A network based on the samples resulting from measuring a quantum circuit.
36. Ctd..
● These implementations are based on the qiskit primitives.
● The primitives are the entry point to run QNNs on either a simulator or real
quantum hardware. Each implementation, EstimatorQNN and SamplerQNN, takes
in an optional instance of its corresponding primitive, which can be any subclass of
BaseEstimator and BaseSampler, respectively.
● The qiskit.primitives module provides a reference implementation for the Sampler
and Estimator classes to run statevector simulations. By default, if no instance is
passed to a QNN class, an instance of the corresponding reference primitive
(Sampler or Estimator) is created automatically by the network. For more
information about primitives please refer to the primitives documentation.
37. ● The NeuralNetwork class is the interface for all QNNs available in
qiskit-machine-learning.
● It exposes a forward and a backward pass that take data samples and
trainable weights as input.
● It’s important to note that NeuralNetworks are “stateless”.
● They do not contain any training capabilities (these are pushed to the actual
algorithms or applications: classifiers, regressors, etc), nor do they store the
values for trainable weights.
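The stateless contract can be illustrated with a toy, non-quantum stand-in; `ToyStatelessNN` and its function f(x, w) = sin(wx) are hypothetical and are not the qiskit-machine-learning implementation:

```python
import numpy as np

# Sketch of the "stateless" NeuralNetwork contract: forward and backward
# both take the data sample AND the weights as arguments; nothing is
# stored on the network object itself.
class ToyStatelessNN:
    def forward(self, x, w):
        return np.sin(w * x)            # toy model output

    def backward(self, x, w):
        return x * np.cos(w * x)        # d f / d w, returned to the caller

nn = ToyStatelessNN()
y = nn.forward(x=2.0, w=0.0)            # weights supplied on every call
g = nn.backward(x=2.0, w=0.0)           # gradient handed to the optimizer
```

The training algorithm (classifier, regressor, etc.) owns the weight values and the optimizer; the network only maps (sample, weights) to outputs and gradients, which is what "stateless" means here.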