A complex vector space is a nonempty set V, whose elements we
shall call vectors, with three operations
Addition: +: V×V→V
Negation: −: V→V
Scalar multiplication: ⋅: C×V→V
and a distinguished element called the zero vector 0∈V. These operations and the zero vector
must satisfy the following properties:
∀v,w,x∈V and for all c,c1,c2∈C,
i. Commutativity of addition: v+w=w+v,
ii. Associativity of addition: (v+w)+x=v+(w+x),
iii. Additive identity: v+0=v=0+v,
iv. Additive inverse: v+(−v)=0=(−v)+v,
v. Multiplicative identity: 1⋅v=v,
vi. Scalar multiplication distributes over addition: c⋅(v+w)=c⋅v+c⋅w,
vii. Scalar multiplication distributes over complex addition: (c1+c2)⋅v=c1⋅v+c2⋅v,
viii. Scalar multiplication is compatible with complex multiplication: c1⋅(c2⋅v)=(c1×c2)⋅v.
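To see the axioms in action, here is a minimal numerical sketch (assuming NumPy is available; the vectors and scalars are arbitrary illustrative values) that spot-checks a few of them in C3:

```python
import numpy as np

# Arbitrary illustrative vectors in C^3 and scalars in C.
v = np.array([1 + 2j, 0, -1j])
w = np.array([3, -1 + 1j, 2])
c1, c2 = 2 - 1j, 0.5j

assert np.allclose(v + w, w + v)                    # i.   commutativity of addition
assert np.allclose(c1 * (v + w), c1 * v + c1 * w)   # vi.  distributes over vector addition
assert np.allclose((c1 + c2) * v, c1 * v + c2 * v)  # vii. distributes over complex addition
```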
example
ex:n-dim-vector-space Cn, the set of vectors of length n
with complex entries, is a complex vector space.
example
ex:nn-dim-vector-space Cm×n, the set of all
m-by-n matrices (two-dimensional arrays) with complex entries, is a
complex vector space.
Matrix multiplication relates to the transpose:
(A×B)⊤=B⊤×A⊤
Matrix multiplication respects the conjugate:
$\overline{A\times B}=\overline{A}\times\overline{B}$
Matrix multiplication relates to the adjoint:
(A×B)†=B†×A†
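These three identities are easy to spot-check numerically. A minimal sketch (assuming NumPy; the matrices are random illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

assert np.allclose((A @ B).T, B.T @ A.T)                       # transpose reverses the order
assert np.allclose(np.conj(A @ B), np.conj(A) @ np.conj(B))    # conjugate distributes
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)  # adjoint reverses the order
```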
The physical explanation. The elements of Cn are the
ways of describing the states of a quantum system. Some suitable
elements of Cn×n will correspond to the changes that occur
to the states of a quantum system. Given a state
x∈Cn and a matrix
A∈Cn×n, we shall form another state of
the system A×x which is an element of
Cn. Formally, × in this case is a function
×:Cn×n×Cn→Cn.
We say that the algebra of matrices "acts" on the vectors to yield
new vectors.
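A minimal sketch of this action (assuming NumPy; the state and matrix are arbitrary illustrative choices):

```python
import numpy as np

x = np.array([1 + 0j, 1j])                     # an illustrative state in C^2
A = np.array([[0, 1], [1, 0]], dtype=complex)  # an action that swaps the two amplitudes

new_state = A @ x  # the new state A x, again an element of C^2
print(new_state)   # [0.+1.j 1.+0.j]
```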
A linear map from V to V′ is a function
f: V→V′ such that ∀v,v1,v2∈V
and ∀c∈C,
f respects the addition:
f(v1+v2)=f(v1)+f(v2)
f respects the scalar multiplication:
f(c⋅v)=c⋅f(v)
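Every matrix induces a linear map in this sense. A small sketch (assuming NumPy; random illustrative values) checking both conditions for the map v ↦ A×v:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
f = lambda v: A @ v  # the linear map induced by the matrix A

v1 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v2 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
c = 2 - 3j

assert np.allclose(f(v1 + v2), f(v1) + f(v2))  # f respects addition
assert np.allclose(f(c * v1), c * f(v1))       # f respects scalar multiplication
```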
The physical explanation. We shall call any linear map from a
complex vector space to itself an operator. If
F:Cn→Cn is an operator on
Cn and A is an n-by-n matrix such that for
all v we have
F(v)=A×v, then we say that
F is represented by A. Several different matrices might
represent the same operator.
Let V be a complex (real) vector space.
v∈V is a linear combination of the vectors
v0,v1,⋯,vn−1 in
V if v can be written as
v=c0⋅v0+c1⋅v1+⋯+cn−1⋅vn−1
for some c0,c1,⋯,cn−1 in C (respectively R).
definition
A set
{v0,v1,⋯,vn−1}
of vectors in V is called linearly independent if
0=c0⋅v0+c1⋅v1+⋯+cn−1⋅vn−1
implies that c0=c1=⋯=cn−1=0. This means that the only way
that a linear combination of the vectors can be the zero vector is if
all the cj are zero.
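One practical numerical test (a sketch, assuming NumPy; the vectors are illustrative) stacks the vectors as the columns of a matrix: the set is linearly independent exactly when that matrix has full column rank:

```python
import numpy as np

# Columns are the candidate vectors v_0, ..., v_{n-1}.
independent = np.array([[1, 0], [0, 1], [1, 1]], dtype=complex)  # neither column is a multiple of the other
dependent = np.array([[1, 2], [2, 4], [3, 6]], dtype=complex)    # second column = 2 x first column

print(np.linalg.matrix_rank(independent) == independent.shape[1])  # True
print(np.linalg.matrix_rank(dependent) == dependent.shape[1])      # False
```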
corollary
If {v0,v1,⋯,vn−1} is linearly independent, then no vi (i=0,1,⋯,n−1) can be
written as a linear combination of the others, {vj: j≠i}.
corollary
If {v0,v1,⋯,vn−1} is linearly independent, then any v≠0 in V that is a linear
combination of these vectors has unique coefficients c0,c1,⋯,cn−1.
definition
A set
B={v0,v1,⋯,vn−1}⊆V
of vectors is called a basis of a (complex) vector space V if
both (i) every v∈V can be written as a linear combination of the vectors in B, and (ii) B is linearly independent.
The dimension of a (complex) vector space is the number of elements in a
basis of the vector space.
definition
A change of basis matrix or a transition matrix from basis B
to basis D is a matrix
MD←B such that the coefficient vectors satisfy
vD=MD←B×vB
where vB and vD are the coefficients of v with respect to B and D, respectively.
In other words, MD←B is a way
of getting the coefficients with respect to one basis from the
coefficients with respect to another basis.
remark
Utilities of the transition matrix:
Operator re-representation in a new basis:
$A_D = M_{D\leftarrow B}\times A_B\times M_{D\leftarrow B}^{-1}$
State re-representation in a new basis:
$v_D = M_{D\leftarrow B}\times v_B$
Figure 1.1: The Hadamard matrix for basis
transition
example: hadamard matrix
ex:hadamard In R2, the transition matrix from the canonical basis
$\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$
to the basis
$\left\{ \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}, \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} \right\}$
is the Hadamard matrix:
$H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \end{bmatrix}$
as shown in Figure 1.1.
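A small sketch of this transition (assuming NumPy), converting canonical-basis coefficients into coefficients with respect to the +45°/−45° basis:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # the Hadamard matrix

v_B = np.array([1.0, 0.0])  # coefficients in the canonical basis
v_D = H @ v_B               # coefficients in the +45/-45 degree basis
print(v_D)                  # [0.7071... 0.7071...]

# H is its own inverse, so it also converts coefficients back.
assert np.allclose(H @ H, np.eye(2))
```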
Figure 1.2: A ball rolling down a ramp
The motivation to change basis. In physics, we are often faced with
a problem in which it is easier to calculate something in a noncanonical
basis. For example, consider a ball rolling down a ramp as depicted in
Figure 1.2.
Figure 1.3: Problem-solving flowchart
The ball will not be moving in the direction of the canonical basis.
Rather, it will be rolling downward along the directions of the +45°/−45° basis.
Suppose we wish to calculate when this ball will reach the bottom of the
ramp, or what its speed will be. To do this, we change the problem
from one in the canonical basis to one in the other basis. In this other
basis, the motion is easier to deal with. Once we have completed the
calculations, we change our results into the more understandable
canonical basis and produce the desired answer. We might envision this
as the flowchart shown in Figure 1.3.
Throughout this course, we shall go from one basis to another basis,
perform some calculations, and finally revert to the original basis. The
Hadamard matrix will frequently be the means by which we change the
basis.
An inner product (also called a dot product or scalar product) on a
complex vector space V is a function
⟨⋅,⋅⟩: V×V→C
that satisfies the following conditions for all v,
v1, v2, and v3 in
V and for all c∈C:
i. Nondegenerate:
⟨v,v⟩≥0, and ⟨v,v⟩=0⇔v=0,
ii. Respects addition:
⟨v1+v2,v3⟩=⟨v1,v3⟩+⟨v2,v3⟩
⟨v1,v2+v3⟩=⟨v1,v2⟩+⟨v1,v3⟩
iii. Respects scalar multiplication:
$\langle c\cdot v_1, v_2\rangle = \overline{c}\times\langle v_1, v_2\rangle$
$\langle v_1, c\cdot v_2\rangle = c\times\langle v_1, v_2\rangle$
iv. Skew symmetric:
$\langle v_1, v_2\rangle = \overline{\langle v_2, v_1\rangle}$
definition
An inner product space is a vector space equipped with an inner product.
example
ex:Rn Rn: The inner product is given as
⟨v1,v2⟩=v1⊤×v2
example
ex:Cn Cn: The inner product is given as
⟨v1,v2⟩=v1†×v2
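A sketch of this inner product (assuming NumPy; arbitrary illustrative vectors). NumPy's vdot conjugates its first argument, which matches the definition here:

```python
import numpy as np

v1 = np.array([1 + 1j, 2j])
v2 = np.array([3, 1 - 1j])

ip = v1.conj().T @ v2                    # <v1, v2> = v1^dagger x v2
assert np.allclose(ip, np.vdot(v1, v2))  # vdot conjugates its first argument
print(ip)
```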
example
ex:Rnn Rn×n has an inner product given for matrices
A,B∈Rn×n as
⟨A,B⟩=Tr(A⊤×B)
where the trace of a square matrix C is given as the sum
of the diagonal elements. That is, $\mathrm{Tr}(C)=\sum_{i=0}^{n-1}C[i,i]$
example
ex:Cnn Cn×n has an inner product given for matrices
A,B∈Cn×n as
⟨A,B⟩=Tr(A†×B)
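A sketch of the trace inner product on Cn×n (assuming NumPy; random illustrative matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

ip = np.trace(A.conj().T @ B)                         # <A, B> = Tr(A^dagger x B)
assert np.allclose(np.trace(A.conj().T @ A).imag, 0)  # <A, A> is real (and nonnegative)
print(ip)
```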
definition
The norm is a unary function derived from the inner product,
∣⋅∣: V→R, defined as
$|v|=\sqrt{\langle v,v\rangle}$,
which has the following properties:
The norm is nondegenerate:
∣v∣>0 if v≠0 and ∣0∣=0
The norm satisfies the triangle inequality:
∣v+w∣≤∣v∣+∣w∣
Distance is a binary function
d(⋅,⋅): V×V→R
defined based on the norm as
$d(v_1,v_2)=|v_1-v_2|=\sqrt{\langle v_1-v_2, v_1-v_2\rangle}$,
which has the following properties:
Distance is nondegenerate:
d(v,w)>0 if v≠w and d(v,w)=0⇔v=w
Distance satisfies the triangle inequality:
d(u,v)≤d(u,w)+d(w,v)
Distance is symmetric:
d(u,v)=d(v,u)
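A sketch computing the norm and distance from the inner product (assuming NumPy; arbitrary illustrative vectors), cross-checked against NumPy's built-in norm:

```python
import numpy as np

v = np.array([3 + 4j, 0])
w = np.array([0, 1j])

norm_v = np.sqrt(np.vdot(v, v).real)        # |v| = sqrt(<v, v>) = 5 here
dist = np.sqrt(np.vdot(v - w, v - w).real)  # d(v, w) = |v - w|

assert np.isclose(norm_v, np.linalg.norm(v))
assert np.isclose(dist, np.linalg.norm(v - w))
```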
definition
A basis
B={v0,v1,⋯,vn−1}
for an inner product space is called orthonormal if
$\langle v_i, v_j\rangle = \begin{cases} 1, & \text{if } i=j \\ 0, & \text{if } i\neq j \end{cases}$
An orthonormal basis has the following property:
for all v∈V and any orthonormal basis
{e0,e1,⋯,en−1} we have
$v=\sum_{i=0}^{n-1}\langle e_i, v\rangle e_i$
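A sketch of this expansion (assuming NumPy), using the +45°/−45° basis from the Hadamard example:

```python
import numpy as np

# An orthonormal basis of R^2: the +45/-45 degree (Hadamard) basis.
e0 = np.array([1, 1]) / np.sqrt(2)
e1 = np.array([1, -1]) / np.sqrt(2)
v = np.array([2.0, 3.0])  # an arbitrary vector

# v = sum_i <e_i, v> e_i
reconstructed = np.vdot(e0, v) * e0 + np.vdot(e1, v) * e1
assert np.allclose(reconstructed, v)
```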
Note: the inner product defines the geometry of the vector space (Figure
1.4).
Figure 1.4: Inner product lays the geometric foundation in the vector space
Within an inner product space (V, ⟨⋅,⋅⟩)
(with the derived norm and distance function), a sequence of vectors
v0,v1,⋯ is called a Cauchy sequence
if ∀ϵ>0 there exists an N0∈N such that
for all m,n≥N0,
d(vm,vn)≤ϵ.
definition
An inner product space V is complete if, for any Cauchy sequence
v0,v1,⋯, there exists a v∈V
such that
$\lim_{n\to\infty} d(v_n, v) = 0$.
definition
A Hilbert space is a complex inner product space that is complete.
For a matrix A∈Cn×n, if there is a number
c∈C and a nonzero vector v∈Cn such
that Av=c⋅v, then c is
called an eigenvalue of A and v is called an
eigenvector of A associated with c.
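A sketch (assuming NumPy; an arbitrary illustrative matrix) recovering eigenvalues and eigenvectors numerically and verifying the defining equation:

```python
import numpy as np

A = np.array([[2, 1], [1, 2]], dtype=complex)
eigenvalues, eigenvectors = np.linalg.eig(A)  # columns of the second array are eigenvectors

for c, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, c * v)  # A v = c . v
print(eigenvalues)  # approximately [3, 1]
```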
An n-by-n matrix A is called Hermitian if
A†=A. In other words,
$A[j,k]=\overline{A[k,j]}$.
definition
If A is a hermitian matrix then the operator that it
represents is called self-adjoint.
proposition
If
A∈Cn×n is Hermitian, then
∀v,w∈Cn we have
⟨Av,w⟩=⟨v,Aw⟩
proof
Proof.
⟨Av,w⟩ = (Av)†×w (definition of inner product)
= v†×A†×w (multiplication relates to the adjoint)
= v†×A×w (definition of Hermitian matrices)
= v†×(Aw) (multiplication is associative)
= ⟨v,Aw⟩ (definition of inner product) ◻
proposition
All eigenvalues of a Hermitian matrix are real.
proof
Proof. Let A∈Cn×n be a Hermitian matrix
with an eigenvalue c∈C and an eigenvector v∈Cn. Then
c⟨v,v⟩ = ⟨v,cv⟩ (inner product respects scalar multiplication)
= ⟨v,Av⟩ (definition of eigenvalue and eigenvector)
= ⟨Av,v⟩ (by the symmetry proposition above)
= ⟨cv,v⟩ (definition of eigenvalue and eigenvector)
= $\overline{c}$⟨v,v⟩ (inner product respects scalar multiplication)
Since v≠0, we have ⟨v,v⟩>0, so $c=\overline{c}$; that is, c is real. ◻
proposition
For a Hermitian matrix, eigenvectors associated with distinct
eigenvalues are orthogonal.
proof
Proof. Let A∈Cn×n be a Hermitian matrix
with two eigenvectors v1,v2∈Cn whose associated
eigenvalues c1,c2∈C are distinct. Then
c2⟨v1,v2⟩ = ⟨v1,c2v2⟩ (inner product respects scalar multiplication)
= ⟨v1,Av2⟩ (definition of eigenvalue and eigenvector)
= ⟨Av1,v2⟩ (by the symmetry proposition above)
= ⟨c1v1,v2⟩ (definition of eigenvalue and eigenvector)
= $\overline{c_1}$⟨v1,v2⟩ (inner product respects scalar multiplication)
= c1⟨v1,v2⟩ (c1 is real by the previous proposition)
Since c1≠c2, it follows that ⟨v1,v2⟩=0. ◻
proposition
Every self-adjoint operator A on a finite-dimensional complex
vector space V can be represented by a diagonal matrix whose
diagonal entries are the eigenvalues of A, and whose
eigenvectors form an orthonormal basis for V (we shall call
this basis an eigenbasis).
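A sketch (assuming NumPy; the Hermitian matrix is an arbitrary illustrative choice) confirming the last three propositions at once: real eigenvalues, an orthonormal eigenbasis, and the diagonal representation:

```python
import numpy as np

A = np.array([[2, 1 - 1j], [1 + 1j, 3]])  # Hermitian: A equals its adjoint
assert np.allclose(A, A.conj().T)

eigenvalues, V = np.linalg.eigh(A)  # eigh is specialized for Hermitian input
print(eigenvalues)                  # real numbers
assert np.allclose(V.conj().T @ V, np.eye(2))                 # orthonormal eigenbasis
assert np.allclose(V @ np.diag(eigenvalues) @ V.conj().T, A)  # diagonal representation
```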
Physical Meaning of Hermitian Matrix. Hermitian matrices and their
eigenbases will play a major role in our story. We shall see in the
following lectures that associated with every physical observable of a
quantum system there is a corresponding Hermitian matrix. Measurements
of that observable always lead to a state that is represented by one of
the eigenvectors of the associated Hermitian matrix.
A matrix U∈Cn×n such
that
U×U†=U†×U=In
is called a unitary matrix. Every unitary matrix is invertible ("reversible"), with $U^{-1}=U^{\dagger}$.
example
$U_1=\begin{bmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix}$ for any θ. $U_2=\begin{bmatrix}\frac{1+i}{2} & \frac{i}{\sqrt{3}} & \frac{3+i}{2\sqrt{15}}\\ \frac{-1}{2} & \frac{1}{\sqrt{3}} & \frac{4+3i}{2\sqrt{15}}\\ \frac{1}{2} & \frac{-i}{\sqrt{3}} & \frac{5i}{2\sqrt{15}}\end{bmatrix}$
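A sketch (assuming NumPy; θ is an arbitrary illustrative angle) verifying that U1 satisfies the definition:

```python
import numpy as np

theta = 0.7
U1 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])

assert np.allclose(U1 @ U1.conj().T, np.eye(3))  # U x U^dagger = I
assert np.allclose(U1.conj().T @ U1, np.eye(3))  # U^dagger x U = I
```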
proposition
If U∈Cn×n is unitary,
∀v,w∈Cn we have
⟨Uv,Uw⟩=⟨v,w⟩
proof
Proof.
⟨Uv,Uw⟩ = (Uv)†×(Uw) (definition of inner product)
= v†×U†×U×w (multiplication relates to the adjoint)
= v†×I×w (definition of unitary matrices)
= ⟨v,w⟩ (definition of inner product) ◻
proposition
If U∈Cn×n is unitary,
∀v∈Cn we have
∣Uv∣=∣v∣
proof
Proof.
∣Uv∣ = $\sqrt{\langle Uv, Uv\rangle}$ (definition of norm)
= $\sqrt{\langle v, v\rangle}$ (unitary matrices preserve the inner product)
= ∣v∣ (definition of norm) ◻
proposition
If U∈Cn×n is unitary,
∀v,w∈Cn we have
d(Uv,Uw)=d(v,w)
proof
Proof.
d(Uv,Uw) = ∣Uv−Uw∣ (definition of distance)
= ∣U(v−w)∣ (multiplication distributes over addition)
= ∣v−w∣ (unitary matrices preserve the norm)
= d(v,w) (definition of distance) ◻
proposition
Every eigenvalue of a unitary matrix has modulus 1.
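A sketch (assuming NumPy; a rotation matrix serves as an illustrative unitary) checking this proposition numerically:

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # unitary (here, a real rotation)

eigenvalues = np.linalg.eigvals(U)  # e^{+i theta} and e^{-i theta} for a rotation
print(np.abs(eigenvalues))          # [1. 1.]
```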
proposition
A unitary matrix is a transition matrix from one orthonormal basis to
another orthonormal basis.
Physical meaning of unitary Matrix. What does unitary really mean?
As we saw, it means that it preserves the geometry. But it also means
something else: If U is unitary and
Uv=v′, then we can easily form
U† and multiply both sides of the equation by
U† to get
U†Uv=U†v′
or v=U†v′. In other words,
because U is unitary, there is a related matrix that can
"undo" the action that U performs. U†
takes the result of U's action and gets back the original
vector. In the quantum world, all actions (that are not measurements)
are "undoable" or "reversible" in such a manner.
Figure 1.5: The role of Hermitian and unitary matrices
The roles of Hermitian and unitary matrices in quantum computing. As shown in Figure 1.5,
the Hermitian matrix plays an important role in the quantum measurement
phase, which decides the concrete basis in which to observe the final
computational result ∣ψ∗⟩. Once the basis (H1 or
H2) is decided, the observation result
probabilistically collapses onto one of the eigenvectors of the
corresponding basis. The unitary matrix plays the role of an action that changes
the state of the quantum computer. Because of its reversibility,
every action performed in quantum computing can be undone by performing
the action described by U†. The relations among
identity, Hermitian, unitary, and square matrices are shown in Figure
1.6.