Complex Vector Space

definition

A complex vector space is a nonempty set \mathbb{V}, whose elements we shall call vectors, with three operations

  • Addition: +: \mathbb{V}\times \mathbb{V}\rightarrow \mathbb{V}

  • Negation: -: \mathbb{V}\rightarrow \mathbb{V}

  • Scalar multiplication: \cdot: \mathbb{C}\times \mathbb{V}\rightarrow \mathbb{V}

and a distinguished element called the zero vector \boldsymbol{0}\in \mathbb{V} in the set. These operations and zero must satisfy the following properties: \forall \boldsymbol{v}, \boldsymbol{w}, \boldsymbol{x}\in \mathbb{V} and for all c, c_1, c_2\in\mathbb{C},

i. Commutativity of addition: \boldsymbol{v}+\boldsymbol{w} = \boldsymbol{w}+\boldsymbol{v},

ii. Associativity of addition: (\boldsymbol{v}+\boldsymbol{w}) + \boldsymbol{x} = \boldsymbol{v} + (\boldsymbol{w}+\boldsymbol{x}),

iii. Additive identity: \boldsymbol{v} + \boldsymbol{0} = \boldsymbol{v} = \boldsymbol{0} + \boldsymbol{v},

iv. Additive inverse: \boldsymbol{v} + (-\boldsymbol{v}) = \boldsymbol{0} = (-\boldsymbol{v}) + \boldsymbol{v},

v. Multiplicative identity: 1\cdot \boldsymbol{v}=\boldsymbol{v},

vi. Scalar multiplication distributes over addition: c\cdot(\boldsymbol{v}+\boldsymbol{w})=c\cdot \boldsymbol{v}+c\cdot\boldsymbol{w},

vii. Scalar multiplication distributes over complex addition: (c_1+c_2)\cdot\boldsymbol{v}=c_1\cdot \boldsymbol{v}+c_2\cdot\boldsymbol{v}, and

viii. Scalar multiplication respects complex multiplication: (c_1\times c_2)\cdot\boldsymbol{v}=c_1\cdot (c_2\cdot\boldsymbol{v}).

example

\mathbb{C}^n, the set of vectors of length n with complex entries, is a complex vector space.

example

\mathbb{C}^{m\times n}, the set of all m-by-n matrices (two-dimensional arrays) with complex entries, is a complex vector space.
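
To make the two examples concrete, here is a minimal numpy sketch (numpy and the particular values are illustrative choices, not part of the notes) exercising the three operations and spot-checking two of the axioms:

```python
import numpy as np

# A vector in C^2, another vector, and a complex scalar; arbitrary values.
v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 - 1j, 1j])
c = 0.5 - 2j

# Addition, negation, and scalar multiplication stay inside the space.
print(v + w)   # addition: elementwise complex sum
print(-v)      # negation
print(c * v)   # scalar multiplication by a complex number

# Spot-check two axioms: commutativity and distributivity over addition.
assert np.allclose(v + w, w + v)
assert np.allclose(c * (v + w), c * v + c * w)
```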

Unary Operations

Three unary operations are defined \forall \mathbf{A}\in\mathbb{C}^{m\times n} (see the sketch after this list):

  • Transpose: \mathbf{A}^{\top}\in\mathbb{C}^{n\times m} \textrm{\ \ such that\ \ } \mathbf{A}^{\top}(j,k)=\mathbf{A}(k,j)

  • Conjugate: \overline{\mathbf{A}}\in\mathbb{C}^{m\times n} \textrm{\ \ such that\ \ } \overline{\mathbf{A}}(j,k)=\overline{\mathbf{A}(j,k)}

  • Adjoint (conjugate transpose): \mathbf{A}^{\dagger}\in\mathbb{C}^{n\times m} \textrm{\ \ such that\ \ } \mathbf{A}^{\dagger}(j,k)=\overline{\mathbf{A}(k,j)}
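
In numpy (an illustrative choice, not part of the notes), these three operations map directly onto array methods; a small sketch with an arbitrary 2-by-3 matrix:

```python
import numpy as np

A = np.array([[1 + 1j, 2, 3 - 1j],
              [0, 1j, 4]])          # an arbitrary matrix in C^{2x3}

A_T = A.T                # transpose: A_T[j, k] == A[k, j]
A_bar = A.conj()         # conjugate: entrywise complex conjugation
A_dag = A.conj().T       # adjoint (dagger): conjugate transpose

assert A_T[0, 1] == A[1, 0]
assert A_bar[0, 0] == np.conj(A[0, 0])
assert np.allclose(A_dag, A_bar.T)
```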

property

\forall c\in \mathbb{C} and \mathbf{A}, \mathbf{B}\in\mathbb{C}^{m\times n},

  • Transpose is an involution: (\mathbf{A}^{\top})^{\top}=\mathbf{A}

  • Transpose respects addition: (\mathbf{A}+\mathbf{B})^{\top}=\mathbf{A}^{\top}+\mathbf{B}^{\top}

  • Transpose respects scalar multiplication: (c\cdot \mathbf{A})^{\top}=c\cdot \mathbf{A}^{\top}

property

\forall c\in \mathbb{C} and \mathbf{A}, \mathbf{B}\in\mathbb{C}^{m\times n},

  • Conjugate is an involution: \overline{\overline{\mathbf{A}}}=\mathbf{A}

  • Conjugate respects addition: \overline{\mathbf{A}+\mathbf{B}}=\overline{\mathbf{A}}+\overline{\mathbf{B}}

  • Conjugate respects scalar multiplication: \overline{c\cdot \mathbf{A}}=\overline{c}\cdot \overline{\mathbf{A}}

property

\forall c\in \mathbb{C} and \mathbf{A}, \mathbf{B}\in\mathbb{C}^{m\times n},

  • Adjoint is an involution: (\mathbf{A}^{\dagger})^{\dagger}=\mathbf{A}

  • Adjoint respects addition: (\mathbf{A}+\mathbf{B})^{\dagger}=\mathbf{A}^{\dagger}+\mathbf{B}^{\dagger}

  • Adjoint respects scalar multiplication: (c\cdot \mathbf{A})^{\dagger}=\overline{c}\cdot \mathbf{A}^{\dagger}

Matrix Multiplication

property

\forall\mathbf{A}\in\mathbb{C}^{m\times n}, \mathbf{B}\in\mathbb{C}^{n\times p}, \mathbf{C}\in\mathbb{C}^{n\times p}, \textrm{and}\ \mathbf{D}\in\mathbb{C}^{p\times q} (a numerical check follows this list),

  • Matrix multiplication distributes over addition: \begin{aligned} \mathbf{A}\times (\mathbf{B}+\mathbf{C}) &= (\mathbf{A}\times \mathbf{B})+(\mathbf{A}\times \mathbf{C})\\ (\mathbf{B}+\mathbf{C})\times \mathbf{D} &= (\mathbf{B}\times \mathbf{D})+(\mathbf{C}\times \mathbf{D}) \end{aligned}

  • Matrix multiplication respects scalar multiplication: c\cdot(\mathbf{A}\times\mathbf{B})=(c\cdot\mathbf{A})\times\mathbf{B}=\mathbf{A}\times(c\cdot\mathbf{B})

  • Matrix multiplication relates to the transpose: (\mathbf{A}\times\mathbf{B})^{\top}=\mathbf{B}^{\top}\times\mathbf{A}^{\top}

  • Matrix multiplication respects the conjugate: \overline{\mathbf{A}\times\mathbf{B}}=\overline{\mathbf{A}}\times \overline{\mathbf{B}}

  • Matrix multiplication relates to the adjoint: (\mathbf{A}\times\mathbf{B})^{\dagger}=\mathbf{B}^{\dagger}\times\mathbf{A}^{\dagger}
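
A quick numerical sanity check of these identities, sketched with random complex matrices of compatible shapes (numpy and the shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def cmat(m, n):
    # a random complex m-by-n matrix
    return rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

A, B, C, D = cmat(2, 3), cmat(3, 4), cmat(3, 4), cmat(4, 5)
c = 1.5 - 0.5j

assert np.allclose(A @ (B + C), A @ B + A @ C)                 # distributivity (left)
assert np.allclose((B + C) @ D, B @ D + C @ D)                 # distributivity (right)
assert np.allclose(c * (A @ B), (c * A) @ B)                   # scalars pass through
assert np.allclose((A @ B).T, B.T @ A.T)                       # transpose reverses order
assert np.allclose(np.conj(A @ B), np.conj(A) @ np.conj(B))    # conjugate is entrywise
assert np.allclose((A @ B).conj().T, B.conj().T @ A.conj().T)  # adjoint reverses order
```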

The physical explanation. The elements of \mathbb{C}^n are the ways of describing the states of a quantum system. Some suitable elements of \mathbb{C}^{n\times n} will correspond to the changes that occur to the states of a quantum system. Given a state \boldsymbol{x}\in \mathbb{C}^n and a matrix \mathbf{A}\in\mathbb{C}^{n\times n}, we shall form another state of the system \mathbf{A}\times \boldsymbol{x}, which is an element of \mathbb{C}^n. Formally, \times in this case is a function \times: \mathbb{C}^{n\times n}\times \mathbb{C}^n\rightarrow \mathbb{C}^n. We say that the algebra of matrices "acts" on the vectors to yield new vectors.

Linear Map

definition

A linear map from \mathbb{V} to \mathbb{V}^{'} is a function f: \mathbb{V}\rightarrow \mathbb{V}^{'} such that \forall \boldsymbol{v}, \boldsymbol{v}_1, \boldsymbol{v}_2\in\mathbb{V} and c\in\mathbb{C},

  • f respects addition: f(\boldsymbol{v}_1+\boldsymbol{v}_2)=f(\boldsymbol{v}_1)+f(\boldsymbol{v}_2)

  • f respects scalar multiplication: f(c\cdot\boldsymbol{v})=c\cdot f(\boldsymbol{v})

The physical explanation. We shall call any linear map from a complex vector space to itself an operator. If F: \mathbb{C}^n\rightarrow\mathbb{C}^n is an operator on \mathbb{C}^n and \mathbf{A} is an n-by-n matrix such that for all \boldsymbol{v} we have F(\boldsymbol{v}) = \mathbf{A}\times \boldsymbol{v}, then we say that F is represented by \mathbf{A}. Several different matrices might represent the same operator.
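
For instance, one can check numerically that the operator \boldsymbol{v}\mapsto\mathbf{A}\times\boldsymbol{v} respects both conditions of a linear map; a sketch with an arbitrary matrix and vectors (assuming numpy):

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)   # an arbitrary operator on C^2
v1 = np.array([1 + 1j, 2j])
v2 = np.array([3, -1j])
c = 2 - 1j

F = lambda v: A @ v   # the operator F represented by the matrix A

assert np.allclose(F(v1 + v2), F(v1) + F(v2))   # respects addition
assert np.allclose(F(c * v1), c * F(v1))        # respects scalar multiplication
```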

Basis and Dimension

Basis

definition

Let \mathbb{V} be a complex (real) vector space. \boldsymbol{v}\in\mathbb{V} is a linear combination of the vectors \boldsymbol{v}_0, \boldsymbol{v}_1, \cdots, \boldsymbol{v}_{n-1} in \mathbb{V} if \boldsymbol{v} can be written as \boldsymbol{v} = c_0\cdot\boldsymbol{v}_0+c_1\cdot\boldsymbol{v}_1+\cdots+c_{n-1}\cdot\boldsymbol{v}_{n-1} for some c_0, c_1, \cdots, c_{n-1} in \mathbb{C} (\mathbb{R}).

definition

A set \{\boldsymbol{v}_0, \boldsymbol{v}_1, \cdots, \boldsymbol{v}_{n-1}\} of vectors in \mathbb{V} is called linearly independent if \boldsymbol{0} = c_0\cdot\boldsymbol{v}_0+c_1\cdot\boldsymbol{v}_1+\cdots+c_{n-1}\cdot\boldsymbol{v}_{n-1} implies that c_0=c_1=\cdots=c_{n-1}=0. This means that the only way a linear combination of the vectors can be the zero vector is if all the c_j are zero.
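
Numerically, linear independence can be tested by stacking the vectors as columns and computing the rank; a sketch with arbitrary example vectors (assuming numpy):

```python
import numpy as np

# Three vectors in C^3, stacked as columns; arbitrary example values.
V = np.array([[1, 0, 1],
              [1j, 1, 0],
              [0, 1, 1j]], dtype=complex)

# The columns are linearly independent iff the matrix has full column rank.
print(np.linalg.matrix_rank(V))   # 3 -> independent

# Appending a linear combination of the columns does not raise the rank.
dep = 2 * V[:, 0] - 1j * V[:, 1]
V4 = np.column_stack([V, dep])
print(np.linalg.matrix_rank(V4))  # still 3 -> the enlarged set is dependent
```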

corollary

If \{\boldsymbol{v}_0, \boldsymbol{v}_1, \cdots, \boldsymbol{v}_{n-1}\} is linearly independent, then no \boldsymbol{v}_i can be written as a linear combination of the others \{\boldsymbol{v}_j\}_{j\neq i}.

corollary

If \{\boldsymbol{v}_0, \boldsymbol{v}_1, \cdots, \boldsymbol{v}_{n-1}\} is linearly independent, then every vector \boldsymbol{0}\neq\boldsymbol{v}\in\mathbb{V} that can be written as a linear combination of them has unique coefficients \{c_i\}_{i=0}^{n-1}.

definition

A set \mathcal{B}=\{\boldsymbol{v}_0, \boldsymbol{v}_1, \cdots, \boldsymbol{v}_{n-1}\}\subseteq\mathbb{V} of vectors is called a basis of a (complex) vector space \mathbb{V} if both

  • \forall \boldsymbol{v}\in\mathbb{V}, \boldsymbol{v}=c_0\cdot\boldsymbol{v}_0+c_1\cdot\boldsymbol{v}_1+\cdots+c_{n-1}\cdot\boldsymbol{v}_{n-1} for some c_0, \cdots, c_{n-1}\in\mathbb{C}, and

  • \{\boldsymbol{v}_i\}_{i=0}^{n-1} is linearly independent.

Dimension

definition

The dimension of a (complex) vector space is the number of elements in a basis of the vector space.

definition

A change of basis matrix or a transition matrix from basis \mathcal{B} to basis \mathcal{D} is a matrix \mathbf{M}_{\mathcal{D}\leftarrow\mathcal{B}} such that the coefficient vectors satisfy \boldsymbol{v}_{\mathcal{D}}=\mathbf{M}_{\mathcal{D}\leftarrow\mathcal{B}}\times \boldsymbol{v}_{\mathcal{B}}.

In other words, MDB\mathbf{M}_{\mathcal{D}\leftarrow\mathcal{B}} is a way of getting the coefficients with respect to one basis from the coefficients with respect to another basis.

remark

Utilities of Transition Matrix

  • Operator re-representation in a new basis: \mathbf{A}_{\mathcal{D}}=\mathbf{M}_{\mathcal{D}\leftarrow\mathcal{B}}^{-1}\times\mathbf{A}_{\mathcal{B}}\times\mathbf{M}_{\mathcal{D}\leftarrow\mathcal{B}}

  • State re-representation in a new basis: \boldsymbol{v}_{\mathcal{D}}=\mathbf{M}_{\mathcal{D}\leftarrow\mathcal{B}}\times \boldsymbol{v}_{\mathcal{B}}

Figure 1.1: The Hadamard matrix for basis transition
example: Hadamard matrix

In \mathbb{R}^2, the transition matrix from the canonical basis \begin{Bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \end{Bmatrix} to this other basis \begin{Bmatrix} \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}, \begin{bmatrix} \frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{2}} \end{bmatrix} \end{Bmatrix} is the Hadamard matrix: \mathbf{H}=\frac{1}{\sqrt{2}}\begin{bmatrix} 1& 1\\1& -1 \end{bmatrix} =\begin{bmatrix} \frac{1}{\sqrt{2}}& \frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}& -\frac{1}{\sqrt{2}} \end{bmatrix}, as shown in Figure 1.1.
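
A numerical sketch of this example (assuming numpy): \mathbf{H} converts canonical-basis coefficients into coefficients for the +45°/−45° basis, is its own inverse, and can re-represent an operator as in the remark above. The NOT-gate matrix used here is an arbitrary illustration:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard matrix

v_B = np.array([1.0, 0.0])   # coefficients in the canonical basis
v_D = H @ v_B                # coefficients in the +45/-45 basis
print(v_D)                   # [0.7071..., 0.7071...]

# H is its own inverse (H @ H = I), so it also converts back.
assert np.allclose(H @ H, np.eye(2))
assert np.allclose(H @ v_D, v_B)

# Operator re-representation: A_D = M^{-1} @ A_B @ M with M = H.
A_B = np.array([[0.0, 1.0], [1.0, 0.0]])   # an arbitrary operator (the NOT gate)
A_D = np.linalg.inv(H) @ A_B @ H
print(np.round(A_D, 10))                    # diagonal in the new basis
```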

Figure 1.2: A ball rolling down a ramp

The motivation to change basis. In physics, we are often faced with a problem in which it is easier to calculate something in a noncanonical basis. For example, consider a ball rolling down a ramp as depicted in Figure 1.2.

Figure 1.3: Problem-solving flowchart

The ball will not be moving in the direction of the canonical basis. Rather, it will be rolling downward in the direction of the +45°/−45° basis. Suppose we wish to calculate when this ball will reach the bottom of the ramp, or what the speed of the ball is. To do this, we change the problem from one in the canonical basis to one in the other basis, where the motion is easier to deal with. Once we have completed the calculations, we change our results back into the more understandable canonical basis and produce the desired answer. We might envision this as the flowchart shown in Figure 1.3.

Throughout this course, we shall go from one basis to another basis, perform some calculations, and finally revert to the original basis. The Hadamard matrix will frequently be the means by which we change the basis.

Inner Product and Hilbert Space

Inner Product

definition

An inner product (also called a dot product or scalar product) on a complex vector space \mathbb{V} is a function \langle\cdot, \cdot\rangle:\mathbb{V}\times \mathbb{V}\rightarrow \mathbb{C} that satisfies the following conditions for all \boldsymbol{v}, \boldsymbol{v}_1, \boldsymbol{v}_2, and \boldsymbol{v}_3 in \mathbb{V} and for all c\in \mathbb{C}:

i. Nondegenerate: \langle\boldsymbol{v}, \boldsymbol{v}\rangle\geq 0 \textrm{\ and\ } \langle\boldsymbol{v}, \boldsymbol{v}\rangle=0\Leftrightarrow \boldsymbol{v}=\boldsymbol{0}

ii. Respects addition: \begin{aligned} \langle\boldsymbol{v}_1+\boldsymbol{v}_2, \boldsymbol{v}_3\rangle&= \langle\boldsymbol{v}_1, \boldsymbol{v}_3\rangle + \langle\boldsymbol{v}_2, \boldsymbol{v}_3\rangle\\ \langle\boldsymbol{v}_1, \boldsymbol{v}_2+\boldsymbol{v}_3\rangle&= \langle\boldsymbol{v}_1, \boldsymbol{v}_2\rangle + \langle\boldsymbol{v}_1, \boldsymbol{v}_3\rangle \end{aligned}

iii. Respects scalar multiplication: \begin{aligned} \langle c\cdot \boldsymbol{v}_1, \boldsymbol{v}_2\rangle &= \overline{c}\times \langle \boldsymbol{v}_1, \boldsymbol{v}_2\rangle\\ \langle \boldsymbol{v}_1, c\cdot \boldsymbol{v}_2\rangle &= c\times \langle \boldsymbol{v}_1, \boldsymbol{v}_2\rangle \end{aligned}

iv. Conjugate symmetric: \langle \boldsymbol{v}_1, \boldsymbol{v}_2\rangle = \overline{\langle \boldsymbol{v}_2, \boldsymbol{v}_1\rangle}

definition

An inner product space is a vector space equipped with an inner product.

example

\mathbb{R}^n: the inner product is given as \langle\boldsymbol{v}_1, \boldsymbol{v}_2\rangle=\boldsymbol{v}_1^{\top}\times \boldsymbol{v}_2

example

\mathbb{C}^n: the inner product is given as \langle\boldsymbol{v}_1, \boldsymbol{v}_2\rangle=\boldsymbol{v}_1^{\dagger}\times \boldsymbol{v}_2

example

\mathbb{R}^{n\times n} has an inner product given for matrices \mathbf{A}, \mathbf{B}\in\mathbb{R}^{n\times n} as \langle\mathbf{A}, \mathbf{B}\rangle=\mathrm{Tr}(\mathbf{A}^{\top}\times \mathbf{B}), where the trace of a square matrix \mathbf{C} is the sum of its diagonal elements, that is, \mathrm{Tr}(\mathbf{C})=\sum_{i=0}^{n-1}\mathbf{C}[i,i]

example

\mathbb{C}^{n\times n} has an inner product given for matrices \mathbf{A}, \mathbf{B}\in\mathbb{C}^{n\times n} as \langle\mathbf{A}, \mathbf{B}\rangle=\mathrm{Tr}(\mathbf{A}^{\dagger}\times \mathbf{B})
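
The vector and matrix inner products of the last two examples are one-liners in numpy (an illustrative sketch with arbitrary values; note that np.vdot conjugates its first argument, matching \boldsymbol{v}_1^{\dagger}\times\boldsymbol{v}_2):

```python
import numpy as np

v1 = np.array([1 + 1j, 2 - 1j])
v2 = np.array([3j, 1])

# Inner product on C^n: v1-dagger times v2.
ip_vec = np.vdot(v1, v2)
assert np.isclose(ip_vec, v1.conj() @ v2)

# Inner product on C^{nxn}: trace of (A-dagger times B).
A = np.array([[1, 1j], [0, 2]], dtype=complex)
B = np.array([[2, 0], [1 - 1j, 1]], dtype=complex)
ip_mat = np.trace(A.conj().T @ B)
print(ip_vec, ip_mat)

# Nondegeneracy: <v, v> is real and nonnegative.
assert np.isclose(np.vdot(v1, v1).imag, 0) and np.vdot(v1, v1).real >= 0
```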

definition

The norm is a unary function |\cdot|: \mathbb{V}\rightarrow \mathbb{R} derived from the inner product, defined as |\boldsymbol{v}|=\sqrt{\langle\boldsymbol{v},\boldsymbol{v}\rangle}, which has the following properties

  • Norm is nondegenerate: |\boldsymbol{v}|>0 \textrm{\ if\ } \boldsymbol{v}\neq \boldsymbol{0} \textrm{\ and\ } |\boldsymbol{0}|=0

  • Norm satisfies the triangle inequality: |\boldsymbol{v}+\boldsymbol{w}|\leq|\boldsymbol{v}|+|\boldsymbol{w}|

  • Norm respects scalar multiplication: |c\cdot\boldsymbol{v}|=|c|\cdot|\boldsymbol{v}|

definition

The distance is a binary function d(\cdot, \cdot): \mathbb{V}\times \mathbb{V}\rightarrow \mathbb{R} derived from the norm, defined as d(\boldsymbol{v}_1, \boldsymbol{v}_2)=|\boldsymbol{v}_1-\boldsymbol{v}_2|=\sqrt{\langle\boldsymbol{v}_1-\boldsymbol{v}_2, \boldsymbol{v}_1-\boldsymbol{v}_2\rangle}, which has the following properties

  • Distance is nondegenerate: d(\boldsymbol{v}, \boldsymbol{w})>0 \textrm{\ if\ } \boldsymbol{v}\neq\boldsymbol{w} \textrm{\ and\ } d(\boldsymbol{v}, \boldsymbol{w})=0\ \Leftrightarrow\ \boldsymbol{v}=\boldsymbol{w}

  • Distance satisfies the triangle inequality: d(\boldsymbol{u}, \boldsymbol{v})\leq d(\boldsymbol{u}, \boldsymbol{w})+d(\boldsymbol{w}, \boldsymbol{v})

  • Distance is symmetric: d(\boldsymbol{u}, \boldsymbol{v})=d(\boldsymbol{v}, \boldsymbol{u})

definition

A basis \mathcal{B}=\{\boldsymbol{v}_0, \boldsymbol{v}_1, \cdots, \boldsymbol{v}_{n-1}\} for an inner product space is called orthonormal if \langle\boldsymbol{v}_i, \boldsymbol{v}_j\rangle= \begin{cases} 1, & \textrm{if}\ \ i=j\\ 0, & \textrm{if}\ \ i\neq j \end{cases} Orthonormal bases have the following property (see the sketch after this list):

  • \forall \boldsymbol{v}\in\mathbb{V} and any orthonormal basis \{\boldsymbol{e}_i\}_{i=0}^{n-1}, we have \boldsymbol{v}=\sum_{i=0}^{n-1}\langle\boldsymbol{e}_i, \boldsymbol{v}\rangle\boldsymbol{e}_i
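
The expansion \boldsymbol{v}=\sum_i\langle\boldsymbol{e}_i,\boldsymbol{v}\rangle\boldsymbol{e}_i can be verified directly; a sketch using the Hadamard basis from the earlier example and an arbitrary vector (assuming numpy):

```python
import numpy as np

# An orthonormal basis of C^2 (the Hadamard basis) and an arbitrary vector.
e0 = np.array([1, 1]) / np.sqrt(2)
e1 = np.array([1, -1]) / np.sqrt(2)
v = np.array([2 + 1j, -1j])

# Coefficients are the inner products <e_i, v>; their sum reconstructs v.
coeffs = [np.vdot(e0, v), np.vdot(e1, v)]
v_rebuilt = coeffs[0] * e0 + coeffs[1] * e1
assert np.allclose(v_rebuilt, v)
```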

Note: the inner product defines the geometry of the vector space (Figure 1.4).

Figure 1.4: Inner product lays the geometric foundation in the vector space

Hilbert Space

definition

Within an inner product space \mathbb{V}, \langle\cdot, \cdot\rangle (with the derived norm and distance function), a sequence of vectors \boldsymbol{v}_0, \boldsymbol{v}_1, \cdots is called a Cauchy sequence if \forall \epsilon>0 there exists an N_0\in\mathbb{N} such that for all m, n\geq N_0, d(\boldsymbol{v}_m, \boldsymbol{v}_n)\leq \epsilon.

definition

An inner product space is complete if every Cauchy sequence \boldsymbol{v}_0, \boldsymbol{v}_1, \cdots converges, i.e., there exists a \overline{\boldsymbol{v}}\in\mathbb{V} such that \lim\limits_{n\rightarrow \infty}d(\boldsymbol{v}_n, \overline{\boldsymbol{v}})=0.

definition

A Hilbert space is a complex inner product space that is complete.

Eigenvalue and Eigenvector

definition

For a matrix \mathbf{A}\in\mathbb{C}^{n\times n}, if there is a number c\in\mathbb{C} and a vector \boldsymbol{0}\neq \boldsymbol{v}\in\mathbb{C}^n such that \mathbf{A}\boldsymbol{v}=c\cdot\boldsymbol{v}, then c is called an eigenvalue of \mathbf{A} and \boldsymbol{v} is called an eigenvector of \mathbf{A} associated with c.
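
numpy computes eigenvalue/eigenvector pairs directly; an illustrative sketch with an arbitrary matrix:

```python
import numpy as np

A = np.array([[2, 1], [1, 2]], dtype=complex)   # an arbitrary matrix
eigvals, eigvecs = np.linalg.eig(A)

# Each column of eigvecs is an eigenvector for the matching eigenvalue.
for c, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, c * v)
print(eigvals)   # the eigenvalues are 3 and 1
```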

Hermitian and Unitary Matrices

Hermitian Matrix

definition

An n-by-n matrix \mathbf{A} is called Hermitian if \mathbf{A}^{\dagger}=\mathbf{A}. In other words, \mathbf{A}[j, k]=\overline{\mathbf{A}[k, j]}.

definition

If \mathbf{A} is a Hermitian matrix, then the operator that it represents is called self-adjoint.

proposition

If \mathbf{A}\in \mathbb{C}^{n\times n} is Hermitian, then \forall \boldsymbol{v},\boldsymbol{w}\in\mathbb{C}^{n} we have \langle\mathbf{A}\boldsymbol{v}, \boldsymbol{w}\rangle=\langle\boldsymbol{v}, \mathbf{A}\boldsymbol{w}\rangle

proof

Proof. \begin{aligned} \langle\mathbf{A}\boldsymbol{v},\boldsymbol{w}\rangle &=\left(\mathbf{A}\boldsymbol{v}\right)^{\dagger}\times \boldsymbol{w} &\textrm{\textcolor{blue}{\ definition of inner product}}\\ &=\boldsymbol{v}^{\dagger}\times \mathbf{A}^{\dagger}\times \boldsymbol{w} &\textrm{\textcolor{blue}{\ multiplication relates to the adjoint}}\\ &=\boldsymbol{v}^{\dagger}\times \mathbf{A}\times \boldsymbol{w} &\textrm{\textcolor{blue}{\ definition of Hermitian matrices}}\\ &=\boldsymbol{v}^{\dagger}\times\left(\mathbf{A}\boldsymbol{w}\right) &\textrm{\textcolor{blue}{\ multiplication is associative}}\\ &=\langle\boldsymbol{v}, \mathbf{A}\boldsymbol{w}\rangle &\textrm{\textcolor{blue}{\ definition of inner product}} \end{aligned} ◻


proposition

All eigenvalues of a Hermitian matrix are real.

proof

Proof. Let \mathbf{A}\in\mathbb{C}^{n\times n} be a Hermitian matrix with an eigenvalue c\in\mathbb{C} and an eigenvector \boldsymbol{v}\in\mathbb{C}^n. \begin{aligned} c\langle\boldsymbol{v},\boldsymbol{v}\rangle &=\langle\boldsymbol{v}, c\boldsymbol{v}\rangle &\textrm{\textcolor{blue}{\ inner product respects scalar multiplication}}\\ &=\langle\boldsymbol{v}, \mathbf{A}\boldsymbol{v}\rangle &\textrm{\textcolor{blue}{\ definition of eigenvalue and eigenvector}}\\ &=\langle\mathbf{A}\boldsymbol{v}, \boldsymbol{v}\rangle &\textrm{\textcolor{blue}{\ see the symmetry proposition above}}\\ &=\langle c\boldsymbol{v}, \boldsymbol{v}\rangle &\textrm{\textcolor{blue}{\ definition of eigenvalue and eigenvector}}\\ &=\overline{c}\langle\boldsymbol{v}, \boldsymbol{v}\rangle &\textrm{\textcolor{blue}{\ inner product respects scalar multiplication}} \end{aligned} Since \boldsymbol{v}\neq\boldsymbol{0}, we have \langle\boldsymbol{v},\boldsymbol{v}\rangle\neq 0; hence c=\overline{c}, i.e., c is real. ◻

proposition

For a Hermitian matrix, eigenvectors with distinct eigenvalues are orthogonal.

proof

Proof. Let \mathbf{A}\in\mathbb{C}^{n\times n} be a Hermitian matrix with two eigenvectors \boldsymbol{v}_1\neq\boldsymbol{v}_2\in\mathbb{C}^n whose eigenvalues c_1, c_2\in\mathbb{C} are distinct. \begin{aligned} c_2\langle\boldsymbol{v}_1,\boldsymbol{v}_2\rangle &=\langle\boldsymbol{v}_1, c_2\boldsymbol{v}_2\rangle &\textrm{\textcolor{blue}{\ inner product respects scalar multiplication}}\\ &=\langle\boldsymbol{v}_1, \mathbf{A}\boldsymbol{v}_2\rangle &\textrm{\textcolor{blue}{\ definition of eigenvalue and eigenvector}}\\ &=\langle\mathbf{A}\boldsymbol{v}_1, \boldsymbol{v}_2\rangle &\textrm{\textcolor{blue}{\ see the symmetry proposition above}}\\ &=\langle c_1\boldsymbol{v}_1, \boldsymbol{v}_2\rangle &\textrm{\textcolor{blue}{\ definition of eigenvalue and eigenvector}}\\ &=\overline{c_1}\langle\boldsymbol{v}_1, \boldsymbol{v}_2\rangle &\textrm{\textcolor{blue}{\ inner product respects scalar multiplication}}\\ &=c_1\langle\boldsymbol{v}_1, \boldsymbol{v}_2\rangle &\textrm{\textcolor{blue}{\ eigenvalues of Hermitian matrices are real}} \end{aligned} Since c_1\neq c_2, this forces \langle\boldsymbol{v}_1, \boldsymbol{v}_2\rangle=0. ◻

proposition

Every self-adjoint operator \mathbf{A} on a finite-dimensional complex vector space \mathbb{V} can be represented by a diagonal matrix whose diagonal entries are the eigenvalues of \mathbf{A}, and the eigenvectors of \mathbf{A} form an orthonormal basis for \mathbb{V} (we shall call this basis an eigenbasis).
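
numpy's eigh routine is specialized to Hermitian matrices and returns exactly the data this proposition promises: real eigenvalues and an orthonormal eigenbasis. A sketch with an arbitrary Hermitian matrix (assuming numpy):

```python
import numpy as np

A = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])           # arbitrary Hermitian matrix: A equals its adjoint
assert np.allclose(A, A.conj().T)

eigvals, eigvecs = np.linalg.eigh(A)  # eigh exploits the Hermitian structure
print(eigvals)                        # real eigenvalues, sorted ascending

# Columns of eigvecs form an orthonormal eigenbasis, and A is diagonal in it.
assert np.allclose(eigvecs.conj().T @ eigvecs, np.eye(2))
assert np.allclose(eigvecs.conj().T @ A @ eigvecs, np.diag(eigvals))
```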

Physical Meaning of Hermitian Matrix. Hermitian matrices and their eigenbases will play a major role in our story. We shall see in the following lectures that associated with every physical observable of a quantum system there is a corresponding Hermitian matrix. Measurements of that observable always lead to a state that is represented by one of the eigenvectors of the associated Hermitian matrix.

Unitary Matrix

definition

If a matrix \mathbf{U}\in\mathbb{C}^{n\times n} satisfies \mathbf{U}\times \mathbf{U}^{\dagger} = \mathbf{U}^{\dagger}\times \mathbf{U}=\mathbf{I}_n, then \mathbf{U} is a unitary matrix. In particular, every unitary matrix is invertible, with \mathbf{U}^{-1}=\mathbf{U}^{\dagger}.

example

\mathbf{U}_1=\begin{bmatrix} \cos\theta &-\sin\theta &0\\[8pt] \sin\theta &\cos\theta &0\\[8pt] 0 &0 &1 \end{bmatrix} for any \theta, and \mathbf{U}_2=\begin{bmatrix} \frac{1+i}{2} &\frac{i}{\sqrt{3}} &\frac{3+i}{2\sqrt{15}}\\[8pt] \frac{-1}{2} &\frac{1}{\sqrt{3}} &\frac{4+3i}{2\sqrt{15}}\\[8pt] \frac{1}{2} &\frac{-i}{\sqrt{3}} &\frac{5i}{2\sqrt{15}} \end{bmatrix} are both unitary (see the numerical check below).
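
A numerical check of \mathbf{U}_1 against the definition, and of the geometry-preserving propositions proved below (\theta is an arbitrary angle; assuming numpy):

```python
import numpy as np

theta = 0.3                       # arbitrary angle
c, s = np.cos(theta), np.sin(theta)
U1 = np.array([[c, -s, 0],
               [s,  c, 0],
               [0,  0, 1]], dtype=complex)

# Definition: U times U-dagger equals U-dagger times U equals I.
assert np.allclose(U1 @ U1.conj().T, np.eye(3))
assert np.allclose(U1.conj().T @ U1, np.eye(3))

# Unitary matrices preserve inner products and norms (hence distances).
v = np.array([1 + 1j, 0, 2j])
w = np.array([0, 1, 1 - 1j])
assert np.isclose(np.vdot(U1 @ v, U1 @ w), np.vdot(v, w))
assert np.isclose(np.linalg.norm(U1 @ v), np.linalg.norm(v))

# And U-dagger undoes U's action.
assert np.allclose(U1.conj().T @ (U1 @ v), v)
```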

proposition

If \mathbf{U}\in\mathbb{C}^{n\times n} is unitary, then \forall \boldsymbol{v}, \boldsymbol{w}\in\mathbb{C}^n we have \langle\mathbf{U}\boldsymbol{v}, \mathbf{U}\boldsymbol{w}\rangle=\langle\boldsymbol{v}, \boldsymbol{w}\rangle

proof

Proof. \begin{aligned} \langle\mathbf{U}\boldsymbol{v}, \mathbf{U}\boldsymbol{w}\rangle &=\left(\mathbf{U}\boldsymbol{v}\right)^{\dagger}\times \left(\mathbf{U}\boldsymbol{w}\right) &\textrm{\textcolor{blue}{\ definition of inner product}}\\ &=\boldsymbol{v}^{\dagger}\times\mathbf{U}^{\dagger}\times \mathbf{U}\times\boldsymbol{w} &\textrm{\textcolor{blue}{\ multiplication relates to the adjoint}}\\ &=\boldsymbol{v}^{\dagger}\times \mathbf{I}\times \boldsymbol{w} &\textrm{\textcolor{blue}{\ definition of unitary matrices}}\\ &=\langle \boldsymbol{v}, \boldsymbol{w}\rangle &\textrm{\textcolor{blue}{\ definition of inner product}} \end{aligned} ◻


proposition

If \mathbf{U}\in\mathbb{C}^{n\times n} is unitary, then \forall \boldsymbol{v}\in\mathbb{C}^n we have |\mathbf{U}\boldsymbol{v}|=|\boldsymbol{v}|

proof

Proof. \begin{aligned} |\mathbf{U}\boldsymbol{v}| &=\sqrt{\langle\mathbf{U}\boldsymbol{v}, \mathbf{U}\boldsymbol{v}\rangle} &\textrm{\textcolor{blue}{\ definition of norm}}\\ &=\sqrt{\langle\boldsymbol{v}, \boldsymbol{v}\rangle} &\textrm{\textcolor{blue}{\ unitary matrices preserve the inner product}}\\ &=|\boldsymbol{v}| &\textrm{\textcolor{blue}{\ definition of norm}} \end{aligned} ◻


proposition

If \mathbf{U}\in\mathbb{C}^{n\times n} is unitary, then \forall \boldsymbol{v}, \boldsymbol{w}\in\mathbb{C}^n we have d(\mathbf{U}\boldsymbol{v}, \mathbf{U}\boldsymbol{w})=d(\boldsymbol{v}, \boldsymbol{w})

proof

Proof. \begin{aligned} d(\mathbf{U}\boldsymbol{v}, \mathbf{U}\boldsymbol{w}) &=|\mathbf{U}\boldsymbol{v}-\mathbf{U}\boldsymbol{w}| &\textrm{\textcolor{blue}{\ definition of distance}}\\ &=|\mathbf{U}(\boldsymbol{v}-\boldsymbol{w})| &\textrm{\textcolor{blue}{\ multiplication distributes over addition}}\\ &=|\boldsymbol{v}-\boldsymbol{w}| &\textrm{\textcolor{blue}{\ unitary matrices preserve norm}}\\ &=d(\boldsymbol{v}, \boldsymbol{w}) &\textrm{\textcolor{blue}{\ definition of distance}} \end{aligned} ◻


proposition

Every eigenvalue of a unitary matrix has modulus 1.

proposition

A unitary matrix is a transition matrix from one orthonormal basis to another orthonormal basis.

Physical meaning of unitary matrix. What does unitary really mean? As we saw, it means that it preserves the geometry. But it also means something else: if \mathbf{U} is unitary and \mathbf{U}\mathbf{V}=\mathbf{V}^{'}, then we can easily form \mathbf{U}^{\dagger} and multiply both sides of the equation by \mathbf{U}^{\dagger} to get \mathbf{U}^{\dagger}\mathbf{U}\mathbf{V}=\mathbf{U}^{\dagger}\mathbf{V}^{'}, or \mathbf{V}=\mathbf{U}^{\dagger}\mathbf{V}^{'}. In other words, because \mathbf{U} is unitary, there is a related matrix that can "undo" the action that \mathbf{U} performs. \mathbf{U}^{\dagger} takes the result of \mathbf{U}'s action and gets back the original vector. In the quantum world, all actions (that are not measurements) are "undoable" or "reversible" in such a manner.

Figure 1.5: The role of Hermitian and unitary matrices

The roles of Hermitian and unitary matrices in quantum computing. As shown in Figure 1.5, the Hermitian matrix plays an important role in the quantum measurement phase, which decides the concrete basis in which to observe the final computational result \ket{\psi^{*}}. Once the basis (\mathbf{H}_1 or \mathbf{H}_2) is decided, the observation result must probabilistically collapse onto one of the eigenvectors of the corresponding basis. The unitary matrix plays the role of an action that changes the state of the quantum computer. Given its reversibility, every action performed in quantum computing can be undone by performing the action described by \mathbf{U}^{\dagger}. The relations of identity, Hermitian, unitary, and square matrices are shown in Figure 1.6.

Figure 1.6: Types of matrices.