Complex Vector Space
Definition. A complex vector space is a nonempty set $\mathbb{V}$, whose elements we shall call vectors, with three operations
- Addition: $+ : \mathbb{V} \times \mathbb{V} \rightarrow \mathbb{V}$
- Negation: $- : \mathbb{V} \rightarrow \mathbb{V}$
- Scalar multiplication: $\cdot : \mathbb{C} \times \mathbb{V} \rightarrow \mathbb{V}$
and a distinguished element $\mathbf{0} \in \mathbb{V}$ called the zero vector. These operations and zero must satisfy the following properties: for all $V, W, X \in \mathbb{V}$ and for all $c, c_1, c_2 \in \mathbb{C}$,
i. Commutativity of addition: $V + W = W + V$,
ii. Associativity of addition: $(V + W) + X = V + (W + X)$,
iii. Additive identity: $V + \mathbf{0} = V$,
iv. Additive inverse: $V + (-V) = \mathbf{0}$,
v. Multiplication identity: $1 \cdot V = V$,
vi. Scalar multiplication distributes over addition: $c \cdot (V + W) = c \cdot V + c \cdot W$,
vii. Scalar multiplication distributes over complex addition: $(c_1 + c_2) \cdot V = c_1 \cdot V + c_2 \cdot V$.
ex:n-dim-vector-space $\mathbb{C}^n$, the set of vectors of length $n$ with complex entries, is a complex vector space.
ex:nn-dim-vector-space $\mathbb{C}^{n \times n}$, the set of all $n$-by-$n$ matrices (two-dimensional arrays) with complex entries, is a complex vector space.
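These two examples can be checked numerically. Below is a minimal sketch using NumPy (the library choice, the particular vectors, and the scalars are assumptions for illustration, not part of the text), verifying a few of the axioms in $\mathbb{C}^3$ and $\mathbb{C}^{2 \times 2}$:

```python
import numpy as np

# Arbitrary elements of C^3 and arbitrary complex scalars (illustration only)
V = np.array([1 + 2j, 3 - 1j, 0.5j])
W = np.array([2 - 1j, 1j, 4.0])
c1, c2 = 2 + 1j, -1j

# Axiom i: commutativity of addition
assert np.allclose(V + W, W + V)
# Axioms vi and vii: scalar multiplication distributes over both additions
assert np.allclose(c1 * (V + W), c1 * V + c1 * W)
assert np.allclose((c1 + c2) * V, c1 * V + c2 * V)

# The same axioms hold entrywise in C^{2x2}, the space of 2-by-2 matrices
A = np.array([[1j, 2], [3, 4 - 1j]])
B = np.array([[0, 1 + 1j], [2j, 1]])
assert np.allclose(c1 * (A + B), c1 * A + c1 * B)
```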
Unary Operations
Three unary operations for $A \in \mathbb{C}^{n \times n}$:
- Transpose: $A^T$, where $A^T[j, k] = A[k, j]$
- Conjugate: $\overline{A}$, where $\overline{A}[j, k] = \overline{A[j, k]}$
- Adjoint: $A^\dagger = \overline{(A^T)} = (\overline{A})^T$
and
- Transpose is an involution: $(A^T)^T = A$
- Transpose respects addition: $(A + B)^T = A^T + B^T$
- Transpose respects scalar multiplication: $(c \cdot A)^T = c \cdot A^T$
and
- Conjugate is an involution: $\overline{\overline{A}} = A$
- Conjugate respects addition: $\overline{A + B} = \overline{A} + \overline{B}$
- Conjugate respects scalar multiplication: $\overline{c \cdot A} = \overline{c} \cdot \overline{A}$
and
- Adjoint is an involution: $(A^\dagger)^\dagger = A$
- Adjoint respects addition: $(A + B)^\dagger = A^\dagger + B^\dagger$
- Adjoint respects scalar multiplication: $(c \cdot A)^\dagger = \overline{c} \cdot A^\dagger$
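The contrast among the three operations shows up in how they treat scalars: transpose leaves $c$ alone, while conjugate and adjoint replace it by $\overline{c}$. A quick NumPy sketch (the matrices and the `adj` helper are assumptions for illustration):

```python
import numpy as np

A = np.array([[1 + 1j, 2 - 1j], [3j, 4]])
B = np.array([[2, 1j], [1 - 1j, 0]])
c = 2 - 3j

adj = lambda M: M.conj().T  # adjoint (dagger) = conjugate transpose

# Each operation is an involution: applying it twice recovers A
assert np.allclose(A.T.T, A)
assert np.allclose(A.conj().conj(), A)
assert np.allclose(adj(adj(A)), A)

# All three respect addition
assert np.allclose((A + B).T, A.T + B.T)
assert np.allclose(adj(A + B), adj(A) + adj(B))

# Transpose keeps the scalar c; conjugate and adjoint conjugate it
assert np.allclose((c * A).T, c * A.T)
assert np.allclose((c * A).conj(), c.conjugate() * A.conj())
assert np.allclose(adj(c * A), c.conjugate() * adj(A))
```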
Matrix Multiplication
- Matrix multiplication distributes over addition: $A \cdot (B + C) = A \cdot B + A \cdot C$
- Matrix multiplication respects scalar multiplication: $c \cdot (A \cdot B) = (c \cdot A) \cdot B = A \cdot (c \cdot B)$
- Matrix multiplication relates to the transpose: $(A \cdot B)^T = B^T \cdot A^T$
- Matrix multiplication respects the conjugate: $\overline{A \cdot B} = \overline{A} \cdot \overline{B}$
- Matrix multiplication relates to the adjoint: $(A \cdot B)^\dagger = B^\dagger \cdot A^\dagger$
The physical explanation. The elements of $\mathbb{C}^n$ are the ways of describing the states of a quantum system. Some suitable elements of $\mathbb{C}^{n \times n}$ will correspond to the changes that occur to the states of a quantum system. Given a state $V \in \mathbb{C}^n$ and a matrix $A \in \mathbb{C}^{n \times n}$, we shall form another state of the system $A \cdot V$, which is an element of $\mathbb{C}^n$. Formally, $\cdot$ in this case is a function $\cdot : \mathbb{C}^{n \times n} \times \mathbb{C}^n \rightarrow \mathbb{C}^n$. We say that the algebra of matrices "acts" on the vectors to yield new vectors.
Linear Map
A linear map from $\mathbb{V}$ to $\mathbb{V}'$ is a function $f : \mathbb{V} \rightarrow \mathbb{V}'$ where
- $f$ respects the addition: $f(V + W) = f(V) + f(W)$
- $f$ respects the scalar multiplication: $f(c \cdot V) = c \cdot f(V)$
The physical explanation. We shall call any linear map from a complex vector space to itself an operator. If $F$ is an operator on $\mathbb{C}^n$ and $A$ is an $n$-by-$n$ matrix such that for all $V$ we have $F(V) = A \cdot V$, then we say that $F$ is represented by $A$. Several different matrices might represent the same operator.
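A small sketch of an operator and its matrix representation (the particular matrix, here the NOT gate on $\mathbb{C}^2$, is an illustrative assumption):

```python
import numpy as np

A = np.array([[0, 1], [1, 0]])   # a matrix representing an operator on C^2
f = lambda V: A @ V              # the operator F(V) = A.V

V = np.array([1 + 1j, 2j])
W = np.array([3, -1j])
c = 2 - 1j

# f is linear: it respects addition and scalar multiplication
assert np.allclose(f(V + W), f(V) + f(W))
assert np.allclose(f(c * V), c * f(V))
```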
Basis and Dimension
Basis
Let $\mathbb{V}$ be a complex (real) vector space. $V \in \mathbb{V}$ is a linear combination of the vectors $V_1, V_2, \ldots, V_m$ in $\mathbb{V}$ if $V$ can be written as $V = c_1 \cdot V_1 + c_2 \cdot V_2 + \cdots + c_m \cdot V_m$ for some $c_1, c_2, \ldots, c_m$ in $\mathbb{C}$ ($\mathbb{R}$).
A set $\{V_1, V_2, \ldots, V_m\}$ of vectors in $\mathbb{V}$ is called linearly independent if $\mathbf{0} = c_1 \cdot V_1 + c_2 \cdot V_2 + \cdots + c_m \cdot V_m$ implies that $c_1 = c_2 = \cdots = c_m = 0$. This means that the only way that a linear combination of the vectors can be the zero vector is if all the $c_j$ are zero. Equivalently:
- For any $j$, $V_j$ cannot be written as a linear combination of the others.
- For any $V$ in the span of $\{V_1, \ldots, V_m\}$, the coefficients $c_1, \ldots, c_m$ with $V = c_1 \cdot V_1 + \cdots + c_m \cdot V_m$ are unique.
A set $B = \{V_1, V_2, \ldots, V_n\}$ of vectors is called a basis of a (complex) vector space $\mathbb{V}$ if both
- every $V \in \mathbb{V}$ can be written as a linear combination of vectors in $B$, and
- $B$ is linearly independent.
Dimension
The dimension of a (complex) vector space is the number of elements in a basis of the vector space.
A change of basis matrix or a transition matrix from basis $B$ to basis $D$ is a matrix $M$ such that the coefficients $v_B$ (with respect to $B$) and $v_D$ (with respect to $D$) of any vector satisfy $v_D = M \cdot v_B$. In other words, $M$ is a way of getting the coefficients with respect to one basis from the coefficients with respect to another basis.
Utilities of Transition Matrix
- Operator re-representation in a new basis: if $M$ is the transition matrix from basis $B$ to basis $D$, an operator represented by $A_B$ in basis $B$ is represented by $A_D = M \cdot A_B \cdot M^{-1}$ in basis $D$
- State re-representation in a new basis: a state with coefficients $v_B$ in basis $B$ has coefficients $v_D = M \cdot v_B$ in basis $D$
ex:hadamard In $\mathbb{R}^2$, the transition matrix from the canonical basis $\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$ to the $+45^\circ$/$-45^\circ$ basis $\left\{ \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}, \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix} \right\}$ is the Hadamard matrix $H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$, as shown in Figure 1.1.
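A numerical sketch of this example (the diagonal operator used for re-representation is an arbitrary illustrative choice):

```python
import numpy as np

# The Hadamard matrix: transition from the canonical basis of R^2
# to the +45/-45 degree basis
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

v = np.array([1.0, 0.0])          # coefficients in the canonical basis
v_new = H @ v                     # coefficients in the rotated basis
assert np.allclose(v_new, [1 / np.sqrt(2), 1 / np.sqrt(2)])

# H is its own inverse, so the same matrix changes the basis back
assert np.allclose(H @ H, np.eye(2))

# Operator re-representation: A in the new basis is H A H^{-1},
# and changing back recovers the original representation
A = np.array([[2.0, 0.0], [0.0, 3.0]])
A_new = H @ A @ np.linalg.inv(H)
assert np.allclose(np.linalg.inv(H) @ A_new @ H, A)
```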
The motivation to change basis. In physics, we are often faced with a problem in which it is easier to calculate something in a noncanonical basis. For example, consider a ball rolling down a ramp as depicted in Figure 1.2.
The ball will not be moving in the direction of the canonical basis. Rather, it will be rolling downward in the direction of the $+45^\circ$/$-45^\circ$ basis. Suppose we wish to calculate when this ball will reach the bottom of the ramp, or what its speed will be. To do this, we change the problem from one in the canonical basis to one in the other basis. In this other basis, the motion is easier to deal with. Once we have completed the calculations, we change our results into the more understandable canonical basis and produce the desired answer. We might envision this as the flowchart shown in Figure 1.3.
Throughout this course, we shall go from one basis to another basis, perform some calculations, and finally revert to the original basis. The Hadamard matrix will frequently be the means by which we change the basis.
Inner Product and Hilbert Space
Inner Product
An inner product (also called a dot product or scalar product) on a complex vector space $\mathbb{V}$ is a function $\langle \cdot, \cdot \rangle : \mathbb{V} \times \mathbb{V} \rightarrow \mathbb{C}$ that satisfies the following conditions for all $V$, $V_1$, $V_2$, and $W$ in $\mathbb{V}$ and for $c \in \mathbb{C}$:
i. Nondegenerate: $\langle V, V \rangle \geq 0$, with $\langle V, V \rangle = 0$ if and only if $V = \mathbf{0}$
ii. Respects addition: $\langle V_1 + V_2, W \rangle = \langle V_1, W \rangle + \langle V_2, W \rangle$ and $\langle W, V_1 + V_2 \rangle = \langle W, V_1 \rangle + \langle W, V_2 \rangle$
iii. Respects scalar multiplication: $\langle c \cdot V, W \rangle = \overline{c} \cdot \langle V, W \rangle$ and $\langle V, c \cdot W \rangle = c \cdot \langle V, W \rangle$
iv. Skew symmetric: $\langle V, W \rangle = \overline{\langle W, V \rangle}$
A vector space with an inner product is called an inner product space.
In $\mathbb{R}^n$, the inner product is given as $\langle V, W \rangle = V^T \cdot W = \sum_{j} V[j] \cdot W[j]$.
ex:Cn In $\mathbb{C}^n$, the inner product is given as $\langle V, W \rangle = V^\dagger \cdot W = \sum_{j} \overline{V[j]} \cdot W[j]$.
ex:Rnn $\mathbb{R}^{n \times n}$ has an inner product given for matrices $A, B \in \mathbb{R}^{n \times n}$ as $\langle A, B \rangle = \operatorname{trace}(A^T \cdot B)$, where the trace of a square matrix $C$ is given as the sum of the diagonal elements. That is, $\operatorname{trace}(C) = \sum_{j} C[j, j]$.
ex:Cnn $\mathbb{C}^{n \times n}$ has an inner product given for matrices $A, B \in \mathbb{C}^{n \times n}$ as $\langle A, B \rangle = \operatorname{trace}(A^\dagger \cdot B)$.
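These inner products are easy to compute with NumPy; the sketch below (arbitrary vectors and matrices, chosen for illustration) checks the $\mathbb{C}^n$ and $\mathbb{C}^{n \times n}$ cases against the defining properties:

```python
import numpy as np

adj = lambda M: M.conj().T  # adjoint (dagger)

# C^n: <V, W> = V^dagger . W (the first argument is conjugated)
V = np.array([1 + 1j, 2j])
W = np.array([3, 1 - 1j])
ip = V.conj() @ W
assert np.isclose(ip, np.vdot(V, W))  # np.vdot uses the same convention

# C^{nxn}: <A, B> = trace(A^dagger . B)
A = np.array([[1j, 2], [0, 1]])
B = np.array([[1, 1 - 1j], [2j, 3]])
ip_mat = np.trace(adj(A) @ B)

# Skew symmetry: <A, B> = conjugate of <B, A>
assert np.isclose(ip_mat, np.conj(np.trace(adj(B) @ A)))
# Nondegeneracy: <A, A> is real and nonnegative
self_ip = np.trace(adj(A) @ A)
assert self_ip.real >= 0 and np.isclose(self_ip.imag, 0)
```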
Norm is a unary function derived from the inner product, defined as $|V| = \sqrt{\langle V, V \rangle}$, which has the following properties:
- Norm is nondegenerate: $|V| \geq 0$, and $|V| = 0$ if and only if $V = \mathbf{0}$
- Norm satisfies the triangle inequality: $|V + W| \leq |V| + |W|$
- Norm respects scalar multiplication: $|c \cdot V| = |c| \cdot |V|$
Distance is a binary function defined from the norm as $d(V, W) = |V - W|$, which has the following properties:
- Distance is nondegenerate: $d(V, W) \geq 0$, and $d(V, W) = 0$ if and only if $V = W$
- Distance satisfies the triangle inequality: $d(V, X) \leq d(V, W) + d(W, X)$
- Distance is symmetric: $d(V, W) = d(W, V)$
A basis $\{V_1, V_2, \ldots, V_n\}$ for an inner product space is called orthonormal if $\langle V_j, V_k \rangle = \delta_{j,k}$, with the following property:
- For any $V \in \mathbb{V}$ and any orthonormal basis $\{V_1, \ldots, V_n\}$ we have $V = \sum_{j} \langle V_j, V \rangle \cdot V_j$
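A sketch of this expansion in the $+45^\circ$/$-45^\circ$ orthonormal basis of $\mathbb{C}^2$ (the vector $V$ is an arbitrary illustrative choice):

```python
import numpy as np

# An orthonormal basis of C^2: the Hadamard basis vectors
b1 = np.array([1, 1]) / np.sqrt(2)
b2 = np.array([1, -1]) / np.sqrt(2)
assert np.isclose(np.vdot(b1, b2), 0)   # orthogonal
assert np.isclose(np.vdot(b1, b1), 1)   # unit length

# The coefficient of each basis vector is its inner product with V
V = np.array([2 + 1j, -3j])
reconstructed = np.vdot(b1, V) * b1 + np.vdot(b2, V) * b2
assert np.allclose(reconstructed, V)
```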
Note: the inner product defines a geometry on the vector space (Figure 1.4).
Hilbert Space
Within an inner product space $\mathbb{V}$ (with the derived norm and distance function), a sequence of vectors $V_1, V_2, V_3, \ldots$ is called a Cauchy sequence if for every $\epsilon > 0$ there exists an $N_0$ such that $d(V_m, V_n) \leq \epsilon$ for all $m, n \geq N_0$.
An inner product space is complete if every Cauchy sequence $V_1, V_2, \ldots$ converges in it, i.e., there exists a $V \in \mathbb{V}$ such that $\lim_{n \to \infty} d(V_n, V) = 0$.
A Hilbert space is a complex inner product space that is complete.
Eigenvalue and Eigenvector
For a matrix $A \in \mathbb{C}^{n \times n}$, if there is a number $\lambda \in \mathbb{C}$ and a nonzero vector $V \in \mathbb{C}^n$ such that $A \cdot V = \lambda \cdot V$, then $\lambda$ is called an eigenvalue of $A$ and $V$ is called an eigenvector of $A$ associated with $\lambda$.
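A sketch of computing eigenvalues and eigenvectors numerically (the matrix is an arbitrary illustrative choice; `np.linalg.eig` returns eigenvalues and, column by column, the matching eigenvectors):

```python
import numpy as np

A = np.array([[2, 1], [1, 2]], dtype=complex)
eigvals, eigvecs = np.linalg.eig(A)

# Each column of eigvecs is an eigenvector for the matching eigenvalue:
# A . V = lambda . V
for lam, V in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ V, lam * V)
```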
Hermitian and Unitary Matrices
Hermitian Matrix
An $n$-by-$n$ matrix $A$ is called hermitian if $A^\dagger = A$. In other words, $A[j, k] = \overline{A[k, j]}$.
If $A$ is a hermitian matrix, then the operator that it represents is called self-adjoint.
If $A$ is Hermitian, we have $\langle A \cdot V, W \rangle = \langle V, A \cdot W \rangle$ for all $V, W \in \mathbb{C}^n$.
Proof. $\langle A \cdot V, W \rangle = (A \cdot V)^\dagger \cdot W = V^\dagger \cdot A^\dagger \cdot W = V^\dagger \cdot (A \cdot W) = \langle V, A \cdot W \rangle$. ◻
For a Hermitian matrix, all its eigenvalues are real.
Proof. Let $A$ be a Hermitian matrix with an eigenvalue $\lambda$ and an eigenvector $V$. Then $\lambda \cdot \langle V, V \rangle = \langle V, \lambda \cdot V \rangle = \langle V, A \cdot V \rangle = \langle A \cdot V, V \rangle = \langle \lambda \cdot V, V \rangle = \overline{\lambda} \cdot \langle V, V \rangle$. Since $V \neq \mathbf{0}$, we have $\langle V, V \rangle \neq 0$, and hence $\lambda = \overline{\lambda}$, i.e., $\lambda$ is real. ◻
For a Hermitian matrix, distinct eigenvectors that have distinct eigenvalues are orthogonal.
Proof. Let $A$ be a Hermitian matrix with two eigenvectors $V_1, V_2$ and their related distinct eigenvalues $\lambda_1 \neq \lambda_2$. Since the eigenvalues are real, $\lambda_1 \cdot \langle V_1, V_2 \rangle = \overline{\lambda_1} \cdot \langle V_1, V_2 \rangle = \langle \lambda_1 \cdot V_1, V_2 \rangle = \langle A \cdot V_1, V_2 \rangle = \langle V_1, A \cdot V_2 \rangle = \lambda_2 \cdot \langle V_1, V_2 \rangle$. Hence $(\lambda_1 - \lambda_2) \cdot \langle V_1, V_2 \rangle = 0$, and since $\lambda_1 \neq \lambda_2$, we conclude $\langle V_1, V_2 \rangle = 0$. ◻
Every self-adjoint operator $A$ on a finite-dimensional complex vector space $\mathbb{V}$ can be represented by a diagonal matrix whose diagonal entries are the eigenvalues of $A$, and whose eigenvectors form an orthonormal basis for $\mathbb{V}$ (we shall call this basis an eigenbasis).
Physical Meaning of Hermitian Matrix. Hermitian matrices and their eigenbases will play a major role in our story. We shall see in the following lectures that associated with every physical observable of a quantum system there is a corresponding Hermitian matrix. Measurements of that observable always lead to a state that is represented by one of the eigenvectors of the associated Hermitian matrix.
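The three facts above, real eigenvalues, orthogonal eigenvectors, and diagonalization in an eigenbasis, can be verified numerically. A sketch with an arbitrary Hermitian matrix, using NumPy's `eigh` (which exploits Hermitian structure):

```python
import numpy as np

A = np.array([[2, 1 - 1j], [1 + 1j, 3]])
assert np.allclose(A, A.conj().T)               # A is Hermitian

eigvals, eigvecs = np.linalg.eigh(A)
assert np.allclose(eigvals.imag, 0)             # all eigenvalues are real
# The eigenvectors form an orthonormal eigenbasis: V^dagger . V = I
assert np.allclose(eigvecs.conj().T @ eigvecs, np.eye(2))
# In this eigenbasis, A is represented by a diagonal matrix of eigenvalues
assert np.allclose(eigvecs.conj().T @ A @ eigvecs, np.diag(eigvals))
```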
Unitary Matrix
Given an invertible matrix $U \in \mathbb{C}^{n \times n}$ such that $U \cdot U^\dagger = U^\dagger \cdot U = I_n$, then $U$ is a unitary matrix. Equivalently, $U^{-1} = U^\dagger$ for any unitary $U$.
If $U$ is unitary, we have $\langle U \cdot V, U \cdot W \rangle = \langle V, W \rangle$, i.e., unitary matrices preserve inner products.
Proof. $\langle U \cdot V, U \cdot W \rangle = (U \cdot V)^\dagger \cdot (U \cdot W) = V^\dagger \cdot U^\dagger \cdot U \cdot W = V^\dagger \cdot W = \langle V, W \rangle$. ◻
If $U$ is unitary, we have $|U \cdot V| = |V|$, i.e., unitary matrices preserve norms.
Proof. $|U \cdot V| = \sqrt{\langle U \cdot V, U \cdot V \rangle} = \sqrt{\langle V, V \rangle} = |V|$. ◻
If $U$ is unitary, we have $d(U \cdot V, U \cdot W) = d(V, W)$, i.e., unitary matrices preserve distances.
Proof. $d(U \cdot V, U \cdot W) = |U \cdot V - U \cdot W| = |U \cdot (V - W)| = |V - W| = d(V, W)$. ◻
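These preservation properties can be checked numerically with the Hadamard matrix, which is unitary (the vectors below are arbitrary illustrative choices):

```python
import numpy as np

# The Hadamard matrix is unitary: U . U^dagger = I
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
adj = lambda M: M.conj().T
assert np.allclose(U @ adj(U), np.eye(2))
assert np.allclose(adj(U), np.linalg.inv(U))        # U^{-1} = U^dagger

V = np.array([1 + 2j, 3 - 1j])
W = np.array([2j, 1.0])

# Unitary matrices preserve inner products, norms, and distances
assert np.isclose(np.vdot(U @ V, U @ W), np.vdot(V, W))
assert np.isclose(np.linalg.norm(U @ V), np.linalg.norm(V))
assert np.isclose(np.linalg.norm(U @ V - U @ W), np.linalg.norm(V - W))
```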