Calendar Description
Matrices, operations on matrices. Determinants. Adjoints and inverses.
Solution of linear equations: elimination and iterative methods.
Eigenvalues and eigenvectors with engineering applications. Complex numbers.
The philosophy of the course is that:
- We want you to understand that complex numbers are as easy to use and
as natural as the real numbers, with the only significant difference being
that they cannot be ordered. To this end, you will learn to use and manipulate
complex numbers with both the rectangular and polar representations,
using the real and imaginary components, and the magnitude and argument,
respectively. You will learn complex arithmetic, including addition,
subtraction, multiplication and division, for which we will use the
complex conjugate. We will also describe integer exponentiation.
We will then see that both real numbers and complex numbers satisfy the
properties of a field. We will see four inequalities involving complex
numbers, the fundamental theorem of algebra, the roots of unity, the
roots of polynomials with real coefficients, and geometric sums when
the ratio is complex.
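As a small illustration of these operations, here is a minimal sketch in
Python (our choice of language for illustration only; the course does not
prescribe any software), using the built-in complex type and the standard
cmath module:

    import cmath

    z = 3 + 4j                      # rectangular form: real part 3, imaginary part 4
    w = -1 + 1j

    print(z.real, z.imag)           # components: 3.0 4.0
    print(abs(z), cmath.phase(z))   # polar form: magnitude 5.0 and argument (radians)

    print(z + w, z - w, z * w)      # addition, subtraction, multiplication
    print(z / w)                    # division...
    print(z * w.conjugate() / abs(w) ** 2)   # ...done explicitly via the conjugate

    print(z ** 3)                   # integer exponentiation

    # The four fourth roots of unity, exp(2*pi*i*k/4) for k = 0, 1, 2, 3:
    print([cmath.exp(2j * cmath.pi * k / 4) for k in range(4)])
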
- You will learn about vectors and vector operations, including scalar
multiplication and vector addition, as well as the properties of vector spaces.
We will focus on finite-dimensional vector spaces, but we will
also look at the vector spaces of polynomials, semi-infinite sequences, and
functions of a real variable.
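A brief NumPy sketch (again, illustrative only) of the two vector-space
operations, applied both to coordinate vectors and to polynomials represented
by their coefficients:

    import numpy as np

    u = np.array([1.0, -2.0, 0.5])
    v = np.array([4.0, 0.0, -1.0])

    print(2.5 * u)      # scalar multiplication, componentwise
    print(u + v)        # vector addition, componentwise

    # Polynomials form a vector space with the same two operations;
    # here p(x) = 1 + 2x - x^2 is stored as its coefficient list.
    p = np.array([1.0, 2.0, -1.0])
    q = np.array([3.0, 0.0, 5.0])
    print(p + 0.5 * q)  # coefficients of the polynomial p(x) + 0.5*q(x)
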
- We will learn how to identify when a subset of a vector space is a
vector space in its own right; that is, the idea of a subspace. We will
classify the subspaces of low-dimensional vector spaces.
- We will describe and use different means of measuring the
length of vectors, including the 1-, 2- and infinity-norms.
While we will see that all three of these norms share some common
properties, our focus will be on the 2-norm. We will
define norms for both real and complex vector spaces.
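For example, here are the three norms of the vector (3, -4, 12), computed
directly and then with NumPy's built-in routine (an illustrative sketch; note
that using the absolute value makes the same formulas work over complex
vector spaces):

    import numpy as np

    v = np.array([3.0, -4.0, 12.0])

    one_norm = np.sum(np.abs(v))                # sum of absolute values
    two_norm = np.sqrt(np.sum(np.abs(v) ** 2))  # Euclidean length
    inf_norm = np.max(np.abs(v))                # largest absolute component

    print(one_norm, two_norm, inf_norm)         # 19.0 13.0 12.0
    print(np.linalg.norm(v, 1),
          np.linalg.norm(v),                    # the 2-norm is the default
          np.linalg.norm(v, np.inf))
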
- Next we will define and use the concept of an inner product of
vectors. From this, we will see that the 2-norm is induced by the
inner product. The inner product allows us to define orthogonality,
and we will use the Pythagorean theorem, projections, the Cauchy-Bunyakovsky-Schwarz
inequality, the concept of the angle between two vectors, and the
Gram-Schmidt algorithm for converting a set of vectors to an
orthogonal set.
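Here is a minimal sketch of the classical Gram-Schmidt algorithm for real
vectors (the function name gram_schmidt and the sample vectors are ours; for
complex vectors, the dot products would need a conjugate):

    import numpy as np

    def gram_schmidt(vectors):
        # Subtract from each vector its projections onto the orthogonal
        # vectors produced so far; what remains is orthogonal to all of them.
        ortho = []
        for v in vectors:
            w = v.astype(float)
            for u in ortho:
                w = w - (np.dot(u, w) / np.dot(u, u)) * u
            ortho.append(w)
        return ortho

    vs = [np.array([1.0, 1.0, 0.0]),
          np.array([1.0, 0.0, 1.0]),
          np.array([0.0, 1.0, 1.0])]
    q1, q2, q3 = gram_schmidt(vs)
    print(np.dot(q1, q2), np.dot(q1, q3), np.dot(q2, q3))   # all (near) zero
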
- We will next define linear combinations of vectors, and develop a means of
finding a linear combination of a given set of vectors that equals a specified
target vector. This will introduce the augmented matrix representation of a
system of linear equations, together with Gaussian elimination with partial
pivoting and backward substitution (sketched below).
With this, we will define the rank of such an augmented matrix. From this,
we will define linear dependence and the relationship between the span of a
set of vectors (the set of all possible linear combinations) and subspaces. We
will then define when a set of vectors is linearly independent and see that we
can define a basis and a dimension for each subspace.
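The following sketch (the function name gauss_solve is ours, written for
teaching rather than production) implements Gaussian elimination with partial
pivoting followed by backward substitution:

    import numpy as np

    def gauss_solve(A, b):
        A = A.astype(float)
        b = b.astype(float)
        n = len(b)
        # Forward elimination with partial pivoting.
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))   # row with the largest pivot
            if p != k:
                A[[k, p]] = A[[p, k]]             # swap rows k and p
                b[[k, p]] = b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Backward substitution on the upper-triangular system.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[ 2.0,  1.0, -1.0],
                  [-3.0, -1.0,  2.0],
                  [-2.0,  1.0,  2.0]])
    b = np.array([8.0, -11.0, -3.0])
    print(gauss_solve(A, b))          # [ 2.  3. -1.]
    print(np.linalg.solve(A, b))      # library check
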
- At this point, we will take a brief detour into 3-dimensional space, including
the definitions of lines and planes in 3-space as well as the cross product.
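A quick NumPy sketch of the cross product and its use in describing a plane
(the point and direction vectors here are made up for illustration):

    import numpy as np

    u = np.array([1.0, 2.0, 0.0])
    v = np.array([0.0, 1.0, 1.0])

    n = np.cross(u, v)                   # perpendicular to both u and v
    print(n)                             # [ 2. -1.  1.]
    print(np.dot(n, u), np.dot(n, v))    # both 0.0: orthogonality check

    # The plane through the point p with directions u and v has normal n,
    # so its equation is n . (x - p) = 0, i.e. n . x = n . p.
    p = np.array([1.0, 1.0, 1.0])
    print(n, np.dot(n, p))               # normal and right-hand side
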
- Now that we understand vector spaces and subspaces, we will move to the
last major topic in the course: the concept of linear operators between
vector spaces. We will define when such a mapping is linear, and describe
both the range and the null space of a linear operator. For linear operators
between finite-dimensional vector spaces, we will see how to find bases for
each of these; we will see that the inverse problem of solving Av = b is
equivalent to finding a linear combination of the column vectors that make up
the matrix representation; and we will see that Gaussian elimination can be
described by specific linear row operations.
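Two quick numerical checks of the last two claims (matrices and vectors made
up for illustration): a matrix-vector product is a linear combination of the
matrix's columns, and one elimination step is a left-multiplication by an
elementary matrix:

    import numpy as np

    A = np.array([[1.0,  2.0],
                  [0.0,  1.0],
                  [3.0, -1.0]])
    v = np.array([2.0, 5.0])

    print(A @ v)                            # the image of v under A
    print(2.0 * A[:, 0] + 5.0 * A[:, 1])    # same: a combination of the columns

    # Adding -3 times row 0 to row 2 (one step of Gaussian elimination)
    # is itself a linear operator, given by an elementary matrix E:
    E = np.array([[ 1.0, 0.0, 0.0],
                  [ 0.0, 1.0, 0.0],
                  [-3.0, 0.0, 1.0]])
    print(E @ A)                            # the row operation applied to A
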
- Next, we will define the inverse of a linear operator and describe a technique
for finding it for small matrices, with the admonition that explicitly computing
the inverse is often numerically unstable.
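The following sketch hints at why: on an ill-conditioned matrix (the Hilbert
matrix is a standard example), solving by elimination typically leaves a much
smaller residual than forming the inverse explicitly, though the exact numbers
vary by platform:

    import numpy as np

    n = 12
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    b = H @ np.ones(n)

    x_solve = np.linalg.solve(H, b)     # elimination-based solve
    x_inv = np.linalg.inv(H) @ b        # explicit inverse, then multiply

    print(np.linalg.norm(H @ x_solve - b))   # residual: typically tiny
    print(np.linalg.norm(H @ x_inv - b))     # residual: typically much larger
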
- Instead, we will then look at the LU decomposition of a matrix into the product
of a permutation matrix, a lower-triangular matrix L and an upper-triangular
matrix U, and how, given this factorization, we may solve a system of linear
equations.
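A sketch using SciPy's LU routines (SciPy is our choice here, not
course-mandated): lu exposes the factors, while lu_factor and lu_solve store
them compactly so the factorization can be reused for many right-hand sides:

    import numpy as np
    from scipy.linalg import lu, lu_factor, lu_solve

    A = np.array([[ 2.0,  1.0, -1.0],
                  [-3.0, -1.0,  2.0],
                  [-2.0,  1.0,  2.0]])
    b = np.array([8.0, -11.0, -3.0])

    P, L, U = lu(A)                      # permutation, lower- and upper-triangular
    print(np.allclose(A, P @ L @ U))     # True

    # Factor once, then solve for each new right-hand side:
    lu_piv = lu_factor(A)
    print(lu_solve(lu_piv, b))           # [ 2.  3. -1.]
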
- Next, we will describe one property of a matrix captured by the determinant:
the idea that a linear operator scales volumes by a consistent ratio, and either
preserves or reverses orientation.
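For instance, in NumPy (matrices made up for illustration):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    print(np.linalg.det(A))         # 6.0: A scales every area by a factor of 6

    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])     # rotation by 90 degrees
    F = np.array([[1.0,  0.0],
                  [0.0, -1.0]])     # reflection across the x-axis
    print(np.linalg.det(R))         # +1: orientation preserved
    print(np.linalg.det(F))         # -1: orientation reversed
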
- Next, we will describe the adjoint of a linear operator, and see that for
finite-dimensional vector spaces, the adjoint may be represented by the
transpose for real vector spaces and the conjugate transpose for complex vector
spaces. We will define both self- and skew-adjoint matrices, together with
symmetric and skew-symmetric, and conjugate symmetric and conjugate skew-symmetric
matrices. We will also define unitary matrices.
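A NumPy sketch (sample matrices and vectors are ours): the adjoint of a
complex matrix is its conjugate transpose, a self-adjoint matrix equals its
own adjoint, and the defining property <Au, v> = <u, A*v> can be checked
numerically:

    import numpy as np

    A = np.array([[2.0,        1.0 + 1.0j],
                  [1.0 - 1.0j, 3.0       ]])
    A_star = A.conj().T               # the adjoint: conjugate transpose
    print(np.allclose(A, A_star))     # True: A is self-adjoint (Hermitian)

    # Check <Au, v> = <u, A*v>, with <x, y> = sum of conj(y)*x:
    u = np.array([1.0 + 2.0j, -1.0j])
    v = np.array([0.5j, 2.0 - 1.0j])
    print(np.vdot(v, A @ u), np.vdot(A_star @ v, u))   # equal

    # A rotation matrix is unitary: its adjoint is its inverse.
    t = 0.3
    Q = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    print(np.allclose(Q.T @ Q, np.eye(2)))             # True
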
- Finally, we will describe eigenvalues and eigenvectors, and we will see that,
when such eigenvalues exist, eigenvectors corresponding to different eigenvalues
are linearly independent; that we can find eigenvectors by finding bases for the
null spaces of appropriately shifted linear operators (A - λI); and how, in some
cases, we are guaranteed that a basis can be generated from the eigenvectors of
specific matrices, namely those that are normal or, for real finite-dimensional
vector spaces, those that are symmetric.
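Closing with a NumPy sketch (the matrix is made up for illustration): for a
real symmetric, hence normal, matrix, eigh returns real eigenvalues and an
orthonormal basis of eigenvectors:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])          # real symmetric, hence normal

    lam, V = np.linalg.eigh(A)          # eigh is for symmetric/Hermitian input
    print(lam)                          # [1. 3.]
    print(np.allclose(V.T @ V, np.eye(2)))    # eigenvectors are orthonormal
    print(np.allclose(A @ V, V * lam))        # A v_i = lambda_i v_i, columnwise

    # Each eigenvector spans the null space of the shifted matrix A - lambda*I:
    print(np.linalg.matrix_rank(A - lam[0] * np.eye(2)))   # rank 1, so nullity 1
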