
Functions are Vectors!

Vectors

In linear algebra, you learned about vectors, and you saw that the properties of vectors may be summarized as follows:

A collection V of objects v is called a real vector space if the two operators + and ⋅ (called vector addition and scalar multiplication) are defined and satisfy:

  1. If u ∈ V and v ∈ V then u + v ∈ V.
  2. u + v = v + u ∀ u, v ∈ V (commutativity).
  3. (u + v) + w = u + (v + w) ∀ u, v, w ∈ V (associativity).
  4. ∃ 0 ∈ V such that 0 + v = v ∀ v ∈ V (the zero vector).
  5. ∀ v ∈ V, ∃ u ∈ V such that v + u = 0.
  6. a⋅(b⋅v) = (ab)⋅v ∀ v ∈ V and a, b ∈ R.
  7. 1⋅v = v where 1 ∈ R.
  8. a⋅(u + v) = a⋅u + a⋅v ∀ u, v ∈ V and a ∈ R.
  9. (a + b)⋅v = a⋅v + b⋅v ∀ v ∈ V and a, b ∈ R.

If you try this out with all 3-dimensional vectors (a, b, c)ᵀ, you will find that all of these properties hold.
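
As a quick illustration (not a proof), here is a small Python sketch, assuming NumPy is available, that spot-checks several of these axioms for randomly chosen 3-dimensional vectors and scalars:

    import numpy as np

    # Spot-check a few of the vector-space axioms for random 3-dimensional vectors.
    rng = np.random.default_rng(0)
    u, v, w = rng.standard_normal((3, 3))   # three random 3-vectors
    a, b = 2.0, -0.5                         # two real scalars

    assert np.allclose(u + v, v + u)                 # 2: commutativity
    assert np.allclose((u + v) + w, u + (v + w))     # 3: associativity
    assert np.allclose(a * (b * v), (a * b) * v)     # 6
    assert np.allclose(a * (u + v), a * u + a * v)   # 8
    assert np.allclose((a + b) * v, a * v + b * v)   # 9
    print("All checked properties hold for this example.")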

Functions

If you consider real-valued functions f(x), g(x), and h(x) of a variable x and check all of the above requirements, you will find that every property holds. Therefore, the collection of all real-valued functions of a variable forms a vector space. The zero vector is the zero function.
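
To make this concrete, here is a minimal Python sketch (the helper names add and scale are purely illustrative) showing that the sum of two functions and a scalar multiple of a function are again functions:

    # Pointwise addition and scalar multiplication of functions,
    # mirroring vector addition and scalar multiplication.
    def add(f, g):
        return lambda x: f(x) + g(x)

    def scale(a, f):
        return lambda x: a * f(x)

    zero = lambda x: 0.0          # the zero "vector" is the zero function

    f = lambda x: x**2
    g = lambda x: 3.0 * x + 1.0

    h = add(scale(2.0, f), g)     # the function 2*f + g
    print(h(1.0))                 # 2*1 + 3*1 + 1 = 6.0
    print(add(f, zero)(5.0))      # adding the zero function changes nothing: 25.0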

Dot Products

The dot product is the sum of the pairwise products of the entries of two vectors: u ⋅ v = u₁v₁ + u₂v₂ + ⋯ + uₙvₙ. The corresponding result for functions is the integral of the product of the two functions:

    f ⋅ g = ∫ f(x) g(x) dx

Now, we define the 2-norm of a vector as the square root of the dot product of the vector with itself:

    ‖u‖₂ = √(u ⋅ u)

Thus, we can define the 2-norm of a function as:

    ‖f‖₂ = √(f ⋅ f) = √(∫ f(x)² dx)

Note, however, that not all functions have a finite 2-norm. For example, if we are integrating from −∞ to ∞, the norm of the function f(x) = 1 is ∞. Thus, we have to restrict ourselves to functions which have a finite 2-norm. Fortunately, the collection of all functions which satisfy this condition is still a vector space.
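
As a sketch of these definitions (assuming SciPy is available; the quadrature is numerical, so the values are approximate), the dot product and 2-norm of functions on [0, ∞) can be computed as follows. For f(x) = e⁻ˣ, the dot product of f with itself is 1/2, so its 2-norm is 1/√2:

    import numpy as np
    from scipy.integrate import quad

    # Dot product of two functions on [0, inf): the integral of their product.
    def dot(f, g):
        value, _error = quad(lambda x: f(x) * g(x), 0.0, np.inf)
        return value

    # 2-norm: square root of the dot product of the function with itself.
    def norm2(f):
        return np.sqrt(dot(f, f))

    f = lambda x: np.exp(-x)
    print(dot(f, f))    # integral of e^(-2x) over [0, inf) = 0.5
    print(norm2(f))     # 1/sqrt(2) ≈ 0.7071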

Other Norms

We can define other norms, similar to the 1-norm and the ∞-norm of vectors:

    ‖f‖₁ = ∫ |f(x)| dx
    ‖f‖∞ = sup |f(x)|

The sup, or supremum, of a function is a generalization of its maximum: it is the least upper bound of the function's values, and it exists even when the maximum is never attained.
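
A similar sketch (again assuming SciPy, with the supremum approximated by sampling on a finite grid) computes the 1-norm and an approximation to the ∞-norm; for f(x) = e⁻ˣ on [0, ∞), both equal 1:

    import numpy as np
    from scipy.integrate import quad

    # 1-norm: integral of |f(x)| over [0, inf).
    def norm1(f):
        value, _error = quad(lambda x: abs(f(x)), 0.0, np.inf)
        return value

    # Infinity-norm: supremum of |f(x)|, approximated here by sampling a grid.
    def norm_inf(f):
        grid = np.linspace(0.0, 100.0, 200001)
        return float(np.max(np.abs(f(grid))))

    f = lambda x: np.exp(-x)
    print(norm1(f))     # 1.0
    print(norm_inf(f))  # 1.0 (the supremum is attained at x = 0)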

So, What is a Matrix for Functions?

If functions are vectors, can't we have matrices, too? The answer is: yes, and you've already seen them (or will see them). Remember that a matrix is nothing more than a bunch of row vectors, and each entry of a matrix-vector product is the dot product of one of those row vectors with the vector. If we define the ith row of M by Mᵢ, then we may write a matrix-vector product as:

    (Mv)ᵢ = Mᵢ ⋅ v

To extend this to functions, let us consider only those functions which are defined on the interval [0, ∞). Define a function of two variables L(x, y) = e^(−xy). This function is symmetric in x and y. Thus, let us define the product of this function L with a function f(x) as

    (Lf)(y) = ∫₀^∞ e^(−xy) f(x) dx

You have already seen this matrix-for-functions: it is the Laplace transform. Instead of calling the function L a matrix, we call it an operator.
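
As a final sketch (assuming SciPy; the quadrature is numerical, so the values are approximate), applying the "matrix" L(x, y) = e^(−xy) to a function is just the integral above, and the result agrees with the known Laplace transform; for f(x) = 1 the transform is 1/y:

    import numpy as np
    from scipy.integrate import quad

    # Apply the "matrix" L(x, y) = e^(-x*y) to a function f:
    # (L f)(y) = integral over [0, inf) of e^(-x*y) * f(x) dx.
    def apply_L(f, y):
        value, _error = quad(lambda x: np.exp(-x * y) * f(x), 0.0, np.inf)
        return value

    f = lambda x: 1.0
    for y in (1.0, 2.0, 4.0):
        print(y, apply_L(f, y), 1.0 / y)   # numerical result vs. the exact transform 1/y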

Now you should be thinking: if we can define norms on matrices, why can we not define a norm on an operator? That's for later on...

Copyright ©2005 by Douglas Wilhelm Harder. All rights reserved.