
NE 112 Linear algebra for nanotechnology engineers

This library includes data structures and algorithms for working with vectors and sparse matrices, the latter stored using the new Yale sparse matrix format. The two main data structures are

  1. vector, and
  2. sparse_matrix.

1. Vector data structure and functions

The vector data structure is simply an array of n entries. Functions related to vectors include:

  1. vector initialization and clean up,
  2. vector set, get, assignment and queries,
  3. vector operations,
  4. vector functions,
  5. random vector generators, and
  6. vector printing.
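
While the definition of vector_t is not reproduced here, the description above (an array of n entries) suggests a minimal layout. The following is an assumption about its fields, not the library's actual definition:

    /* A plausible layout for vector_t (the field names are assumptions):
     * the dimension n and a pointer to the n stored entries.            */
    typedef struct {
        size_t  dimension;
        double *a_values;
    } vector_t;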

1.1 Vector initialization and clean up

The first three functions initialize the vector data structure. They require that the vector has not been previously initialized; otherwise we would have a memory leak.

void vector_init( vector_t *u, size_t n );

Initialize a vector u by allocating sufficient memory for n entries. The entries themselves are not initialized: after one initialization all the entries may happen to be zero, while after a subsequent initialization they may appear to be random values.

void vector_init_scalar( vector_t *u, size_t n, double r );

Initialize a vector u by allocating sufficient memory for n entries. All the entries are initialized to the real value r; that is, u_j \leftarrow r for 0 ≤ j < n.

void vector_init_vector( vector_t *u, vector_t *v );

Initialize a vector u by allocating sufficient memory to match the dimension of the vector v and copy the entries of v over to u; that is, u_j \leftarrow v_j for 0 ≤ j < n.

void vector_destroy( vector_t *u );

Deallocate the memory for the vector u.

This requires that the vector has been initialized; otherwise the pointer to the memory would be a wild pointer. After the memory is deallocated, the pointer to the memory is set to the null pointer to avoid leaving a dangling pointer.
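
As a short usage sketch (the header name vector.h is an assumption; the calls are the functions declared above):

    #include "vector.h"   /* assumed header providing vector_t and the declarations above */

    void example_lifecycle( void ) {
        vector_t u, v;

        vector_init_scalar( &u, 5, 1.0 );   /* u has five entries, all 1.0     */
        vector_init_vector( &v, &u );       /* v is allocated as a copy of u   */

        /* ... use u and v ... */

        vector_destroy( &u );               /* deallocate and null the pointer */
        vector_destroy( &v );
    }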

1.2 Vector set, get, assignment and queries

These all require that any vector arguments have been initialized; otherwise we have a wild pointer.

double vector_get( vector_t *u, size_t j );

Return the entry u_j where 0 ≤ j < n.

void vector_set( vector_t *u, size_t j, double r );

Assign the value r to the jth entry of u; that is, assign u_j \leftarrow r where 0 ≤ j < n.

void vector_vector_assign( vector_t *u, vector_t *v );

Copy the entries of the vector v over to the entries of u; that is, assign u_j \leftarrow v_j for 0 ≤ j < n. The dimensions of the two vectors must be equal; that is, dim(u) = dim(v). The run time is O(n).

void vector_scalar_assign( vector_t *u, double alpha );

Assign all the entries in the vector u the value \alpha; that is, assign u_j \leftarrow \alpha for 0 ≤ j < n. The run time is O(n).

size_t vector_dimension( vector_t *u );

Return the dimension n of the vector u. The run time is O(1).
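
For example, one could sum the entries of an initialized vector using only these accessors; vector_sum is a hypothetical helper, not part of the library, and the library header is assumed to be included:

    /* Sum the entries of u using the declared accessors. */
    double vector_sum( vector_t *u ) {
        double sum = 0.0;

        for ( size_t j = 0; j < vector_dimension( u ); ++j ) {
            sum += vector_get( u, j );
        }

        return sum;
    }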

1.3 Vector operations

The following are vector operations for

  1. multiplying a vector by a scalar,
  2. adding a scalar to each entry of a vector,
  3. adding a vector to a given vector, and
  4. adding a scalar multiple of a vector to a given vector.

These operations require that the vector arguments have been initialized; otherwise we have wild pointers.

void vector_scalar_multiply( vector_t *u, double r );

Perform the scalar multiplication {\bf u} \leftarrow r{\bf u}; that is, assign u_j \leftarrow r u_j for 0 ≤ j < n.

void vector_scalar_add( vector_t *u, double r );

Add the scalar r onto each entry of the vector u; that is, assign u_j \leftarrow u_j + r for 0 ≤ j < n.

void vector_vector_add( vector_t *u, vector_t *v );

Add the entries of the vector v onto the entries of the vector u; that is, assign u_j \leftarrow u_j + v_j for 0 ≤ j < n.

void vector_vector_scalar_add( vector_t *u, vector_t *v, double r );

Add a scalar multiple of the vector v onto the vector u; that is, assign u_j \leftarrow u_j + r v_j for 0 ≤ j < n.
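
For example, the combination w = u - 2v can be formed using only these operations (an illustrative fragment; u and v are assumed to be initialized vectors of the same dimension):

    vector_t w;

    vector_init_vector( &w, &u );              /* w <- u      */
    vector_vector_scalar_add( &w, &v, -2.0 );  /* w <- w - 2v */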

1.4 Vector functions

These functions require that the vector arguments have been initialized; otherwise we have wild pointers.

double inner_product( vector_t *u, vector_t *v );

Calculate the inner product of the two vectors u and v; that is, \langle {\bf u}, {\bf v} \rangle = {\bf u} \cdot {\bf v} = \sum_{j = 0}^{n - 1} u_j v_j.
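
One possible implementation in terms of the accessors above (the library's actual code may differ; as required, the two vectors are assumed to have the same dimension):

    double inner_product( vector_t *u, vector_t *v ) {
        double sum = 0.0;

        for ( size_t j = 0; j < vector_dimension( u ); ++j ) {
            sum += vector_get( u, j ) * vector_get( v, j );
        }

        return sum;
    }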

void vector_project( vector_t *u, vector_t *v );

Project the vector u onto the vector v; that is, \text{proj}_{\bf v}({\bf u}) = \frac{\langle {\bf u}, {\bf v} \rangle}{\langle {\bf v}, {\bf v} \rangle} {\bf v} = \frac{{\bf u} \cdot {\bf v}}{{\bf v} \cdot {\bf v}} {\bf v}, or u_j \leftarrow \frac{\langle {\bf u}, {\bf v} \rangle}{\langle {\bf v}, {\bf v} \rangle} v_j for 0 ≤ j < n.

void vector_perpendicularize( vector_t *u, vector_t *v );

Find the component of the vector u perpendicular to the vector v; that is, \text{perp}_{\bf v}({\bf u}) = {\bf u} - \frac{\langle {\bf u}, {\bf v} \rangle}{\langle {\bf v}, {\bf v} \rangle} {\bf v} = {\bf u} - \frac{{\bf u} \cdot {\bf v}}{{\bf v} \cdot {\bf v}} {\bf v}, or u_j \leftarrow u_j - \frac{\langle {\bf u}, {\bf v} \rangle}{\langle {\bf v}, {\bf v} \rangle} v_j for 0 ≤ j < n.
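
A sketch of how this could be expressed with the functions already described (the library's actual implementation may differ):

    void vector_perpendicularize( vector_t *u, vector_t *v ) {
        double scale = inner_product( u, v ) / inner_product( v, v );

        vector_vector_scalar_add( u, v, -scale );  /* u <- u - (<u,v>/<v,v>) v */
    }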

double vector_norm( vector_t *u, norm_t nm );

Calculate the norm of a vector u using one of the metrics

  • the 1-norm (ONE_NORM) or \|{\bf u}\|_1 = \sum_{j = 0}^{n - 1} \left | u_j \right |,
  • the mean error (MEAN_ERROR) or \frac{1}{n}\|{\bf u}\|_1 = \frac{1}{n} \sum_{j = 0}^{n - 1} \left | u_j \right |,
  • the 2-norm (TWO_NORM) or \|{\bf u}\|_2 = \sqrt{\sum_{j = 0}^{n - 1} u_j^2},
  • the 2-norm squared (TWO_NORM_SQUARED) or \|{\bf u}\|_2^2 = \sum_{j = 0}^{n - 1} u_j^2,
  • the root mean squared error (ROOT_MEAN_SQUARED_ERROR) or \frac{1}{\sqrt{n}}\|{\bf u}\|_2 = \sqrt{\frac{1}{n} \sum_{j = 0}^{n - 1} u_j^2},
  • the mean squared error (MEAN_SQUARED_ERROR) or \frac{1}{n}\|{\bf u}\|_2^2 = \frac{1}{n} \sum_{j = 0}^{n - 1} u_j^2,
  • the infinity norm (INFINITY_NORM) or \|{\bf u}\|_\infty = \max_{0 ≤ j < n} \left | u_j \right |.

Note that there is no mean infinity error.

void vector_normalize( vector_t *u, norm_t nm );

Normalize a non-zero vector by dividing it by its 1-norm, mean error, 2-norm, root mean squared error or infinity norm.

Note that we cannot normalize with respect to, for example, the two-norm squared, for in general

\left \| \frac{\bf u}{\|{\bf u}\|_2^2} \right \|_2^2 \ne 1.

This is because the two-norm squared and the mean squared error are not vector norms.
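
A sketch of vector_normalize in terms of vector_norm and vector_scalar_multiply (the library's actual code may differ; the vector must be non-zero):

    void vector_normalize( vector_t *u, norm_t nm ) {
        double r = vector_norm( u, nm );

        vector_scalar_multiply( u, 1.0 / r );  /* u <- u / ||u|| */
    }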

double vector_distance( vector_t *u, vector_t *v, norm_t nm );

Calculate the distance between two vectors {\bf u} and {\bf v} using one of the metrics

  • the 1-norm (ONE_NORM) or \|{\bf u} - {\bf v}\|_1 = \sum_{j = 0}^{n - 1} \left | u_j - v_j \right |,
  • the mean error (MEAN_ERROR) or \frac{1}{n}\|{\bf u} - {\bf v}\|_1 = \frac{1}{n} \sum_{j = 0}^{n - 1} \left | u_j - v_j \right |,
  • the 2-norm (TWO_NORM) or \|{\bf u} - {\bf v}\|_2 = \sqrt{\sum_{j = 0}^{n - 1} \left ( u_j - v_j \right )^2},
  • the 2-norm squared (TWO_NORM_SQUARED) or \|{\bf u} - {\bf v}\|_2^2 = \sum_{j = 0}^{n - 1} \left ( u_j - v_j \right )^2,
  • the root mean squared error (ROOT_MEAN_SQUARED_ERROR) or \frac{1}{\sqrt{n}}\|{\bf u} - {\bf v}\|_2 = \sqrt{\frac{1}{n} \sum_{j = 0}^{n - 1} \left ( u_j - v_j \right )^2},
  • the mean squared error (MEAN_SQUARED_ERROR) or \frac{1}{n}\|{\bf u} - {\bf v}\|_2^2 = \frac{1}{n} \sum_{j = 0}^{n - 1} \left ( u_j - v_j \right )^2,
  • the infinity norm (INFINITY_NORM) or \|{\bf u} - {\bf v}\|_\infty = \max_{0 ≤ j < n} \left | u_j - v_j \right |.

Note that there is no mean infinity error.

void gram_schmidt( vector_t **array, size_t N );

Given an array of N vectors, all of the same dimension n, perform the numerically stable variation of the Gram-Schmidt algorithm producing an array of orthogonal vectors (some of which may be the zero vector).
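
A sketch of the numerically stable (modified) variation expressed with the functions above, in which each vector has the components along all previously processed non-zero vectors removed in turn (the library's implementation may differ in detail):

    void gram_schmidt( vector_t **array, size_t N ) {
        for ( size_t k = 0; k < N; ++k ) {
            for ( size_t i = 0; i < k; ++i ) {
                /* Skip vectors already reduced to the zero vector. */
                if ( inner_product( array[i], array[i] ) != 0.0 ) {
                    vector_perpendicularize( array[k], array[i] );
                }
            }
        }
    }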

1.5 Random vector generators

void random_vector( vector_t *u, distribution_t dist );

Overwrite u with a random vector where the entries are either

  • uniformly distributed on (0, 1) (UNIFORM_POSITIVE),
  • uniformly distributed on (-1, 1) (UNIFORM_SYMMETRIC), or
  • normally distributed with mean zero and standard deviation 1 (STANDARD_NORMAL).

Using vector_scalar_multiply and vector_scalar_add, vectors with uniform distributions on arbitrary intervals [a, b] or normally distributed with mean \mu and standard deviation \sigma can easily be created.
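
For example (an illustrative fragment; u and v are assumed to be initialized vectors):

    double a = -5.0, b = 5.0;               /* target interval (a, b)    */
    random_vector( &u, UNIFORM_POSITIVE );
    vector_scalar_multiply( &u, b - a );    /* entries now on (0, b - a) */
    vector_scalar_add( &u, a );             /* entries now on (a, b)     */

    double mu = 2.0, sigma = 0.5;           /* target mean and deviation */
    random_vector( &v, STANDARD_NORMAL );
    vector_scalar_multiply( &v, sigma );    /* standard deviation sigma  */
    vector_scalar_add( &v, mu );            /* mean mu                   */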

1.6 Vector printing

int vector_printf( vector_t *u, char *format, output_t style );

Print the vector {\bf u} as if it were input for one of the specified languages:

  • C for the C programming language,
  • MATLAB for the Matlab programming language,
  • MAPLE for the Maple programming language,
  • MATHEMATICA for the Mathematica programming language.

The format of each double-precision floating-point number is specified through the string format, which uses printf-style formatting.
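
For example, an initialized vector u could be printed in Matlab syntax with six digits after the decimal point by calling:

    vector_printf( &u, "%f", MATLAB );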

2. Sparse matrix data structure and functions

The sparse matrix data structure used is the new Yale format. For a square matrix A of dimension n, under the assumption that the diagonal is in general non-zero, one array of size n stores the diagonal entries. The maximum number of non-zero off-diagonal entries is N, and their values are stored in lexicographical order (that is, (0,1), (0,2), ..., (0,n - 1), (1,0), (1,2), ..., (1,n - 1), ..., (n - 1,0), ..., (n - 1,n - 2)) in an array of size N. The corresponding column indices are stored in the same order in a second array. Rather than storing a third array of row indices, one per entry, we instead store an array of n + 1 entries whose ith entry indicates the location where the first off-diagonal entry of row i would occur, with the last entry being one beyond the last entry stored.

typedef struct {
    size_t  dimension;
    size_t  max_non_zero_off_diagonal_values;
    size_t *a_row_indices;
    size_t *a_column_indices;
    double *a_diagonal_values;
    double *a_off_diagonal_values;
} sparse_matrix_t;
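
As a worked illustration (not output from the library), the 3 × 3 matrix with diagonal entries 1, 3 and 5 and non-zero off-diagonal entries a_{0,2} = 2 and a_{2,0} = 4 would be stored as

                     Dimension: 3 x 3
              Diagonal entries: 1 3 5
                   Row indices: 0 1 1 2
                Column indices: 2 0
           Off-diagonal values: 2 4

Here the entries of row 0 occupy positions 0 up to (but not including) 1, row 1 has no off-diagonal entries, and the entries of row 2 occupy positions 1 up to 2.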

Functions related to matrices include:

  1. matrix initialization and clean up,
  2. matrix set, get, append, assignment and queries,
  3. matrix operations,
  4. matrix-vector and matrix-matrix functions and solvers,
  5. random sparse matrix generators, and
  6. matrix printing.

2.1 Matrix initialization and clean up

The first three functions initialize the matrix data structure. They require that the matrix has not been previously initialized; otherwise we would have a memory leak.

void matrix_init( sparse_matrix_t *A, size_t n, size_t N );

Initialize a matrix A by allocating sufficient memory for n diagonal entries and N off-diagonal entries. The diagonal values are not initialized, so after one initialization the entries may happen to be zero, while after a subsequent initialization they may appear to be random values.

void matrix_init_scalar( sparse_matrix_t *A, size_t n, size_t N, double r );

Initialize a matrix A by allocating sufficient memory for n diagonal entries and N off-diagonal entries. All the diagonal entries are initialized to the real value r; that is a_{i,i} \leftarrow r for 0 ≤ i < n.

void matrix_init_vector( sparse_matrix_t *A, vector_t *u, size_t N );

Given an n-dimensional vector {\bf u}, initialize a matrix A by allocating sufficient memory for n diagonal entries and N off-diagonal entries. The diagonal entries are initialized to the corresponding entries in the vector {\bf u}; that is a_{i,i} \leftarrow u_i for 0 ≤ i < n.

void matrix_destroy( sparse_matrix_t *A );

Deallocate the memory for the matrix A.

This requires that the matrix has been initialized; otherwise the pointers to the memory would be wild pointers. After the memory is deallocated, the pointers are set to the null pointer to avoid leaving dangling pointers.

2.2 Matrix set, get, append, assignment and queries

These all require that any matrix arguments have been initialized; otherwise we have a wild pointer.

double matrix_get( sparse_matrix_t *A, size_t i, size_t j );

Return the entry a_{i,j} where 0 ≤ i < n and 0 ≤ j < n.

void matrix_set( sparse_matrix_t *A, size_t i, size_t j, double r );

Assign the value r to the (i,j)^\text{th} entry of A; that is, assign a_{i,j} \leftarrow r where 0 ≤ i < n and 0 ≤ j < n. This is potentially an O(N) operation and should be used sparingly. If at all possible, use the next function, matrix_append.

void matrix_append( sparse_matrix_t *A, size_t i, size_t j, double r );

Assign the value r to the (i,j)^\text{th} entry of A; that is, assign a_{i,j} \leftarrow r where 0 ≤ i < n and 0 ≤ j < n, with the requirement that this entry is placed in the next available position in the data structure. For this to hold, if (i',j') is the location of any non-zero off-diagonal entry already stored, then either i > i', or i = i' and j > j'.
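
For example, the following fragment builds the 3 × 3 upper-triangular matrix with 1 on the diagonal and 2 on the superdiagonal, appending the off-diagonal entries in the required lexicographical order (illustrative only):

    sparse_matrix_t A;

    matrix_init_scalar( &A, 3, 4, 1.0 );  /* 1s on the diagonal, capacity for 4 off-diagonal entries */
    matrix_append( &A, 0, 1, 2.0 );       /* entry (0,1)               */
    matrix_append( &A, 1, 2, 2.0 );       /* entry (1,2) follows (0,1) */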

void matrix_matrix_assign( sparse_matrix_t *A, sparse_matrix_t *B );

Copy the entries of the matrix B over to the entries of A; that is, assign a_{i,j} \leftarrow b_{i,j} for 0 ≤ i < n and 0 ≤ j < n.

The dimensions of both matrices must be equal, and the capacity for off-diagonal entries of A must be at least as large as the number of off-diagonal entries in B.

void matrix_vector_assign( sparse_matrix_t *A, vector_t *v );

Copy the entries of the vector {\bf v} over to the diagonal entries of A; that is, assign a_{i,i} \leftarrow v_i for 0 ≤ i < n. All off-diagonal entries in A are set to zero.

This requires that the dimensions of A and {\bf v} are equal.

void matrix_scalar_assign( sparse_matrix_t *A, double r );

Assign all the diagonal entries in the matrix A to the value r; that is, assign a_{i,i} \leftarrow r for 0 ≤ i < n. All off-diagonal entries in A are set to zero.

size_t matrix_dimension( sparse_matrix_t *A );

Return the dimension of the matrix A. The run time is O(1).

double matrix_density( sparse_matrix_t *A );

Calculate the density of the matrix, meaning the number of non-zero entries divided by the total number of entries. This function must check the diagonal, and thus runs in O(n) time.

bool is_diagonal( sparse_matrix_t *A );

Return true if the matrix A is diagonal (meaning that a_{i,j} = 0 if i \neq j) and false otherwise. This runs in O(1) time.

bool is_upper_triangular( sparse_matrix_t *A );

Return true if the matrix A is upper triangular (meaning that a_{i,j} = 0 if i > j) and false otherwise. This runs in O(n) time.
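
One way to achieve the stated O(n) run time, using the data structure directly (a sketch under the assumption that the matrix satisfies is_valid, so the columns within each row are stored in increasing order; the library's actual code may differ):

    bool is_upper_triangular( sparse_matrix_t *A ) {
        for ( size_t i = 0; i < A->dimension; ++i ) {
            size_t first = A->a_row_indices[i];

            /* Only the smallest column index of each non-empty row needs to
             * be checked; if it lies below the diagonal, A is not upper
             * triangular.                                                    */
            if ( first < A->a_row_indices[i + 1] && A->a_column_indices[first] < i ) {
                return false;
            }
        }

        return true;
    }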

bool is_lower_triangular( sparse_matrix_t *A );

Return true if the matrix A is lower triangular (meaning that a_{i,j} = 0 if i < j) and false otherwise. This runs in O(n) time.

bool is_symmetric( sparse_matrix_t *A );

Return true if the matrix A is symmetric (meaning that a_{i,j} = a_{j,i}) and false otherwise. This runs in O(n^3d^2) time.

bool is_skew_symmetric( sparse_matrix_t *A );

Return true if the matrix A is skew-symmetric (meaning that a_{i,j} = -a_{j,i}) and false otherwise. This runs in O(n^3d^2) time.

bool is_valid( sparse_matrix_t *A );

Return true if the matrix A data structure appears to be consistent, and false otherwise. It ensures that the row indices are increasing, starting with 0, that the number of off-diagonal entries is less than the maximum allowed, that the columns within each row are linearly ordered, that all column indices are within the dimensions of the matrix, and that the off-diagonal entries are all non-zero.

This routine should be used if you are directly manipulating the sparse matrix data structure and need to confirm that any resulting data structure continues to be valid.

2.3 Matrix operations

Matrix operations are generally in-place, meaning that the first argument is modified based on the subsequent arguments. The only exception is matrix-matrix addition, an operation that cannot be done in-place without O(n) additional memory.

void matrix_scalar_multiply( sparse_matrix_t *A, double alpha );

Multiply each entry in the matrix A by the scalar \alpha; that is, a_{i,j} \leftarrow \alpha a_{i,j}. The run time is O(n^2d + n) unless \alpha = 0, in which case the run time is O(n).

void matrix_matrix_add( sparse_matrix_t *A, sparse_matrix_t *B, sparse_matrix_t *C );

Overwrite the matrix A with the sum B + C; that is, a_{i,j} = b_{i,j} + c_{i,j}. Using only local variables, it is not possible to perform this in place, and therefore a third matrix must be involved.

void matrix_vector_add( sparse_matrix_t *A, vector_t *v );

Add the entries of the vector onto the diagonal of the matrix A; that is, let a_{i,i} \leftarrow a_{i,i} + v_i. The run time is O(n).

void matrix_scalar_add( sparse_matrix_t *A, double alpha );

Add the scalar \alpha onto each entry of the diagonal of the matrix A; that is, let a_{i,i} \leftarrow a_{i,i} + \alpha. The run time is O(n).

2.4 Matrix functions and solvers

void matrix_vector_multiply( vector_t *v, sparse_matrix_t *A, vector_t *u );

Assign {\bf v} the product A{\bf u}; that is, v_i = \sum_{j = 0}^{n - 1} a_{i,j}u_j. The run time is O(dn^2).
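
A sketch of this product in the new Yale format, where the diagonal entry contributes directly and each stored off-diagonal entry of row i contributes a_{i,j} u_j (the library's actual code may differ):

    void matrix_vector_multiply( vector_t *v, sparse_matrix_t *A, vector_t *u ) {
        for ( size_t i = 0; i < A->dimension; ++i ) {
            double sum = A->a_diagonal_values[i] * vector_get( u, i );

            for ( size_t k = A->a_row_indices[i]; k < A->a_row_indices[i + 1]; ++k ) {
                sum += A->a_off_diagonal_values[k] * vector_get( u, A->a_column_indices[k] );
            }

            vector_set( v, i, sum );
        }
    }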

void matrix_transpose_vector_multiply( vector_t *v, sparse_matrix_t *A, vector_t *u );

Assign {\bf v} the product A^T{\bf u}; that is, v_i = \sum_{j = 0}^{n - 1} a_{j,i}u_j. The run time is O(dn^2).

void diagonal_solve( vector_t *v, sparse_matrix_t *A, vector_t *b );

Solve a diagonal system of linear equations A{\bf u} = {\bf b} for {\bf u} using direct calculation; that is u_i = \frac{b_i}{a_{i,i}}. The matrix A must be diagonal.

void backward_substitute( vector_t *u, sparse_matrix_t *A, vector_t *b );

Solve an upper-triangular system of linear equations A{\bf u} = {\bf b} for {\bf u} using backward substitution; that is, u_i = \frac{b_i - \sum_{j = i + 1}^{n - 1} a_{i,j}u_j}{a_{i,i}} for i from n - 1 down to 0. The matrix A must be upper-triangular.

void forward_substitute( vector_t *u, sparse_matrix_t *A, vector_t *b );

Solve a lower-triangular system of linear equations A{\bf u} = {\bf b} for {\bf u} using forward substitution; that is, u_i = \frac{b_i - \sum_{j = 0}^{i - 1} a_{i,j}u_j}{a_{i,i}} for i from 0 to n - 1. The matrix A must be lower-triangular.
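
A sketch of forward substitution on a lower-triangular matrix in this format; every stored off-diagonal entry of row i lies in a column j < i, so the corresponding u_j have already been computed (the library's actual code may differ):

    void forward_substitute( vector_t *u, sparse_matrix_t *A, vector_t *b ) {
        for ( size_t i = 0; i < A->dimension; ++i ) {
            double sum = vector_get( b, i );

            for ( size_t k = A->a_row_indices[i]; k < A->a_row_indices[i + 1]; ++k ) {
                sum -= A->a_off_diagonal_values[k] * vector_get( u, A->a_column_indices[k] );
            }

            vector_set( u, i, sum / A->a_diagonal_values[i] );
        }
    }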

size_t jacobi( vector_t *u, sparse_matrix_t *A, vector_t *v, vector_t *t_1, size_t max_iterations, double step_tolerance );

Approximate a solution to the system of linear equations defined by A{\bf u} = {\bf b} for {\bf u} using the Jacobi method. The algorithm iterates a maximum of max_iterations times, halting when the difference between successive approximations is less than step_tolerance. The number of iterations is returned; if the algorithm failed to converge, the return value is max_iterations + 1.
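
A typical call (illustrative; u, A, b and t_1 are assumed to be initialized with matching dimensions, and t_1 is assumed to be a temporary working vector):

    size_t steps = jacobi( &u, &A, &b, &t_1, 100, 1e-10 );

    if ( steps > 100 ) {
        /* the method did not converge within 100 iterations */
    }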

size_t gauss_seidel( vector_t *, sparse_matrix_t *, vector_t *, vector_t *, size_t, double );

Approximate a solution to the system of linear equations defined by A{\bf u} = {\bf b} for {\bf u} using the Gauss-Seidel method. The algorithm iterates a maximum of max_iterations times, halting when the difference between successive approximations is less than step_tolerance. The number of iterations is returned; if the algorithm failed to converge, the return value is max_iterations + 1.

size_t steepest_descent( vector_t *, sparse_matrix_t *, vector_t *, vector_t *, vector_t *, size_t, double );

Approximate a solution to the system of linear equations defined by A{\bf u} = {\bf b} for {\bf u} using the steepest-descent method. The algorithm iterates a maximum of max_iterations times, halting when the difference between successive approximations is less than step_tolerance. The number of iterations is returned; if the algorithm failed to converge, the return value is max_iterations + 1.

The matrix must be symmetric and positive definite.

size_t minimal_residual( vector_t *, sparse_matrix_t *, vector_t *, vector_t *, vector_t *, size_t, double );

Approximate a solution to the system of linear equations defined by A{\bf u} = {\bf b} for {\bf u} using the minimal-residual method. The algorithm iterates a maximum of max_iterations times, halting when the difference between successive approximations is less than step_tolerance. The number of iterations is returned; if the algorithm failed to converge, the return value is max_iterations + 1.

The matrix must be positive definite.

size_t residual_norm_steepest_descent( vector_t *, sparse_matrix_t *, vector_t *, vector_t *, vector_t *, vector_t *, size_t, double );

Approximate a solution to the system of linear equations defined by A{\bf u} = {\bf b} for {\bf u} using the residual-norm steepest-descent method. The algorithm iterates a maximum of max_iterations times, halting when the difference between successive approximations is less than step_tolerance. The number of iterations is returned; if the algorithm failed to converge, the return value is max_iterations + 1.

void lu( sparse_matrix_t *A, sparse_matrix_t *L, sparse_matrix_t *U );

Calculate the LU decomposition of the matrix A, assuming it exists and a permutation matrix is not necessary. The matrices L and U are overwritten. The run time depends more strongly on the density of the lower- and upper-triangular factors L and U than on that of A, for even if the matrix A is sparse, the matrices L and U may be dense.
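
For example, a system A u = b could then be solved with the two substitution routines (illustrative; L, U and the scratch vector t are assumed to be initialized with appropriate dimensions and capacities):

    lu( &A, &L, &U );                    /* A = LU        */
    forward_substitute( &t, &L, &b );    /* solve L t = b */
    backward_substitute( &u, &U, &t );   /* solve U u = t */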

2.5 Random sparse matrix generators

These functions are designed to have a run time proportional to the number of diagonal entries plus the number of non-zero off-diagonal entries, which may be either dense or determined by a specified density d, meaning O(dn^2 + n). In the case of symmetric and skew-symmetric matrices, additional work must be done, and therefore the run time becomes O(dn^2 + n^2). In all cases, only local variables are used; no additional arrays of memory are allocated.

The algorithm used to achieve the desired density uses an exponential distribution to create a Poisson distribution of entries that are either zero or otherwise assigned a random value. Because a Poisson distribution allows some bins to contain more than one item, it is necessary to correct the density to achieve the desired distribution of non-zero entries. Thus, these algorithms are necessarily slower for densities closer to 1.
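
As a rough sketch of why a correction is needed (the library's exact formula is not stated here, so the following is an assumption): if non-zero entries are dropped into the candidate positions as a Poisson process with mean \lambda per position, the expected fraction of positions that end up non-zero is

1 - e^{-\lambda},

so achieving a target density d suggests choosing \lambda = -\ln(1 - d), which grows without bound as d approaches 1, consistent with the slowdown noted above.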

void random_matrix( sparse_matrix_t *A, matrix_shape_t shape, distribution_t dist, diagonal_t diag, double density );

Overwrite A with a random matrix that has a shape that is one of

  • GENERIC meaning no special restrictions,
  • UPPER_TRIANGULAR meaning that a_{i,j} = 0 if i > j,
  • LOWER_TRIANGULAR meaning that a_{i,j} = 0 if i < j,
  • SYMMETRIC meaning that a_{i,j} = a_{j,i}, and
  • SKEW_SYMMETRIC meaning that a_{i,j} = -a_{j,i}.

The density of the entries is given by the argument density and this determines the density of entries off of the diagonal. The distribution on the diagonal entries is decided by:

  • DENSE_DIAGONAL meaning that each diagonal entry is assigned a random value,
  • RANDOM_DIAGONAL meaning that the density on the diagonal matches the density of the matrix, and
  • ZERO_DIAGONAL meaning that a_{i,i} = 0 for all i.

The random values themselves are chosen from one of three distributions:

  • UNIFORM_POSITIVE meaning that each random value is selected from a uniform distribution from (0, 1),
  • UNIFORM_SYMMETRIC meaning that each random value is selected from a uniform distribution from (-1, 1), and
  • STANDARD_NORMAL meaning that each random value is selected from a standard normal distribution (meaning that \mu = 0 and \sigma = 1).
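
For example (illustrative; the off-diagonal capacity of 1000 is an assumption chosen to comfortably exceed the roughly 0.05 × 100 × 99 expected non-zero off-diagonal entries):

    sparse_matrix_t A;

    matrix_init( &A, 100, 1000 );
    random_matrix( &A, SYMMETRIC, STANDARD_NORMAL, DENSE_DIAGONAL, 0.05 );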

void random_diagonal_matrix( sparse_matrix_t *A, distribution_t dist, diagonal_t diag, double density );

Overwrite A with a random diagonal matrix where the distribution of diagonal entries is decided by:

  • DENSE_DIAGONAL meaning that each diagonal entry is assigned a random value,
  • RANDOM_DIAGONAL meaning that the density on the diagonal equals the argument density, and
  • ZERO_DIAGONAL meaning that a_{i,i} = 0 for all i (which is essentially useless but included for completeness).

The random values themselves are chosen from one of three distributions:

  • UNIFORM_POSITIVE meaning that each random value is selected from a uniform distribution from (0, 1),
  • UNIFORM_SYMMETRIC meaning that each random value is selected from a uniform distribution from (-1, 1), and
  • STANDARD_NORMAL meaning that each random value is selected from a standard normal distribution (meaning that \mu = 0 and \sigma = 1).

void random_tridiagonal_matrix( sparse_matrix_t *A, matrix_shape_t shape, distribution_t dist );

Overwrite A with a random dense tridiagonal matrix, meaning that all entries off the diagonal, the superdiagonal and the subdiagonal are zero. Additionally, the matrix may have a shape that is one of

  • GENERIC meaning no special restrictions,
  • UPPER_TRIANGULAR meaning that additionally a_{i,i - 1} = 0,
  • LOWER_TRIANGULAR meaning that additionally a_{i,i + 1} = 0,
  • SYMMETRIC meaning that a_{i,i + 1} = a_{i + 1,i}, and
  • SKEW_SYMMETRIC meaning that a_{i,i + 1} = -a_{i + 1,i}.

The random values themselves are chosen from one of three distributions:

  • UNIFORM_POSITIVE meaning that each random value is selected from a uniform distribution from (0, 1),
  • UNIFORM_SYMMETRIC meaning that each random value is selected from a uniform distribution from (-1, 1), and
  • STANDARD_NORMAL meaning that each random value is selected from a standard normal distribution (meaning that \mu = 0 and \sigma = 1).

void random_band_diagonal_matrix( sparse_matrix_t *A, size_t m, matrix_shape_t shape, distribution_t dist, diagonal_t diag, double density );

Overwrite A with a random band-diagonal matrix meaning that a_{i,j} = 0 if |i - j| > m. Additionally, the matrix may have a shape that is one of

  • GENERIC meaning no special restrictions,
  • UPPER_TRIANGULAR meaning that a_{i,j} = 0 if i > j,
  • LOWER_TRIANGULAR meaning that a_{i,j} = 0 if i < j,
  • SYMMETRIC meaning that a_{i,j} = a_{j,i}, and
  • SKEW_SYMMETRIC meaning that a_{i,j} = -a_{j,i}.

The density of the entries on the band is given by the argument density and this determines the density of entries off of the diagonal. The distribution on the diagonal entries is decided by:

  • DENSE_DIAGONAL meaning that each diagonal entry is assigned a random value,
  • RANDOM_DIAGONAL meaning that the density on the diagonal matches the density of the matrix, and
  • ZERO_DIAGONAL meaning that a_{i,i} = 0 for all i.

The random values themselves are chosen from one of three distributions:

  • UNIFORM_POSITIVE meaning that each random value is selected from a uniform distribution from (0, 1),
  • UNIFORM_SYMMETRIC meaning that each random value is selected from a uniform distribution from (-1, 1), and
  • STANDARD_NORMAL meaning that each random value is selected from a standard normal distribution (meaning that \mu = 0 and \sigma = 1).

2.6 Matrix printing

In each of these examples, we will show the output for the matrix

A = \left( \begin{matrix} 0.000246 & 0.473772 & 0.000000 & 0.000000 & 0.000000 \\ 0.473772 & 0.134160 & -0.711116 & 0.000000 & 0.000000 \\ 0.000000 & -0.711116 & 1.105896 & 1.668291 & 0.000000 \\ 0.000000 & 0.000000 & 1.668291 & 1.214598 & 0.259607 \\ 0.000000 & 0.000000 & 0.000000 & 0.259607 & -0.902819 \end{matrix} \right ).

int matrix_printf( sparse_matrix_t *A, char *format, output_t style );

Print the matrix A as if it were input for one of the specified languages:

  • C for the C programming language,
  • MATLAB for the Matlab programming language,
  • MAPLE for the Maple programming language,
  • MATHEMATICA for the Mathematica programming language.

The format of each double-precision floating-point number is specified through the string format, which uses printf-style formatting. The above matrix printed in the Matlab format with "% f" yields an output of

                    [
                      0.000246  0.473772  0.000000  0.000000  0.000000
                      0.473772  0.134160 -0.711116  0.000000  0.000000
                      0.000000 -0.711116  1.105896  1.668291  0.000000
                      0.000000  0.000000  1.668291  1.214598  0.259607
                      0.000000  0.000000  0.000000  0.259607 -0.902819];

int matrix_data_structure_printf( sparse_matrix_t *A, char *format );

Print the sparse matrix data structure, including

  • the dimension of the matrix,
  • the number of off-diagonal entries together with the maximum number of off-diagonal entries,
  • a list of the diagonal entries,
  • the row indices indicating where the off-diagonal entries of each row begin and end,
  • the column indices indicating the column of the corresponding off-diagonal values, and
  • a list of the off-diagonal values.

The above matrix printed in this format with "%f" yields an output of

                               Dimension: 5 x 5
                    Off-diagonal entries: 8 (max 20)
                        Diagonal entries: 0.000246 0.134160 1.105896 1.214598 -0.902819
                             Row indices: 0 1 3 5 7 8
                          Column indices: 1 0 2 1 3 2 4 3
                     Off-diagonal values: 0.473772 0.473772 -0.711116 -0.711116 1.668291 1.668291 0.259607 0.259607

int matrix_density_print( sparse_matrix_t *A );

Print a density plot of the matrix, where

  • zero entries on the diagonal are represented by a .,
  • zero entries below the diagonal are represented by a |,
  • zero entries above the diagonal are represented by a -,
  • positive entries are represented by an x, and
  • negative entries are represented by an o.

The above matrix printed in this format yields an output of

                     x x - - -
                     x x o - -
                     | o x x -
                     | | x x x
                     | | | x o