17.2 ELL_Matrix Methods

The CÆSAR Code Package uses the ELL_Matrix class to contain and manipulate algebraic matrices. The following discussion assumes that $A$ and $B$ are matrices of dimension $M \times N$, $a_{i,j}$ is an element of $A$, $x$ is a vector of length $N$, and $y$ is a vector of length $M$.

The CÆSAR Code Package contains procedures to calculate matrix norms and matrix-vector products. When run in parallel, these operations require global communication. Matrix-vector products (MatVecs) are defined by

$$ y = Ax\;, \qquad y_i = \sum_{j=1}^{N} a_{i,j}\, x_j\;. \qquad (17.9) $$
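The storage layout of ELL_Matrix is not spelled out here, but the name suggests ELLPACK (ELL) sparse storage, which keeps a fixed number of entries per row. A minimal serial sketch of Eq. (17.9) under that assumption (the names `ell_matvec`, `values`, and `col_idx` are illustrative, not the CÆSAR interface):

```python
import numpy as np

def ell_matvec(values, col_idx, x):
    """y = A x for a matrix stored in ELLPACK format.

    values[i, k]  -- k-th stored entry of row i (padded with 0.0)
    col_idx[i, k] -- column index of that entry (padding repeats a valid column)
    """
    M, K = values.shape
    y = np.zeros(M)
    for i in range(M):
        for k in range(K):
            y[i] += values[i, k] * x[col_idx[i, k]]
    return y

# Tridiagonal 3x3 example: A = [[2,-1,0], [-1,2,-1], [0,-1,2]]
values  = np.array([[ 2.0, -1.0,  0.0],
                    [-1.0,  2.0, -1.0],
                    [-1.0,  2.0,  0.0]])
col_idx = np.array([[0, 1, 1],
                    [0, 1, 2],
                    [1, 2, 2]])
x = np.array([1.0, 2.0, 3.0])
print(ell_matvec(values, col_idx, x))   # [0. 0. 4.]
```

In a parallel run, each processor would compute the rows it owns, with communication needed to gather the off-processor entries of $x$; this is the global communication noted above.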
A matrix norm is any scalar-valued function of a matrix, denoted by the double-bar notation $\|A\|$, that satisfies these properties:

$$
\begin{aligned}
\|A\| &\geq 0 \quad (\|A\| = 0 \text{ iff } A = 0)\;, \\
\|A + B\| &\leq \|A\| + \|B\|\;, \\
\|sA\| &= |s|\,\|A\|\;, \quad \text{where } s \text{ is a scalar.}
\end{aligned}
\qquad (17.10)
$$
A frequently used matrix norm is the Frobenius norm:

$$ \|A\|_F = \sqrt{\sum_{i=1}^{M} \sum_{j=1}^{N} \left| a_{i,j} \right|^2}\;. \qquad (17.11) $$
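Eq. (17.11) is a straight double sum. A brief sketch evaluating it directly and, along the way, checking the norm properties of Eq. (17.10) numerically (using NumPy's built-in Frobenius norm for comparison):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
B = np.array([[0.0,  1.0],
              [2.0, -1.0]])
s = -2.5

# Eq. (17.11) as a straight double sum
frob = np.sqrt(np.sum(np.abs(A) ** 2))
print(frob)                          # 5.477225575051661 (= sqrt(30))
print(np.linalg.norm(A, 'fro'))      # same value

# Quick numerical check of the properties in Eq. (17.10)
norm = lambda X: np.linalg.norm(X, 'fro')
assert norm(A) >= 0.0 and np.isclose(norm(0.0 * A), 0.0)
assert norm(A + B) <= norm(A) + norm(B) + 1e-12    # triangle inequality
assert np.isclose(norm(s * A), abs(s) * norm(A))   # scalar scaling
```

In a domain-decomposed parallel implementation, each processor would form its local partial sum and a single global reduction would combine them before the square root, which again requires the global communication mentioned above.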
The general class of matrix norms known as the $p$-norms is defined in terms of vector norms by:

$$ \|A\|_p = \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p} = \max_{\|x\|_p = 1} \|Ax\|_p\;, \qquad p \geq 1\;. \qquad (17.12) $$
In particular, the 1-norm and $\infty$-norm are the most easily calculated:
$$ \|A\|_1 = \max_{1 \leq j \leq N} \sum_{i=1}^{M} \left| a_{i,j} \right| \quad \text{(maximum absolute column sum)}\;, \qquad (17.13) $$
$$ \|A\|_\infty = \max_{1 \leq i \leq M} \sum_{j=1}^{N} \left| a_{i,j} \right| \quad \text{(maximum absolute row sum)}\;. \qquad (17.14) $$
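Both norms reduce to absolute column or row sums, so each needs only a single pass over the matrix. A short sketch of Eqs. (17.13) and (17.14):

```python
import numpy as np

A = np.array([[ 1.0, -7.0],
              [-2.0, -3.0]])

norm_1   = np.abs(A).sum(axis=0).max()   # max absolute column sum, Eq. (17.13)
norm_inf = np.abs(A).sum(axis=1).max()   # max absolute row sum,    Eq. (17.14)

print(norm_1, norm_inf)                                   # 10.0 8.0
print(np.linalg.norm(A, 1), np.linalg.norm(A, np.inf))    # same values
```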

In general, $p$-norms are difficult to calculate. The 2-norm can be shown to be:
$$
\begin{aligned}
\|A\|_2 &= \sqrt{\text{maximum eigenvalue of } A^H A}\;, & (17.15)\\
        &= \sqrt{\text{maximum eigenvalue of } A^T A}\;, \quad \text{if } A \text{ is real}\;, & (17.16)
\end{aligned}
$$

where $A^H$ is the Hermitian (or complex conjugate transpose, or adjoint) of $A$. Calculating the 2-norm of a matrix requires an iterative procedure, which is not currently implemented in CÆSAR (only the Frobenius, $p = 1$, and $p = \infty$ norms are calculated). However, CÆSAR calculates an estimate of the 2-norm as a range (or the middle of the range) using the following relationships:

$$
\begin{aligned}
\tfrac{1}{\sqrt{N}}\,\|A\|_F &\leq \|A\|_2 \leq \|A\|_F\;, \\
\max_{i,j} \left| a_{i,j} \right| &\leq \|A\|_2 \leq \sqrt{MN}\,\max_{i,j} \left| a_{i,j} \right|\;, \\
\tfrac{1}{\sqrt{N}}\,\|A\|_\infty &\leq \|A\|_2 \leq \sqrt{M}\,\|A\|_\infty\;, \\
\tfrac{1}{\sqrt{M}}\,\|A\|_1 &\leq \|A\|_2 \leq \sqrt{N}\,\|A\|_1\;, \\
\|A\|_2 &\leq \sqrt{\|A\|_1\,\|A\|_\infty}\;.
\end{aligned}
\qquad (17.17)
$$
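The following sketch illustrates the kind of range estimate described above: it evaluates the bounds of Eq. (17.17), takes the middle of the range, and compares against the exact 2-norm from Eq. (17.15). It mirrors the idea, not the CÆSAR implementation itself:

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
M, N = A.shape

frob  = np.linalg.norm(A, 'fro')
n1    = np.linalg.norm(A, 1)
n_inf = np.linalg.norm(A, np.inf)
a_max = np.abs(A).max()

# Tightest lower and upper bounds from Eq. (17.17)
lower = max(frob / np.sqrt(N), a_max, n_inf / np.sqrt(N), n1 / np.sqrt(M))
upper = min(frob, np.sqrt(M * N) * a_max, np.sqrt(M) * n_inf,
            np.sqrt(N) * n1, np.sqrt(n1 * n_inf))

estimate = 0.5 * (lower + upper)   # middle of the range

# Exact value, Eq. (17.15): square root of the largest eigenvalue of A^H A
exact = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())

print(lower, estimate, upper)   # the bounds bracket the exact value
print(exact)                    # equals np.linalg.norm(A, 2)
```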
For a similar but more complete discussion of matrix operations, see Golub & Van Loan (1989).
