### Matrices

A matrix is just a vector of vectors, all the same length. This means you can enter a matrix using nested brackets. You can also use the semicolon character to enter a matrix. We'll show both methods here:

1:  [ [ 1, 2, 3 ]             1:  [ [ 1, 2, 3 ]
      [ 4, 5, 6 ] ]                 [ 4, 5, 6 ] ]
    .                             .

  [[1 2 3] [4 5 6]]             ' [1 2 3; 4 5 6] RET


We'll be using this matrix again, so type s 4 to save it now.

Note that semicolons work with incomplete vectors, but they work better in algebraic entry. That's why we use the apostrophe in the second example.

When two matrices are multiplied, the lefthand matrix must have the same number of columns as the righthand matrix has rows. Row i, column j of the result is effectively the dot product of row i of the left matrix by column j of the right matrix.
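The dot-product rule can be sketched in plain Python (an illustration only; the matmul helper below is hypothetical, not part of Calc):

```python
# A sketch of the dot-product rule using plain Python lists.
# matmul is a hypothetical helper, not a Calc function.

def matmul(a, b):
    """Multiply matrix a (m x n) by matrix b (n x p): entry (i, j) is
    the dot product of row i of a with column j of b."""
    assert all(len(row) == len(b) for row in a), \
        "columns of a must match rows of b"
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

a = [[1, 2, 3],
     [4, 5, 6]]
at = [[1, 4],
      [2, 5],
      [3, 6]]            # the transpose of a

print(matmul(a, at))     # [[14, 32], [32, 77]]
print(matmul(at, a))     # [[17, 22, 27], [22, 29, 36], [27, 36, 45]]
```

Note that the two products come out with different dimensions; we will see the same two results on the Calc stack shortly.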

If we try to duplicate this matrix and multiply it by itself, the dimensions are wrong and the multiplication cannot take place:

1:  [ [ 1, 2, 3 ]   * [ [ 1, 2, 3 ]
      [ 4, 5, 6 ] ]     [ 4, 5, 6 ] ]
    .

    RET *


Though rather hard to read, this is a formula which shows the product of two matrices. The `*' function, having invalid arguments, has been left in symbolic form.

We can multiply the matrices if we transpose one of them first.

2:  [ [ 1, 2, 3 ]       1:  [ [ 14, 32 ]      1:  [ [ 17, 22, 27 ]
      [ 4, 5, 6 ] ]           [ 32, 77 ] ]          [ 22, 29, 36 ]
1:  [ [ 1, 4 ]              .                       [ 27, 36, 45 ] ]
      [ 2, 5 ]                                    .
      [ 3, 6 ] ]
    .

    U v t                   *                     U TAB *


Matrix multiplication is not commutative; indeed, switching the order of the operands can even change the dimensions of the result matrix, as happened here!

If you multiply a plain vector by a matrix, it is treated as a single row or column depending on which side of the matrix it is on. The result is a plain vector which should also be interpreted as a row or column as appropriate.

2:  [ [ 1, 2, 3 ]      1:  [14, 32]
      [ 4, 5, 6 ] ]        .
1:  [1, 2, 3]
    .

    r 4 r 1                *


Multiplying in the other order wouldn't work because the number of rows in the matrix is different from the number of elements in the vector.
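The row-or-column promotion can be sketched in plain Python (both helpers are hypothetical, assuming the promotion rule described above):

```python
# A sketch of how a plain vector is promoted to a row or a column
# depending on which side of the matrix it is on.

def mat_times_vec(m, v):
    """Treat v as a column: entry i is row i of m dotted with v."""
    return [sum(x * y for x, y in zip(row, v)) for row in m]

def vec_times_mat(v, m):
    """Treat v as a row: entry j is v dotted with column j of m."""
    assert len(v) == len(m), "row vector length must match matrix rows"
    return [sum(x * y for x, y in zip(v, col)) for col in zip(*m)]

m = [[1, 2, 3],
     [4, 5, 6]]
print(mat_times_vec(m, [1, 2, 3]))   # [14, 32]    (matrix * column)
print(vec_times_mat([1, 2], m))      # [9, 12, 15] (row * matrix)
# vec_times_mat([1, 2, 3], m) fails the assertion: a 3-element row
# against a matrix with only 2 rows, the mismatch described above.
```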

(*) Exercise 1. Use `*' to sum along the rows of the above 2x3 matrix to get [6, 15]. Now use `*' to sum along the columns to get [5, 7, 9]. See section Matrix Tutorial Exercise 1. (*)

An identity matrix is a square matrix with ones along the diagonal and zeros elsewhere. It has the property that multiplication by an identity matrix, on the left or on the right, always produces the original matrix.
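The identity property can be sketched in plain Python (identity and matmul are hypothetical helpers; in Calc itself, v i builds the identity matrix):

```python
# A sketch of the identity property using plain Python lists.

def identity(n):
    """n x n identity: ones on the diagonal, zeros elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

m = [[1, 2, 3],
     [4, 5, 6]]
print(matmul(m, identity(3)))   # [[1, 2, 3], [4, 5, 6]] -- m unchanged
print(matmul(identity(2), m))   # [[1, 2, 3], [4, 5, 6]] -- m unchanged
```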

1:  [ [ 1, 2, 3 ]      2:  [ [ 1, 2, 3 ]      1:  [ [ 1, 2, 3 ]
      [ 4, 5, 6 ] ]          [ 4, 5, 6 ] ]          [ 4, 5, 6 ] ]
    .                  1:  [ [ 1, 0, 0 ]          .
                             [ 0, 1, 0 ]
                             [ 0, 0, 1 ] ]
                           .

    r 4                    v i 3 RET              *


If a matrix is square, it is often possible to find its inverse, that is, a matrix which, when multiplied by the original matrix, yields an identity matrix. The & (reciprocal) key also computes the inverse of a matrix.
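One way to picture what & computes is Gauss-Jordan elimination on the matrix augmented with an identity. Here is a plain-Python sketch (a hypothetical helper, not necessarily the algorithm Calc itself uses):

```python
# A Gauss-Jordan sketch of matrix inversion: row-reduce [a | I] until
# the left half becomes the identity; the right half is then a's inverse.

def inverse(a):
    n = len(a)
    m = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivot: bring the largest entry in this column to the diagonal.
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        m[col] = [x / m[col][col] for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

a = [[1, 2, 3],
     [4, 5, 6],
     [7, 6, 0]]
print(inverse(a))   # rows roughly [-2.4, 1.2, -0.2], [2.8, -1.4, 0.4],
                    #              [-0.73333, 0.53333, -0.2]
```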

1:  [ [ 1, 2, 3 ]      1:  [ [   -2.4,     1.2,   -0.2 ]
      [ 4, 5, 6 ]            [    2.8,    -1.4,    0.4 ]
      [ 7, 6, 0 ] ]          [ -0.73333, 0.53333, -0.2 ] ]
    .                      .

    r 4 r 2 |  s 5         &


The vertical bar | concatenates numbers, vectors, and matrices together. Here we have used it to add a new row onto our matrix to make it square.

We can multiply these two matrices in either order to get an identity.

1:  [ [ 1., 0., 0. ]      1:  [ [ 1., 0., 0. ]
      [ 0., 1., 0. ]            [ 0., 1., 0. ]
      [ 0., 0., 1. ] ]          [ 0., 0., 1. ] ]
    .                         .

    M-RET  *                  U TAB *


Matrix inverses are related to systems of linear equations in algebra. Suppose we had the following set of equations:

      a + 2b + 3c = 6
     4a + 5b + 6c = 2
     7a + 6b      = 3

This can be cast into the matrix equation,

    [ [ 1, 2, 3 ]     [ [ a ]     [ [ 6 ]
      [ 4, 5, 6 ]   *   [ b ]   =   [ 2 ]
      [ 7, 6, 0 ] ]     [ c ] ]     [ 3 ] ]

We can solve this system of equations by multiplying both sides by the inverse of the matrix. Calc can do this all in one step:

2:  [6, 2, 3]          1:  [-12.6, 15.2, -3.93333]
1:  [ [ 1, 2, 3 ]          .
      [ 4, 5, 6 ]
      [ 7, 6, 0 ] ]
    .

    [6,2,3] r 5            /


The result is the [a, b, c] vector that solves the equations. (Dividing by a square matrix is equivalent to multiplying by its inverse.)
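The division step can be sketched as Gaussian elimination in plain Python (solve is a hypothetical helper, not necessarily how Calc implements matrix division):

```python
# A Gaussian-elimination sketch of solving A x = b: row-reduce the
# augmented matrix [A | b] and read off x from the last column.

def solve(a, b):
    n = len(a)
    m = [[float(x) for x in row] + [float(b[i])] for i, row in enumerate(a)]
    for col in range(n):
        # Partial pivot: move the largest entry in this column up.
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        m[col] = [x / m[col][col] for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[r][n] for r in range(n)]

a = [[1, 2, 3],
     [4, 5, 6],
     [7, 6, 0]]
print(solve(a, [6, 2, 3]))   # roughly [-12.6, 15.2, -3.93333]
```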

Let's verify this solution:

2:  [ [ 1, 2, 3 ]                1:  [6., 2., 3.]
      [ 4, 5, 6 ]                    .
      [ 7, 6, 0 ] ]
1:  [-12.6, 15.2, -3.93333]
    .

    r 5  TAB                         *


Note that we had to be careful about the order in which we multiplied the matrix and vector. If we multiplied in the other order, Calc would assume the vector was a row vector in order to make the dimensions come out right, and the answer would be incorrect. If you don't feel safe letting Calc take either interpretation of your vectors, use explicit Nx1 or 1xN matrices instead. In this case, you would enter the original column vector as `[[6], [2], [3]]' or `[6; 2; 3]'.

(*) Exercise 2. Algebraic entry allows you to make vectors and matrices that include variables. Solve the following system of equations to get expressions for x and y in terms of a and b.

See section Matrix Tutorial Exercise 2. (*)

(*) Exercise 3. A system of equations is "over-determined" if it has more equations than variables. It is often the case that there are no values for the variables that will satisfy all the equations at once, but it is still useful to find a set of values which "nearly" satisfy all the equations. In terms of matrix equations, you can't solve A X = B directly because the matrix A is not square for an over-determined system. Matrix inversion works only for square matrices. One common trick is to multiply both sides on the left by the transpose of A: trn(A)*A*X = trn(A)*B. Now trn(A)*A is a square matrix, so a solution is possible. It turns out that the X vector you compute in this way will be a "least-squares" solution, which can be regarded as the "closest" solution to the set of equations. Use Calc to solve the following over-determined system:

See section Matrix Tutorial Exercise 3. (*)
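The normal-equations trick from Exercise 3 can be sketched in plain Python. The 3x2 system below is a made-up example, not the exercise's system, and every helper is hypothetical:

```python
# A sketch of the normal-equations trick: instead of A X = B, solve the
# square system trn(A)*A*X = trn(A)*B by Gauss-Jordan elimination.

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def solve(a, b):
    """Gauss-Jordan on the augmented matrix [a | b]."""
    n = len(a)
    m = [[float(x) for x in row] + [float(b[i])] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        m[col] = [x / m[col][col] for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[r][n] for r in range(n)]

# Over-determined: three equations in two unknowns.
a = [[1, 1],
     [1, 2],
     [1, 3]]
b = [1, 2, 2]
at = transpose(a)
atb = [sum(r * s for r, s in zip(row, b)) for row in at]
x = solve(matmul(at, a), atb)
print(x)   # roughly [0.66667, 0.5], the least-squares solution
```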