Math-MatrixReal
lib/Math/MatrixReal.pm view on Meta::CPAN
The cofactor matrix is constructed as follows:
1. For each element, cross out the row and column that it sits in.
2. Take the determinant of the submatrix that remains in the other
   rows and columns.
3. Multiply that determinant by (-1)^(i+j), where i is the row index
   and j is the column index.
4. Replace the given element with this value.
The cofactor matrix can be used to find the inverse of the matrix. One formula for the
inverse of a matrix is the transpose of the cofactor matrix divided by the
determinant of the original matrix.
The following two inverses should be exactly the same:
my $inverse1 = $matrix->inverse;
my $inverse2 = ~($matrix->cofactor)->each( sub { (shift)/$matrix->det() } );
Caveat: Although the cofactor matrix provides a simple algorithm for computing the
inverse of a matrix, and can be used with pencil and paper for small matrices, it is
comically slower than the numerical C<inverse()> method for matrices of any appreciable size.
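The construction above can be sketched in Python (the module itself is Perl; the helper names here are illustrative only and not part of Math::MatrixReal):

```python
# Cofactor-matrix inverse: transpose of the cofactor matrix divided
# by the determinant. Helper names are my own, for illustration.

def minor(m, i, j):
    """Matrix m with row i and column j crossed out."""
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    """Determinant by cofactor (Laplace) expansion along row 0."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j))
               for j in range(len(m)))

def cofactor_matrix(m):
    """Each element replaced by (-1)^(i+j) times its minor's determinant."""
    n = len(m)
    return [[(-1) ** (i + j) * det(minor(m, i, j)) for j in range(n)]
            for i in range(n)]

def inverse(m):
    """Transposed cofactor matrix divided by the determinant."""
    d = det(m)
    cof = cofactor_matrix(m)
    n = len(m)
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]

a = [[4.0, 7.0], [2.0, 6.0]]
print(inverse(a))   # [[0.6, -0.7], [-0.2, 0.4]]
```

The recursive determinant makes the cost grow factorially with the matrix size, which is exactly why this method is so much slower than numerical decompositions.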
                | x[1] y[1] e[1] |
    determinant | x[2] y[2] e[2] |
                | x[3] y[3] e[3] |
where the "C<x[i]>" and "C<y[i]>" are the components of the two vectors
"x" and "y", respectively, and the "C<e[i]>" are unit vectors (i.e.,
vectors of length one) with a one in row "i" and zeros elsewhere
(note that this matrix therefore contains both numbers and vectors
as elements!).
This determinant evaluates to the rather simple formula
z[1] = x[2] * y[3] - x[3] * y[2]
z[2] = x[3] * y[1] - x[1] * y[3]
z[3] = x[1] * y[2] - x[2] * y[1]
A characteristic property of the vector product is that the resulting
vector is orthogonal to both input vectors (provided neither of them
is the null vector, in which case this holds trivially), i.e., the scalar
product of each input vector with the resulting vector is always zero.
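The three component formulas and the orthogonality property can be checked with a short Python sketch (illustrative only; the module itself works in Perl):

```python
# Vector product z = x × y via the component formulas derived from
# the determinant above, with an orthogonality check.

def cross(x, y):
    """z[1] = x[2]y[3] - x[3]y[2], etc. (0-based indices here)."""
    return [x[1] * y[2] - x[2] * y[1],
            x[2] * y[0] - x[0] * y[2],
            x[0] * y[1] - x[1] * y[0]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
z = cross(x, y)
print(z)                      # [-3.0, 6.0, -3.0]
print(dot(x, z), dot(y, z))   # 0.0 0.0 -- z is orthogonal to both
```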
(Beware that theoretically, infinite loops might result if the starting
vector is too far "off" the solution! In practice, this shouldn't be
a problem. Anyway, you can always press <ctrl-C> if you think that the
iteration takes too long!)
The difference between the three methods is the following:
In the "Global Step Method" ("GSM"), the new vector "C<x(t+1)>"
(called "y" here) is calculated from the vector "C<x(t)>"
(called "x" here) according to the formula:
y[i] =
( b[i]
- ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
) / a[i,i]
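The GSM update (known in the numerical literature as the Jacobi method) can be sketched in Python as follows; the helper name `gsm_step` is my own, not part of Math::MatrixReal:

```python
# One Global Step Method (Jacobi) iteration: every component of the
# new vector y is computed from the OLD vector x only.

def gsm_step(a, b, x):
    n = len(b)
    return [(b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i))
            / a[i][i]
            for i in range(n)]

# Diagonally dominant example system; its exact solution is [1, 1].
a = [[4.0, 1.0], [2.0, 5.0]]
b = [5.0, 7.0]
x = [0.0, 0.0]
for _ in range(50):
    x = gsm_step(a, b, x)
print(x)   # converges to approximately [1.0, 1.0]
```

Diagonal dominance of the matrix is what guarantees convergence here; for a badly conditioned starting point or matrix, the iteration can stall or diverge, as the caveat above warns.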
In the "Single Step Method" ("SSM"), the components of the vector
"C<x(t+1)>" which have already been calculated are used to calculate
the remaining components, i.e.
y[i] =
( b[i]
- ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
) / a[i,i]
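The SSM update (the Gauss-Seidel method) differs only in reusing the components already computed in the current sweep. A Python sketch, with an illustrative helper name of my own:

```python
# One Single Step Method (Gauss-Seidel) iteration: components y[0..i-1]
# computed earlier in the SAME sweep are reused immediately.

def ssm_step(a, b, x):
    n = len(b)
    y = list(x)                 # y[j] for j >= i still holds the old x[j]
    for i in range(n):
        s = sum(a[i][j] * y[j] for j in range(n) if j != i)
        y[i] = (b[i] - s) / a[i][i]
    return y

a = [[4.0, 1.0], [2.0, 5.0]]    # same diagonally dominant example
b = [5.0, 7.0]
x = [0.0, 0.0]
for _ in range(50):
    x = ssm_step(a, b, x)
print(x)   # converges to approximately [1.0, 1.0]
```

Because each sweep uses fresher information, SSM typically converges in fewer iterations than GSM on the same system.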
In the "Relaxation Method" ("RM"), the components of the vector
"C<x(t+1)>" are calculated by "mixing" the old and the new value (like
cold and hot water), where the weight "C<$weight>" determines the
"aperture" of both the "hot water tap" and the "cold water tap",
according to the formula:
y[i] =
( b[i]
- ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
) / a[i,i]
y[i] = weight * y[i] + (1 - weight) * x[i]
Note that the weight "C<$weight>" should be greater than zero and
less than two (!).
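The RM update (successive over-relaxation, SOR) can be sketched in Python; the helper name `rm_step` and the example weight are my own choices for illustration:

```python
# One Relaxation Method (SOR) iteration: a Gauss-Seidel value is
# blended with the old component using a weight in (0, 2).

def rm_step(a, b, x, weight):
    n = len(b)
    y = list(x)
    for i in range(n):
        s = sum(a[i][j] * y[j] for j in range(n) if j != i)
        gs = (b[i] - s) / a[i][i]                 # "hot water": new value
        y[i] = weight * gs + (1 - weight) * y[i]  # mix with "cold": old value
    return y

a = [[4.0, 1.0], [2.0, 5.0]]    # diagonally dominant, solution [1, 1]
b = [5.0, 7.0]
x = [0.0, 0.0]
for _ in range(50):
    x = rm_step(a, b, x, weight=1.1)
print(x)   # converges to approximately [1.0, 1.0]
```

With weight = 1 the method reduces exactly to SSM; weights above 1 (over-relaxation) can accelerate convergence, while weights outside (0, 2) generally cause divergence, matching the constraint stated above.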