Math-MatrixReal
lib/Math/MatrixReal.pm view on Meta::CPAN
Note that (especially for vectors) it makes a big difference if you
have a row vector, like this:
[ -1 0 1 ]
or a column vector, like this:
[ -1 ]
[ 0 ]
[ 1 ]
the one vector being the transpose of the other!
This is especially true for the matrix product of two vectors:
                    [ -1 ]
    [ -1  0  1 ] *  [  0 ]  =  [ 2 ] ,    whereas
                    [  1 ]

    [ -1 ]                      [  1  0 -1 ]
    [  0 ] *  [ -1  0  1 ]  =   [  0  0  0 ]
    [  1 ]                      [ -1  0  1 ]
So be careful about what you really mean!
Hint: throughout this module, whenever a vector is explicitly required
for input, a B<COLUMN> vector is expected!
=item *
C<$trace = $matrix-E<gt>trace();>
This returns the trace of the matrix, which is defined as
the sum of the diagonal elements. The matrix must be
quadratic.
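For example (a sketch, assuming a 3x3 matrix built with C<new_from_string()> as shown elsewhere in this document):

    my $matrix = Math::MatrixReal->new_from_string(<<'MATRIX');
    [ 1 2 3 ]
    [ 4 5 6 ]
    [ 7 8 9 ]
    MATRIX
    my $trace = $matrix->trace();   # 1 + 5 + 9 = 15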
=item *
C<$minor = $matrix-E<gt>minor($row,$col);>
Returns the minor matrix corresponding to C<$row> and C<$col>. C<$matrix> must be quadratic.
If C<$matrix> has n rows and n columns, the minor of C<$row> and C<$col> will be an (n-1) by (n-1)
matrix. The minor is obtained by crossing out the specified row and column and returning
the remaining rows and columns as a matrix. This method is used by C<cofactor()>.
=item *
C<$cofactor = $matrix-E<gt>cofactor();>
The cofactor matrix is constructed as follows:
For each element, cross out the row and column that it sits in.
Now, take the determinant of the matrix that is left in the other
rows and columns.
Multiply the determinant by (-1)^(i+j), where i is the row index,
and j is the column index.
Replace the given element with this value.
The cofactor matrix can be used to find the inverse of the matrix. One formula for the
inverse of a matrix is the cofactor matrix transposed divided by the original
determinant of the matrix.
The following two inverses should be exactly the same:
my $inverse1 = $matrix->inverse;
my $inverse2 = ~($matrix->cofactor)->each( sub { (shift)/$matrix->det() } );
Caveat: Although the cofactor matrix provides a simple algorithm for computing the
inverse of a matrix, and can be used with pencil and paper for small matrices, it is
comically slower than the native C<inverse()> method. Here is a small benchmark:
    # $matrix1 is 15x15
    $det = $matrix1->det;
    timethese( 10,
        { 'inverse'  => sub { $matrix1->inverse(); },
          'cofactor' => sub { (~$matrix1->cofactor)->each( sub { (shift)/$det; } ) }
        } );

    Benchmark: timing 10 iterations of LR, cofactor, inverse...
     inverse:  1 wallclock secs ( 0.56 usr +  0.00 sys =  0.56 CPU) @ 17.86/s (n=10)
    cofactor: 36 wallclock secs (36.62 usr +  0.01 sys = 36.63 CPU) @  0.27/s (n=10)
=item *
C<$adjoint = $matrix-E<gt>adjoint();>
The adjoint is just the transpose of the cofactor matrix. This method is
just an alias for C< ~($matrix-E<gt>cofactor)>.
=back
=item *
C<$part_of_matrix = $matrix-E<gt>submatrix(x1,y1,x2,y2);>
C<submatrix()> selects a part of an existing matrix in order to produce a new one.
This method takes four arguments that define the selection area:
=over 6
=item - first: the coordinates of the top left corner of the selection (x1,y1)
=item - second: the coordinates of the bottom right corner of the selection (x2,y2)
=back
Example:
my $matrix = Math::MatrixReal->new_from_string(<<'MATRIX');
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 0 0 0 ]
[ 0 0 0 0 1 0 1 ]
[ 0 0 0 0 0 1 0 ]
[ 0 0 0 0 1 0 1 ]
MATRIX
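The 3x3 pattern in the lower right corner of this matrix (rows 5 through 7, columns 5 through 7) could then be extracted like this (a sketch; the corner coordinates follow the convention described above):

    my $submatrix = $matrix->submatrix(5,5,7,7);
    # [ 1 0 1 ]
    # [ 0 1 0 ]
    # [ 1 0 1 ]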
the case of a matrix representing an equation system) of the
matrix that was initially fed into "decompose_LR()".
If "n" is the number of rows and columns of the (quadratic!) matrix,
then "n - order" is the dimension of the solution space of the
associated equation system.
=item *
C<$rank = $LR_matrix-E<gt>rank_LR();>
This is an alias for the C<order_LR()> function. The "order"
is usually called the "rank" in the United States.
=item *
C<$scalar_product = $vector1-E<gt>scalar_product($vector2);>
Returns the scalar product of vector "C<$vector1>" and vector "C<$vector2>".
Both vectors must be column vectors (i.e., a matrix having
several rows but only one column).
This is a (more efficient!) shortcut for
$temp = ~$vector1 * $vector2;
$scalar_product = $temp->element(1,1);
or the sum C<i=1..n> of the products C<vector1[i] * vector2[i]>.
Provided that neither of the two input vectors is the null vector, the
two vectors are orthogonal, i.e., have an angle of 90 degrees
between them, exactly when their scalar product is zero, and
vice versa.
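For example (a sketch using two orthogonal column vectors):

    my $vector1 = Math::MatrixReal->new_from_string(<<'VECTOR');
    [ 1 ]
    [ 0 ]
    [ 0 ]
    VECTOR
    my $vector2 = Math::MatrixReal->new_from_string(<<'VECTOR');
    [ 0 ]
    [ 1 ]
    [ 0 ]
    VECTOR
    my $sp = $vector1->scalar_product($vector2);   # 0, hence orthogonal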
=item *
C<$vector_product = $vector1-E<gt>vector_product($vector2);>
Returns the vector product of vector "C<$vector1>" and vector "C<$vector2>".
Both vectors must be column vectors (i.e., a matrix having several rows
but only one column).
Currently, the vector product is only defined for 3 dimensions (i.e.,
vectors with 3 rows); all other vectors trigger an error message.
In 3 dimensions, the vector product of two vectors "x" and "y"
is defined as
                  | x[1]  y[1]  e[1] |
      determinant | x[2]  y[2]  e[2] |
                  | x[3]  y[3]  e[3] |
where the "C<x[i]>" and "C<y[i]>" are the components of the two vectors
"x" and "y", respectively, and the "C<e[i]>" are unit vectors (i.e.,
vectors with a length equal to one) with a one in row "i" and zeros
elsewhere (this means that you have numbers and vectors as elements
in this matrix!).
This determinant evaluates to the rather simple formula
z[1] = x[2] * y[3] - x[3] * y[2]
z[2] = x[3] * y[1] - x[1] * y[3]
z[3] = x[1] * y[2] - x[2] * y[1]
A characteristic property of the vector product is that the resulting
vector is orthogonal to both of the input vectors (provided that neither
of them is the null vector, otherwise this is trivial), i.e., the scalar
product of each of the input vectors with the resulting vector is always zero.
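For example (a sketch with the first two standard unit vectors, whose vector product is the third):

    my $x = Math::MatrixReal->new_from_string(<<'VECTOR');
    [ 1 ]
    [ 0 ]
    [ 0 ]
    VECTOR
    my $y = Math::MatrixReal->new_from_string(<<'VECTOR');
    [ 0 ]
    [ 1 ]
    [ 0 ]
    VECTOR
    my $z = $x->vector_product($y);   # the column vector [ 0 0 1 ]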
=item *
C<$length = $vector-E<gt>length();>
This is actually a shortcut for
$length = sqrt( $vector->scalar_product($vector) );
and returns the length of a given column or row vector "C<$vector>".
Note that the "length" calculated by this method is in fact the
"two"-norm (also known as the Euclidean norm) of a vector "C<$vector>"!
The general definition for norms of vectors is the following:
    sub vector_norm
    {
        croak "Usage: \$norm = \$vector->vector_norm(\$n);"
          if (@_ != 2);
        my($vector,$n) = @_;
        my($rows,$cols) = ($vector->[1],$vector->[2]);
        my($k,$comp,$sum);
        croak "Math::MatrixReal::vector_norm(): vector is not a column vector"
          unless ($cols == 1);
        croak "Math::MatrixReal::vector_norm(): norm index must be > 0"
          unless ($n > 0);
        croak "Math::MatrixReal::vector_norm(): norm index must be integer"
          unless ($n == int($n));
        $sum = 0;
        for ( $k = 0; $k < $rows; $k++ )
        {
            $comp = abs( $vector->[0][$k][0] );
            $sum += $comp ** $n;
        }
        return( $sum ** (1 / $n) );
    }
Note that the case "n = 1" is the "one"-norm for matrices applied to a
vector, the case "n = 2" is the Euclidean norm or length of a vector,
and if "n" goes to infinity, you have the "infinity"- or "maximum"-norm
for matrices applied to a vector!
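For example (a sketch, calling C<vector_norm()> as in the usage message shown above):

    my $vector = Math::MatrixReal->new_from_string(<<'VECTOR');
    [ 3 ]
    [ 4 ]
    VECTOR
    my $one_norm = $vector->vector_norm(1);   # 3 + 4 = 7
    my $two_norm = $vector->vector_norm(2);   # sqrt(9 + 16) = 5, same as length()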
=item *
C<$xn_vector = $matrix-E<gt>>C<solve_GSM($x0_vector,$b_vector,$epsilon);>
There is one major restriction, though: a fixed-point iteration is
guaranteed to converge only if the first derivative of the iteration
function has an absolute value less than one in an area around the
point "C<x(*)>" for which "C<Phi( x(*) ) = x(*)>" is to be true, and
if the start vector "C<x(0)>" lies within that area!
This is best verified graphically, which unfortunately is impossible
to do in this textual documentation!
See literature on Numerical Analysis for details!
In our case, this restriction translates to the following three conditions:
There must exist a norm so that the norm of the matrix of the iteration
function, C<( En - A )>, has a value less than one, the matrix "A" may
not have any zero value on its main diagonal and the initial vector
"C<x(0)>" must be "good enough", i.e., "close enough" to the solution
"C<x(*)>".
(Remember school math: the first derivative of a straight line given by
"C<y = a * x + b>" is "a"!)
The three methods expect a (quadratic!) matrix "C<$matrix>" as their
first argument, a start vector "C<$x0_vector>", a vector "C<$b_vector>"
(which is the vector "b" in your equation system "C<A * x = b>"), in the
case of the "Relaxation Method" ("RM"), a real number "C<$weight>" best
between zero and two, and finally an error limit (real number) "C<$epsilon>".
(Note that the weight "C<$weight>" used by the "Relaxation Method" ("RM")
is B<NOT> checked to lie within any reasonable range!)
The three methods first test the first two conditions of the three
conditions listed above and return an empty list if these conditions
are not fulfilled.
Therefore, you should always test their return value using some
code like:
    if ( $xn_vector = $A_matrix->solve_GSM($x0_vector,$b_vector,1E-12) )
    {
        # do something with the solution...
    }
    else
    {
        # do something with the fact that there is no solution...
    }
Otherwise, they iterate until C<abs( Phi(x) - x ) E<lt> epsilon>.
(Beware that theoretically, infinite loops might result if the starting
vector is too far "off" the solution! In practice, this shouldn't be
a problem. Anyway, you can always press <ctrl-C> if you think that the
iteration takes too long!)
The difference between the three methods is the following:
In the "Global Step Method" ("GSM"), the new vector "C<x(t+1)>"
(called "y" here) is calculated from the vector "C<x(t)>"
(called "x" here) according to the formula:
y[i] =
( b[i]
- ( a[i,1] x[1] + ... + a[i,i-1] x[i-1] +
a[i,i+1] x[i+1] + ... + a[i,n] x[n] )
) / a[i,i]
In the "Single Step Method" ("SSM"), the components of the vector
"C<x(t+1)>" which have already been calculated are used to calculate
the remaining components, i.e.
y[i] =
( b[i]
- ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
) / a[i,i]
In the "Relaxation method" ("RM"), the components of the vector
"C<x(t+1)>" are calculated by "mixing" old and new value (like
cold and hot water), and the weight "C<$weight>" determines the
"aperture" of both the "hot water tap" as well as of the "cold
water tap", according to the formula:
y[i] =
( b[i]
- ( a[i,1] y[1] + ... + a[i,i-1] y[i-1] + # note the "y[]"!
a[i,i+1] x[i+1] + ... + a[i,n] x[n] ) # note the "x[]"!
) / a[i,i]
y[i] = weight * y[i] + (1 - weight) * x[i]
Note that the weight "C<$weight>" should be greater than zero and
less than two (!).
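For illustration only, a single sweep of the "Relaxation Method" over plain Perl arrays might be sketched as follows (this is not part of the module's API; C<$a> holds the matrix rows, C<$b> and C<$x> the vectors, C<$weight> the relaxation factor):

    sub rm_sweep
    {
        my($a,$b,$x,$weight) = @_;
        my $n = scalar(@$x);
        my @y = @$x;                     # start from the old values
        for my $i (0 .. $n-1)
        {
            my $sum = 0;
            $sum += $a->[$i][$_] * $y[$_]    for (0 .. $i-1);    # note the "y[]"!
            $sum += $a->[$i][$_] * $x->[$_]  for ($i+1 .. $n-1); # note the "x[]"!
            my $gs = ($b->[$i] - $sum) / $a->[$i][$i];
            $y[$i] = $weight * $gs + (1 - $weight) * $x->[$i];   # mix old and new
        }
        return(@y);
    }

Setting C<$weight> to one reduces this to the "Single Step Method" formula given above.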
The three methods are supposed to be of different efficiency.
Experiment!
Remember that in most cases, it is probably advantageous to first
"normalize()" your equation system prior to solving it!
=back
=head1 OVERLOADED OPERATORS
=head2 SYNOPSIS
=over 2
=item *
Unary operators:
"C<->", "C<~>", "C<abs>", C<test>, "C<!>", 'C<"">'
=item *
Binary operators:
"C<.>"
Binary (arithmetic) operators:
"C<+>", "C<->", "C<*>", "C<**>",
"C<+=>", "C<-=>", "C<*=>", "C</=>","C<**=>"
=item *
Binary (relational) operators:
"C<==>", "C<!=>", "C<E<lt>>", "C<E<lt>=>", "C<E<gt>>", "C<E<gt>=>"
"C<eq>", "C<ne>", "C<lt>", "C<le>", "C<gt>", "C<ge>"
Note that the latter ("C<eq>", "C<ne>", ... ) are just synonyms
of the former ("C<==>", "C<!=>", ... ), defined for convenience
only.
=back
=head2 DESCRIPTION
=over 5