Matrix And Its Applications
CHAPTER ONE
Mathematicians also attempted to develop an algebra of vectors, but
there was no natural definition of the product of two vectors that held
in arbitrary dimensions. The first vector algebra that involved a
non-commutative vector product (that is, V × W need not equal W × V) was
proposed by Hermann Grassmann in his book Ausdehnungslehre (1844).
Grassmann's text also introduced the product of a column matrix and a
row matrix, which resulted in what is now called a simple, or rank-one,
matrix. In the late 19th century the American mathematical physicist
Willard Gibbs published his famous treatise on vector analysis. In that
treatise Gibbs represented general matrices, which he called dyadics, as
sums of simple matrices, which he called dyads. Later the physicist
P.A.M. Dirac introduced the term "bracket" for what we now call the
scalar product of a "bra" (row) vector times a "ket" (column) vector,
and the term "ket-bra" for the product of a ket times a bra, resulting
in what we now call a simple matrix, as above. Our convention of
identifying column matrices and vectors was introduced by physicists in
the 20th century.
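As a small illustration of these two products (the numbers are our own
example, not Dirac's):

    \[
    \underbrace{\begin{pmatrix} 1 & 2 \end{pmatrix}}_{\text{bra}}
    \underbrace{\begin{pmatrix} 3 \\ 4 \end{pmatrix}}_{\text{ket}}
    = 1 \cdot 3 + 2 \cdot 4 = 11
    \qquad \text{(a scalar),}
    \]
    \[
    \underbrace{\begin{pmatrix} 3 \\ 4 \end{pmatrix}}_{\text{ket}}
    \underbrace{\begin{pmatrix} 1 & 2 \end{pmatrix}}_{\text{bra}}
    = \begin{pmatrix} 3 & 6 \\ 4 & 8 \end{pmatrix}
    \qquad \text{(a rank-one matrix; its rows are proportional).}
    \]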
Matrices continued to be closely associated with
linear transformations. By 1900 they were just a finite-dimensional
special case of the emerging theory of linear transformations. The
modern definition of a vector space was introduced by Peano in 1888.
Abstract vector spaces whose elements were functions soon followed.
There was renewed interest in matrices, particularly in the numerical
analysis of matrices. John von Neumann and Herman Goldstine introduced
condition numbers in analyzing round-off errors in 1947. Alan Turing and
von Neumann were 20th-century giants in the development of
stored-program computers. Turing introduced the LU decomposition of a
matrix in 1948. The L is a lower triangular matrix with 1's on the
diagonal and the U is an echelon matrix. It is common to use the LU
decomposition in the solution of a sequence of systems of linear
equations, each having the same coefficient matrix.
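A minimal sketch of that reuse pattern, using SciPy's lu_factor and
lu_solve on made-up values:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    # One coefficient matrix, several right-hand sides (example values).
    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    rhs = [np.array([10.0, 12.0]), np.array([1.0, 0.0])]

    # Factor A once: L is unit lower triangular, U is upper triangular.
    lu, piv = lu_factor(A)

    # Reuse the single factorization to solve every system cheaply.
    for b in rhs:
        x = lu_solve((lu, piv), b)
        print(x, np.allclose(A @ x, b))   # solution plus a residual check

Factoring once and reusing (lu, piv) is what makes the same-coefficient-
matrix situation cheap: each additional right-hand side costs only two
triangular solves.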
The benefit of the QR decomposition was realized a decade later. The Q
is a matrix whose columns are orthonormal vectors and R is a square
upper triangular invertible matrix with positive entries on its
diagonal. The QR factorization is used in computer algorithms for
various computations, such as solving equations and finding eigenvalues.
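A minimal sketch of the equation-solving use, with NumPy's qr on made-up
values (note that numpy.linalg.qr does not force positive diagonal
entries in R the way the definition above does):

    import numpy as np
    from scipy.linalg import solve_triangular

    # Solve A x = b via A = QR, so R x = Q^T b (example values).
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    b = np.array([3.0, 5.0])

    Q, R = np.linalg.qr(A)            # Q: orthonormal columns, R: upper triangular
    x = solve_triangular(R, Q.T @ b)  # back-substitution on the triangular system
    print(x, np.allclose(A @ x, b))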
In 1878 Frobenius wrote an important work on matrices, on linear
substitutions and bilinear forms, although he seemed unaware of Cayley's
work. However, he proved important results on canonical matrices as
representatives of equivalence classes of matrices. He cites Kronecker
and Weierstrass as having considered special cases of his results in
1868 and 1874 respectively.
Frobenius also proved the
general result that a matrix satisfies its characteristic equation, now
known as the Cayley-Hamilton theorem. This 1878 paper by Frobenius also
contains the definition of the rank of a matrix, which he used in his
work on canonical forms, and the definition of orthogonal matrices.
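A concrete 2 x 2 instance of that result (our own example):

    \[
    A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \qquad
    p(\lambda) = \det(\lambda I - A) = \lambda^2 - 5\lambda - 2,
    \]
    \[
    p(A) = A^2 - 5A - 2I
         = \begin{pmatrix} 7 & 10 \\ 15 & 22 \end{pmatrix}
         - \begin{pmatrix} 5 & 10 \\ 15 & 20 \end{pmatrix}
         - \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}
         = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.
    \]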
An axiomatic definition of
a determinant was used by Weierstrass in his lectures and, after his
death, it was published in 1903 in a note on determinant theory. In the
same year Kronecker's lectures on determinants were also published
posthumously. With these two publications the modern theory of
determinants was in place, but matrix theory took slightly longer to
become a fully accepted theory. An important early text which brought
matrices into their proper place within mathematics was Bôcher's
Introduction to Higher Algebra of 1907. Turnbull and Aitken wrote
influential texts in the 1930s, and Mirsky's An Introduction to Linear
Algebra (1955) saw matrix theory reach its present major role as one of
the most important undergraduate mathematics topics.