Determinant of the product of two square matrices

Lecture 6

4.6 Determinant of the product of two square matrices.

The product of two square matrices of order n is always defined. In this connection the following theorem is important.

Theorem. The determinant of the product of matrices is equal to the product of the determinants of the factor matrices:

det(AB) = det A · det B.

Proof. Let

A = (a_ij), B = (b_ij)

be square matrices of order n, and let

d_1 = det A, d_2 = det B.

Let us create the auxiliary determinant of order 2n

Δ = | A  O |
    | −E  B |

(in the upper left corner stands the matrix A, in the upper right the zero matrix, in the lower left the matrix −E, and in the lower right the matrix B).

By the corollary of Laplace's theorem we have:

Δ = d_1 d_2.

So Δ = d_1 d_2; we will show that Δ = det(AB). To do this, we transform the determinant as follows. First we add to the (n+1)-th column the first n columns multiplied by b_11, b_21, …, b_n1 respectively. Then we add to the (n+2)-th column the first n columns multiplied by b_12, b_22, …, b_n2, etc. In the last step, the first n columns multiplied by b_1n, b_2n, …, b_nn are added to the 2n-th column. As a result, we obtain the determinant

Δ = | A  C |
    | −E  O |,  where C = AB.

Expanding the resulting determinant using Laplace's theorem in terms of the last n columns (the only non-zero minor of order n in these columns is det C, and its algebraic complement together with the sign factor (−1)^{(1+⋯+n)+((n+1)+⋯+2n)} · det(−E) = (−1)^{n(n+1)} equals 1), we find:

Δ = det C = det(AB).

So, the equalities Δ = d_1 d_2 and Δ = det(AB) have been proven, from which it follows that

det(AB) = det A · det B.
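The theorem is easy to test numerically. Below is a minimal sketch (not part of the lecture itself; the matrices are arbitrary examples of our own) comparing det(AB) with det(A)·det(B):

```python
# Sanity check of det(AB) = det(A) * det(B) for two square matrices.
import numpy as np

A = np.array([[2.0, 1.0], [3.0, 4.0]])
B = np.array([[0.0, 5.0], [1.0, -2.0]])

lhs = np.linalg.det(A @ B)                  # determinant of the product
rhs = np.linalg.det(A) * np.linalg.det(B)   # product of the determinants
print(lhs, rhs)                             # both print -25.0 (up to rounding)
```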

4.7. Inverse matrix

Definition 1. Let a square matrix A of order n be given. A square matrix A^{−1}
of the same order is called the inverse of the matrix A if AA^{−1} = A^{−1}A = E, where E is the identity matrix of order n.

Statement. If there is a matrix inverse to the matrix A, then such a matrix is unique.

Proof. Let us assume that A^{−1} is not the only matrix inverse to the matrix A, and take another inverse matrix B. Then the conditions

AB = BA = E, AA^{−1} = A^{−1}A = E

are satisfied. Let us look at the product B(AA^{−1}). For it there are the equalities

B(AA^{−1}) = BE = B, (BA)A^{−1} = EA^{−1} = A^{−1},

from which it follows that B = A^{−1}. Thus, the uniqueness of the inverse matrix is proved.

When proving the theorem on the existence of an inverse matrix, we will need the concept of “adjoint matrix”.

Definition 2. Let a matrix A = (a_ij) of order n be given. The matrix

C = ( A_11 A_21 … A_n1 ; A_12 A_22 … A_n2 ; … ; A_1n A_2n … A_nn ),

whose elements are the algebraic complements A_ij of the elements a_ij of the matrix A, is called the adjoint matrix to the matrix A.

Let us pay attention to the fact that to construct the adjoint matrix C from the elements of the matrix A, you need to replace them with their algebraic complements and then transpose the resulting matrix.

Definition 3. A square matrix A is called non-degenerate (non-singular) if det A ≠ 0.

Theorem. In order for the matrix A to have an inverse matrix A^{−1}, it is necessary and sufficient that the matrix A be non-degenerate. In this case the matrix A^{−1} is determined by the formula

A^{−1} = (1/det A) · C = (1/det A) ( A_11 A_21 … A_n1 ; A_12 A_22 … A_n2 ; … ; A_1n A_2n … A_nn ),   (1)

where A_ij are the algebraic complements of the elements a_ij of the matrix A.

Proof. Let the matrix A have an inverse matrix A^{−1}. Then the conditions AA^{−1} = A^{−1}A = E are satisfied, from which it follows that det(AA^{−1}) = det E = 1. From the last equality we obtain that the determinants det A ≠ 0 and det A^{−1} ≠ 0; these determinants are related by the relation det A^{−1} = 1/det A. The matrices A and A^{−1} are non-degenerate because their determinants are non-zero.

Let now the matrix A be non-degenerate. Let us prove that A has an inverse matrix A^{−1} determined by formula (1). To do this, let us look at the product AC of the matrix A and the matrix C adjoint to it.

According to the matrix multiplication rule, the element of the product AC of the matrices A and C has the form

(AC)_ij = a_i1 A_j1 + a_i2 A_j2 + … + a_in A_jn.

Since the sum of the products of the elements of the i-th row by the algebraic complements of the corresponding elements of the j-th row is equal to zero at i ≠ j and to det A at i = j, we obtain

AC = (det A) E,

where E is the identity matrix of order n. The equality CA = (det A) E is proved in a similar way. Thus,

A · (C/det A) = (C/det A) · A = E,

which means that A^{−1} = C/det A, and the matrix (1) is the inverse of the matrix A. Therefore, the non-singular matrix A has an inverse matrix, which is determined by formula (1).

Corollary 1. The determinants of the matrices A and A^{−1} are related by the relation det A^{−1} = 1/det A.

Corollary 2. The main property of the adjoint matrix C to the matrix A is expressed by the equalities

AC = CA = (det A) E.

Corollary 3. The determinant of a non-singular matrix A and the determinant of the matrix C adjoint to it are bound by the equality

det C = (det A)^{n−1}.

Corollary 3 follows from the equality AC = (det A) E and the property of determinants according to which, when a matrix of order n is multiplied by a number, its determinant is multiplied by the n-th power of this number. In this case

det A · det C = det(AC) = det((det A) E) = (det A)^n,

whence it follows that det C = (det A)^{n−1}.

Example. Find the inverse of the matrix A (the entries of the matrix in the original are not reproduced here).

Solution. The determinant of the matrix A is different from zero, therefore the matrix A has an inverse. To find it, we first calculate the nine algebraic complements A_11, A_12, …, A_33. Now, using formula (1), we write the inverse matrix A^{−1} = (1/det A) C.
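Formula (1) translates directly into code. A minimal sketch follows; the 3 × 3 matrix is our own stand-in, since the matrix of the worked example was lost:

```python
# Inverse via formula (1): the adjoint matrix divided by the determinant.
import numpy as np

def inverse_via_adjoint(A):
    n = A.shape[0]
    det_A = np.linalg.det(A)
    if abs(det_A) < 1e-12:
        raise ValueError("matrix is degenerate, no inverse")
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # minor M_ij: delete row i and column j
            M = np.delete(np.delete(A, i, axis=0), j, axis=1)
            # algebraic complement A_ij, written into the transposed
            # position C[j, i] -- this is the adjoint matrix C
            C[j, i] = (-1) ** (i + j) * np.linalg.det(M)
    return C / det_A

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
print(inverse_via_adjoint(A) @ A)   # ~ identity matrix
```

Writing the cofactor of a_ij into position (j, i) performs exactly the transposition mentioned after Definition 2.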

4.8. Elementary transformations of matrices. The Gauss algorithm.

Definition 1. Elementary transformations of a matrix A of size m × n are the following operations.

    1. Multiplying any row (column) of the matrix by any non-zero number.

    2. Adding to the i-th row of the matrix any other j-th row multiplied by an arbitrary number.

    3. Adding to the i-th column of the matrix any other j-th column multiplied by an arbitrary number.

    4. Rearranging the rows (columns) of the matrix.

Definition 2. Matrices A and B are called equivalent if one of them can be transformed into the other using elementary transformations. We write A ∼ B.

Matrix equivalence has the following properties: reflexivity (A ∼ A), symmetry (if A ∼ B, then B ∼ A) and transitivity (if A ∼ B and B ∼ C, then A ∼ C).


Definition 3. A matrix A is called a step matrix if it has the following properties:

1) if the i-th row is zero, i.e. consists entirely of zeros, then the (i+1)-th row is also zero;

2) if the first non-zero elements of the i-th and (i+1)-th rows are located in the columns with numbers k and l respectively, then k < l.

Example. Of the three matrices given in the original, the first two are step matrices and the third is not.
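The matrices of this example were lost in extraction; as a hedged stand-in, here are matrices of our own illustrating the definition:

```latex
% Step matrices: each leading non-zero entry lies strictly to the right of
% the one in the previous row, and zero rows come last.
\begin{pmatrix} 1 & 2 & 0 \\ 0 & 3 & 5 \\ 0 & 0 & 4 \end{pmatrix},\qquad
\begin{pmatrix} 2 & 1 & 7 & 0 \\ 0 & 0 & 3 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}
\quad\text{are step matrices;}\qquad
\begin{pmatrix} 1 & 2 \\ 3 & 0 \end{pmatrix}
\quad\text{is not, since its second row starts no further right than the first.}
```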

Let us show how, using elementary transformations, one can reduce the matrix A to step form.

Gauss algorithm. Consider a matrix A of size m × n. Without loss of generality we may assume that a_11 ≠ 0. (If the matrix A has at least one non-zero element, then by rearranging the rows and then the columns we can ensure that this element falls at the intersection of the first row and the first column.) Add to the second row of the matrix A the first row multiplied by −a_21/a_11, to the third row the first row multiplied by −a_31/a_11, etc.

As a result we get that

A ∼ ( a_11 a_12 … a_1n ; 0 a′_22 … a′_2n ; … ; 0 a′_m2 … a′_mn ).

The elements in the last m − 1 rows are determined by the formulas:

a′_ij = a_ij − (a_i1/a_11) a_1j,  i = 2, …, m,  j = 2, …, n.

Consider the matrix

A_1 = ( a′_22 … a′_2n ; … ; a′_m2 … a′_mn ).

If all elements of the matrix A_1 are equal to zero, then

A ∼ ( a_11 a_12 … a_1n ; 0 0 … 0 ; … ; 0 0 … 0 )

and the equivalent matrix is a step matrix. If among the elements of A_1 at least one differs from zero, then we may assume without loss of generality that a′_22 ≠ 0 (this can be achieved by rearranging the rows and columns of A_1). Transforming A_1 in exactly the same way as the matrix A, we obtain zeros in the second column below a′_22, and so on.

The matrix A has m rows, so bringing it to step form in the indicated way requires no more than m steps. The process ends at the k-th step if and only if all elements of the remaining lower-right submatrix are equal to zero; the resulting equivalent matrix is then a step matrix.
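A compact sketch of the algorithm in code (our own implementation; row swaps stand in for the "rearrange rows and columns" step of the lecture):

```python
# Reduce a matrix to step (row echelon) form by elementary row transformations.
import numpy as np

def to_step_form(A):
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        # find a row with a non-zero element in this column
        pivot = next((r for r in range(row, m) if abs(A[r, col]) > 1e-12), None)
        if pivot is None:
            continue                       # whole column is zero, move right
        A[[row, pivot]] = A[[pivot, row]]  # rearrange rows
        for r in range(row + 1, m):
            # add the pivot row multiplied by -a_r,col / a_row,col
            A[r] -= (A[r, col] / A[row, col]) * A[row]
        row += 1
        if row == m:
            break
    return A

A = np.array([[1, 2, 3], [2, 4, 7], [3, 6, 10]])
print(to_step_form(A))   # [[1. 2. 3.] [0. 0. 1.] [0. 0. 0.]]
```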

4.9. Finding the inverse matrix using elementary transformations.

For a large matrix it is convenient to find the inverse using elementary transformations. The method is as follows. Write out the composite matrix (A | E) and, following the scheme of the Gauss method, perform elementary transformations on the rows of this matrix (i.e. simultaneously on the matrix A and on the matrix E). As a result, the matrix A is converted into the identity matrix, and the matrix E into the matrix A^{−1}.

Example. Find the matrix inverse to a matrix A.

Solution. Write the composite matrix (A | E) and transform it by elementary row transformations in accordance with the Gauss method. As a result we get a matrix of the form (E | B); from these transformations we conclude that A^{−1} = B (the numerical matrices of the original are not reproduced here).
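The same method as a code sketch (the 3 × 3 matrix is our own example, since the original was lost):

```python
# Gauss-Jordan inversion: run row transformations on the composite (A | E)
# until the left block becomes E; the right block is then A^{-1}.
import numpy as np

def inverse_via_gauss_jordan(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # composite matrix (A | E)
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r, col]))
        if abs(M[pivot, col]) < 1e-12:
            raise ValueError("matrix is degenerate, no inverse")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]               # make the pivot equal to 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]  # clear the rest of the column
    return M[:, n:]                         # right block is A^{-1}

A = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]])
print(inverse_via_gauss_jordan(A) @ A)      # ~ identity
```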

4.10. Matrix rank.

Definition. An integer r is called the rank of a matrix A if A has a non-zero minor of order r, and all minors of order higher than r are equal to zero. The rank of the matrix is denoted by the symbol rank A.

The rank of a matrix can be calculated by the method of bordering minors.


Example. Using the method of bordering minors, calculate the rank of a matrix (the worked computation of the original is not reproduced here).

The above method is not always convenient, because it is associated with the calculation of a large number of determinants.

Statement. The rank of a matrix does not change during elementary transformations of its rows and columns.

The statement above indicates a second way to calculate the rank of a matrix, called the method of elementary transformations. To find the rank of a matrix, reduce it by the Gauss method to step form; the rank is the order of the maximal non-zero minor, i.e. the number of non-zero rows of the step matrix. Let us explain this with an example.

Example. Using elementary transformations, calculate the rank of a matrix A.

Solution. We perform a chain of elementary transformations in accordance with the Gauss method and obtain a chain of equivalent matrices ending in a step matrix; the rank equals the number of its non-zero rows.
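The same method in code (a sketch; np.linalg.matrix_rank serves as an independent check):

```python
# Rank via elementary transformations: reduce to step form by Gauss
# elimination and count the non-zero rows.
import numpy as np

def rank_via_elementary(A, eps=1e-10):
    A = A.astype(float).copy()
    m, n = A.shape
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if abs(A[r, col]) > eps), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]
        for r in range(row + 1, m):
            A[r] -= (A[r, col] / A[row, col]) * A[row]
        row += 1
    return row   # number of non-zero rows in the step form

A = np.array([[1, 2, 3], [2, 4, 7], [3, 6, 10]])
print(rank_via_elementary(A), np.linalg.matrix_rank(A))   # both print 2
```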

Definition. The product of two matrices A and B is the matrix C whose element located at the intersection of the i-th row and the j-th column is equal to the sum of the products of the elements of the i-th row of the matrix A by the corresponding (in order) elements of the j-th column of the matrix B.

From this definition follows the formula for the element of the matrix C:

c_ij = a_i1 b_1j + a_i2 b_2j + … + a_in b_nj.

The product of the matrix A by the matrix B is denoted AB.
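The definitional formula translates directly into code; a minimal sketch (any conforming sizes work):

```python
# c_ij = sum over k of a_ik * b_kj: i-th row of A times j-th column of B.
def mat_mul(A, B):
    assert len(A[0]) == len(B), "columns of A must match rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))   # [[19, 22], [43, 50]]
```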

Example 1. Find the product of two matrices A and B.

Solution. It is convenient to arrange the computation as in Fig. 2 of the original (the figure is not reproduced here): gray arrows indicate which row of the matrix A is multiplied by which column of the matrix B, and colored lines connect each element of C with those elements of A and B whose products are added to obtain it.

Computing the elements in this way, we obtain all the entries of the product and can write down the matrix C = AB.

The product AB of two matrices makes sense only if the number of columns of the matrix A coincides with the number of rows of the matrix B.

This important feature is easier to remember with the following reminder: a product of matrices of sizes (m × n) and (n × k) exists, and has size m × k.

There is another important feature of the product of matrices with respect to the number of rows and columns:

in the product AB the number of rows is equal to the number of rows of the matrix A, and the number of columns is equal to the number of columns of the matrix B.

Example 2. Find the number of rows and columns of a matrix C which is the product of two matrices A and B of the following dimensions:

a) 2 × 10 and 10 × 5;

b) 10 × 2 and 2 × 5.

(Answers: a) C has size 2 × 5; b) C has size 10 × 5.)

Example 3. Find the product of matrices A and B.

Solution. The number of rows of matrix A is 2 and the number of columns of matrix B is 2, therefore the dimension of the matrix C = AB is 2 × 2.

Calculating the elements of C = AB, we write down the found product of the matrices.


Example 5. Find the product of matrices A and B.

Solution. The number of rows of matrix A is 2 and the number of columns of matrix B is 1, therefore the dimension of the matrix C = AB is 2 × 1.

Calculating the elements of C = AB, we write the product as a column matrix.


Example 6. Find the product of matrices A and B.

Solution. The number of rows of matrix A is 3 and the number of columns of matrix B is 3, therefore the dimension of the matrix C = AB is 3 × 3.

Calculating the elements of C = AB, we write down the found product of the matrices.


Example 7. Find the product of matrices A and B.

Solution. The number of rows of matrix A is 1 and the number of columns of matrix B is 1, therefore the dimension of the matrix C = AB is 1 × 1.

Calculating the element of C = AB, we obtain the product of the matrices: a matrix consisting of a single element.


The software implementation of the product of two matrices in C++ is discussed in the corresponding article in the “Computers and Programming” block.

Matrix exponentiation

Raising a matrix to a power is defined as multiplying the matrix by itself. Since a product of matrices exists only when the number of columns of the first matrix coincides with the number of rows of the second matrix, only square matrices can be raised to a power. The n-th power of a matrix is obtained by multiplying the matrix by itself n times:

Aⁿ = A · A · … · A  (n factors).

Example 8. Given a matrix A, find A² and A³.

Find the matrix products yourself and then look at the solution.
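A sketch of the computation for an assumed 2 × 2 matrix (the matrix of the example did not survive extraction):

```python
# A^2 = A * A, A^3 = A^2 * A for an assumed example matrix.
import numpy as np

A = np.array([[1, 2], [3, 4]])
A2 = A @ A       # A squared
A3 = A2 @ A      # A cubed
print(A2)        # [[ 7 10] [15 22]]
print(A3)        # [[ 37  54] [ 81 118]]
```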

Example 9. Given a matrix A, find the product of the given matrix and its transpose, and the product of the transpose and the given matrix.

Properties of the product of two matrices

Property 1. The product of any matrix A and the identity matrix E of the corresponding order, both on the right and on the left, coincides with the matrix A, i.e. AE = EA = A.

In other words, the role of the identity matrix in matrix multiplication is the same as the role of the number 1 in the multiplication of numbers.

Example 10. Verify that Property 1 holds by finding the products of a matrix A with the identity matrix on the right and on the left.

Solution. Since the matrix A contains three columns, we need to find the product AE, where E is the identity matrix of the third order. Computing the elements of the product C = AE, we find that AE = A.

Now let us find the product EA, where E is the identity matrix of the second order, since the matrix A contains two rows. Computing the elements of the product C = EA, we likewise find that EA = A.

Theorem. Let A and B be two square matrices of order n. Then the determinant of their product is equal to the product of the determinants, i.e.

|AB| = |A| · |B|.

Proof. Let A = (a_ij)_{n×n}, B = (b_ij)_{n×n}. Consider the determinant d_2n of order 2n

d_2n = | A  0 |
       | −E  B |.

Expanding it by Laplace's theorem in terms of the first n rows, we get

d_2n = |A| · |B| · (−1)^{(1+⋯+n)+(1+⋯+n)} = |A| · |B|.

If we show that the determinant d_2n is equal to the determinant of the matrix C = AB, then the theorem will be proven.

In d_2n we make the following transformations: to the 1st row we add the (n+1)-th row multiplied by a_11, the (n+2)-th row multiplied by a_12, …, the 2n-th row multiplied by a_1n. In the resulting determinant the first n elements of the first row will be zeros, and the other n elements will be:

a_11 b_11 + a_12 b_21 + … + a_1n b_n1 = c_11;
a_11 b_12 + a_12 b_22 + … + a_1n b_n2 = c_12;
…
a_11 b_1n + a_12 b_2n + … + a_1n b_nn = c_1n.

Similarly, we obtain zeros in the rows 2, …, n of the determinant d_2n, and the last n elements in each of these rows become the corresponding elements of the matrix C. As a result, the determinant d_2n is transformed into an equal determinant:

d_2n = |C| · (−1)^{(1+⋯+n)+((n+1)+⋯+2n)} · |−E| = |C| = |AB|. ∎

Corollary. The determinant of the product of a finite number of square matrices is equal to the product of their determinants.

Proof. By induction: |A_1 ⋯ A_{i+1}| = |A_1 ⋯ A_i| · |A_{i+1}| = … = |A_1| ⋯ |A_{i+1}|. This chain of equalities is valid by the theorem. ∎

Inverse matrix.

Let A = (a_ij)_{n×n} be a square matrix over the field P.

Definition 1. The matrix A is called singular if its determinant is equal to 0, and non-singular otherwise.

Definition 2. Let A ∈ P_n. We call a matrix B ∈ P_n the inverse of A if AB = BA = E.

Theorem (matrix invertibility criterion). A matrix A is invertible if and only if it is non-singular.

Proof. Let A have an inverse matrix. Then AA^{−1} = E and, applying the theorem on the multiplication of determinants, we obtain |A| · |A^{−1}| = |E|, or |A| · |A^{−1}| = 1. Therefore |A| ≠ 0.

Conversely, let |A| ≠ 0. It is necessary to show that there is a matrix B such that AB = BA = E. As B we take the following matrix:

B = (1/|A|) ( A_11 A_21 … A_n1 ; A_12 A_22 … A_n2 ; … ; A_1n A_2n … A_nn ),

where A_ij is the algebraic complement of the element a_ij. Then the product AB is the identity matrix (it is enough to use Corollaries 1 and 2 of Laplace's theorem, § 6), i.e. AB = E. Similarly it is shown that BA = E. ∎

Example. For the matrix A, find the inverse matrix or prove that it does not exist.

Solution. det A = −3 ≠ 0, so the inverse matrix exists. Now we calculate the algebraic complements:

A_11 = −3, A_21 = 0, A_31 = 6;

A_12 = 0, A_22 = 0, A_32 = −3;

A_13 = 1, A_23 = −1, A_33 = −1.

So, the inverse matrix looks like:

B = (1/(−3)) ( −3 0 6 ; 0 0 −3 ; 1 −1 −1 ) = ( 1 0 −2 ; 0 0 1 ; −1/3 1/3 1/3 ).

Algorithm for finding the inverse of a matrix A.

1. Calculate det A.

2. If det A = 0, then the inverse matrix does not exist. If det A ≠ 0, calculate the algebraic complements.

3. Place the algebraic complements in the appropriate (transposed) positions.

4. Divide all elements of the resulting matrix by det A.

Exercise 1. Find out whether the inverse matrix is unique.

Exercise 2. Let the elements of the matrix A be integers. Will the elements of the inverse matrix necessarily be integers?

Systems of linear equations.

Definition 1. An equation of the form a_1 x_1 + … + a_n x_n = b, where a_1, …, a_n are numbers and x_1, …, x_n are unknowns, is called a linear equation with n unknowns.

A collection of s linear equations with n unknowns is called a system of s linear equations with n unknowns, i.e.

a_11 x_1 + a_12 x_2 + … + a_1n x_n = b_1,
a_21 x_1 + a_22 x_2 + … + a_2n x_n = b_2,
…
a_s1 x_1 + a_s2 x_2 + … + a_sn x_n = b_s.   (1)

The matrix A composed of the coefficients of the unknowns of system (1) is called the matrix of system (1):

A = ( a_11 … a_1n ; … ; a_s1 … a_sn ).

If we add the column of free terms to the matrix A, we obtain the extended matrix of system (1).

X = ( x_1 ; … ; x_n ) is the column of unknowns; B = ( b_1 ; … ; b_s ) is the column of free terms.

In matrix form, the system looks like: AX=B (2).

A solution of system (1) is an ordered set of n numbers (α_1, …, α_n) such that after the substitution x_1 = α_1, x_2 = α_2, …, x_n = α_n every equation of (1) becomes a numerical identity.

Definition 2. System (1) is called consistent if it has solutions, and inconsistent otherwise.

Definition 3. Two systems are called equivalent if their solution sets coincide.

There is a universal way to solve system (1): the Gauss method (the method of successive elimination of unknowns); see p. 15.

Let us consider in more detail the case when s = n. There is Cramer's method for solving such systems.

Let d = det A, and let d_j be the determinant obtained from d by replacing its j-th column with the column of free terms.

Theorem (Cramer's rule). If the determinant of the system d ≠ 0, then the system has a unique solution, obtained by the formulas:

x_1 = d_1/d, …, x_n = d_n/d.

Proof. The idea of the proof is to rewrite system (1) in the form of a matrix equation. Put A = (a_ij) and consider the equation AX = B (2) with the unknown column matrix X. Since A, X, B are matrices of sizes n × n, n × 1, n × 1 respectively, the product AX of rectangular matrices is defined and has the same dimensions as the matrix B. Thus, equation (2) makes sense.

The connection between system (1) and equation (2) is that a column (α_1, …, α_n)ᵀ is a solution of the given system if and only if this column is a solution of equation (2); indeed, this statement means the componentwise equality of the column matrices AX and B.

Since d ≠ 0, the matrix A is non-singular and has an inverse A^{−1}, so from (2) we get X = A^{−1}B. Because

A^{−1} = (1/d) ( A_11 A_21 … A_n1 ; … ; A_1n A_2n … A_nn ),

where A_ij is the algebraic complement of the element a_ij in the determinant d, we obtain

x_j = (1/d)(b_1 A_1j + b_2 A_2j + … + b_n A_nj).   (4)

In equality (4) the expression in brackets is the expansion along the j-th column of the determinant d_j, which is obtained from the determinant d after replacing its j-th column with the column of free terms. Therefore, x_j = d_j/d. ∎

Corollary. If a homogeneous system of n linear equations in n unknowns has a non-zero solution, then the determinant of this system is equal to zero.
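Cramer's rule in code (a sketch; the 3 × 3 system is our own example, and np.linalg.solve serves as an independent check):

```python
# Cramer's rule: d_j is the determinant of the system matrix with its j-th
# column replaced by the column of free terms; x_j = d_j / d.
import numpy as np

def cramer(A, b):
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("determinant is zero, Cramer's rule does not apply")
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b          # replace the j-th column by the free terms
        x[j] = np.linalg.det(Aj) / d
    return x

A = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]])
b = np.array([3.0, 4.0, 2.0])
print(cramer(A, b), np.linalg.solve(A, b))   # the two answers agree
```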


The determinant of a matrix A is denoted det A (or |A|). By definition, the determinant of an n-th order matrix is the sum, over all permutations σ of the set {1, …, n}, of the products a_{1σ(1)} a_{2σ(2)} ⋯ a_{nσ(n)}, each multiplied by the sign of the corresponding permutation:

det A = Σ_σ sgn(σ) · a_{1σ(1)} a_{2σ(2)} ⋯ a_{nσ(n)}.

The second-order determinant is equal to the product of the elements of the main diagonal minus the product of the elements of the secondary diagonal:

| a_11 a_12 ; a_21 a_22 | = a_11 a_22 − a_12 a_21.

For the third order we get the triangle rule:

| a_11 a_12 a_13 ; a_21 a_22 a_23 ; a_31 a_32 a_33 | =
= a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 − a_13 a_22 a_31 − a_11 a_23 a_32 − a_12 a_21 a_33.
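Both rules written out directly in code (a sketch; the inputs are nested lists):

```python
# Second-order rule and the triangle rule for determinants.
def det2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def det3(a):
    return (a[0][0] * a[1][1] * a[2][2]
            + a[0][1] * a[1][2] * a[2][0]
            + a[0][2] * a[1][0] * a[2][1]
            - a[0][2] * a[1][1] * a[2][0]
            - a[0][0] * a[1][2] * a[2][1]
            - a[0][1] * a[1][0] * a[2][2])

print(det2([[1, 2], [3, 4]]))                    # -2
print(det3([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```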
The simplest properties of determinants

The determinant of a matrix with a zero row (column) is equal to zero

The determinant of a triangular matrix is equal to the product of the elements located on the main diagonal. (A matrix is upper triangular if all the elements below the main diagonal are zero.)

The determinant of a diagonal matrix is equal to the product of the elements located on the main diagonal. (A matrix is diagonal if all the elements located outside the main diagonal are zero.)

Basic properties of determinants

Let P be a field of scalars and A an n × n matrix over P.

I) The determinant does not change under transposition: det Aᵀ = det A.

Proof: denote σ′ = σ^{−1}. If σ "runs through" the entire set of permutations, then σ^{−1} also "runs through" the entire set, and sgn σ^{−1} = sgn σ; hence the sums defining det A and det Aᵀ consist of the same terms, i.e. det Aᵀ = det A.


II) When two columns (rows) of a matrix are interchanged, its determinant changes sign.

Proof:

I) Rearranging columns. Let B be the matrix obtained from A by interchanging the two columns with numbers p and q, where p < q. Consider the transposition τ = (p, q). A transposition is an odd permutation, so sgn τ = −1. In the proof we use the equality sgn(στ) = sgn σ · sgn τ = −sgn σ. If σ runs through the entire set of permutations, then στ also runs through all permutations, and each term of det B equals the corresponding term of det A taken with the opposite sign; hence det B = −det A.

II) Rearranging rows. Let B be obtained from A by interchanging two rows; then Bᵀ is obtained from Aᵀ by interchanging two columns, and det B = det Bᵀ = −det Aᵀ = −det A.

III) The determinant of a matrix having two identical rows (columns) is equal to zero.

Proof:

we carry out the proof for a field where 1 + 1 ≠ 0 (characteristic different from 2).

Comment: for the remaining case, find the proof in Kulikova's textbook Algebra and Number Theory.

Let the matrix have two identical rows with the numbers p and q, where p ≠ q. Interchanging the rows p and q, we get the same matrix A, while by property II the determinant changes sign: det A = −det A. Hence 2 det A = 0 and det A = 0.

If a matrix has two identical columns, then the transposed matrix has two identical rows, and det A = det Aᵀ = 0.

IV) If all the elements of some row (column) of the matrix are multiplied by a scalar λ, then the determinant is multiplied by λ.

Proof:

let B be obtained from A by multiplying the i-th row by λ. Every term of the determinant contains exactly one element of the i-th row, so every term acquires the factor λ; hence det B = λ det A.

The proof for columns is similar.

V) The determinant of a matrix two of whose rows (columns) are proportional is equal to zero.

Proof:

let the rows of the matrix be proportional, i.e. the q-th row is equal to the p-th row multiplied by a scalar λ. Taking the factor λ out of the q-th row by property IV, we obtain a determinant with two identical rows, which by property III is equal to zero.

For columns: if two columns are proportional, then the transposed matrix has two proportional rows, and the determinant is again zero.
VI) If each element of the i-th row (column) of a square matrix is the sum of two terms, then the determinant is equal to the sum of two determinants. In the matrix of the first determinant, the i-th row (column) consists of the first terms, and in the matrix of the second determinant of the second terms. The remaining elements of the matrices of these determinants are the same as those of the original matrix.

Proof:

each term of the determinant contains exactly one factor from the i-th row, and splitting this factor into its two summands splits the whole sum into the two indicated determinants.
VII) If to any row (column) of the matrix of the determinant we add another row (column) multiplied by a scalar, then the determinant does not change.

Proof:

by property VI the new determinant is the sum of the original determinant and a determinant having two proportional rows; the latter is zero by property V.

The same holds for columns.

VIII) If any row (column) of the matrix is a linear combination of its other rows (columns), then the determinant is equal to zero.

Proof:

if some row is a linear combination of the other rows, then the other rows, multiplied by suitable scalars, can be added to it so that a zero row is obtained; by property VII the determinant does not change, and the determinant of a matrix with a zero row is equal to zero.

Example (the numerical matrix of the original is not reproduced): first multiply the first row by −2 and add it to the second, then multiply the first row by −3 and add it to the third. This rule of reduction to triangular form is used for determinants of any order, since the determinant of a triangular matrix is equal to the product of the elements located on the main diagonal.
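The reduction rule as code (a sketch; it tracks the sign of each row swap, uses only type-VII operations otherwise, and multiplies the diagonal at the end):

```python
# Determinant by reduction to triangular form.
import numpy as np

def det_by_triangularization(A, eps=1e-12):
    A = A.astype(float).copy()
    n = A.shape[0]
    sign = 1.0
    for col in range(n):
        pivot = next((r for r in range(col, n) if abs(A[r, col]) > eps), None)
        if pivot is None:
            return 0.0                     # a zero column -> determinant 0
        if pivot != col:
            A[[col, pivot]] = A[[pivot, col]]
            sign = -sign                   # a row swap changes the sign
        for r in range(col + 1, n):
            A[r] -= (A[r, col] / A[col, col]) * A[col]  # property VII
    return sign * np.prod(np.diag(A))

A = np.array([[1, 2, 3], [2, 5, 6], [3, 6, 10]])
print(det_by_triangularization(A), np.linalg.det(A))   # both ~ 1.0
```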

If a square matrix is the product of some matrices (which may be rectangular), then it is often important to be able to express the determinant of the product in terms of properties of the factors; a theorem to this effect is proved below (see "Determinant of matrix product").

Minors and algebraic complements.

Determinant theorems.

Let P be a field of scalars and A = (a_ij) an n × n matrix over P.

Def. The minor M_ij of the element a_ij of a determinant of order n is the determinant of order n − 1 obtained by striking out the i-th row and the j-th column.

Principal minors of the determinant are the minors composed of rows and columns with the same numbers.

Example: consider a matrix and calculate its minors (the worked computation of the original is not reproduced).

Definition. The algebraic complement of the element a_ij is the number A_ij = (−1)^{i+j} M_ij.

Example: for instance, A_12 = (−1)^{1+2} M_12 = −M_12.

Lemma 1. If all elements of the last row of a matrix are equal to zero except possibly a_nn, then det A = a_nn M_nn.

Proof:

in the sum defining the determinant, only those terms are non-zero for which σ(n) = n. Such a permutation has the form σ = (σ′(1), …, σ′(n − 1), n), where σ′ is a permutation of the set {1, …, n − 1}; let us match σ with σ′.

Such a correspondence is a one-to-one mapping from the set of permutations fixing n to the set of permutations of degree n − 1. Obviously σ and σ′ have the same inversions, which means they have the same parity and signs; hence det A = a_nn Σ_{σ′} sgn(σ′) a_{1σ′(1)} ⋯ a_{n−1,σ′(n−1)} = a_nn M_nn.

If all elements of any row (column) of a matrix are equal to zero, with the possible exception of one element a_ij, then the determinant of the matrix is equal to the product of this element and its algebraic complement: det A = a_ij A_ij.

Proof:

let all the elements of the i-th row of the matrix be zero except the element a_ij. By rearranging adjacent rows and columns we move the element a_ij to the lower right corner; this takes n − i row interchanges and n − j column interchanges, so the sign changes (n − i) + (n − j) times. The result is a matrix in which all elements of the last row, except possibly the corner one, are equal to zero. By Lemma 1 its determinant is a_ij M_ij, whence det A = (−1)^{(n−i)+(n−j)} a_ij M_ij = (−1)^{i+j} a_ij M_ij = a_ij A_ij.

Lagrange's theorem

The determinant is equal to the sum of the products of the elements of any column (row) of the matrix by their algebraic complements. In other words, the expansion along the j-th column of the matrix has the form

det A = a_1j A_1j + a_2j A_2j + … + a_nj A_nj,

and the expansion along the i-th row:

det A = a_i1 A_i1 + a_i2 A_i2 + … + a_in A_in.

Proof:

consider the j-th column of the matrix and write each of its elements in the form a_ij = 0 + … + 0 + a_ij + 0 + … + 0. By the 6th property of determinants, the determinant splits into the sum of n determinants, in each of which the j-th column contains a single (possibly) non-zero element. By the previous statement each of these determinants is equal to a_ij A_ij, which gives the expansion along the j-th column.

The formula for the expansion along the i-th row of the matrix is proved in a similar way.
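The expansion along the first row, applied recursively, gives a direct (if slow) determinant computation; a sketch:

```python
# det A = sum over j of a_1j * A_1j, with A_1j = (-1)^(1+j) * M_1j,
# applied recursively down to 1x1 matrices.
def det_by_expansion(a):
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_by_expansion(minor)
    return total

print(det_by_expansion([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```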

Theorem 2

The following equalities are valid (i ≠ j):

a_1i A_1j + a_2i A_2j + … + a_ni A_nj = 0,   (1)
a_i1 A_j1 + a_i2 A_j2 + … + a_in A_jn = 0.   (2)

Consider the matrix B that is obtained from the matrix A as follows: all columns of B, except the j-th column, are the same as those of A, and the j-th column of B coincides with the i-th column of A. Then B has two identical columns, so the determinant of the matrix B is equal to zero; let us expand the determinant of B along the j-th column.

Then we obtain equality (1). Formula (2) is shown similarly, using rows.

Corollary: combining Lagrange's theorem with Theorem 2, a_1i A_1j + … + a_ni A_nj = δ_ij · det A, where δ_ij equals 1 for i = j and 0 otherwise.

Determinant of matrix product

Let P be a field of scalars, and let A be an n × n matrix over P.

Lemma 1. Let F be an elementary matrix of order n. Then the equality det(FA) = det F · det A is true:

1) Let F be obtained from the identity matrix by multiplying the i-th row by a scalar λ; then det F = λ. The matrix FA is obtained from A by multiplying the i-th row by λ, so det(FA) = λ det A = det F · det A.

2) Let F be obtained from the identity matrix by adding to the i-th row the j-th row multiplied by λ; then det F = 1. The matrix FA is obtained from A by the same row operation, so det(FA) = det A = det F · det A.

Lemma 2. If F_1, …, F_s are elementary matrices, then det(F_1 ⋯ F_s A) = det F_1 ⋯ det F_s · det A.

  • 1) For s = 1 the proof follows from Lemma 1.
  • 2) The general case follows from statement 1) by induction.

Theorem 1

The determinant of the product of two matrices is equal to the product of their determinants, i.e. det(AB) = det A · det B.

Proof:

1) Let the rows of the matrix A be linearly independent. Then there is a chain of elementary transformations taking the identity matrix E to A, i.e. A = F_1 ⋯ F_s for some elementary matrices F_i. By Lemma 2 it follows that det A = det F_1 ⋯ det F_s, and then

det(AB) = det(F_1 ⋯ F_s B) = det F_1 ⋯ det F_s · det B = det A · det B.

2) Let the rows of A be linearly dependent. Then there is a chain of elementary transformations that translates A into an echelon matrix that has a zero row, so det A = 0. The same transformations give the product AB a zero row as well, hence det(AB) = 0 = det A · det B.

Necessary and sufficient conditions for the determinant to be equal to zero


Let P be a field of scalars and A an n × n matrix over P.

Theorem 1

det A = 0 ⟺ the rows (columns) of the matrix are linearly dependent.

Sufficiency:

if the rows (columns) of the matrix are linearly dependent, then some row is a linear combination of the other rows, and by property VIII of determinants det A = 0.

Necessity:

let det A = 0. Let us prove that the rows are linearly dependent. Suppose that the rows are linearly independent; then there is a chain of elementary transformations taking the matrix to the identity matrix, and from what was proved above it follows that det A ≠ 0. We have obtained a contradiction, so the rows are linearly dependent. The same argument applied to the column vectors shows that the columns are linearly dependent as well.

Theorem 2

For a square matrix A the following conditions are equivalent:

  • 1) det A ≠ 0
  • 2) the rows (columns) of A are linearly independent
  • 3) A is invertible
  • 4) A can be represented as a product of elementary matrices

Proof:

the equivalence of 1) and 2) was proven in Theorem 1; the remaining implications follow from the lemmas on elementary matrices above.

Matrix partitioning

If an m × n matrix A is written in the form

A = ( A_11 A_12 ; A_21 A_22 ),   (1)

then the submatrices A_11, A_12, A_21, A_22 form the blocks of the matrix, and representation (1) is called a partitioning of the matrix.

If the matrix product AB exists and the partition of A along its columns corresponds to the partition of B along its rows, then we can expect that AB has blocks given by the formula

(AB)_ik = Σ_j A_ij B_jk.

Thus, we assume that the product of matrices in terms of blocks obtained by appropriate partitions of the factors formally coincides with the product of these matrices in terms of scalar elements. Let us show this with an example:

Exercise 1. Let A and B be matrices partitioned into blocks as above (the specific matrices of the original are not reproduced). The block formula for their product can be verified by direct calculation.
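A numeric check of the block formula (our own 4 × 4 example, partitioned into 2 × 2 blocks):

```python
# Block multiplication agrees with ordinary multiplication.
import numpy as np

A = np.arange(16).reshape(4, 4).astype(float)
B = (np.arange(16)[::-1]).reshape(4, 4).astype(float)

A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

C_blocks = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
print(np.allclose(C_blocks, A @ B))   # True
```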

Theorem (1)

Let the m × n matrix A have blocks A_ij, where A_ij is an m_i × n_j matrix, and let the n × p matrix B have blocks B_jk of size n_j × p_k. Then C = AB has blocks

C_ik = Σ_j A_ij B_jk.

Proof. Note that each product A_ij B_jk exists and is an m_i × p_k matrix. Therefore C_ik exists and is an m_i × p_k matrix. For fixed k each C_ik has p_k columns, and for fixed i each has m_i rows, which implies that the C_ik are the blocks of some partitioning of a matrix of the size of C.

Let c be an element of the matrix C located in the block cell (i, k). The element c is the sum of the products of the elements of a row of the matrix A by the elements of a column of the matrix B. The elements of this row of A lying in the block A_ij are multiplied exactly by the elements of the column of B lying in the block B_jk, where the index j is determined by the partition. Summing over j, we see that c coincides with the corresponding element of Σ_j A_ij B_jk; hence C_ik = Σ_j A_ij B_jk.

We have so far defined minors of order n − 1 for the determinant. In general, if from a matrix we remove all rows except the rows with numbers i_1 < … < i_k and all columns except the columns with numbers j_1 < … < j_k, then the determinant of the resulting matrix is called a minor of order k of the matrix.

Minors for which the row numbers coincide with the column numbers are called principal minors of the matrix. If A is a square matrix, then the algebraic complement of a minor is defined similarly to that of a single element (the explicit example of the original is elided).


Comment. The operation of matrix multiplication is non-commutative, i.e. in general AB ≠ BA. Indeed, if the product AB exists, then BA may not exist at all due to a mismatch of dimensions (see the previous example). If both AB and BA exist, then they can have different dimensions (if the factors are rectangular).

For square matrices of the same order, the products AB and BA exist and have the same dimension, but their corresponding elements are generally not equal.

However, in some cases the products AB and BA coincide.
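A quick illustration with arbitrary 2 × 2 matrices of our own:

```python
# AB and BA both exist and have the same size, yet differ.
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
print(A @ B)   # [[2 1] [4 3]]
print(B @ A)   # [[3 4] [1 2]]
```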

Consider the product of a square matrix A and the identity matrix E of the same order: computing the elements shows that AE = A. We get the same result for the product EA. So, for any square matrix A, AE = EA = A.

Inverse matrix.

Definition 3.7. A square matrix A is called singular if det A = 0, and nonsingular if det A ≠ 0.

Definition 3.8. A square matrix B is called the inverse of a square matrix A of the same order if AB = BA = E. In this case, B is denoted A^{−1}.

Let us consider the condition for the existence of a matrix inverse to a given one and the method for calculating it.

Theorem 3.2. For an inverse matrix to exist, it is necessary and sufficient that the original matrix be nonsingular.

Proof.

1) Necessity: since AA^{−1} = E, then det A · det A^{−1} = det E = 1 (Theorem 3.1), therefore det A ≠ 0.

2) Sufficiency: define the matrix B in the following form:

B = (1/det A) ( A_11 A_21 … A_n1 ; A_12 A_22 … A_n2 ; … ; A_1n A_2n … A_nn ).

Then any element of the product AB (or BA) not lying on the main diagonal is equal to the sum of the products of the elements of one row (or column) of the matrix A by the algebraic complements of the elements of another row (column), divided by det A, and, therefore, is equal to 0 (as a determinant with two equal rows). The elements on the main diagonal are equal to det A / det A = 1. Thus, AB = BA = E. The theorem has been proven.

Comment. Let us formulate once again the method of calculating the inverse matrix: its elements are the algebraic complements to the elements of the transposed matrix A, divided by its determinant.
