Solving systems of linear equations using the Jordan-Gauss method

§1. Systems of linear equations.

A system of the form

a11 x1 + a12 x2 + … + a1n xn = b1,
a21 x1 + a22 x2 + … + a2n xn = b2,
… … …
am1 x1 + am2 x2 + … + amn xn = bm        (1)

is called a system of m linear equations with n unknowns.

Here x1, x2, …, xn are the unknowns, aij are the coefficients of the unknowns, and b1, b2, …, bm are the free terms of the equations.

If all free terms of the equations are equal to zero, the system is called homogeneous. A solution of the system is a collection of numbers c1, c2, …, cn such that substituting them into the system in place of the unknowns turns every equation into an identity. A system is called consistent if it has at least one solution. A consistent system that has a unique solution is called determinate. Two systems are called equivalent if their solution sets coincide.

System (1) can be written in matrix form as the equation

AX = B,        (2)

where A = (aij) is the matrix of coefficients of the unknowns, X is the column of unknowns, and B is the column of free terms.

§2. Consistency of systems of linear equations.

The extended (augmented) matrix of system (1) is the matrix (A | B) obtained by appending the column of free terms B to the matrix A.

Kronecker-Capelli theorem. System (1) is consistent if and only if the rank of the system matrix is equal to the rank of the extended matrix:

r(A) = r(A | B).
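Computationally, the Kronecker-Capelli test amounts to comparing two ranks. A minimal Python/NumPy sketch on an illustrative system (not one taken from the text):

```python
import numpy as np

# Illustrative system only (not from the text)
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0],
              [1.0, 1.0,  1.0]])
b = np.array([3.0, 6.0, 2.0])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

if rank_A == rank_Ab:
    kind = "unique solution" if rank_A == A.shape[1] else "infinitely many solutions"
    print("consistent:", kind)
else:
    print("inconsistent")
```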

§3. Solution of systems of n linear equations with n unknowns.

Consider an inhomogeneous system of n linear equations with n unknowns:

a11 x1 + a12 x2 + … + a1n xn = b1,
… … …
an1 x1 + an2 x2 + … + ann xn = bn.        (3)

Cramer's theorem. If the main determinant of system (3), Δ = det A, is non-zero, then the system has a unique solution, determined by the formulas

xi = Δi / Δ,  i = 1, 2, …, n,

where Δi is the determinant obtained from Δ by replacing its i-th column with the column of free terms.

If Δ = 0 and at least one of the determinants Δi ≠ 0, then the system has no solutions.

If Δ = 0 and all Δi = 0, then the system has infinitely many solutions.
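Cramer's formulas are straightforward to implement directly. A minimal Python/NumPy sketch on hypothetical data (the entries of system (3) are not reproduced in the text):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule for a square A with det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    delta = np.linalg.det(A)
    if np.isclose(delta, 0.0):
        raise ValueError("main determinant is zero: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                       # replace the i-th column by the free terms
        x[i] = np.linalg.det(Ai) / delta   # x_i = Delta_i / Delta
    return x

# Hypothetical 3x3 system (illustrative data only)
A = [[2, 1, -1], [1, 3, 2], [1, 0, 1]]
b = [3, 12, 4]
print(cramer(A, b))                        # agrees with np.linalg.solve(A, b)
```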

System (3) can also be solved using its matrix form (2). If the rank of the matrix A equals n, i.e. det A ≠ 0, then the matrix A has an inverse A⁻¹. Multiplying the matrix equation AX = B by A⁻¹ on the left, we get

X = A⁻¹B.

The last equality expresses the method of solving systems of linear equations using an inverse matrix.
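The inverse-matrix method is easy to reproduce numerically. A minimal Python/NumPy sketch on a hypothetical 2×2 system:

```python
import numpy as np

# Hypothetical non-singular system (illustrative data only)
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
B = np.array([4.0, 11.0])

if not np.isclose(np.linalg.det(A), 0.0):  # A is non-singular, so A^{-1} exists
    X = np.linalg.inv(A) @ B               # X = A^{-1} B
    print(X)                               # [1. 2.]
```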

Example. Solve a system of equations using an inverse matrix.

Solution. The matrix A of the system is non-singular, since det A ≠ 0, which means the inverse matrix exists. We compute the inverse matrix A⁻¹ and then the solution X = A⁻¹B.

Exercise. Solve the system using Cramer's method.

§4. Solving arbitrary systems of linear equations.

Let a non-homogeneous system of linear equations of the form (1) be given.

Let us assume that the system is consistent, i.e. the condition of the Kronecker-Capelli theorem is satisfied: r(A) = r(A | B). If the rank of the matrix r(A) equals n (the number of unknowns), then the system has a unique solution. If r(A) < n, then the system has infinitely many solutions. Let us explain.

Let the rank of the matrix be r(A) = r < n. Since the rank equals r, there is a non-zero minor of order r; we call it a basis minor. The unknowns whose coefficients form the basis minor are called basic variables; the remaining unknowns are called free variables. Let us rearrange the equations and renumber the variables so that this minor is located in the upper left corner of the system matrix:

.

The first r rows are linearly independent; the remaining rows are linear combinations of them, so the corresponding equations can be discarded. We get:

Give the free variables arbitrary numerical values and keep only the basic variables on the left-hand side, moving the free ones to the right-hand side.

We obtain a system of r linear equations with r unknowns whose determinant is non-zero; it has a unique solution.

The expression of the basic variables through the free variables obtained in this way is called the general solution of the system of linear equations (1). From it one can obtain infinitely many particular solutions by giving the free variables arbitrary values. The particular solution obtained from the general one for zero values of the free variables is called the basic solution. The number of different basic solutions does not exceed C(n, r), the number of ways of choosing r basic variables out of n. A basic solution with non-negative components is called a reference (supporting) solution of the system.
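In practice the split into basic and free variables can be read off from the reduced row echelon form of the extended matrix. A minimal SymPy sketch, assuming a hypothetical consistent system with n = 3 unknowns and rank r = 2 (the data below are illustrative only):

```python
import sympy as sp

# Hypothetical system: x1 + 2*x2 + x3 = 4,  2*x1 + 4*x2 + 3*x3 = 9
A = sp.Matrix([[1, 2, 1],
               [2, 4, 3]])
b = sp.Matrix([4, 9])

rref_matrix, pivot_cols = A.row_join(b).rref()
print(rref_matrix)   # Matrix([[1, 2, 0, 3], [0, 0, 1, 1]])
print(pivot_cols)    # (0, 2): x1 and x3 are basic, x2 is free
# Basic solution: set the free variable x2 = 0, giving x1 = 3, x3 = 1.
```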

Example.

Here the rank is r = 2.

The variables whose coefficients enter the basis minor are basic; the remaining variables are free.

Adding the equations, we express the basic variables through the free ones; this gives the general solution.

Assigning particular numerical values to the free variables gives a particular solution.

Setting the free variables to zero gives the basic solution, which here is also a reference solution.

§5. Gauss method.

The Gauss method is a universal method for studying and solving arbitrary systems of linear equations. It consists of reducing the system to a diagonal (or triangular) form by sequentially eliminating unknowns using elementary transformations that do not violate the equivalence of systems. A variable is considered excluded if it is contained in only one equation of the system with a coefficient of 1.

The elementary transformations of a system are:

Multiplying an equation by a number other than zero;

Adding to an equation another equation multiplied by any number;

Rearranging equations;

Discarding an equation of the form 0 = 0.

Elementary transformations can be performed not on the equations themselves but on the rows of the extended matrix of the system; the resulting matrices correspond to equivalent systems.

Example.

Solution. Let us write down the extended matrix of the system:

.

Carrying out elementary transformations, we reduce the left part of the matrix to the identity form: ones on the main diagonal and zeros elsewhere.









Remark. If, in the course of elementary transformations, an equation of the form 0 = k (where k ≠ 0) is obtained, then the system is inconsistent.

The solution of systems of linear equations by the method of successive elimination of unknowns can conveniently be recorded in the form of tables.

The left column of the table contains information about excluded (basic) variables. The remaining columns contain the coefficients of the unknowns and the free terms of the equations.

The extended matrix of the system is written into the initial table. Then Jordan transformations are performed (a code sketch of one such transformation is given after the list):

1. Select a variable to be introduced into the basis. The corresponding column is called the key column. Choose the equation in which this variable will remain while being eliminated from the other equations; the corresponding row of the table is called the key row. The coefficient at the intersection of the key row and the key column is called the key (pivot) element.

2. The elements of the key row are divided by the key element.

3. The key column is filled with zeros.

4. The remaining elements are recalculated by the rectangle rule. Form a rectangle with the key element and the element being recalculated at opposite vertices; from the product of the elements on the diagonal containing the key element subtract the product of the elements on the other diagonal, and divide the difference by the key element.
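As an illustration, one Jordan transformation can be coded directly from steps 1-4 above; the sketch below is a minimal Python/NumPy version (the table data at the end are hypothetical and serve only as an example):

```python
import numpy as np

def jordan_step(T, r, c):
    """One Jordan transformation of table T with key (pivot) element T[r, c]."""
    T = np.asarray(T, dtype=float)
    key = T[r, c]
    if np.isclose(key, 0.0):
        raise ValueError("the key element must be non-zero")
    new = T.copy()
    new[r, :] = T[r, :] / key            # step 2: divide the key row by the key element
    for i in range(T.shape[0]):
        if i == r:
            continue
        new[i, c] = 0.0                  # step 3: zeros in the key column
        for j in range(T.shape[1]):
            if j != c:                   # step 4: rectangle rule
                new[i, j] = (T[i, j] * key - T[i, c] * T[r, j]) / key
    return new

# Hypothetical table [A | b] (illustrative data only)
T = [[2.0, 1.0, 1.0, 5.0],
     [1.0, 3.0, 2.0, 10.0]]
print(jordan_step(T, r=0, c=0))   # x1 becomes the basic variable of the first equation
```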

Example. Find the general solution and basic solution of the system of equations:

Solution.

General solution of the system:

Basic solution:
.

A single replacement (substitution) transformation makes it possible to pass from one basis of the system to another: one of the free variables is introduced into the basis in place of one of the basic variables. To do this, a key element is chosen in the column of that free variable and the transformations of the above algorithm are performed.

§6. Finding reference solutions

A reference solution of a system of linear equations is a basic solution that contains no negative components.

Reference solutions of the system are found by the Gauss method subject to the following conditions.

1. In the original system, all free terms must be non-negative: bi ≥ 0.

2. The key element is selected among the positive coefficients.

3. If the variable being introduced into the basis has positive coefficients in several rows, then the key row is the one in which the ratio of the free term to the positive coefficient is smallest.

Note 1. If, in the process of eliminating unknowns, an equation appears in which all coefficients are non-positive and the free term is positive, then the system has no non-negative solutions.

Note 2. If there is not a single positive element in the columns of coefficients for free variables, then transition to another reference solution is impossible.

Example.

Since not every basic solution is a reference solution, finding reference solutions of the system by the usual Gauss method involves extra work: one would have to find all basic solutions and select the reference ones among them. There is an algorithm that finds reference solutions directly.

  1. When filling out the original Gauss table, all free terms are made non-negative.
  2. The key element is selected in a special way:
    a) any column of coefficients of the unknowns that contains at least one positive element may be chosen as the key column;
    b) the key row is the one for which the ratio of the free term to the positive element of the key column is minimal.

At the intersection of a key row and a key column is the key element. Next, carry out the usual Jordan transformation.
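The choice of the key row described above can be expressed as a small helper function. A minimal Python sketch, assuming the free terms are stored in the last column of the table (the numerical data are hypothetical):

```python
import numpy as np

def choose_key_row(T, c):
    """Minimum-ratio rule: among the rows with a positive element in key column c,
    return the index of the row with the smallest (free term / coefficient) ratio."""
    T = np.asarray(T, dtype=float)
    best_row, best_ratio = None, np.inf
    for i in range(T.shape[0]):
        if T[i, c] > 0:
            ratio = T[i, -1] / T[i, c]   # free terms are kept in the last column
            if ratio < best_ratio:
                best_row, best_ratio = i, ratio
    return best_row                      # None: no positive element in the column (cf. Note 2)

# Hypothetical table (illustrative data only): the ratios are 4/1, 6/3, 8/2
T = [[1.0, 2.0, 1.0, 4.0],
     [3.0, 1.0, 0.0, 6.0],
     [2.0, 4.0, 5.0, 8.0]]
print(choose_key_row(T, c=0))            # -> 1 (the second row)
```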

Homogeneous systems of linear equations

Let a homogeneous system (8.1) be given.

Let us consider the corresponding inhomogeneous system (8.2).

Using the matrix of coefficients A, the column of unknowns `x and the column of free terms `b , these systems can be written in matrix form:

A`x = `0 , (8.3)

A`x = `b . (8.4)

The following properties of solutions of homogeneous and inhomogeneous systems hold.

Theorem 8.23. A linear combination of solutions to a homogeneous system (8.1) is a solution to a homogeneous system.

Proof. Let `x , `y and `z be solutions of the homogeneous system. Consider `t = α `x + β `y + γ `z , where α, β and γ are arbitrary numbers. Since `x , `y and `z are solutions, A`x = `0 , A`y = `0 and A`z = `0 . Let us find A`t .

A`t = A · (α `x + β `y + γ `z ) = A · α `x + A · β `y + A · γ `z =

= α A`x + β A`y + γ A`z = α `0 + β `0 + γ `0 = `0 .

A`t = `0 ⇒ `t is a solution of the homogeneous system.

Theorem 8.24. The difference between two solutions of the inhomogeneous system (8.2) is a solution to the homogeneous system (8.1).



Proof. Let `x and `y be solutions of the inhomogeneous system. Consider `t = `x - `y .

A`x = `b , A`y = `b ,

A`t = A (`x - `y ) = A`x - A`y = `b - `b = `0 .

A`t = `0 ⇒ `t = `x - `y is a solution of the homogeneous system.

Theorem 8.25. The sum of a solution of a homogeneous system with a solution of an inhomogeneous system is a solution of the inhomogeneous system.

Proof. Let `x be a solution of the homogeneous system and `y a solution of the inhomogeneous system. Let us show that `t = `x + `y is a solution of the inhomogeneous system.

A`x = `0 , A`y = `b ,

A`t = A (`x + `y ) = A`x + A`y = `0 + `b = `b .

A`t = `b ⇒ `t is a solution of the inhomogeneous system.


Consistency of a homogeneous system

Consider a homogeneous system

A homogeneous system is always consistent, since it has a trivial (zero) solution x 1 = x 2 = … = x n= 0. Let's find out when this system has a non-trivial solution.

Theorem 8.26. A homogeneous system has a nontrivial solution if and only if the rank of the matrix composed of the coefficients of the unknowns is less than the number of unknowns.

Proof. The system has a nontrivial solution if and only if there exist numbers s 1, s 2, …, s n, not all equal to zero, whose substitution into the system yields m identities. These m identities can be written as a vanishing linear combination of the columns of the matrix A.

Consequently, the system of column vectors of the matrix A is linearly dependent, and this is the case if and only if the rank of this system of column vectors is less than n, i.e. r(A) < n.

Consequence. A homogeneous system with a square matrix has a nontrivial solution if and only if the determinant of the matrix composed of the coefficients of the unknowns is equal to zero.

Proof. Since r(A) < n, the columns of the matrix are linearly dependent and, therefore, the determinant of the matrix equals zero.

General solution of a homogeneous system

System (8.1) always has a trivial solution. If the rank of the matrix composed of the coefficients of the unknowns is less than the number of unknowns, then system (8.1) has nontrivial solutions.

1) r(A) = n - system (8.1) has only the trivial solution x 1 = x 2 = … = x n = 0;

2) r(A) = r < n - system (8.1) has non-trivial solutions.

In the second case the number of free variables equals n - r, and the number of basic variables equals r. Giving the free variables arbitrary values, we obtain different solutions of system (8.1); that is, to any vector of dimension n - r

(c r + 1, c r + 2, …, c n)

will correspond to the solution of system (8.1)

(c 1, c 2, …, c r, c r + 1, …, c n).

Definition 8.51. A fundamental system of solutions to a homogeneous system (8.1) is a maximal linearly independent system of solutions to system (8.1). The fundamental system contains n-r linearly independent solutions of system (8.1).

To obtain a fundamental system of solutions, one takes a linearly independent system of n - r vectors in the (n - r)-dimensional space of free variables and uses them to construct the corresponding solutions of system (8.1). The resulting solutions form a fundamental system of solutions `x 1 , `x 2 , ..., `x n-r . Since this system is maximal, any solution of system (8.1) can be represented as a linear combination of solutions of the fundamental system: `x = α 1 `x 1 + α 2 `x 2 + ... + α n-r `x n-r . This expression is the general solution of the homogeneous system (8.1).
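A fundamental system of solutions can also be computed symbolically. A minimal SymPy sketch, assuming a hypothetical homogeneous system with n = 4 unknowns and rank r = 2 (illustrative data only):

```python
import sympy as sp

# Hypothetical homogeneous system A x = 0 with r = 2, n = 4 (illustrative data only)
A = sp.Matrix([[1, 2, -1, 3],
               [2, 4,  1, 0]])

fundamental = A.nullspace()   # n - r = 2 linearly independent solution vectors
for v in fundamental:
    print(v.T)

# Every solution of A x = 0 is a linear combination alpha_1*v_1 + alpha_2*v_2
# of the vectors of this fundamental system.
```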

Example 8.25.

x 1 , x 2 - basic, x 3 , x 4 , x 5- free. The last two equations are linearly expressed through the first two, so they can be discarded:

3x 1 + x 2 - 8x 3 + 2x 4 + x 5 = 0,
2x 1 - 2x 2 - 3x 3 - 7x 4 + 2x 5 = 0.

Let us express the basic variables.

Multiply the first equation by 2 and add it to the second. Divide the result by 8.

Multiply the first equation by 2, the second by -3 and add the resulting equations. Divide the result by 8.

As the values ​​of the free variables, we take the coordinates of the vectors of the three-dimensional unit basis.

`x = α 1 `x 1 + α 2 `x 2 + α 3 `x 3 is the general solution of the homogeneous system.

The sum of the general solution of the homogeneous system (8.1) with any solution of the inhomogeneous system (8.2) is the general solution of the inhomogeneous system (8.2).

Application of linear algebra in economics

Production figures

The company produces four types of products every day, the main production and economic indicators of which are given in the following table.

The following daily indicators need to be determined: the raw material consumption S, the working time costs T and the value P of the enterprise's output.

Based on the given data, we will create four vectors that characterize the entire production cycle:

`q = (20, 50, 30, 40) - assortment vector;

`s = (5, 2, 7, 4) - vector of raw material consumption;

`t = (10, 5, 15, 8) - vector of working time costs;

`p = (30, 15, 45, 20) - price vector.

Then the required quantities will be the corresponding scalar products of the assortment vector and three other vectors:

S = `q`s = 100 + 100 + 210 + 160 = 570 kg,

T = `q`t = 1220 h, P = `q`p = 3500 monetary units.
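These three scalar products are easy to check directly; the short Python sketch below uses only the vectors given above:

```python
import numpy as np

q = np.array([20, 50, 30, 40])   # assortment vector
s = np.array([5, 2, 7, 4])       # raw material consumption per unit of product
t = np.array([10, 5, 15, 8])     # working time per unit of product
p = np.array([30, 15, 45, 20])   # price per unit of product

print(q @ s)   # S = 570 (kg of raw material)
print(q @ t)   # T = 1220 (hours of working time)
print(q @ p)   # P = 3500 (monetary units)
```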

Raw material consumption

The company produces four types of products using four types of raw materials. Raw material consumption rates are given as the elements of a matrix A, whose rows correspond to product types and whose columns correspond to raw material types 1-4.

It is required to find the costs of each type of raw material for a given production plan for each type of product: 60, 50, 35 and 40 units, respectively.

Let's draw up a vector plan for production:

`q = (60, 50, 35, 40).

Then the solution to the problem is given by a cost vector whose coordinates are the costs of raw materials for each type of raw material; this cost vector is calculated as the product of the vector `q by the matrix A:

End product of the industry

The industry consists of n enterprises, each producing one type of product; denote the output volume of the i-th enterprise by x i. To support its own production, each enterprise consumes part of the products produced by itself and by the other enterprises of the industry. Let a ij be the share of the output of the i-th enterprise consumed by the j-th enterprise to produce its output x j. Let us find the quantity y i of products of the i-th enterprise intended for sale outside the industry (the volume of the final product). This value can be calculated by the formula

y i = x i - (a i1 x 1 + a i2 x 2 + ... + a in x n).

Let us introduce a square matrix of order n describing the internal consumption of the industry:

A = (a ij); i,j = 1, 2, ..., n.

Then the final product vector is a solution of the matrix equation

`x - A`x = `y ,

or, using the identity matrix E,

(E - A)`x = `y .
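Numerically, the final product vector is obtained by multiplying (E - A) by the output vector. A minimal Python sketch with hypothetical data (the figures of Example 8.26 are not reproduced in the text):

```python
import numpy as np

# Hypothetical data for a three-enterprise industry (not the figures of Example 8.26)
x = np.array([100.0, 80.0, 60.0])      # industry output vector
A = np.array([[0.1, 0.2, 0.0],         # internal consumption matrix (a_ij)
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])

E = np.eye(3)
y = (E - A) @ x                        # final product vector: (E - A) x = y
print(y)
```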

Example 8.26. Let the vector of industry output and the matrix of domestic consumption have the form, respectively

For an industry consisting of three enterprises, we obtain the vector of volumes of the final product intended for sale outside the industry:

Product Output Forecast

Let C = (c ij), i = 1, 2, ..., m, j = 1, 2, ..., n, be the matrix of consumption of raw materials of m types in the output of products of n types. Then, for known stocks of each type of raw material, which form the vector

`q = (q 1 , q 2 , ..., q m)

the production plan vector `x = (x 1 , x 2 , ..., x n) is determined from the solution of the system of m equations with n unknowns

C `x T = `q T ,

where the index "T" denotes transposition of a row vector into a column vector.
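When C is square and non-singular, the plan is found by solving this system directly. A minimal Python sketch with hypothetical cost and stock data (not the figures of Example 8.27):

```python
import numpy as np

# Hypothetical cost matrix (row = raw material type, column = product type)
# and stock vector; illustrative data only
C = np.array([[2.0, 1.0, 3.0],
              [1.0, 2.0, 1.0],
              [3.0, 1.0, 2.0]])
q = np.array([850.0, 750.0, 900.0])

x = np.linalg.solve(C, q)   # production plan from C x^T = q^T
print(x)
```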

Example 8.27. The company produces three types of products using three types of raw materials. The necessary production characteristics are presented by the following data:

(Table: for each type of raw material, the consumption of raw material per unit of each type of product, in weight units per unit, and the stock of raw material, in weight units.)

It is required to determine the volume of output of each type of product with given reserves of raw materials.

Problems of this kind are typical in forecasting and assessing the operation of enterprises, in expert evaluation of projects for developing mineral deposits, and in enterprise-level microeconomic planning.

Let us denote the unknown production volumes by x 1, x 2 and x 3. Then, provided that the stock of each type of raw material is consumed completely, we can write down balance relations that form a system of three equations with three unknowns:

Solving this system of equations in any way, we find that with given reserves of raw materials, production volumes will be for each type, respectively (in conventional units):

x 1 = 150, x 2 = 250, x 3 = 100.

Consider an arbitrary system

To find a general method of investigating and solving such a system, we first consider a special case of it. A system of the form

(16.2)

is called a system in basic form.

Some of the unknowns are called free, and the rest are called non-free, or basic, unknowns. In general, the choice of basic and free unknowns can be made in different ways.

System (16.1) has a solution if and only if it can be written in basic form.

Let us transfer all free unknowns to the right-hand sides of the equations of system (16.2). Then we get

(16.3)

If the free unknowns are given specific numerical values, the basic unknowns can be calculated from formulas (16.3). Thus, a system in basic form (16.2) always has a solution. The following cases are possible.

1). m=n, that is, the number of equations is equal to the number of unknowns. In this case, all variables are basic. The system has the form

and is definite because it has a single, obvious solution. The matrix of such a system is the identity matrix

.

2) m < n. Then a system with an extended matrix of the form

(16.4)

has infinitely many solutions, since for each numerical set of free unknowns, the basic unknowns receive certain values ​​according to formulas (16.3).

A collection of n values of the unknowns related by relations (16.3), where the free unknowns can take any numerical values, is called the general solution of system (16.2), or the solution in basic form. A particular solution is any solution obtained from the general solution for specific numerical values of the free unknowns.

Conclusion: a system with a basis is always consistent. Moreover, it is definite if all its unknowns are basic, and indefinite if, in addition to the basic ones, there is at least one free unknown.

17. Gauss method.

Let us now consider the general method for studying and solving systems of the form (16.1), which is called the Gauss method. It consists in transforming this system to an equivalent system with a basis, for which the question of solutions was discussed in the previous section 16.

The Gauss method is reduced to the sequential elimination of unknowns and is based on the application of elementary transformations that lead to an equivalent system. Elementary transformations include:

1) swapping positions of equations in the system;

2) multiplying the equation by a constant number other than zero;

3) adding to the equation another equation, previously multiplied by an arbitrary number;

4) discarding or adding an equation of the form 0 = 0 (we will call such an equation an extra equation).

An equation of the form 0 = b, where b ≠ 0, will be called a contradictory equation. If a contradictory equation is obtained as a result of elementary transformations, then the system is inconsistent.

For ease of recording, instead of the entire system of equations, we will write only the extended matrix of coefficients, separating the column of the right sides with a vertical bar

. (17.1)

Elementary transformations for equivalent systems generate admissible transformations for matrices. Thus, in the matrix you can:

1) swap lines;

2) multiply any string by a number other than zero;

3) add to the line any other line multiplied by any number;

4) discard the zero row, that is, the row of coefficients of the extra equation.

The universal Gaussian method has several computational schemes. Let's look at a single division scheme here. Its idea is to use elementary transformations to reduce matrix (17.1) to the form

(17.2)

or to obtain a contradictory row (0 0 … 0 | b), where b ≠ 0, that is, to establish that the system is inconsistent. If no contradictions arise, then the system is consistent and its solutions can be sought. Thus, the method consists of two stages.

The first stage is the so-called “direct move”. Its goal is to transform the matrix to a form where there are 1s on the main diagonal and 0s below the main diagonal. To do this, we sequentially perform the following steps.

1st step. Let us call the element in the upper left corner of the matrix the leading element, and the row containing it the leading row. We transform the matrix so that the leading element equals 1. If there is a 1 somewhere in the left column, we bring that row to the top by swapping rows. If not, we rearrange the rows so that the leading element is non-zero and divide the leading row by the leading element. We get the matrix

2nd step – producing zeros in the left column under the leading element equal to 1. To do this, to each i-th row we add the leading row, previously multiplied by the first element of the i-th row taken with the opposite sign. For example, we multiply the leading row by the first element of the second row taken with the opposite sign and add the result to the second row.


If during these transformations a zero row is obtained, it should be discarded. If a contradictory row is obtained, then the system has no solutions. If there are no contradictions, then the result is a matrix that may have fewer rows than the original one. It looks like:

3rd step – mentally separate the row and column containing the leading element. In them, the “direct move” is completed. In the matrix remaining inside the dotted lines, we again select the leading element and repeat the entire procedure, starting from the 1st step.

If the new leading element and all the elements below it are zeros, then we can swap columns of the entire matrix so that the new leading element is equal to 1 or at least non-zero. This can always be done (otherwise the leading row is either extra or contradictory). However, this amounts to a renumbering of the variables, which must be noted in the scheme.

We get the matrix

, (17.3)

if m < n, or, when m = n, the matrix

. (17.4)

Stage 2 – the "reverse move". At this stage, zeros are produced above the main diagonal of matrix (17.3) or (17.4), moving along it in the opposite direction: upward and to the left. This produces a matrix of the form (16.4). The solution of a system with such a matrix was discussed in Section 16.

It is not difficult to solve systems with a matrix of the form (17.3) or (17.4), which are obtained as a result of the “direct move”. We solve such a system, starting with the last equation and substituting the found unknowns into the previous equations.
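Back substitution for a matrix of the form (17.3) or (17.4) takes only a few lines. A minimal Python sketch on a hypothetical triangular system (illustrative data only):

```python
import numpy as np

def back_substitution(U, c):
    """Solve U x = c for an upper-triangular U with non-zero diagonal entries."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                       # last equation first
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Hypothetical triangular system left after the "direct move" (illustrative data only)
U = np.array([[1.0, 2.0, -1.0],
              [0.0, 1.0,  3.0],
              [0.0, 0.0,  1.0]])
c = np.array([2.0, 5.0, 1.0])
print(back_substitution(U, c))   # [-1.  2.  1.]
```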

Example 17.1. Solve the system of equations using the Gauss method:

"Direct move":


“Reverse”: we write the resulting extended matrix in the form of a system of equations

We solve this system starting from the last equation. Substituting the value found from the last equation into the second equation, we obtain the value of the next unknown. Then we substitute the found values of the variables into the first equation to find the remaining unknown.

Answer:

The reverse move can also be written in matrix form. To do this, zeros are produced above the 1s, starting from the lower right corner and moving upward along the main diagonal.

The form of the resulting matrix allows us to conclude that the system specified in this example is consistent and definite.

Let us now give an example of an inconsistent system.

Example 17.2: Solve the system

Solution: Let us write down the extended matrix of this system. In its upper left corner there is a 1. To produce zeros under this 1, multiply the first row by -2 and add it to the second row. Then multiply the first row by -3 and add it to the third row.

Having produced the zeros in the first column, we mentally discard the first row and the first column and continue the forward move in the matrix located inside the dotted line. The transformations result in a contradictory row

and therefore the system is inconsistent and has no solution.

Answer: the system has no solution.

It should be noted that using the Gaussian scheme it is possible to solve simultaneously several systems with the same left sides and different right sides. Let's give an example.

Example 17.3: Solve two systems

Solution."Direct move":



"Reverse":

Answer:

18. Finding a solution in basic form.

The Gaussian scheme allows one to determine at the first stage whether the system is consistent. If the system is consistent (there are no contradictory rows), then from the form of the matrix at the end of the "direct move" one can judge whether it is definite (a square matrix) or indefinite (the number of rows is less than the number of columns).

Example 18.1 Use the Gaussian method to solve the system and present its solution in basic form:

Solution. Let's write out the extended matrix of the system and perform the first stage of the Gaussian scheme - “direct motion”.

The forward move is completed. The number of rows is less than the number of columns, which means the system is indefinite and has infinitely many solutions.

"Reverse move." Let us now write out an equivalent system with a new matrix.

Let us move the terms with the variable to the right side of the equalities. We get

Substituting the value x from the last equation to the previous one, we get

Therefore, the solution to the system is the set

where is a free variable, and are the basic variables.

Example 18.2. Use the Gauss method to solve a homogeneous system and present its solution in basic form:

Solution: For a homogeneous system the column of free terms is zero, so one writes out not the extended matrix but the ordinary matrix of the system.

"Direct move":

"Reverse":

where is a free variable, and are the basic variables.

Let's do a check here, that is, substitute the found solution into the original system. Then we have:

§19. Calculation of the inverse matrix using the Gaussian scheme.

Let A be a non-singular square matrix. Then the inverse matrix A⁻¹ exists. Denote the j-th column of the inverse matrix by x j . By definition A·A⁻¹ = E, so A x j equals the j-th column of the identity matrix E.

Hence, to find the j-th column of the inverse matrix, it is necessary to solve the system

(19.1)

To find the entire inverse matrix, it is necessary to solve n systems of the form (19.1) with identical left-hand sides and different right-hand sides, each consisting of zeros and a single 1 in the j-th row.

Thus, the extended matrix has the form (A | E).
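The whole computation can be carried out as Gauss-Jordan elimination of the block matrix (A | E): when the left block has been reduced to the identity matrix, the right block is A⁻¹. A minimal Python sketch with hypothetical data:

```python
import numpy as np

def inverse_by_gauss_jordan(A):
    """Compute A^{-1} by reducing the extended matrix (A | E) so that the
    left block becomes the identity matrix."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    T = np.hstack([A, np.eye(n)])
    for c in range(n):
        r = c + int(np.argmax(np.abs(T[c:, c])))   # pick a non-zero leading element
        if np.isclose(T[r, c], 0.0):
            raise ValueError("matrix is singular")
        T[[c, r]] = T[[r, c]]                      # swap rows if necessary
        T[c] /= T[c, c]                            # make the leading element 1
        for i in range(n):
            if i != c:
                T[i] -= T[i, c] * T[c]             # zeros above and below the leading element
    return T[:, n:]

# Hypothetical non-singular matrix (illustrative data only)
A = [[2.0, 1.0],
     [5.0, 3.0]]
print(inverse_by_gauss_jordan(A))   # [[ 3. -1.] [-5.  2.]], agrees with np.linalg.inv(A)
```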

Example 19.1. Using the Gaussian method, find the matrix inverse of the matrix . Using the found inverse matrix, solve the system

Solution. Let's create an extended matrix and perform a “direct move”.

Example 1. Find a general solution and some particular solution of the system

Solution. Let us write out the extended and the main matrices:

The main matrix A is separated by a dotted line. We write the unknowns of the system above the columns, keeping in mind a possible rearrangement of the terms in the equations of the system. By determining the rank of the extended matrix, we simultaneously find the rank of the main one. In matrix B the first and second columns are proportional. Of two proportional columns only one can enter a basis minor, so let us move, for example, the first column beyond the dotted line with the opposite sign. For the system this means transferring the terms with x 1 to the right-hand sides of the equations.

Let's reduce the matrix to triangular form. We will work only with rows, since multiplying a matrix row by a number other than zero and adding it to another row for the system means multiplying the equation by the same number and adding it with another equation, which does not change the solution of the system. We work with the first row: multiply the first row of the matrix by (-3) and add to the second and third rows in turn. Then multiply the first line by (-2) and add it to the fourth.

The second and third lines are proportional, therefore, one of them, for example the second, can be crossed out. This is equivalent to crossing out the second equation of the system, since it is a consequence of the third.

Now we work with the second line: multiply it by (-1) and add it to the third.

The minor outlined with a dotted line has the highest possible order and is nonzero (it is equal to the product of the elements on the main diagonal), and it belongs both to the main matrix and to the extended one; therefore rang A = rang B = 3.
This minor is a basis minor. It includes the coefficients of the unknowns x 2 , x 3 , x 4 , which means that the unknowns x 2 , x 3 , x 4 are dependent (basic) and x 1 , x 5 are free.
Let us transform the matrix, leaving only the basis minor on the left (which corresponds to step 4 of the solution algorithm above).

The system with the coefficients of this matrix is ​​equivalent to the original system and has the form

Using the method of eliminating unknowns we find:
, ,

We obtained relations expressing the dependent variables x 2, x 3, x 4 through the free ones x 1 and x 5, that is, we found a general solution:

By assigning any values ​​to the free unknowns, we obtain any number of particular solutions. Let's find two particular solutions:
1) let x 1 = x 5 = 0, then x 2 = 1, x 3 = -3, x 4 = 3;
2) put x 1 = 1, x 5 = -1, then x 2 = 4, x 3 = -7, x 4 = 7.
Thus, two solutions were found: (0,1,-3,3,0) – one solution, (1,4,-7,7,-1) – another solution.

Example 2. Explore compatibility, find a general and one particular solution to the system

Solution. Let us swap the first and second equations so that the first equation has a coefficient equal to 1, and write down the matrix B.

We get zeros in the fourth column by operating with the first row:

Now we get the zeros in the third column using the second line:

The third and fourth lines are proportional, so one of them can be crossed out without changing the rank:
Multiply the third line by (–2) and add it to the fourth:

We see that the ranks of the main and extended matrices are equal to 4, and the rank coincides with the number of unknowns, therefore, the system has a unique solution:
;
x 4 = 10- 3x 1 – 3x 2 – 2x 3 = 11.

Example 3. Examine the system for compatibility and find a solution if it exists.

Solution. We compose an extended matrix of the system.

We rearrange the first two equations so that there is 1 in the upper left corner:
Multiplying the first line by (-1), adding it to the third:

Multiply the second line by (-2) and add it to the third:

The system is inconsistent, since in the main matrix we received a row consisting of zeros, which is crossed out when the rank is found, but in the extended matrix the last row remains, that is, r B > r A .

Exercise. Investigate this system of equations for consistency and solve it using matrix calculus.
Solution

Example. Prove the consistency of the system of linear equations and solve it in two ways: 1) by the Gauss method; 2) by Cramer's method.
Solution
Answer: 2,-1,3.

Example. A system of linear equations is given. Prove its consistency. Find a general solution of the system and one particular solution.
Solution
Answer: x 3 = - 1 + x 4 + x 5 ; x 2 = 1 - x 4 ; x 1 = 2 + x 4 - 3x 5

Exercise. Find the general and particular solutions of each system.
Solution. We study this system using the Kronecker-Capelli theorem.
Let's write out the extended and main matrices:

1 1 14 0 2 0
3 4 2 3 0 1
2 3 -3 3 -2 1
x 1  x 2  x 3  x 4  x 5

Here the first five columns form the matrix A; the last column contains the free terms.
Let's reduce the matrix to triangular form. We will work only with rows, since multiplying a matrix row by a number other than zero and adding it to another row for the system means multiplying the equation by the same number and adding it with another equation, which does not change the solution of the system.
Let's multiply the 1st line by (3). Multiply the 2nd line by (-1). Let's add the 2nd line to the 1st:
0 -1 40 -3 6 -1
3 4 2 3 0 1
2 3 -3 3 -2 1

Let's multiply the 2nd line by (2). Multiply the 3rd line by (-3). Let's add the 3rd line to the 2nd:
0 -1 40 -3 6 -1
0 -1 13 -3 6 -1
2 3 -3 3 -2 1

Multiply the 2nd line by (-1). Let's add the 2nd line to the 1st:
0 0 27 0 0 0
0 -1 13 -3 6 -1
2 3 -3 3 -2 1

The selected minor has the highest possible order and is non-zero (it is equal to the product of the elements on the secondary diagonal), and it belongs both to the main matrix and to the extended one, therefore rang(A) = rang(B) = 3. Since the rank of the main matrix equals the rank of the extended matrix, the system is consistent.
This minor is a basis minor. It includes the coefficients of the unknowns x 1 , x 2 , x 3 , which means that the unknowns x 1 , x 2 , x 3 are dependent (basic) and x 4 , x 5 are free.
Let's transform the matrix, leaving only the basis minor on the left.
0 0 27 0 0 0
0 -1 13 -1 3 -6
2 3 -3 1 -3 2
x 1  x 2  x 3  x 4  x 5
The system with the coefficients of this matrix is ​​equivalent to the original system and has the form:
27x 3 = 0
- x 2 + 13x 3 = - 1 + 3x 4 - 6x 5
2x 1 + 3x 2 - 3x 3 = 1 - 3x 4 + 2x 5
Using the method of eliminating unknowns we find:
We obtained relations expressing the dependent variables x 1 , x 2 , x 3 through the free ones x 4 , x 5 , that is, we found the general solution:
x 3 = 0
x 2 = 1 - 3x 4 + 6x 5
x 1 = - 1 + 3x 4 - 8x 5
The system is indefinite, since it has more than one solution.

Exercise. Solve the system of equations.
Answer: x 2 = 2 - 1.67x 3 + 0.67x 4
x 1 = 5 - 3.67x 3 + 0.67x 4
By assigning any values to the free unknowns, we obtain any number of particular solutions. The system is indefinite.

In general, a linear equation has the form

a1 x1 + a2 x2 + … + an xn = b.

The equation has a solution if at least one of the coefficients of the unknowns is non-zero. In this case, any n-dimensional vector is called a solution of the equation if, when its coordinates are substituted, the equation turns into an identity.

General characteristics of the resolved system of equations

Example 20.1

Describe the system of equations.

Solution:

1. Is there a contradictory equation in the system? (If all the coefficients of the unknowns in an equation are zero while the free term is not, the equation has the form 0 = b with b ≠ 0 and is called contradictory.)

  • If a system contains a contradictory equation, then it is inconsistent and has no solutions.

2. Find all resolved unknowns. (An unknown is called resolved for a system of equations if it appears in one equation of the system with coefficient +1 and does not appear in the remaining equations, i.e. appears in them with coefficient zero.)

3. Is the system of equations resolved? (A system of equations is called resolved if every equation of the system contains a resolved unknown and no two of these unknowns coincide.)

The resolved unknowns, taken one from each equation of the system, form a complete set of resolved unknowns of the system.

Resolved unknowns included in the complete set are also called basic, and those not included in the set are called free.

In the general case, the resolved system of equations has the form:

At this stage, the main thing is to understand what a resolved unknown is (both those included in the basis and the free ones).

General, particular and basic solutions

The general solution of a resolved system of equations is the set of expressions of the resolved unknowns in terms of the free terms and the free unknowns:

A particular solution is a solution obtained from the general solution for specific numerical values of the free unknowns.

A basic solution is a particular solution obtained from the general one for zero values of the free variables.

  • A basic solution (vector) is called degenerate if the number of its non-zero coordinates is less than the number of resolved unknowns.
  • A basic solution is called non-degenerate if the number of its non-zero coordinates is equal to the number of resolved unknowns of the system included in the complete set.

Theorem (1)

A resolved system of equations is always consistent (because it has at least one solution). Moreover, if the system has no free unknowns (that is, all resolved unknowns are included in the basis), then it is definite (has a unique solution); if there is at least one free unknown, then the system is indefinite (has infinitely many solutions).

Example 1. Find the general, basic and any particular solution to the system of equations:

Solution:

1. Check whether the system is resolved.

  • The system is resolved (since each equation contains a resolved unknown).

2. We include the resolved unknowns in the complete set, one from each equation.

3. We write down the general solution according to which resolved unknowns were included in the set.

4. Find a particular solution. To do this, assign arbitrary numbers to the free variables not included in the set.

Answer: a particular solution (one of the possible ones).

5. Find the basic solution. To do this, set the free variables not included in the set equal to zero.

Elementary transformations of linear equations

Systems of linear equations are reduced to equivalent resolved systems using elementary transformations.

Theorem (2)

If any equation of the system is multiplied by some nonzero number, and the remaining equations are left unchanged, then we obtain a system equivalent to the given one. (That is, if the left and right sides of an equation are multiplied by the same number, an equation equivalent to it is obtained.)

Theorem (3)

If to any equation of the system another equation of the system is added, and all other equations are left unchanged, then we obtain a system equivalent to the given one. (That is, adding two equations, by adding their left and right sides, yields an equation equivalent to the given ones.)

Corollary of Theorems (2 and 3)

If to any equation of the system we add another equation multiplied by some number, and leave all other equations unchanged, then we obtain a system equivalent to the given one.

Formulas for recalculating system coefficients

If we have a system of equations and we want to transform it into a resolved system of equations, the Jordan-Gauss method will help us with this.

A Jordan transformation with a chosen resolving element makes it possible to obtain a resolved unknown in the corresponding equation of the system (Example 2).

The Jordan transformation consists of elementary transformations of two types:

Suppose we want to make some unknown in one of the equations a resolved unknown. To do this, we divide that equation by the coefficient of this unknown, so that the coefficient becomes equal to 1.

Example 2. Let us recalculate the system coefficients.

When the equation with a given number is divided by the resolving element, its coefficients are recalculated using the formulas:

To eliminate the chosen unknown from another equation, the equation containing the resolving element is multiplied by a suitable factor and added to that equation.

Theorem (4) On reducing the number of equations of the system.

If a system of equations contains a trivial equation, then it can be excluded from the system, and a system equivalent to the original one will be obtained.

Theorem (5) On the inconsistency of a system of equations.

If a system of equations contains a contradictory equation, then it is inconsistent.

Jordan-Gauss method algorithm

The algorithm for solving systems of equations by the Jordan-Gauss method consists of a number of similar steps, at each of which actions are performed in the following order (a code sketch of the full loop is given after the list):

  1. Check whether the system is inconsistent. If the system contains a contradictory equation, then it is inconsistent.
  2. Check whether the number of equations can be reduced. If the system contains a trivial equation, it is crossed out.
  3. If the system of equations is resolved, write down the general solution of the system and, if necessary, particular solutions.
  4. If the system is not resolved, then in an equation that does not contain a resolved unknown a resolving element is selected, and a Jordan transformation is performed with this element.
  5. Then return to step 1.
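As a rough illustration of this loop, the following Python sketch performs Jordan transformations until every remaining equation has its own resolved unknown, raising an error if a contradictory equation appears and dropping trivial ones. The data and the simplified pivot choice (the first suitable non-zero element) are assumptions made only for this example:

```python
import numpy as np

def jordan_gauss(T, eps=1e-12):
    """Reduce the table T = [A | b] to resolved form by Jordan transformations.

    Returns the reduced table and the list of (row, column) pivots, or raises
    ValueError if a contradictory equation 0 = k, k != 0, appears."""
    T = np.asarray(T, dtype=float)
    pivots = []
    row = 0
    while row < T.shape[0]:
        if np.all(np.abs(T[row, :-1]) < eps):              # trivial or contradictory row
            if abs(T[row, -1]) > eps:
                raise ValueError("contradictory equation: the system is inconsistent")
            T = np.delete(T, row, axis=0)                   # drop the trivial equation
            continue
        col = int(np.argmax(np.abs(T[row, :-1]) > eps))     # first usable resolving element
        T[row] /= T[row, col]                               # make the resolving element 1
        for i in range(T.shape[0]):
            if i != row:
                T[i] -= T[i, col] * T[row]                  # eliminate it from other equations
        pivots.append((row, col))
        row += 1                                            # continue with the next equation
    return T, pivots

# Hypothetical system (illustrative data only):
#   x1 + x2 + x3 = 6,  2*x1 - x2 + x3 = 3,  3*x1 + 2*x3 = 9 (a dependent equation)
T = [[1.0,  1.0, 1.0, 6.0],
     [2.0, -1.0, 1.0, 3.0],
     [3.0,  0.0, 2.0, 9.0]]
reduced, pivots = jordan_gauss(T)
print(reduced)   # resolved form: x1 and x2 are basic, x3 is free
print(pivots)
```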
Example 3 Solve a system of equations using the Jordan-Gauss method.

Find: two general and two corresponding basic solutions

Solution:

The calculations are shown in the table below:

To the right of the table are actions on equations. The arrows indicate to which equation the equation with the resolving element is added, multiplied by a suitable factor.

The first three rows of the table contain the coefficients of the unknowns and the right-hand sides of the original system. The results of the first Jordan transformation, with a resolving element equal to one, are given in rows 4, 5 and 6. The results of the second Jordan transformation, with a resolving element equal to (-1), are given in rows 7, 8 and 9. Since the third equation is trivial, it can be dropped from consideration.


