Linear dependence and independence of vectors. Geometric criteria for the linear dependence of two and three vectors

A necessary condition for the linear dependence of n functions.

Let the functions y1(x), …, yn(x) have derivatives up to order n − 1 on the interval (a, b).

Consider the determinant

$$W(x) = \begin{vmatrix} y_1(x) & y_2(x) & \cdots & y_n(x) \\ y_1'(x) & y_2'(x) & \cdots & y_n'(x) \\ \vdots & \vdots & & \vdots \\ y_1^{(n-1)}(x) & y_2^{(n-1)}(x) & \cdots & y_n^{(n-1)}(x) \end{vmatrix}. \qquad (1)$$

W(x) is called the Wronskian (Wronski determinant) of the functions y1, …, yn.

Theorem 1. If the functions y1, …, yn are linearly dependent on the interval (a, b), then their Wronskian W(x) is identically zero on this interval.

Proof. By the hypothesis of the theorem, the relation

α1·y1(x) + α2·y2(x) + … + αn·yn(x) ≡ 0 on (a, b),   (2)

holds, where not all of the αi are zero. Let αn ≠ 0. Then

yn(x) = −(α1/αn)·y1(x) − … − (αn−1/αn)·yn−1(x).   (3)

We differentiate this identity n − 1 times and substitute the resulting expressions for yn, yn′, …, yn^(n−1) into the Wronskian, obtaining the determinant (4).

In the determinant (4), the last column is a linear combination of the previous n − 1 columns, and such a determinant equals zero. Hence W(x) = 0 at all points of the interval (a, b).

Theorem 2. If the functions y1,…, yn are linearly independent solutions of the equation L[y] = 0, all of whose coefficients are continuous in the interval (a, b), then the Wronskian of these solutions is nonzero at each point of the interval (a, b).

Proof. Assume the contrary: there is a point x0 ∈ (a, b) where W(x0) = 0. Consider the system of n linear equations in the unknowns c1, …, cn:

$$\begin{cases} c_1 y_1(x_0) + \dots + c_n y_n(x_0) = 0, \\ c_1 y_1'(x_0) + \dots + c_n y_n'(x_0) = 0, \\ \dots \\ c_1 y_1^{(n-1)}(x_0) + \dots + c_n y_n^{(n-1)}(x_0) = 0. \end{cases} \qquad (5)$$

The determinant of system (5) is W(x0) = 0, so the system has a nonzero solution c1, …, cn, with not all ci equal to zero.   (6)

Let us form the linear combination of the solutions y1, …, yn:

Y(x) = c1·y1(x) + … + cn·yn(x).

Y(x) is a solution of the equation L[y] = 0. In addition, by (5), Y(x0) = Y′(x0) = … = Y^(n−1)(x0) = 0. By virtue of the uniqueness theorem, the solution of the equation L[y] = 0 with zero initial conditions can only be the zero solution, i.e. Y(x) ≡ 0.

We obtain the identity c1·y1(x) + … + cn·yn(x) ≡ 0, where not all ci are zero, which means that y1, …, yn are linearly dependent, contradicting the hypothesis of the theorem. Consequently, there is no point where W(x0) = 0.

Based on Theorem 1 and Theorem 2, the following statement can be formulated. In order for n solutions of the equation L[y] = 0 to be linearly independent in the interval (a, b), it is necessary and sufficient that their Wronskian does not vanish at any point in this interval.

The following obvious properties of the Wronskian also follow from the proven theorems.

  1. If the Wronskian of n solutions to the equation L[y] = 0 is equal to zero at one point x = x0 from the interval (a, b), in which all coefficients pi(x) are continuous, then it is equal to zero at all points of this interval.
  2. If the Wronskian of n solutions to the equation L[y] = 0 is nonzero at one point x = x0 from the interval (a, b), then it is nonzero at all points of this interval.

Thus, for n solutions of the equation L[y] = 0 to be linearly independent on the interval (a, b), in which the coefficients pi(x) of the equation are continuous, it is necessary and sufficient that their Wronskian be nonzero at least at one point of this interval.
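To see the two statements in action, here is a minimal computational sketch (an added illustration, not part of the original notes; it assumes Python with sympy, whose built-in wronskian() evaluates determinant (1)):

```python
# A minimal sketch (added for illustration): checking the Wronskian
# criterion with sympy's built-in wronskian() helper.
import sympy as sp

x = sp.symbols('x')

# sin^2 x, cos^2 x and 1 are linearly dependent (sin^2 + cos^2 - 1 = 0):
dependent = [sp.sin(x)**2, sp.cos(x)**2, sp.Integer(1)]
print(sp.simplify(sp.wronskian(dependent, x)))   # 0 identically (Theorem 1)

# 1, x, x^2 are linearly independent solutions of y''' = 0:
independent = [sp.Integer(1), x, x**2]
print(sp.wronskian(independent, x))              # 2, nonzero everywhere (Theorem 2)
```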

1. A necessary and sufficient condition for the linear dependence of two vectors is their collinearity.

2. The scalar (dot) product is an operation on two vectors whose result is a scalar (a number) that does not depend on the coordinate system and characterizes the lengths of the factor vectors and the angle between them. The operation amounts to multiplying the length of a given vector x by the projection of another vector y onto x. This operation is commutative and linear in each factor.

Properties of the dot product:

  • commutativity: (x, y) = (y, x);
  • linearity in each factor: (λx + μy, z) = λ(x, z) + μ(y, z);
  • (x, x) = |x|² ≥ 0, and (x, x) = 0 only for x = 0;
  • for nonzero x and y, (x, y) = 0 if and only if x ⊥ y.

3. Three (or more) vectors are called coplanar if, when brought to a common origin, they lie in the same plane.

A necessary and sufficient condition for the linear dependence of three vectors is their coplanarity. Any four vectors are linearly dependent. A basis in space is any ordered triple of non-coplanar vectors. A basis in space allows each vector to be uniquely associated with an ordered triple of numbers: the coefficients of the representation of this vector as a linear combination of the basis vectors. Conversely, using a basis we associate a vector with each ordered triple of numbers by forming the corresponding linear combination.

An orthogonal basis is called orthonormal if its vectors have unit length. For an orthonormal basis in space the notation i, j, k is often used. Theorem: in an orthonormal basis, the coordinates of a vector are the orthogonal projections of this vector onto the directions of the coordinate vectors.

A triple of non-coplanar vectors a, b, c is called right-handed if, to an observer at their common origin, traversal of the ends of the vectors a, b, c in the given order appears to proceed clockwise. Otherwise a, b, c is a left-handed triple. All right-handed (or all left-handed) triples of vectors are said to be identically oriented.

A rectangular coordinate system in the plane is formed by two mutually perpendicular coordinate axes OX and OY. The coordinate axes intersect at the point O, called the origin, and a positive direction is chosen on each axis. In a right-handed coordinate system the positive directions of the axes are chosen so that when the axis OY points up, the axis OX points to the right.

The four angles (I, II, III, IV) formed by the coordinate axes X′X and Y′Y are called coordinate angles or quadrants (see Fig. 1).

If, relative to an orthonormal basis in the plane, the vectors a and b have coordinates a = (a1; a2) and b = (b1; b2) respectively, then the scalar product of these vectors is calculated by the formula (a, b) = a1b1 + a2b2.
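A small added sketch (assuming Python with numpy; the vectors are arbitrary sample data) showing that the coordinate formula agrees with the "length times projection" description:

```python
# Sketch (added): the coordinate formula for the dot product and its
# geometric reading |a| * proj_a(b).
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([2.0, 1.0])

print(np.dot(a, b))                      # a1*b1 + a2*b2 = 10.0

# Same number as |a| times the projection of b onto a:
proj_b_on_a = np.dot(a, b) / np.linalg.norm(a)
print(np.linalg.norm(a) * proj_b_on_a)   # 10.0 again
```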

4. The cross product of two vectors a and b is an operation on them, defined only in three-dimensional space, whose result is a vector c = [a, b] with the following properties:

  • c is orthogonal to both a and b;
  • the length of c equals |a|·|b|·sin φ, where φ is the angle between a and b;
  • the triple a, b, c is right-handed.

The geometric meaning of the cross product: the modulus |[a, b]| equals the area of the parallelogram constructed on the vectors a and b. A necessary and sufficient condition for the collinearity of a nonzero vector a and a vector b is the existence of a number λ satisfying the equality b = λa.

If two vectors are given by their rectangular Cartesian coordinates, or more precisely, are represented in an orthonormal basis i, j, k, and the coordinate system is right-handed, then their cross product has the form

[a, b] = (a2b3 − a3b2; a3b1 − a1b3; a1b2 − a2b1).

To remember this formula, it is convenient to use the determinant:

$$[\mathbf a, \mathbf b] = \begin{vmatrix} \mathbf i & \mathbf j & \mathbf k \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}.$$
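An added sketch (assuming numpy; sample vectors chosen for easy checking) of the cross product and its area interpretation:

```python
# Sketch (added): cross product via np.cross and the parallelogram area.
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])

c = np.cross(a, b)
print(c)                    # [0. 0. 2.]  -- orthogonal to both a and b
print(np.linalg.norm(c))    # 2.0 = area of the parallelogram on a, b
```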

5. The mixed product of vectors a, b, c is the scalar product of the vector a and the cross product of the vectors b and c:

(a, b, c) = (a, [b, c]).

It is sometimes called the triple scalar product of the vectors, most likely because the result is a scalar (more precisely, a pseudoscalar).

Geometric meaning: The modulus of the mixed product is numerically equal to the volume of the parallelepiped formed by the vectors.

When two factors are interchanged, the mixed product changes sign: (a, b, c) = −(b, a, c).

Under a cyclic (circular) permutation of the factors, the mixed product does not change: (a, b, c) = (b, c, a) = (c, a, b).

The mixed product is linear in any factor.

The mixed product is zero if and only if the vectors are coplanar.

1. Condition for coplanarity of vectors: Three vectors are coplanar if and only if their mixed product is zero.

§ A triple of vectors containing a pair of collinear vectors is coplanar.

§ The mixed product of coplanar vectors is zero: (a, b, c) = 0. This is a criterion for the coplanarity of three vectors.

§ Coplanar vectors are linearly dependent. This is also a criterion for coplanarity.

§ There exist real numbers α and β such that c = αa + βb for coplanar a, b, c, except in the cases when a and b are collinear or one of them is the zero vector. This is a reformulation of the previous property and is also a criterion of coplanarity.

§ In 3-dimensional space, any 3 non-coplanar vectors a, b, c form a basis. That is, any vector d can be represented in the form d = αa + βb + γc; then (α, β, γ) are the coordinates of d in this basis.

The mixed product in a right-handed Cartesian coordinate system (in an orthonormal basis) equals the determinant of the matrix composed of the vectors a, b, c:

$$(\mathbf a, \mathbf b, \mathbf c) = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}.$$
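An added sketch (assuming numpy) of the determinant formula and of the coplanarity criterion (a, b, c) = 0:

```python
# Sketch (added): the mixed product as a 3x3 determinant, and the
# coplanarity test "mixed product == 0".
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
c = np.array([1.0, 1.0, 0.0])   # lies in the plane of a and b

print(np.linalg.det(np.array([a, b, c])))  # 0.0 -> a, b, c are coplanar

d = np.array([0.0, 0.0, 2.0])
print(np.linalg.det(np.array([a, b, d])))  # 2.0 = volume of the box on a, b, d
```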



§ 6. The general (complete) equation of a plane:

Ax + By + Cz + D = 0,

where A, B, C and D are constants, and A, B, C are not simultaneously zero; in vector form:

(r, n) + D = 0,

where r is the radius vector of a point and the vector n = (A; B; C) is perpendicular to the plane (the normal vector). The direction cosines of the vector n are

cos α = A/√(A² + B² + C²), cos β = B/√(A² + B² + C²), cos γ = C/√(A² + B² + C²).

If one of the coefficients in a plane equation is zero, the equation is called incomplete. When D = 0 the plane passes through the origin of coordinates; when A = 0 (or B = 0, or C = 0) the plane is parallel to the axis Ox (respectively Oy or Oz). When A = B = 0 (A = C = 0, or B = C = 0) the plane is parallel to the coordinate plane Oxy (respectively Oxz or Oyz).

§ Equation of a plane in intercepts (segments):

x/a + y/b + z/c = 1,

where a, b, c are the segments cut off by the plane on the axes Ox, Oy and Oz.

§ Equation of a plane passing through the point M0(x0; y0; z0) perpendicular to the normal vector n = (A; B; C):

A(x − x0) + B(y − y0) + C(z − z0) = 0;

in vector form:

(r − r0, n) = 0.

A plane through M0 parallel to two non-collinear vectors a and b can also be written via a mixed product: (r − r0, a, b) = 0.

§ Normal (normalized) equation of a plane:

x cos α + y cos β + z cos γ − p = 0,

where cos α, cos β, cos γ are the direction cosines of the normal vector and p ≥ 0 is the distance from the origin to the plane.

§ The angle between two planes. If the equations of the planes are given in the form (1), then

cos φ = (A1A2 + B1B2 + C1C2) / (√(A1² + B1² + C1²) · √(A2² + B2² + C2²));

if in vector form, then cos φ = (n1, n2) / (|n1| |n2|).

§ The planes are parallel if

A1/A2 = B1/B2 = C1/C2, or [n1, n2] = 0 (cross product).

§ The planes are perpendicular if

A1A2 + B1B2 + C1C2 = 0, or (n1, n2) = 0 (scalar product).

7. Equation of a plane passing through three given points M1(x1; y1; z1), M2(x2; y2; z2), M3(x3; y3; z3) not lying on the same straight line:

$$\begin{vmatrix} x - x_1 & y - y_1 & z - z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = 0,$$

i.e. the mixed product (r − r1, r2 − r1, r3 − r1) = 0.

8. The distance from a point to a plane is the smallest of the distances between this point and the points of the plane. It is known that the distance from a point to a plane is equal to the length of the perpendicular drawn from this point to the plane.

§ The deviation δ of a point M*(x*; y*; z*) from the plane given by the normalized equation is

δ = x* cos α + y* cos β + z* cos γ − p.

δ > 0 if M* and the origin of coordinates lie on different sides of the plane, and δ < 0 in the opposite case. The distance from the point to the plane is d = |δ|.

§ The distance from the point M0(x0; y0; z0) to the plane given by the equation Ax + By + Cz + D = 0 is calculated by the formula:

d = |Ax0 + By0 + Cz0 + D| / √(A² + B² + C²).
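A short added sketch of this distance formula (assuming numpy; point_plane_distance is an illustrative helper, not a library function):

```python
# Sketch (added): distance from a point to a plane Ax + By + Cz + D = 0.
import numpy as np

def point_plane_distance(p, A, B, C, D):
    """|A*x0 + B*y0 + C*z0 + D| / sqrt(A^2 + B^2 + C^2)."""
    n = np.array([A, B, C], dtype=float)
    return abs(np.dot(n, p) + D) / np.linalg.norm(n)

# Point (1, 2, 3) and the plane z = 0 (A = B = 0, C = 1, D = 0):
print(point_plane_distance(np.array([1.0, 2.0, 3.0]), 0, 0, 1, 0))  # 3.0
```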

9. A pencil of planes is the set of all planes passing through the line of intersection of two given planes; its equation is

α(A1x + B1y + C1z + D1) + β(A2x + B2y + C2z + D2) = 0,

where α and β are any numbers that are not simultaneously zero.

In order for the three planes defined relative to a rectangular Cartesian coordinate system by their general equations A1x + B1y + C1z + D1 = 0, A2x + B2y + C2z + D2 = 0, A3x + B3y + C3z + D3 = 0 to belong to one pencil, proper or improper, it is necessary and sufficient that the rank of the matrix of their coefficients be equal to either two or one.

Theorem 2. Let two planes π1 and π2 be given relative to a rectangular Cartesian coordinate system by their general equations A1x + B1y + C1z + D1 = 0 and A2x + B2y + C2z + D2 = 0. For the plane π3, defined by its general equation A3x + B3y + C3z + D3 = 0, to belong to the pencil formed by the planes π1 and π2, it is necessary and sufficient that the left-hand side of the equation of π3 can be represented as a linear combination of the left-hand sides of the equations of the planes π1 and π2.
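An added sketch (assuming numpy; the three sample planes all contain the line x = y = 0) of the rank criterion for a pencil:

```python
# Sketch (added): testing the rank condition for three planes to belong
# to one pencil, using the matrix of coefficients (A, B, C, D) per plane.
import numpy as np

planes = np.array([
    [1.0, 0.0, 0.0, 0.0],   # x = 0
    [0.0, 1.0, 0.0, 0.0],   # y = 0
    [1.0, 1.0, 0.0, 0.0],   # x + y = 0 -- contains the line x = y = 0
])

# Rank <= 2 means the third plane is a combination of the first two,
# i.e. all three pass through one line (a proper pencil here).
print(np.linalg.matrix_rank(planes))   # 2
```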

10. Vector parametric equation of a line in space:

r = r0 + t·s, t ∈ ℝ,

where r0 is the radius vector of some fixed point M0 lying on the line, s is a nonzero vector collinear with the line (its direction vector), and r is the radius vector of an arbitrary point of the line.

Parametric equations of a line in space:

x = x0 + t·l, y = y0 + t·m, z = z0 + t·n.

Canonical equations of a line in space:

(x − x0)/l = (y − y0)/m = (z − z0)/n,

where x0, y0, z0 are the coordinates of some fixed point M0 lying on the line, and l, m, n are the coordinates of a vector collinear with the line (the direction vector).

General equation of a line in space: since a line is the intersection of two distinct non-parallel planes, given respectively by the general equations

A1x + B1y + C1z + D1 = 0 and A2x + B2y + C2z + D2 = 0,

the line can be specified by the system of these two equations.

The angle between the direction vectors a and b of two lines equals the angle between the lines. The angle between vectors is found using the scalar product: cos φ = (a, b) / (|a| |b|).

The angle between a line and a plane is found by the formula:

sin φ = |Al + Bm + Cn| / (√(A² + B² + C²) · √(l² + m² + n²)),

where (A; B; C) are the coordinates of the normal vector of the plane and (l; m; n) are the coordinates of the direction vector of the line.
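An added sketch (assuming numpy) of this formula for a sample line and plane:

```python
# Sketch (added): angle between a line and a plane from the formula above.
import numpy as np

n = np.array([0.0, 0.0, 1.0])   # normal of the plane z = 0
s = np.array([1.0, 0.0, 1.0])   # direction vector of the line

sin_phi = abs(np.dot(n, s)) / (np.linalg.norm(n) * np.linalg.norm(s))
print(np.degrees(np.arcsin(sin_phi)))   # ~45.0 degrees
```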

Conditions for parallelism of two lines:

a) If the lines are given by equations (4) with an angular coefficient, then the necessary and sufficient condition for their parallelism is the equality of their angular coefficients:

k1 = k2. (8)

b) For the case when the lines are given by equations in general form (6), a necessary and sufficient condition for their parallelism is that the coefficients of the corresponding current coordinates in their equations be proportional, i.e. A1/A2 = B1/B2.

Conditions for perpendicularity of two straight lines:

a) In the case when the lines are given by equations (4) with angular coefficients, a necessary and sufficient condition for their perpendicularity is that their angular coefficients be negative reciprocals of each other, i.e. k1·k2 = −1 (equivalently, k2 = −1/k1).

b) If the equations of lines are given in general form (6), then the condition for their perpendicularity (necessary and sufficient) is to satisfy the equality

A1A2 + B1B2 = 0. (12)

A line is called perpendicular to a plane if it is perpendicular to every line in this plane. If a line is perpendicular to each of two intersecting lines of a plane, then it is perpendicular to that plane. For a line and a plane to be parallel, it is necessary and sufficient that the normal vector of the plane and the direction vector of the line be perpendicular, i.e. that their scalar product be zero: Al + Bm + Cn = 0.

For a line and a plane to be perpendicular, it is necessary and sufficient that the normal vector of the plane and the direction vector of the line be collinear. This condition holds exactly when the cross product of these vectors is the zero vector, i.e. when A/l = B/m = C/n.
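An added sketch (assuming numpy) of both tests, with a sample plane and line:

```python
# Sketch (added): parallel / perpendicular tests for a line and a plane
# via the normal vector n and the direction vector s.
import numpy as np

n = np.array([0.0, 0.0, 1.0])   # plane z = 0
s = np.array([1.0, 2.0, 0.0])   # line direction lying inside that plane

print(np.isclose(np.dot(n, s), 0.0))      # True  -> line parallel to plane
print(np.allclose(np.cross(n, s), 0.0))   # False -> line not perpendicular
```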

12. In space, the distance from a point M1 (with radius vector r1) to a line given by the parametric equation r = r0 + t·s can be found as the minimum distance from the given point to an arbitrary point of the line. The parameter t of this nearest point can be found by the formula

t = (r1 − r0, s) / (s, s),

and the distance itself is d = |[s, r1 − r0]| / |s|.

The distance between skew (crossing) lines is the length of their common perpendicular. It equals the distance between the parallel planes passing through these lines.
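An added sketch (assuming numpy) computing the point-to-line distance both through the minimizing parameter t and through the cross-product formula:

```python
# Sketch (added): distance from a point to a line r = r0 + t*s, via the
# minimizing parameter t and via the cross-product formula.
import numpy as np

r0 = np.array([0.0, 0.0, 0.0])
s  = np.array([1.0, 0.0, 0.0])      # the x-axis
r1 = np.array([2.0, 3.0, 0.0])      # the point

t = np.dot(r1 - r0, s) / np.dot(s, s)
nearest = r0 + t * s
print(np.linalg.norm(r1 - nearest))                              # 3.0

print(np.linalg.norm(np.cross(s, r1 - r0)) / np.linalg.norm(s))  # 3.0 again
```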

In this article we will cover:

  • what are collinear vectors;
  • what are the conditions for collinearity of vectors;
  • what properties of collinear vectors exist;
  • what is the linear dependence of collinear vectors.
Definition 1

Collinear vectors are vectors that are parallel to one line or lie on one line.


Conditions for collinearity of vectors

Two vectors are collinear if any of the following conditions are true:

  • condition 1. Vectors a and b are collinear if there is a number λ such that a = λ·b;
  • condition 2. Vectors a and b are collinear if their coordinates are proportional:

a = (a1; a2), b = (b1; b2) ⇒ a ∥ b ⇔ a1/b1 = a2/b2;

  • condition 3. Vectors a and b are collinear provided that their cross product equals the zero vector:

a ∥ b ⇔ [a, b] = 0.

Note 1

Condition 2 is not applicable if one of the vector coordinates is zero.

Note 2

Condition 3 applies only to vectors given in space (three-dimensional vectors).
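An added sketch (assuming numpy; a and b are sample data with b = 2a) checking all three conditions:

```python
# Sketch (added): the three collinearity tests for concrete vectors.
import numpy as np

a = np.array([1.0, 3.0])
b = np.array([2.0, 6.0])            # b = 2*a, so a and b are collinear

# Condition 1: b == lambda * a for some number lambda.
lam = b[0] / a[0]
print(np.allclose(b, lam * a))      # True

# Condition 2: proportional coordinates, a1/b1 == a2/b2.
print(np.isclose(a[0] / b[0], a[1] / b[1]))   # True

# Condition 3 (3D only): the cross product is the zero vector.
a3 = np.append(a, 0.0); b3 = np.append(b, 0.0)
print(np.allclose(np.cross(a3, b3), 0.0))     # True
```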

Examples of problems to study the collinearity of vectors

Example 1

We examine the vectors a = (1; 3) and b = (2; 1) for collinearity.

How to solve?

In this case, it is necessary to use the 2nd collinearity condition. For the given vectors it looks like this:

1/2 = 3/1.

The equality is false. From this we can conclude that the vectors a and b are non-collinear.

Answer: a ∦ b.

Example 2

For what value of m are the vectors a = (1; 2) and b = (−1; m) collinear?

How to solve?

Using the second collinearity condition, the vectors will be collinear if their coordinates are proportional:

1/(−1) = 2/m.

This shows that m = −2.

Answer: m = −2.

Criteria for linear dependence and linear independence of vector systems

Theorem

A system of vectors in a vector space is linearly dependent if and only if one of the vectors of the system can be expressed in terms of the remaining vectors of this system.

Proof (necessity)

Let the system e1, e2, …, en be linearly dependent. Let us write a linear combination of this system equal to the zero vector:

a1e1 + a2e2 + … + anen = 0,

in which at least one of the combination coefficients is not equal to zero.

Let ak ≠ 0 for some k ∈ {1, 2, …, n}.

We divide both sides of the equality by the nonzero coefficient ak (i.e. multiply by ak⁻¹):

(ak⁻¹a1)e1 + … + (ak⁻¹ak−1)ek−1 + ek + (ak⁻¹ak+1)ek+1 + … + (ak⁻¹an)en = 0

Let us denote:

βm = ak⁻¹am, where m ∈ {1, 2, …, k − 1, k + 1, …, n}.

In this case:

β1e1 + … + βk−1ek−1 + ek + βk+1ek+1 + … + βnen = 0,

whence ek = (−β1)e1 + … + (−βk−1)ek−1 + (−βk+1)ek+1 + … + (−βn)en.

It follows that one of the vectors of the system is expressed through all the other vectors of the system, which is what needed to be proved (Q.E.D.).

Sufficiency

Let one of the vectors be linearly expressed through all other vectors of the system:

ek = γ1e1 + … + γk−1ek−1 + γk+1ek+1 + … + γnen

We move the vector ek to the right-hand side of this equality:

0 = γ1e1 + … + γk−1ek−1 − ek + γk+1ek+1 + … + γnen

Since the coefficient of the vector ek equals −1 ≠ 0, we obtain a nontrivial representation of zero by the system of vectors e1, e2, …, en, which in turn means that this system of vectors is linearly dependent. Q.E.D.

Corollaries:

  • A system of vectors is linearly independent if and only if none of its vectors can be expressed in terms of the other vectors of the system.
  • A system of vectors that contains the zero vector or two equal vectors is linearly dependent.

Properties of linearly dependent vectors

  1. For 2- and 3-dimensional vectors: two vectors are linearly dependent if and only if they are collinear.
  2. For 3-dimensional vectors: three vectors are linearly dependent if and only if they are coplanar.
  3. For n-dimensional vectors: any n + 1 vectors are always linearly dependent.

Examples of solving problems involving linear dependence or linear independence of vectors

Example 3

Let's check the vectors a = (3, 4, 5), b = (−3, 0, 5), c = (4, 4, 4), d = (3, 4, 0) for linear independence.

Solution. The vectors are linearly dependent, since the dimension of the vectors (3) is less than the number of vectors (4).

Example 4

Let's check the vectors a = (1, 1, 1), b = (1, 2, 0), c = (0, −1, 1) for linear independence.

Solution. We find the values ​​of the coefficients at which the linear combination will equal the zero vector:

x1·a + x2·b + x3·c = 0

We write the vector equation coordinate by coordinate as a linear system:

x1 + x2 = 0
x1 + 2x2 − x3 = 0
x1 + x3 = 0

We solve this system using the Gauss method. The augmented matrix is:

[ 1  1  0 | 0 ]
[ 1  2 −1 | 0 ]
[ 1  0  1 | 0 ]

From the 2nd row we subtract the 1st, and from the 3rd row the 1st:

[ 1  1  0 | 0 ]
[ 0  1 −1 | 0 ]
[ 0 −1  1 | 0 ]

From the 1st row we subtract the 2nd, and to the 3rd row we add the 2nd:

[ 1  0  1 | 0 ]
[ 0  1 −1 | 0 ]
[ 0  0  0 | 0 ]

From the last matrix it follows that the system has infinitely many solutions. This means that there is a nonzero set of numbers x1, x2, x3 for which the linear combination of a, b, c equals the zero vector. Therefore, the vectors a, b, c are linearly dependent.
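An added sketch (assuming numpy) confirming this conclusion by a rank computation and exhibiting one explicit dependence:

```python
# Sketch (added): rank < 3 means the three vectors admit a nontrivial
# vanishing combination, i.e. they are linearly dependent.
import numpy as np

a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([0.0, -1.0, 1.0])

M = np.column_stack([a, b, c])
print(np.linalg.matrix_rank(M))      # 2 < 3 -> linearly dependent

# One explicit dependence: a - b - c = 0.
print(a - b - c)                     # [0. 0. 0.]
```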




Def. A system of elements x1, …, xm of a linear space V is called linearly dependent if there exist λ1, …, λm ∈ ℝ (|λ1| + … + |λm| ≠ 0) such that λ1x1 + … + λmxm = θ.

Def. A system of elements x1, …, xm ∈ V is called linearly independent if the equality λ1x1 + … + λmxm = θ implies λ1 = … = λm = 0.

Def. An element x ∈ V is called a linear combination of elements x1, …, xm ∈ V if there exist λ1, …, λm ∈ ℝ such that x = λ1x1 + … + λmxm.

Theorem (criterion of linear dependence): A system of vectors x1, …, xm ∈ V is linearly dependent if and only if at least one vector of the system is linearly expressed in terms of the others.

Proof. Necessity: let x1, …, xm be linearly dependent ⟹ there exist λ1, …, λm ∈ ℝ (|λ1| + … + |λm| ≠ 0) such that λ1x1 + … + λm−1xm−1 + λmxm = θ. Say λm ≠ 0; then

xm = (−λ1/λm)x1 + … + (−λm−1/λm)xm−1.

Sufficiency: let at least one of the vectors be linearly expressed through the remaining vectors: xm = λ1x1 + … + λm−1xm−1 (λ1, …, λm−1 ∈ ℝ). Then λ1x1 + … + λm−1xm−1 + (−1)·xm = θ, where the coefficient λm = −1 ≠ 0, so x1, …, xm are linearly dependent.

Sufficient condition for linear dependence:

If a system contains the zero element or a linearly dependent subsystem, then it is linearly dependent.

Proof: we must exhibit coefficients, not all zero, with λ1x1 + … + λmxm = θ.

1) Let x1 = θ. Then the equality holds for λ1 = 1 and λ2 = … = λm = 0.

2) Let x1, …, xk (k < m) be a linearly dependent subsystem ⟹ there exist λ1, …, λk with |λ1| + … + |λk| ≠ 0 such that λ1x1 + … + λkxk = θ. Setting λk+1 = … = λm = 0, we still have |λ1| + … + |λm| ≠ 0 and λ1x1 + … + λmxm = θ, so the whole system is linearly dependent.

Basis of a linear space. Coordinates of a vector in a given basis. Coordinates of a sum of vectors and of the product of a vector by a number. A necessary and sufficient condition for the linear dependence of a system of vectors.

Definition: An ordered system of elements e1, …, en of a linear space V is called a basis of this space if:

A) e1, …, en are linearly independent;

B) for every x ∈ V there exist α1, …, αn such that x = α1e1 + … + αnen.

The formula x = α1e1 + … + αnen is called the expansion of the element x in the basis e1, …, en, and the numbers α1, …, αn ∈ ℝ are the coordinates of the element x in the basis e1, …, en.

Theorem: If a basis e1, …, en is given in a linear space V, then for every x ∈ V the column of coordinates of x in the basis e1, …, en is uniquely determined.

Proof: Let x = α1e1 + … + αnen and x = β1e1 + … + βnen. Subtracting one expansion from the other gives (α1 − β1)e1 + … + (αn − βn)en = θ. Since e1, …, en are linearly independent, αi − βi = 0 for all i = 1, …, n, i.e. αi = βi. Q.E.D.

Theorem: let e1, …, en be a basis of the linear space V, let x, y be arbitrary elements of V, and let λ ∈ ℝ be an arbitrary number. When x and y are added, their coordinates are added; when x is multiplied by λ, the coordinates of x are also multiplied by λ.

Proof: Let x = α1e1 + … + αnen and y = β1e1 + … + βnen. Then

x + y = (α1 + β1)e1 + … + (αn + βn)en,

λx = (λα1)e1 + … + (λαn)en.

Lemma 1 (a necessary and sufficient condition for the linear dependence of a system of vectors):

Let e1, …, en be a basis of the space V. A system of elements f1, …, fk ∈ V is linearly dependent if and only if the coordinate columns of these elements in the basis e1, …, en are linearly dependent.

Proof: expand f1, …, fk in the basis e1, …, en:

fm = α1m·e1 + … + αnm·en, m = 1, …, k.

For any coefficients λ1, …, λk, collecting the terms at each basis vector gives

λ1f1 + … + λkfk = (λ1α11 + … + λkα1k)e1 + … + (λ1αn1 + … + λkαnk)en.

Since the expansion in a basis is unique, λ1f1 + … + λkfk = θ if and only if λ1A1 + … + λkAk = 0, where Am is the coordinate column of fm. Hence a nontrivial vanishing combination of f1, …, fk exists exactly when one exists for their coordinate columns, which is what needed to be proved.

13. Dimension of a linear space. Theorem on the connection between dimension and basis.

Definition: A linear space V is called n-dimensional if there are n linearly independent elements in V, while any system of n + 1 elements of V is linearly dependent. In this case n is called the dimension of the linear space V and is denoted dim V = n.

A linear space is called infinite-dimensional if for every N ∈ ℕ the space V contains a linearly independent system of N elements.

Theorem: 1) If V is an n-dimensional linear space, then any ordered system of n linearly independent elements of this space forms a basis. 2) If a linear space V has a basis consisting of n elements, then the dimension of V equals n (dim V = n).

Proof: 1) Let dim V = n ⟹ V contains n linearly independent elements e1, …, en. We prove that these elements form a basis, i.e. that every x ∈ V can be expanded in e1, …, en. Adjoin x to them: the system e1, …, en, x contains n + 1 vectors, hence it is linearly dependent. Since e1, …, en are linearly independent, by Theorem 2 x is linearly expressed through e1, …, en, i.e. there exist α1, …, αn such that x = α1e1 + … + αnen. So e1, …, en is a basis of the space V. 2) Let e1, …, en be a basis of V, so V contains n linearly independent elements. Take arbitrary f1, …, fn, fn+1 ∈ V, i.e. n + 1 elements, and show that they are linearly dependent. Expand them in the basis:

fm = α1m·e1 + … + αnm·en, m = 1, …, n + 1,

and form the matrix A of their coordinate columns. The matrix has n rows ⟹ Rg A ≤ n. The number of columns is n + 1 > n ≥ Rg A ⟹ the columns of A (i.e. the coordinate columns of f1, …, fn, fn+1) are linearly dependent. By Lemma 1 ⟹ f1, …, fn, fn+1 are linearly dependent ⟹ dim V = n.
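An added sketch (assuming numpy; A is a random 3×4 coordinate matrix) illustrating the counting argument: n + 1 coordinate columns in an n-dimensional space are always dependent:

```python
# Sketch (added): the rank of an n x (n+1) coordinate matrix is at most n,
# so its n+1 columns are always linearly dependent.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))     # 4 coordinate columns in R^3

print(np.linalg.matrix_rank(A))     # at most 3 < 4 -> columns dependent

# A nonzero null-space vector gives the dependence coefficients:
_, _, Vt = np.linalg.svd(A)
lam = Vt[-1]                        # A @ lam == 0 (up to round-off)
print(np.allclose(A @ lam, 0.0))    # True
```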

Corollary: If some basis contains n elements, then any other basis of this space also contains n elements.

Theorem 2: If the system of vectors x1, …, xm−1, xm is linearly dependent and its subsystem x1, …, xm−1 is linearly independent, then xm is linearly expressed through x1, …, xm−1.

Proof: Since x1, …, xm−1, xm is linearly dependent, there exist λ1, …, λm−1, λm, not all zero, such that λ1x1 + … + λm−1xm−1 + λmxm = θ. If λm = 0, then λ1, …, λm−1 would not all be zero, and x1, …, xm−1 would be linearly dependent, which cannot be. Hence λm ≠ 0 and

xm = (−λ1/λm)x1 + … + (−λm−1/λm)xm−1.


