Theory of interpretation of aerial and space images

Comparative interpretation of a series of zonal images is based on the use of the spectral signatures (spectral images) of the objects depicted. The spectral image of an object in a photograph is determined visually from the tone of its image in a series of zonal black-and-white photographs; tone is assessed on a standardized scale in optical density units. From the data obtained, a spectral image curve is constructed, reflecting the change in the optical density of the image across the spectral zones. In this case the optical density values of the prints plotted along the ordinate axis D, contrary to the usual convention, decrease upward along the axis, so that the spectral image curve corresponds in shape to the spectral brightness curve. Some commercial programs plot spectral images from digital images automatically. The logical scheme of comparative interpretation of multispectral images includes the following steps: determination of the spectral image of an object from the photographs, comparison with known spectral reflectance curves, and identification of the object.
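The construction of a spectral image can be sketched in code. The example below is a minimal sketch assuming the zonal images are already co-registered arrays of brightness values; all names and numbers are illustrative, not from any specific system:

```python
import numpy as np

# Hypothetical co-registered zonal images (green, red, near-IR), 8-bit brightness.
green = np.array([[100, 102], [98, 101]], dtype=np.uint8)
red   = np.array([[ 40,  42], [39,  41]], dtype=np.uint8)
nir   = np.array([[200, 198], [199, 201]], dtype=np.uint8)

# Pixels belonging to the object whose spectral image we want.
mask = np.array([[True, True], [False, False]])

def spectral_signature(bands, mask):
    """Mean image brightness of the masked object in each spectral zone."""
    return [float(b[mask].mean()) for b in bands]

sig = spectral_signature([green, red, nir], mask)
print(sig)  # low in red, high in near-IR: a vegetation-like curve
```

The resulting list of per-zone means is the digital counterpart of the spectral image curve; comparing it with known reflectance curves completes the identification step described above.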

When delineating contours over the entire area of an image, the spectral image is successfully used to determine the boundaries of the distribution of the objects being interpreted, which is done using comparative interpretation techniques. Let us explain them. On each zonal image, certain sets of objects are separated by tone, and these sets differ from zone to zone. Comparing the zonal images makes it possible to separate these sets and identify individual objects. Such a comparison can be realized by combining ("subtracting") the interpretation schemes of the zonal images, each of which identifies a different set of objects, or by deriving difference images from the zonal images. Comparative interpretation is most applicable to the study of vegetation, primarily forests and agricultural crops.
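The derivation of a difference image from two zonal images can be illustrated with a short sketch (hypothetical data; the threshold is illustrative): subtracting the red-zone image from the near-infrared one makes vegetation, which is dark in red and bright in near-IR, stand out sharply.

```python
import numpy as np

# Hypothetical zonal images: vegetation is dark in the red zone, bright in near-IR.
red = np.array([[30, 30, 120], [30, 120, 120]], dtype=np.int16)
nir = np.array([[200, 200, 120], [200, 120, 120]], dtype=np.int16)

# Difference image: large values separate the vegetation set of objects,
# values near zero correspond to objects that look alike in both zones.
diff = nir - red
vegetation_mask = diff > 100
print(vegetation_mask.sum())  # number of pixels assigned to vegetation
```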

In sequential interpretation of multispectral images, use is also made of the fact that the dark contours of vegetation in the red zone, owing to the increase in their brightness in the near-infrared zone, seem to "disappear" from the image against a lighter background, without interfering with the perception of large features of the tectonic structure and relief. This opens up the possibility, for example, in geomorphological studies, of interpreting relief forms of different origins from different zonal images: endogenous forms from images in the near-infrared zone and exogenous forms from images in the red zone. Sequential interpretation relies on technologically simple operations of step-by-step summation of results.



Interpreting multi-temporal images. Multi-temporal images make it possible to study changes in the objects under investigation qualitatively and to interpret objects indirectly from their dynamic characteristics.

Dynamics studies. The process of extracting dynamic information from images includes identifying changes, displaying them graphically, and interpreting them meaningfully. To identify changes, images taken at different times must be compared with each other, which is done through alternate (separate) or simultaneous (joint) observation. Technically, visual comparison of multi-temporal images is carried out most simply by observing them one after another. The long-established "blinking" method makes it possible, for example, to detect a newly appeared object simply by quickly alternating between two photographs taken at different times. From a series of photographs of a changing object, an illustrative cinemagram can be assembled. Thus, if images of the Earth received every half hour from geostationary satellites at the same viewing angle are edited into an animation file, the daily development of cloud cover can be replayed on the screen repeatedly.
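The numerical counterpart of the "blinking" comparison is a simple thresholded difference of two co-registered images taken at different times. A minimal sketch with hypothetical data and an illustrative threshold:

```python
import numpy as np

# Two hypothetical co-registered images of the same scene at different times.
before = np.zeros((5, 5), dtype=np.int16)
after = before.copy()
after[2, 3] = 180  # a newly appeared bright object

# Threshold the absolute difference to suppress small brightness noise.
changed = np.abs(after - before) > 50
rows, cols = np.nonzero(changed)
print(list(zip(rows.tolist(), cols.tolist())))  # location of the new object
```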

To identify small changes, it turns out to be more effective to observe multi-temporal images not sequentially but jointly, for which special techniques are used: combining images (monocular and binocular); synthesizing a difference or composite (usually color) image; and stereoscopic observation.

In monocular observation, photographs brought to the same scale and projection and printed on a transparent base are combined by superimposing one on the other and are viewed against the light. In computer-based interpretation, it is advisable to use programs that make the combined images appear translucent or that "reveal" areas of one image against the background of another.
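In the digital case, the translucent combination of two co-registered images is ordinary alpha blending; a minimal sketch:

```python
import numpy as np

def blend(img_a, img_b, alpha=0.5):
    """Combine two co-registered images so both are perceived as translucent."""
    return (alpha * img_a + (1 - alpha) * img_b).astype(np.uint8)

a = np.full((2, 2), 200, dtype=np.uint8)  # hypothetical image from date 1
b = np.full((2, 2), 100, dtype=np.uint8)  # hypothetical image from date 2
print(blend(a, b))  # every pixel is the 50/50 mixture of the two dates
```

Varying `alpha` interactively gives the "revealing" effect mentioned above: at 1.0 only the first image is seen, at 0.0 only the second.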

Binocular observation, in which each of two multi-temporal photographs is viewed with one eye, is most conveniently carried out with a stereoscope whose observation channels have independent adjustment of magnification and image brightness. Binocular observation is effective for detecting changes in distinct objects against a relatively uniform background, such as changes in a river bed.

From black-and-white photographs taken at different times it is possible to obtain a synthesized color image. However, as experience shows, interpreting such a color image is difficult. This technique is effective only for studying the dynamics of objects that are simple in structure and have sharp boundaries.
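A synthesized color image of this kind can be sketched by assigning black-and-white images from three dates to the red, green and blue channels (hypothetical data): areas that did not change appear neutral gray, while changed areas take on color.

```python
import numpy as np

# Three hypothetical co-registered black-and-white images from different dates.
t1 = np.array([[50, 200], [50, 50]], dtype=np.uint8)
t2 = np.array([[50, 120], [50, 50]], dtype=np.uint8)
t3 = np.array([[50,  60], [50, 50]], dtype=np.uint8)

composite = np.dstack([t1, t2, t3])  # dates -> R, G, B channels

# Unchanged areas have equal channels (neutral gray); changed areas are colored.
r, g, b = composite[..., 0], composite[..., 1], composite[..., 2]
changed = ~((r == g) & (g == b))
print(changed)
```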

When studying changes caused by the displacement and movement of objects, the best results are obtained by stereoscopic observation of multi-temporal images (the pseudo-stereo effect). Here one can evaluate the nature of the movement and stereoscopically perceive the boundaries of a moving object, for example, the boundaries of an active landslide on a mountain slope.

In contrast to alternate methods, joint observation of multi-temporal images requires preliminary corrections, bringing the images to the same scale and projection, and these procedures are often more complex and time-consuming than the determination of the changes itself.

Interpretation using dynamic features. Patterns of temporal change in geographical objects whose states vary over time can serve as interpretation features; as already noted, such a pattern is called the temporal image of the object. For example, thermal images obtained at different times of day make it possible to recognize objects that have a specific diurnal temperature variation. When working with multi-temporal images, the same techniques are used as in interpreting multispectral images. They are based on sequential and comparative analysis and synthesis and are common to work with any series of images.

Field and office interpretation. In field interpretation, objects are identified directly on the ground by comparing the object in nature with its image in the photograph. The interpretation results are drawn on the image or on a transparent overlay attached to it. This is the most reliable, but also the most expensive, type of interpretation. Field interpretation can be performed not only on photographic prints but also on screen (digital) images. In the latter case, a field microcomputer with a touch-sensitive tablet screen and special software is usually used. The interpretation results are marked on the screen in the field with a stylus, fixed with a set of symbols, and recorded in text or tabular form in several layers of the microcomputer's memory. Additional audio notes about the object being interpreted can also be entered. During field interpretation it is often necessary to add missing objects to the images. This supplementary survey is carried out by eye or by instrument. For this purpose satellite positioning receivers are used, which make it possible to determine in the field the coordinates of objects absent from the image with almost any required accuracy. When interpreting images at a scale of 1:25,000 and smaller, it is convenient to use portable satellite receivers connected to a microcomputer as a single field interpretation kit.

Aerovisual interpretation, a variety of field interpretation, is most effective in the tundra and desert. The altitude and flight speed of the helicopter or light aircraft are chosen according to the scale of the images: the smaller the scale, the greater the altitude and speed. Aerovisual interpretation is effective when working with satellite images. However, it is not easy to carry out: the performer must be able to orient and recognize objects quickly.

Office interpretation, the main and most common type, recognizes an object by direct and indirect interpretation features without going into the field and directly comparing the image with the object. In practice the two types are usually combined. A rational scheme combines preliminary office, selective field, and final office interpretation of the aerospace images. The ratio of field to office work also depends on the scale of the images. Large-scale aerial photographs are interpreted primarily in the field. When working with satellite images covering large areas, the role of office interpretation increases. Ground-based field information for satellite images is often replaced by cartographic information taken from topographic, geological, soil, geobotanical and other maps.

Reference interpretation. Office interpretation is based on the use of interpretation keys (standards) created in the field on key areas typical of the given territory. Interpretation keys are photographs of characteristic areas on which the results of interpreting typical objects are printed, accompanied by a description of the interpretation features. The keys are then used for office interpretation, which is performed by geographic interpolation and extrapolation, that is, by extending the identified interpretation features to the areas between the keys and beyond them. Office interpretation using keys was developed during topographic mapping of hard-to-reach areas, when a number of organizations created photo libraries of keys. The cartographic service of our country published albums of samples of the interpretation of various types of objects on aerial photographs. In thematic interpretation of space images, most of them multispectral, this instructional role is played by the scientific-methodological atlases "Interpretation of multispectral aerospace images" prepared at Lomonosov Moscow State University, which contain methodological recommendations and examples of the results of interpreting various components of the natural environment, socio-economic objects, and the consequences of anthropogenic impact on nature.

Preparing images for visual interpretation. Original images are rarely used for geographic interpretation. For aerial photographs, contact prints are usually used, while satellite images are best interpreted in transmitted light, using film transparencies, which convey the small and low-contrast details of the space image more fully.

Image transformation. For faster, simpler and more complete extraction of the necessary information, an image is transformed, which amounts to obtaining another image with specified properties. Transformation is aimed at emphasizing necessary information and removing unnecessary information. It should be emphasized that image transformation adds no new information; it only brings the existing information into a form convenient for further use.

Images can be transformed by photographic, optical and computer methods, or a combination of them. Photographic methods are based on various modes of photochemical processing; optical methods, on transforming the light flux passed through the image. Computer transformations are now the most common; at present there is essentially no alternative to them. Common computer transformations for visual interpretation include compression and decompression, contrast conversion, color image synthesis, quantization and filtering, as well as the creation of new derivative geoimages.
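One of the transformations listed, contrast conversion, can be illustrated by a simple linear-stretch sketch (the data and output range are illustrative):

```python
import numpy as np

def contrast_stretch(img, out_min=0, out_max=255):
    """Linear contrast conversion: stretch the image's brightness range
    to [out_min, out_max] without adding any new information."""
    lo, hi = img.min(), img.max()
    scaled = (img.astype(np.float64) - lo) / (hi - lo)
    return np.round(scaled * (out_max - out_min) + out_min).astype(np.uint8)

# A hypothetical low-contrast image occupying only a narrow brightness band.
dull = np.array([[100, 110], [120, 130]], dtype=np.uint8)
print(contrast_stretch(dull))
```

Note that, in line with the text, the stretch only redistributes the existing brightness values for easier viewing; it cannot create detail that the original image lacks.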

Enlarging images. In visual interpretation it is customary to use technical aids that extend the capabilities of the eye, for example, magnifiers of various powers, from 2x to 10x. A measuring magnifier with a scale in the field of view is useful. The need for magnification becomes clear from a comparison of the resolution of images and of the eye. The resolution of the eye at the distance of best vision (250 mm) is taken to be 5 mm⁻¹. To distinguish, for example, all the details in a satellite photograph with a resolution of 100 mm⁻¹, it must be magnified 100/5 = 20 times. Only then can all the information contained in the photograph be used. Bear in mind that obtaining images at high magnification (more than 10x) by photographic or optical methods is not easy: large photographic enlargers are required, and the very high illumination of the original images that is needed is difficult to achieve.

Features of viewing images on a computer screen. The characteristics of the display are important for image perception: the best interpretation results are achieved on large screens that reproduce the maximum number of colors and have a high refresh rate. The magnification of a digital image on a computer screen is close to optimal when one screen pixel (pix) corresponds to one image pixel.

If the pixel size on the terrain PIX (the spatial resolution) and the screen pixel size pix are known, then the scale 1:M of the image on the display screen is determined by

1/M = pix / PIX, i.e. M = PIX / pix (pix and PIX expressed in the same units).

For example, a digital TM/Landsat satellite image with a ground pixel size PIX = 30 m will be reproduced on a display with pix = 0.3 mm at a scale of 1:100,000. If small details need to be examined, the on-screen image can be further enlarged by a computer program 2, 3, 4 or more times; in this case one image pixel is represented by 4, 9, 16 or more screen pixels, but the image takes on a "pixelated" structure noticeable to the eye. In practice an additional enlargement of 2-3x is the most common. To view the whole image on the screen at once, it must be reduced in size; in this case, however, only every 2nd, 3rd, 4th, etc. row and column of the image is displayed, and the loss of detail and small objects is inevitable.
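The scale calculation in this example can be written out as a small sketch (the 0.3 mm screen pixel is the value used in the example above):

```python
def display_scale_denominator(ground_pixel_m, screen_pixel_mm=0.3):
    """M = PIX / pix with both sizes in millimetres: the denominator of the
    scale 1:M at which a digital image appears when one image pixel fills
    exactly one screen pixel."""
    return round(ground_pixel_m * 1000 / screen_pixel_mm)

# Landsat TM: 30 m ground pixel, 0.3 mm screen pixel -> 1:100,000
print("1:%d" % display_scale_denominator(30))
```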

The effective working time for interpreting screen images is shorter than for visually interpreting prints. Current sanitary standards for computer work must also be taken into account; they regulate, in particular, the minimum distance from the interpreter's eyes to the screen (at least 500 mm), the duration of continuous work, the intensity of electromagnetic fields, noise, and so on.

Devices and aids. In the course of visual interpretation it is often necessary to make simple measurements and quantitative estimates. Various auxiliary aids are used for this: palettes, scales and tone tables, nomograms, etc. Stereoscopes of various designs are used for stereoscopic viewing of photographs. The best device for office interpretation is considered to be a stereoscope with a double observation system, which allows a stereo pair to be viewed by two interpreters at once. The transfer of interpretation results from individual images to a common cartographic base is usually carried out with a small special optical-mechanical device.

Presentation of interpretation results. The results of visual interpretation are most often presented in graphic and text form, less often in digital form. Usually the result of the work is a photograph on which the objects under study are outlined with conventional signs. The results are also recorded on a transparent overlay. When working on a computer, it is convenient to present the results as printer output (hard copies). From satellite images, so-called interpretation schemes are compiled, which in content are fragments of thematic maps drawn in the scale and projection of the image.

Visual interpretation method; direct and indirect interpretation features.

Materials used in visual interpretation

The concept of image interpretation. Classification of interpretation methods.

Deciphering (interpretation) is the analysis of image information with the aim of extracting information about the surface and interior of the Earth (or of other planets and their satellites), about objects located on the surface, and about processes occurring on the surface and in near-surface space.

Such information includes, for example, the spatial position of objects, their qualitative and quantitative characteristics, the delineation of the extent of the processes under study, data on their dynamics, and much more. The tasks of interpretation also include obtaining, from other sources, information that cannot be read directly from the images, for example, information about the presence, position and properties of undisplayed objects, or the names of settlements, rivers and tracts. Such sources can be materials of previously completed interpretation, plans, maps, auxiliary photographs, reference books, and the terrain itself. The results of visual interpretation are recorded with conventional signs on the interpreted image; the results of machine interpretation, by tone, color, symbols or other notation.

Another definition of decryption:

Image deciphering (interpretation) is the process of recognizing terrain objects from their photographic image and labeling their content with symbols indicating qualitative and quantitative characteristics.

Depending on its content, interpretation is divided into:

general geographic;

special (thematic, sectoral).

General geographic interpretation includes two types:

Topographic interpretation is carried out to detect, recognize and obtain the characteristics of objects that must be depicted on topographic maps. It is one of the foundations of the technological scheme for updating and creating maps.

Landscape interpretation is carried out for regional and typological zoning of the territory and for solving special problems.

Special (thematic, sectoral) interpretation is carried out to solve departmental problems of determining the characteristics of particular sets of objects. There are many varieties of thematic interpretation: agricultural, forestry, geological, soil, geobotanical, and others serving various departmental purposes. If the ultimate task of special interpretation is the compilation of thematic maps, for example agricultural, soil or geobotanical, then, in the absence of a suitable topographic base, special interpretation is accompanied by topographic interpretation.

The basis for the methodological classification of interpretation at its current level of development is the means of reading and analyzing image information. On this basis, the following main interpretation methods can be distinguished:

visual, in which information is read from the images and analyzed by a person;

machine-visual, in which the image information is pre-converted by specialized or general-purpose interpretation machines in order to facilitate the subsequent visual analysis of the resulting image;

automated (interactive), in which the reading and analysis of images, or the direct analysis of line-by-line recorded image information, is performed by specialized or general-purpose interpretation machines with the active participation of the operator;

automatic (machine), in which interpretation is performed entirely by interpretation machines; a person defines the tasks and sets the program for processing the image information.

Within all of these methods, lower levels of classification can be distinguished: techniques and variants of techniques.

The general scheme of the interpretation process is the same in all methods: recognition is performed by comparing a certain set of features of the object being interpreted with the corresponding reference features stored in the memory of a person or machine and determining the degree of their proximity. The recognition process is preceded by a learning (or self-learning) process, during which the list of objects to be interpreted is determined, the set of their features is selected, and the permissible degree of difference between them is established.

If there is insufficient a priori information about the classes of objects and their features, a person or a machine can divide the depicted objects, according to the proximity of certain features, into homogeneous groups, or clusters, whose content is then determined using additional data.
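The division into clusters by feature proximity can be sketched with a minimal k-means procedure (hypothetical two-band pixel values; this is a generic illustration, not the algorithm of any specific interpretation system):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: group pixels into k clusters by feature proximity."""
    rng = np.random.default_rng(seed)
    # Start from k distinct data points as initial cluster centers.
    centers = points[rng.choice(len(points), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to the nearest center, then move centers to the
        # mean of their assigned pixels.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Hypothetical pixels in a two-band feature space: two well-separated groups.
pixels = np.array([[10, 10], [11, 12], [9, 11],
                   [200, 200], [198, 202], [201, 199]])
labels = kmeans(pixels, k=2)
print(labels)
```

The content of the resulting clusters (e.g. "water" or "forest") would then be assigned by the interpreter from additional data, as the text describes.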

2. Visual interpretation method; direct and indirect interpretation features.

The natural objects depicted in photographs can be identified and interpreted by an interpreter from their properties, which are reflected in the interpretation features of these objects. All interpretation features can be divided into two groups: direct and indirect.

Direct features include those properties and characteristics of objects that are directly displayed in photographs and can be perceived visually or using technical means.

Direct interpretation features include the shape and size of the image of objects in plan and in height, the overall (integral) tone of a black-and-white image or the color of a color (spectrozonal) image, and the texture of the image.

Shape is in most cases a sufficient feature for separating objects of natural and anthropogenic origin. Objects created by humans tend to have regular configurations. For example, buildings and structures have regular geometric shapes. The same can be said of canals, highways and railways, parks and squares, arable and cultivated forage lands, and other objects. The shape of objects is sometimes also used as an indirect feature for determining the characteristics of other objects.

The dimensions of interpreted objects are in most cases assessed relatively. The relative height of objects can be judged directly from their image at the edges of photographs obtained with wide-angle survey systems. Size, like shape in height, can also be judged from the shadows cast by objects; of course, the area onto which the shadow falls must be horizontal.

The dimensions of the image of objects, like their shape, are distorted by the influence of the terrain and by the specifics of the projection used in the survey system.

Image tone is a function of the brightness of the object within the spectral sensitivity range of the radiation receiver of the survey system. In photometry the analogue of tone is the optical density of the image. The variability of this feature is associated with the following factors: lighting conditions, surface structure, type of photographic material and processing conditions, zone of the electromagnetic spectrum, and other causes. Tone is assessed visually by assigning the image to a certain level of a non-standardized achromatic scale, for example light, light gray, gray, and so on. The number of steps is determined by the threshold of light sensitivity of the human visual apparatus.

It has been experimentally established that the human eye can distinguish up to 25 gradations of gray tone; for practical purposes a gray scale of seven to ten levels is more often used (Table 1).

Table 1 Quantitative characteristics of image density

With the help of computers, up to 256 levels of gray tone can be distinguished in photographs and films. In addition, these levels can, depending on the task at hand, be grouped into certain steps with their own quantitative characteristics. The tone of a photographic image is significantly influenced by the textural properties of objects, on which the distribution in space of the light reflected from the object's surface depends.
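The grouping of gray levels into a small number of steps can be sketched as integer quantization (a sketch; the number of levels is illustrative):

```python
import numpy as np

def quantize(img, levels=8):
    """Group the 256 gray values of an 8-bit image into a few steps
    (density slicing)."""
    step = 256 // levels
    return np.minimum(img // step, levels - 1).astype(np.uint8)

# A hypothetical 8-bit image with the full range of gray values.
img = np.array([[0, 31, 32], [128, 200, 255]], dtype=np.uint8)
print(quantize(img))  # each pixel replaced by its step number, 0..7
```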

Optical density serves as a code conveying the properties of objects. Objects of completely different colors can appear in the same tone on a black-and-white photographic or television image. Because this indicator is unstable, in interpretation the phototone is assessed only in combination with other features (for example, texture). Nevertheless, it is the phototone that acts as the main interpretation feature forming the outlines of boundaries and the dimensions and structure of an object's image.

Tone can be quite an informative sign if the elements of the shooting system and shooting conditions are correctly selected.

The tone of the image of arable land can vary significantly in time and space, since it significantly depends on the state of the surface of unoccupied fields (plowed, harrowed, dry, wet, etc.), on the type and phenophase of crops on occupied fields.

Image color is a spectral characteristic determined by the energy of the light flux. The color gamut of an image is an essential interpretation feature. This feature should be considered in two aspects. In the first case, when the image on aerial and satellite photographs is formed in colors close to natural (color images), the recognition and classification of terrain objects causes no particular difficulty. Here such characteristics of color as its lightness and saturation are taken into account, as well as the different shades of one color. In the other case, a color image is formed in arbitrary colors (pseudo-colors), as in spectrozonal photography. The point of this deliberate distortion of nature's color scheme is that the observer more easily perceives the color contrasts of image details, so color aerial and space photographs have higher interpretability than black-and-white ones. The best results are obtained when interpreting spectrozonal aerial photographs with higher color contrast.

Color (tone) of the image on aerial photographs

Terrain feature         | Black-and-white    | Color                         | Spectrozonal
Pine forest             | light gray         | dark green                    | dark purple
Spruce forest           | gray               | green                         | brownish purple
Deciduous forest        | bright light gray  | light green                   | bluish and greenish purple
Oak forest              | gray               | green                         | greenish blue with shades
Birch forest            | light gray         | green                         |
Aspen forest            | bright light gray  | light green                   |
Deciduous shrub         | gray               | green                         | greenish blue
Herbaceous vegetation   | gray               | green                         | grayish blue, light purple
Field technical crops   | gray with shades   | green with shades             | blue, brick, cherry, purple
Consolidated sands      | gray               | grayish yellow                | purple
Buildings               | gray with shades   | light red, light gray, green  | monotonously purple
Paved roads             | gray               | light gray                    | purple

The colors of a spectrozonal aerial photograph are less stable than those of a color photograph in natural colors. If necessary, they can be significantly changed using light filters.

There is a special interpretation technique in which color is used to code image details that have the same optical density. This method is widely used in interpreting the zonal images obtained in multispectral surveys. It is very effective in landscape interpretation, where individual elementary landscape units can be coded in a particular color according to their related characteristics and properties.
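Color-coding details of equal optical density amounts, in the digital case, to applying a color lookup table to density levels; a minimal sketch with an invented palette:

```python
import numpy as np

# Invented palette: each density level is assigned its own RGB color so that
# details of equal optical density share one color on the coded image.
LUT = np.array([
    [0,   0,   128],   # level 0 -> dark blue
    [0,   128, 0],     # level 1 -> green
    [200, 180, 60],    # level 2 -> tan
], dtype=np.uint8)

levels = np.array([[0, 1],
                   [1, 2]])        # image already sliced into density levels
pseudocolor = LUT[levels]          # shape (2, 2, 3): a color-coded image
print(pseudocolor.shape)
```

Fancy indexing with `LUT[levels]` replaces every level number by its color triple, so all pixels of one density level receive exactly the same color.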

Shadow as an interpretation feature plays an important role in identifying objects and their properties. A falling shadow, cast by an object onto the earth's surface on the side away from the Sun, emphasizes the volume of the object and its shape. Its outline and size depend on the height of the Sun, the surface onto which the shadow falls, and the direction of illumination.

There are several ways to determine the height h of an object from its falling shadow. The first uses the scale of the photograph and the relative shadow length:

h = l · m / n,

where l is the length of the object's shadow on the aerial photograph;

m is the denominator of the image scale;

n is the relative shadow length (shadow length per unit of object height), which is taken from the tables of V.I. Drury (see Smirnov L.E., 1975).

The second compares the shadow with that of an object of known height on the same photograph:

h₁ = h₂ · b₁ / b₂,

where h₁ is the height of the object being determined;

b₁ is the length of the object's shadow on the aerial photograph;

h₂ is the height of a known object;

b₂ is the length of the known object's shadow on the same aerial photograph.

The shape of a falling shadow makes it possible to recognize both artificial objects (buildings, poles, triangulation towers) and natural ones. Falling shadows are widely used as interpretation features in the study of vegetation. A falling shadow displays the elongated silhouette of an object. This property is used when interpreting fences, telegraph poles, water and silage towers, the surface marks of geodetic network points, individual trees, and sharply defined landforms (cliffs, gullies, etc.). It should be borne in mind that the size of the shadow is influenced by the terrain. Each tree species has its own specific crown shape, which is reflected in its shadow and makes it possible to determine species composition: for example, the falling shadow of a spruce resembles an acute triangle, while that of a pine is oval. However, it must be remembered that the shadow is a very dynamic interpretation feature (it changes throughout the day) and can exceed the size of the object when the Sun is low above the horizon.
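The two shadow-height relations implied by the variable definitions above (l, m, n and b₁, h₂, b₂) can be written as one-line calculations; the relations are reconstructed from those definitions rather than quoted, and the numbers are illustrative:

```python
def height_from_sun(l_mm, m, n):
    """First method: h = l * m / n, where l is the shadow length on the
    photograph (mm), m the scale denominator, and n the relative shadow
    length (shadow length per unit of object height, taken from tables)."""
    return l_mm * m / n

def height_from_reference(b1_mm, h2_m, b2_mm):
    """Second method: h1 = h2 * b1 / b2, comparing the object's shadow with
    that of an object of known height on the same photograph."""
    return h2_m * b1_mm / b2_mm

# Illustrative numbers: a 1 mm shadow on a 1:10,000 photo with n = 2
# corresponds to a real shadow of 10 m and an object height of 5 m.
print(height_from_sun(1, 10000, 2) / 1000, "m")
print(height_from_reference(3, 20, 4), "m")
```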

Texture (image structure) is the character of the distribution of optical density over the image of an object. It is the most stable direct interpretation feature, practically independent of survey conditions. Texture is a complex feature combining several other direct features (shape, tone, size, shadow) of a compact group of homogeneous and heterogeneous details of the terrain image. The repetition, placement and number of these details reveal new properties and help to increase the reliability of interpretation. The importance of this feature grows as the image scale decreases. For example, the texture of a forest massif is formed by the images of the crowns of individual trees and, with a high-resolution survey system, also by the images of crown elements, branches or even leaves; the texture of clean arable land is formed by the depiction of furrows or individual clods.
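A simple numerical proxy for texture is the local standard deviation of brightness: it is high where the image is "grainy" (for example, over tree crowns) and near zero over homogeneous areas. A minimal sketch with synthetic data:

```python
import numpy as np

def local_std(img, size=3):
    """Local standard deviation in a size x size window as a crude texture
    measure; edges are handled by replicating the border pixels."""
    h, w = img.shape
    pad = size // 2
    padded = np.pad(img.astype(np.float64), pad, mode='edge')
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + size, j:j + size].std()
    return out

flat = np.full((5, 5), 100.0)   # homogeneous surface (e.g. calm water)
grainy = flat.copy()
grainy[::2, ::2] = 160          # alternating bright spots, like tree crowns
print(local_std(flat).max(), local_std(grainy).max())
```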

There is a fairly large number of textures formed by combinations of points, areas, and narrow stripes of various shapes, widths and lengths. Some of them are discussed below.

A granular texture is typical of forest images. The pattern is created by rounded gray spots (tree crowns) against a darker background formed by the shaded spaces between the trees. Images of cultivated vegetation (orchards) have a similar texture.

A homogeneous texture is formed by uniform microrelief and is characteristic of lowland grassy swamps, steppe plains, clay deserts, and water bodies in calm conditions.

A banded texture is characteristic of images of vegetable gardens and plowed fields and results from the parallel arrangement of furrows.

A fine-grained texture is typical of images of shrubs of various species.

A mosaic texture is formed by vegetation or soil cover of unequal moisture content and consists of randomly located patches of various tones, sizes and shapes. A similar texture, created by alternating rectangles of various sizes and densities, is characteristic of the depiction of personal garden plots.

A spotted texture is typical of images of orchards and swamps.

A square texture is characteristic of some types of forested swamps and of urban settlements. In the first case it is formed by forest areas separated by light stripes of swamp and reads as a combination of squares of uniform tone. The same texture is created by the images of multi-story buildings (relatively large rectangles) and of the elements of intra-block development in populated areas.

As the scale decreases, texture is created by larger terrain elements, for example, individual arable fields. Texture is one of the most informative features. It is by texture that a person unmistakably recognizes forests, gardens, settlements and many other objects. For the listed objects, the texture is relatively stable over time.

Indirect features can be divided into three main groups: natural, anthropogenic and natural-anthropogenic. Indirect interpretation features are quite stable and depend on scale to a lesser extent.

Natural indirect signs reflect the interrelationships and interdependence of objects and phenomena in nature; they are also called landscape signs. Examples include the dependence of the type of vegetation cover on the type of soil, its salinity and moisture content, or the connection between the relief and the geological structure of an area and their joint role in the soil-forming process.

Anthropogenic indirect signs are used to identify objects created by man. Here one relies on the functional connections between objects, their position in the general complex of structures, the zonal specificity of the organization of the territory, and the communications serving the objects. For example, a livestock farm of an agricultural enterprise can be identified by its set of main and auxiliary buildings, the internal layout of its territory, intensively trampled livestock runs, the position of the complex relative to the residential area, and the character of the road network. Similarly, repair shops are identified by the image of the machines located on their territory, and a stud farm is reliably identified by the riding arena adjacent to it. At the same time, none of the complex's structures can be deciphered separately, without connection with the others. For example, a light, winding line connecting populated areas is almost certainly the image of a country road; with the same probability, light winding lines lost in a forest or field are field or forest roads; a building near the intersection of a light winding strip (dirt road) with a railway indicates a crossing; a road that ends on a river bank and continues on the opposite bank indicates a ford or ferry; a group of buildings near a repeatedly branching railway suggests a railway station. Logical analysis of direct and indirect interpretation features significantly increases the reliability of interpretation.

Natural-anthropogenic indirect signs include the dependence of human economic activity on certain natural conditions, the manifestation of the properties of natural objects in human activity, and so on. For example, the placement of certain types of crops allows a judgment about the properties of the soils and their moisture content; the elements of a closed drainage system can be deciphered by changes in surface moisture at the locations of drains. Objects used in identifying and determining the characteristics of objects that cannot be deciphered directly are called indicators, and such interpretation is called indicator interpretation. It can be multi-stage, when the direct indicators of the objects being deciphered are themselves identified with the help of auxiliary indicators. Indicator interpretation techniques are used to detect and determine the characteristics of objects not shown in photographs. The most important indicators in indirect interpretation are vegetation, relief, and hydrography.

Vegetation is a good indicator of soils, Quaternary sediments, soil moisture, etc. The following indicator signs of vegetation can be used in interpretation:

Morphological characteristics make it possible to distinguish tree, shrub and meadow vegetation in aerospace images.

Floristic (species) characteristics make it possible to decipher the species composition; for example, pine plantations are confined to sandy automorphic soils, and black alder plantations to sod-gley soils.

Physiological signs are based on the connection between the hydrogeological and geochemical conditions of the growing site and the chemical properties of the underlying rocks. For example, lichens on limestones are orange, while on granites they are yellow.

Phenological characteristics are based on differences in the rhythms of vegetation development. This is especially evident in autumn, when the leaves of deciduous vegetation change color; color aerospace images then clearly distinguish the species composition of vegetation, which in turn reflects the growing conditions.

Phytocenotic characteristics make it possible to decipher the types of forest vegetation and the associations of meadow vegetation that are confined to certain growing conditions. For example, lichen pine forests grow on elevated relief elements with automorphic loose sandy soils, while long-moss pine forests are confined to low relief elements with sod-podzolic-boggy soils.

Relief is one of the most important indicators. The relationship between relief and the other components of natural complexes, its large role in shaping the external appearance of landscapes, and the possibility of its direct interpretation make it possible to use relief as an indicator of a wide variety of natural objects and their properties. Such indicators can be the following morphometric and morphological features of the relief: a) absolute heights and amplitudes of height fluctuations in a given area; b) overall terrain dissection and slope angles; c) the orientation of individual relief forms and the exposure of slopes (solar, wind), which, together with absolute heights, determine the climatic conditions and water regime of a territory; d) the connection between relief and geology; e) the genesis of the relief, its age and modern dynamics, etc.
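Several of the morphometric indicators listed above, such as slope angles and height amplitudes, can be computed directly from a digital elevation model. The following sketch is an illustration rather than a method from the text; it assumes a regular DEM grid and uses finite differences:

```python
import numpy as np

def slope_degrees(dem, cell_size=30.0):
    """Slope angle (degrees) from a DEM grid via finite differences.

    dem: 2-D array of elevations (m); cell_size: grid spacing (m).
    """
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# An inclined plane rising 1 m per 30 m cell in x: slope ~ 1.91 degrees
dem = np.tile(np.arange(5, dtype=float), (5, 1))  # elevations 0..4 m
print(round(slope_degrees(dem, cell_size=30.0).max(), 2))
```

The height amplitude of an area is simply `dem.max() - dem.min()`; both quantities feed directly into the relief-based indication described above.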

Hydrography is an important indicator of physical-geographical and geological conditions. The close connection of the structure and density of the hydrographic network (lakes, rivers, and swamps) with geology and relief makes it possible to use its image in aerial photographs, especially that of the river network, as a landscape feature when analyzing an area in geomorphological, geological, and paleogeographic terms.

Interpretation features are usually used collectively, without dividing them into groups. The image of the area being deciphered is usually perceived as a single whole, a model of the area. Based on the analysis of this model, a preliminary hypothesis is formed about the essence of the object (phenomenon) and its properties; the hypothesis is then confirmed or rejected (sometimes repeatedly) with the help of additional signs.

5. Information properties of images from the point of view of visual interpretation

To assess the information properties of an image, two characteristics are used:

1. information content;

2. decipherability.

Information content is an expert assessment of the potential possibility of obtaining the necessary information about objects from given images. It is impossible to select a quantitative criterion for assessing the information content of an image, so it is usually assessed verbally: high information content, insufficient information content, etc. Depending on the purposes of interpretation (the tasks to be solved), the same images can be considered highly informative or insufficiently informative.

A formal assessment of the amount of information contained in an image can be based on its relationship with resolution: the higher the resolution of the images, the greater the amount of information they contain. The value of the semantic information for the researcher can also be assessed. For example, a clear depiction of the species composition of forest vegetation on infrared aerial photographs indicates the effectiveness of using such images to decipher species composition. By deciphering aerospace images, one can obtain a wide variety of information and facts; however, only those that meet the task or goal count as information.

To determine the maximum amount of information, the concept of "full information" is used. It should be understood as the information that in each specific case can be extracted from images obtained under optimal technical and weather conditions and at an optimal scale. However, images with less-than-optimal properties are often used. The amount of information they contain is, in the general case, less than the full information and constitutes the operational information. Operational information includes the necessary information that can be obtained by deciphering the image data. The extracted information, in turn, is almost always less than the operational information because of interpretation errors. Such errors can arise when deciphering low-contrast objects, or through false identification of objects due to the coincidence of interpretation features (for example, limestones and snowfields). The interpreter also often encounters interference and noise of no value to the researcher: glare, the image of the thickness of the atmosphere superimposed on the picture as haze, or atmospheric phenomena such as fog, dust storms, etc. The qualitative variety and quantity of the extracted information are largely determined by the properties of the information field of the images.
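One formal proxy for the amount of information an image carries, consistent with the idea that a richer image contains more information, is the Shannon entropy of its gray-level histogram. This is an illustrative sketch, not a measure used in the text:

```python
import math
from collections import Counter

def histogram_entropy(pixels):
    """Shannon entropy (bits per pixel) of a grayscale value histogram,
    a simple formal proxy for the amount of information in an image."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform (featureless) frame carries no information in this sense;
# a frame with four equally frequent gray levels carries 2 bits/pixel.
print(histogram_entropy([128] * 100) == 0)        # True
print(histogram_entropy([0, 64, 128, 255] * 25))  # 2.0
```

Haze, glare, and cloud cover raise or lower this number without adding useful content, which is why entropy alone cannot replace the expert assessment of information content described above.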

The ease of comparing photographs with nature and the external coincidence of the image of objects with the way we see them determine the clarity of photographs. Objects are recognized in photographs if their image corresponds to the immediate visual image and is well known from practice, for example, cloudiness. The clarity of photographs has always been especially valued: it was assumed that the possibility of direct visual recognition is the main advantage of images taken from aircraft. But as the method developed, the expressiveness of the image began to acquire great importance. The more intensely and contrastingly the objects and phenomena that are the subject of interpretation are highlighted in an image, the more expressive it is.

Thus, the expressiveness of images is characterized by the ease of deciphering the objects and phenomena most significant for solving the problem. Clarity and expressiveness are, in a certain sense, opposite, mutually exclusive properties of an aerospace image. Natural color photographs are the clearest; color spectrozonal images are less clear but, when interpreting forest vegetation for example, more expressive. The clarity and expressiveness of an image are related to its scale, but the optimal scales for expressiveness and for clarity do not coincide; clarity increases with increasing scale.

The decipherability of aerospace images is the sum of their properties that determine the amount of information obtainable by interpreting the images to solve a given problem. The same images have different decipherability in relation to different objects and tasks. It can be expressed quantitatively as the ratio of the operational information (Io) contained in the images to the full information (Iп):

D = Io / Iп

Often, however, the relative decipherability of images is used instead; it is characterized by the ratio of the useful information (I) carried by an aerial photograph to the full information that can be obtained from it:

Dc = I / Iп

The value Dc is called the decipherability coefficient. The concept of "full information" can be interpreted in different ways, and accordingly relative decipherability can characterize various properties of aerial photographs. If the maximum information capacity of the aerial photographs is taken as the full information, then the decipherability coefficient shows the loading of the photographs with useless information, in other words, the "noise level".

Using the same formula (Dc = I / Imax), the relative decipherability of individual objects can be calculated. With the appropriate approach, this allows aerial photographs taken on different films, printed on different papers, etc., to be compared. Thus, the value of an aerial photograph as a source of information is expressed through the decipherability coefficient.

The completeness of interpretation can be characterized by the ratio of the used (recognized) useful information (I1) to all the useful information contained in the given aerial photographs:

P = I1 / I

The completeness of interpretation largely depends on the training of the interpreters, their experience, and their special knowledge.

The reliability of interpretation should be understood as the likelihood of correctly recognizing or interpreting objects. It can be estimated as the ratio of the number of correctly recognized objects (n) to the total number of recognized objects (N):

R = n / N
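The ratios used to assess decipherability, completeness, and reliability can be collected into a few helper functions. The numbers below are purely illustrative:

```python
def decipherability(operational_info, full_info):
    """Decipherability: ratio of operational information to full information."""
    return operational_info / full_info

def completeness(used_info, useful_info):
    """Completeness of interpretation: used (recognized) useful information
    over all useful information contained in the photographs."""
    return used_info / useful_info

def reliability(correct, total_recognized):
    """Reliability: correctly recognized objects over all recognized objects."""
    return correct / total_recognized

# Hypothetical values in arbitrary information units / object counts:
print(decipherability(80, 100))  # 0.8
print(completeness(60, 80))      # 0.75
print(reliability(54, 60))       # 0.9
```

In practice, as the text notes, the hard part is not the arithmetic but agreeing on what counts as "full" and "useful" information for a given task.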

Decipherability can be improved by enlarging the image, changing the contrast, reducing blur, and other transformations.

Interpretation of space images is the recognition of the natural complexes and ecological processes under study, or of their indicators, by the pattern of the photographic image (tone, color, structure), its size, and its combination with other objects (the texture of the photographic image). These external characteristics are inherent only in those physiognomic components of landscapes that are directly reflected in the photograph.

For this reason, only a small number of natural components can be deciphered by direct signs: landforms, vegetation cover, and sometimes surface deposits.

Interpretation includes detection, recognition, and interpretation proper, as well as determining the qualitative and quantitative characteristics of objects and presenting the results in graphic (cartographic), digital, or text form.

A distinction is made between general geographic (topographic), landscape, and thematic (sectoral) interpretation of images: geological, soil, forest, glaciological, agricultural, etc.

The main stages of the interpretation of satellite images are georeferencing, detection, recognition, interpretation, and extrapolation.

Georeferencing an image is the determination of the spatial position of its boundaries, i.e., the precise geographical identification of the territory depicted in the image. This is done using topographic maps whose scale corresponds to the scale of the image. Characteristic reference contours are the coastlines of reservoirs, the pattern of the hydrographic network, and macrorelief forms (mountain ranges, large depressions).

Detection consists of comparing the different patterns of the photographic image. Based on the image characteristics (tone, color, pattern structure), the photophysiognomic components of landscapes are isolated.

Recognition, or identification of the objects of interpretation, includes analysis of the structure and texture of the photographic image, by which the photophysiognomic components of landscapes, man-made structures, the nature of land use, and the technogenic disturbance of physiognomic components are identified. At this stage, the direct interpretation signs of the photophysiognomic components are established.

Interpretation proper consists of classifying the identified objects according to a certain principle (depending on the thematic focus of the work). Thus, during landscape interpretation, the physiognomic components of geosystems are interpreted, and identified man-made objects serve only for correct orientation; when deciphering economic use, attention is focused on land-use objects: fields, roads, settlements, etc. The interpretation of the recipient (hidden) components of landscapes, or of their technogenic changes, is carried out using the landscape-indication method. A complete and reliable interpretation of images is possible only on the basis of the integrated use of direct and indirect signs. The interpretation process is accompanied by the drawing of contours, i.e., the creation of interpretation schemes based on individual images.

Extrapolation includes identifying similar objects throughout the study area and drawing up a preliminary map layout. For this, all the data obtained while deciphering the individual images are used. During extrapolation, similar objects, phenomena, and processes in other areas are identified and analogue landscapes are established.

Interpretation proceeds from the general to the specific. Every photograph is, first of all, an information model of the area, perceived by the researcher as a single whole, and objects are analyzed in their development and in inextricable connection with their environment.

The following types of interpretation are distinguished.

Thematic interpretation is performed according to one of two logical schemes. The first involves first recognizing objects and then graphically delineating them; the second involves first graphically delineating similar areas in the image and then recognizing them. Both schemes end with interpretation proper, the scientific interpretation of the results. In computer processing, these schemes are implemented in clustering and supervised classification technologies.

Objects in photographs are distinguished by interpretation features, which are divided into direct and indirect. Direct features include shape, size, color, tone, and shadow, as well as a complex unifying feature, the image pattern. Indirect features are the location of an object, its geographical surroundings, and traces of its interaction with the environment.

In indirect interpretation, based on the objectively existing connections and interdependence of objects and phenomena, the interpreter identifies in the image not the object itself, which may not be depicted at all, but its indicator. Such indirect interpretation is called indicator interpretation; its geographic basis is indicator landscape science. Its role is especially great when direct signs lose significance because of the strong generalization of the image. In this case, special indication tables are compiled in which, for each type or state of the indicator, the corresponding type of displayed object is given.

Indicator interpretation also allows a transition from spatial characteristics to temporal ones. Based on space-time series, the relative duration of a process or the stage of its development can be established. For example, from the giant river meanders left in the valleys of many Siberian rivers, and from their size and shape, the water flows of the past and the changes that have taken place are estimated.

Indicators of the movement of water masses in the ocean are often broken ice, suspended matter, etc. The movement of water is also well visualized by the temperature contrasts of the water surface: it is from thermal infrared images that the vortex structure of the World Ocean has been revealed.

Interpretation of multispectral images. Working with a series of four to six zonal images is more difficult than working with a single image, and their interpretation requires special methodological approaches. A distinction is made between comparative and sequential interpretation.

Comparative interpretation consists of determining the spectral image of an object from the photographs, comparing it with a known spectral reflectance, and identifying the object. First, on the zonal images, sets of objects are identified that differ from zone to zone; then, by comparing them (subtracting the zonal interpretation schemes), individual objects within these sets are identified. This approach is most effective for plant objects.
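The logic of comparative interpretation, determining the spectral image and matching it against known reflectance curves, can be sketched as a nearest-curve search. The reference values below are hypothetical illustrations, not measured data:

```python
import math

# Hypothetical mean reflectance values in four spectral zones
# (blue, green, red, near-IR); illustrative numbers only.
REFERENCE_CURVES = {
    "water":      [6, 5, 3, 1],
    "vegetation": [4, 8, 5, 45],
    "bare_soil":  [12, 15, 18, 25],
}

def identify(spectral_image):
    """Match an object's spectral image (one value per zonal band)
    to the closest known reflectance curve (Euclidean distance)."""
    return min(
        REFERENCE_CURVES,
        key=lambda name: math.dist(REFERENCE_CURVES[name], spectral_image),
    )

print(identify([5, 9, 4, 40]))   # vegetation
print(identify([7, 6, 4, 2]))    # water
```

This mirrors the logical scheme described for multispectral interpretation: spectral image from the photographs, comparison with known spectral reflectance, identification of the object.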

Sequential interpretation is based on the fact that zonal images optimally display different objects. For example, in photographs of shallow waters, because rays of different spectral ranges penetrate the aquatic environment to different extents, objects located at different depths are visible in different zones, and a series of images allows a layer-by-layer analysis with step-by-step summation of the results.

Interpretation of multi-temporal images makes it possible to study changes in objects and their dynamics, as well as to interpret changing objects indirectly by their dynamic characteristics. For example, crops are identified by the change of their image during the growing season, taking into account the agricultural calendar.
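Multi-temporal interpretation of vegetation is often automated with a vegetation index computed for each acquisition date. The sketch below uses NDVI (an index not named in the text) with invented band values: a crop pixel greens up between dates, while a bare pixel stays unchanged.

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and near-IR bands."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

# Two hypothetical acquisition dates for two pixels
ndvi_spring = ndvi([60, 80], [70, 85])
ndvi_summer = ndvi([30, 80], [90, 85])
change = ndvi_summer - ndvi_spring
print(np.round(change, 2))  # first pixel gains NDVI, second is stable
```

The sign and timing of such changes, checked against the agricultural calendar, is exactly the dynamic feature that lets crops be separated from unchanging cover types.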

In the age of scientific and technological revolution and space exploration, humanity continues to study the Earth closely, monitoring the state of the natural environment, pursuing rational environmental management, and constantly improving the methods for assessing now-limited natural resources. Among the developing methods of studying the Earth from space and of space monitoring, multispectral photography is firmly established, opening up additional opportunities for increasing the reliability of image interpretation.

In September 1976, as part of international cooperation under the Intercosmos program, specialists from the USSR and the GDR jointly carried out the space experiment "Rainbow", during which USSR pilot-cosmonauts V.F. Bykovsky and V.V. Aksenov, on an eight-day flight of the Soyuz-22 spacecraft, obtained more than 2,500 multispectral images of the earth's surface. The photography was carried out with the MKF-6 multispectral space camera, developed jointly by specialists of the Carl Zeiss Jena national enterprise of the GDR and the Space Research Institute of the USSR Academy of Sciences and manufactured in the GDR. Multispectral imaging with the MKF-6 camera was also carried out from laboratory aircraft, and later from the manned orbital station Salyut-6. Together with the MKF-6 camera, the MSP-4 multizonal synthesis projector was developed, which opened up the possibility of producing high-quality color synthesized images, now widely used in scientific, practical, and educational work.

This atlas of photographs and the maps compiled from them illustrates, with typical examples, the possibility of using multispectral aerospace photography in a wide variety of studies of the natural environment, in the planning and operational management of economic activity, and in many branches of thematic mapping. The atlas presents a wide range of areas of Earth research, covering the study of natural conditions and resources not only on land but also in shallow seas. The interpretation technique for geological studies of mountain fold regions is presented using the example of the Pamir-Alai. Geomorphological, glaciological, and hydrological aspects are considered using the examples of the tectonic structure and relief of the southern Cis-Baikal region, the relief of the shores of the Sea of Okhotsk, the relief of river floodplains and the frozen thermokarst relief of central Yakutia, the glaciation of the Pamir-Alai, the distribution of solid river runoff into Lake Baikal, and the glacial landscapes of the northern part of the GDR. Vegetation studies are illustrated by the semi-desert and desert vegetation of south-eastern Kazakhstan and the forest vegetation of the southern Cis-Baikal region and central Yakutia. Landscape mapping covers the arid landscapes of the foothills and intermountain basins of south-eastern Kazakhstan and Central Asia, the mountain taiga landscapes of the northern

Baikal region, and the landscapes of the central part of the GDR. Using the examples of south-eastern Kazakhstan and a site in the central part of the GDR, the possibilities of using satellite images for the physiographic zoning of a territory are shown. In addition to research on natural resources, the atlas also presents some areas of socio-economic research: the mapping of agricultural land use and settlement, and the study of human impact on the natural environment using the example of mapping modern landscapes with their anthropogenic modifications. These studies were carried out in the Central Asian regions of the Soviet Union and in the GDR.

The methodology for interpreting "classical" aerial photographs is described in sufficient detail in the literature, and the traditional, well-developed technology for processing such images is successfully used in practice. The atlas presents a set of methodological techniques for processing multispectral aerial and satellite images at different levels of technical equipment: visual, instrumental, and automated. In visual interpretation, working with color synthesized images is the most versatile approach. When using a series of zonal images, several techniques are applied. The simplest, choosing the optimal spectral zone for deciphering specific phenomena, is effective only for some objects, for example the coastline of shallow water bodies, and therefore has relatively limited application. Comparison of a series of zonal images using the spectral image of the survey objects, approximately determined with a standardized density scale, is advisable when deciphering objects characterized by a specific course of spectral brightness: in particular, for separating forest-forming species when mapping forest vegetation, and for identifying the boundaries of glaciers and the firn line by differences in the image of snow of different moisture content.

Sequential interpretation of a series of zonal images, using the effect of the optimal display of various objects in certain zones of the spectrum, is applied to separate tectonic disturbances of different ranks, to the sequential multi-depth study of water areas, etc.

Interpretation of multispectral space images is carried out with the selective use of aerial photographs obtained in subsatellite experiments. To identify subtle differences in deciphered objects that are not captured visually, for example those associated with the state of crops, measurement interpretation is used, based on photometric determinations of the spectral brightness of objects from zonal images, taking into account the distortions caused by shooting conditions. This provides spectrophotometric determinations with an error of 3-5%.

More complex data analysis, including the solution of operational problems associated with a large volume of processed information, requires automated image processing, the capabilities of which are illustrated by the examples of land-use mapping and of the classification of cotton crops depending on their condition.

All the maps in the atlas compiled from multispectral images are cartographic works of a new type and demonstrate the possibilities of improving thematic maps on the basis of aerospace survey materials.

Multispectral images obtained from aircraft play a special role in solving diverse problems in relatively small areas that are well studied by classical methods. This method of detailed study of natural resources and environmental control is promising, for example, for the territory of the GDR. The presented examples of multispectral aircraft images cover the test site in the area of Lake Süsser See in the central part of the GDR, as well as areas of the Fergana Valley, the Okhotsk coast, and other regions of the USSR. Space images, in turn, have the well-known advantages of clarity and of spectral and spatial generalization of the image. The presented satellite images cover the coasts of the Baltic Sea, the north-eastern Caspian Sea and the Sea of Okhotsk, the southern Cis-Baikal region and the northern Baikal region, central Yakutia, south-eastern Kazakhstan, and Central Asia.

The aerospace method of studying the Earth is complex and interdisciplinary in principle: each image is, as a rule, suitable for multi-purpose use in various areas of Earth research. This corresponds to the regional structure of the atlas, in which, for each image, the interpretation technique is presented in the directions where it proved most effective. Each section opens with a color synthesized image of the study area, a georeferencing scheme, and a text description of the territory, and presents the results of image interpretation in the form of thematic maps, mainly at scales of 1:400,000-1:500,000, with brief text comments. For the main topics, explanations and recommendations are given on the methods of thematic interpretation of multispectral images.

The atlas can serve as a scientific and methodological aid on the interpretation of multispectral images for specialists involved in the study of natural resources by remote methods, and more widely as a visual aid on the use of space photography in the compilation of thematic maps for cartographers, geologists, soil scientists, specialists in agriculture and forestry, and conservationists. Undoubtedly, it will find wide application in universities, where students will be able to use it when studying the theory and practice of aerospace methods, mastering the skills of working with satellite images in the development and compilation of maps and in the study of natural resources.

The main work on the preparation of the atlas was carried out by the Geographical Faculty of Moscow State University, the Institute of Space Research of the USSR Academy of Sciences and the Central Institute of Earth Physics of the Academy of Sciences of the GDR.

The atlas was compiled in the Laboratory of Aerospace Methods of the Department of Cartography of the Faculty of Geography of Moscow University, with the participation of the departments of geomorphology, cartography, glaciology and cryolithology, physical geography of the USSR, and physical geography of foreign countries, and of the problem laboratories of complex mapping and atlases and of soil erosion and channel processes of the same faculty, as well as the Faculty of Geology and the Department of Scientific Photography and Cinematography of Moscow State University, the All-Union Association "Aerogeology", the Center for Remote Methods of Earth Research of the Central Institute of Physics of the Earth of the Academy of Sciences of the GDR, the Department of Geography of the Potsdam Pedagogical Institute, and the Department of Geography of the Martin Luther University of Halle-Wittenberg.

Automated interpretation is the interpretation of the data contained in an image, performed by computer. This approach is driven by factors such as the need to process huge amounts of data and the development of digital technologies that deliver images in formats suitable for automated processing. Specific software is used for image interpretation: ArcGIS, ENVI (see Fig. 5), Panorama, SOCET SET, etc.

Fig.5. Interface of the ENVI 4.7.01 program

Despite all the advantages of computers and specialized programs, and the constant development of technology, the automated process also has problems: pattern recognition relies on machine classification using narrowly formalized interpretation features.

To identify objects, they are divided into classes with certain properties; this process of dividing the image space into areas and classes of objects is called segmentation. Because objects at the time of shooting are often occluded or accompanied by "noise" (clouds, smoke, dust, etc.), machine segmentation is probabilistic in nature. To improve its quality, information about the shape, texture, location, and relative position of objects is added to their spectral characteristics (color, reflectance, tone).

For the machine segmentation and classification of objects, algorithms have been developed based on different classification rules:

    with training (supervised classification);

    without training (unsupervised classification).

An unsupervised classification algorithm can segment an image fairly quickly, but with a large number of errors. Supervised classification requires the indication of reference (training) areas containing objects of the same type as those being classified. This algorithm requires more operator input but produces more accurate results.
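The difference between the two classification rules can be sketched in a few lines: unsupervised clustering groups pixels without any labels, while supervised classification (nearest-centroid here, one of many possible rules) uses analyst-selected reference areas. The spectral values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-band pixel samples forming two spectral clusters
water = rng.normal([5, 3], 1.0, size=(50, 2))
forest = rng.normal([20, 40], 1.0, size=(50, 2))
pixels = np.vstack([water, forest])

# --- Unsupervised: 2-means clustering (cluster ids carry no meaning) ---
centers = pixels[[0, -1]].copy()  # crude initialisation
for _ in range(10):
    d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    for k in range(2):
        centers[k] = pixels[labels == k].mean(axis=0)

# --- Supervised: nearest centroid of analyst-chosen reference areas ---
references = {"water": water[:10].mean(axis=0),
              "forest": forest[:10].mean(axis=0)}

def classify(p):
    return min(references, key=lambda n: np.linalg.norm(references[n] - p))

print(classify([4, 2]))    # water
print(classify([21, 39]))  # forest
```

Note that the unsupervised result still needs an analyst to name each cluster, which is exactly why supervised classification with reference areas yields more directly usable, and usually more accurate, results.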

3.1. Automated interpretation using the ENVI 4.7.01 package

To study the methods of interpreting and processing space images, an image from the Landsat-8 satellite covering the territory of the Udmurt Republic was interpreted. The image was obtained from the US Geological Survey website. The city of Izhevsk is clearly visible; Izhevsk Pond and the course of the Kama River from the city of Votkinsk to the city of Sarapul can also be read without distortion. The shooting dates are 05/15/2013 and 05/10/2017. The cloud coverage of the 2013 image is 45%, and its upper part is difficult to interpret (however, almost the entire spring-summer shooting period yields images with high cloud content). The main analysis was therefore performed on the more recent image.

The cloud coverage of the 2017 image is 15%, and only the upper right corner of the image is unsuitable for processing because of a group of clouds covering the surface.

The coordinate system used in the image is UTM, the Universal Transverse Mercator projection, based on the WGS84 ellipsoid.

The ENVI software package is a software product that provides a full cycle of processing of optical-electronic and radar Earth remote sensing data, as well as their integration with geographic information system (GIS) data.

Among the advantages of ENVI is an intuitive graphical interface that allows a novice user to quickly master the necessary data processing algorithms. Logically organized drop-down menus make it easy to find the function needed when analyzing or processing data. ENVI menu items can be simplified, rearranged, localized or renamed, and new functions can be added. Version 4.7 integrates ENVI with ArcGIS products.

To prepare an image for interpretation, it must be processed to obtain a spectral image suitable for analysis. To obtain a single image from a series of band files, all channels are combined into one container using the Layer Stacking command (see Fig. 6). After these transformations we obtain a multi-channel container/image with which further work can continue: filtering, georeferencing, unsupervised classification, change detection, vectorization. All image channels are brought to the same resolution and the same projection. To invoke this command, select Basic Tools > Layer Stacking or Map > Layer Stacking.

Fig. 6. ENVI program interface: arranging channels in Layer Stacking
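What Layer Stacking does can be sketched in a few lines: single-band arrays of differing resolution are resampled onto a common pixel grid and stacked into one multi-band cube. A minimal numpy sketch, using nearest-neighbour resampling as a stand-in for the reprojection ENVI performs:

```python
import numpy as np

def layer_stack(bands, target_shape):
    """Stack single-band arrays into one multi-band cube.

    Each band is resampled to the target grid by nearest neighbour,
    so bands of different resolution end up on the same raster.
    Returns an array of shape (n_bands, rows, cols).
    """
    rows, cols = target_shape
    stacked = []
    for band in bands:
        band = np.asarray(band)
        # nearest-neighbour index maps from the target grid into the source grid
        r_idx = np.arange(rows) * band.shape[0] // rows
        c_idx = np.arange(cols) * band.shape[1] // cols
        stacked.append(band[np.ix_(r_idx, c_idx)])
    return np.stack(stacked, axis=0)
```

For example, a 30 m band and a 15 m band can be stacked onto the coarser grid, after which every channel shares one resolution, as in the ENVI workflow above.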

To visualize a multispectral image, select File > Open External File > QuickBird in the ENVI menu. In the Available Bands List window that opens (see Fig. 7), to synthesize the image we assign the red, green and blue channels to the RGB lines, the band sequence "4,3,2". As a result we obtain an image familiar to the human eye (see Fig. 8), and three new windows appear on the screen: Image, Scroll and Zoom.

Fig. 7. Available Bands List window

Fig. 8. Synthesized image of the scene taken on May 15, 2013, band sequence "4,3,2"
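Synthesizing an RGB view from a band sequence such as "4,3,2" amounts to picking three bands from the cube and stretching each to the 0..255 display range. A minimal numpy sketch with a simple min-max linear stretch (ENVI's display stretches differ):

```python
import numpy as np

def synthesize_rgb(cube, band_order=(4, 3, 2)):
    """Build a display RGB from a multi-band cube.

    band_order uses 1-based band numbers, as in the Available Bands List;
    each channel is linearly stretched to 0..255.
    """
    channels = []
    for b in band_order:
        band = np.asarray(cube[b - 1], dtype=float)
        lo, hi = band.min(), band.max()
        # constant bands map to zero; otherwise stretch min..max to 0..255
        scaled = np.zeros_like(band) if hi == lo else (band - lo) / (hi - lo) * 255
        channels.append(scaled.astype(np.uint8))
    return np.dstack(channels)  # shape: (rows, cols, 3)
```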

For Landsat-8 imagery, the "3,2,1" band sequence has more recently been used in ENVI to obtain images in near-natural colors. To compare the two sequences, we apply the filtering procedure (the Filter tab in the Image window) and display both results on the screen (see Fig. 9).

Fig. 9. Filtering the image with the band sequence "3,2,1"

This command improves image quality: in this case the clouds became more transparent, and clear boundaries between surface types (water bodies, forests, anthropogenic areas) appeared. In effect, Filter helps to correct image "noise".
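The effect of such a filter can be illustrated with a basic 3x3 mean (smoothing) filter, one of the simplest noise-suppression kernels; this is a generic sketch, not the specific filter applied in Fig. 9:

```python
import numpy as np

def mean_filter3(band):
    """3x3 mean filter: each pixel becomes the average of its 3x3
    neighbourhood, suppressing isolated noise spikes.
    Edges are handled by replicating the border pixels."""
    band = np.asarray(band, dtype=float)
    padded = np.pad(band, 1, mode="edge")
    out = np.zeros_like(band)
    # accumulate the nine shifted copies of the image, then average
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr:1 + dr + band.shape[0],
                          1 + dc:1 + dc + band.shape[1]]
    return out / 9.0
```

A uniform surface passes through unchanged, while a single bright noise pixel is averaged down toward its neighbours.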

Unsupervised classification distributes pixels into classes by similar brightness characteristics. ENVI offers two unsupervised classification algorithms: K-Means and IsoData. The K-Means command is considerably more demanding: it requires some skill in selecting parameters and presenting the results. The IsoData command is simpler and requires only adjusting the parameters preset in the system (see Fig. 10): main menu, Classification > Unsupervised > K-Means / IsoData (see Fig. 11).

Fig. 10. IsoData parameter settings window in ENVI

In the resulting unsupervised classification, the infrared and blue channels dominate, providing detailed information about the hydrographic network of the imaged area.

Fig. 11. Unsupervised classification
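The clustering behind unsupervised classification can be sketched as a plain k-means over pixel spectra. This is a generic sketch that seeds the centres deterministically from the darkest and brightest pixels; ENVI's K-Means and IsoData add further parameters (change thresholds, class splitting and merging):

```python
import numpy as np

def unsupervised_kmeans(pixels, k, iters=10):
    """Minimal k-means on pixel spectra (rows = pixels, columns = bands)."""
    pixels = np.asarray(pixels, dtype=float)
    # deterministic seeding: spread initial centres from darkest to brightest
    order = np.argsort(pixels.sum(axis=1))
    centers = pixels[order[np.linspace(0, len(pixels) - 1, k).astype(int)]].copy()
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # assign each pixel to the nearest cluster centre in spectral space
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # move each centre to the mean of the pixels assigned to it
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels
```

Run on spectra containing two well-separated groups (e.g. water and vegetation), the algorithm recovers the two clusters without any training data.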

In ENVI it is easy and convenient to register an image against a georeferenced one, and the resulting image can then be used in MapInfo. To do this, in the main menu select Map > Registration > Select GCPs: Image to Map. The result can be displayed immediately in MapInfo for comparison or saved in a suitable format (see Fig. 12).

Fig. 12. Georeferencing an image for use in MapInfo
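Image-to-map registration from ground control points (GCPs) reduces, in the simplest case, to fitting an affine transform from pixel coordinates to map coordinates by least squares. A minimal sketch of this first-order case (ENVI also supports higher-order warping methods):

```python
import numpy as np

def fit_affine(pixel_xy, map_xy):
    """Least-squares affine transform from GCPs.

    pixel_xy : (n, 2) image coordinates (col, row) of the control points
    map_xy   : (n, 2) map coordinates (E, N) of the same points
    Returns a 3x2 coefficient matrix for [col, row, 1] @ coef -> [E, N].
    """
    px = np.asarray(pixel_xy, dtype=float)
    A = np.column_stack([px, np.ones(len(px))])
    coef, *_ = np.linalg.lstsq(A, np.asarray(map_xy, dtype=float), rcond=None)
    return coef

def apply_affine(coef, pixel_xy):
    """Transform pixel coordinates to map coordinates."""
    px = np.asarray(pixel_xy, dtype=float)
    return np.column_stack([px, np.ones(len(px))]) @ coef
```

Four GCPs on a 30 m grid (hypothetical coordinates) are enough to recover the scale and origin and to georeference any other pixel.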

Vectorization of an image in ENVI uses the same data set as georeferencing an image from ENVI for MapInfo; via the vectorization command you specify the projection, ellipsoid and zone number (see Fig. 13).

The dynamics of change in the selected territory are monitored using multi-temporal multispectral images (for 2013 and 2017). Dynamics can be tracked by three methods:

    blinking method;

    "sandwich" method - combining layers in MapInfo;

    using a change map.

Fig. 13. Vectorization of the image

In the blinking method, two display windows with the two images are created using the New Display command in the layer-selection window. The images are linked with the Link Displays command in the Image window; both multi-temporal images then move in unison, showing the same area (see Fig. 14). A mouse click swaps (blinks) the displays, which makes changes (dynamics) visible.

Fig. 14. Detecting dynamics: the blinking method

The "sandwich" method combines the two images, previously saved in JPEG 2000 (.jp2) format using the File > Save Images command. The images are opened one after another in MapInfo in a single projection (Universal Transverse Mercator). For convenient comparison, the transparency of the top layer is set to 50% and a visual search for changes is carried out, after which the areas of change are highlighted (see Fig. 15).
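The 50% transparency overlay of the "sandwich" method is a simple alpha blend of two co-registered images. A minimal sketch, assuming both images are already the same size and projection:

```python
import numpy as np

def sandwich_overlay(top_rgb, bottom_rgb, transparency=0.5):
    """Blend two co-registered RGB images: the top layer is drawn with
    the given transparency over the bottom one, as in the MapInfo
    'sandwich' comparison."""
    top = np.asarray(top_rgb, dtype=float)
    bottom = np.asarray(bottom_rgb, dtype=float)
    # transparency 0.0 shows only the top layer, 1.0 only the bottom one
    blended = (1.0 - transparency) * top + transparency * bottom
    return blended.astype(np.uint8)
```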

If the two images are georeferenced, split into bands and stored in GeoTIFF/TIFF format, a more modern, up-to-date method can be used: a change map. In both images the same band is selected, for example the third (green) band, and the bands are differenced. The resulting map contains a great deal of noise and requires filter tuning.

Fig. 15. Detecting dynamics: the "sandwich" method
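A change map itself is just the thresholded difference of the same band from the two dates, followed by noise clean-up. A minimal sketch, with a majority filter standing in for the filter tuning mentioned above:

```python
import numpy as np

def change_map(band_t1, band_t2, threshold):
    """Difference the same band from two dates and flag pixels whose
    absolute change exceeds the threshold; smaller differences are
    treated as noise."""
    diff = np.abs(np.asarray(band_t2, dtype=float) -
                  np.asarray(band_t1, dtype=float))
    return diff > threshold

def majority_filter(mask):
    """Simple noise clean-up: keep a change pixel only if at least 5 of
    the 9 pixels in its 3x3 neighbourhood are also flagged."""
    m = np.pad(mask.astype(int), 1)
    votes = sum(m[1 + dr:1 + dr + mask.shape[0], 1 + dc:1 + dc + mask.shape[1]]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return votes >= 5
```

A compact change patch survives the clean-up, while an isolated flagged pixel (the kind of residual noise noted above) is removed.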

Comparing the three methods, the author prefers the "sandwich" method: the blinking method strains the eyes and causes premature physiological fatigue, while a change map is not always effective because the noise cannot be removed completely.


