Analysis of trial testing results. Modern problems of science and education

Analysis of the results of trial testing of gymnasium No. 1

2011-2012 academic year

On October 6, 2011, the third trial test was held in the gymnasium. Its purpose was to adapt students to the UNT, familiarize them with the testing technology, monitor the quality of their knowledge, and prepare them for the UNT.

Students were tested using materials from NSCTO KEU; the results are as follows.

In total, 45 of the 49 graduates took part in the testing, i.e., 93% of students.

4 students scored 100 or more points.

The results show that the average score is 2.8 points lower than on the previous test. Students sat one per desk, bags and cell phones were set aside on the last desk, and the testing conditions were close to those of the UNT. Judging by these preliminary results, low scores can be expected this year, so preparation for the UNT must be intensified using various forms and methods of work.

Monitoring of test results

The chart shows that the results are unstable and lower than last year's; however, there has been an increase in results over the several tests held this year.

Compared with the first test, the results are slightly higher.


Math scores improved.

Four students take general history; their results are lower than last year's. The lowest quality of knowledge is in physics and mathematics. Students did poorly on this biology test.

For the first time, the Altyn Belgi applicant confirmed all fives.

Monitoring of testing of applicants for the certificate with honors and Altyn Belgi

Previous test columns: Surname; Russian language; History of Kazakhstan; Mathematics; Kazakh language (in Russian-medium schools); total points.

Current test columns: Surname; Russian language; History of Kazakhstan; Mathematics; Kazakh language (in Russian-medium schools); elective subject; total points.

Applicants: Davletshin, Abdrakhmetova, Kuketaeva, Ukubaeva.

The results show that only one applicant has a 4; the rest have 3s. Individual work with the top-performing students must be organized in preparation for the UNT.

So far no one is confirming all fives; results in biology, history, and mathematics are poor.

1. The lowest results are in physics, mathematics, and the history of Kazakhstan; the highest are in English and Kazakh. There are no 2s. This academic year there are 4 applicants for the certificate with honors and 1 applicant for Altyn Belgi; only the latter scored all fives.

2. Comparative test results show instability; all teachers are advised to set up monitoring of individual test results.

3. Subject teachers are to draw up plans for working with underachieving and top-performing students.

4. Class teachers are to hold parent-teacher meetings on Saturdays and invite the parents of students who do not reach the threshold score for admission to universities.

When performing the tasks of the "Kos Cubes" test, the subject's thought process is, as it were, projected outward. The experimenter therefore has a rather rare opportunity not only to quantify the measured intellectual abilities but also to study the qualitative features of the analytical and synthetic processes stimulated by the test. Using this opportunity, we will first characterize the mental process that takes place when solving the "Kos Cubes"; this is necessary for a deeper understanding and interpretation of the results obtained.

“The pattern depicted on the card is perceived by the subject as an integral whole and must be mentally divided into several parts of equal size. In doing so, the subject must realize that the number of these parts equals the number of available cubes. This is the subject's analytical task.”

The division of the pattern into separate parts, each corresponding to one cube, can occur in different ways. If the subject does not perform such an analysis, the pattern is assembled entirely by trial and error, or by looking for similarities between individual parts of the pattern and single cubes.

Let us assume that a correct analysis of the pattern has been made (in some subjects this shows very characteristically in their behavior: they look at the pattern in silence for a long time, then say, “So”). After this, the stage of synthesis begins, which proceeds differently in different subjects. Some mentally plan almost the entire assembly and then work systematically, placing the cubes one after another: from left to right in rows, from top to bottom in columns, from the center to the periphery, or laying out identical or symmetrical parts of the pattern in turn. In the vast majority of cases, the subjects of this group never “try on” the cubes; they turn the desired face up in their hands and immediately put the cube in place. From this external behavior it can be assumed that the analysis and synthesis of the pattern are already complete by the start of assembly; synthesis outruns the physical folding.

A different external picture can be observed in a second group of subjects. They also place the cubes quickly and accurately, but assemble not the whole pattern, only some more or less finished part of it; then they think for a while and begin to assemble another part. In this group the cubes are likewise not tried on but are immediately placed with the desired face up. Thus, we are dealing with a thought process of the same internal structure as in the first group; the only difference is that these subjects analyze the pattern piece by piece rather than as a whole.

In the third group of subjects, assembly occurs in a fundamentally different way. The subjects take a cube in their hands and, turning it, try to find a similarity between one or another of its faces and some part of the pattern. Having placed the first cube in this way, they try on the second in the same way, and so on. Quite often the following happens: having turned a cube in his hands and evidently failing to guess which face to place up, the subject puts this cube aside and takes another, although he knows that all the cubes are identical. With this method of assembly the subjects apparently perform no preliminary analytical division of the pattern; synthetic processes are also absent, replaced by a process of piecemeal comparison, the starting point of which is often not a part of the pattern but the cube itself.

In the worst cases, the subjects twirl a cube in their hands, place it in a completely wrong position, and, when the experimenter asks whether it is placed correctly, answer in the affirmative.

It is clear that the ways of assembling the cubes can change during testing; nevertheless, the predominance of “planning” thinking, based on well-developed operations of analysis and synthesis, is clearly visible to an experienced experimenter.

Quantitative indicators characterize the level of development of non-verbal (practical, visual-effective) intelligence and of the analytical and synthetic abilities that underlie intellectual giftedness. The results obtained are interpreted depending on the field of application and the value of the indicator.

Scope of application and interpretation of the indicator value

Medicine (identifying symptoms of certain neuropsychological disorders). Low value: possible organic disorders of the central nervous system, impaired visual-motor coordination, apraxia.

Education (assessment of potential for learning and intellectual development). Low value: difficulties in learning and in developing other intellectual functions. High value: good prospects for the development of intelligence regardless of prior education, a high level of general learning ability.

Career guidance and selection (assessment of the professional capabilities and psychological fitness of specialists in certain technical and artistic professions). Low value: difficulties in constructive activities; psychological contraindications for engineers, design engineers, locksmiths, machine operators, builders, fashion designers, cutters, tailors, etc. High value: good prospects for successful constructive activity, good prerequisites for technical abilities.

Features of the subjects' behavior that are repeated many times during testing, i.e., appear at the level of a trend, serve as additional qualitative indicators of the test. Qualitative indicators not only further characterize the level of development of analytical and synthetic abilities and non-verbal (visual-effective) intelligence, but also reflect individual features of cognitive activity, personality traits and ways of emotional response, and individual psychopathological symptoms.

Observable behavior of the subject and its interpretation

Features of cognitive activity:
  • Does not cope with even the first, simplest tasks – impaired practical, visual-effective thinking
  • Cannot solve a task even after being shown the correct solution or on repeating the task – impaired visual and motor memory, extremely low learning ability
  • Refers to the sample too often, counts the cubes – impaired visual memory, a reduced level of visual-figurative thinking
  • Solves tasks with considerable muscular effort, tension, tremor, awkward movements – motor impairments, organic disorders of the central nervous system
  • Speaks his actions aloud, comments on the solution – involvement of verbal functions, difficulty performing practical actions
  • Has difficulty understanding the instructions, does not use hints – impaired verbal intelligence, insufficient command of the language
  • Does not notice or correct errors – impaired perception, attention, or voluntary control
  • Works very unevenly, performing some tasks quickly and correctly, others slowly or with errors – impaired working capacity, fatigue

Features of personality and emotional state:
  • Folds patterns carelessly, easily abandons them – lack of interest
  • Reacts to minor stimuli, is easily distracted – emotional lability, inconstancy and instability of activity, impaired attention
  • Stubbornly strives to find the right solution, stays focused on the task – conscientiousness, purposefulness, stability of activity
  • Starts folding the image quickly, hurries, acts by trial and error, immediately correcting mistakes – impulsiveness
  • Thinks before starting to fold the pattern, first draws up a solution plan, lays out the cubes in a certain order beforehand – a tendency to reflection, prudence, organization, pedantry
  • Easily changes ways of solving a task, tries different options – plasticity, flexibility
  • Persistently tries to solve a task in the same way, has difficulty abandoning an inefficient approach – intellectual rigidity
  • Talks loudly, is in constant motion, waves his arms – a state of overexcitation
  • Solves tasks silently, moves little – lethargy
  • Criticizes the tasks, destroys the pattern on failure – an aggressive response when faced with difficulties
  • Hesitates when choosing a solution, makes excuses – anxiety, fear, a desire to avoid failure, self-doubt
  • Rejoices, laughs, does not lose heart on failure – elevated mood
  • Criticizes himself, shows no joy at success – a tendency to depressive reactions
  • Talks a lot with the experimenter, asks questions, shares experiences – eagerness for contact
  • Is silent, does not answer questions – distancing, avoidance of communication
  • Gives directions, makes demands – dominance
  • Asks for help, looks for hints, seeks advice – dependence

Literature

1. Kohs S.C. Intelligence Measurement: A Psychological and Statistical Study Based upon the Block-Design Tests. New York: Macmillan, 1927.
2. Agafonova I.N., Kolechenko A.K., Pogorelova G.A., Shekhovtseva L.F. Methods for studying intelligence: methodological recommendations. Part 1. St. Petersburg: SPb GIU, 1991.
3. Anastasi A. Psychological testing: translated from English / ed. Gurevich K.M., Lubovsky V.I. Book 1. Moscow: Pedagogika, 1982.
4. Arbuzov V.N. The Kohs tests // Soviet Psychotechnics. Vol. VII. 1934, No. 1. Pp. 48-60.
5. Binet A., Simon T. Methods for measuring mental giftedness: a collection of articles. Kharkov: State Publishing House of Ukraine, 1923.
6. Bleikher V.M., Burlachuk L.F. Psychological diagnostics of intelligence and personality. Kyiv: Vyshcha Shkola, 1978.
7. Burlachuk L.F. Psychodiagnostics of personality. Kyiv: Zdorovia, 1989.
8. Burlachuk L.F., Morozov S.M. Dictionary-reference book on psychological diagnostics. Kyiv: Naukova Dumka, 1989.
9. Gaida V.K., Zakharov V.P. Psychological testing: a textbook. Leningrad: LGU, 1982.
10. Gilyasheva I.N. Practical use of an adapted intelligence test in the clinic of neuropsychiatric diseases: guidelines. Leningrad, 1987.
11. Glass G., Stanley J. Statistical methods in pedagogy and psychology: translated from English / under the general editorship of Adler Yu.P. Moscow: Progress, 1976.
12. Eliseev O.P. Constructive typology and psychodiagnostics of personality. Pskov: POIUU, 1994.
13. Using the "Kos Cubes" technique for professional diagnostics: guidelines for specialists of employment and career guidance services / comp. Smirnova A.V., Khakhunova M.N. Yaroslavl: Yaroslavl City Center for Vocational Guidance and Psychological Support of the Population, 1995.
14. Kashin A.P. Diagnostic scaling of psychophysiological functions // Theoretical and applied research on the psychophysiology of individual differences. Kazan: KSU, 1973. Pp. 4-16.
15. Kulagin B.V. Fundamentals of professional psychodiagnostics. Leningrad: Meditsina, 1984.
16. Panasyuk A.Yu. An adapted version of D. Wechsler's technique. Moscow: Research Institute of Psychiatry of the Ministry of Health of the RSFSR, 1973.
17. Panasyuk A.Yu. Structural-level analysis of the dynamics of intellectual development of mentally retarded and healthy children: abstract of a candidate's dissertation in psychology. Leningrad, 1976.
18. Psychodiagnostic methods (in a comprehensive longitudinal study of students). Leningrad: LGU, 1976.
19. Psychological dictionary / ed. Davydov V.V., Zaporozhets A.V., Lomov B.F., et al. Moscow: Pedagogika, 1983.
20. Serebryakova R.O. Application of a standardized set of methods for the study of intellectual activity to diagnostic problems in some neuropsychiatric diseases: abstract of a candidate's dissertation in psychology.

APPENDIX 2
An example of interpretation of results

RESULTS RECORDING FORM
Full name: I. Sergey Nikolaevich    Date: 23.04.2000
Age (years): 49    Education: higher    Profession: mechanical engineer    Additional information: traumatic brain injury

Total points: 24    Standard score (sten): 5

Level of development of non-verbal intelligence: average

Behavior during testing: often refers to the model, speaks his actions aloud, does not notice his mistakes, is easily distracted

CONCLUSION

Despite the average level of productivity of visual-effective thinking, the subject has difficulty solving practical problems because of attention disorders and a lack of voluntary control. Even with sufficiently preserved spatial analysis and synthesis, visual perception, and motor coordination, he may fail to cope with complex constructive activities that require prolonged concentration of attention, checking, and comparison of results. Problems may also arise in technical work, as well as in learning and in solving complex intellectual problems.

The testing system collects and stores various information about the testing itself and the participants' results. It contains the participants' final results and their detailed answers to each task, as well as summary information on the test tasks, which makes it possible to evaluate their quality. Let us consider how to work with the test results.

The results of test participants are collected in accordance with the personal data questionnaire that was created in the system. Based on the data entered by test participants, their results can be identified and used for analysis.

After receiving a link to the test (for example, http://app.startexam.com/Center/Web/kosmos) and selecting a test, the test participant enters their personal data into the questionnaire and clicks the Next button.

After that, the test participant begins to get acquainted with the questions and take the test.

At the same time, the testing initiator already has access to information about the participants, but without results, because the tests have not yet been completed. The test administrator can view the sessions that have started by clicking the Sessions link in the testing centers.


A window with the testing sessions in progress will open.


This page shows information about all test participants. The following data is available for each session:

  • Center – the name of the testing center under which the test is run
  • Test – the name of the test the participant is taking
  • Member Name – the last name, first name, and patronymic of the test participant
  • Start date – the date the testing session began
  • State – the state of the test session (completed / not completed)
  • Results – the participant's test results

As you can see in the image, the results are not yet available, because the sessions have not been completed and information on the participants' responses has not yet been received. As sessions end, their status will change, but you can already view the participants' personal data by clicking on the session status.


A page will open with detailed information about the test participant's session and their personal data.

As participants complete the test, the state of the sessions will change.


As soon as the test participants complete the test, the state of all sessions will change to Completed and the results will become available in the system.

Now let's consider in more detail what results are collected and what tools for working with them are available in the testing system. The following information is available on the Test Sessions page:

  • Time – the time spent on testing
  • Max – the maximum score that could be obtained on the test
  • Score – the score obtained by the participant
  • (%) – the participant's result as a percentage
  • Level – the level shown by the participant according to the configured rating scale
  • Reviews – feedback left by participants on tasks, if this option was enabled in the test
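As an illustration of how a raw score might be turned into the percent and level shown in this list, here is a minimal Python sketch; the `SCALE` thresholds and level names are assumptions, since the actual rating scale is configured per test:

```python
# Hypothetical rating scale: (minimum percent, level name), checked top-down.
SCALE = [(85, "excellent"), (70, "good"), (50, "satisfactory"), (0, "unsatisfactory")]

def grade(score, max_score):
    """Convert a raw score into the percent and level shown on the sessions page."""
    percent = 100 * score / max_score
    level = next(name for threshold, name in SCALE if percent >= threshold)
    return percent, level

# A participant who scored 15 of 20 points gets 75% and the "good" level.
percent, level = grade(15, 20)
```

The real system computes these values itself; the sketch only shows the arithmetic behind the Score, (%), and Level columns.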

Test sessions can be filtered according to your preferences. The system can display results for a certain period of time:

  • last hour
  • today
  • yesterday
  • this week
  • last week
  • this month
  • last month
  • this year
  • last year
  • all time

In addition, you can show the desired number of results on one page: 10, 50, 100, or all at once.


Unnecessary results can be deleted: mark them with checkboxes, then click the Delete button.


Test results can be exported to various formats:

  • Excel spreadsheet format
  • XML format
  • results packed in a zip archive

The results are exported according to the filters you have chosen: if you select only the last week's results, only those will be exported. You can also mark specific results with checkboxes, and only those will be included in the export.
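The filter-then-export flow can be sketched outside the system as well. A minimal Python example follows; the session fields, dates, and file name are illustrative, not the system's API:

```python
import csv
from datetime import datetime, timedelta

# Illustrative session records mirroring the fields of the sessions page.
sessions = [
    {"name": "Ivanov I.I.", "finished": datetime(2017, 5, 2), "score": 18, "max": 20},
    {"name": "Petrov P.P.", "finished": datetime(2017, 4, 1), "score": 12, "max": 20},
]

def filter_by_period(records, days):
    """Keep sessions completed within `days` days of the most recent one."""
    cutoff = max(r["finished"] for r in records) - timedelta(days=days)
    return [r for r in records if r["finished"] >= cutoff]

# Export only the filtered results, as the system does for a chosen period.
recent = filter_by_period(sessions, 7)
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "finished", "score", "max"])
    writer.writeheader()
    writer.writerows(recent)
```

Excel opens such a CSV directly, which is why a spreadsheet format is a natural export target.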


When exporting to Excel, not only the results will be added to the table, but also the personal information specified by the test participant.

Test results for a specific test

In the testing system, you can view test results either for all tests at once or for each test separately. Viewing an individual test's results offers additional options for working with them.

To view the results for a specific test, go to the project and open the Tests tab.


All of the project's tests will be available here. Next to each test there is a Sessions link; clicking it shows the testing sessions for that particular test. The feature set is broadly similar to the testing-center results, but there are some useful additions.

The first difference is that another type of report, the response matrix, is available here. It shows all the tasks in the test and each participant's answers to every task.


With this type of report, you can export the results to an Excel spreadsheet and see which tasks the participants answered correctly and where they gave incorrect answers.


The vertical axis lists the tasks included in the test, the horizontal axis lists the participants' names, and the participants' answers appear at the intersections. A one means the participant answered the task correctly, a zero means an error, and an empty cell means the task was not included in that participant's final set of tasks.
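The same matrix can be represented as a small data structure. A toy Python sketch (the names and tasks are made up, and the layout is transposed: rows are tasks, columns are participants, as in the description above):

```python
# Rows are test tasks, columns are participants; 1 = correct, 0 = error,
# None = the task was not in that participant's final set.
participants = ["Ivanov", "Petrov", "Sidorov"]
matrix = {
    "Task 1": [1, 0, 1],
    "Task 2": [1, 1, None],
    "Task 3": [0, 0, 1],
}

def participant_score(name):
    """Total of correct answers in one participant's column."""
    col = participants.index(name)
    return sum(row[col] for row in matrix.values() if row[col] is not None)

def task_correct_share(task):
    """Share of correct answers among those who actually received the task."""
    answered = [v for v in matrix[task] if v is not None]
    return sum(answered) / len(answered)
```

Summing down a column gives a participant's score; averaging across a row gives the task's share of correct answers, which is exactly what the exported spreadsheet lets you do.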

Detailed responses of test participants

In the testing sessions for a specific test, statistics on each participant's answers are available. To view them, click on the status of the testing session.


A page opens with the personal data that could also be viewed in the testing centers, but here there is an additional tab: Answers.


The page displays the results of the participant's responses to each test item, similar to those presented in the response matrix.

Here you can see the answer that the test participant gave to a specific task. For example, to find out which answer option the participant chose for a question he got wrong, click on the test question and view the result.


The test task and the erroneous answer given by the participant are shown; an incorrectly solved task is displayed on a red background.

Similarly, you can view a task that the participant answered correctly.


It is displayed on a green background.

Test task statistics

When viewing the detailed results of test participants, statistics for each test task are also available. By clicking on the Statistics tab, you can view information about the task and evaluate its quality.


Information about the task is available here:

  • date of creation of the test task
  • task author
  • status
  • evaluation method

The following statistics on the task are also available:

  • Number of results – the number of times test participants attempted the task
  • Performed – the number of completed results (the task was answered)
  • Missed – the number of incomplete results (the task was not answered)
  • Average score – the ratio of the number of correct answers to the total number of completed results
  • Correct answers – the percentage of correct answers to the task

The statistics of a test task allow you to draw conclusions about its quality: good tasks fall within the interval of 20% to 80% correct answers. The quality of test tasks is discussed in more detail in the lesson Mastery of Creating Tests.
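Taken together, the counters above and the 20-80% rule of thumb can be sketched as follows. This is a simplified model of the statistics, not the system's internal code:

```python
def item_stats(responses):
    """responses: one task's answers, 1 = correct, 0 = wrong, None = skipped."""
    performed = [r for r in responses if r is not None]
    correct = sum(performed)
    share = correct / len(performed) if performed else 0.0
    return {
        "results": len(responses),          # Number of results
        "performed": len(performed),        # the task was answered
        "missed": responses.count(None),    # the task was not answered
        "average_score": share,             # correct / completed results
        "correct_percent": 100 * share,     # Correct answers, %
    }

def is_good_item(stats, low=20.0, high=80.0):
    """Quality heuristic from the text: 20%-80% correct answers."""
    return low <= stats["correct_percent"] <= high
```

A task everyone solves (100% correct) fails this heuristic just as a task no one solves does: neither discriminates between stronger and weaker participants.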

We have covered the main ways of working with test results in the OpenTest system, which supports the entire testing workflow, from creating tasks to conducting tests and analyzing the results.


The article considers computer testing as one of the methods for testing students' knowledge. The advantages and disadvantages of testing are analyzed, and methodological recommendations for teachers on its use in the educational process are developed. The role of computer testing as an effective method of quality control of students' training is shown. The results of knowledge control were assessed with the complexity of the tests taken into account; criteria for evaluating testing results were determined; the causes of the most common errors in test tasks were identified, as well as the types of test items that cause test takers the greatest difficulty. The effectiveness of test methods depends on the focus of the test, a competent arrangement of the types of test tasks, and the structure of the process. The use of computer technologies in pedagogical testing not only significantly simplifies interpretation and supports a unified approach to constructing test tasks and evaluating results, but also makes it possible to identify the test taker's level of preparedness accurately. Today, testing is seen as a modern paradigm for the objective assessment of students' educational achievements, associated with the widespread use of computers and their didactic capabilities in the knowledge control system.

test tasks

information and communication technologies

educational process

test constructor

validity

representativeness

computer testing

1. Shevchenko S.M., Tyumina N.S. Trends in the innovative development of general education // Integration of Information Technologies in the System of Vocational Training: proceedings of the regional scientific-practical conference. Nizhny Novgorod: NGPU im. K. Minina, 2016. Pp. 50-52.

2. Kadnevsky V.M. Genesis of testing in the history of national education. Omsk: OmGU, 2011. 335 p.

3. Tyumina N.S., Shevchenko S.M. Information means of computer testing // Integration of Information Technologies in the System of Professional and Additional Education: proceedings of the regional scientific-practical conference. Nizhny Novgorod: NGPU im. K. Minina, 2016. Pp. 174-177.

4. Efremova N.F. Test control in education: a textbook for students in pedagogical areas and specialties. Moscow: Logos, 2014. 368 p.

5. Zvonnikov V.I. Measurements and quality of education. Moscow: Logos, 2006. 73 p.

6. Shevchenko S.M., Tyumina N.S. Computer testing in the educational process // Proceedings of the regional scientific-practical conference "Integration of Information Technologies in the System of Additional and Professional Education". Nizhny Novgorod: NGPU im. K. Minina, 2017. Pp. 55-58.

7. Chaikina Zh.V. Modern means of assessing learning outcomes: a teaching aid. Nizhny Novgorod: NGPU im. K. Minina, 2014. 48 p.

8. Ovchinnikov V.V. Evaluation of students' educational achievements during testing. Moscow: Testing Center of the Ministry of Education of the Russian Federation, 2011. 27 p.

9. Simonenko V.D. The program "Technology. Grade 7. GEF" [Electronic resource]. URL: http://rusacademedu.ru (date of access: 25.01.2017).

10. Simonenko V.D., Samorodsky P.S. Technology. Grade 7 / ed. V.D. Simonenko. Moscow: Ventana-Graf, 2014. 153 p.

In the Concept for the Modernization of Russian Education, the creation of an independent system for assessing the quality of the educational process is regarded as one of the most important tasks of modern education. An important element of the education quality system is monitoring the results of students' activities, which should be carried out at all levels and stages of the educational process. The problem of choosing a method for assessing students' level of training and the quality of the knowledge, skills, and abilities they have formed is important for general secondary education.

A certain contribution to the solution of this problem was made by Francis Galton, who created the test as a tool for monitoring learning outcomes, as well as by E. Thorndike and R.D. Fisher. According to modern requirements, students' educational results, defined for each academic subject in accordance with the educational programs and the Federal State Educational Standard, should be assessed. One of the modern methods for assessing students' level of training is testing.

Testing as a control method makes it possible to assess the quality of students' training and to standardize the methodology for measuring and interpreting results. Testing can be organized either as students' work with a printed test or on a computer. Using modern programming tools, fairly universal multi-purpose computer tests can be developed. This form of control allows various kinds of visualization, takes the test takers' individual characteristics into account, and automates the processing of the received data. A key problem of computer testing is the choice of tools and programs for developing test items.

The advantages of this method include the technological efficiency of the procedure, the ability to store and compare control results, and the ability to identify the causes of learning gaps. Testing can perform different functions depending on the stage of the lesson. For example, when prior knowledge is being activated, students may need previously studied material to solve a test task correctly, so testing helps reveal "gaps" in knowledge. Test methods make it possible to highlight the main points of the topic under consideration and draw students' attention to important theoretical aspects during the initial consolidation of the material. Testing supports both independent and collective forms of work and discussion of the most difficult tasks, and promotes students' self-control and reflection at various stages of the lesson.

Test methods play an important role in optimizing the educational process with a multi-level preparation of the class, the implementation of broad and deep control over the development of knowledge by students. On the one hand, they contribute to solving the problem of individualization of tasks depending on the level of mastering the material being studied by the students. On the other hand, the use of information technology makes it possible to automate calculations, organize the study of new material using educational games and programs, which, in turn, contributes to the development of cognitive interest among students, develops their information culture, introduces modern approaches to solving problem situations in the classroom.

The undoubted advantages of test methods are: the objectivity of the obtained assessment, the "equality" of students in the control process, the coverage of a significant amount of educational material during testing, the relative ease of interpreting test results, and saving time for testing knowledge. The use of computer testing in the educational process contributes to the generalization of educational material, the identification of cause-and-effect relationships, the actualization of previously studied topics, the development of logical thinking when solving non-standard test tasks.

The disadvantages of testing include: the duration and complexity of test development; the need for confidentiality to ensure the objectivity of test results; the possibility of a high probability of "guessing" the correct answers; the need to eliminate incorrect tasks after each testing.

The development of tests, in our opinion, involves the following requirements: significance; scientific credibility; representativeness (the presence in the test of the main structural elements of the content of the subject in the volume necessary for control); increasing complexity of educational material; variability depending on the content of the studied material and the volume of hours; consistency of content; validity; complexity and balance of the test; relationship between content and form.

The paper presents the results of an experiment to evaluate the effectiveness of using computer testing in technology classes. The experiment was carried out on the basis of MBOU "School No. 190" in Nizhny Novgorod with students of the 7th grade.

The developed test tasks were tried out in technology lessons: class 7 "A" was the experimental group and class 7 "B" the control group. Students of class 7 "A" took the test on a computer and, if they answered a particular question incorrectly, had the opportunity to answer the same question again. Students of class 7 "B" were given a paper test allowing only one attempt per answer. The testing covered the topic "Technologies for manual processing of metals and artificial materials"; the total number of test subjects was 24 boys (12 in each group), and the number of test tasks was j = 20. Visual Studio was used to create the tests. The study determined the "average" achievement of the group of subjects (DG), which makes it possible to evaluate the effectiveness of testing as a means of knowledge control, and the validity of the test. The validity of the test items is characterized by the following indicators:

The frequency of completion of the j-th task (the number of correct answers to the j-th task);

The proportion of correct answers (the number of correct answers divided by the number of those tested);

The number of incorrect answers to the j-th task;

The difficulty index, determined by the formula:

U_j = R_j / N, (2)

where N is the number of people tested, R_j is the number of correct answers to the j-th task, j is the index of the test item (j = 1, ..., 20), and i is the index of the tested person; the examinee's primary score (the number of correctly solved tasks) gives the assessment for the passed test.
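As an illustration, the per-item difficulty indices can be computed from a binary response matrix (rows: examinees, columns: items). The function and variable names below are our own, not the article's.

```python
# Compute per-item difficulty indices from a response matrix.
# responses[i][j] = 1 if examinee i answered item j correctly, else 0.
# Names are illustrative and do not come from the article.

def difficulty_indices(responses):
    n_examinees = len(responses)
    n_items = len(responses[0])
    # For each item j: share of examinees who answered it correctly.
    return [
        sum(responses[i][j] for i in range(n_examinees)) / n_examinees
        for j in range(n_items)
    ]

# A toy matrix: 4 examinees, 3 items.
matrix = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
]
print(difficulty_indices(matrix))  # [0.75, 0.5, 0.75]
```

The same matrix also yields the other indicators listed above: the frequency of completion is the column sum, and the number of incorrect answers is the number of examinees minus that sum.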

It should be noted that in computer testing an answer was scored differently depending on whether it was given correctly on the first attempt or only on the second.

The "average" achievement of the group of subjects (DG) was determined as the mean of the examinees' primary scores.
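Assuming DG is the arithmetic mean of the examinees' primary scores (the number of correctly solved tasks), a minimal sketch of the calculation could look like this; the scores below are hypothetical:

```python
# "Average" achievement of a group (DG), assumed here to be the mean
# of the examinees' primary scores (correctly solved items out of 20).

def group_average(primary_scores):
    return sum(primary_scores) / len(primary_scores)

# Hypothetical primary scores of a 12-person group on a 20-item test.
scores = [12, 14, 13, 15, 16, 12, 13, 14, 15, 12, 16, 14]
print(round(group_average(scores), 2))
```

Comparing this value between the experimental and control groups is what allows the effectiveness of the two testing modes to be judged.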

Analysis of the obtained results (Figures 1-3) allows us to draw the following conclusions:

The test is valid, since the difficulty indices of the test items lie within the accepted interval;

Most tasks have a difficulty index in the range from 0.3 to 0.4, which indicates that the test is competently constructed;

The percentage of guessing is in the range from 0.14 to 0.25;

The effectiveness of testing as a means of controlling knowledge in technology was determined by formula (3).

The data obtained show that, with an equal number of testees (12 students), the "average" achievement of class 7 "A" is higher than that of the control class 7 "B". Firstly, this is due to the possibility of a second attempt to answer during computer testing. Secondly, with computer testing, students better understand the instructions and the meaning of the question in the matching tasks, which make up 30% of the test. Thirdly, although the test effectiveness indicator lies in the range from 12 to 16 correctly solved tasks (Figure 1), a significant part of the students received a mark of "3"; the number of students who received marks of "4" and "5" in computer and blank testing is approximately the same.

Fig. 1. Comparative analysis of the marks of students in the experimental and control groups based on the results of the technology test

The analysis of the response matrix of the testees of both groups made it possible to identify the tasks that caused the greatest difficulties for the students:

Alternative-answer tasks (reproduction of knowledge), aimed at identifying the ability to reason;

Multiple-choice tasks (independently studied material), aimed at testing knowledge of the classification of turning tools and their purpose;

Matching tasks (independently studied material), aimed at checking knowledge of the professions related to metal processing and the ability to correlate elements of machine tools, hand cutting tools and technological operations with their names;

Multiple-choice tasks (application of knowledge in non-standard tasks), aimed at identifying the ability to interpret the studied material and to correlate the elements of the cutting tool with their letter designations.

The main factors affecting the quality of students' performance on the test tasks on the topic "Technologies for manual processing of metals and artificial materials" are:

Reduced independence of students when working with textbooks;

An insufficient number of textbooks of the same edition, which complicates preparation for testing;

The inability of students to interpret the studied material in accordance with the test question;

An unformed skill of correlating technological elements and concepts with their names and designations.

The tasks in the tests were of levels I, II and III (level I: tasks on the reproduction of knowledge; level II: on the application of knowledge in a non-standard situation; level III: tasks on independently mastered material). It should be noted that the students had difficulty with tasks of levels I and II (Figure 2), which confirms the above reasons for the most common errors in test tasks. Analysis of the results of class 7 "A" showed that, with the same number of subjects, the total number of errors made is 84 (level I: 29, level II: 30, level III: 25), while in class 7 "B" it is 97 (level I: 35, level II: 35, level III: 27). This is because during computer testing students had the opportunity to re-answer a similar question, whereas blank testing is characterized by inattentive reading of the instructions for the test tasks and, as a result, errors in answering.

Fig. 2. Distribution of test tasks by difficulty level

The greatest number of errors was made in completion (fill-in) tasks (Figure 3), which indicates obvious difficulties in applying the acquired knowledge.

In computer testing, open-ended questions caused fewer difficulties than in blank control, even though the probability of error in the first case is much higher: in the program the answer must be entered in a specific form, and any difference from it in the student's answer (a changed ending, a spelling mistake, etc.) means that the answer is not counted. The difficulty indices of the test items of class 7 "A" can be characterized as evenly distributed, in contrast to the results of class 7 "B".

The main mistake of students during blank testing was the incorrect distribution of time for completing tasks.

Fig.3. Distribution of test items by difficulty index

As a rule, the subjects first of all answered questions that did not cause them doubts, and then proceeded to the rest of the test tasks, trying to answer them using general erudition and intuition, or simply trying to guess the answer. This indicates that students are not always confident in their knowledge, skills and abilities acquired in the classroom.

Testing on the topic "Technologies for manual processing of metals and artificial materials" can be considered effective, since it made it possible to identify the students' level of knowledge and the causes of their errors in the test tasks.

A qualitative analysis of test items suggests the following recommendations:

Preliminary study of the psychological and pedagogical characteristics of the test group;

Following the rule that the greater the number of those tested, the more reliable the interpretation of the results;

Analysis of educational material for testing, taking into account the pace of mastering educational material by students;

Construction of test tasks of different levels of complexity;

Elimination of incorrect tasks after each administration of the test, which increases its representativeness.

It should be taken into account that factors such as the environment (light, weather, noise, temperature), the emotional and physical state of the tested, and others can slightly affect the test results. Below are the features of computer testing as a means of controlling students' knowledge:

1) ensuring the objectivity of the assessment of educational achievements;

2) implementation of automated statistical processing of students' achievements;

3) the ability to check a large amount of information and the level of knowledge of it by each subject;

4) a more accurate assessment scale, consisting of 20 divisions (questions), in contrast to the usual one, consisting of four;

5) ensuring equal conditions for all students through the use of a single procedure and assessment criteria, which reduces psycho-emotional stress.
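Point 4 can be illustrated by mapping a raw 20-question score onto the traditional four-grade scale; the thresholds below are purely illustrative assumptions, not taken from the article:

```python
# Map a raw score on a 20-item test to a traditional four-grade mark.
# The thresholds are hypothetical, chosen only to illustrate the idea
# of a finer 20-division scale collapsing into four marks.

def mark_from_score(correct, total=20):
    share = correct / total
    if share >= 0.85:
        return 5
    if share >= 0.65:
        return 4
    if share >= 0.5:
        return 3
    return 2

print([mark_from_score(s) for s in (19, 14, 11, 8)])  # [5, 4, 3, 2]
```

The finer raw scale preserves distinctions (e.g. 11 vs 13 correct answers) that the four-grade scale discards.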

Thus, the test, whose main tasks are to control and generalize the studied material, is an effective tool for checking students' knowledge in the educational process. However, when solving other problems, for example, creative tasks or project activities, tests should be combined with other methods of monitoring learning, since they do not always allow a full assessment of students' skills and abilities. In conclusion, it should be noted that the development of test programs adapted to the personal characteristics of the test subjects, the so-called non-traditional tests, is possible only with the use of information technology.

Bibliographic link

Pachurin G.V., Tyumina N.S., Shevchenko S.M. Analysis of testing as a means of control of students' knowledge // Modern Problems of Science and Education. 2017. No. 4. URL: http://science-education.ru/ru/article/view?id=26716 (date of access: 01.02.2020).

Analysis of trial test results

student of 11 "B" class Azat Baygunakov.

Urban testing


| Subject | October 18 | November 22 | December 21 |
|---|---|---|---|
| Kazakh language | 11 | 12 | 13 |
| Russian language | 7 | 9 | 8 |
| Mathematics | 4 | 7 | 6 |
| History of Kazakhstan | 6 | 8 | 7 |
| World history | 6 | 6 | 7 |
| Total | 34 | 42 | 41 |
| Total without Kazakh language | 23 | 30 | 28 |

School testing


| Subject | September 21 | November 15 | November 30 | December 14 | January 17 |
|---|---|---|---|---|---|
| Kazakh language | 5 | 13 | 13 | 17 | 11 |
| Russian language | 7 | 9 | 5 | 9 | 7 |
| Mathematics | 5 | 6 | 3 | 11 | 4 |
| History of Kazakhstan | 7 | 20 | 6 | 5 | 6 |
| World history | 4 | 7 | 6 | 3 | 8 |
| Total | 28 | 55 | 33 | 45 | 36 |
| Total without Kazakh language | 23 | 42 | 20 | 28 | 25 |


The results of city testing in all subjects correspond to a "satisfactory" mark. Even with the Kazakh-language score included, the overall result is below the passing score. The "sinking" topics cover most sections of the school curriculum. The highest scores are in the Kazakh language; the scores in mathematics and world history are very low.

The results of school tests also indicate large gaps in knowledge. At the same time, these results objectively reflect the student's level of preparation and correspond to the results of current and intermediate assessment: throughout the entire course of study, A. Baygunakov has had a predominant grade of "satisfactory" in all subjects of the school curriculum.

Acquainted with the analysis ____________________________ A. Baygunakov

Analysis of trial test results

student 11 "B" class Alexander Bohunenko.

Urban testing


| Subject | October 18 | November 22 | December 21 |
|---|---|---|---|
| Kazakh language | 17 | 18 | 19 |
| Russian language | 19 | 21 | 19 |
| Mathematics | 11 | 12 | 14 |
| History of Kazakhstan | 8 | 12 | 13 |
| Physics | 15 | 8 | 14 |
| Total | 70 | 71 | 79 |
| Total without Kazakh language | 53 | 53 | 60 |

School testing


| Subject | September 21 | November 15 | November 30 | December 14 | January 17 |
|---|---|---|---|---|---|
| Kazakh language | 4 | 16 | 21 | 16 | 18 |
| Russian language | 18 | 18 | 20 | 20 | 19 |
| Mathematics | 10 | 14 | 13 | 15 | 15 |
| History of Kazakhstan | 13 | 13 | 13 | 14 | 10 |
| Physics | 11 | 11 | 12 | 14 | 15 |
| Total | 56 | 72 | 79 | 79 | 77 |
| Total without Kazakh language | 52 | 56 | 58 | 63 | 59 |


The results of city testing in physics are unstable: the result of November 22 (8 points) is almost half the previous one (15 points). There is a positive dynamic of results in the Kazakh language. The scores in the Russian language are consistently high, while the "satisfactory" results in subjects such as physics and mathematics do not correspond to the student's current and quarter grades.

The results of school tests show a positive dynamic in physics, consistently high scores in the Russian language, higher results in mathematics than in the city tests, and a large spread of results in the Kazakh language. Special attention should be paid to preparation in the history of Kazakhstan. The total scores in all trial tests are above the passing score.

Based on the results of all testing, subject teachers carried out an element-by-element analysis with the obligatory identification of “sinking” topics and the development of recommendations for filling gaps.

Acquainted with the analysis ____________________________ A. Bohunenko

Analysis of trial test results

student 11 "B" Bogdan Burchits.

Urban testing


| Subject | October 18 | November 22 | December 21 |
|---|---|---|---|
| Kazakh language | 8 | 15 | 12 |
| Russian language | 14 | 13 | 15 |
| Mathematics | 7 | 10 | 6 |
| History of Kazakhstan | 18 | 18 | 17 |
| Geography / World history | 17 | 14 | 15 |
| Total | 64 | 70 | 65 |
| Total without Kazakh language | 56 | 55 | 53 |


School testing


| Subject | September 21 | November 15 | November 30 | December 14 | January 17 |
|---|---|---|---|---|---|
| Kazakh language | 16 | 12 | 18 | 15 | 18 |
| Russian language | 13 | 14 | 10 | 13 | 17 |
| Mathematics | 12 | 6 | 7 | 7 | 8 |
| History of Kazakhstan | 10 | 14 | 16 | 15 | 16 |
| Geography / World history | 12 | 9 | 14 | 20 | 16 |
| Total | 63 | 55 | 65 | 70 | 75 |
| Total without Kazakh language | 47 | 43 | 47 | 55 | 57 |


The results of city tests are generally higher than the results of school ones. In all city tests the total score is above the passing score. In three of the five school tests, the total score without the Kazakh-language result is below fifty. The school tests in the Kazakh language correspond to a "good" mark; the lowest results are in mathematics, which matches the results of current and intermediate assessment (the student has one "satisfactory" in algebra for the half-year). Preparation for the Russian-language tests needs to be intensified, since the results are unstable and the corresponding marks are below the student's current performance.

Based on the results of all testing, subject teachers carried out an element-by-element analysis with the obligatory identification of “sinking” topics and the development of recommendations for filling gaps.

I am familiar with the analysis ____________________________ B. Burchits

Analysis of trial test results

student 11 "B" Kabakova Anastasia.

Urban testing


| Subject | October 18 | November 22 | December 21 |
|---|---|---|---|
| Kazakh language | 17 | 17 | 16 |
| Russian language | 18 | 11 | 11 |
| Mathematics | 5 | 6 | 8 |
| History of Kazakhstan | 7 | 7 | 10 |
| Biology | 10 | 19 | 11 |
| Total | 57 | 60 | 56 |
| Total without Kazakh language | 40 | 43 | 40 |

