CN111990971B - Experiment and analysis method based on visuospatial paired-associate learning in a touch-screen operant chamber
Experiment and analysis method based on visuospatial paired-associate learning in a touch-screen operant chamber
- Publication number
- CN111990971B (granted from application CN202010912439.7A; published as CN111990971A)
- Authority
- CN
- China
- Prior art keywords
- experimental
- correct
- stage
- stimulus
- experimental animal
- Prior art date: 2020-09-02
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- A61B5/4088 — Diagnosing or monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia (under A61B5/00 — Measuring for diagnostic purposes; A61B5/40 — Detecting, measuring or recording for evaluating the nervous system; A61B5/4076 — Diagnosing or monitoring particular conditions of the nervous system)
- A01K29/005 — Monitoring or measuring activity, e.g. detecting heat or mating (under A01K29/00 — Other apparatus for animal husbandry)
- A61B2503/40 — Evaluating a particular growth phase or type of persons or animals: animals
- A61B2503/42 — Evaluating a particular growth phase or type of persons or animals for laboratory research
- Y02A40/70 — Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry
Abstract
The invention discloses an experiment and analysis method based on visuospatial paired-associate learning in a touch-screen operant chamber. Raw data from the training stage of the chamber's paired-associate learning (PAL) experiment are extracted; the graphic-stimulus combination presented in each trial and the screen position touched by the experimental animal are read out in sequence; with the correct touch position preset, each trial is scored as correct or incorrect; and the individual animal's total training-stage accuracy, per-pair accuracy, and per-pair perseveration rate are calculated. Animals advance to the PAL test stage according to their per-pair perseveration rates on the last day of training. The raw data of the animals that advance are then extracted and the individual's total test-stage accuracy and per-pair accuracy are calculated. The invention can distinguish differences in learning ability between groups of experimental animals within a shorter test period.
Description
Technical Field
The invention belongs to the technical field of animal behavioural experiments, and specifically relates to an experiment and analysis method for visuospatial paired-associate learning in a touch-screen operant chamber.
Background
Paired-associate learning (PAL) is an associative-memory task with cross-species comparability in the assessment of cognitive ability. Visuospatial paired-associate learning is one form of PAL: a human or animal must learn, over several stages, the association between an object and a location, which probes learning and memory. The technique is widely used, most often to detect mild cognitive impairment in humans or rodents. Because the water maze and the eight-arm maze can only detect rodents with severe impairment of spatial cognition and are of limited use at more complex cognitive levels, PAL is more often used to assess early symptoms such as mild cognitive impairment in humans or in rodents under diseases such as schizophrenia and Alzheimer's disease, and to evaluate the therapeutic potential of candidate drugs or the effect of new therapies in rodent models. The technique is therefore of great significance for clinical diagnosis and scientific research.
The touch-screen operant chamber is a behavioural apparatus designed to assess rodent cognition, usually by observing performance in visuospatial paired-associate learning inside the chamber. The experiment is divided into a training phase and a PAL test phase. The training phase is typically broken into 5-6 progressive steps so that the animal learns every behaviour required for a complete trial, including: responding to the trial-initiation signal; touching the stimulus image that appears on the screen; collecting a food reward after a correct touch; and receiving a 5 s timeout penalty after an incorrect touch. In the subsequent PAL test phase, the experimenter pairs three different graphic stimuli with their corresponding positions, i.e. three correct visuospatial pairs. For example, touching the screen is a correct response when a flower appears on the left, an airplane in the middle, or a spider on the right; touching the other image, shown at a wrong position, is an error. The animal's total accuracy per day is usually recorded during the test phase as the measure of its cognitive performance. For example, Andrew J. Roebuck et al. found that schizophrenia-model mice constructed with MK-801 showed significantly reduced accuracy and completion time in the PAL test phase compared with controls; the same team reported that under acute restraint stress LE male rats were significantly more efficient and accurate in the PAL test than controls, whereas this did not occur in an experimental group injected with cortisol alone, possibly reflecting differences in the handling of catecholamines released in the amygdala. Impaired cholinergic neurotransmission is closely linked to aging and Alzheimer's disease, and Carola Romberg et al. found that mice lacking M2-type muscarinic receptors showed impaired object-location associative learning in the PAL test phase, with accuracy significantly below controls; these results were consistent with those from VAChT knockout mice associated with Alzheimer's disease. The technique can therefore distinguish animals with normal cognition from animals with impaired cognition in scientific research.
Although this experimental method is currently used in research as a means of evaluating visuospatial paired-associate learning ability, its drawbacks (a complex behavioural paradigm, long detection time, and the limited discriminative validity of a single evaluation index) mean that, when the conventional method is used to compare the cognitive abilities of differently treated groups of animals, it suffers from non-uniform protocol standards, long modelling periods, poor operability and insufficient evaluation indices. Specifically: first, because the protocol is complex and lacks a unified standard, differences in how thoroughly animals have learned the task rules before the test stage contaminate the test-stage results, making it difficult to evaluate paired-associate learning ability accurately. For example, the accuracy of some animals at the start of testing shows a "floor effect": the experimental groups are barely distinguishable and the results are strongly affected by error. Second, total accuracy as the sole evaluation index cannot reflect the learning process, the formation of problem-solving strategies, or the learning strategies of animals in a dynamic, complex visuospatial pairing task; the method's reliability and validity are therefore insufficient, finer-grained analysis is difficult, and learning ability cannot be evaluated accurately. These are the defects of the existing experimental method.
Disclosure of Invention
To solve the problems of the conventional paired-associate learning analysis, which judges an animal's cognitive ability from total accuracy alone and therefore lacks a uniform protocol standard, has low validity, and cannot probe learning strategies and their effectiveness, the invention provides more effective experimental parameters and a new analysis method: an experiment and analysis method based on visuospatial paired-associate learning in a touch-screen operant chamber.
The specific technical scheme is as follows:
An experiment and analysis method based on visuospatial paired-associate learning in a touch-screen operant chamber: extract the raw training-stage data generated during the chamber's paired-associate learning experiment; read out, in sequence, the graphic-stimulus combination presented in each trial and the screen position touched by the animal; with the correct touch position preset, automatically score each trial as correct or incorrect; calculate the individual animal's total training-stage accuracy, per-pair accuracy and per-pair perseveration rate; determine which animals advance to the PAL test stage according to the per-pair perseveration rates on the last day of training; and extract the raw data of the animals that advance, calculate their total test-stage accuracy and per-pair accuracy, and evaluate their cognitive ability.
Based on the per-pair perseveration rates obtained by each animal on the last day of the training stage, animals whose perseveration rate is below 40% for each visuospatial pair advance to the PAL test stage; animals that do not meet this requirement are excluded.
In the training stage, each of the three touch screens in the chamber is assigned one preset correct graphic stimulus. In every trial, the same stimulus appears on two randomly chosen screens, one of which is the screen whose preset correct stimulus it is, while the third screen remains blank. If the animal touches the stimulus at the correct position, the trial is scored correct and the next trial presents a randomly chosen combination differing in stimulus or position. If the animal touches the stimulus at the wrong position or the blank screen, the trial is scored incorrect and the identical stimulus-and-position combination is presented again on the next trial (a correction trial) until the animal touches the stimulus at the correct position. The perseveration rate of a single visuospatial pair is calculated by the following formula:
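The formula image from the original filing is not reproduced in this text. A plausible reconstruction from the surrounding definitions (the exact form in the granted patent may differ, and "perseveration rate" is itself an interpretive translation of the original term) is:

$$\text{Perseveration rate}_i = \frac{N_i^{\text{correction}}}{N_i^{\text{total}}} \times 100\%$$

where $N_i^{\text{correction}}$ is the number of correction trials of visuospatial pair $i$ (forced repeats after an error) and $N_i^{\text{total}}$ is the total number of trials in which pair $i$ appeared that day.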
The training stage is set to 14 days; each daily session lasts at most 1 hour and ends early once 100 trials have been completed. The accuracy of a single visuospatial pair for an individual animal in the training stage is:
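This formula image is also missing; a plausible reconstruction under the same caveats (whether correction trials enter the denominator is an assumption, since the patent separately defines an uncorrected-trial accuracy over first presentations only):

$$\text{Accuracy}_i = \frac{N_i^{\text{correct}}}{N_i^{\text{trials}}} \times 100\%$$

where $N_i^{\text{correct}}$ is the number of correctly answered trials of pair $i$ and $N_i^{\text{trials}}$ the number of trials in which pair $i$ was presented.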
the experimental animal after being trained in the training stage is in the PAL testing stage, the same graphic stimulus appears on two random screens in three screens in the touch screen operation box and a blank screen appears on one screen in each test time through program control, whether the individual experimental animal touches the graphic stimulus correctly or not, the next test time is randomly selected from the combination types of the rest graphic stimulus, and the total accuracy of the individual experimental animal in the testing stage and the accuracy of paired combination of a single pair of visual spaces are calculated through the following formula;
in the training phase and the testing phase, the graphical stimulus used is identical to the correct position corresponding to the graphical stimulus.
Further preferably, the method also includes analysis of the following parameters: the individual animal's uncorrected accuracy in the training stage, the uncorrected-trial accuracy of each single visuospatial pair in the training stage, the per-screen accuracy in the training stage, and the per-screen accuracy in the test stage.
The technical scheme of the invention has the following advantages:
A. By making explicit the criterion for advancing an animal from the training stage to the PAL test stage, the per-pair perseveration rates obtained for each individual in the training stage provide a reliable basis for the transition from training to testing and, at the same time, strengthen the reliability and stability of the experiment;
B. The analysis method of the invention evaluates the total accuracy and the per-pair accuracy jointly across the training and test stages, so the learning process of each sub-task within the overall task can be assessed efficiently and at fine granularity. This avoids the floor effect at the start of testing, strengthens the validity and sensitivity of the experiment for detecting animal cognitive ability, and makes it possible to judge differences in learning ability between animals accurately within a short experimental period.
C. The analysis data of the invention provide solid support for experimenters to further explore the learning strategies of animals in different treatment groups.
Drawings
In order to illustrate the embodiments of the present invention more clearly, the drawings required for the embodiments are briefly described below. The drawings described below show only some embodiments of the invention; a person skilled in the art could derive other drawings from them without inventive effort.
FIG. 1 is a block diagram of an experimental and analytical method provided by the present invention;
FIG. 2 shows the perseveration rates of the three visuospatial pairs for experimental animal X;
FIG. 3 shows the total PAL-test accuracy of experimental animal Y provided by the invention;
FIG. 4 shows the accuracies of the three visuospatial pairs for experimental animal Y provided by the invention;
FIG. 5 shows the day-by-day comparison of total PAL-test accuracy between the two groups of experimental animals (*P < 0.05 vs. the treated group);
FIG. 6 is the per-block comparison of total PAL-test accuracy between the two groups of experimental animals (*P < 0.05 vs. the treated group);
note: each block is a 5-day average.
FIG. 7 is the comparison of the mean total PAL-test accuracy of the two groups of experimental animals (P < 0.001);
FIG. 8 shows, for each block of the two groups of experimental animals, the mean accuracy of the best single visuospatial pair together with the proportion of animals reaching criterion (*P < 0.05, **P < 0.01, ***P < 0.001 vs. the control group);
note: the left axis (histogram) gives the mean accuracy of the best single visuospatial pair; the right axis (line graph) gives the proportion of animals. An animal counts toward the proportion in a given block when its best single-pair accuracy reaches 80%.
FIG. 9 shows, for each block of the two groups of experimental animals, the mean combined accuracy of the best two visuospatial pairs together with the proportion of animals reaching criterion (*P < 0.05 vs. the control group);
note: an animal counts toward the proportion when its best two-pair combined accuracy reaches 70%.
Unless otherwise stated, single-day comparisons use a three-day moving average, while continuous multi-day series plot the three-day average for each day, so the 32 days of data yield 30 consecutive values; the best and second-best visuospatial pairs are selected by comparing 30-day averages.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings; the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
As shown in FIG. 1, the invention provides an experiment and analysis method based on visuospatial paired-associate learning in a touch-screen operant chamber: the raw training-stage data generated during the chamber's paired-associate learning experiment are extracted; the graphic-stimulus combination of each trial and the screen position touched by the animal are read out in sequence; with the correct touch position preset, each trial is automatically scored as correct or incorrect; the individual animal's total training-stage accuracy, per-pair accuracy and per-pair perseveration rate are calculated; the animals advancing to the PAL test stage are determined from the per-pair perseveration rates on the last day of training; and the raw data of the animals that advance are extracted, their total test-stage accuracy and per-pair accuracy are calculated, and their cognitive ability is evaluated.
The raw training-stage data are the raw data files produced by the touch-screen chamber, from which quantities beyond the original total accuracy can be extracted. A calculation program written for the format and fields of the existing raw data can read out, in sequence, the stimulus combination of every trial (stimulus type and positions) and the screen position the animal touched, automatically score each trial against the preset correct position, and finally compute the accuracy and perseveration rate of each of the three visuospatial pairs.
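As a minimal sketch of such a calculation program (assuming a simplified per-trial record; the actual raw-data format exported by commercial touch-screen chambers differs, and all field and function names here are hypothetical):

```python
from dataclasses import dataclass

# Hypothetical per-trial record parsed from the chamber's raw data.
@dataclass
class Trial:
    day: int
    stimulus: str         # e.g. "flower", "plane", "spider"
    touched: str          # screen position the animal touched, or "blank"
    is_correction: bool   # forced repeat of the previous trial after an error

# Assumed preset stimulus-to-correct-position mapping (three visuospatial pairs).
CORRECT_POS = {"flower": "left", "plane": "middle", "spider": "right"}

def is_correct(t: Trial) -> bool:
    return t.touched == CORRECT_POS[t.stimulus]

def total_accuracy(trials: list[Trial]) -> float:
    return 100.0 * sum(map(is_correct, trials)) / len(trials)

def pair_accuracy(trials: list[Trial], stimulus: str) -> float:
    pair = [t for t in trials if t.stimulus == stimulus]
    return 100.0 * sum(map(is_correct, pair)) / len(pair)

def perseveration_rate(trials: list[Trial], stimulus: str) -> float:
    # Assumed definition: share of a pair's trials that are correction trials.
    pair = [t for t in trials if t.stimulus == stimulus]
    return 100.0 * sum(t.is_correction for t in pair) / len(pair)

def advances_to_pal(last_day: list[Trial], threshold: float = 40.0) -> bool:
    # Transition criterion: every pair's perseveration rate on the last
    # training day must be below the threshold.
    return all(perseveration_rate(last_day, s) < threshold for s in CORRECT_POS)
```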
The analysis method for the visuospatial paired-associate learning experiment uses the following experimental parameters across the training and test stages: the individual animal's total training-stage accuracy, its per-pair training-stage accuracy, its total test-stage accuracy, and its per-pair test-stage accuracy.
The invention also provides the following computable analysis parameters: the individual's uncorrected accuracy in the training stage, its per-pair uncorrected-trial accuracy in the training stage, and the accuracy at each screen position in the training stage and in the test stage.
All experimental data can be extracted from the raw operating data of the touch-screen chamber. Single-day comparisons use a three-day moving average, and continuous multi-day series use the three-day average for each day, so the 32 days of data yield 30 consecutive values.
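A sketch of this three-day averaging, assuming one accuracy value per day:

```python
def three_day_average(daily: list[float]) -> list[float]:
    # Three-day moving average: 32 daily values yield 30 smoothed points.
    return [sum(daily[i:i + 3]) / 3 for i in range(len(daily) - 2)]
```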
The experimental procedure of the training stage is as follows:
In the touch-screen chamber, each trial starts when the animal nose-pokes the food magazine, after which the magazine light turns off and graphic stimuli appear at the three screen positions (left, middle, right). Of the two graphic stimuli displayed, one is at its correct position (S+) and the other at a wrong position (S-). If the animal touches the stimulus at the correct position, the stimulus disappears and the animal is rewarded; if it touches the stimulus at the wrong position, the stimulus disappears, the chamber light turns on for 5 seconds, and no reward is given. Unlike the conventional procedure, in the training stage each of the three screens is assigned one preset correct graphic stimulus, i.e. three correct visuospatial pairs in total. In every trial the same stimulus appears on two randomly chosen screens, one of which is the screen whose preset correct stimulus it is, while the third screen is blank; each combination therefore contains one touch screen at the correct position, and the stage comprises six different stimulus-combination types. If the animal touches the stimulus at the correct position, the trial is scored correct and the next trial presents a random combination with a different stimulus or position; if it touches the stimulus at the wrong position or the blank screen, the trial is scored incorrect and the identical combination is presented again (a correction trial) until the animal touches the correct stimulus. The training stage is preferably 14 days, with daily sessions of at most 1 hour that stop once 100 trials are reached, so that the animal masters the stimulus-position association rule; other numbers of days, session lengths and trial counts are of course possible. In this daily training task, the experiment records:
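The recorded quantities (the formula images are again missing; a plausible reconstruction) are the daily total accuracy, together with the per-pair accuracy and per-pair perseveration rate as reconstructed above:

$$\text{Total accuracy}_{\text{day}} = \frac{N^{\text{correct}}}{N^{\text{trials}}} \times 100\%$$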
In the formula for the individual's training-stage perseveration rate, if a visuospatial pair first appears on the 100th (final) trial and the animal chooses incorrectly, that appearance of the pair is not counted among its uncorrected trials.
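A sketch of the training-stage trial scheduler, including the correction-trial rule and the final-trial edge case just described (the respond callback stands in for the live animal and is hypothetical):

```python
import random

POSITIONS = ("left", "middle", "right")
PAIRS = {"flower": "left", "plane": "middle", "spider": "right"}  # assumed pairs

def new_combo():
    # Two screens show the same stimulus: one at its correct position and
    # one at a randomly chosen wrong position; the third screen stays blank.
    stim = random.choice(list(PAIRS))
    wrong = random.choice([p for p in POSITIONS if p != PAIRS[stim]])
    return stim, wrong

def training_session(respond, max_trials=100):
    records, pending = [], None
    for i in range(max_trials):
        stim, wrong = pending if pending is not None else new_combo()
        touched = respond(stim, wrong)          # position the animal touches
        correct = touched == PAIRS[stim]
        records.append({"trial": i + 1, "stim": stim, "correct": correct,
                        "correction": pending is not None})
        # Error: repeat the identical combination (correction trial);
        # correct answer: draw a fresh random combination next.
        pending = (stim, wrong) if not correct else None
    # Edge case: a pair whose first (uncorrected) showing falls on the final
    # trial and is answered wrongly is excluded from the uncorrected counts.
    last = records[-1]
    if not last["correction"] and not last["correct"]:
        last["exclude_from_uncorrected"] = True
    return records
```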
The per-pair perseveration-rate parameter of the training stage reflects whether the individual animal has effectively mastered the pairing rule of that stage; its purpose is to improve the stability of the experiment.
In the invention, an animal whose perseveration rate is below 40% for each visuospatial pair on the last day of the training stage is considered to have mastered the rules of the pairing paradigm and enters the test stage.
After training is completed and the criterion is met, the PAL test stage begins. Besides the total accuracy, the invention applies a new analysis method and new evaluation indices to analyse, dynamically and at fine granularity, how the score of each pair changes and how the pairs relate to one another. Specifically, the same graphic stimulus appears at two randomly chosen positions, the remaining screen is blank, and one of the two stimulus positions is the correct one. The stimulus types and their correct positions are identical to those of the preceding training stage. Regardless of whether the animal touches the stimulus correctly, the next trial is drawn at random from the remaining five stimulus-combination types, and this repeats until 100 trials are completed. The test stage lasts 32 days. In this PAL test-stage task, the experiment records:
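The recorded quantities are the test-stage total accuracy and per-pair accuracy; under the same reconstruction as above:

$$\text{Total accuracy} = \frac{N^{\text{correct}}}{N^{\text{trials}}} \times 100\%, \qquad \text{Accuracy}_i = \frac{N_i^{\text{correct}}}{N_i^{\text{trials}}} \times 100\%$$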
the total accuracy of experimental animal individuals in the test stage and the accuracy of pairing combination of single pair of visual spaces of experimental animal individuals in the test stage can be analyzed to judge the learning effect of experimental animals in different groups, so that the problem that the existing experimental analysis method is poor in distinguishing effectiveness and always presents a floor effect in a long time is solved.
By analysing the learning outcome of each single visuospatial pair, the invention observes group differences in the accuracy of the best pair and in the number of animals reaching criterion, and likewise in the combined accuracy of the best two pairs, so as to observe the learning and cognitive abilities of each group of experimental animals.
In the invention, 6 experimental animals comprising a control group and a treated group were used. After the 14-day training stage, animals whose perseveration rate for each visuospatial pair on the last day was below 40% advanced to the PAL test stage, i.e. they had mastered and understood the learning paradigm, as shown in FIG. 2.
As shown in FIG. 3, in the PAL test stage the accuracy of each of the three visuospatial pairs was extracted from the raw data; the figure shows the total PAL-test accuracy of individual animal Y over the 32-day test.
FIG. 4 shows the accuracies of the three visuospatial pairs for animal Y; this index reflects learning ability far better than the total test accuracy. As shown in FIG. 5, over the 32-day PAL test the total accuracy of the two groups differed significantly and continuously only after day 28. When 5-day averages are taken as blocks, a significant difference appears at block 6 (FIG. 6), and the 30-day means of total accuracy differ between the two groups at P < 0.001 (FIG. 7).
With the new experimental parameters (FIG. 8), the accuracy of the best single visuospatial pair differs significantly between the control and treated groups in every block, and far more control animals reach criterion (defined as 80% accuracy on the best pair). As shown in FIG. 9, for the best two pairs, the control group's combined accuracy differs significantly from the treated group's from block 4 onward, and more control animals reach criterion.
These results show that within the same period more control animals mastered the visuospatial pairs and learned faster than treated animals. Compared with the existing indices, the new analysis distinguishes the learning difference between the two groups within a shorter period and with high stability, so its discriminative validity for learning ability far exceeds that of the existing analysis method.
This analysis shows that the cognitive ability of the control animals is significantly higher than that of the treated animals.
By setting a training-to-test transition criterion within the original touch-screen chamber protocol, the invention ensures that animals have mastered the basic rules required for the subsequent pairing task by the end of training, so that the test stage evaluates learning ability and learning outcome more objectively. The invention thus provides comprehensive, accurate experimental data and analysis methods for evaluating paired-associate learning in the touch-screen operant chamber, which is of great significance for assessing complex cognitive abilities in experimental animals.
It is apparent that the above embodiments are merely illustrative examples and do not limit the scope of the invention. Other variations or modifications based on the above description will be apparent to those of ordinary skill in the art; it is neither necessary nor possible to enumerate all embodiments here, and obvious variations or modifications derived from them remain within the scope of the invention.
Claims (4)
1. An experiment and analysis method based on visuospatial paired-associate learning in a touch-screen operant chamber, characterised in that: the raw training-stage data generated during the chamber's paired-associate learning experiment are extracted; the graphic-stimulus combination of each trial and the screen position touched by the experimental animal are read out in sequence; with the correct touch position preset, each trial is automatically scored as correct or incorrect; the individual animal's total training-stage accuracy, per-pair accuracy and per-pair perseveration rate are calculated; the animals advancing to the PAL test stage are determined from the per-pair perseveration rates on the last day of training; and the raw data of the animals that advance are extracted, their total test-stage accuracy and per-pair accuracy are calculated, and their cognitive ability is evaluated;
in the training stage, each of the three touch screens in the chamber is assigned one preset correct graphic stimulus; in every trial the same stimulus appears on two randomly chosen screens, one of which is the screen whose preset correct stimulus it is, while the third screen is blank; if the animal touches the stimulus at the correct position, the trial is scored correct and the next trial presents a random combination differing in stimulus or position; if the animal touches the stimulus at the wrong position or the blank screen, the trial is scored incorrect and the identical stimulus-and-position combination is presented again until the animal touches the stimulus at the correct position; the perseveration rate of a single visuospatial pair is calculated by the following formula:
the training stage is set to 14 days, each daily session lasting at most 1 hour and stopping once 100 trials are completed; the accuracy of a single visuospatial pair for an individual animal in the training stage is:
the animals that complete training enter the PAL test stage; under program control, in every trial the same graphic stimulus appears on two randomly chosen screens of the three in the chamber and the remaining screen is blank; regardless of whether the animal touches the stimulus correctly, the next trial is drawn at random from the remaining stimulus-combination types; the individual's total test-stage accuracy and per-pair accuracy are calculated by the following formulas.
2. The method of claim 1, characterised in that, based on the per-pair perseveration rates obtained by each animal on the last day of the training stage, animals whose perseveration rate is below 40% for each visuospatial pair advance to the PAL test stage, and animals not meeting the requirement are excluded.
3. The method of claim 1, characterised in that the graphic stimuli used, and the correct position corresponding to each stimulus, are identical in the training stage and the test stage.
4. The method of claim 1, characterised in that it further comprises analysis of the individual animal's uncorrected accuracy, per-pair uncorrected-trial accuracy and per-screen accuracy in the training stage, and of the per-screen accuracy in the test stage.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010912439.7A | 2020-09-02 | 2020-09-02 | CN111990971B (en) Experiment and analysis method based on visuospatial paired-associate learning in a touch-screen operant chamber
Publications (2)
Publication Number | Publication Date |
---|---|
CN111990971A CN111990971A (en) | 2020-11-27 |
CN111990971B true CN111990971B (en) | 2023-07-07 |
Family
ID=73465228
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010912439.7A Active CN111990971B (en) | 2020-09-02 | 2020-09-02 | Experiment and analysis method based on touch screen operation box vision space pairing learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111990971B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103125406A (en) * | 2013-03-19 | 2013-06-05 | 郑州大学 | Visual cognitive behavioral learning automatic training system of big and small mice |
CN104616231A (en) * | 2013-11-04 | 2015-05-13 | 中国科学院心理研究所 | Cloud-based psychological laboratory system and using method thereof |
CN106614383A (en) * | 2017-02-27 | 2017-05-10 | 中国科学院昆明动物研究所 | Training method and device for correcting screen contact way of macaque |
WO2018112103A1 (en) * | 2016-12-13 | 2018-06-21 | Akili Interactive Labs, Inc. | Platform for identification of biomarkers using navigation tasks and treatments using navigation tasks |
CN109566447A (en) * | 2018-12-07 | 2019-04-05 | 中国人民解放军军事科学院军事医学研究院 | The research system of non-human primate movement and cognitive function based on touch screen |
CN110199902A (en) * | 2019-07-07 | 2019-09-06 | 江苏赛昂斯生物科技有限公司 | Toy touch screen conditioned behavior control box |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9526430B2 (en) * | 2014-09-02 | 2016-12-27 | Apple Inc. | Method and system to estimate day-long calorie expenditure based on posture |
US10334823B2 (en) * | 2016-01-31 | 2019-07-02 | Margaret Jeannette Foster | Functional communication lexigram device and training method for animal and human |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |