WO2023070701A1 - Automatic animal pain test system - Google Patents
Automatic animal pain test system
- Publication number
- WO2023070701A1 (PCT/CN2021/128364)
- Authority
- WO
- WIPO (PCT)
Classifications
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0048—Detecting, measuring or recording by applying mechanical forces or stimuli
- A61B5/4824—Touch or pain perception evaluation
- A61B5/4827—Touch or pain perception evaluation assessing touch sensitivity, e.g. for evaluation of pain threshold
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
- A61B2503/40—Evaluating a particular growth phase or type of persons or animals: Animals
- A61B2503/42—Evaluating a particular growth phase or type of persons or animals for laboratory research
Definitions
- the invention relates to the field of medical devices, and in particular to an automatic animal pain testing system.
- the purpose of the embodiments of the present invention is to provide an automated animal pain testing system that can test animal pain automatically, saving time and effort while achieving high accuracy.
- the embodiment of the present invention provides an automatic animal pain test system, comprising an activity device, a camera device, and a computer device connected to the camera device;
- the activity device includes a movable box, a microneedle array located in the movable box, and an angle-adjustment bracket for adjusting the movable box;
- the camera device is used to record a video of the animal's movement track in the movable box and to send the movement track video to the computer device;
- the computer device includes:
- at least one processor;
- at least one memory for storing at least one program;
- when the at least one program is executed by the at least one processor, the at least one processor implements the following steps:
- the coordinate information includes coordinate values and time series;
- the training method of the recognizer includes:
- the positive sample pictures and the negative sample pictures are input into the Haar cascade classification model for training;
- the Haar cascade classification model whose recognition accuracy reaches a preset accuracy is used as the recognizer.
- the determining coordinate information according to the location information specifically includes:
- the coordinate system and the coordinate information of the animal in the motion track video are determined according to the position information and the outline.
- the processor also implements the following steps:
- Resample the coordinate information using linear interpolation to obtain a first coordinate value and a first time series containing a preset number of frames within a preset time, and use the first coordinate value and the first time series as the coordinate information.
- the processor also implements the following steps:
- processing the coordinate information to obtain motion feature information specifically includes:
- a histogram of the position-distance distribution is determined according to the distance time series and the coordinate information.
- the training method of the prediction model includes:
- principal components are extracted from the standardized two-dimensional matrix, the two most strongly correlated principal components are fitted to obtain a fitted straight line between the pain value and the motion features, and the fitted line is used as the prediction model.
- the training method of the prediction model includes:
- the classification neural network is trained, and the classification neural network whose prediction accuracy reaches a preset accuracy is used as the prediction model; the classification neural network includes an input layer, three hidden layers, and an output layer.
- the movable box includes a base and baffles, and the base and the baffles are assembled with mortise-and-tenon joints.
- the movable box is divided into a rest area and an activity area; the rest area and the activity area are separated by a baffle, and the baffle is assembled with the base by a mortise-and-tenon joint.
- the microneedle array is installed in the activity area, and the tip radius of the microneedles gradually decreases starting from the end near the rest area.
- Implementing the embodiments of the present invention has the following beneficial effects: the activity device produces a pain stimulus for the animal under test; the camera device records the movement track video of the animal; the computer device extracts position information, coordinate information, and motion feature information from the movement track video, and feeds the motion feature information into the trained prediction model to obtain a predicted pain value. Animal pain is thereby tested automatically, with no manual intervention required, saving time and effort while achieving high accuracy.
- Fig. 1 is a schematic structural diagram of an automated animal pain test system provided by an embodiment of the present invention;
- FIG. 2 is a schematic flowchart of steps performed by a processor according to an embodiment of the present invention
- Fig. 3 is an interface diagram showing animal coordinate information provided by an embodiment of the present invention.
- Fig. 4 is an interface diagram for displaying animal movement characteristic information provided by an embodiment of the present invention.
- the embodiment of the present invention provides an automatic animal pain testing system, comprising an activity device, a camera device, and a computer device connected to the camera device;
- the activity device includes a movable box, a microneedle array located in the movable box, and an angle-adjustment bracket for adjusting the movable box;
- the camera device is used to record a video of the animal's movement track in the movable box and to send the movement track video to the computer device;
- the computer device includes:
- at least one processor;
- at least one memory for storing at least one program;
- when the at least one program is executed by the at least one processor, the at least one processor implements the following steps:
- S200 Determine coordinate information according to the position information, and process the coordinate information to obtain motion feature information;
- the coordinate information includes coordinate values and time series;
- the animal to be tested in this embodiment is a mouse; the dark environment in which mice live should be simulated as much as possible, so a black light-shielding cloth covers the outside of the activity device.
- an LED light group is used as the camera light source.
- the computer device may be any of various types of electronic equipment, including but not limited to terminals such as desktop computers and laptop computers.
- the test system supports multiple setups working in parallel, which can greatly save the experimenters' time.
- the movable box includes a base and baffles, and the base and the baffles are assembled with mortise-and-tenon joints.
- the movable box is divided into a rest area and an activity area; the rest area and the activity area are separated by a baffle, and the baffle is assembled with the base by a mortise-and-tenon joint.
- the front baffle of the experimental device is made of transparent, laser-cut acrylic, and the rest of the device is made of acrylic covered with light-shielding stickers.
- the base and the baffles, as well as the baffle between the rest area and the activity area, are assembled with mortise-and-tenon joints, which makes the device easy to assemble and clean.
- the microneedle array is installed in the activity area, and the tip radius of the microneedles gradually decreases starting from the end near the rest area.
- the microneedle array in this embodiment includes 4 sub-arrays; the microneedle height is 3 mm, the spacing between microneedles is 1.8 mm, and the 4 sub-arrays of 12 cm × 6 cm together form a microneedle runway.
- the microneedle tip radius of the No. 1 array gradually changes from 0.03 mm to 0.08 mm;
- the microneedle tip radius of the No. 2 array gradually changes from 0.09 mm to 0.14 mm;
- the microneedle tip radius of the No. 3 array gradually changes from 0.15 mm to 0.20 mm;
- the activity area of the movable box is 48 cm long, 6 cm wide, and 20 cm high; the rest area is 12 cm long, 6 cm wide, and 20 cm high.
- the sharpest microneedles, with a tip radius of 0.03 mm, are sharp enough to induce paw withdrawal in mice but do not puncture the skin of the paws.
- based on the gradient microneedle array design, different degrees of pain can be produced on the paws of the mice; because of the pain, mice tend to avoid the sharp microneedle area and prefer to move in the blunter microneedle area. By designing microneedles whose tip diameter changes in a continuous gradient, the degree of pain caused to the mice also changes continuously.
- the gradient microneedle array can stably induce specific pain in mice, and the resulting pain-related behavioural features differ significantly from the mice's other, non-pain behaviours, making them easier to extract and analyse.
- with a non-gradient plate, all microneedles have the same sharpness regardless of the mouse's pain threshold (whether or not the mouse is pain-averse), so the extracted motion features differ little; a gradient plate helps produce clearly varying features that can be related to pain values.
- the experimental procedure is as follows: first, build the activity device according to the design drawings, place the mouse setup in a dark environment, turn on the infrared LED light source on the front baffle of the device, and turn on the camera at the front of the experimental device to check that it works normally. Then place the mouse in the rest area of the activity device to rest for 5 minutes; after the mouse has adapted to the environment of the experimental device, start the camera, open the barrier between the rest area and the experimental area, and close the barrier once the mouse has moved into the experimental area. Finally, after the mouse has been active in the experimental area for 15 minutes, turn off the camera and transfer the mouse from the experimental device back to its cage.
- the training method of the recognizer includes:
- the Haar cascade classification model involves the AdaBoost algorithm, cascading, Haar-like features, fast computation of the integral image, and so on.
- the position of the mouse is determined by the frame-difference method and the ROI area; a large number of positive sample pictures are collected centred on that position, and the remaining pictures are used as negative samples.
- negative sample pictures are collected in the opposite way to positive sample pictures, by cropping at random positions and with random sizes.
- the recognizer uses the frame-difference method and the ROI area to supplement undetected mouse postures, and the postures that the Haar recognizer cannot detect are fed back into the Haar cascade classifier model for further training. Iteration stops when the recognition accuracy reaches 99.9%, yielding the final version of the Haar cascade mouse recognizer, which is then used to identify and locate the mouse position in each frame of the video.
- an automatic sample collector can also be designed to automatically supplement positive samples, negative samples, and undetected mouse postures, enhancing the recognition rate and accuracy of the recognition algorithm.
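The frame-difference localization mentioned above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the function name, the threshold value, and the synthetic frames are all assumptions; it locates the centroid of pixels that changed between two consecutive grayscale frames.

```python
import numpy as np

def locate_by_frame_difference(prev_frame, curr_frame, threshold=30):
    """Return the (x, y) centroid of pixels that changed between two
    consecutive grayscale frames (the frame-difference method)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > threshold)
    if xs.size == 0:
        return None  # no motion detected between this pair of frames
    return float(xs.mean()), float(ys.mean())

# Synthetic example: a bright 5x5 "mouse" moves from (10, 10) to (40, 20).
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[10:15, 10:15] = 255
curr[20:25, 40:45] = 255
print(locate_by_frame_difference(prev, curr))  # centroid of all changed pixels
```

Note that the centroid covers both the vacated and the newly occupied region; in practice a detector like this is combined with an ROI crop around the previous position, as the text describes, before harvesting positive sample pictures.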
- the determining coordinate information according to the location information specifically includes:
- adaptive histogram equalization is used to optimize edge information by enhancing contrast within a sliding window.
- the Canny edge detection algorithm is used to detect edges in the image;
- the probabilistic Hough transform is applied to the Canny edge result to obtain the straight line segments in the edges; cluster analysis is then performed on these segments to group those near the edges of the glass box, collinear segments within each cluster are merged into a single straight line, and the longest straight line in each cluster is taken as an edge of the area. The pairwise-parallel rule for cuboid edges is then used to compensate for instabilities in the video and obtain the most accurate possible outline of the glass box.
- the pixel position of the mouse is converted into world coordinates through a coordinate transformation; the world coordinate of each frame is saved, and the position time series of the whole video is written to a txt file. The current horizontal and vertical trajectory of the mouse, as well as the histogram of its position distribution, are displayed in real time.
- Coordinate transformation supports two modes: fully automatic calibration and manual calibration. In some harsh scenes the fully automatic calibration may not perform well; manual calibration can then be selected, and the calibration map of each video is saved so that problematic calibrations can be inspected. As shown in Figure 3, Figure 3 is an interface displaying coordinate information during the experiment.
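A pixel-to-world coordinate transformation of the kind described can be sketched as a planar homography fitted from four calibrated corner correspondences. This is an assumed construction, not the patent's calibration routine: the pixel coordinates below are invented, and the world rectangle is the 48 cm × 6 cm activity-area floor from the embodiment.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 projective transform H mapping four pixel points
    (src) onto four world points (dst), with H[2, 2] fixed to 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def pixel_to_world(H, px, py):
    """Apply H to one pixel coordinate and dehomogenise."""
    u, v, w = H @ np.array([px, py, 1.0])
    return u / w, v / w

# Assumed pixel corners of the activity area, mapped to its 48 cm x 6 cm floor.
src = [(102, 410), (1180, 420), (1150, 530), (130, 525)]
dst = [(0, 0), (48, 0), (48, 6), (0, 6)]
H = homography_from_points(src, dst)
```

In the manual-calibration mode described above, the four source points would simply be clicked by the experimenter on the saved calibration map.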
- the processor also implements the following steps:
- S230 Resample the coordinate information using linear interpolation to obtain a first coordinate value and a first time series containing a preset number of frames within a preset time, and use the first coordinate value and the first time series as the coordinate information.
- the processor also implements the following steps:
- S240 Perform centred smoothing filtering on the first coordinate value and the first time series using a sliding window to obtain a second coordinate value and a second time series, and use the second coordinate value and the second time series as the coordinate information.
- the obtained time series of mouse positions is resampled using linear interpolation: the original 25-35 frames per second (the camera frame rate varies between 25 and 35 frames) are uniformly resampled to 60 frames per second.
- a 1-second sliding window is used to perform centred smoothing filtering on the data.
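Steps S230 and S240 can be sketched together: linear interpolation onto a uniform 60 fps grid followed by a 1-second centred moving average. This is a minimal sketch under the assumption that the "centred smoothing filter" is a simple moving average; the function name and example signal are illustrative.

```python
import numpy as np

def resample_and_smooth(t, x, fps=60, win_s=1.0):
    """Linearly interpolate an irregularly-timed position series onto a
    uniform fps grid, then centre-smooth it with a moving-average window."""
    t_new = np.arange(t[0], t[-1], 1.0 / fps)
    x_new = np.interp(t_new, t, x)           # linear interpolation resampling
    win = max(1, int(win_s * fps))
    kernel = np.ones(win) / win
    # mode="same" keeps the length; edges are averaged over a partial window.
    return t_new, np.convolve(x_new, kernel, mode="same")

# Example: a camera delivering ~30 fps over 10 s, resampled to 60 fps.
t = np.linspace(0.0, 10.0, 301)          # 301 timestamps over 10 seconds
x = np.sin(0.5 * t) * 20 + 24            # position along the 48 cm runway
t60, x60 = resample_and_smooth(t, x)
```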
- processing the coordinate information to obtain motion feature information specifically includes:
- 12 features of position frequency distribution, 20 features of velocity frequency distribution, 12 features of position distance frequency distribution, 12 features of position distance distribution, 1 feature of total distance, 12 features of position positive-velocity distribution, 12 features of position negative-velocity distribution, 12 features of acceleration frequency distribution, 12 features of position positive-acceleration distribution, 12 features of position negative-acceleration distribution, 15 features of longitudinal position frequency distribution, 14 features of longitudinal velocity frequency distribution, and 12 features of longitudinal position distance distribution, for a total of 166 features.
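Two of the feature groups above can be illustrated with simple histograms over the resampled trajectory. This is a sketch, not the patent's feature extractor: the bin counts (12 for position, 20 for velocity) and the 48 cm track length come from the text, while the speed cap `v_max` and the function names are assumptions.

```python
import numpy as np

def position_frequency_features(x, track_len=48.0, n_bins=12):
    """Fraction of frames the animal spends in each of n_bins equal
    segments along the runway (a position frequency distribution)."""
    hist, _ = np.histogram(x, bins=n_bins, range=(0.0, track_len))
    return hist / len(x)

def velocity_frequency_features(x, fps=60, n_bins=20, v_max=50.0):
    """Distribution of frame-to-frame speeds (cm/s), binned into n_bins;
    speeds above v_max are clipped into the top bin."""
    v = np.abs(np.diff(x)) * fps
    hist, _ = np.histogram(np.clip(v, 0.0, v_max), bins=n_bins,
                           range=(0.0, v_max))
    return hist / len(v)

# Illustrative trajectory along the runway.
x = np.linspace(0.0, 47.9, 600)
pos = position_frequency_features(x)
vel = velocity_frequency_features(x)
```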
- the training method of the prediction model includes:
- the number of mice in the training library is 300, and 498 features are computed for each mouse, forming 300 pieces of 498-dimensional data, i.e. a 300×498 two-dimensional motion feature matrix. The output 300×498 features are standardized column by column using the z-score algorithm to form a matrix X; the 498 values in each row of X are zero-meaned by subtracting the row mean to obtain a new matrix, the covariance matrix C of that matrix is computed, and the eigenvalues and corresponding eigenvectors of the covariance matrix are found; the eigenvectors are then arranged into a matrix from top to bottom by decreasing eigenvalue, and the top eigenvectors are taken as the principal components.
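The standardize-covariance-eigendecompose pipeline described above is classic PCA and can be sketched directly. This is an illustrative implementation, not the patent's code; the random stand-in matrix replaces the real 300×498 motion feature matrix.

```python
import numpy as np

def principal_component_scores(F, k=2):
    """z-score the feature matrix column-wise, then project it onto the
    top-k eigenvectors of its covariance matrix (classic PCA)."""
    X = (F - F.mean(axis=0)) / F.std(axis=0)  # z-score standardization
    C = np.cov(X, rowvar=False)               # covariance of the features
    vals, vecs = np.linalg.eigh(C)            # eigh: C is symmetric
    order = np.argsort(vals)[::-1]            # eigenvalues, descending
    return X @ vecs[:, order[:k]]             # scores on top-k components

# Illustrative stand-in for the 300x498 motion feature matrix.
rng = np.random.default_rng(0)
F = rng.normal(size=(300, 8)) * rng.uniform(1.0, 5.0, size=8)
scores = principal_component_scores(F)
```

A straight line between the two leading component scores and the pain values, as the text describes, could then be obtained with an ordinary least-squares fit.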
- the training method of the prediction model includes:
- the classification neural network is trained on the standardized two-dimensional matrix and the corresponding pain values, and the classification neural network whose prediction accuracy reaches a preset accuracy is used as the prediction model; the classification neural network includes an input layer, three hidden layers, and an output layer.
- the number of mice in the training library is 300, and 498 features are computed for each mouse, forming 300 pieces of 498-dimensional data, i.e. a 300×498 two-dimensional matrix.
- the output 300×498 features are standardized column by column using the z-score algorithm to obtain standardized mouse features, and the extracted mouse feature information and pain thresholds are input into the classification neural network for training.
- the artificial-intelligence network is a fully connected classification network composed of multilayer perceptrons, consisting of an input layer, three hidden layers, and an output layer.
- first, the feature information of each mouse is normalized against its corresponding top data: the sub-variable data of each interval is divided by the top sub-variable value of the corresponding variable, which eliminates the influence of each mouse's own activity level and endurance and better highlights the pain features caused by the microneedles. Second, the pressure caused by the microneedle sharpness of each interval of the gradient microneedle array is incorporated into the behavioural features of that interval: after normalization, the sub-variable values of each interval are divided by the microneedle sharpness of the corresponding interval, so that the influence of sharpness on the mouse's behaviour is taken into account. All variables after normalization and sharpness transformation, together with general characteristics of the mice such as temperature, humidity, total distance, and weight as auxiliary variables, give a total of 498 variables used as the final input-layer data.
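The two-step normalization above can be sketched as follows. This is a loose reading of the text, not the patent's formula: the function name is mine, the dwell-time values are invented, and the sharpness weights are illustrative (midpoints of the stated tip-radius ranges for arrays No. 1-3; the fourth value is purely assumed, since the No. 4 range is not given here).

```python
import numpy as np

def normalize_interval_features(raw, sharpness):
    """Divide each interval's sub-variable by the top (largest) value of
    that variable across intervals, then by the interval's microneedle
    sharpness, per the two-step normalization described in the text."""
    raw = np.asarray(raw, dtype=float)        # shape (n_variables, n_intervals)
    top = raw.max(axis=1, keepdims=True)      # per-variable top value
    return raw / top / np.asarray(sharpness, dtype=float)

# Hypothetical dwell times (s) over the four sub-array intervals, and assumed
# per-interval sharpness weights (mm).
raw = np.array([[120.0, 60.0, 30.0, 30.0]])
sharpness = np.array([0.055, 0.115, 0.175, 0.235])
normalized = normalize_interval_features(raw, sharpness)
```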
- the hidden layer consists of three linear layers and two tanh activation functions.
- the data generated by the input layer is used as the input of the hidden layer.
- the input size of the hidden layer during training is 498, the number of features; after the three linear layers and two activation functions, the output size is 10 classification results (ten pain levels).
- the output layer receives the output of the hidden layer and passes the size-10 result through the Softmax function to obtain the predicted probabilities of the ten pain values, so the result of the output layer is a classification probability of size 10.
- the loss function of this model is the cross-entropy loss, and the number of training iterations is set to 20,000. Based on the prediction loss, the corresponding weights w and biases b are updated continuously over the 20,000 updates, finally yielding the trained model.
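The forward pass of the classifier described above (three linear layers, two tanh activations, Softmax over ten pain levels, cross-entropy loss) can be sketched in numpy. The hidden-layer widths (64, 32) are assumptions; the text only fixes the input size (498) and the output size (10), and the weights here are random stand-ins for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 498 input features -> three linear layers -> 10 pain levels.
sizes = [498, 64, 32, 10]                 # 64 and 32 are assumed widths
params = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

def forward(x):
    """Forward pass: tanh after the first two linear layers, Softmax over
    the final size-10 output, yielding ten class probabilities."""
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.tanh(x)
    e = np.exp(x - x.max())               # numerically stable Softmax
    return e / e.sum()

def cross_entropy(probs, label):
    """Per-sample cross-entropy loss, as minimized over the 20,000 updates."""
    return -np.log(probs[label])

probs = forward(rng.normal(size=498))     # one random feature vector
```

Training would repeatedly compute this loss over the 300-mouse library and update every `w` and `b` by gradient descent, which is the iterative update of coefficients the text describes.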
- Implementing the embodiments of the present invention has the following beneficial effects: the activity device produces a pain stimulus for the animal under test; the camera device records the movement track video of the animal; the computer device extracts position information, coordinate information, and motion feature information from the movement track video, and feeds the motion feature information into the trained prediction model to obtain a predicted pain value. Animal pain is thereby tested automatically, with no manual intervention required, saving time and effort while achieving high accuracy.
Abstract
An automatic animal pain test system, comprising a moving apparatus, a camera apparatus, and a computer device (8) connected to the camera apparatus. The moving apparatus comprises a moving box body (2), a microneedle array (5) located in the moving box body (2), and an angle adjustment support (4). The camera apparatus is configured to photograph a motion track video and send same to the computer device (8). The computer device (8) comprises at least one processor to implement the following steps: obtaining the motion track video, and inputting the motion track video into a trained recognizer to obtain position information; determining coordinate information according to the position information, and processing the coordinate information to obtain motion feature information; and inputting the motion feature information into a trained prediction model to obtain a predicted pain value. Animal pain can be automatically tested, time and labor are saved, the accuracy is high, and the system can be widely applied to the field of medical instruments.
Description
The invention relates to the field of medical devices, and in particular to an automatic animal pain testing system.
Pain has a major impact on human health because it is very difficult to treat and is widely associated with various diseases; understanding the causes of pain and quantifying its degree has been a key challenge throughout medical history and in modern medicine. The translation of basic pain research into clinical applications is limited by the lack of effective tools to quantify pain. Pain assessment is often subjective and, for ethical reasons, cannot generally be performed directly on humans. Rodents such as mice and rats are therefore used as standard models, and their responses to noxious stimuli are measured. Pain testing methods have greatly advanced the study of the genes, neurotransmitters, and cell receptors involved in the mechanisms of pain perception, as well as the discovery of pain-relieving drugs. However, traditional manual measurement is time-consuming and laborious, and the results are easily affected by external factors.
Contents of the Invention
In view of this, the purpose of the embodiments of the present invention is to provide an automated animal pain testing system that can test animal pain automatically, saving time and effort while achieving high accuracy.
An embodiment of the present invention provides an automatic animal pain test system, comprising an activity device, a camera device, and a computer device connected to the camera device, wherein:
The activity device includes a movable box, a microneedle array located in the movable box, and an angle-adjustment bracket for adjusting the movable box;
The camera device is used to record a video of the animal's movement track in the movable box and to send the movement track video to the computer device;
The computer device includes:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor implements the following steps:
acquiring the movement trajectory video and inputting it into a trained recognizer to obtain position information;
determining coordinate information from the position information and processing the coordinate information to obtain motion feature information, the coordinate information comprising coordinate values and a time series; and
inputting the motion feature information into a trained prediction model to obtain a predicted pain value.
Optionally, the training method of the recognizer comprises:
acquiring movement trajectory video samples and determining the animal position in the samples using the frame difference method and a region of interest;
collecting positive and negative sample images from the movement trajectory video samples according to the animal position;
inputting the positive and negative sample images into a Haar cascade classification model for training; and
taking as the recognizer the Haar cascade classification model whose recognition accuracy reaches a preset accuracy.
Optionally, determining the coordinate information from the position information specifically comprises:
determining the outline of the activity box using the Canny edge detection algorithm and the Hough transform line detection algorithm; and
determining a coordinate system, and the coordinate information of the animal in the movement trajectory video, from the position information and the outline.
Optionally, the processor further implements the following step:
resampling the coordinate information by linear interpolation to obtain first coordinate values and a first time series containing a preset number of frames within a preset time, and using the first coordinate values and the first time series as the coordinate information.
Optionally, the processor further implements the following step:
applying centered smoothing filtering to the first coordinate values and the first time series with a sliding window to obtain second coordinate values and a second time series, and using the second coordinate values and the second time series as the coordinate information.
Optionally, processing the coordinate information to obtain the motion feature information specifically comprises:
differentiating the coordinate information to obtain a velocity time series;
determining a velocity-position distribution histogram from the velocity time series and the coordinate information;
integrating the velocity time series to obtain a distance time series; and
determining a distance-position distribution histogram from the distance time series and the coordinate information.
Optionally, the training method of the prediction model comprises:
acquiring motion feature information of animal test samples and the corresponding pain values;
converting the motion feature information of the animal test samples into a two-dimensional matrix and standardizing the two-dimensional matrix; and
extracting principal components from the standardized two-dimensional matrix, fitting the two most correlated principal components to obtain a line relating pain value to motion features, and using the fitted line as the prediction model.
Optionally, the training method of the prediction model comprises:
acquiring motion feature information of animal test samples and the corresponding pain values;
converting the motion feature information of the animal test samples into a two-dimensional matrix and standardizing the two-dimensional matrix; and
training a classification neural network with the standardized two-dimensional matrix and the corresponding pain values, and using as the prediction model the classification neural network that reaches the required prediction accuracy; the classification neural network comprises one input layer, three hidden layers, and one output layer.
Optionally, the activity box comprises a base and baffles assembled to the base by mortise-and-tenon joints; the activity box is divided into a rest area and an activity area separated by a baffle, which is likewise assembled to the base by a mortise-and-tenon joint.
Optionally, the microneedle array is installed in the activity area, and the tip radius of the microneedles gradually decreases starting from the end near the rest area.
Implementing the embodiments of the present invention has the following beneficial effects: the activity apparatus produces a pain stimulus for the animal under test; the camera device captures a video of the animal's movement trajectory; and the computer device extracts position information, coordinate information, and motion feature information from the video, then feeds the motion feature information into a trained prediction model to obtain a predicted pain value. Animal pain testing is thereby fully automated; the measurement requires no manual intervention, saves time and labor, and achieves high accuracy.
Fig. 1 is a schematic structural diagram of an automated animal pain testing system provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the steps performed by a processor according to an embodiment of the present invention;
Fig. 3 is an interface diagram showing animal coordinate information provided by an embodiment of the present invention;
Fig. 4 is an interface diagram showing animal motion feature information provided by an embodiment of the present invention.
The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments. The step numbers in the following embodiments are provided only for convenience of description and impose no limitation on the order of the steps; the execution order of the steps in the embodiments may be adjusted according to the understanding of those skilled in the art.
Referring to Fig. 1 and Fig. 2, an embodiment of the present invention provides an automated animal pain testing system, comprising an activity apparatus, a camera device, and a computer device connected to the camera device; wherein,
the activity apparatus comprises an activity box, a microneedle array located inside the activity box, and an angle adjustment bracket for adjusting the angle of the activity box;
the camera device is configured to capture a video of the movement trajectory of an animal inside the activity box and to send the movement trajectory video to the computer device;
the computer device comprises:
at least one processor; and
at least one memory configured to store at least one program;
wherein, when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the following steps:
S100: acquiring the movement trajectory video and inputting it into a trained recognizer to obtain position information;
S200: determining coordinate information from the position information and processing the coordinate information to obtain motion feature information, the coordinate information comprising coordinate values and a time series; and
S300: inputting the motion feature information into a trained prediction model to obtain a predicted pain value.
In Fig. 1: 1, black light-shielding barrier cloth; 2, mouse activity box; 3, infrared night-vision camera; 4, angle adjustment bracket; 5, gradient microneedle array plate in the bottom activity area; 6, friction plate in the rest area; 7, data transmission line; 8, computer device; 9, infrared LED light source.
Those skilled in the art will appreciate that when the activity apparatus is horizontal, the probability of the animal under test moving onto the microneedle array area is low. To highlight differences in pain threshold between individual animals, this embodiment therefore provides an angle adjustment bracket capable of adjusting the angle of the activity apparatus. The bracket tilts the activity apparatus at an angle to the ground, so that gravity drives the mouse to move toward the microneedle array area.
It should be noted that the animal under test in this embodiment is a mouse. To simulate the dim environment in which mice live, the activity apparatus is covered with black light-shielding barrier cloth, and the camera device is an infrared camera, for example using a near-infrared LED array emitting at a wavelength of 940 nm as the camera light source.
It should be noted that the computer device may be any of various types of electronic devices, including but not limited to terminals such as desktop and laptop computers.
It should also be noted that the testing system supports multiple units working in parallel, which can greatly reduce experimenters' testing time.
Optionally, the activity box comprises a base and baffles assembled to the base by mortise-and-tenon joints; the activity box is divided into a rest area and an activity area separated by a baffle, which is likewise assembled to the base by a mortise-and-tenon joint.
To better observe and record the behavior of the experimental animal inside the apparatus, the front baffle of the apparatus is laser-cut from transparent acrylic, while the remaining parts are cut from acrylic covered with light-shielding stickers.
Those skilled in the art will appreciate that the mortise-and-tenon assembly of the base with the baffles, and of the baffle between the rest area and the activity area, makes the apparatus easy to assemble and easy to clean.
Optionally, the microneedle array is installed in the activity area, and the tip radius of the microneedles gradually decreases starting from the end near the rest area.
Specifically, the microneedle array in this embodiment comprises four sub-arrays. The microneedles are 3 mm high with a spacing of 1.8 mm between needles, and four 12 cm × 6 cm sub-arrays form a microneedle runway. The tip radius of the microneedles in sub-array 1 varies gradually from 0.03 mm to 0.08 mm, in sub-array 2 from 0.09 mm to 0.14 mm, in sub-array 3 from 0.15 mm to 0.20 mm, and in sub-array 4 from 0.21 mm to 0.26 mm; sub-array 4 is at the end near the rest area. In this embodiment, the activity area of the activity box is 48 cm long, 6 cm wide, and 20 cm high, and the rest area is 12 cm long, 6 cm wide, and 20 cm high.
In this microneedle array, the sharpest microneedles, with a tip radius of 0.03 mm, are sharp enough to trigger the paw-withdrawal response in mice without puncturing the skin of the paw. The farther a mouse is from the rest area, the sharper the microneedle tips and the greater the pain. When a mouse with a low pain threshold enters the activity area from the rest area, the level of pain it can tolerate is low, and its activity is correspondingly reduced.
Those skilled in the art will appreciate that the gradient microneedle array design produces different degrees of pain on the mouse's paws. Because of the pain, mice tend to avoid the sharp microneedle areas and prefer the blunter ones. By designing microneedles whose diameter varies in a continuous gradient, the degree of pain inflicted on the mouse also varies continuously. The gradient microneedle array reliably induces specific pain in mice, and the resulting pain-related behavioral features differ markedly from other, non-pain behaviors and are easier to extract and analyze.
Experiments show that gradient microneedles produce different degrees of pain and distinguish different pain states, thereby yielding distinct pain signatures. With a non-gradient plate at a 0-degree tilt, the mice's movement features are indistinct: the occupancy frequency is essentially uniform across positions, no characteristic differences appear, and the positional features of mice with high and low pain thresholds are essentially the same. With a gradient plate at a 0-degree tilt, the positional distributions differ significantly: mice occupy the blunter microneedle regions more frequently and the sharper regions less frequently, producing clearly varying positional features. Selecting a suitable tilt angle also helps to encourage inactive mice to move. In short, on a non-gradient plate all microneedles are equally sharp regardless of a mouse's pain threshold, so the extracted motion features differ little; a gradient plate produces clearly varying features that can be related to pain values.
The experimental procedure is as follows. First, the activity apparatus is assembled according to the design drawings, the mouse is placed in a dim environment, the infrared LED light source on the front baffle of the apparatus is switched on, and the camera on the front of the apparatus is switched on and checked for normal operation. Next, the mouse is placed in the rest area of the activity apparatus for 5 minutes; once it has acclimatized to the apparatus, the camera is started and the partition between the rest area and the test area is opened. After the mouse has moved from the rest area into the test area, the partition is closed. Finally, after the mouse has been active in the test area for 15 minutes, the camera is switched off and the mouse is transferred from the apparatus back to its cage.
Optionally, the training method of the recognizer comprises:
S110: acquiring movement trajectory video samples and determining the animal position in the samples using the frame difference method and a region of interest;
S120: collecting positive and negative sample images from the movement trajectory video samples according to the animal position;
S130: inputting the positive and negative sample images into a Haar cascade classification model for training; and
S140: taking as the recognizer the Haar cascade classification model whose recognition accuracy reaches a preset accuracy.
It should be noted that the Haar cascade classification model involves the AdaBoost algorithm, cascading, Haar-like features, fast computation via integral images, and the like.
Specifically, the frame difference method is first used to roughly locate the moving mouse in each video frame: each frame is binarized with a threshold, and connected-component analysis is used to find the approximate position of the mouse; a region of interest (ROI) is then set to exclude irrelevant interference. The mouse position is determined from the frame difference method and the ROI, a large batch of positive sample images is collected centered on that position, and the remaining images serve as negative samples. Negative samples are collected in the opposite way to positive ones, by cropping at random positions and with random sizes. The batches of positive and negative samples are fed into the Haar cascade classification model for training, yielding an initial recognizer with an accuracy of about 80%. This recognizer is used to locate the mouse more precisely; whenever it misses a positive sample image, the frame difference method and the ROI are used to supply the undetected mouse poses, which are then fed back into the Haar cascade classifier for further training. Iteration stops when the recognition accuracy reaches 99.9%, yielding the final Haar cascade mouse recognizer, which is then used to locate the mouse in every frame of the video.
It should be noted that, to reduce the time spent collecting training samples for the recognizer, an automated sample collector can also be built to automatically supply positive samples, negative samples, and previously undetected mouse poses, improving the detection rate and accuracy of the recognition algorithm.
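The frame-difference bootstrapping step above can be sketched as follows. This is a minimal illustration with NumPy only: the actual system's Haar cascade training and connected-component analysis (typically done with OpenCV) are not shown, and the bounding-box logic here is a simplified stand-in for it.

```python
import numpy as np

def frame_diff_locate(prev_frame, curr_frame, thresh=30):
    """Roughly locate the moving mouse by thresholding the absolute
    difference between two consecutive grayscale frames and taking the
    bounding box of the changed pixels (a simplified stand-in for the
    connected-component analysis described in the text)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    mask = diff > thresh  # binarization threshold
    if not mask.any():
        return None  # no motion between this pair of frames
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()  # (x0, y0, x1, y1)

def clip_to_roi(box, roi):
    """Discard or clip detections that fall outside the region of
    interest (ROI), excluding irrelevant interference."""
    x0, y0, x1, y1 = box
    rx0, ry0, rx1, ry1 = roi
    if x1 < rx0 or x0 > rx1 or y1 < ry0 or y0 > ry1:
        return None
    return max(x0, rx0), max(y0, ry0), min(x1, rx1), min(y1, ry1)
```

Positive sample crops would then be taken around the returned box center, with negatives cropped at random positions and sizes elsewhere in the frame.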
Optionally, determining the coordinate information from the position information specifically comprises:
S210: determining the outline of the activity box using the Canny edge detection algorithm and the Hough transform line detection algorithm; and
S220: determining a coordinate system, and the coordinate information of the animal in the movement trajectory video, from the position information and the outline.
Specifically, adaptive histogram equalization is applied in a sliding-window fashion to enhance contrast and sharpen edge information. The Canny edge detection algorithm detects the edges in the image, and the statistical Hough transform performs line detection on the Canny edges to extract straight line segments. Cluster analysis is then applied to the extracted segments to group those near the edges of the glass box, and collinear segments are merged into single lines. The longest line in each cluster is taken as an edge of the region, and the rule that the edges of a cuboid are pairwise parallel is used to compensate for instabilities in the video, ensuring the most accurate possible outline of the glass box.
Specifically, based on the detected activity region of the mouse and its actual dimensions, the pixel position of the mouse is converted into world coordinates by a coordinate transformation. The world coordinates of every frame are saved, and the position time series of the entire video is written to a txt file; the mouse's current horizontal and vertical trajectories, together with a histogram of its positional distribution, are displayed in real time. The coordinate transformation can be performed in two ways: fully automatic calibration or manual calibration. In some adverse scenes the fully automatic calibration may perform poorly, in which case manual calibration can be selected; the calibration image of every video is saved so that problematic calibrations can be inspected. Fig. 3 shows an interface displaying coordinate information during an experiment.
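Once the box outline is known, the pixel-to-world conversion can be sketched as below. This assumes the detected front face can be treated with a simple linear scaling (no perspective correction); the function name and the default physical size (60 cm × 20 cm front face, i.e. the 48 cm activity area plus 12 cm rest area) are illustrative choices, not the exact calibration used by the system.

```python
import numpy as np

def pixel_to_world(px, py, box_px, box_cm=(60.0, 20.0)):
    """Map a pixel position (px, py) to world coordinates in cm by
    linear scaling, assuming box_px = (x0, y0, x1, y1) is the outline
    found by the edge/line detection stage and that it bounds a face
    of known physical size box_cm."""
    x0, y0, x1, y1 = box_px
    wx = (px - x0) / (x1 - x0) * box_cm[0]
    wy = (py - y0) / (y1 - y0) * box_cm[1]
    return wx, wy
```

A full calibration would use a homography (four point correspondences) when the camera views the box at an angle; the linear form above suffices for a fronto-parallel view.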
Optionally, the processor further implements the following step:
S230: resampling the coordinate information by linear interpolation to obtain first coordinate values and a first time series containing a preset number of frames within a preset time, and using the first coordinate values and the first time series as the coordinate information.
Optionally, the processor further implements the following step:
S240: applying centered smoothing filtering to the first coordinate values and the first time series with a sliding window to obtain second coordinate values and a second time series, and using the second coordinate values and the second time series as the coordinate information.
Specifically, the acquired mouse position time series is resampled by linear interpolation, so that video originally captured at 25 to 35 frames per second (the camera frame rate varies within this range) is uniformly resampled to 60 frames per second. Because jitter in the detected mouse position would corrupt the subsequent computation of velocity, acceleration, and other parameters, the data are smoothed with a centered filter over a 1-second sliding window.
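The resampling and smoothing steps can be sketched with NumPy as follows; `np.interp` stands in for the linear interpolation described above, and a moving-average convolution stands in for the centered sliding-window filter (window = 60 samples for 1 s at 60 fps). Function names are illustrative.

```python
import numpy as np

def resample_to_fps(t, x, fps=60):
    """Linearly interpolate an irregularly sampled position series
    (timestamps t in seconds, positions x) onto a uniform fps grid."""
    t_new = np.arange(t[0], t[-1], 1.0 / fps)
    return t_new, np.interp(t_new, t, x)

def center_smooth(x, window):
    """Centered moving-average filter over `window` samples,
    e.g. window=60 for a 1 s window at 60 fps."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")
```

The smoothed, uniformly sampled series is what the later velocity and acceleration derivations operate on.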
Optionally, processing the coordinate information to obtain the motion feature information specifically comprises:
S250: differentiating the coordinate information to obtain a velocity time series;
S260: determining a velocity-position distribution histogram from the velocity time series and the coordinate information;
S270: integrating the velocity time series to obtain a distance time series; and
S280: determining a distance-position distribution histogram from the distance time series and the coordinate information.
The resampled, filtered position time series is differentiated to obtain the velocity time series; combining the velocity time series with the position time series gives the velocity-position distribution histogram. Integrating the velocity time series gives the distance time series, and combining the distance time series with the position time series gives the distance-position distribution histogram. The position, velocity-position, and distance-position distribution histograms are displayed in the animal behavior data processing and analysis software for real-time observation. Fig. 4 shows an interface displaying motion feature information during an experiment.
Following the above computation, the system extracts 12 position-frequency features, 20 velocity-frequency features, 12 position-distance-frequency features, 12 position-distance features, 1 total-distance feature, 12 position-positive-velocity features, 12 position-negative-velocity features, 12 acceleration-frequency features, 12 position-positive-acceleration features, 12 position-negative-acceleration features, 15 longitudinal-position-frequency features, 14 longitudinal-velocity-frequency features, and 12 longitudinal-position-distance features, for a total of 166 features. The video sequence is truncated at 300 s, 600 s, and 900 s, each segment yielding 166 features, for a total of 166 × 3 = 498 features.
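The derivative/integral/histogram pipeline above can be sketched with NumPy as follows. The 12-bin layout and function name are illustrative only and do not reproduce the full 166-feature set; `np.gradient` stands in for differentiation and a cumulative sum of |v|·dt for integration.

```python
import numpy as np

def motion_features(t, x, n_bins=12, track_len_cm=48.0):
    """From a uniformly sampled position series, derive the velocity
    time series (numerical derivative), the distance time series
    (cumulative integral of |v|), a position-frequency histogram, and
    a mean-speed-per-position-bin distribution."""
    v = np.gradient(x, t)                     # velocity time series
    dt = np.diff(t, prepend=t[0])
    s = np.cumsum(np.abs(v) * dt)             # distance travelled so far
    bins = np.linspace(0.0, track_len_cm, n_bins + 1)
    pos_hist, _ = np.histogram(x, bins=bins)  # position frequency
    # mean speed per position bin (speed-position distribution)
    idx = np.clip(np.digitize(x, bins) - 1, 0, n_bins - 1)
    speed_hist = np.array(
        [np.abs(v[idx == i]).mean() if (idx == i).any() else 0.0
         for i in range(n_bins)])
    return v, s, pos_hist, speed_hist
```

Acceleration features would follow the same pattern by differentiating `v` once more.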
Optionally, the training method of the prediction model comprises:
S310A: acquiring motion feature information of animal test samples and the corresponding pain values;
S320A: converting the motion feature information of the animal test samples into a two-dimensional matrix and standardizing the two-dimensional matrix; and
S330A: extracting principal components from the standardized two-dimensional matrix, fitting the two most correlated principal components to obtain a line relating pain value to motion features, and using the fitted line as the prediction model.
Specifically, a training library for the fitted prediction model is built first. For example, with 300 mice in the training library and 498 features computed per mouse, the data form 300 records of 498 dimensions, i.e. a 300 × 498 two-dimensional motion feature matrix. The 300 × 498 features are standardized column-wise with the z-score algorithm to form a matrix X. Each row of 498 values in X is zero-centered by subtracting the row mean, giving a new matrix; the covariance matrix C of this matrix is computed, followed by its eigenvalues and corresponding eigenvectors. The eigenvectors are arranged into a matrix row by row in descending order of their eigenvalues, and the first two rows form the weight matrix P. Then Y = PX is the matrix of principal-component scores, of size 300 × 2, corresponding to the mouse pain values. The two most correlated of the extracted principal components are fitted to obtain a fitted line, and the mouse pain value is predicted from this line.
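The PCA-and-fit procedure can be sketched as follows: z-score the feature matrix, take the two eigenvectors of the covariance matrix with the largest eigenvalues, project, and fit a line through the two component scores. Matrix shapes follow the 300 × 498 example only loosely (random data stands in for real features in the test), and the function names are illustrative.

```python
import numpy as np

def pca_top2(X):
    """Column-wise z-score X (samples x features), then project onto
    the two eigenvectors of the feature covariance matrix with the
    largest eigenvalues, as in the text's Y = PX construction."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    C = np.cov(Xz, rowvar=False)          # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
    P = eigvecs[:, ::-1][:, :2]           # top-2 principal directions
    return Xz @ P                         # (samples x 2) component scores

def fit_line(y1, y2):
    """Least-squares line through the two component scores; the text
    uses such a fitted line to map features to a predicted pain value."""
    slope, intercept = np.polyfit(y1, y2, 1)
    return slope, intercept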
Optionally, the training method of the prediction model comprises:
S310B: acquiring motion feature information of animal test samples and the corresponding pain values;
S320B: converting the motion feature information of the animal test samples into a two-dimensional matrix and standardizing the two-dimensional matrix; and
S330B: training a classification neural network with the standardized two-dimensional matrix and the corresponding pain values, and using as the prediction model the classification neural network that reaches the required prediction accuracy; the classification neural network comprises one input layer, three hidden layers, and one output layer.
Specifically, a training library for the prediction model must first be established. With 300 mice in the training library and 498 features computed for each mouse, the data form 300 records of 498 dimensions each, i.e. a 300*498 two-dimensional motion feature matrix. The 300*498 output features are standardized column-wise with the z-score algorithm to obtain the standardized mouse features, and the extracted mouse feature information and pain thresholds are fed to the classification neural network for training. The artificial-intelligence network is a fully connected classification network built from a multilayer perceptron, consisting of one input layer, three hidden layers, and one output layer. The mouse feature information is normalized against the corresponding top data: the secondary-variable data of each interval are divided by the top secondary-variable data under the corresponding variable, eliminating the influence of an individual mouse's own activity capacity and endurance, which better highlights the features associated with microneedle-induced pain. Secondly, the pressure produced by the microneedle sharpness in each interval of the gradient microneedle array is incorporated into the mouse's behavioral features for that interval: the normalized secondary-variable value of each interval is divided by the microneedle sharpness of the corresponding interval, so that the effect of each interval's microneedle sharpness on mouse behavior is taken into account. All variables after normalization and sharpness conversion, together with overall mouse characteristics such as temperature, humidity, total distance, and body weight as auxiliary variables, give 498 variables in total as the final input-layer data. The hidden layers consist of three linear layers and two tanh activation functions; the data produced by the input layer serve as the hidden layers' input, whose size during training is 498, the number of features. After the three linear layers, each of size 1024, and the two activation functions, a classification result of size 10 (ten pain levels) is output. The output layer receives the hidden layers' output and passes the size-10 result through the Softmax function to obtain the predicted probabilities of the ten pain values, so the output layer's result is a classification probability of size 10. The model uses the cross-entropy loss function, and the number of training iterations is set to 20,000. Driven by the prediction loss over these 20,000 updates, the corresponding coefficients w and b are continuously updated, finally yielding a trained model.
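One reading of the network just described — three 1024-wide linear layers with two tanh activations between them, followed by a projection to ten pain levels and a Softmax output layer — can be sketched as a plain NumPy forward pass. The weights here are random stand-ins for the coefficients w and b that the patent trains over 20,000 cross-entropy iterations; the exact wiring of the third linear layer is an assumption, since the description is ambiguous on that point.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Random stand-in parameters (trained in the real system)
W1, b1 = rng.normal(scale=0.02, size=(498, 1024)), np.zeros(1024)
W2, b2 = rng.normal(scale=0.02, size=(1024, 1024)), np.zeros(1024)
W3, b3 = rng.normal(scale=0.02, size=(1024, 1024)), np.zeros(1024)
Wo, bo = rng.normal(scale=0.02, size=(1024, 10)), np.zeros(10)

def forward(x):
    """Forward pass: three linear layers, two tanh activations, Softmax output."""
    h1 = np.tanh(x @ W1 + b1)      # linear layer 1 + tanh
    h2 = np.tanh(h1 @ W2 + b2)     # linear layer 2 + tanh
    h3 = h2 @ W3 + b3              # linear layer 3
    return softmax(h3 @ Wo + bo)   # output layer: probabilities over 10 pain levels

batch = rng.normal(size=(5, 498))  # five standardized 498-feature rows
probs = forward(batch)             # shape (5, 10); each row sums to 1
```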
It should be noted that, to build the training library for the artificial-intelligence algorithm, mice with different degrees of pain perception were produced by injecting CFA (complete Freund's adjuvant) solution or lidocaine solution, or by applying lidocaine solution topically to the paw. CFA solution induces inflammatory pain in the mouse paw and, depending on the injected dose, yields a group of mice with lower pain thresholds. Conversely, injection or topical application of lidocaine tends to relieve local nociceptive pain, yielding groups of mice with higher pain thresholds. These mice were randomly mixed with untreated mice, their pain perception was tested with the filament (von Frey-type) pain-test method, and they were then tested with the gradient microneedle array system to collect movement curves. To build a larger artificial-intelligence training sample library covering widely varying pain levels and movement characteristics, the mice were tested repeatedly over 3-5 weeks to collect data, which minimized the number of mice used.
Implementing the embodiments of the present invention yields the following beneficial effects: the embodiment generates pain stimulation for the animal under test through the activity device; captures a movement-trajectory video of the animal under test through the camera device; extracts position information, coordinate information, and motion feature information from the movement-trajectory video through the computer equipment; and feeds the motion feature information to the trained prediction model to obtain the predicted pain value, thereby completing an automated animal pain test that requires no manual intervention, saves time and labor, and achieves high accuracy.
The above is a detailed description of preferred embodiments of the present invention, but the invention is not limited to the described embodiments. Those skilled in the art can make various equivalent modifications or substitutions without departing from the spirit of the present invention, and all such equivalent modifications or substitutions fall within the scope defined by the claims of this application.
Claims (10)
- An automated animal pain test system, characterized in that it comprises an activity device, a camera device, and computer equipment connected to the camera device; wherein the activity device comprises an activity box, a microneedle array located inside the activity box, and an angle-adjustment bracket for adjusting the angle of the activity box; the camera device is configured to capture a movement-trajectory video of an animal inside the activity box and send the movement-trajectory video to the computer equipment; and the computer equipment comprises: at least one processor; and at least one memory for storing at least one program; wherein, when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the following steps: acquiring the movement-trajectory video and inputting the movement-trajectory video into a trained recognizer to obtain position information; determining coordinate information according to the position information and processing the coordinate information to obtain motion feature information, the coordinate information comprising coordinate values and a time series; and inputting the motion feature information into a trained prediction model to obtain a predicted pain value.
- The system according to claim 1, characterized in that the training method of the recognizer comprises: acquiring movement-trajectory video samples and determining the animal position in the movement-trajectory video samples according to the frame-difference method and a region of interest; collecting positive-sample pictures and negative-sample pictures from the movement-trajectory video samples according to the animal position; inputting the positive-sample pictures and the negative-sample pictures into a Haar cascade classification model for training; and taking the Haar cascade classification model whose recognition accuracy reaches a preset accuracy as the recognizer.
- The system according to claim 1, characterized in that determining the coordinate information according to the position information specifically comprises: determining the outline of the activity box according to the Canny edge detection algorithm and the Hough transform line detection algorithm; and determining a coordinate system and the coordinate information of the animal in the movement-trajectory video according to the position information and the outline.
- The system according to claim 3, characterized in that the processor further implements the following step: resampling the coordinate information by linear interpolation to obtain first coordinate values and a first time series containing a preset number of frames within a preset time, and taking the first coordinate values and the first time series as the coordinate information.
- The system according to claim 4, characterized in that the processor further implements the following step: applying sliding-window centered smoothing filtering to the first coordinate values and the first time series to obtain second coordinate values and a second time series, and taking the second coordinate values and the second time series as the coordinate information.
- The system according to claim 1, characterized in that processing the coordinate information to obtain motion feature information specifically comprises: differentiating the coordinate information to obtain a velocity time series; determining a velocity-position distribution histogram according to the velocity time series and the coordinate information; integrating the velocity time series to obtain a distance time series; and determining a distance-position distribution histogram according to the distance time series and the coordinate information.
- 根据权利要求1所述的系统,其特征在于,所述预测模型的训练方法包括:The system according to claim 1, wherein the training method of the prediction model comprises:获取动物测试样本的动运动特征信息及对应的疼痛值;Obtain the dynamic characteristic information and the corresponding pain value of the animal test sample;将所述动物测试样本的动运动特征信息转换成二维矩阵,并将所述二维矩阵标准化;converting the kinetic feature information of the animal test sample into a two-dimensional matrix, and standardizing the two-dimensional matrix;对标准化后的二维矩阵提取主成分,并将相关度最好的两个主成分进行二元拟合得到疼痛值与运动特征的拟合直线,将所述拟合直线作为预测模型。The principal components are extracted from the standardized two-dimensional matrix, and the two principal components with the best correlation are binary fitted to obtain a fitted straight line between the pain value and the motion feature, and the fitted straight line is used as a prediction model.
- 根据权利要求1所述的系统,其特征在于,所述预测模型的训练方法包括:The system according to claim 1, wherein the training method of the prediction model comprises:获取动物测试样本的动运动特征信息及对应的疼痛值;Obtain the dynamic characteristic information and the corresponding pain value of the animal test sample;将所述动物测试样本的动运动特征信息转换成二维矩阵,并将所述二维矩阵标准化;converting the kinetic feature information of the animal test sample into a two-dimensional matrix, and standardizing the two-dimensional matrix;根据标准化后的二维矩阵及对应的疼痛值对分类神经网络进行训练,将得到预测精度的分类神经网络作为预测模型;所述分类神经网络包括一层输入层、三层隐藏层和一层输出层。According to the standardized two-dimensional matrix and the corresponding pain value, the classification neural network is trained, and the classification neural network with prediction accuracy is used as the prediction model; the classification neural network includes one layer of input layer, three layers of hidden layers and one layer of output layer.
- The system according to claim 1, characterized in that the activity box comprises a base and baffles, the base and the baffles being assembled by mortise-and-tenon joints; the activity box is divided into a rest area and an activity area, the rest area being separated from the activity area by a baffle, and the baffle being assembled with the base by a mortise-and-tenon joint.
- The system according to claim 1, characterized in that the microneedle array is installed in the activity area, and the tip radius of the microneedles gradually decreases starting from the end near the rest area.
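The signal-processing chain of claim 6 — differentiate the coordinate track to get speed, integrate speed to get cumulative distance, then bin by position — can be illustrated with a short NumPy sketch. The track here is a synthetic unit circle traversed at unit angular speed (so the true speed is 1 and the true distance over 10 s is 10); the real input would be the resampled, smoothed coordinate time series of claims 4-5, and all names are illustrative.

```python
import numpy as np

# Synthetic resampled coordinate track: (t, x, y) at a fixed frame rate
t = np.linspace(0.0, 10.0, 301)   # ~30 fps for 10 s
x = np.cos(t)
y = np.sin(t)

# Differentiate the coordinates -> velocity/speed time series
vx = np.gradient(x, t)
vy = np.gradient(y, t)
speed = np.hypot(vx, vy)          # ~1 everywhere for this track

# Integrate the speed -> cumulative distance time series
distance = np.concatenate(([0.0], np.cumsum(speed[:-1] * np.diff(t))))

# Position-binned speed histogram (analogue of the claim's
# "velocity-position distribution histogram"), binned along x here
hist, edges = np.histogram(x, bins=10, weights=speed)
```

The same position binning applied to `distance` gives the distance-position distribution histogram of the claim.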
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111272069.6 | 2021-10-29 | ||
CN202111272069.6A CN114010155B (en) | 2021-10-29 | 2021-10-29 | Automatic change painful test system of animal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023070701A1 (en) | 2023-05-04 |
Family
ID=80058990
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/128364 WO2023070701A1 (en) | 2021-10-29 | 2021-11-03 | Automatic animal pain test system |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114010155B (en) |
WO (1) | WO2023070701A1 (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105167753A (en) * | 2015-10-12 | 2015-12-23 | 南京大学医学院附属鼓楼医院 | Inclined plate type mechanical pain-sensitivity determining device |
CN105224912A (en) * | 2015-08-31 | 2016-01-06 | 电子科技大学 | Based on the video pedestrian detection and tracking method of movable information and Track association |
CN110477867A (en) * | 2019-08-19 | 2019-11-22 | 中山大学 | A kind of animal electricity stimulation pain assessment test macro and assessment test method |
CN110522415A (en) * | 2019-08-19 | 2019-12-03 | 中山大学 | A kind of ANIMAL PAIN test macro and its test method |
WO2021132813A1 (en) * | 2019-12-23 | 2021-07-01 | 경희대학교 산학협력단 | Pain evaluation method and analysis device using deep learning model |
CN113362371A (en) * | 2021-05-18 | 2021-09-07 | 北京迈格威科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100490736C (en) * | 2007-09-13 | 2009-05-27 | 杭州电子科技大学 | Hot plate test automatic detecting method |
CN102565103B (en) * | 2011-12-16 | 2014-03-19 | 清华大学 | Tracking detection method for weld defects based on X-ray image |
CN103186775B (en) * | 2013-03-27 | 2016-01-20 | 西安电子科技大学 | Based on the human motion identification method of mix description |
CN106725340B (en) * | 2017-01-09 | 2023-01-17 | 常州市第一人民医院 | Full-automatic animal spinal cord injury inclined plate experiment evaluating device |
US20210059220A1 (en) * | 2019-08-27 | 2021-03-04 | Children`S Medical Center Corporation | Test environment for the characterization of neuropathic pain |
CN110866480B (en) * | 2019-11-07 | 2021-09-17 | 浙江大华技术股份有限公司 | Object tracking method and device, storage medium and electronic device |
CN211485043U (en) * | 2019-11-12 | 2020-09-15 | 鼠来宝(武汉)生物科技有限公司 | Mouse sole pain-tendering instrument |
CN113194249A (en) * | 2021-04-22 | 2021-07-30 | 中山大学 | Moving object real-time tracking system and method based on camera |
- 2021-10-29: CN application CN202111272069.6A, granted as patent CN114010155B (active)
- 2021-11-03: WO application PCT/CN2021/128364, published as WO2023070701A1 (status unknown)
Also Published As
Publication number | Publication date |
---|---|
CN114010155B (en) | 2024-06-11 |
CN114010155A (en) | 2022-02-08 |
Legal Events
- 121 — EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21962051; Country of ref document: EP; Kind code of ref document: A1)
- NENP — Non-entry into the national phase (Ref country code: DE)
- 32PN — EP: public notification in the EP bulletin as the address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 16.08.2024))