
CN110970130B - Data processing device for attention deficit hyperactivity disorder - Google Patents

Info

Publication number: CN110970130B (application number CN201911398269.9A)
Authority: CN (China)
Prior art keywords: subject, data, target, characteristic parameters, time
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN110970130A
Inventors: 段新, 段拙然
Current assignee: Foshan Chuangshijia Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Foshan Chuangshijia Technology Co., Ltd.
Priority date (an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed): priority to CN201911398269.9A
Events: application filed by Foshan Chuangshijia Technology Co., Ltd.; publication of CN110970130A; PCT application PCT/CN2020/129452 (WO2021135692A1); application granted; publication of CN110970130B; legal status Active; anticipated expiration pending

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20: ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application is applicable to the field of computer technology and provides a data processing method for attention deficit hyperactivity disorder, comprising the following steps: collecting the characteristic parameters output by patients and by normal persons while they complete tasks in a virtual reality environment; training a machine learning model on the collected characteristic parameters to obtain a trained model; and using the trained model to predict and classify the characteristic parameters collected in the virtual reality environment for any subject. Because the test data are acquired while subjects complete tasks in a virtual reality environment, the subjects can complete the test in a relaxed setting, which avoids inaccurate data caused by nervousness. Furthermore, because the characteristic parameters are predicted and classified by a machine learning model, subjective human judgment is avoided, improving the objectivity and accuracy of assessing attention deficit hyperactivity disorder.

Description

Data processing device for attention deficit hyperactivity disorder
Technical Field
The application belongs to the technical field of computers, and particularly relates to a data processing device for attention deficit hyperactivity disorder.
Background
Attention deficit hyperactivity disorder (ADHD) is a common mental disorder of childhood and is classified into three types: predominantly inattentive (ADHD-I), predominantly hyperactive-impulsive (ADHD-H), and combined (ADHD-C). It is mainly manifested by inattention, hyperactivity, impulsivity, and poor self-control, and it affects children's learning, social interaction, and conduct.
At present, attention deficit hyperactivity disorder is diagnosed through interviews, observation, and questionnaires, and the diagnosis rests on subjective evaluation; misdiagnosis and missed diagnosis are therefore common, often because the child is mischievous or nervous during the assessment.
Disclosure of Invention
The embodiments of the application provide a data processing device for attention deficit hyperactivity disorder that can reduce the strong subjectivity involved in assessing the condition.
In a first aspect, an embodiment of the present application provides a data processing apparatus for attention deficit hyperactivity disorder, including:
a data acquisition module for acquiring input data generated while a subject completes a task in a virtual reality environment;
a data calculation module for calculating characteristic parameters of the subject based on the input data; and
a result output module for outputting a classification result based on the characteristic parameters and a trained machine learning model.
Compared with the prior art, the embodiments of the application have the following beneficial effects. The input data of the test are obtained while the subject completes a task in the virtual reality environment, the characteristic parameters are calculated from the input data, and the characteristic parameters are finally fed into a machine learning model to evaluate the subject and produce a classification result. First, because the test data are collected while the subject completes tasks in a virtual reality environment, the subject can complete the test in a relaxed setting, which avoids the inaccurate data caused by nervousness during an interview or a paper-and-pencil test. Second, classification is performed automatically from the characteristic parameters by a machine learning model, which removes subjective human judgment and improves the objectivity and accuracy of the ADHD assessment. Third, virtual reality (VR) scenes are convenient to design, and the behavioral, cognitive, and physiological data generated in the VR environment can serve as effective input features for a classifier that distinguishes patients from normal subjects, making it easier to obtain biomarkers characteristic of ADHD.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required for the embodiments or the description of the prior art will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of a data processing method for attention deficit hyperactivity disorder according to an embodiment of the present application;
Fig. 2 is a schematic flowchart of a data processing method for attention deficit hyperactivity disorder according to an embodiment of the present application;
Fig. 3 is a flowchart of a specific method for acquiring the input data in Fig. 2 according to an embodiment of the present application;
Fig. 4 is a first flowchart of a method for calculating the characteristic parameters in Fig. 2 according to an embodiment of the present application;
Fig. 5 is a second flowchart of a method for calculating the characteristic parameters in Fig. 2 according to an embodiment of the present application;
Fig. 6 is a third flowchart of a method for calculating the characteristic parameters in Fig. 2 according to an embodiment of the present application;
Fig. 7 is a flowchart of a machine learning model training method according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a data processing apparatus for attention deficit hyperactivity disorder according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a terminal device according to an embodiment of the present application;
Fig. 10 is a block diagram of part of the structure of a computer according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
Fig. 1 is a schematic diagram of an application scenario of a data processing method for attention deficit hyperactivity disorder according to an embodiment of the present application; the method may be used to evaluate ADHD in a subject. The terminal device 20 is configured to obtain test data generated while the subject 10 completes tasks in the virtual reality environment, analyze and evaluate the test data, and finally produce a classification result, from which a doctor can determine whether the subject 10 has attention deficit hyperactivity disorder and, if so, its subtype.
The following describes in detail the data processing method of attention deficit hyperactivity disorder according to the embodiment of the present application with reference to fig. 1.
Fig. 2 is a schematic flowchart of a data processing method for attention deficit hyperactivity disorder provided in the present application, and referring to fig. 2, the data processing method for attention deficit hyperactivity disorder is described in detail as follows:
S101, acquiring input data of a subject completing tasks in a virtual reality environment.
In this embodiment, the VR environment stores engaging games capable of distinguishing ADHD patients from normal subjects, for example finding the differences between two scenes, archery, and identifying facial expressions in social situations. As a subject completes the game tasks, different input data reflecting, for example, attention and hyperactivity-impulsivity can be collected. Several games can be combined into one task; for example, scene differences of varying difficulty can be set in the find-the-difference tasks, and expressions of varying complexity can be set in the tasks for identifying facial expressions in social situations.
By way of example, the VR find-the-difference scene evaluates spatial working memory, selective attention, and visual information search. Finding a "difference" is goal-directed behavior that requires attention and is influenced by gaze location and gaze latency. ADHD patients are weaker than normal subjects at detecting changes and easily overlook subtle ones, mainly owing to deficits in voluntary eye-movement control and attention. When designing the game, static and dynamic everyday or sports scenes can be used, for example with a color or position that appears or changes, and the subject is asked to find the difference. ADHD patients may answer faster than normal subjects, but they recognize differences less accurately and make more errors.
The VR fixed-target archery scene measures concentration and sustained attention. Audio-visual distractors can be introduced; in the distracting environment, shooting at a fixed target requires staring at the bullseye, and within the allotted time, the closer the total gaze duration on the bullseye is to the allotted time, the higher the target-ring score.
The VR moving-target scene is a continuous performance test (CPT) task that requires the subject to respond as quickly as possible to target stimuli and not to non-target stimuli, engaging auditory and visual selectivity. Visual, auditory, and visuospatial distractors, such as flying birds, rabbits, and mice, can be placed in the environment, and targets such as flying saucers are thrown into the air at random for the subject to shoot.
The VR scene for identifying facial expressions in social situations can set emotion recognition at two levels of difficulty. 1) Recognition of static and dynamic expressions, of positive and negative expressions (negative emotions such as anger, sadness, and fear), and of emotions at different intensities (for example, four intensities of 30%, 50%, 70%, and 100%). 2) A contextual VR task: emotion recognition and processing in a social context, for example visual attention and emotion recognition during interpersonal interaction. Interpreting the complex and subtle emotional changes in such situations requires good visual attention; the subject can respond (answer) by pointing or by natural language. This objectively examines expression (emotion) recognition, theory of mind, performance during selective attention and selective response, visual information search, and so on.
Different VR scene tasks focus on evaluating different ADHD features, such as inattention and hyperactivity-impulsivity. At least one VR scene may be selected, and at least one set of input data is obtained for each scene. The input data include task performance data, motion sensing data, eye movement tracking data, and electroencephalogram (EEG) data, as grouped in the sketch below.
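To make the data flow concrete, the four modalities can be gathered into a small container like the following sketch. This is only an illustration: the field names, tuple layouts, and the 250 Hz default sampling rate are assumptions of this sketch, not identifiers or values taken from the embodiment.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SubjectInputData:
    """One subject's raw input data for a single VR task (illustrative field names)."""
    # Task performance: (stimulus_onset_s, response_onset_s or None, correct) per target stimulus
    task_performance: List[Tuple[float, Optional[float], bool]] = field(default_factory=list)
    # Motion sensing: (t, x, y, z) position samples from the action recorder
    motion_samples: List[Tuple[float, float, float, float]] = field(default_factory=list)
    # Eye tracking: (t, x, y) gaze samples from the headset's eye tracker
    gaze_samples: List[Tuple[float, float, float]] = field(default_factory=list)
    # EEG: channel-major voltage samples plus the sampling rate
    eeg_channels: List[List[float]] = field(default_factory=list)
    eeg_sample_rate_hz: float = 250.0
```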
As shown in fig. 3, in one possible implementation, the implementation procedure of step S101 may include:
S1011, acquiring the task performance data under the current task through a gesture tracker and/or a language processing device.
In this embodiment, a target stimulus is the instruction or task content of a VR task. For example, when finding a specified facial expression in a social situation, the specified expression is the target stimulus; in the flying-saucer shooting game, the flying saucer is the target stimulus.
Task performance data may be obtained through a gesture tracker, for example by having the subject point at a smiling face in a scene. The gesture tracker can determine whether the subject performs the task correctly and, from hand movements, when the subject responds to a target stimulus; if a target stimulus occurs but the subject's hands do not move, the subject has not responded to the current target stimulus.
Task performance data may also be obtained through a language processing device. For example, when finding the smiling face in a scene, the subject speaks the position or number of the smiling face in natural language; the device captures the subject's speech and, through conversion and recognition, determines whether the subject completed the task correctly.
Optionally, gesture tracking may be optical, for example a Leap Motion controller, or inertial, for example a data glove with sensors on the hand. With hand motion capture, the hands can be tracked without a handheld device or data glove, allowing natural interaction with the virtual scene.
S1012, acquiring the motion sensing data acquired by the action recorder under the current task.
In this embodiment, the body-movement behavior of the person being assessed bears directly on the definition of ADHD, and the stability of the subject's body posture is one criterion for judging whether the subject has ADHD. In the prior art, body-movement data of the person being assessed are often collected over a period of time and compared with those of normal persons to judge whether the person is affected.
Optionally, the action recorder captures movement by optical or inertial tracking, implemented either as a wearable device or as a scene-depth analysis scheme. In the scene-depth scheme, an optical sensor receives optical signals and the depth information of the scene is analyzed to determine the subject's body position and posture. In the wearable scheme, sensors are fixed on the subject's joints or key points, such as the head, wrists, and ankles, and the body movement is obtained by measuring the change in position or degree of bending at those points. The action recorder can record the subject's body movement with two kinds of device: an accelerometer device, which uses a triaxial accelerometer to record activity along three axes and obtain movement and inertial measurement data, and an infrared optical position detection and analysis system, a moving-object analysis system built on optically sensitive devices and stereo measurement.
S1013, acquiring the eye movement tracking data acquired by the eye movement tracking device under the current task.
In this embodiment, eye movements indicate gaze time and gaze direction. The eye movement tracking device records the position, time, and order of the subject's gaze, from which characteristic parameters such as the number of fixations, the fixation time, the visual scan path, and the visual scanning strategy are obtained. The device objectively records visual attention and visual search patterns and can provide evaluation indices that distinguish the visual attention and search patterns of ADHD patients from those of normal persons.
In this embodiment, the eye movement tracking device may be integrated into the VR head-mounted display. It tracks the eyeball and estimates the line of sight, capturing the time at which the eye fixates each point, the time of each eye movement, the order of eye movements, and so on.
S1014, acquiring the brain electricity data acquired by the brain electricity acquisition device under the current task.
In this embodiment, the EEG acquisition device records the subject's brain electrical responses during target stimulation, i.e., event-related potentials. External stimuli are typically audio-visual; internal stimuli are typically tasks involving attention, decision-making ability, and working memory, known as psychological tasks.
Specifically, the electroencephalogram data comprise the electroencephalogram (EEG) and the EEG-evoked event-related potentials (ERP), and may include the alpha, beta, and theta waves of the EEG signal, the P200 and P300 potentials, the spectral peak around 11 Hz, the P2-N2 peak-to-peak value, and so on.
In this example, the EEG of ADHD patients shows many abnormalities: more brain-wave activity than in normal subjects, especially very pronounced activity in the frontal lobe; a larger P2 component and a smaller N2 component; and a characteristic P2-N2 peak-to-peak value and spectral peak around 11 Hz. The acquisition and study of EEG data can therefore serve as an index for evaluating ADHD patients.
S102, calculating the characteristic parameters of the subject based on the input data.
As shown in fig. 4, in a possible implementation manner, in a case where the input data is task performance data, the implementation process of step S102 may include:
specifically, the task performance data may include: the correct number of responses of the subject at the target stimulus, the start time of response of the subject at the target stimulus, and the number of non-responses of the subject at the target stimulus, wherein one task comprises at least one target stimulus.
In this embodiment, the correct number of responses refers to the total number of indications that the subject can correctly answer the target stimulus when the target stimulus is displayed in the VR, such as: finding out a smiling face in a social situation, and correctly indicating the smiling face by a subject, wherein the subject correctly responds to target stimulation.
S1021, calculating the ratio of the number of correct responses to the total number of target stimuli in the current task to obtain the accuracy in the characteristic parameters.
In this example, the higher the accuracy, the more focused the subject. The total number of target stimuli is the number of instructions to be completed. For example, in a find-the-difference task with three groups of scenes, two scenes per group, the subject must find the difference between the two scenes in each group; in the flying-saucer task, if 15 flying saucers are thrown in total, the total number of target stimuli is 15.
By way of example, the accuracy is calculated as

a = A / Z

where a is the accuracy, A is the number of correct responses, and Z is the total number of target stimuli.
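A minimal Python sketch of this ratio follows; the function and argument names are illustrative, not taken from the embodiment.

```python
def accuracy(correct_responses: int, total_target_stimuli: int) -> float:
    """Accuracy a = A / Z: the fraction of target stimuli answered correctly."""
    if total_target_stimuli <= 0:
        raise ValueError("a task must contain at least one target stimulus")
    return correct_responses / total_target_stimuli

# Example: 12 correct responses out of 15 flying-saucer targets
print(accuracy(12, 15))  # 0.8
```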
S1022, calculating the difference between the response start time and the target stimulus start time for each correct response, and calculating the response-time standard deviation in the characteristic parameters based on these differences.
In this embodiment, the response-time standard deviation is the standard deviation of the above differences; by computing the difference between response start time and target stimulus start time for each correct response, the standard deviation of the response times, a measure of attention, can be calculated.
The response start time is the time at which the subject begins to respond to the current target stimulus; the response may be, for example, an action or speech.
S1023, counting the number of times the response start time precedes the corresponding target stimulus start time in the current task as the error count in the characteristic parameters.
In this embodiment, the error count includes the times the subject responded when no target stimulus was present. A small difference between response start time and target stimulus time indicates a short response time. If the subject's response time is short but the error count is high, the subject's impulsivity trait is more pronounced; if the response time is long and the error count is high, the subject's inattention is more pronounced.
S1024, recording the number of times the subject fails to respond when a target stimulus occurs as the miss count in the characteristic parameters.
In this example, a miss means the subject does not respond to a target stimulus at all, neither before nor after its onset.
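The four task-performance features of steps S1021 to S1024 can be derived from per-trial timing records roughly as follows. The trial layout and all names are assumptions of this sketch, not data structures defined by the embodiment.

```python
import statistics
from typing import List, Optional, Tuple

# One trial: (stimulus_onset_s, response_onset_s or None if no response, correct)
Trial = Tuple[float, Optional[float], bool]

def task_performance_features(trials: List[Trial]) -> dict:
    """Derive accuracy, response-time spread, errors, and misses (S1021-S1024)."""
    # S1022: response-time differences for correct responses only
    rts = [resp - stim for stim, resp, correct in trials
           if correct and resp is not None and resp >= stim]
    rt_std = statistics.stdev(rts) if len(rts) > 1 else 0.0
    # S1023: responses that started before the target stimulus onset count as errors
    errors = sum(1 for stim, resp, _ in trials if resp is not None and resp < stim)
    # S1024: target stimuli with no response at all count as misses
    misses = sum(1 for _, resp, _ in trials if resp is None)
    correct = sum(1 for _, resp, c in trials if c and resp is not None)
    return {
        "accuracy": correct / len(trials) if trials else 0.0,  # S1021
        "response_time_std_s": rt_std,
        "error_count": errors,
        "miss_count": misses,
    }
```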
As shown in fig. 5, in a possible implementation manner, in a case where the input data is motion sensing data, the implementation process of step S102 may include:
specifically, the motion sensing data may include: a stationary time period of the subject, a motion time of the subject when motion is transformed, a position coordinate of a motion path recorded by the motion recorder when the subject moves each time, a motion path of the subject during completion of the task, and the number of the motion paths.
In this embodiment, the motion sensing data may be the motion data of the subject recorded at the current task, or may be the motion data of the subject recorded at a plurality of tasks during the whole test.
S1025, summing all the rest periods and determining the rest duration in the characteristic parameters as the quotient of the sum and the number of rest periods.
In this embodiment, the rest duration is the average length of the subject's rest periods.
S1026, counting the movement times that fall within a preset time period and recording the count as the number of movements in the characteristic parameters.
S1027, calculating the area covered by each of the subject's movements based on the position coordinates, and recording the sum of all these areas as the subject's movement region in the characteristic parameters.
In this embodiment, the movement region is the total area covered by the subject's movements as captured by the motion sensing device.
S1028, summing all the intersection points of the motion paths and determining the motion complexity in the characteristic parameters as the quotient of the sum and the number of motion paths.
In this embodiment, a lower motion complexity indicates that the subject's motion path tended to be simple and linear during the test, while a higher motion complexity indicates a complex, entangled path.
S1029, calculating the displacement of the subject in the characteristic parameters for completing the task based on the position coordinates.
In this embodiment, since a task may include multiple target stimuli, and one target stimulus has one displacement, the displacement of the subject to complete the task may include the total displacement of the subject when the current task is completed, and may also include the total displacement of the subject when the test is completed.
Optionally, the characteristic parameters may further include a time ratio reflecting the subject's activity level: the recorder samples a 1 when the subject is moving and a 0 when the subject is stationary, and the time ratio is the ratio of the count of 1s to the count of 0s, as in the sketch below.
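A sketch of how steps S1025, S1026, S1029, and the optional time ratio could be computed is shown below; the movement region (S1027) and motion complexity (S1028) need path geometry (covered area and path self-intersections) and are omitted here for brevity. All names, the 2-D simplification, and the sampling convention are assumptions of this sketch.

```python
import math
from typing import Sequence, Tuple

def motion_features(rest_periods_s: Sequence[float],
                    movement_times_s: Sequence[float],
                    positions: Sequence[Tuple[float, float]],
                    window_s: Tuple[float, float],
                    activity_flags: Sequence[int]) -> dict:
    """Derive rest duration, movement count, displacement, and time ratio.

    positions: (x, y) samples along the subject's movement path.
    activity_flags: 1 for a moving sample, 0 for a stationary one.
    """
    # S1025: average rest duration = sum of rest periods / number of rest periods
    rest_duration = (sum(rest_periods_s) / len(rest_periods_s)
                     if rest_periods_s else 0.0)
    # S1026: movements whose start time falls inside the preset window
    lo, hi = window_s
    movement_count = sum(1 for t in movement_times_s if lo <= t <= hi)
    # S1029: total displacement accumulated along the recorded path
    displacement = sum(math.dist(positions[i], positions[i + 1])
                       for i in range(len(positions) - 1))
    # Optional time ratio: moving samples over stationary samples
    ones, zeros = list(activity_flags).count(1), list(activity_flags).count(0)
    time_ratio = ones / zeros if zeros else float("inf")
    return {"rest_duration_s": rest_duration, "movement_count": movement_count,
            "displacement": displacement, "time_ratio": time_ratio}
```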
As shown in fig. 6, in a possible implementation manner, in a case where the input data is eye movement tracking data, the implementation process of step S102 may include:
specifically, the eye movement tracking data includes eye coordinates of the subject, time of eye gaze target stimulation, and order of eye gaze target stimulation.
S10210, determining, based on the eyeball coordinates, the subject's fixation count for each target stimulus in the characteristic parameters.
In this embodiment, the dwell time and the number of dwells of the eyeball at a given position can be determined from the eyeball coordinates, so the subject's fixation count for each target stimulus can be obtained by counting how often the corresponding eyeball coordinates occur.
S10211, recording the time of eyeball fixation on the target stimulus as the fixation time in the characteristic parameters.
S10212, determining a visual scanning path of the subject in the characteristic parameters based on the sequence of the eyeball fixation target stimulus.
In this embodiment, the visual scan path, i.e., what the subject looks at first and what next, is known from the order in which the subject gazes at the target stimuli, and analyzing it reveals the subject's viewing tendencies. For example, in a task of distinguishing a smiling-face picture from a crying-face picture, a scan path from the smiling face to the crying face indicates that the subject tends to observe the smiling face first, while looking at the crying face before the smiling face indicates a tendency to look at the crying face.
S10213, obtaining the visual scanning strategy of the subject in the characteristic parameters based on the visual scan path and the fixation time.
In this example, the visual scanning strategy reflects the subject's dwell time on, and sensitivity to, the target stimuli. For example, in the VR find-the-difference task, normal subjects gaze at the changed areas repeatedly and for longer, and their first fixation lasts longer than that of ADHD subjects, whereas ADHD subjects spend long periods gazing over the entire scene.
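Steps S10210 to S10213 can be sketched from a stream of fixation events as follows. Representing each fixation as (timestamp, stimulus id, duration) is an assumption of this sketch; the embodiment works from raw eyeball coordinates.

```python
from collections import Counter
from typing import List, Tuple

# One fixation event: (timestamp_s, target_stimulus_id, duration_s)
Fixation = Tuple[float, str, float]

def eye_tracking_features(fixations: List[Fixation]) -> dict:
    """Derive fixation counts, gaze time, and the visual scan path (S10210-S10213)."""
    # S10210: fixation count per target stimulus
    counts = Counter(stim for _, stim, _ in fixations)
    # S10211: total gaze time on target stimuli
    gaze_time = sum(dur for _, _, dur in fixations)
    # S10212: visual scan path = stimuli in order of first fixation
    scan_path, seen = [], set()
    for _, stim, _ in sorted(fixations):
        if stim not in seen:
            seen.add(stim)
            scan_path.append(stim)
    # S10213: the scan path together with per-stimulus gaze times supports the
    # visual scanning strategy (dwell time on, and sensitivity to, each stimulus)
    return {"fixations_per_stimulus": dict(counts),
            "gaze_time_s": gaze_time, "scan_path": scan_path}
```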
In one possible implementation manner, in a case where the input data is electroencephalogram data, the implementation procedure of step S102 may include:
obtaining time-frequency-domain features, P300 features, and the like in the characteristic parameters based on the electroencephalogram data.
In this embodiment, the EEG signal is first preprocessed, mainly by data inspection, band-pass filtering, artifact removal, and segmentation; time-frequency-domain features are then extracted from the preprocessed signal as EEG characteristic parameters. Features such as the peaks, latencies, and mean values of the P200 and P300 potentials are also extracted as EEG characteristic parameters, as in the sketch below.
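A sketch of this pipeline with SciPy is given below. The 0.5-40 Hz band, the 0.8 s epoch, and the 250-500 ms P300 search window are common EEG conventions assumed for illustration, not values specified by the embodiment.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

def eeg_features(eeg: np.ndarray, fs: float, stim_onsets_s: np.ndarray) -> dict:
    """Band-pass filter, band powers, and a simple P300 estimate for one channel."""
    # Preprocessing: 4th-order Butterworth band-pass (artifact removal omitted)
    b, a = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    clean = filtfilt(b, a, eeg)

    # Time-frequency-domain features: theta/alpha/beta band power via Welch's PSD
    freqs, psd = welch(clean, fs=fs, nperseg=int(2 * fs))
    def band_power(lo: float, hi: float) -> float:
        mask = (freqs >= lo) & (freqs < hi)
        return float(np.trapz(psd[mask], freqs[mask]))
    feats = {"theta": band_power(4, 8), "alpha": band_power(8, 13),
             "beta": band_power(13, 30)}

    # Event-related potential: average fixed-length epochs after each stimulus,
    # then take the largest deflection 250-500 ms post-stimulus as a P300 estimate
    n = int(0.8 * fs)
    epochs = [clean[int(t * fs):int(t * fs) + n]
              for t in stim_onsets_s if int(t * fs) + n <= clean.size]
    if epochs:
        erp = np.mean(epochs, axis=0)
        feats["p300_peak"] = float(erp[int(0.25 * fs):int(0.5 * fs)].max())
    return feats
```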
S103, based on the characteristic parameters and the trained machine learning model, outputting a classification result.
In this embodiment, the characteristic parameters are input into the trained machine learning model, which automatically outputs the classification result; from this result, a doctor can determine whether the subject is an ADHD patient and, if so, which ADHD subtype applies.
As an example, the output may be one of four classes: attention deficit, hyperactive-impulsive, combined presentation, and normal.
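At inference time the step reduces to a single model call. The label order and the scikit-learn-style predict interface below are assumptions for illustration.

```python
LABELS = ["normal", "ADHD-I (attention deficit)",
          "ADHD-H (hyperactive-impulsive)", "ADHD-C (combined)"]

def classify(model, feature_vector) -> str:
    """Map a trained classifier's output onto the four categories above."""
    return LABELS[int(model.predict([feature_vector])[0])]
```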
It should be noted that, besides the common characteristic parameters, different VR game task scenes can yield scene-specific characteristic parameters.
By way of example:
1) The characteristic parameters in the VR task of identifying facial expressions in social situations may further include:
From the task performance data: selection reaction time, correct count, error count, error rate, and so on.
From the eye movement tracking data: entry time (ET) into an area of interest, first fixation time (FFT), total fixation time (FT) in the area of interest, number of fixation points, and so on; attention bias can be analyzed through the proportion of fixation time spent in each area of interest. Studies have found that ADHD patients attend to happy faces for a shorter time than normal subjects and to neutral faces for longer, and that when a happy-unhappy face pair is presented, ADHD patients' attention is biased toward the unhappy face while normal subjects' attention is biased toward the happy face. ADHD patients attend more to the mouth of an emotional face than to the eyes, possibly because the opening and closing of the mouth separates positive from negative emotions more clearly while the eyes are much harder to read; in a social scene, ADHD patients do not attend to other people's facial and body-language information when angry.
From the electroencephalogram data: the facial-expression processing components of ADHD patients differ from those of normal subjects. ADHD patients show reduced processing of different faces; under stimuli such as happy, neutral, angry, and fearful expressions, the amplitude of the bilateral occipital P100 is lower than in normal subjects, as is the amplitude of the bilateral occipital N170. In ADHD patients the P100 and N170 amplitudes do not differ significantly across expression stimuli, whereas in normal subjects the left occipital P100 amplitude is significantly higher for happy, angry, and fearful faces than for neutral faces, and the left temporal N170 amplitude is significantly higher for fearful faces than for neutral ones.
2) The characteristic parameters in the VR archery game scene may further include:
From the task performance data: target-ring score, correct response rate, false response rate, miss rate, and so on.
From the motion sensing data: accelerated reaction time and the rate of change of accelerated reaction time. ADHD patients have cognitive deficits: when the intervals between thrown targets become shorter and denser, their ability to speed up their responses decreases and their misses increase. Hyperactivity indices that can be evaluated include the total stationary and active time within a given period, the average number of movement changes, the distance of the movement path, and the activity area; impulsivity indices include the error count, error rate, and so on.
From the eye movement tracking data: total fixation time (FT) on the target, the number of times the eyes leave the target, the time the eyes are off target, and so on.
3) The characteristic parameters in the VR find-the-difference scene may further include:
From the eye movement tracking data: the number of fixations on changed areas, the time of first fixation on a changed area, the areas of interest, and so on.
As shown in fig. 2, in one possible implementation manner, the method may further include:
S201, training a machine learning model based on an input sample to obtain the trained machine learning model.
In this embodiment, the machine learning model may include algorithms based on classical machine learning, such as the support vector machine (SVM) and the artificial neural network (ANN), for example SVM models, convolutional neural network (CNN) models, and models trained and optimized on the Caffe framework. The development environment may use open-source libraries such as TensorFlow, Python, MXNet, Torch, Theano, CNTK, CUDA, cuDNN, and Caffe, or the LibSVM toolbox.
As shown in fig. 7, in one possible implementation, the implementation procedure of step S201 may include:
S2011, taking as input parameters at least one index of the characteristic parameters obtained while tested persons complete tasks in the virtual reality environment, where the tested persons include normal subjects and affected subjects;
S2012, training the machine learning model on the input samples, with the input parameters as the input samples, to obtain the trained machine learning model.
In this embodiment, the input parameters may be at least one index of the characteristic parameters obtained while a tested person completes at least one task in VR.
In this embodiment, the machine learning model may include a support vector machine, a convolutional neural network, and the like. The input samples include data collected from a number of ADHD patients and a number of normal persons completing tasks in VR, where ADHD covers the three types ADHD-I, ADHD-H, and ADHD-C.
Specifically, in the case where the machine learning model is a support vector machine, training of the machine learning model includes:
for example, a characteristic parameter is selected as input: an SVM recursive feature elimination method (support vector machine recursive feature elimination, SVM-RFE) classification method based on electroencephalogram evoked potential time-frequency domain as feature input. And (3) performing time-frequency domain feature extraction on the brain electric evoked potential of the tested person in the process of performing a related game (task), and performing feature classification by using a support vector machine to realize individual prediction.
For example, an integrated learning method of selecting various feature parameters as inputs: the integrated learning method is used for combining a single classification method, and the multi-kernel learning (multiple kernel learning, MKL) method is used for combining a plurality of kernel functions, so that the classification capability can be remarkably improved. Task performance data, motion sensing data, eye tracking data, electroencephalogram data and other multi-modal data are combined with 4 features based on multi-core learning (multiple kernel learning, MKL) to train a multi-core classifier. And preprocessing various data by using an SVM (support vector machine) by adopting a nested cross-validation (nested cross validation) method, extracting features, selecting features and finally classifying.
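The SVM-RFE branch can be sketched with scikit-learn as below. A full MKL combination of kernels is not part of scikit-learn, so this sketch covers only the recursive-feature-elimination classifier, with the nested cross-validation simplified to a single level; the number of retained features is an arbitrary illustrative choice.

```python
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_svm_rfe(X, y, n_features: int = 20):
    """SVM-RFE sketch: recursively drop the weakest features, then classify.

    X: rows = tested persons, columns = time-frequency features of the
    evoked potentials; y: 0 = normal, 1-3 = ADHD subtypes.
    """
    selector = RFE(estimator=SVC(kernel="linear"),
                   n_features_to_select=n_features)
    model = make_pipeline(StandardScaler(), selector)
    score = cross_val_score(model, X, y, cv=5).mean()  # simplified (non-nested) CV
    model.fit(X, y)
    return model, score
```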
Specifically, deep learning learns features through a deep nonlinear network structure, combining low-level features into more abstract deep representations (attribute categories or features) to approximate complex functions and thereby learn the essential features of the data set. When the machine learning model is a convolutional neural network, its training includes:
The input layer uses multiple nodes and is composed of the four feature groups, namely task performance data, motion sensing data, eye movement tracking data, and EEG data, with input vector X1 ... Xn (the features); the output classes are normal, ADHD-I, ADHD-H, ADHD-C, and so on. Each node applies a nonlinear activation function to its input, and the signal can pass through several hidden layers to the output. The network's output is compared with the actual or expected output; the error between the predicted and actual values is measured by the mean squared error, and a back-propagation (BP) algorithm adjusts the connection weights w and the biases b according to the size of the error. Through iterative minimization of the loss function, the predicted values approach the actual values until the loss no longer changes, for example until the error reaches approximately 0.001. For the multi-class problem, a Softmax loss function can be used. When the training phase ends, the weights are fixed at their final values, yielding the trained convolutional neural network.
Different VR scene tasks focus on measuring different ADHD features, such as inattention and hyperactivity-impulsivity; the machine learning algorithm may therefore take as input parameters various indices of the characteristic parameters obtained while the tested person completes several tasks in VR.
For example, the classification of the various modal parameters can be unified into one complete CNN structure that merges the task parameters of each individual VR scene into a single CNN ADHD-classification network, itself a CNN model consisting of a series of convolutional, pooling, and ReLU activation layers. A network fusion method based on the point-wise gated Boltzmann machine (PGBM) may be employed: the last-layer feature vectors of the CNNs of two or more VR scenes are concatenated, the concatenated vector serves as the visible-layer input for the PGBM part, and the PGBM part is trained by contrastive divergence. The network connection weights obtained after training yield the task-relevant part of the concatenated feature vector, which serves as input to a newly added fully connected layer that is then trained; likewise, the back-propagation depth of the network is limited to this newly added layer. The Softmax loss function again guides the network training process.
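A much-reduced sketch of the fusion stage is shown below: it keeps only the concatenation of per-scene feature vectors and the newly added fully connected layer, replacing the PGBM gating (trained by contrastive divergence) with direct concatenation. Restricting gradients to the new head mirrors the limited back-propagation depth described; all names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

def fuse_and_train_head(feats_a: torch.Tensor, feats_b: torch.Tensor,
                        labels: torch.Tensor, n_classes: int = 4) -> nn.Module:
    """Concatenate two scenes' last-layer CNN features and train a new head.

    feats_a, feats_b: last-layer feature vectors (batch x dim) from two
    per-scene CNNs; the PGBM selection stage is omitted in this simplification.
    """
    fused = torch.cat([feats_a.detach(), feats_b.detach()], dim=1)
    head = nn.Linear(fused.shape[1], n_classes)  # newly added fully connected layer
    criterion = nn.CrossEntropyLoss()            # Softmax loss guides training
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
    for _ in range(100):                         # only the new head receives gradients
        optimizer.zero_grad()
        loss = criterion(head(fused), labels)
        loss.backward()
        optimizer.step()
    return head
```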
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 8 shows a block diagram of a data processing apparatus for attention deficit hyperactivity disorder according to an embodiment of the present application, and for convenience of explanation, only the parts related to the embodiment of the present application are shown.
Referring to fig. 8, the apparatus 100 may include: a data acquisition module 110, a data calculation module 120, and a result output module 130.
The data acquisition module 110 is configured to acquire input data of a subject completing a task in a virtual reality environment;
a data calculation module 120, configured to calculate, based on the input data, a characteristic parameter of the subject;
and the result output module 130 is used for outputting a classification result based on the characteristic parameters and the trained machine learning model.
In one possible implementation, at least one game scene is stored in the virtual reality environment, and the subject completes tasks in the game scene.
In one possible implementation, the input data includes at least one of task performance data, motion sensing data, eye tracking data, and electroencephalogram data; the data acquisition module 110 may be specifically configured to:
acquiring the task performance data under the current task through a gesture tracker and/or a language processing device;
acquiring the motion sensing data acquired by the action recorder under the current task;
acquiring the eye movement tracking data acquired by the eye movement tracking equipment under the current task;
and acquiring the brain electricity data acquired by the brain electricity acquisition device under the current task.
In one possible implementation, the task performance data include: the number of correct responses of the subject to target stimuli, the response start time of the subject when a target stimulus occurs, and the number of times the subject fails to respond when a target stimulus occurs, where one task includes at least one target stimulus. In the case where the input data are task performance data, the data calculation module 120 may specifically be configured to:
calculating the ratio of the correct response times to the total target stimulus number in the current task to obtain the correct rate in the characteristic parameters;
calculating the difference between the response start time and the target stimulus start time for each correct response, and calculating the response-time standard deviation in the characteristic parameters based on these differences;
counting the number of times the response start time precedes the corresponding target stimulus start time in the current task as the error count in the characteristic parameters;
recording the number of times the subject fails to respond when a target stimulus occurs as the miss count in the characteristic parameters.
In one possible implementation, the motion sensing data includes: a stationary time period of the subject, a motion time of the subject when motion is transformed, a position coordinate of a motion path recorded by the motion recorder when the subject moves each time, a motion path of the subject during completion of the task, and the number of the motion paths; in the case where the input data is motion sensing data, the data calculation module 120 may specifically be configured to:
summing all the rest time periods, and determining the rest time length in the characteristic parameters according to the quotient of the sum result and the number of the rest time periods;
searching the number of the action time in a preset time period and recording the number as the number of the movements in the characteristic parameters;
calculating the area covered by each of the subject's movements based on the position coordinates, and recording the sum of all these areas as the subject's movement region in the characteristic parameters;
summing all the crossing points of the motion paths, and determining the motion complexity in the characteristic parameters according to the quotient of the summation result and the number of the motion paths;
based on the position coordinates, a displacement of the subject in the characteristic parameters to accomplish the task is calculated.
In one possible implementation, the eye movement tracking data includes eye coordinates of the subject, a time of eye gaze target stimulation, and an order of eye gaze target stimulation; in the case where the input data is eye-tracking data, the data calculation module 120 may specifically be configured to:
determining the number of gazes of the subject in the characteristic parameters on each target stimulus based on the eyeball coordinates;
recording the time of eyeball fixation on the target stimulus as the fixation time in the characteristic parameters;
determining a visual scan path of the subject in the characteristic parameters based on the order in which the eyeballs gaze at a target stimulus;
based on the visual scan path and the gaze time, a visual scan strategy of the subject in the characteristic parameters is obtained.
In one possible implementation, the apparatus 100 further includes:
and the training module is used for training the machine learning model based on the input sample to obtain the trained machine learning model.
In one possible implementation, the training module may be specifically configured to:
taking at least one index of characteristic parameters obtained when a tested person completes at least one task in the virtual reality environment as an input parameter, wherein the tested person comprises a normal subject and a diseased subject;
and training the machine learning model based on the input samples by taking all the input parameters as input samples to obtain the trained machine learning model.
It should be noted that, because the content of information interaction and execution process between the above devices/units is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
The embodiments of the present application further provide a terminal device. Referring to fig. 9, the terminal device 400 may include: at least one processor 410, a memory 420, and a computer program stored in the memory 420 and executable on the at least one processor 410. When executing the computer program, the processor 410 performs the steps of any of the method embodiments described above, such as steps S101 to S103 in the embodiment shown in fig. 2; alternatively, it may perform the functions of the modules/units in the apparatus embodiments described above, such as the functions of modules 110 to 130 shown in fig. 8.
By way of example, the computer program may be partitioned into one or more modules/units, which are stored in the memory 420 and executed by the processor 410 to complete the present application. The one or more modules/units may be a series of computer program segments capable of performing specific functions, and the segments are used to describe the execution of the computer program in the terminal device 400.
It will be appreciated by those skilled in the art that fig. 9 is merely an example of a terminal device and is not limiting of the terminal device, and may include more or fewer components than shown, or may combine certain components, or different components, such as input-output devices, network access devices, buses, etc.
The processor 410 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor.
The memory 420 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like. The memory 420 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 420 may also be used to temporarily store data that has been output or is to be output.
The bus may be an industry standard architecture (ISA) bus, a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the buses in the drawings of the present application are not limited to only one bus or one type of bus.
The data processing method for attention deficit hyperactivity disorder provided by the embodiment of the application can be applied to terminal equipment such as computers, tablet computers, notebook computers, netbooks, personal digital assistants (personal digital assistant, PDA) and the like, and the embodiment of the application does not limit the specific type of the terminal equipment.
Taking the terminal device as a computer as an example. Fig. 10 is a block diagram showing a part of the structure of a computer provided with an embodiment of the present application. Referring to fig. 10, a computer includes: communication circuit 510, memory 520, input unit 530, display unit 540, audio circuit 550, wireless fidelity (wireless fidelity, wiFi) module 560, processor 570, and power supply 580.
The following describes the components of the computer in detail with reference to fig. 10:
The communication circuit 510 may be used to receive and transmit signals during information transfer or a call; in particular, after receiving an image sample transmitted by the image acquisition device, it forwards the sample to the processor 570 for processing, and it also sends image acquisition instructions to the image acquisition apparatus. Typically, the communication circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the communication circuit 510 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including, but not limited to, Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), and the like.
The memory 520 may be used to store software programs and modules, and the processor 570 performs the computer's various functional applications and data processing by running the software programs and modules stored in the memory 520. The memory 520 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.); the data storage area may store data created according to the use of the computer (such as audio data, phonebooks, etc.). In addition, the memory 520 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 530 may be used to receive input numeric or character information and to generate key signal inputs related to subject settings and function control of the computer. In particular, the input unit 530 may include a touch panel 531 and other input devices 532. The touch panel 531, also referred to as a touch screen, may collect touch operations performed by the subject on or near it (for example, operations performed by the subject on or near the touch panel 531 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a predetermined program. Alternatively, the touch panel 531 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the subject, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, and sends these to the processor 570; it can also receive commands from the processor 570 and execute them. The touch panel 531 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 531, the input unit 530 may include other input devices 532, which may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, a switch key, etc.), a trackball, a mouse, and a joystick.
The display unit 540 may be used to display information input by the subject or provided to the subject, as well as the computer's various menus. The display unit 540 may include a display panel 541, which may optionally be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like. Further, the touch panel 531 may cover the display panel 541; when the touch panel 531 detects a touch operation on or near it, the operation is transferred to the processor 570 to determine the type of the touch event, and the processor 570 then provides a corresponding visual output on the display panel 541 according to that type. Although in fig. 10 the touch panel 531 and the display panel 541 are shown as two independent components implementing the input and output functions of the computer, in some embodiments the touch panel 531 and the display panel 541 may be integrated to implement both functions.
The audio circuit 550 may provide an audio interface between the subject and the computer. On one hand, the audio circuit 550 may convert received audio data into an electrical signal and transmit it to a speaker, which converts it into a sound signal for output; on the other hand, a microphone converts collected sound signals into electrical signals, which the audio circuit 550 receives and converts into audio data; after the audio data are processed by the processor 570, they are sent via the communication circuit 510 to, for example, another computer, or output to the memory 520 for further processing.
WiFi is a short-range wireless transmission technology; through the WiFi module 560, the computer can help the subject send and receive e-mail, browse web pages, access streaming media, and so on, providing the subject with wireless broadband Internet access. Although fig. 10 shows the WiFi module 560, it is understood that it is not an essential component of the computer and may be omitted entirely as required without changing the essence of the invention.
The processor 570 is the control center of the computer. It connects the various parts of the entire computer using various interfaces and lines, and performs the computer's functions and processes its data by running or executing the software programs and/or modules stored in the memory 520 and calling the data stored in the memory 520, thereby monitoring the computer as a whole. Optionally, the processor 570 may include one or more processing units; preferably, the processor 570 may integrate an application processor, which mainly handles the operating system, subject interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 570.
The computer also includes a power supply 580 (e.g., a battery) for powering the various components. Preferably, the power supply 580 is logically coupled to the processor 570 via a power management system, so that charging, discharging, and power consumption management are handled by the power management system.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the embodiments of the data processing method for attention deficit hyperactivity disorder described above.
Embodiments of the present application also provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform the steps of the embodiments of the data processing method for attention deficit hyperactivity disorder described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present application implements all or part of the flow of the methods in the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor it implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, for example a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer readable media may not include electrical carrier signals and telecommunications signals.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other manners. For example, the apparatus/network device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included within the scope of protection of the present application.

Claims (4)

1. A data processing apparatus for attention deficit hyperactivity disorder, comprising:
the data acquisition module is used for acquiring input data of a subject completing tasks in a virtual reality environment, wherein at least one game scene is stored in the virtual reality environment and the subject completes the tasks in the game scene; the game scenes comprise finding the differences between two environmental scenes, a VR fixed-target archery game, a VR moving-target game, and recognition of people's expressions in a social situation; in the VR fixed-target archery game, the subject is required to gaze at the target, and within a specified time, the closer the duration of gazing at the target is to the specified time, the higher the ring score of the shot (a hypothetical scoring sketch is given after the claims); the VR moving-target game requires the subject to respond as quickly as possible to target stimuli and not to respond to non-target stimuli; the input data comprise task performance data, motion sensing data, eye movement tracking data, and electroencephalogram data; the task performance data include: the number of correct responses of the subject to the target stimuli, the response start time of the subject when a target stimulus occurs, and the number of non-responses of the subject when a target stimulus occurs, wherein one task comprises at least one target stimulus; the motion sensing data include: the stationary time periods of the subject, the action times of the subject at movement transitions, the position coordinates of the movement path recorded by a motion recorder each time the subject moves, the movement paths of the subject during completion of the task, and the number of the movement paths; the eye movement tracking data include the eyeball coordinates of the subject, the time of eyeball gaze on a target stimulus, and the order of eyeball gaze on target stimuli; the electroencephalogram data are the electroencephalogram and EEG-evoked event-related potentials;
the data calculation module is used for calculating characteristic parameters of the subject based on the input data (an illustrative sketch of several of these computations is given after the claims), including: calculating the ratio of the number of correct responses to the total number of target stimuli in the current task to obtain the correct rate in the characteristic parameters; calculating the difference between the response start time of each correct response and the corresponding target stimulus start time, and calculating the response-time standard deviation in the characteristic parameters based on these differences; counting, under the current task, the number of times the response start time precedes the corresponding target stimulus start time, as the error count in the characteristic parameters; recording the number of non-responses of the subject when target stimuli occur as the missed-report count in the characteristic parameters; summing all the stationary time periods, and determining the stationary duration in the characteristic parameters from the quotient of the sum and the number of stationary time periods; counting the action times within a preset time period and recording the count as the number of movements in the characteristic parameters; calculating the area covered by each movement of the subject based on the position coordinates, and recording the sum of all these areas as the movement area of the subject in the characteristic parameters; summing the crossing points of all the movement paths, and determining the movement complexity in the characteristic parameters from the quotient of the sum and the number of movement paths; calculating, based on the position coordinates, the displacement of the subject in completing the task, in the characteristic parameters; determining, based on the eyeball coordinates, the number of gazes of the subject on each target stimulus, in the characteristic parameters; recording the time of eyeball gaze on a target stimulus as the gaze time in the characteristic parameters; determining the visual scan path of the subject in the characteristic parameters based on the order in which the eyeballs gaze at the target stimuli; obtaining the visual scanning strategy of the subject in the characteristic parameters based on the visual scan path and the gaze time; and determining time-frequency domain features and P300 features based on the electroencephalogram data;
and the result output module is used for outputting a classification result based on the characteristic parameters and a trained machine learning model.
2. The attention deficit hyperactivity disorder data processing apparatus of claim 1, wherein the data acquisition module is configured to:
acquire the task performance data under the current task through a gesture tracker and/or a language processing device;
acquire the motion sensing data collected by the motion recorder under the current task;
acquire the eye movement tracking data collected by the eye movement tracking device under the current task;
and acquire the electroencephalogram data collected by the electroencephalogram acquisition device under the current task.
3. The attention deficit hyperactivity disorder data processing apparatus of claim 1, further comprising:
a training module for training the machine learning model based on input samples to obtain the trained machine learning model.
4. The data processing apparatus for attention deficit hyperactivity disorder as claimed in claim 3, wherein the training module is configured to:
take, as input parameters, at least one index of the characteristic parameters obtained when tested persons complete at least one task in the virtual reality environment, wherein the tested persons comprise normal subjects and diseased subjects;
and take all the input parameters as input samples and train the machine learning model on those samples to obtain the trained machine learning model (a training sketch under stated assumptions follows the claims).
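Illustrative note (not part of the claims): claim 1's VR fixed-target archery game scores shots by how closely the gaze duration matches the specified time. The patent does not give a concrete scoring formula, so the following Python sketch assumes a hypothetical linear mapping onto a 10-ring target; the function name and the linear penalty are assumptions.

def archery_ring_score(gaze_seconds, specified_seconds):
    # Hypothetical ring-score rule for the VR fixed-target archery game:
    # the patent only states that a gaze duration closer to the specified
    # time yields a higher ring score; this linear 10-ring mapping is an
    # assumption, not the patented formula.
    if specified_seconds <= 0:
        raise ValueError("specified time must be positive")
    # Relative miss between gaze duration and the specified time, clipped
    # to [0, 1]; 0 means the gaze duration matched the specified time.
    miss = min(abs(specified_seconds - gaze_seconds) / specified_seconds, 1.0)
    return round(10 * (1.0 - miss))  # 10 rings for an exact match
print(archery_ring_score(4.5, 5.0))  # prints 9: close to the 5 s target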
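Illustrative note (not part of the claims): the data calculation module of claim 1 reduces raw task performance and motion sensing data to scalar characteristic parameters. The following minimal Python sketch shows how a few of them (correct rate, response-time standard deviation, error and missed-report counts, stationary duration, and movement complexity) might be computed; the data layouts are assumed, and this is not the patented implementation.

from statistics import pstdev
def characteristic_parameters(trials, stationary_periods, path_crossings):
    # trials: one dict per target stimulus with keys 'stimulus_onset' and
    # 'response_onset' (None when the subject did not respond); the other
    # arguments are a list of stationary durations in seconds and a list
    # of crossing-point counts, one per recorded movement path.
    total = len(trials)
    responded = [t for t in trials if t['response_onset'] is not None]
    # Correct responses begin after the stimulus onset; responses that
    # precede the onset are counted as errors (premature responses).
    correct = [t for t in responded if t['response_onset'] >= t['stimulus_onset']]
    rts = [t['response_onset'] - t['stimulus_onset'] for t in correct]
    return {
        'correct_rate': len(correct) / total if total else 0.0,
        'rt_std': pstdev(rts) if rts else 0.0,    # response-time standard deviation
        'errors': len(responded) - len(correct),  # premature responses
        'missed': total - len(responded),         # stimuli with no response
        # Mean stationary duration: sum of the periods over their count.
        'stationary_duration': (sum(stationary_periods) / len(stationary_periods)
                                if stationary_periods else 0.0),
        # Movement complexity: total crossing points over the path count.
        'movement_complexity': (sum(path_crossings) / len(path_crossings)
                                if path_crossings else 0.0),
    }

The movement area, displacement, gaze counts, visual scan path and strategy, and the EEG time-frequency and P300 features would be derived analogously from the position coordinates, eyeball coordinates, and electroencephalogram data.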
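Illustrative note (not part of the claims): claims 3 and 4 train a machine learning model on input samples built from the characteristic parameters of normal and diseased subjects, and the result output module of claim 1 applies the trained model. The patent does not name a specific model, so the following sketch assumes scikit-learn and a random forest trained on synthetic stand-in data; the model choice, feature count, labels, and split are all assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
rng = np.random.default_rng(0)
# Synthetic stand-in samples: 100 tested persons x 6 characteristic
# parameters (e.g. correct rate, response-time standard deviation, errors,
# missed reports, stationary duration, movement complexity); labels are
# 0 = normal subject, 1 = diseased subject.
X = rng.normal(size=(100, 6))
y = rng.integers(0, 2, size=100)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)            # training module (claims 3 and 4)
pred = model.predict(X_test)           # result output module (claim 1)
print(f"held-out accuracy: {accuracy_score(y_test, pred):.2f}")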
CN201911398269.9A 2019-12-30 2019-12-30 Data processing device for attention deficit hyperactivity disorder Active CN110970130B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201911398269.9A CN110970130B (en) 2019-12-30 2019-12-30 Data processing device for attention deficit hyperactivity disorder
PCT/CN2020/129452 WO2021135692A1 (en) 2019-12-30 2020-11-17 Data processing method and device for attention deficit hyperactivity disorder and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911398269.9A CN110970130B (en) 2019-12-30 2019-12-30 Data processing device for attention deficit hyperactivity disorder

Publications (2)

Publication Number Publication Date
CN110970130A CN110970130A (en) 2020-04-07
CN110970130B true CN110970130B (en) 2023-06-27

Family ID=70037418

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911398269.9A Active CN110970130B (en) 2019-12-30 2019-12-30 Data processing device for attention deficit hyperactivity disorder

Country Status (2)

Country Link
CN (1) CN110970130B (en)
WO (1) WO2021135692A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110970130B (en) * 2019-12-30 2023-06-27 佛山创视嘉科技有限公司 Data processing device for attention deficit hyperactivity disorder
CN111528859B (en) * 2020-05-13 2023-04-18 浙江大学人工智能研究所德清研究院 Child ADHD screening and evaluating system based on multi-modal deep learning technology
CN111528867A (en) * 2020-05-13 2020-08-14 湖州维智信息技术有限公司 Expression feature vector determination method for child ADHD screening and evaluating system
CN111563633A (en) * 2020-05-15 2020-08-21 上海乂学教育科技有限公司 Reading training system and method based on eye tracker
CN113889256A (en) * 2020-07-02 2022-01-04 波克科技股份有限公司 Method and device for identifying mild cognitive impairment waiting crowd through online game
CN113889258A (en) * 2020-07-02 2022-01-04 波克科技股份有限公司 Method and device for identifying mild cognitive impairment waiting crowd through online game
CN113435335B (en) * 2021-06-28 2022-08-12 平安科技(深圳)有限公司 Microscopic expression recognition method and device, electronic equipment and storage medium
CN113425293B (en) * 2021-06-29 2022-10-21 上海交通大学医学院附属新华医院 Auditory dyscognition disorder evaluation system and method
CN113456075A (en) * 2021-07-02 2021-10-01 西安中盛凯新技术发展有限责任公司 Concentration assessment training method based on eye movement tracking and brain wave monitoring technology
CN113576482B (en) * 2021-09-28 2022-01-18 之江实验室 Attention deviation training evaluation system and method based on composite expression processing
CN114743618A (en) * 2022-03-22 2022-07-12 湖南心康医学科技有限公司 Cognitive dysfunction treatment system and method based on artificial intelligence
TWI831178B (en) * 2022-04-13 2024-02-01 國立中央大學 Analysis apparatus, diagnostic system and analysis method for adhd
CN116664620B (en) * 2023-07-12 2024-08-16 深圳优立全息科技有限公司 Picture dynamic capturing method and related device based on tracking system
CN117198537B (en) * 2023-11-07 2024-03-26 北京无疆脑智科技有限公司 Task completion data analysis method and device, electronic equipment and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7942828B2 (en) * 2000-05-17 2011-05-17 The Mclean Hospital Corporation Method for determining fluctuation in attentional state and overall attentional state
IL148618A0 (en) * 2002-03-11 2002-09-12 Adhd Solutions Ltd A method for diagnosis and treatment of adhd and add, and a system for use thereof
US20050216243A1 (en) * 2004-03-02 2005-09-29 Simon Graham Computer-simulated virtual reality environments for evaluation of neurobehavioral performance
US20080280276A1 (en) * 2007-05-09 2008-11-13 Oregon Health & Science University And Oregon Research Institute Virtual reality tools and techniques for measuring cognitive ability and cognitive impairment
AU2012259507B2 (en) * 2011-05-20 2016-08-25 Nanyang Technological University Systems, apparatuses, devices, and processes for synergistic neuro-physiological rehabilitation and/or functional development
US11839472B2 (en) * 2016-07-19 2023-12-12 Akili Interactive Labs, Inc. Platforms to implement signal detection metrics in adaptive response-deadline procedures
KR102369850B1 (en) * 2016-08-03 2022-03-03 아킬리 인터랙티브 랩스 인크. Cognitive platform including computerized associative elements
JP7266582B2 (en) * 2017-08-15 2023-04-28 アキリ・インタラクティヴ・ラブズ・インコーポレイテッド Cognitive platform with computer-controlled elements
CN107519622A (en) * 2017-08-21 2017-12-29 南通大学 Spatial cognition rehabilitation training system and method based on virtual reality and the dynamic tracking of eye
CN109712710B (en) * 2018-04-26 2023-06-20 南京大学 Intelligent infant development disorder assessment method based on three-dimensional eye movement characteristics
CN110070944B (en) * 2019-05-17 2023-12-08 段新 Social function assessment training system based on virtual environment and virtual roles
CN110970130B (en) * 2019-12-30 2023-06-27 佛山创视嘉科技有限公司 Data processing device for attention deficit hyperactivity disorder

Also Published As

Publication number Publication date
WO2021135692A1 (en) 2021-07-08
CN110970130A (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN110970130B (en) Data processing device for attention deficit hyperactivity disorder
Huynh et al. Engagemon: Multi-modal engagement sensing for mobile games
Elzeiny et al. Machine learning approaches to automatic stress detection: A review
Conati et al. Modeling user affect from causes and effects
RU2708807C2 (en) Algorithm of integrated remote contactless multichannel analysis of psychoemotional and physiological state of object based on audio and video content
CN110349674A (en) Autism-spectrum obstacle based on improper activity observation and analysis assesses apparatus and system
Yoon et al. Emotion recognition of serious game players using a simple brain computer interface
CN113974589B (en) Multi-modal behavior paradigm evaluation optimization system and cognitive ability evaluation method
Al-Ghannam et al. Prayer activity monitoring and recognition using acceleration features with mobile phone
Bakhtiyari et al. Fuzzy model of dominance emotions in affective computing
Wang et al. Mgeed: a multimodal genuine emotion and expression detection database
Jianwattanapaisarn et al. Emotional characteristic analysis of human gait while real-time movie viewing
Li et al. A framework for using games for behavioral analysis of autistic children
Liu et al. Transition-aware housekeeping task monitoring using single wrist-worn sensor
Lambay et al. Machine learning assisted human fatigue detection, monitoring, and recovery
Jitpattanakul et al. Enhancing Sensor-Based Human Activity Recognition using Efficient Channel Attention
Salman et al. Improvement of Eye Tracking Based on Deep Learning Model for General Purpose Applications
CN116301473A (en) User behavior prediction method, device, equipment and medium based on virtual reality
Ekiz et al. Long short-term memory network based unobtrusive workload monitoring with consumer grade smartwatches
Zhang et al. Multimodal Fast–Slow Neural Network for learning engagement evaluation
Chepin et al. The improved method for robotic devices control with operator's emotions detection
Pillai et al. Comparison of concurrent cognitive load measures during n-back tasks
Bakkialakshmi et al. Effective Prediction System for Affective Computing on Emotional Psychology with Artificial Neural Network
Ishimaru et al. ARFLED: ability recognition framework for learning and education
Sun et al. Super-Resolution Level Separation: A Method for Enhancing Electroencephalogram Classification Accuracy Through Super-Resolution Level Separation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201012

Address after: Room 601, 6th floor, No.17, Qinghui Road, Central District neighborhood committee, Daliang sub district office, Shunde District, Foshan City, Guangdong Province

Applicant after: Foshan chuangshijia Technology Co.,Ltd.

Address before: 528300 Guangdong Foshan Shunde District Daliang Street New Gui Nan Road Yi Ju Garden two phase 19 19 401

Applicant before: Duan Xin

GR01 Patent grant