
CN110991277A - Multidimensional and multitask learning evaluation system based on deep learning - Google Patents


Info

Publication number
CN110991277A
CN110991277A
Authority
CN
China
Prior art keywords
user
learning
reading
module
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911139266.3A
Other languages
Chinese (zh)
Other versions
CN110991277B (en)
Inventor
李剑峰
张进
宋志远
史吉光
王洪波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Jianxin Intelligent Technology Co ltd
Original Assignee
Hunan Jianxin Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Jianxin Intelligent Technology Co ltd filed Critical Hunan Jianxin Intelligent Technology Co ltd
Priority to CN201911139266.3A
Publication of CN110991277A
Application granted
Publication of CN110991277B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393 Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00 Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40 Control techniques providing energy savings, e.g. smart controller or presence detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Educational Administration (AREA)
  • Multimedia (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Educational Technology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-dimensional, multi-task learning evaluation system based on deep learning. The system comprises drowsiness identification modules that recognize the opening and closing actions of the eyes and the eye-movement track. The opening-and-closing recognition detects the user's fatigued and drowsy state, and the user's attention is judged in combination with the eye-movement track; combined with head-posture recognition, the system judges whether the user's reading and learning posture is correct, and, together with the eye actions, judges the user's fatigued and drowsy state. The invention provides face recognition, drowsiness recognition, learning-emotion evaluation, an automatic scoring module, and myopia recognition, and can evaluate learning progress and remedial study in multiple dimensions.

Description

Multidimensional and multitask learning evaluation system based on deep learning
Technical Field
The invention relates to the technical field of intelligent equipment, and in particular to a multi-dimensional, multi-task learning evaluation system based on deep learning.
Background
The prior art has the following defects:
In the prior art, voice recognition is used only to control the brightness of the light; head posture and sitting position, the eye-movement track, and the opening and closing actions of the eyes are not recognized at the same time. As a result, the degree of intelligence is low, the myopia-prevention effect is poor, and the help to the user's study is small.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a multidimensional, multitask learning evaluation system based on deep learning. The system provides face recognition, drowsiness recognition, learning-emotion evaluation, an automatic scoring module, and myopia recognition; it can evaluate learning progress and remedial study in multiple dimensions, raises the degree of intelligence of learning aids, improves the myopia-prevention effect, and greatly helps to promote the user's learning.
The purpose of the invention is realized by the following technical scheme:
a multi-dimensional multi-task learning evaluation system based on deep learning comprises a first drowsiness and tiredness identification module, a first visual analysis module and a second visual analysis module, wherein the first drowsiness and tiredness identification module is used for identifying the actions of opening and closing eyes and identifying the track of the eyes; the opening and closing motion recognition is used for recognizing the tired and doze state of the user and judging the attention of the user by combining the eye movement track; the user is identified by combining the head posture to judge the correctness and the mistake of the reading and learning posture of the user, and the tired and sleepy state of the user is judged by combining the actions of eyes; the facial expression analysis module is used for judging the happy, nervous and excited states of the user in the learning process and specifically evaluating the learning process; the second sleepy and tired recognition module judges the learning sleepy and tired state of the user through eye opening and closing and head posture recognition, and establishes a data set for training the data set and testing the data set by collecting different sleepy postures; the learning course subject identification module is used for establishing a learning course subject data set through the reading and writing contents of the user and used for training the data set and testing the data set; the digital camera confirms the classification of the contents of the reading and writing subjects through the collected reading and writing images and the identification of the corresponding training set and test set; the learning emotion evaluation module is used for confirming a specific course learned by the user through the learning course subject identification module, and identifying and learning an expression index value of the subject by combining the facial expression identification module, and is used 
for evaluating the interest degree of the user and the mastering capacity of the content of the course in a multi-dimensional manner when the user learns different subjects; and the marking module is used for inputting standard answers into the background management system through the user control terminal when the image recognition confirms that the user writes a job task, scanning the image input with the answers, inputting the standard answers of each small question according to the content structure of the job, collecting the actual result of the answer of the user, comparing and recognizing the actual result with the standard answers of each small question and scoring the marking.
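The scoring flow just described (a standard answer entered per sub-question, then compared against the user's recognized answers) can be sketched as follows. This is a minimal illustration, not the patented implementation: recognition of the user's handwriting is assumed to have already produced answer strings, and the function name, question keys, and per-question point value are hypothetical.

```python
def score_assignment(standard_answers, user_answers, points_per_question=10):
    """Compare the user's recognized answers against the standard
    answers entered for each sub-question, and total the score."""
    per_question = {}
    total = 0
    for qid, expected in standard_answers.items():
        given = user_answers.get(qid, "")
        correct = given.strip().lower() == expected.strip().lower()
        per_question[qid] = correct
        if correct:
            total += points_per_question
    return total, per_question

score, detail = score_assignment(
    {"q1": "42", "q2": "photosynthesis"},   # entered via control terminal
    {"q1": "42", "q2": "respiration"},      # recognized from the scanned image
)
```

A real system would also need fuzzy matching to tolerate handwriting-recognition noise; exact string comparison is used here only to keep the sketch short.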
Further, a myopia-prevention identification module performs myopia prevention and early warning through a threshold calculation on a straight-line distance.
Further, the method comprises the following steps:
S1, determining the starting positions of the two end points of the line segment;
S2, confirming, through image recognition, the plane position of the reading material observed by the eyes, confirming the center line of the reading plane, and finding the shortest-distance point between the two eyes and the reading plane by straight-line detection with the Hough transform;
S3, comparing the minimum reading distance with a design threshold: if it is smaller than the threshold, a warning is given through the loudspeaker; if it is greater than or equal to the threshold, the user is confirmed to be in a normal reading posture.
Further, in step S1, the midpoint between the centers of the two eyes is taken as the starting point.
Further, in step S2, the end point is the contact point between the pen tip and the assignment text; the Hough transform is used to detect the distance between the midpoint of the line connecting the two eye centers and the pen-tip contact point on the text.
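Steps S1 to S3 reduce to a straight-line distance check between the midpoint of the two eye centers and the pen-tip contact point. A minimal sketch, assuming the vision pipeline (image recognition plus Hough-transform line detection) has already produced 3D coordinates in centimeters; the 30 cm threshold and all names are illustrative assumptions, not values from the patent.

```python
import math

def reading_distance(eye_left, eye_right, pen_tip):
    """Distance from the midpoint of the two eye centers (the S1
    starting point) to the pen-tip contact point (the S2 end point)."""
    midpoint = tuple((a + b) / 2 for a, b in zip(eye_left, eye_right))
    return math.dist(midpoint, pen_tip)

def check_posture(eye_left, eye_right, pen_tip, threshold_cm=30.0):
    """S3: flag a warning (to be voiced through the loudspeaker) when
    the reading distance drops below the design threshold."""
    d = reading_distance(eye_left, eye_right, pen_tip)
    return ("warn" if d < threshold_cm else "normal", d)

# A user sitting roughly 47 cm from the page: normal posture.
status, d = check_posture((0, 0, 40), (6, 0, 40), (3, 25, 0))
```

Bringing the camera's 2D detections into a metric 3D frame (the hard part in practice) is outside the scope of this sketch.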
Further, the system comprises a management module for managing user identity information.
Further, the system comprises a face recognition module, which builds personal face-recognition data from the collected face data; when the user uses the intelligent desk lamp, face data are collected by the digital camera to identify the user.
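The identity check performed by the face recognition module is, at its core, a nearest-neighbor comparison between a freshly captured face embedding and the enrolled ones. A hedged sketch: the embeddings below are tiny stand-in vectors (a real system would obtain high-dimensional embeddings from a deep face model), and the similarity threshold is an assumption.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def identify_user(probe, enrolled, threshold=0.8):
    """Return the enrolled identity whose embedding best matches the
    probe embedding, or None when nothing clears the threshold."""
    best_id, best_sim = None, threshold
    for user_id, embedding in enrolled.items():
        sim = cosine_similarity(probe, embedding)
        if sim >= best_sim:
            best_id, best_sim = user_id, sim
    return best_id

# Stand-in enrolled embeddings for two hypothetical users.
enrolled = {"alice": (1.0, 0.0, 0.0), "bob": (0.0, 1.0, 0.0)}
who = identify_user((0.9, 0.1, 0.0), enrolled)
```

Returning None for an unrecognized face lets the desk lamp fall back to an enrollment flow rather than misattributing the session.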
Further, a cloud server is used for distributing and updating the firmware program and for data backup.
The invention has the beneficial effects that:
(1) The invention provides face recognition, drowsiness recognition, learning-emotion evaluation, an automatic scoring module, and myopia recognition; it can evaluate learning progress and remedial study in multiple dimensions, raises the degree of intelligence, improves the myopia-prevention effect, and greatly helps the user's learning. Specifically, besides controlling the brightness and working mode of the desk lamp through ordinary voice recognition, myopia can be recognized, judged, and prevented in the user's two working modes of reading and of writing answers. In this embodiment, the intelligent desk lamp provides lighting for study while evaluating the user's learning state from the head posture and the eye opening-and-closing state; the position of the center point of each eye is found by straight-line detection with the Hough transform, and the user is reminded of good eye-use habits through a set threshold early warning, achieving the optimal eye-use state for preventing myopia.
(2) The invention evaluates the user during learning from the head posture, the eye opening-and-closing state, and the eye-movement track, and can judge whether drowsy or fatigued actions exist, for example the head swinging repeatedly within a certain frequency, or the eyes being in a sleeping state; the system automatically recognizes these and reminds the user, who may then take a break or refresh, achieving intelligent recognition.
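The cues named in (2), repeated head swings within a certain frequency band and eyes that stay closed, can be expressed as a simple per-window rule. This sketch assumes the vision pipeline already yields per-frame eye-open flags and head yaw angles; the frame rate, closed-eye duration, and swing frequency band are illustrative assumptions.

```python
def drowsiness_flags(eye_open, head_yaw, fps=10,
                     closed_sec=2.0, swing_hz_min=0.5, swing_hz_max=2.0):
    """Per-window drowsiness cues: a sustained run of closed-eye frames,
    or the head yaw crossing center at a repetitive swing frequency."""
    # Longest run of consecutive closed-eye frames.
    longest = run = 0
    for is_open in eye_open:
        run = 0 if is_open else run + 1
        longest = max(longest, run)
    eyes_cue = longest >= closed_sec * fps

    # Each pair of zero-crossings of the yaw angle is one full swing.
    crossings = sum(1 for a, b in zip(head_yaw, head_yaw[1:]) if a * b < 0)
    duration = len(head_yaw) / fps
    swing_hz = crossings / (2 * duration) if duration else 0.0
    head_cue = swing_hz_min <= swing_hz <= swing_hz_max
    return eyes_cue, head_cue

eyes_cue, head_cue = drowsiness_flags(
    [True] * 5 + [False] * 25,      # eyes stay closed for 2.5 s at 10 fps
    ([5.0] * 5 + [-5.0] * 5) * 3,   # slow repetitive head swing
)
```

Either cue firing would trigger the speaker or light reminder; the patent's deep-learning classifier would replace these hand-set thresholds with a model trained on the collected drowsy-posture data set.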
(3) The system communicates with the smartphone control terminal through the communication module; when image detection finds a posture error, the loudspeaker sounds to remind the user, or the light brightness is adjusted, so that the user pays attention to posture and avoids improper postures that cause myopia, and is reminded to improve attention and avoid drowsiness and fatigue during learning and assignments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a block diagram of the present invention.
Detailed Description
The technical solutions of the present invention are further described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the following. All of the features disclosed in this specification, or all of the steps of a method or process so disclosed, may be combined in any combination, except combinations where mutually exclusive features and/or steps are used.
Any feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving equivalent or similar purposes, unless expressly stated otherwise. That is, unless expressly stated otherwise, each feature is only an example of a generic series of equivalent or similar features.
Specific embodiments of the present invention will be described in detail below, and it should be noted that the embodiments described herein are only for illustration and are not intended to limit the present invention. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that: it is not necessary to employ these specific details to practice the present invention. In other instances, well-known circuits, software, or methods have not been described in detail so as not to obscure the present invention.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Before describing the embodiments, some necessary terms need to be explained. For example:
if the terms "first," "second," etc. are used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. Thus, a "first" element discussed below could also be termed a "second" element without departing from the teachings of the present invention. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present.
The various terms appearing in this application are used for the purpose of describing particular embodiments only and are not intended as limitations of the invention, with the singular being intended to include the plural unless the context clearly dictates otherwise.
When the terms "comprises" and/or "comprising" are used in this specification, these terms are intended to specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence and/or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As shown in fig. 1, a multidimensional, multitask learning evaluation system based on deep learning comprises a first drowsiness identification module, an eye-movement track identification module, and a second drowsiness identification module. The first drowsiness identification module identifies the opening and closing actions of the eyes; the opening-and-closing recognition detects the user's fatigued and drowsy state, and the user's attention is judged in combination with the eye-movement track, while head-posture recognition is combined to judge whether the user's reading and learning posture is correct or incorrect and, together with the eye actions, to judge the user's fatigued and drowsy state. A facial-expression analysis module judges the user's happy, nervous, and excited states during learning and evaluates the learning process in detail. The second drowsiness identification module judges the user's drowsy learning state through eye opening-and-closing and head-posture recognition, and builds a data set by collecting different drowsy postures, for training and testing. A learning-course subject identification module builds a subject data set from the user's reading and writing content, for training and testing; from the captured reading and writing images and recognition against the corresponding training and test sets, the digital camera confirms the subject classification of the content. A learning-emotion evaluation module confirms, through the subject identification module, the specific course the user is studying and, combined with the facial-expression recognition module, obtains an expression index value for that subject; it evaluates, in multiple dimensions, the user's degree of interest in different subjects and mastery of the course content. A scoring module: when image recognition confirms that the user is writing an assignment, standard answers are entered into the background management system through the user's control terminal; the image containing the answers is scanned, the standard answer for each sub-question is entered according to the structure of the assignment, the user's actual answers are collected and compared with the standard answer of each sub-question, and the assignment is scored.
Example one
In this embodiment, the user's identity information is authenticated: the face recognition module builds personal face-recognition data from the collected face data, and when the user uses the intelligent desk lamp, face data are collected by the digital camera to identify the user. In terms of functions, the drowsiness identification module identifies the opening and closing actions of the eyes and the eye-movement track; the opening-and-closing recognition detects the user's fatigued and drowsy state, and attention is judged in combination with the eye-movement track. Head-posture recognition judges whether the user's reading and learning posture is correct or incorrect and, combined with the eye actions, also judges the user's fatigued and drowsy state.
Facial-expression analysis judges the user's states of happiness, tension, excitement, and so on during learning, and evaluates the learning process in detail. For example, if a mathematics test paper needs to be finished today, the user's emotional changes while completing it are judged by analyzing the process, and a level evaluation can be made of the user's learning tension.
The drowsiness identification module judges the user's drowsy learning state through eye opening-and-closing and head-posture recognition, for example: the eyes are open and motionless while the head keeps moving back and forth, or the head posture does not change while the pupil detail decreases. A data set is built by collecting different drowsy postures and is used for training and testing.
The learning-course subject identification module builds a subject data set from the user's reading (writing) content, for training and testing; from the captured reading (writing) images and recognition against the corresponding training and test sets, the digital camera confirms whether the content belongs to Chinese, mathematics, physics, or chemistry.
The learning-emotion evaluation module confirms, through the subject identification module, the specific course the user is studying, and at the same time, combined with the facial-expression recognition module, obtains an expression index value for that subject, so as to evaluate, in different dimensions, the user's degree of interest in different subjects, mastery of the course content, and so on. Automatic scoring module: when image recognition confirms that the user is writing an assignment, standard answers are entered into the background management system through the user's control terminal; the image containing the answers is scanned, the standard answer of each sub-question is entered according to the structure of the assignment, and the user's actual answers are then collected and compared with the entered standard answers for recognition. Management module: manages user identity information. The cloud server distributes and updates the desk-lamp firmware and backs up data.
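The combination described above (a subject label from the course identification module plus expression index values from the facial-expression module) can be sketched as a per-subject aggregation. The 0-to-1 "happy"/"nervous" index values and all names here are hypothetical; the patent does not specify the index scale or the aggregation formula.

```python
from statistics import mean

def evaluate_emotion(samples):
    """Aggregate (subject, expression-index) samples into a per-subject
    summary: interest from the happy index, tension from the nervous one."""
    by_subject = {}
    for subject, indices in samples:
        by_subject.setdefault(subject, []).append(indices)
    summary = {}
    for subject, rows in by_subject.items():
        summary[subject] = {
            "interest": round(mean(r["happy"] for r in rows), 2),
            "tension": round(mean(r["nervous"] for r in rows), 2),
            "samples": len(rows),
        }
    return summary

report = evaluate_emotion([
    ("math", {"happy": 0.2, "nervous": 0.7}),
    ("math", {"happy": 0.4, "nervous": 0.5}),
    ("chinese", {"happy": 0.8, "nervous": 0.1}),
])
```

Comparing the per-subject summaries over time is what gives the "multi-dimensional" view: the same user can show high interest in one subject and high tension in another.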
As for other technical features of the embodiment, those skilled in the art can flexibly select them according to the actual situation to meet different specific requirements. It will be apparent to one of ordinary skill in the art that the invention may be practiced without these specific details; in other instances, well-known algorithms, methods, or systems are not described in detail so as not to obscure the invention, the scope of which is defined by the claims.
For simplicity of explanation, the foregoing method embodiments are described as a series of acts or combinations of acts, but those skilled in the art will appreciate that the present application is not limited by the order of the acts described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments, and that the acts and elements referred to are not necessarily required by this application.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The disclosed systems, modules, and methods may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative; the division into units is only a logical division, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in another form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those skilled in the art that all or part of the processes in the methods for implementing the embodiments described above can be implemented by instructing the relevant hardware through a computer program, and the program can be stored in a computer-readable storage medium, and when executed, the program can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a ROM, a RAM, etc.
The foregoing is illustrative of the preferred embodiments of this invention. It is to be understood that the invention is not limited to the precise forms disclosed herein, and that various other combinations, modifications and environments may be resorted to within the scope of the inventive concept as disclosed herein, whether described above or apparent to those skilled in the relevant art. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
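The eye opening/closing recognition used by the drowsiness and fatigue modules described above is commonly realized with an eye-aspect-ratio (EAR) check; the sketch below illustrates one such realization. The six-landmark layout, the 0.2 threshold and the consecutive-frame rule are assumptions for illustration, not the patent's prescribed method.

```python
import math

def ear(landmarks):
    """Eye aspect ratio from six (x, y) eye landmarks p1..p6.

    Vertical openings p2-p6 and p3-p5 are divided by the horizontal
    width p1-p4; the ratio collapses toward zero as the eye closes.
    """
    p1, p2, p3, p4, p5, p6 = landmarks
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def drowsy(ear_values, threshold=0.2, min_consecutive=3):
    """Flag drowsiness when EAR stays below the threshold for several frames."""
    run = 0
    for v in ear_values:
        run = run + 1 if v < threshold else 0
        if run >= min_consecutive:
            return True
    return False

# Hypothetical landmark sets for an open and a nearly closed eye
open_eye   = [(0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
```

A per-frame EAR stream from the camera, fed through `drowsy`, gives the sustained-closure signal that, combined with head posture, would drive the fatigue judgement.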

Claims (8)

1. A multi-dimensional and multi-task learning evaluation system based on deep learning is characterized by comprising the following components:
the first drowsiness and fatigue recognition module, which is used for recognizing eye opening and closing actions and for recognizing the track of eye movement; the opening and closing recognition is used for recognizing the user's fatigued and drowsy state, and the eye movement track is combined to judge the user's attention; head posture recognition is further combined to judge whether the user's reading and learning posture is correct, and eye actions are combined to judge the user's fatigued and drowsy state;
the facial expression analysis module, which is used for judging the happy, nervous and excited states of the user during learning and for concretely evaluating the learning process;
the second drowsiness and fatigue recognition module, which judges the user's drowsy and fatigued learning state through eye opening/closing and head posture recognition, and establishes a data set by collecting different drowsy postures, divided into a training data set and a test data set;
the learning course subject identification module, which is used for establishing a learning course subject data set from the user's reading and writing content, divided into a training data set and a test data set; the digital camera confirms the classification of the subject content being read or written through the collected reading and writing images and recognition against the corresponding training and test sets;
the learning emotion evaluation module, which is used for confirming the specific course the user is studying through the learning course subject identification module and, in combination with the facial expression recognition module, for identifying expression index values for that subject, so as to evaluate in a multi-dimensional manner the user's interest in different subjects and mastery of the course content;
and the marking module, which is used for entering standard answers into the background management system through the user control terminal when image recognition confirms that the user is writing an assignment, scanning in the image carrying the answers, entering the standard answer of each sub-question according to the structure of the assignment, collecting the user's actual answers, comparing and recognizing them against the entered standard answer of each sub-question, and scoring the assignment.
2. The deep-learning-based multi-dimensional and multi-task learning evaluation system of claim 1, further comprising a myopia prevention recognition module configured to perform myopia prevention and early warning through threshold calculation of a straight-line distance.
3. The deep-learning-based multi-dimensional and multi-task learning evaluation system according to claim 2, wherein the threshold calculation comprises the following steps:
s1, determining the initial positions of the two points of the line segment;
s2, confirming the plane position of the reading observed by the eyes through image recognition, confirming the central line of the reading plane, and finding the shortest distance point between the two eyes and the reading plane through carrying out straight line detection by Hough transform;
and S3, comparing the data of the minimum reading distance with a design threshold, if the data is smaller than the threshold, warning and reminding through a loudspeaker, and if the data is larger than or equal to the threshold, confirming that the user belongs to a normal reading mode.
4. The deep-learning-based multi-dimensional and multi-task learning evaluation system of claim 3, wherein in step S1 the midpoint between the centres of the two eye axes is used as the starting point.
5. The deep-learning-based multi-dimensional and multi-task learning evaluation system of claim 4, wherein in step S2 the end point is the contact point of the writing pen tip with the assignment text; and the distance between the midpoint of the line connecting the centres of the two eyes and the contact point of the pen tip with the text is detected using Hough transform.
6. The deep-learning-based multi-dimensional and multi-task learning evaluation system according to any one of claims 1 to 5, further comprising a management module for managing user identity information.
7. The deep-learning-based multi-dimensional and multi-task learning evaluation system of claim 6, further comprising a face recognition module for establishing personal face recognition data from the collected face data; while the user uses the intelligent desk lamp, face data are collected through a digital camera to identify the user's identity information.
8. The deep-learning-based multi-dimensional and multi-task learning evaluation system of claim 6, further comprising a cloud server for distributing and updating firmware programs and for data backup.
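Steps S1 to S3 of claim 3 (take the midpoint between the two eye centres, measure the straight-line distance to the reading point, compare it with a threshold) can be sketched as follows. This is an illustrative sketch only: the 30 cm threshold and the coordinates are assumptions, and the reading point, which the patent locates via Hough line detection on the reading plane, is given here directly.

```python
import math

def eye_midpoint(left_eye, right_eye):
    """S1: midpoint between the centres of the two eyes (any dimension)."""
    return tuple((a + b) / 2.0 for a, b in zip(left_eye, right_eye))

def reading_distance(left_eye, right_eye, reading_point):
    """S2: straight-line distance from the eye midpoint to the reading point."""
    return math.dist(eye_midpoint(left_eye, right_eye), reading_point)

def check_posture(left_eye, right_eye, reading_point, threshold_cm=30.0):
    """S3: 'warn' when closer than the threshold, otherwise 'normal'."""
    d = reading_distance(left_eye, right_eye, reading_point)
    return "warn" if d < threshold_cm else "normal"

# Hypothetical coordinates in cm: eyes 6 cm apart, page 20 cm away → too close
status = check_posture((0.0, 0.0, 0.0), (6.0, 0.0, 0.0), (3.0, 0.0, 20.0))
```

In the claimed system a "warn" result would trigger the loudspeaker reminder of step S3.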
CN201911139266.3A 2019-11-20 2019-11-20 Multi-dimensional multi-task learning evaluation system based on deep learning Active CN110991277B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911139266.3A CN110991277B (en) 2019-11-20 2019-11-20 Multi-dimensional multi-task learning evaluation system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911139266.3A CN110991277B (en) 2019-11-20 2019-11-20 Multi-dimensional multi-task learning evaluation system based on deep learning

Publications (2)

Publication Number Publication Date
CN110991277A true CN110991277A (en) 2020-04-10
CN110991277B CN110991277B (en) 2023-09-22

Family

ID=70085109

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911139266.3A Active CN110991277B (en) 2019-11-20 2019-11-20 Multi-dimensional multi-task learning evaluation system based on deep learning

Country Status (1)

Country Link
CN (1) CN110991277B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797324A (en) * 2020-08-07 2020-10-20 广州驰兴通用技术研究有限公司 Distance education method and system for intelligent education
CN112132922A (en) * 2020-09-24 2020-12-25 扬州大学 Method for realizing cartoon of images and videos in online classroom
CN116453384A (en) * 2023-06-19 2023-07-18 江西德瑞光电技术有限责任公司 Immersion type intelligent learning system based on TOF technology and control method

Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6419638B1 (en) * 1993-07-20 2002-07-16 Sam H. Hay Optical recognition methods for locating eyes
US20040152060A1 (en) * 2003-01-31 2004-08-05 Haru Ando Learning condition judging program and user condition judging system
US20050105827A1 (en) * 2003-09-09 2005-05-19 Fuji Photo Film Co., Ltd. Method and apparatus for detecting positions of center points of circular patterns
US20060071950A1 (en) * 2004-04-02 2006-04-06 Kurzweil Raymond C Tilt adjustment for optical character recognition in portable reading machine
WO2006081505A1 (en) * 2005-01-26 2006-08-03 Honeywell International Inc. A distance iris recognition system
US20060239509A1 (en) * 2005-04-26 2006-10-26 Fuji Jukogyo Kabushiki Kaisha Road line recognition apparatus
US20090202174A1 (en) * 2008-02-07 2009-08-13 Hisashi Shiba Pose estimation
KR20100016696A (en) * 2008-08-05 2010-02-16 주식회사 리얼맨토스 Student learning attitude analysis systems in virtual lecture
US20120208166A1 (en) * 2011-02-16 2012-08-16 Steve Ernst System and Method for Adaptive Knowledge Assessment And Learning
US20140161349A1 (en) * 2011-07-14 2014-06-12 Megachips Corporation Straight line detection apparatus and straight line detection method
WO2015024407A1 (en) * 2013-08-19 2015-02-26 国家电网公司 Binocular vision navigation system and method based on power robot
CN106127139A (en) * 2016-06-21 2016-11-16 东北大学 A kind of dynamic identifying method of MOOC course middle school student's facial expression
KR20170025245A (en) * 2015-08-28 2017-03-08 주식회사 코코넛네트웍스 Method For Providing Smart Lighting Service Based On Face Expression Recognition, Apparatus and System therefor
CN106599881A (en) * 2016-12-30 2017-04-26 首都师范大学 Student state determination method, device and system
WO2017092526A1 (en) * 2015-11-30 2017-06-08 广东百事泰电子商务股份有限公司 Smart table lamp with face distance measurement and near light reminder functions
CN108647657A (en) * 2017-05-12 2018-10-12 华中师范大学 A kind of high in the clouds instruction process evaluation method based on pluralistic behavior data
CN108664932A (en) * 2017-05-12 2018-10-16 华中师范大学 A kind of Latent abilities state identification method based on Multi-source Information Fusion
CN108805009A (en) * 2018-04-20 2018-11-13 华中师范大学 Classroom learning state monitoring method based on multimodal information fusion and system
CN108826071A (en) * 2018-07-12 2018-11-16 太仓煜和网络科技有限公司 A kind of reading desk lamp based on artificial intelligence
WO2019050074A1 (en) * 2017-09-08 2019-03-14 주식회사 듀코젠 Studying system capable of providing cloud-based digital question writing solution and implementing distribution service platform, and control method thereof
WO2019075820A1 (en) * 2017-10-20 2019-04-25 深圳市鹰硕技术有限公司 Test paper reviewing system
CN110333774A (en) * 2019-03-20 2019-10-15 中国科学院自动化研究所 A kind of remote user's attention appraisal procedure and system based on multi-modal interaction

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
左国才; 王海东; 吴小平; 苏秀芝: "Research on the application of deep-learning-based face recognition technology in learning effect evaluation", no. 03 *
赵帅; 黄晓婷: "Still on the way: the development and limitations of artificial intelligence in teaching", no. 04 *
陈靓影; 罗珍珍; 徐如意: "Intelligent analysis of students' learning interest in the classroom teaching environment", no. 08 *

Also Published As

Publication number Publication date
CN110991277B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
Mekyska et al. Identification and rating of developmental dysgraphia by handwriting analysis
CN110991277B (en) Multi-dimensional multi-task learning evaluation system based on deep learning
Bogomolov et al. Daily stress recognition from mobile phone data, weather conditions and individual traits
CN110969099A (en) Threshold value calculation method for myopia prevention and early warning linear distance and intelligent desk lamp
Abdic et al. Driver frustration detection from audio and video in the wild
US20160104385A1 (en) Behavior recognition and analysis device and methods employed thereof
Schwikert et al. Familiarity and recollection in heuristic decision making.
WO2022142614A1 (en) Dangerous driving early warning method and apparatus, computer device and storage medium
Stewart et al. Generalizability of Face-Based Mind Wandering Detection across Task Contexts.
CN111125657A (en) Control method and device for student to use electronic equipment and electronic equipment
CN115607156B (en) Multi-mode-based psychological cognitive screening evaluation method, system and storage medium
Ruensuk et al. How do you feel online: Exploiting smartphone sensors to detect transitory emotions during social media use
Geller How do you feel? Your computer knows
CN112614583A (en) Depression grade testing system
Shobana et al. I-Quiz: An Intelligent Assessment Tool for Non-Verbal Behaviour Detection.
CN115132027A (en) Intelligent programming learning system and method based on multi-mode deep learning
Yasser et al. Detection of confusion behavior using a facial expression based on different classification algorithms
Sanches et al. Using the eye gaze to predict document reading subjective understanding
CN112163462A (en) Face-based juvenile recognition method and device and computer equipment
Brogaard et al. Template tuning and graded consciousness
CN111507555B (en) Human body state detection method, classroom teaching quality evaluation method and related device
CN115601829A (en) Classroom analysis method based on human face and human body behavior and action image recognition
Roy et al. Students attention monitoring and alert system for online classes using face landmarks
Maruichi et al. Self-confidence estimation on vocabulary tests with stroke-level handwriting logs
Craye A framework for context-aware driver status assessment systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant