Nothing Special   »   [go: up one dir, main page]

CN115188051A - Object behavior-based online course recommendation method and system - Google Patents

Object behavior-based online course recommendation method and system Download PDF

Info

Publication number
CN115188051A
CN115188051A CN202210838319.6A CN202210838319A CN115188051A CN 115188051 A CN115188051 A CN 115188051A CN 202210838319 A CN202210838319 A CN 202210838319A CN 115188051 A CN115188051 A CN 115188051A
Authority
CN
China
Prior art keywords
course
target object
learning
expression
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210838319.6A
Other languages
Chinese (zh)
Inventor
余瑶
魏巍
刘珊珊
王可
胡术鄂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Education
Original Assignee
Chongqing University of Education
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Education filed Critical Chongqing University of Education
Priority to CN202210838319.6A priority Critical patent/CN115188051A/en
Publication of CN115188051A publication Critical patent/CN115188051A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Strategic Management (AREA)
  • Educational Administration (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Tourism & Hospitality (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Educational Technology (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an on-line course recommendation method and system based on object behaviors, which are used for carrying out expression recognition and human body action recognition on a target object in an image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object; determining a learning state score according to the expression recognition result and the human body action recognition result; and finally, calculating the interestingness of the target object to the current network learning course based on the learning state scores at different moments, and recommending the associated learning course to the target object according to the interestingness calculation result. According to the method and the system, observation and analysis of school managers or subject teachers can be replaced by computer vision technology, and then courses interested by students are recommended according to expression recognition results and behavior recognition results, so that talent intelligence of the students can be better excited, learning interest can be promoted, and logical thinking ability can be better exercised.

Description

Object behavior-based online course recommendation method and system
Technical Field
The present application relates to the field of computer technologies, and in particular, to an online course recommendation method and system based on object behaviors.
Background
In recent years, with the acceleration of the informatization pace of the class, school managers or discipline teachers generally directly observe the behavior and actions of students while they are listening to the class in order to fully exploit the talent and learning interests of the students, so as to determine whether the students are interested in a certain class or some classes. Meanwhile, appropriate interest courses are developed according to the interest degrees of students in the relevant courses, and the corresponding interest courses are recommended to the students, so that the talent intelligence of the students in the relevant courses is stimulated, the learning interest of the students is increased, and the students are assisted to improve the logical thinking ability.
However, with the rapid development of internet technology, the large-scale popularization of intelligent terminal devices such as smart phones and tablet computers, and the like, mobile network resources such as 4G and the like are no longer scarce, and digitization and mobile online learning become new ways for students to receive education. In some special cases (such as epidemic, high temperature, heavy rain, etc.), students at school may be exposed to online learning at home at any time. In the process of online learning, school managers or discipline teachers cannot directly observe behavior and actions of students, so that the students cannot really know the mastering conditions of the teaching contents. Therefore, at the moment, the behavior and the action of the students in the online learning process need to be analyzed, so that interest course recommendation can be conveniently carried out in the later period.
Disclosure of Invention
In view of the above-mentioned shortcomings in the prior art, the present application is directed to an online course recommendation method and system based on object behavior, which are used to solve the problems in the prior art.
To achieve the above and other related objects, the present application provides an online course recommendation method based on object behavior, comprising the steps of:
acquiring an image to be recognized, wherein the image to be recognized comprises an image shot when a target object is in network learning;
performing expression recognition and human body action recognition on a target object in the image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object;
determining the learning state score of the target object to the current online learning course according to the expression recognition result and the human body action recognition result of the target object;
and calculating the interest degree of the target object in the current network learning course based on the learning state scores of the target object in the current network learning course at a plurality of different moments, and recommending the learning course associated with the current network learning course to the target object based on the interest degree calculation result.
Optionally, the calculating of interest level of the target object in the current online learning course based on the learning state scores of the target object in the current online learning course at a plurality of different times includes:
Figure BDA0003746117760000021
in the formula, F m Representing the interest degree of the target object m in the current network learning course;
feame (m, t) represents a learning state score of the target object m at the t-th time;
n is a positive integer.
Optionally, the recommending a learning course associated with the current network learning course to the target object based on the interest calculation result includes:
calculating current web learning course k i And candidate course k j Similarity W (k) therebetween i ,k j ) The method comprises the following steps:
Figure BDA0003746117760000022
based on similarity W (k) i ,k j ) Calculating target object m and candidate course k j Fitness P (m, k) j ) The method comprises the following steps:
Figure BDA0003746117760000023
for fitness P (m, k) j ) Sorting, selecting the first H courses as courses to be recommended, and recommending the courses to the object m;
in the formula, N (k) i ) Indicates that the current network learning course k is selected i The set of objects of (1);
N(k j ) Indicates that candidate course k is selected j The set of objects of (1);
W(k i ,k j ) Representing a current web learning course k i And candidate course k j The similarity between them;
k i e.N (m) represents that the current network learning course k is selected i The target object m of (1);
Figure BDA0003746117760000024
representing the target object m in the current network learning course k i The curriculum score of (1);
P(m,k j ) Representing target object m and candidate course k j The fitness of (c).
Optionally, the process of recommending a learning course associated with the current network learning course to the target object based on the interestingness calculation result further includes:
calculating current web learning course k i In the candidate course k j Of (b), there are:
Figure BDA0003746117760000025
computing candidate courses k j Course content in current network learning course k i The frequency of occurrence of (a) is:
Figure BDA0003746117760000026
calculating current web learning course k i And candidate course k j The matching degree of (A) is as follows:
Figure BDA0003746117760000031
based on target object m and candidate course k j Fitness of (2) and course matching degree Q (k) i ,k j ) Determining the course recommendation degree associated with the target object m, including:
Rec(k j )=w 1 ×P(m,k j )+w 2 ×Q(k i ,k j );
based on course recommendation Rec (m, k) j ) For the fitness P (m, k) j ) Sorting, and selecting the first H courses as courses to be recommended;
wherein, confidence (k) i →k j ) Representing a current web learning course k i In the candidate course k j Frequency of occurrence in;
confidence(k j →k i ) Representing candidate courses k j The current network learning course k i Frequency of occurrence in;
Q(k i ,k j ) Representing a current web learning course k i And candidate course k j The matching degree of (2);
Rec(m,k j ) Representing a course recommendation degree associated with the target object m;
w 1 and w 2 Represents a weight coefficient, wherein w 1 ∈[0,1],w 2 ∈[0,1]And w is 1 +w 2 =1。
Optionally, the process of determining the learning state score of the target object for the current online learning course according to the expression recognition result and the human body action recognition result of the target object includes:
acquiring expression weight and human body action weight which are configured in advance or in real time;
calculating the facial expression score Face of the target object m at the t-th moment x (m, t) having:
Figure BDA0003746117760000032
calculating facial expression score Feame of the image to be recognized at the tth moment x (m, t) is:
Figure BDA0003746117760000033
calculating the human body motion fraction Face of the target object m at the t-th moment y (m, t) having:
Figure BDA0003746117760000034
calculating the face motion score Feame of the image to be recognized at the tth moment y (m, t) having:
Figure BDA0003746117760000035
calculating the learning state score of the target object m at the tth moment of the image to be recognized according to the weight configuration result, wherein the learning state score comprises the following steps:
Feame(m,t)=Feame x (m,t)+Feame y (m,t);
in the formula, E represents an expression recognition result, and F represents an action recognition result;
f(P(x t ) A calculation value representing an expression recognition result and a corresponding expression weight;
f(P(y t ) A calculated value representing the weight of the action and the other result and the corresponding action;
t represents the tth moment;
m represents a target object m in the image to be recognized;
x represents the xth personal facial expression;
y denotes the y-th individual's physical movement.
Optionally, the process of configuring the weight for the situation includes:
obtaining an expression recognition result, including: fear expression, no obvious expression, angry expression, sadness expression, confusion expression, aversion expression, light slight expression, happy expression, surprised expression, evasion degree expression and fatigue degree expression;
carrying out weight configuration on the obtained expressions, wherein the weight configuration comprises the following steps: the method comprises the steps of configuring the weight values of fear expressions and unobvious expressions to be 0, configuring the weight values of anger expressions, sadness expressions and puzzling expressions to be-1, configuring the weight values of disgust expressions and slight expressions to be-2, configuring the weight value of happy expressions to be 1, configuring the weight value of surprise expressions to be 2 and configuring the weight values of aversion expressions and fatigue expressions to be 3.
And/or the process of configuring the weight for the human body action comprises the following steps:
obtaining a human body action recognition result, comprising: groveling, cell phone playing, writing, lazy waist stretching, and hand lifting;
carrying out weight configuration on the acquired human body action, comprising the following steps: the weight value of the action of playing the mobile phone is set as-2, the weight value of the action of lying prone and stretching lazy waist is set as-1, the weight value of the action of writing is set as 1, and the weight value of the action of lifting hands is set as 2.
The present application further provides an online course recommendation system based on object behavior, the system comprising:
the system comprises an image acquisition module, a recognition module and a recognition module, wherein the image acquisition module is used for acquiring an image to be recognized, and the image to be recognized comprises an image shot when a target object is in network learning;
the image recognition module is used for carrying out expression recognition and human body action recognition on a target object in the image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object;
the learning state module is used for determining the learning state score of the target object to the current online learning course according to the expression recognition result and the human body action recognition result of the target object;
the interest degree module is used for calculating the interest degree of the target object in the current online learning course according to the learning state scores of the target object in the current online learning course at a plurality of different moments;
and the data recommendation module is used for recommending the learning course associated with the current network learning course to the target object according to the interestingness calculation result.
Optionally, the calculating, by the interestingness module, the interestingness of the target object in the current online learning course according to the learning state scores of the target object in the current online learning course at a plurality of different times includes:
Figure BDA0003746117760000051
in the formula, F m Representing the interest degree of the target object m in the current network learning course;
feame (m, t) represents the learning state score of the target object m at the t-th moment;
n is a positive integer.
Optionally, the process of recommending, by the data recommendation module, a learning course associated with the current network learning course to the target object according to the interestingness calculation result includes:
calculating current web learning course k i And candidate course k j Similarity W (k) therebetween i ,k j ) The method comprises the following steps:
Figure BDA0003746117760000052
based on similarity W (k) i ,k j ) Calculating target object m and candidate course k j Fitness P (m, k) j ) The method comprises the following steps:
Figure BDA0003746117760000053
for fitness P (m, k) j ) Sorting, selecting the first H courses as courses to be recommended, and recommending the courses to the object m;
in the formula, N (k) i ) Indicates that the current network learning course k is selected i The set of objects of (1);
N(k j ) Indicates that candidate course k is selected j The set of objects of (1);
W(k i ,k j ) Representing a current web learning course k i And candidate course k j The similarity between them;
k i e N (m) represents that the current network learning course k is selected i The target object m of (1);
Figure BDA0003746117760000054
representing the target object m in the current network learning course k i The class score of (2);
P(m,k j ) Representing target object m and candidate course k j The fitness of (c).
Optionally, the process of recommending, by the data recommendation module, a learning course associated with the current network learning course to the target object according to the interestingness calculation result further includes:
calculating current web learning course k i In the candidate course k j The frequency of occurrence of (a) is:
Figure BDA0003746117760000055
computing candidate courses k j Course content in current network learning course k i Of (b), there are:
Figure BDA0003746117760000056
calculating current web learning course k i And candidate course k j The collocation degree of (A) is as follows:
Figure BDA0003746117760000057
based on target object m and candidate course k j Fitness of (2) and course matching degree Q (k) i ,k j ) Determining the course recommendation degree associated with the target object m, including:
Rec(k j )=w 1 ×P(m,k j )+w 2 ×Q(k i ,k j );
based on course recommendation degree Rec (m, k) j ) For fitness P (m, k) j ) Sorting, and selecting the first H courses as courses to be recommended;
wherein, confidence (k) i →k j ) Representing a current web learning course k i In the candidate course k j Frequency of occurrence in;
confidence(k j →k i ) Representing candidate courses k j Course content in current network learning course k i Frequency of occurrence of (2);
Q(k i ,k j ) Representing a current web learning course k i And candidate course k j The matching degree of (2);
Rec(m,k j ) Representing a course recommendation degree associated with the target object m;
w 1 and w 2 Represents a weight coefficient, wherein w 1 ∈[0,1],w 2 ∈[0,1]And w is 1 +w 2 =1。
As described above, the present application provides an online course recommendation method and system based on object behaviors, which have the following beneficial effects: when a target object (such as a student) is in network learning, shooting the target object by using an image shooting device to obtain an image to be recognized; then, performing expression recognition and human body action recognition on a target object in the image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object; determining the learning state score of the target object to the current network learning course according to the expression recognition result and the human body action recognition result of the target object; and finally, calculating the interest degree of the target object in the current online learning course based on the learning state scores of the target object in the current online learning course at a plurality of different moments, and recommending the learning course associated with the current online learning course to the target object according to the interest degree calculation result. Therefore, the corresponding image can be obtained by shooting the emotional posture of the student during the online learning, and then the image is identified, so that the expression and the behavior of the student can be identified, and the interest degree of the student in the current online course is deduced based on the expression identification result and the behavior identification result. The method is equivalent to that the computer vision technology is used for replacing observation and analysis of school managers or discipline teachers, and then courses interested by students are recommended according to expression recognition results and behavior recognition results, so that the talent intelligence of the students can be better excited, the learning interest can be improved, and the logical thinking ability can be better exercised.
Drawings
FIG. 1 is a flowchart illustrating an online course recommendation method based on object behaviors according to an embodiment of the present application;
fig. 2 is a schematic diagram of a framework structure for performing expression recognition according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a distribution of facial muscles of a human face according to an embodiment of the present application;
FIG. 4 is a block diagram of a frame structure for behavior and action recognition according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a hand raising action provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of the lazy motion provided by an embodiment of the present application;
FIG. 7 is a schematic diagram illustrating actions taken to play a mobile phone according to an embodiment of the present application;
fig. 8 is a schematic hardware configuration diagram of an object behavior-based online course recommendation system according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It should be noted that the drawings provided in the present embodiment are only for illustrating the basic idea of the present application, and the drawings only show the components related to the present application rather than the number, shape and size of the components in actual implementation, and the type, quantity and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
Referring to fig. 1, the present embodiment provides an online course recommendation method based on object behaviors, which includes the following steps:
s110, images to be recognized are obtained, and the images to be recognized comprise images shot when the target object is in network learning. As an example, the target object in the present embodiment is a student. Wherein, each image to be identified can comprise one student or a plurality of students. For example, if the image to be recognized is captured in a school classroom, the image to be recognized may include one or more students; if the image to be recognized is taken at the student's home, the image to be recognized may be a photograph containing a student.
S120, performing expression recognition and human body action recognition on a target object in the image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object;
s130, determining the learning state score of the target object to the current online learning course according to the expression recognition result and the human body action recognition result of the target object;
and S140, calculating interest degree of the target object in the current online learning course based on the learning state scores of the target object in the current online learning course at a plurality of different moments, and recommending the learning course associated with the current online learning course to the target object based on the interest degree calculation result.
Therefore, in the embodiment, the corresponding image can be obtained by shooting the emotional posture of the student during the online learning, and then the image is identified, so that the expression and the behavior of the student can be identified, and the interest degree of the student in the current online course can be inferred based on the expression identification result and the behavior identification result. The method is equivalent to that the computer vision technology is used for replacing the observation and analysis of school managers or subject teachers, and then courses interested by students are recommended according to expression recognition results and behavior recognition results, so that the talent intelligence of the students can be better excited, the learning interest can be improved, and the logical thinking ability can be better exercised.
In an exemplary embodiment, as shown in fig. 2 and fig. 3, the process of performing expression recognition on the target object in the image to be recognized in step S120 and obtaining an expression recognition result includes: carrying out facial expression recognition on the target object in the image to be recognized to obtain the facial muscle distribution of the target object; one or more facial muscles are used as an action unit, and the action units of the same human face organ are combined to obtain expression actions of all human face organs; and then according to the expression action of the facial organ, determining the facial expression of the target object, thereby obtaining the facial expression recognition result of the target object. In this embodiment, the facial expression of the target object may include: fear, no obvious expression, anger, sadness, confusion, aversion, slight, happy, surprise, avoidance degree and fatigue degree. The framework structure in fig. 2 is composed of a core layer, an attribute layer, an action unit layer and a valence awakening layer, and is used for face recognition, action unit detection and valence awakening evaluation. Evaluating the valence awakening intensity through an action unit, dividing the face into areas such as eyebrows, eyes and a mouth by a face attribute layer, and training the face attribute layer and a core layer by using a convolutional neural network; and for the action unit layer, training the corresponding action unit area through the face to train the action unit layer, and after training, adding a full connection layer to evaluate the valence awakening value. In the training, celebA dataset is used for training a core layer and an attribute layer, the detection of action units is supervised by attribute convolutional neural network layers of different face parts during face attribute recognition, and finally, the action units and the attribute layer are used for recognition. When the facial expression is recognized, adam is selected as an optimizer, the negative gradient learning rate is set to be 0.01, and the gradient weight is set to be 0.9 for training. The muscle distribution of the face of the present embodiment is as shown in fig. 3, the present embodiment divides the face into 44 motion units (Action units, AUs) that are relatively independent and are mutually connected based on the facial anatomy principle, and local muscle actions of the face at a certain time can be represented by the AUs, for example, AU2 represents the upper part of the outer edge of the eyebrow, and AU3 represents the lower part of the eyebrow. When the human face has the expression change, each AU can present the expression of the human face by coding alone or in a combined form.
In an exemplary embodiment, as shown in fig. 4, in step S120, performing behavior and action recognition on the target object in the image to be recognized, and acquiring a behavior and action recognition result includes: extracting skeleton key points of the target object in the image to be identified to obtain all human skeleton key points of the target object; dividing the acquired human skeleton key points into two-arm key points, two-leg key points and head and neck key points, and generating human motion characteristic vectors of the target object according to the two-arm key points, the two-leg key points and the head and neck key points; performing human motion recognition according to the human motion characteristic vector of the target object to obtain a corresponding human motion recognition result; the human body action in this embodiment may include: groveling, cell phone playing, writing, lazy waist stretching, and hand lifting. Fig. 4 is a schematic diagram of a frame structure for recognizing a human motion of a target object in an image to be recognized according to this embodiment. In fig. 4, the data training module randomly orders and divides the acquired image data, then detects bone key points, extracts feature vectors from the detected bone key points, and then performs classification processing by using a support vector machine to finally form a learning behavior classification model; the behavior acquisition module acquires classroom image data through camera equipment arranged in a classroom and then detects bone key points of each study; in the behavior identification module, corresponding learning behaviors are respectively identified by using a bone key point relation feature extraction method and a bone key point direct coordinate method, wherein the bone key point relation feature extraction method is used for extracting feature vectors of learning bone key points detected in the behavior acquisition module, and then the extracted feature vectors are input into a model imported by the data training module, so that the learning behaviors are identified. The bone key point direct coordinate method identifies the learning behaviors by the coordinate relation of different behaviors of the student bone key points detected in the behavior acquisition module; the data analysis module collects the learning behavior data in the behavior recognition module, and then visually displays the learning behavior of the student in class through data analysis.
Specifically, the data training module firstly acquires a large amount of picture data to carry out training sample collection, and the image data of three learning behaviors actions of lying prone, playing mobile phones and writing are collected according to the standards of a plurality of quantities, a plurality of postures and a plurality of angles in the embodiment. Then, a program for automatically acquiring images through a camera is written by using an opencv library, and the program can quickly acquire image data and store the acquired image data in a classified manner. And then a plurality of students with different heights and body types are selected at random, and the program is operated to collect image data of different learning behaviors of the students respectively. Randomly ordering the collected image data of each category in the category, dividing 70% of the image data in each category into a training set, and taking the rest 30% of the image data as a test set. And then, detecting skeleton key points of the prepared image data by using an OpenPose model, extracting the characteristic vectors of the two arms and the head and neck, and finally, performing classification training by taking the extracted characteristic vectors as the input of a support vector machine.
The behavior acquisition module acquires images of the whole classroom through a camera arranged in the classroom, the acquisition interval is once every 5 seconds, and then the acquired images are subjected to bone key point detection. For three actions of lying on the stomach, playing a mobile phone and writing, a bone key point relation feature extraction method is used for extracting feature vectors, learning behaviors of all classmates in a classroom need to be identified in a classroom application scene, the body types and the heights of all students in the classroom are different, and the distances between the students and a camera are different, so that the extracted feature vectors can be ensured to be distinguished from the three learning behaviors in the process of extracting the key points of the three actions, a support vector machine can carry out effective classification, the feature vectors can be suitable for all the classmates, the detection invariability of the learning behaviors is ensured, meanwhile, in order to improve the classification accuracy, the bone key points of a human body are divided into three parts, namely two arms, two legs, a head and a neck, the three parts carry out feature vector construction aiming at respective bone key point change features, then the bone key points in the three parts are respectively subjected to feature extraction, the feature extraction problem of the three parts is divided into feature extraction problems of the three parts by the classification method, and the feature extraction problem is converted into the position relation between the key points, so that the feature vectors are extracted. And then, the extracted feature vectors are led into a model trained by a data training module, action classification is carried out, and which type of action can be determined according to a result label output by the model, so that the identification process of learning actions is completed. Because the key points of the two actions of lifting hands and stretching lazy waist have obvious geometric relationship, the learning behavior identification result is obtained by directly calculating and comparing the coordinate relationship between the skeleton key points.
The data analysis module collects and analyzes the recognition result data of the behavior recognition module, the learning behavior is divided into an aggressive learning behavior and a passive learning behavior, hands are lifted and written into the aggressive learning behavior, and the leaning, the playing of a mobile phone and the stretching of the waist are the passive learning behavior. The participation degree and the class listening conscience degree of the students in the class are objectively evaluated through the analysis of the behavior data. The times of the hands-up actions of students in class in the course of classroom teaching is important data for measuring the classroom participation of the students. The method for measuring the classroom participation of the students in the embodiment is to carry out a ratio of the total number of the hands-holding times of each student in the whole class process to the total number of the questioning times of the teacher in the class process to obtain the classroom participation of each student, and the classroom participation is obtained by dividing the sum of the participation of all students by the total number of the students. The student can embody through the depolarizing learning action on the student class in the fidelity degree of listening to the course in-process, and this embodiment carries out data analysis to lying prone, playing the cell-phone, stretching lazy waist three kinds of depolarizing learning actions and weighs the student and listen to the class fidelity degree on class. In this embodiment, image data is collected every 5 seconds, 30 seconds are taken as a determination interval, if there is 3 depolarizing learning behaviors of a student in 30 seconds, it is determined that the student does not attend the class seriously, the number of students who do not attend the class seriously is subtracted from the total number of students, and the number of students who attend the class seriously is divided by the total number of students, so that the class seriously is the whole class attending degree of the classroom. After the course is finished, the times, participation degree, class listening fidelity, class overall participation degree and class listening fidelity of each classmate in the whole course can be respectively displayed visually.
As an example, as shown in fig. 5, when the key point of the left hand of the human body is higher than the key point of the nose, it can be determined that the action at this time is to lift the left hand, and similarly, it can be determined that the right hand is lifted. The specific calculation method is as follows: and taking the upper left corner of the picture as a coordinate origin, and judging that the action at the moment is left-handed if the vertical coordinate of the hand key point H is smaller than the vertical coordinate of the point N and the vertical coordinate of the right hand key point is larger than N under the condition that the left hand key point exists in the picture.
According to the above description, as another example, as shown in fig. 6, when the key points of the left and right hands of the human body are both higher than the key point of the nose, it is possible to specify that the motion at that time is stretching into the lazy waist. The specific calculation method is as follows: and taking the upper left corner of the picture as a coordinate origin, and judging that the motion at the moment is lazy waist stretching when the vertical coordinates of the hand key points H1 and H2 are both smaller than N1 under the condition that the left hand key point and the right hand key point exist in the picture.
According to the above description, as another example, as shown in fig. 7, a specific calculation method of the feature vector when playing the mobile phone is as follows:
Figure BDA0003746117760000101
using vectors for key points between the neck and head
Figure BDA0003746117760000102
To express that the vector direction is C1 pointing to N1, the vector
Figure BDA0003746117760000103
A vector between the left shoulder and right shoulder keypoints is represented. Dot-multiplying two vectors and dividing by
Figure BDA0003746117760000104
Can be obtained
Figure BDA0003746117760000105
In that
Figure BDA0003746117760000106
The length of the projection in the direction, and dividing this length by the length of the projection
Figure BDA0003746117760000109
Die of is obtained
Figure BDA0003746117760000107
In that
Figure BDA0003746117760000108
The ratio Y1 of the projection length in the direction to the length of S1S 2. Since the vector is directional, the value of Y1 is negative when N1 is below C1. When N1 is above C1, the value of Y1 is positive. So that the two motion characteristics of head-down and head-up can be distinguished.
In an exemplary embodiment, the process of determining the learning state score of the target object for the current network learning course according to the expression recognition result and the human body action recognition result of the target object in step S130 includes:
acquiring expression weight and human body action weight which are configured in advance or in real time; as an example, the process of configuring weights for expressions in this embodiment may include: obtaining an expression recognition result, including: fear expression, unobvious expression, angry expression, sadness expression, puzzling expression, disgust expression, slight expression, happy expression, surprised expression, avoidance expression and fatigue expression; carrying out weight configuration on the obtained expressions, wherein the weight configuration comprises the following steps: the method comprises the steps of configuring the weight values of fear expressions and unobvious expressions to be 0, configuring the weight values of anger expressions, sadness expressions and puzzling expressions to be-1, configuring the weight values of disgust expressions and slight expressions to be-2, configuring the weight value of happy expressions to be 1, configuring the weight value of surprise expressions to be 2 and configuring the weight values of aversion expressions and fatigue expressions to be 3. In addition, the process of configuring the weight for the human body action in the embodiment may include: obtaining a human body action recognition result, comprising: groveling, cell phone playing, writing, lazy waist stretching, and hand lifting; carrying out weight configuration on the acquired human body action, comprising the following steps: the weight value of the action of playing the mobile phone is set as-2, the weight value of the action of lying prone and stretching lazy waist is set as-1, the weight value of the action of writing is set as 1, and the weight value of the action of lifting hands is set as 2.
Calculating the facial expression score Face of the target object m at the t-th moment x (m, t) having:
Figure BDA0003746117760000111
calculating facial expression score Feame of the image to be recognized at the tth moment x (m, t) is:
Figure BDA0003746117760000112
calculating the human motion score Face of the target object m at the t-th moment y (m, t) having:
Figure BDA0003746117760000113
calculating the face motion score Feame of the image to be recognized at the tth moment y (m, t) is:
Figure BDA0003746117760000114
calculating the learning state score of the target object m at the tth moment of the image to be recognized according to the weight configuration result, wherein the learning state score comprises the following steps:
Feame(m,t)=Feame x (m,t)+Feame y (m,t);
in the formula, E represents an expression recognition result, and F represents an action recognition result;
f(P(x t ) A calculation value representing an expression recognition result and a corresponding expression weight;
f(P(y t ) A calculated value representing the weight of the action and the other result and the corresponding action;
t represents the tth time;
m represents a target object m in the image to be recognized;
x represents the xth personal facial expression;
y denotes the y-th individual's physical movement.
According to the above description, in an exemplary embodiment, the step S140 of calculating the interest level of the target object in the current online learning course based on the learning state scores of the target object in the current online learning course at a plurality of different time instants includes:
Figure BDA0003746117760000121
in the formula, F m Representing the interest degree of the target object m in the current network learning course;
feame (m, t) represents a learning state score of the target object m at the t-th time;
n is a positive integer.
In light of the above description, in an exemplary embodiment, the process of recommending a learning course associated with the current network learning course to the target object based on the interest calculation result in step S140 includes:
calculating current web learning course k i And candidate course k j Similarity W (k) therebetween i ,k j ) The method comprises the following steps:
Figure BDA0003746117760000122
based on similarity W (k) i ,k j ) Calculating target object m and candidate course k j Fitness P (m, k) j ) The method comprises the following steps:
Figure BDA0003746117760000123
for fitness P (m, k) j ) Sorting, selecting the first H courses as courses to be recommended, and recommending the courses to the object m;
in the formula, N (k) i ) Indicates that the current network learning course k is selected i The set of objects of (1);
N(k j ) Indicates that candidate course k is selected j In (1) pairA set of images;
W(k i ,k j ) Representing a current web learning course k i And candidate course k j The similarity between them;
k i e.N (m) represents that the current network learning course k is selected i The target object m of (1);
Figure BDA0003746117760000124
representing the target object m in the current network learning course k i The curriculum score of (1);
P(m,k j ) Representing target object m and candidate course k j The fitness of (c).
In accordance with the above-mentioned statement, in an exemplary embodiment, the process of recommending a learning course associated with the current network learning course to the target object based on the interestingness calculation result further includes:
calculating current web learning course k i In the candidate course k j Of (b), there are:
Figure BDA0003746117760000125
computing candidate courses k j Course content in current network learning course k i Of (b), there are:
Figure BDA0003746117760000126
calculating current web learning course k i And candidate course k j The matching degree of (A) is as follows:
Figure BDA0003746117760000131
based on target object m and candidate course k j Fitness of (2) and course matching degree Q (k) i ,k j ) Determining the course recommendation degree associated with the target object m, including:
Rec(k j )=w 1 ×P(m,k j )+w 2 ×Q(k i ,k j );
based on course recommendation Rec (m, k) j ) For the fitness P (m, k) j ) Sorting, and selecting the first H courses as courses to be recommended;
wherein, confidence (k) i →k j ) Representing a current web learning course k i In the candidate course k j Frequency of occurrence of (2);
confidence(k j →k i ) Representing candidate course k j Course content in current network learning course k i Frequency of occurrence in;
Q(k i ,k j ) Representing a current web learning course k i And candidate course k j The degree of collocation;
Rec(m,k j ) Representing a course recommendation degree associated with the target object m;
w 1 and w 2 Represents a weight coefficient, wherein w 1 ∈[0,1],w 2 ∈[0,1]And w is 1 +w 2 =1。
In summary, the present application provides an online course recommendation method based on object behaviors, when a target object (e.g., a student) is in online learning, an image capturing device is used to capture the target object to obtain an image to be recognized; then, performing expression recognition and human body action recognition on a target object in the image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object; determining the learning state score of the target object to the current network learning course according to the expression recognition result and the human body action recognition result of the target object; and finally, calculating the interest degree of the target object in the current online learning course based on the learning state scores of the target object in the current online learning course at a plurality of different moments, and recommending the learning course associated with the current online learning course to the target object according to the interest degree calculation result. Therefore, the method can obtain the corresponding image by shooting the emotional posture of the student during the online learning, then identifies the image, can identify the expression and the behavior of the student, and further deduces the interest degree of the student in the current online course based on the expression identification result and the behavior identification result. The method is equivalent to that the computer vision technology is used for replacing observation and analysis of school managers or discipline teachers, and then courses interested by students are recommended according to expression recognition results and behavior recognition results, so that intelligence of the students can be better excited, learning interest can be improved, and logical thinking ability can be better exercised.
As shown in fig. 8, the present application further provides an online course recommendation system based on object behaviors, the system includes:
the image acquisition module 810 is configured to acquire an image to be recognized, where the image to be recognized includes an image that is captured when a target object is in network learning. As an example, the target object in the present embodiment is a student. Wherein, each image to be identified can comprise a student or a plurality of students. For example, the image to be recognized is captured in a classroom of a school, and the image to be recognized may include one or more students; if the image to be recognized is taken at the student's home, the image to be recognized may be a photograph containing a student.
The image recognition module 820 is configured to perform expression recognition and human body motion recognition on the target object in the image to be recognized, so as to obtain an expression recognition result and a human body motion recognition result of the target object. For example, the image recognition module 820 performs expression recognition on the target object in the image to be recognized, and a process of obtaining an expression recognition result refers to the method embodiment corresponding to fig. 2 and fig. 3, which is not described herein again. The image recognition module 820 performs behavior recognition on the target object in the image to be recognized, and a process of obtaining a behavior recognition result refers to the method embodiments shown in fig. 4 to fig. 7, which is not described herein again.
A learning state module 830, configured to determine a learning state score of the target object for the current online learning course according to an expression recognition result and a human body motion recognition result of the target object;
the interest degree module 840 is used for calculating the interest degree of the target object in the current online learning course according to the learning state scores of the target object in the current online learning course at a plurality of different moments;
and the data recommendation module 850 is used for recommending the learning course associated with the current network learning course to the target object according to the interestingness calculation result.
Therefore, according to the embodiment, the corresponding image can be obtained by shooting the emotional posture of the student during the online learning, and then the image is identified, so that the expression and the behavior of the student can be identified, and the interest degree of the student in the current online course is deduced based on the expression identification result and the behavior identification result. In other words, the computer vision technology is used to replace the observation and analysis of school managers or discipline teachers, and then the courses interested by the students are recommended according to the expression recognition results and behavior recognition results, so that the talent intelligence of the students can be better excited, the learning interest can be improved, and the logical thinking ability can be better exercised.
In an exemplary embodiment, the process of the learning state module 830 determining the learning state score of the target object for the current online learning course according to the expression recognition result and the human body action recognition result of the target object includes:
and obtaining the expression weight and the human body action weight which are configured in advance or in real time. As an example, the process of configuring weights for expressions in this embodiment may include: obtaining an expression recognition result, including: fear expression, unobvious expression, angry expression, sadness expression, puzzling expression, disgust expression, slight expression, happy expression, surprised expression, avoidance expression and fatigue expression; carrying out weight configuration on the obtained expressions, wherein the weight configuration comprises the following steps: the weighted values of the fear expression and the unobvious expression are configured to be 0, the weighted values of the angry expression, the sadness expression and the puzzling expression are configured to be-1, the weighted values of the aversion expression and the slight expression are configured to be-2, the weighted value of the happy expression is configured to be 1, the weighted value of the surprise expression is configured to be 2, and the weighted values of the aversion degree expression and the fatigue degree expression are configured to be 3. In addition, the process of configuring the weight for the human body action in the embodiment may include: obtaining a human body action recognition result, comprising: groveling, cell phone playing, writing, lazy waist stretching, and hand lifting; carrying out weight configuration on the acquired human body action, comprising the following steps: the weight value of the action of playing the mobile phone is set as-2, the weight value of the action of lying prone and stretching lazy waist is set as-1, the weight value of the action of writing is set as 1, and the weight value of the action of lifting hands is set as 2.
Calculating the facial expression score Face of the target object m at the t-th moment x (m, t) is:
Figure BDA0003746117760000151
calculating the facial expression score Feame of the image to be recognized at the t moment x (m, t) is:
Figure BDA0003746117760000152
calculating the human motion score Face of the target object m at the t-th moment y (m, t) is:
Figure BDA0003746117760000153
calculating the face motion score Feame of the image to be recognized at the tth moment y (m, t) having:
Figure BDA0003746117760000154
calculating the learning state score of the target object m at the tth moment of the image to be recognized according to the weight configuration result, wherein the learning state score comprises the following steps:
Feame(m,t)=Feame x (m,t)+Feame y (m,t);
in the formula, E represents an expression recognition result, and F represents an action recognition result;
f(P(x t ) A calculation value representing the expression recognition result and the corresponding expression weight;
f(P(y t ) A calculated value representing the weight of the action and the other result and the corresponding action;
t represents the tth moment;
m represents a target object m in the image to be recognized;
x represents the xth personal facial expression;
y denotes the y-th individual's physical movement.
In an exemplary embodiment, the calculating the interest level of the target object in the current online learning course according to the learning state scores of the target object in the current online learning course at a plurality of different time instants by the interest level module 840 includes:
Figure BDA0003746117760000155
in the formula, F m Representing the interest degree of the target object m in the current network learning course;
feame (m, t) represents a learning state score of the target object m at the t-th time;
n is a positive integer.
In light of the above description, in an exemplary embodiment, the process of recommending, by the data recommendation module 850, the learning course associated with the current network learning course to the target object according to the interestingness calculation result includes:
calculating current web learning course k i And candidate course k j Similarity W (k) therebetween i ,k j ) The method comprises the following steps:
Figure BDA0003746117760000161
based on similarity W (k) i ,k j ) Calculating target object m and candidate course k j Fitness P (m, k) j ) The method comprises the following steps:
Figure BDA0003746117760000162
for fitness P (m, k) j ) Sorting, selecting the first H courses as courses to be recommended, and recommending the courses to the object m;
in the formula, N (k) i ) Indicates that the current network learning course k is selected i The set of objects of (1);
N(k j ) Indicates that candidate course k is selected j The set of objects of (1);
W(k i ,k j ) Representing a current web learning course k i And candidate course k j The similarity between them;
k i e N (m) represents that the current network learning course k is selected i The target object m of (1);
Figure BDA0003746117760000163
representing the target object m in the current network learning course k i The curriculum score of (1);
P(m,k j ) Representing target object m and candidate course k j The fitness of (c).
Further, in an exemplary embodiment, the process of recommending, by the data recommending module 850, the learning course associated with the current network learning course to the target object according to the interestingness calculation result further includes:
calculating current web learning course k i In the candidate course k j Of (b), there are:
Figure BDA0003746117760000164
computing candidate courses k j Course content in current network learning coursek i Of (b), there are:
Figure BDA0003746117760000165
calculating current web learning course k i And candidate course k j The collocation degree of (A) is as follows:
Figure BDA0003746117760000166
based on target object m and candidate course k j Fitness of (2) and course matching degree Q (k) i ,k j ) Determining the course recommendation degree associated with the target object m, including:
Rec(k j )=w 1 ×P(m,k j )+w 2 ×Q(k i ,k j );
based on course recommendation Rec (m, k) j ) For fitness P (m, k) j ) Sorting, and selecting the first H courses as courses to be recommended;
wherein, confidence (k) i →k j ) Representing a current web learning course k i In the candidate course k j Frequency of occurrence in;
confidence(k j →k i ) Representing candidate course k j Course content in current network learning course k i Frequency of occurrence of (2);
Q(k i ,k j ) Representing a current web learning course k i And candidate course k j The degree of collocation;
Rec(m,k j ) Representing a course recommendation degree associated with the target object m;
w 1 and w 2 Represents a weight coefficient, wherein w 1 ∈[0,1],w 2 ∈[0,1]And w is a 1 +w 2 =1。
In summary, the present application provides an online course recommendation system based on object behaviors, when a target object (e.g., a student) is in online learning, an image capturing device is used to capture the target object to obtain an image to be recognized; then, performing expression recognition and human body action recognition on a target object in the image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object; determining the learning state score of the target object to the current network learning course according to the expression recognition result and the human body action recognition result of the target object; and finally, calculating the interest degree of the target object in the current online learning course based on the learning state scores of the target object in the current online learning course at a plurality of different moments, and recommending the learning course associated with the current online learning course to the target object according to the interest degree calculation result. Therefore, the system can obtain the corresponding image by shooting the emotional posture of the student during the online learning, then identify the image, identify the expression and the behavior of the student, and further deduce the interest degree of the student in the current online course based on the expression identification result and the behavior identification result. The system is equivalent to that the system utilizes the computer vision technology to replace the observation and analysis of school managers or discipline teachers, and then recommends the courses interested by students according to the expression recognition result and the behavior recognition result, so that the talent intelligence of the students can be better excited, the learning interest can be improved, and the logical thinking ability can be better exercised.
It should be noted that the online course recommendation system based on object behaviors provided in the foregoing embodiment and the online course recommendation method based on object behaviors provided in the foregoing embodiment belong to the same concept, and specific ways of executing operations by each module have been described in detail in the method embodiments, and are not described herein again. In practical applications, the object behavior-based online course recommendation system provided in the above embodiment may distribute the above functions to different function modules according to needs, that is, divide the internal structure of the system into different function modules to complete all or part of the above described functions, which is not limited herein. Therefore, the application effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the present application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (10)

1. An online course recommendation method based on object behavior, the method comprising the steps of:
acquiring an image to be recognized, wherein the image to be recognized comprises an image shot when a target object is in network learning;
performing expression recognition and human body action recognition on a target object in the image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object;
determining the learning state score of the target object to the current network learning course according to the expression recognition result and the human body action recognition result of the target object;
calculating interest degree of the target object in the current online learning course based on the learning state scores of the target object in the current online learning course at a plurality of different moments, and recommending the learning course associated with the current online learning course to the target object based on the interest degree calculation result.
2. The method of claim 1, wherein the step of calculating interest level of the target object in the current e-learning course based on the learning status scores of the target object in the current e-learning course at different times comprises:
Figure FDA0003746117750000011
in the formula, F m Representing the interest degree of the target object m in the current network learning course;
feame (m, t) represents the learning state score of the target object m at the t-th moment;
n is a positive integer.
3. The method of claim 2, wherein the step of recommending a learning course associated with a current web learning course to the target object based on the interestingness calculation result comprises:
calculating current web learning course k i And candidate course k j Similarity W (k) therebetween i ,k j ) The method comprises the following steps:
Figure FDA0003746117750000012
based on similarity W (k) i ,k j ) Calculating target object m and candidate course k j Fitness P (m, k) j ) The method comprises the following steps:
Figure FDA0003746117750000013
for fitness P (m, k) j ) Sorting, selecting the first H courses as courses to be recommended, and recommending the courses to the object m;
in the formula, N (k) i ) Indicates that the current network learning course k is selected i The set of objects of (1);
N(k j ) Indicates that candidate course k is selected j The set of objects of (1);
W(k i ,k j ) Representing a current web learning course k i And candidate course k j The similarity between them;
k i e.N (m) represents that the current network learning course k is selected i The target object m of (2);
Figure FDA0003746117750000021
representing the target object m in the current network learning course k i The curriculum score of (1);
P(m,k j ) Representing target object m and candidate course k j The fitness of (c).
4. The object behavior-based online course recommendation method according to claim 3, wherein the process of recommending a learning course associated with the current network learning course to the target object based on the interestingness calculation result further comprises:
calculating current web learning course k i In the candidate course k j The frequency of occurrence of (a) is:
Figure FDA0003746117750000022
computing candidate courses k j Course content in current network learning course k i The frequency of occurrence of (a) is:
Figure FDA0003746117750000023
calculating current web learning course k i And candidate course k j The collocation degree of (A) is as follows:
Figure FDA0003746117750000024
based on target object m and candidate course k j Fitness of (2) and course matching degree Q (k) i ,k j ) Determining the course recommendation degree associated with the target object m, including:
Rec(k j )=w 1 ×P(m,k j )+w 2 ×Q(k i ,k j );
based on course recommendation Rec (m, k) j ) For fitness P (m, k) j ) Sorting, and selecting the first H courses as courses to be recommended;
wherein, confidence (k) i →k j ) Representing a current web learning course k i In the candidate course k j Frequency of occurrence in;
confidence(k j →k i ) Representing candidate courses k j Course content in current network learning course k i Frequency of occurrence of (2);
Q(k i ,k j ) Representing a current web learning course k i And candidate course k j The degree of collocation;
Rec(m,k j ) Representing a course recommendation degree associated with the target object m;
w 1 and w 2 Represents a weight coefficient, wherein w 1 ∈[0,1],w 2 ∈[0,1]And w is 1 +w 2 =1。
5. The method for recommending online courses based on object behaviors as claimed in any of claims 2 to 4, wherein the step of determining the learning state score of the target object for the current online learning course according to the expression recognition result and the human body movement recognition result of the target object comprises:
acquiring expression weight and human body action weight which are configured in advance or in real time;
calculating the facial expression score Face of the target object m at the t-th moment x (m, t) is:
Figure FDA0003746117750000031
calculating facial expression score Feame of the image to be recognized at the tth moment x (m, t) having:
Figure FDA0003746117750000032
calculating the human body motion fraction Face of the target object m at the t-th moment y (m, t) is:
Figure FDA0003746117750000033
calculating the face motion score Feame of the image to be recognized at the tth moment y (m, t) having:
Figure FDA0003746117750000034
calculating the learning state score of the target object m at the tth moment of the image to be recognized according to the weight configuration result, wherein the learning state score comprises the following steps:
Feame(m,t)=Feame x (m,t)+Feame y (m,t);
in the formula, E represents an expression recognition result, and F represents an action recognition result;
f(P(x t ) A calculation value representing an expression recognition result and a corresponding expression weight;
f(P(y t ) A calculated value representing the weight of the action and the other result and the corresponding action;
t represents the tth moment;
m represents a target object m in the image to be recognized;
x represents the xth personal facial expression;
y denotes the y-th personal body motion.
6. The method of claim 5, wherein the process of assigning weights to situations comprises:
obtaining an expression recognition result, including: fear expression, no obvious expression, angry expression, sadness expression, confusion expression, aversion expression, light slight expression, happy expression, surprised expression, evasion degree expression and fatigue degree expression;
carrying out weight configuration on the obtained expressions, wherein the weight configuration comprises the following steps: the method comprises the steps of configuring the weight values of fear expressions and unobvious expressions to be 0, configuring the weight values of anger expressions, sadness expressions and puzzling expressions to be-1, configuring the weight values of disgust expressions and slight expressions to be-2, configuring the weight value of happy expressions to be 1, configuring the weight value of surprise expressions to be 2 and configuring the weight values of aversion expressions and fatigue expressions to be 3.
And/or the process of configuring the weight for the human body action comprises the following steps:
obtaining a human body action recognition result, comprising: groveling, cell phone playing, writing, lazy waist stretching, and hand lifting;
carrying out weight configuration on the acquired human body action, comprising the following steps: the weight value of the action of playing the mobile phone is configured to be-2, the weight values of the actions of lying on the stomach and stretching over the waist are configured to be-1, the weight value of the action of writing is configured to be 1, and the weight value of the action of lifting hands is configured to be 2.
7. An on-line course recommendation system based on object behavior, the system comprising:
the system comprises an image acquisition module, a recognition module and a recognition module, wherein the image acquisition module is used for acquiring an image to be recognized, and the image to be recognized comprises an image shot when a target object is in network learning;
the image recognition module is used for carrying out expression recognition and human body action recognition on a target object in the image to be recognized to obtain an expression recognition result and a human body action recognition result of the target object;
the learning state module is used for determining the learning state score of the target object to the current online learning course according to the expression recognition result and the human body action recognition result of the target object;
the interest degree module is used for calculating the interest degree of the target object in the current online learning course according to the learning state scores of the target object in the current online learning course at a plurality of different moments;
and the data recommendation module is used for recommending the learning course associated with the current network learning course to the target object according to the interestingness calculation result.
8. The object behavior-based online course recommendation system of claim 7, wherein the interest level module calculates the interest level of the target object in the current online learning course according to the learning state scores of the target object in the current online learning course at a plurality of different moments, and comprises:
Figure FDA0003746117750000041
in the formula, F m Representing the interest degree of the target object m in the current network learning course;
feame (m, t) represents a learning state score of the target object m at the t-th time;
n is a positive integer.
9. The object behavior-based online course recommendation system of claim 8, wherein the data recommendation module recommending the learning course associated with the current network learning course to the target object according to the interestingness calculation result comprises:
calculating current web learning course k i And candidate course k j Similarity W (k) therebetween i ,k j ) The method comprises the following steps:
Figure FDA0003746117750000051
based on similarity W (k) i ,k j ) Calculating target object m and candidate course k j Fitness P (m, k) j ) The method comprises the following steps:
Figure FDA0003746117750000052
for the fitness P (m, k) j ) Sorting, selecting the first H courses as courses to be recommended, and recommending the courses to the object m;
in the formula, N (k) i ) Indicates that the current network learning course k is selected i The set of objects of (1);
N(k j ) Indicates that candidate course k is selected j The set of objects of (a);
W(k i ,k j ) Representing a current web learning course k i And candidate course k j The similarity between them;
k i e.N (m) represents that the current network learning course k is selected i The target object m of (1);
Figure FDA0003746117750000053
representing the target object m in the current network learning course k i The curriculum score of (1);
P(m,k j ) Representing target object m and candidate course k j The fitness of (c).
10. The object behavior-based online course recommendation system of claim 9, wherein the process of the data recommendation module recommending the learning course associated with the current network learning course to the target object according to the interestingness calculation result further comprises:
calculating current web learning course k i In the candidate course k j The frequency of occurrence of (a) is:
Figure FDA0003746117750000054
computing candidate courses k j Course content in current network learning course k i The frequency of occurrence of (a) is:
Figure FDA0003746117750000055
calculating current web learning course k i And candidate course k j The matching degree of (A) is as follows:
Figure FDA0003746117750000056
based on target object m and candidate course k j Fitness of (2) and course matching degree Q (k) i ,k j ) Determining the course recommendation degree associated with the target object m, including:
Rec(k j )=w 1 ×P(m,k j )+w 2 ×Q(k i ,k j );
based on course recommendation Rec (m, k) j ) For fitness P (m, k) j ) Sorting, and selecting the first H courses as courses to be recommended;
wherein, confidence (k) i →k j ) Representing a current web learning course k i In the candidate course k j Frequency of occurrence in;
confidence(k j →k i ) Representing candidate courses k j The current network learning course k i Frequency of occurrence in;
Q(k i ,k j ) Representing a current web learning course k i And candidate course k j The degree of collocation;
Rec(m,k j ) Representing a course recommendation degree associated with the target object m;
w 1 and w 2 Represents a weight coefficient, wherein w 1 ∈[0,1],w 2 ∈[0,1]And w is 1 +w 2 =1。
CN202210838319.6A 2022-07-14 2022-07-14 Object behavior-based online course recommendation method and system Withdrawn CN115188051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210838319.6A CN115188051A (en) 2022-07-14 2022-07-14 Object behavior-based online course recommendation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210838319.6A CN115188051A (en) 2022-07-14 2022-07-14 Object behavior-based online course recommendation method and system

Publications (1)

Publication Number Publication Date
CN115188051A true CN115188051A (en) 2022-10-14

Family

ID=83518669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210838319.6A Withdrawn CN115188051A (en) 2022-07-14 2022-07-14 Object behavior-based online course recommendation method and system

Country Status (1)

Country Link
CN (1) CN115188051A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116561421A (en) * 2023-05-11 2023-08-08 广东工贸职业技术学院 Student course recommendation method, device and equipment based on face recognition

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116561421A (en) * 2023-05-11 2023-08-08 广东工贸职业技术学院 Student course recommendation method, device and equipment based on face recognition

Similar Documents

Publication Publication Date Title
Jain et al. Three-dimensional CNN-inspired deep learning architecture for Yoga pose recognition in the real-world environment
Jalal et al. Students’ behavior mining in e-learning environment using cognitive processes with information technologies
CN108734104B (en) Body-building action error correction method and system based on deep learning image recognition
CN109635727A (en) A kind of facial expression recognizing method and device
CN109145871B (en) Psychological behavior recognition method, device and storage medium
CN109885595A (en) Course recommended method, device, equipment and storage medium based on artificial intelligence
CN112287891A (en) Method for evaluating learning concentration through video based on expression and behavior feature extraction
CN109902912A (en) A kind of personalized image aesthetic evaluation method based on character trait
CN110163567A (en) Classroom roll calling system based on multitask concatenated convolutional neural network
CN110363129A (en) Autism early screening system based on smile normal form and audio-video behavioural analysis
CN111814718A (en) Attention detection method integrating multiple discrimination technologies
Zhou et al. Deep learning-based emotion recognition from real-time videos
Villegas-Ch et al. Identification of emotions from facial gestures in a teaching environment with the use of machine learning techniques
Enadula et al. Recognition of student emotions in an online education system
CN114783043B (en) Child behavior track positioning method and system
CN115188051A (en) Object behavior-based online course recommendation method and system
Guo et al. PhyCoVIS: A visual analytic tool of physical coordination for cheer and dance training
Aly et al. Enhancing Facial Expression Recognition System in Online Learning Context Using Efficient Deep Learning Model.
Ye et al. An action analysis algorithm for teachers based on human pose estimation
Ahmad et al. Comparative studies of facial emotion detection in online learning
CN115546893A (en) Evaluation visualization method and system for cheering gym video
Trabelsi et al. Behavioral-based real-time cheating detection in academic exams using deep learning techniques
Yang et al. Study on learner's facial expression recognition
CN117036877B (en) Emotion recognition method and system for facial expression and gesture fusion
Ghongane Hierarchical classification of yoga poses using deep learning techniques

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221014

WW01 Invention patent application withdrawn after publication