
CN113076736A - Multidimensional text scoring method and device, computer equipment and storage medium - Google Patents

Multidimensional text scoring method and device, computer equipment and storage medium

Info

Publication number
CN113076736A
CN113076736A
Authority
CN
China
Prior art keywords
text
scoring
scored
dimension
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110481306.3A
Other languages
Chinese (zh)
Inventor
李佳琳
王健宗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202110481306.3A priority Critical patent/CN113076736A/en
Publication of CN113076736A publication Critical patent/CN113076736A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/205 Parsing
    • G06F40/211 Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 Operations research, analysis or management
    • G06Q10/0639 Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/148 Segmentation of character regions
    • G06V30/153 Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Educational Administration (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Strategic Management (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Development Economics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention discloses a multidimensional text scoring method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring a text to be scored; then converting the text to be scored into a text characteristic vector according to a preset text conversion network model; scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension; and finally, determining the target score of the text to be scored according to the dimension score. In the embodiment of the invention, the LSTM scoring model with a plurality of scoring dimensions is used for scoring the text with corresponding dimensions respectively to obtain the scores of the text to be scored under the plurality of scoring dimensions, so that the scheme can rapidly score the text from the plurality of dimensions without consuming manpower.

Description

Multidimensional text scoring method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a multidimensional text scoring method, a multidimensional text scoring device, computer equipment and a storage medium.
Background
With the progress of science and technology, more and more tasks that used to be completed manually are now completed by machines, and as people pay increasing attention to education, more and more students need to improve their writing level by practicing composition.
When articles are scored, on the one hand, most article scoring is still based on manual review: reviewing many articles and scoring each dimension of each article is time-consuming, so the review cycle is long. On the other hand, most research work in the field of automatic article scoring focuses on the overall score of an article rather than on how individual features of the article contribute to that score, so authors and teachers only receive a score for the whole article and still have no clear idea of which aspects of the article need improvement, which makes it difficult to help authors make progress in writing.
Currently, there is no method that can quickly score articles from multiple dimensions.
Disclosure of Invention
The embodiment of the invention provides a multidimensional text scoring method, a multidimensional text scoring device, computer equipment and a storage medium, which can rapidly score texts from multiple dimensions.
In a first aspect, an embodiment of the present invention provides a multidimensional text scoring method, which includes:
acquiring a text to be scored;
converting the text to be scored into a text characteristic vector according to a preset text conversion network model;
scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension;
and determining the target score of the text to be scored according to the dimension score.
In a second aspect, the embodiment of the present invention further provides a multidimensional text scoring apparatus, which includes a unit for executing the above method.
In a third aspect, an embodiment of the present invention further provides a computer device, which includes a memory and a processor, where the memory stores a computer program, and the processor implements the above method when executing the computer program.
In a fourth aspect, the present invention also provides a computer-readable storage medium, which stores a computer program, the computer program including program instructions, which when executed by a processor, implement the above method.
The embodiment of the invention provides a multidimensional text scoring method and device, computer equipment and a storage medium. Wherein the method comprises the following steps: acquiring a text to be scored; then converting the text to be scored into a text characteristic vector according to a preset text conversion network model; scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension; and finally, determining the target score of the text to be scored according to the dimension score. In the embodiment of the invention, the LSTM scoring model with a plurality of scoring dimensions is used for scoring the text with corresponding dimensions respectively to obtain the scores of the text to be scored under the plurality of scoring dimensions, so that the scheme can rapidly score the text from the plurality of dimensions without consuming manpower.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a multi-dimensional text scoring method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a multidimensional text scoring method according to an embodiment of the present invention;
FIG. 3 is a schematic sub-flow chart of a multidimensional text scoring method according to an embodiment of the present invention;
FIG. 4 is a block diagram of a text conversion model according to an embodiment of the present invention;
fig. 5 is another schematic sub-flow diagram of a multidimensional text scoring method according to an embodiment of the present invention;
FIG. 6 is a block diagram of an LSTM scoring model according to an embodiment of the present invention;
fig. 7 is another schematic sub-flow diagram of a multidimensional text scoring method according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating a multi-dimensional text scoring method according to another embodiment of the present invention;
FIG. 9 is a schematic block diagram of a multi-dimensional text scoring apparatus provided by an embodiment of the present invention;
FIG. 10 is a schematic block diagram of a multi-dimensional text scoring apparatus according to another embodiment of the present invention;
FIG. 11 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The embodiment of the invention provides a multidimensional text scoring method and device, computer equipment and a storage medium.
The execution main body of the multidimensional text scoring method can be the multidimensional text scoring device provided by the embodiment of the invention, or computer equipment integrating the multidimensional text scoring device, wherein the multidimensional text scoring device can be realized in a hardware or software mode, the computer equipment can comprise a server or a terminal, and the terminal can be a smart phone, a tablet computer, a palm computer, a notebook computer or the like.
For example, referring to fig. 1, fig. 1 is a schematic view of an application scenario of a multidimensional text scoring method according to an embodiment of the present invention. The multidimensional text scoring method can be applied to the computer device shown in fig. 1. When a user needs to perform multidimensional scoring on a text to be scored, the text to be scored can be provided to the computer device. The computer device then converts the text to be scored into a text feature vector according to a text conversion network model, and inputs the text feature vector into the Long Short-Term Memory (LSTM) scoring model corresponding to each of the scoring dimensions; for example, the text feature vector is respectively input into a first scoring-point LSTM scoring model, a second scoring-point LSTM scoring model, …, and an Nth scoring-point LSTM scoring model, where N is an integer greater than 2, and these models output a first scoring-point score, a second scoring-point score, …, and an Nth scoring-point score. Finally, the target score of the text to be scored is obtained from the first scoring-point score, the second scoring-point score, …, and the Nth scoring-point score.
Fig. 2 is a flowchart illustrating a multidimensional text scoring method according to an embodiment of the present invention. As shown, the method includes the following steps S110-140.
And S110, obtaining a text to be scored.
In this embodiment, the text to be scored may be in Chinese, English, or another language; the specific language type is not limited here. The text to be scored may be a thesis, a composition, a diary, or the like; the type of text is not limited here either.
In some embodiments, when an author (e.g., a student) finishes writing a text to be scored, or a scorer of the text (e.g., a teacher) acquires the text to be scored, if the text to be scored is an electronic text, it is directly input into the computer device.
In some embodiments, if the paper text needs to be scored, for example, the author writes the text on paper when writing the text, then the user needs to take a picture of the paper text, and then, as shown in fig. 3, step S110 includes:
and S111, acquiring an image containing the text to be scored.
For example, a user photographs a text to be scored written on paper, obtains an image containing the text to be scored, and then inputs the image into a computer device.
For another example, a camera is arranged on the computer device, and the user directly obtains an image containing the text to be scored through shooting by the camera on the computer device.
And S112, carrying out optical character recognition processing on the image to obtain a text to be scored.
In this embodiment, after obtaining the image including the text to be scored, the image is subjected to Optical Character Recognition (OCR), and the recognized text is used as the text to be scored.
In some embodiments, the computer device in this embodiment also supports online scoring of text, i.e., the user can compose the text online; in this case the computer device can be a terminal such as a mobile phone or a computer.
For example, when the user finishes writing a paragraph or a sentence, the computer device automatically scores that paragraph or sentence as a text to be scored; alternatively, when the user wants to check the text during online writing, the user actively triggers the computer device to execute the multidimensional text scoring method of the present invention, and the computer device takes the text written by the user as the text to be scored.
And S120, converting the text to be scored into the text characteristic vector according to a preset text conversion network model.
In some embodiments, the text-to-text network Model may be a Vector Space Model (VSM).
As shown in fig. 4, the text conversion model in this embodiment includes a word embedding layer, a convolution layer, a pooling layer, and a dense layer, and in some embodiments, the text conversion model in this embodiment may correspond to a plurality of LSTM scoring models; in other embodiments, each LSTM scoring model corresponds to a text conversion model; in still other embodiments, the word embedding layer in the text conversion model may correspond to a plurality of text conversion models, where the corresponding text conversion model includes a convolution layer, a pooling layer, and a dense layer, and each LSTM scoring model corresponds to a text conversion model.
In some embodiments, as shown in fig. 5, step S120 includes:
and S121, splitting the text to be scored into a plurality of words.
In this embodiment, if the text to be scored is a Chinese text, the words at this time are Chinese words, such as "flower" and "sky". In this case, the words in the text to be scored need to be recognized according to a vocabulary recognition network model, and the text is then split according to the recognized words, where the vocabulary recognition network model may be a convolutional neural network trained on an existing vocabulary.
If the text to be scored is an English text, the text to be scored is split into a plurality of words; for example, "I am" is split into "I" and "am".
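As a minimal illustration (not part of the patent), the English-text splitting step can be sketched with a simple regex tokenizer; the helper name below is hypothetical:

```python
import re

def split_into_words(text: str) -> list[str]:
    # Extract lowercase word tokens; a stand-in for the word-splitting
    # step described above (English-text case only).
    return re.findall(r"[a-z']+", text.lower())

print(split_into_words("I am writing an essay."))  # ['i', 'am', 'writing', 'an', 'essay']
```

A real system would also need to handle Chinese word segmentation, which this sketch does not cover.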
And S122, respectively converting the words into a plurality of word feature vectors according to the word embedding layer in the text conversion network model.
Specifically, the Word Embedding layer in this embodiment converts the words in a text into numeric vectors: since a computer cannot understand words directly, they must be converted into numeric form before standard machine learning algorithms can analyze them.
In some embodiments, the most common 5,000 words are used as the vocabulary, and all other words are mapped to vectors. For example, one-hot encoding in the word embedding layer represents "I" and "am" as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] and [0, 1, 0, 0, 0, 0, 0, 0, 0, 0] respectively, which facilitates computer processing. For example, given a text composed of words w1, w2, …, wm, each word is mapped to a vector xi, i = 1, 2, …, m, as shown in the following equation:

xi = E·wi

where wi is the encoded representation of the i-th word in the text, E is the embedding matrix, and xi is the embedding vector of the i-th word, i.e., the word feature vector.
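The mapping xi = E·wi can be checked numerically: multiplying the embedding matrix E by a one-hot vector wi simply selects the corresponding column of E. A small NumPy sketch with a toy vocabulary and embedding dimension (these sizes are illustrative, not from the patent):

```python
import numpy as np

vocab = ["i", "am", "a", "student"]        # toy vocabulary
d = 3                                      # toy embedding dimension
rng = np.random.default_rng(0)
E = rng.normal(size=(d, len(vocab)))       # embedding matrix E, shape (d, |V|)

def one_hot(word: str) -> np.ndarray:
    # Encoded representation wi of a word: a one-hot vector over the vocabulary.
    w = np.zeros(len(vocab))
    w[vocab.index(word)] = 1.0
    return w

x = E @ one_hot("am")                      # xi = E wi
# Equivalent to directly reading the word's column of E:
assert np.allclose(x, E[:, vocab.index("am")])
```

In practice E is learned during training rather than drawn at random, and a hash or index lookup replaces the explicit matrix-vector product.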
And S123, determining sentence characteristic vectors of the text to be scored according to the plurality of word characteristic vectors.
In some embodiments, step S123 includes: in the convolution layer (Convolutional Layer) of the text conversion network model, a filter is used to obtain dispersed sentence feature vectors from the plurality of word feature vectors; the dispersed sentence feature vectors are then scale-unified by the pooling layer in the text conversion network model (i.e., the pooling layer fixes the output dimensions) to obtain the sentence feature vectors.
Specifically, for the word feature vectors xi: if a text contains 10 words and each word is represented by a 100-dimensional vector, the above method produces a 10 × 100 matrix, which can be treated like an image. The convolution layer slides a filter over this matrix of word feature vectors to obtain dispersed sentence feature expressions (each sentence feature vector is a feature map), and the pooling layer then fixes the output dimension, which makes it convenient for the dense layer to perform weighted distribution.
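A minimal NumPy sketch of this convolution-plus-pooling step over a 10 × 100 word matrix (the function name and shapes are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def conv_and_pool(word_vectors: np.ndarray, filt: np.ndarray) -> float:
    """Slide a filter over the word-vector matrix (1-D convolution over
    word positions) and max-pool over positions, producing one
    fixed-size feature per filter regardless of text length."""
    m, _ = word_vectors.shape
    h = filt.shape[0]                          # filter height (n-gram size)
    responses = [float(np.sum(word_vectors[i:i + h] * filt))
                 for i in range(m - h + 1)]
    return max(responses)                      # pooling fixes the output size

rng = np.random.default_rng(1)
words = rng.normal(size=(10, 100))             # 10 words, 100-dim vectors (as in the text)
trigram_filter = rng.normal(size=(3, 100))     # one learned filter spanning 3 words
feature = conv_and_pool(words, trigram_filter)
```

With many such filters, the pooled features are stacked into the fixed-length sentence feature vector that the dense layer then weights.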
And S124, carrying out weighted distribution processing on the sentence characteristic vectors according to the dense layer in the text conversion network model to obtain text characteristic vectors.
The text feature vector in this embodiment is composed of a plurality of sentence feature vectors, and the dense layer is a full connection layer of the text conversion network model.
In this embodiment, the weighted distribution is performed on the plurality of sentence feature vectors obtained in step S123 through the dense layer, so as to obtain weighted sentence feature vectors corresponding to the plurality of sentence feature vectors, respectively, and the plurality of weighted sentence feature vectors form the text feature vector in this embodiment.
S130, scoring the text feature vectors respectively based on the LSTM scoring model corresponding to each scoring dimension in the plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension.
In this embodiment, the framework of the LSTM scoring model is shown in fig. 6, and includes an LSTM layer, a pooling layer, and a dense layer.
In some embodiments, as shown in fig. 7, for the LSTM scoring model corresponding to each scoring dimension, step S130 includes:
s131, respectively determining sentence scores of each sentence feature vector in the text feature vectors through scoring logic of an LSTM layer in an LSTM scoring model.
In this embodiment, the LSTM has memory and can use context, for example predicting the last word of an incomplete sentence such as "the children are in the ____". The LSTM can predict the best-matching words and sentences in longer sentences, or even an entire article, which facilitates scoring under the various scoring dimensions.
In this embodiment, the scoring logic (i.e., the scoring criteria — under what conditions a sentence loses points) is different for each scoring point. This embodiment is described by way of example with scoring dimensions including the "sentence structure dimension", the "grammar dimension", and the "text-to-topic association dimension"; of course, other scoring dimensions can also be configured, such as a "high-level word dimension" and a "layout dimension", and the specific types of scoring dimensions are not limited here.
For the LSTM scoring model corresponding to the "sentence structure dimension", the model may be trained, according to a syntactic parse tree, with correctly structured and incorrectly structured sentences from a corpus (for example, a subject-predicate-object structure, with the words divided into adjectives, adverbs, and so on); whether the structure of a sentence is correct is then determined by matching against fixed sentence patterns, and each sentence is scored accordingly.
For the LSTM scoring model corresponding to the "grammar dimension", a sentence may be divided into different parts of speech. For example, for "Linda will feel uncomfortable", the corresponding part-of-speech tags are: ("Linda", "NNP"), ("will", "MD"), ("feel", "VB"), ("uncomfortable", "ADJ") — a proper noun (NNP), a modal verb (MD), a verb (VB), and an adjective (ADJ). Using the memory function of the LSTM, the model predicts which parts of speech can follow the proper noun (NNP) and the modal verb (MD), compares the prediction with the actual sentence, and deducts points if the part of speech is wrong.
For the LSTM scoring model corresponding to the "text-to-topic association dimension", words similar to the topic, sentences similar to the thesis, sentences that contribute meaningfully to the argument, and the like can be searched for. Using the classification and prediction functions of the LSTM, the model is trained with data (such as synonyms in a corpus and reasonable logical structures), and related sentences are found and scored through classification.
And S132, carrying out scale unified processing on the sentence scores through a pooling layer in the LSTM scoring model to obtain the processed sentence scores.
In this embodiment, the sentence scores are subjected to scale unification processing, that is, the sentence scores corresponding to each sentence in the text to be scored are processed into a fixed dimension, so that weighting distribution in the following dense layer is facilitated.
And S133, carrying out weighted distribution processing on the processed sentence scores through a dense layer in the LSTM scoring model to obtain the dimension scores of the texts to be scored.
In this embodiment, each scoring dimension has a corresponding weight distribution rule; for example, sentences located at the beginning and end of the text to be scored receive a higher weight, while sentences located in the middle of the text to be scored receive a lower weight.
For another example, in step S133 the weight of each sentence score is determined according to the length of the sentence (the longer the sentence, the higher the weight of its score; conversely, the shorter the sentence, the lower the weight). In that case: the sentence length corresponding to each processed sentence score is determined; the dense layer then determines the weight corresponding to each processed sentence score according to the sentence length; and finally the dense layer performs weighted distribution on the processed sentence scores according to these weights to obtain the dimension score of the text to be scored.
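The length-based weighting in this example can be sketched in a few lines of plain Python (the helper name is hypothetical; weights are taken proportional to word count, one possible reading of "longer sentence, higher weight"):

```python
def dimension_score(sentences: list[str], sentence_scores: list[float]) -> float:
    """Weight each sentence score by its length (longer sentence -> higher
    weight), then sum - a sketch of the dense-layer weighted distribution."""
    lengths = [len(s.split()) for s in sentences]
    total = sum(lengths)
    weights = [n / total for n in lengths]
    return sum(w * s for w, s in zip(weights, sentence_scores))

sents = ["Short one.", "This sentence is quite a bit longer than the first."]
scores = [6.0, 9.0]
print(dimension_score(sents, scores))  # (2*6 + 10*9) / 12 = 8.5
```

In the patent the weights are produced by a trained dense layer rather than this fixed rule; the sketch only shows the weighted-sum shape of the computation.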
In other words, in this embodiment, each sentence in the text to be scored is scored in each scoring dimension, and then the score of the sentence corresponding to each sentence is weighted and distributed according to the weight of the sentence, so as to obtain the score of the text to be scored in each scoring dimension.
And S140, determining the target score of the text to be scored according to the dimension score.
Specifically, the target score in this embodiment includes the comprehensive score of the text to be scored across all dimensions as well as the dimension score under each scoring dimension.
In this embodiment, each scoring dimension is preset with a corresponding weight; for example, the weight of the "sentence structure dimension" is 0.2, the weight of the "grammar dimension" is 0.3, and the weight of the "text-to-topic association dimension" is 0.5. In this case, the comprehensive score = sentence structure dimension score × 0.2 + grammar dimension score × 0.3 + text-to-topic association dimension score × 0.5.
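With the example weights above, the comprehensive score is just a weighted sum of the dimension scores; a minimal sketch (the dimension key names are illustrative):

```python
# Hypothetical per-dimension weights matching the example in the text.
WEIGHTS = {"sentence_structure": 0.2, "grammar": 0.3, "topic_relevance": 0.5}

def comprehensive_score(dim_scores: dict[str, float]) -> float:
    # Weighted sum of the per-dimension scores using the preset weights.
    return sum(WEIGHTS[dim] * score for dim, score in dim_scores.items())

print(comprehensive_score(
    {"sentence_structure": 80, "grammar": 90, "topic_relevance": 70}))
```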
In some embodiments, a dense layer is connected after the LSTM scoring models, and the dimension scores are weighted through this dense layer to obtain the comprehensive score.
Specifically, after the target score of the text to be scored is determined, it is output to the user. If the computer device in this embodiment is a terminal, the target score can be output directly; if the computer device is a server, the server needs to send the target score to the user terminal so that the user obtains the target score of the text to be scored.
After obtaining the target score of the text to be scored, because the target score includes not only the comprehensive score but also the score under each dimension, the user can know in which aspects the text needs improvement, which better helps the author improve the writing level.
In some embodiments, the computer device may further collect error points while scoring each dimension and then automatically generate comments for the corresponding scoring dimension from the collected error points; for example, in the "grammar dimension", if there are more than 5 grammar errors, a comment reminding the author to pay attention to grammar may be added automatically.
In summary, the computer device in this embodiment obtains a text to be scored; then, converting the text to be scored into a text characteristic vector according to a preset text conversion network model; scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension; and finally, determining the target score of the text to be scored according to the dimension score. In the embodiment of the invention, the LSTM scoring model with a plurality of scoring dimensions is used for scoring the text with corresponding dimensions respectively to obtain the scores of the text to be scored under the plurality of scoring dimensions, so that the scheme can rapidly score the text from the plurality of dimensions without consuming manpower.
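The overall flow summarized above can be sketched as a small pipeline; every name below is a hypothetical stand-in for the trained models rather than the patent's actual implementation:

```python
def score_text(text, convert, scoring_models, combine):
    """Pipeline sketch: convert the text to a feature vector, score it
    under each scoring dimension's model, then combine the dimension
    scores into a target score."""
    features = convert(text)
    dim_scores = {dim: model(features) for dim, model in scoring_models.items()}
    return combine(dim_scores), dim_scores

# Toy stand-ins for the text-conversion model and the per-dimension LSTMs:
models = {"structure": lambda f: 80.0, "grammar": lambda f: 90.0}
target, per_dim = score_text(
    "Some essay text.",
    convert=lambda t: len(t),                     # dummy "feature vector"
    scoring_models=models,
    combine=lambda d: sum(d.values()) / len(d))   # simple average
print(target, per_dim)
```

The real system would plug in the trained text conversion network, one LSTM scoring model per dimension, and the weighted combination described earlier.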
Fig. 8 is a flowchart illustrating a multidimensional text scoring method according to another embodiment of the present invention. As shown in fig. 8, the multidimensional text scoring method of this embodiment includes steps S210 to S290. Steps S210 to S240 are similar to steps S110 to S140 in the above embodiment and are not described again here. The added steps S250 to S290 of this embodiment are explained below.
And S250, receiving a modified text corresponding to the text to be scored.
In this embodiment, the modified text corresponding to the text to be scored is a text that the user has revised according to the target score of the earlier text to be scored; to check the effect of the revision, the user can input the modified text into the computer device for scoring again.
In some embodiments, the computer device may determine whether the modified text is scored according to a text identifier of the modified text, and if the modified text is scored, find out a pre-text corresponding to the modified text according to a scoring record, where the text identifier may be a text title or a text number, and a specific type of the identifier is not limited herein.
And S260, converting the modified text into a modified text feature vector according to the text conversion network model.
And S270, scoring the feature vectors of the modified texts respectively based on an LSTM scoring model to obtain modified text dimension scores of the modified texts under each scoring dimension.
And S280, determining a modified text target score of the modified text according to the modified text dimension score.
It should be noted that steps S260 to S280 differ from steps S120 to S140 only in that the text to be scored is replaced by the modified text; the other steps are similar and are not described in detail again here.
And S290, comparing the target score with the modified text target score to obtain a score comparison result.
Specifically, in this embodiment, each dimension score in the target score is compared with the corresponding dimension score in the modified text target score, and the comprehensive scores are compared with each other, so as to obtain the score comparison result.
The score comparison result includes the score change of each scoring dimension and the change of the comprehensive score, for example: the sentence structure dimension score is improved by 5 points, the grammar dimension score by 4 points, the text-theme relevance dimension score by 6 points, and the comprehensive score by 5.2 points. Such a comparison makes it convenient for the user to know whether the modified text has improved, and in which dimensions.
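Using the example figures above, the comparison of step S290 can be sketched as follows (the dictionary keys and scores are illustrative):

```python
# Hypothetical sketch of step S290: compare the original and modified target
# scores dimension by dimension and report the changes (after - before).

def compare_scores(before: dict, after: dict) -> dict:
    """Return the per-dimension and composite score changes."""
    return {key: after[key] - before[key] for key in before}

# Illustrative target scores matching the example in the text.
before = {"sentence structure": 70, "grammar": 72, "relevance": 68, "composite": 70.0}
after = {"sentence structure": 75, "grammar": 76, "relevance": 74, "composite": 75.2}
```

Applied to these figures, the comparison yields improvements of 5, 4, and 6 points in the three dimensions and about 5.2 points in the composite score.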
Fig. 9 is a schematic block diagram of a multidimensional text scoring apparatus according to an embodiment of the present invention. As shown in fig. 9, the present invention also provides a multidimensional text scoring device corresponding to the above multidimensional text scoring method. The multidimensional text scoring device comprises a unit for executing the multidimensional text scoring method, and the device can be configured in a terminal or a server. Specifically, referring to fig. 9, the multidimensional text scoring device includes an obtaining unit 901, a converting unit 902, a scoring unit 903, and a determining unit 904, where:
an obtaining unit 901, configured to obtain a text to be scored;
a converting unit 902, configured to convert the text to be scored into a text feature vector according to a preset text conversion network model;
the scoring unit 903 is configured to score the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions, so as to obtain a dimension score of the text to be scored in each scoring dimension;
a determining unit 904, configured to determine a target score of the text to be scored according to the dimension score.
In some embodiments, the scoring unit 903 is specifically configured to:
respectively determining sentence scores of each sentence feature vector in the text feature vectors by scoring logics of an LSTM layer in the LSTM scoring model aiming at the LSTM scoring model corresponding to each scoring dimension;
carrying out scale unified processing on the sentence scores through a pooling layer in the LSTM scoring model to obtain processed sentence scores;
and performing weighted distribution processing on the processed sentence scores through a dense layer in the LSTM scoring model to obtain the dimension scores of the texts to be scored.
In some embodiments, the scoring unit 903 is further specifically configured to:
determining sentence lengths respectively corresponding to the processed sentence scores;
determining weights respectively corresponding to the processed sentence scores according to the sentence lengths through the dense layer;
and performing weighted distribution processing on the processed sentence scores through the dense layer according to the weights to obtain the dimension scores of the texts to be scored.
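The pooling and length-weighted dense stages described above can be sketched in plain Python as follows; the clipping-based scale unification and the length-proportional weights are illustrative assumptions standing in for the learned layers of the LSTM scoring model:

```python
# Hypothetical sketch of the post-LSTM stages: sentence scores are first
# scale-unified (here: clipped to [0, 100], standing in for the pooling layer)
# and then combined with weights proportional to sentence length (standing in
# for the dense layer) to produce the dimension score.

def dimension_score(sentence_scores: list, sentence_lengths: list) -> float:
    # "Pooling": unify the scale of the raw sentence scores.
    pooled = [max(0.0, min(100.0, s)) for s in sentence_scores]
    # "Dense": weight each sentence score by its relative length.
    total_len = sum(sentence_lengths)
    weights = [length / total_len for length in sentence_lengths]
    return sum(w * s for w, s in zip(weights, pooled))
```

With sentence scores [80, 120] and lengths [1, 3], the second score is clipped to 100, the longer sentence receives weight 0.75, and the dimension score is 95.0.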
In some embodiments, the conversion unit 902 is specifically configured to:
splitting the text to be scored into a plurality of words;
converting the words into a plurality of word feature vectors according to a word embedding layer in the text conversion network model;
determining sentence characteristic vectors of the text to be scored according to the word characteristic vectors;
and carrying out weighted distribution processing on the sentence characteristic vector according to a dense layer in the text conversion network model to obtain the text characteristic vector.
In some embodiments, the conversion unit 902 is further specifically configured to:
obtaining dispersed sentence vectors of the word feature vectors by using a filter in a convolution layer of the text conversion network model;
and performing scale unification processing on the dispersed sentence vectors through a pooling layer in the text conversion network model to obtain the sentence feature vectors.
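The conversion pipeline above (word embedding, convolution with a filter, and pooling) can be sketched as follows; the toy one-dimensional embedding and window size are illustrative assumptions, not the learned layers of the model:

```python
# Hypothetical sketch: words are embedded as toy scalar "vectors", a sliding
# window stands in for the convolution filter producing dispersed sentence
# vectors, and max-pooling unifies them into one sentence feature.

def embed(word: str) -> float:
    # Toy one-dimensional "embedding": word length stands in for a learned vector.
    return float(len(word))

def sentence_feature(words: list, window: int = 2) -> float:
    vectors = [embed(w) for w in words]
    # "Convolution": average each window of word vectors (the filter).
    dispersed = [sum(vectors[i:i + window]) / window
                 for i in range(len(vectors) - window + 1)]
    # "Pooling": max-pool the dispersed vectors into one sentence feature.
    return max(dispersed)
```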
In some embodiments, the determining unit 904 is specifically configured to:
determining a comprehensive score of the text to be scored according to the weight corresponding to each scoring dimension and the dimension score;
determining the composite score and the dimension score as the target score.
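The composite-score computation described above can be sketched as follows; the per-dimension weights are assumed to be supplied externally (the embodiment does not fix their values):

```python
# Hypothetical sketch: the composite score is a weighted sum of the dimension
# scores, and the target score carries both the composite and the dimensions.

def target_score(dimension_scores: dict, weights: dict) -> dict:
    composite = sum(weights[d] * s for d, s in dimension_scores.items())
    return {"composite": composite, **dimension_scores}
```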
In some embodiments:
the obtaining unit 901 is further configured to receive a modified text corresponding to the text to be scored;
a converting unit 902, further configured to convert the modified text into a modified text feature vector according to the text conversion network model;
the scoring unit 903 is further configured to score the modified text feature vectors based on the LSTM scoring model, so as to obtain modified text dimension scores of the modified text in each scoring dimension;
a determining unit 904, further configured to determine a modified text target score of the modified text according to the modified text dimension score.
fig. 10 is a schematic block diagram of a multi-dimensional text scoring apparatus according to another embodiment of the present invention. As shown in fig. 10, the multidimensional text scoring device of the present embodiment is added with a comparison unit 905 on the basis of the above embodiments.
A comparing unit 905, configured to compare the target score with the modified text target score to obtain a score comparison result.
It should be noted that, as can be clearly understood by those skilled in the art, the specific implementation processes of the multi-dimensional text scoring device and each unit may refer to the corresponding descriptions in the foregoing method embodiments, and for convenience and brevity of description, no further description is provided here.
The multi-dimensional text scoring apparatus described above may be implemented in the form of a computer program that is executable on a computer device such as that shown in fig. 11.
Referring to fig. 11, fig. 11 is a schematic block diagram of a computer device according to an embodiment of the present invention. The computer device 1100 may be a terminal or a server, where the terminal may be an electronic device with a communication function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, and a wearable device. The server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 11, the computer device 1100 includes a processor 1102, a memory, and a network interface 1105 connected via a system bus 1101, where the memory may include a non-volatile storage medium 1103 and an internal memory 1104.
The non-volatile storage medium 1103 may store an operating system 11031 and computer programs 11032. The computer program 11032 includes program instructions that, when executed, cause the processor 1102 to perform a multi-dimensional text scoring method.
The processor 1102 is configured to provide computing and control capabilities to support the operation of the overall computer device 1100.
The internal memory 1104 provides an environment for running the computer program 11032 in the non-volatile storage medium 1103, and when the computer program 11032 is executed by the processor 1102, the processor 1102 may be enabled to execute a multi-dimensional text scoring method.
The network interface 1105 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 11 is a block diagram of only part of the configuration relevant to the present invention and does not limit the computer device 1100 to which the present invention is applied; a particular computer device 1100 may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
Wherein the processor 1102 is adapted to run a computer program 11032 stored in the memory to implement the steps of:
acquiring a text to be scored;
converting the text to be scored into a text characteristic vector according to a preset text conversion network model;
scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension;
and determining the target score of the text to be scored according to the dimension score.
In some embodiments, when the processor 1102 implements the step of scoring the text feature vectors based on the LSTM scoring model corresponding to each scoring dimension in the preset plurality of scoring dimensions, and obtaining a dimension score of the text to be scored in each scoring dimension, the following steps are specifically implemented:
respectively determining sentence scores of each sentence feature vector in the text feature vectors by scoring logics of an LSTM layer in the LSTM scoring model aiming at the LSTM scoring model corresponding to each scoring dimension;
carrying out scale unified processing on the sentence scores through a pooling layer in the LSTM scoring model to obtain processed sentence scores;
and performing weighted distribution processing on the processed sentence scores through a dense layer in the LSTM scoring model to obtain the dimension scores of the texts to be scored.
In some embodiments, when the processor 1102 performs weighted distribution processing on the processed sentence scores through the dense layer in the LSTM scoring model to obtain the dimension score of the text to be scored, the following steps are specifically implemented:
determining sentence lengths respectively corresponding to the processed sentence scores;
determining weights respectively corresponding to the processed sentence scores according to the sentence lengths through the dense layer;
and performing weighted distribution processing on the processed sentence scores through the dense layer according to the weights to obtain the dimension scores of the texts to be scored.
In some embodiments, when the processor 1102 implements the step of converting the text to be scored into the text feature vector according to the preset text conversion network model, the following steps are specifically implemented:
splitting the text to be scored into a plurality of words;
converting the words into a plurality of word feature vectors according to a word embedding layer in the text conversion network model;
determining sentence characteristic vectors of the text to be scored according to the word characteristic vectors;
and carrying out weighted distribution processing on the sentence characteristic vector according to a dense layer in the text conversion network model to obtain the text characteristic vector.
In some embodiments, when the processor 1102 performs the step of determining the sentence feature vector of the text to be scored according to the word feature vectors, the following steps are specifically performed:
obtaining dispersed sentence vectors of the word feature vectors by using a filter in a convolution layer of the text conversion network model;
and performing scale unification processing on the dispersed sentence vectors through a pooling layer in the text conversion network model to obtain the sentence feature vectors.
In some embodiments, when the processor 1102 implements the step of determining the target score of the text to be scored according to the dimension score, the following steps are specifically implemented:
determining a comprehensive score of the text to be scored according to the weight corresponding to each scoring dimension and the dimension score;
determining the composite score and the dimension score as the target score.
In some embodiments, after implementing the step of determining the target score of the text to be scored according to the dimension score, the processor 1102 further implements the steps of:
receiving a modified text corresponding to the text to be scored;
converting the modified text into a modified text feature vector according to the text conversion network model;
based on the LSTM scoring model, scoring the modified text feature vectors respectively to obtain modified text dimension scores of the modified texts under each scoring dimension;
determining a modified text target score of the modified text according to the modified text dimension score;
and comparing the target score with the modified text target score to obtain a score comparison result.
It should be appreciated that in embodiments of the present invention, the Processor 1102 may be a Central Processing Unit (CPU), and the Processor 1102 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, etc. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program, wherein the computer program comprises program instructions. The program instructions, when executed by the processor, cause the processor to perform the steps of:
acquiring a text to be scored;
converting the text to be scored into a text characteristic vector according to a preset text conversion network model;
scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension;
and determining the target score of the text to be scored according to the dimension score.
In some embodiments, when the processor executes the program instruction to implement the step of scoring the text feature vector based on the LSTM scoring model corresponding to each scoring dimension in the preset plurality of scoring dimensions, and obtaining a dimension score of the text to be scored in each scoring dimension, the following steps are specifically implemented:
respectively determining sentence scores of each sentence feature vector in the text feature vectors by scoring logics of an LSTM layer in the LSTM scoring model aiming at the LSTM scoring model corresponding to each scoring dimension;
carrying out scale unified processing on the sentence scores through a pooling layer in the LSTM scoring model to obtain processed sentence scores;
and performing weighted distribution processing on the processed sentence scores through a dense layer in the LSTM scoring model to obtain the dimension scores of the texts to be scored.
In some embodiments, when the processor executes the program instructions to implement the step of obtaining the dimension score of the text to be scored by performing weighted distribution processing on the processed sentence scores through a dense layer in the LSTM scoring model, the processor specifically implements the following steps:
determining sentence lengths respectively corresponding to the processed sentence scores;
determining weights respectively corresponding to the processed sentence scores according to the sentence lengths through the dense layer;
and performing weighted distribution processing on the processed sentence scores through the dense layer according to the weights to obtain the dimension scores of the texts to be scored.
In some embodiments, when the processor executes the program instructions to implement the step of converting the text to be scored into the text feature vectors according to the preset text conversion network model, the following steps are specifically implemented:
splitting the text to be scored into a plurality of words;
converting the words into a plurality of word feature vectors according to a word embedding layer in the text conversion network model;
determining sentence characteristic vectors of the text to be scored according to the word characteristic vectors;
and carrying out weighted distribution processing on the sentence characteristic vector according to a dense layer in the text conversion network model to obtain the text characteristic vector.
In some embodiments, when the processor executes the program instructions to implement the step of determining the sentence feature vector of the text to be scored according to the plurality of word feature vectors, the processor implements the following steps:
obtaining dispersed sentence vectors of the word feature vectors by using a filter in a convolution layer of the text conversion network model;
and performing scale unification processing on the dispersed sentence vectors through a pooling layer in the text conversion network model to obtain the sentence feature vectors.
In some embodiments, when the processor executes the program instructions to implement the step of determining the target score of the text to be scored according to the dimension score, the processor implements the following steps:
determining a comprehensive score of the text to be scored according to the weight corresponding to each scoring dimension and the dimension score;
determining the composite score and the dimension score as the target score.
In some embodiments, after executing the program instructions to implement the step of determining a target score for the text to be scored from the dimension score, the processor further implements the steps of:
receiving a modified text corresponding to the text to be scored;
converting the modified text into a modified text feature vector according to the text conversion network model;
based on the LSTM scoring model, scoring the modified text feature vectors respectively to obtain modified text dimension scores of the modified texts under each scoring dimension;
determining a modified text target score of the modified text according to the modified text dimension score;
and comparing the target score with the modified text target score to obtain a score comparison result.
The storage medium may be a USB disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium that can store program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two, and that the components and steps of the examples have been described above generally in terms of their functions in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered as going beyond the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a computer device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A multidimensional text scoring method is characterized by comprising the following steps:
acquiring a text to be scored;
converting the text to be scored into a text characteristic vector according to a preset text conversion network model;
scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension;
and determining the target score of the text to be scored according to the dimension score.
2. The method according to claim 1, wherein the scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored in each scoring dimension comprises:
respectively determining sentence scores of each sentence feature vector in the text feature vectors by scoring logics of an LSTM layer in the LSTM scoring model aiming at the LSTM scoring model corresponding to each scoring dimension;
carrying out scale unified processing on the sentence scores through a pooling layer in the LSTM scoring model to obtain processed sentence scores;
and performing weighted distribution processing on the processed sentence scores through a dense layer in the LSTM scoring model to obtain the dimension scores of the texts to be scored.
3. The method of claim 2, wherein the performing a weighted distribution process on the processed sentence scores through a dense layer in the LSTM scoring model to obtain the dimensional scores of the text to be scored comprises:
determining sentence lengths respectively corresponding to the processed sentence scores;
determining weights respectively corresponding to the processed sentence scores according to the sentence lengths through the dense layer;
and performing weighted distribution processing on the processed sentence scores through the dense layer according to the weights to obtain the dimension scores of the texts to be scored.
4. The method according to claim 1, wherein the converting the text to be scored into text feature vectors according to a preset text conversion network model comprises:
splitting the text to be scored into a plurality of words;
converting the words into a plurality of word feature vectors according to a word embedding layer in the text conversion network model;
determining sentence characteristic vectors of the text to be scored according to the word characteristic vectors;
and carrying out weighted distribution processing on the sentence characteristic vector according to a dense layer in the text conversion network model to obtain the text characteristic vector.
5. The method of claim 4, wherein determining sentence feature vectors for the text to be scored from the plurality of word feature vectors comprises:
in the convolution layer of the text conversion network model, a filter is utilized to obtain scattered sentence vectors of the word feature vectors;
and carrying out scale unified processing on the dispersed sentence characteristic vectors according to a pooling layer in the text conversion network model to obtain the sentence characteristic vectors.
6. The method of any one of claims 1 to 5, wherein the determining a target score for the text to be scored according to the dimension score comprises:
determining a comprehensive score of the text to be scored according to the weight corresponding to each scoring dimension and the dimension score;
determining the composite score and the dimension score as the target score.
7. The method of any of claims 1-5, wherein after determining a target score for the text to be scored according to the dimension score, the method further comprises:
receiving a modified text corresponding to the text to be scored;
converting the modified text into a modified text feature vector according to the text conversion network model;
based on the LSTM scoring model, scoring the modified text feature vectors respectively to obtain modified text dimension scores of the modified texts under each scoring dimension;
determining a modified text target score of the modified text according to the modified text dimension score;
and comparing the target score with the modified text target score to obtain a score comparison result.
8. A multidimensional text scoring apparatus, comprising:
the acquisition unit is used for acquiring a text to be scored;
the conversion unit is used for converting the text to be scored into a text characteristic vector according to a preset text conversion network model;
the scoring unit is used for scoring the text feature vectors respectively based on an LSTM scoring model corresponding to each scoring dimension in a plurality of preset scoring dimensions to obtain a dimension score of the text to be scored under each scoring dimension;
and the determining unit is used for determining the target score of the text to be scored according to the dimension score.
9. A computer arrangement, characterized in that the computer arrangement comprises a memory having stored thereon a computer program and a processor implementing the method according to any of claims 1-7 when executing the computer program.
10. A storage medium, characterized in that the storage medium stores a computer program comprising program instructions which, when executed by a processor, implement the method according to any one of claims 1-7.
CN202110481306.3A 2021-04-30 2021-04-30 Multidimensional text scoring method and device, computer equipment and storage medium Pending CN113076736A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110481306.3A CN113076736A (en) 2021-04-30 2021-04-30 Multidimensional text scoring method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113076736A true CN113076736A (en) 2021-07-06

Family

ID=76616321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110481306.3A Pending CN113076736A (en) 2021-04-30 2021-04-30 Multidimensional text scoring method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113076736A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113536769A (en) * 2021-07-21 2021-10-22 深圳证券信息有限公司 Text conciseness and clarity evaluation method and related equipment
CN113743086A (en) * 2021-08-31 2021-12-03 北京阅神智能科技有限公司 Chinese sentence evaluation output method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104391984A (en) * 2014-12-11 2015-03-04 南京大学 Recommendation level grading method for Chinese and English mixed network user reviews
CN111199151A (en) * 2019-12-31 2020-05-26 联想(北京)有限公司 Data processing method and data processing device
WO2020107878A1 (en) * 2018-11-30 2020-06-04 平安科技(深圳)有限公司 Method and apparatus for generating text summary, computer device and storage medium
CN111914532A (en) * 2020-09-14 2020-11-10 北京阅神智能科技有限公司 Chinese composition scoring method
CN112527968A (en) * 2020-12-22 2021-03-19 大唐融合通信股份有限公司 Composition review method and system based on neural network

Similar Documents

Publication Publication Date Title
US11593612B2 (en) Intelligent image captioning
CN110427463B (en) Search statement response method and device, server and storage medium
US10504010B2 (en) Systems and methods for fast novel visual concept learning from sentence descriptions of images
CN108363790B (en) Method, device, equipment and storage medium for evaluating comments
US20210256390A1 (en) Computationally efficient neural network architecture search
CN109840287A (en) Neural network-based cross-modal information retrieval method and device
CN109376222B (en) Question-answer matching degree calculation method, question-answer automatic matching method and device
CN110795552A (en) Training sample generation method and device, electronic equipment and storage medium
CN111985243B (en) Emotion model training method, emotion analysis device and storage medium
KR101988165B1 (en) Method and system for improving the accuracy of speech recognition technology based on text data analysis for deaf students
Al-Kabi et al. Evaluating social context in Arabic opinion mining.
CN111368082A (en) Sentiment analysis method with domain-adaptive word embedding based on a hierarchical network
CN110377778A (en) Image ranking method and device based on title-image correlation, and electronic equipment
CN109670040B (en) Writing assistance method and device, storage medium and computer equipment
CN113076736A (en) Multidimensional text scoring method and device, computer equipment and storage medium
CN110991193B (en) OpenKiwi-based translation matrix model selection system
CN110852071A (en) Knowledge point detection method, device, equipment and readable storage medium
CN114817541A (en) Rumor detection method and device based on dual-emotion perception
CN114722832A (en) Abstract extraction method, device, equipment and storage medium
CN113836894A (en) Multidimensional English composition scoring method and device and readable storage medium
CN113012685B (en) Audio recognition method and device, electronic equipment and storage medium
CN112559711A (en) Synonymous text prompting method and device and electronic equipment
CN113705207A (en) Grammar error recognition method and device
CN112883713A (en) Evaluation object extraction method and device based on convolutional neural network
CN111813941A (en) Text classification method, device, equipment and medium combining RPA and AI

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination