
CN110598214A - Intention recognition result error correction method - Google Patents


Info

Publication number
CN110598214A
Authority
CN
China
Prior art keywords
result
intention
dictionary
error correction
recognition result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910853882.9A
Other languages
Chinese (zh)
Inventor
贾川江
周杰
李足红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Changhong Electric Co Ltd
Original Assignee
Sichuan Changhong Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Changhong Electric Co Ltd filed Critical Sichuan Changhong Electric Co Ltd
Priority to CN201910853882.9A
Publication of CN110598214A
Current legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval of audio data
    • G06F16/63 Querying
    • G06F16/635 Filtering based on additional data, e.g. user or group profiles
    • G06F16/638 Presentation of query results
    • G06F16/70 Information retrieval of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles
    • G06F16/738 Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Character Discrimination (AREA)

Abstract

The invention discloses an intention recognition result error correction method, which is used for correcting the recognition result of an intention recognition module and comprises the following steps: A. defining a custom dictionary; B. defining error correction rules; C. performing word segmentation and entity extraction on the information input by the user; D. matching the word segmentation results against the custom dictionary; E. judging whether the recognition result of the intention recognition module is wrong by combining the matching result with the error correction rules; if it is wrong, replacing the wrong result with the correct recognition result and outputting it; otherwise, directly outputting the recognition result of the intention recognition module. By designing error correction rules combined with entity extraction, the intention recognition result error correction method of the invention corrects intentions that the model recognizes incorrectly, thereby improving the model recognition rate.

Description

Intention recognition result error correction method
Technical Field
The invention relates to the technical field of intention classification and recognition in natural language processing, and in particular to an intention recognition result error correction method.
Background
In recent years, with the rapid development of artificial intelligence, AI voice technology has been increasingly pursued by the smart television industry; a number of television manufacturers have released new television products with voice interaction functions, and voice interaction has become one of the important factors attracting consumers to smart televisions. Natural language processing is a sub-field of artificial intelligence and one of the core elements of smart television voice interaction.
Intention recognition is a very important application scenario in the field of speech recognition. For example, when a user says "what's the weather like today", an intention classification model can accurately determine that the user's intention is to query the weather, achieving an intelligent "conversation" between the television and the user.
Once the recognition accuracy of an intention classification model reaches a certain level, it is often difficult to improve further, even by 0.01%. Precisely because some user intentions cannot be recognized by the model, or are recognized incorrectly, intention error correction can supplement the model's recognition accuracy to a certain extent, correct the intentions that cannot be recognized or are recognized incorrectly, and improve recognition accuracy and user experience.
Disclosure of Invention
The invention aims to overcome the above-mentioned deficiencies of the prior art and provides an intention recognition result error correction method, which corrects intentions that the model recognizes incorrectly by designing error correction rules combined with entity extraction, thereby improving the model recognition rate.
To achieve this technical effect, the invention adopts the following technical solution:
An intention recognition result error correction method, used for correcting the recognition result of an intention recognition module, comprises the following steps:
A. defining a custom dictionary;
B. defining error correction rules;
C. performing word segmentation and entity extraction on the information input by the user;
D. matching the word segmentation results against the custom dictionary;
E. judging whether the recognition result of the intention recognition module is wrong by combining the matching result with the error correction rules; if it is wrong, replacing the wrong result with the correct recognition result and outputting it; otherwise, directly outputting the recognition result of the intention recognition module.
Further, the dictionary in step A includes dictionaries for a plurality of domains, and each domain dictionary records the information that serves as the recognition result for that domain.
Further, different dictionaries have different priorities.
Further, the dictionary at least comprises a video domain dictionary and a music domain dictionary, the video domain dictionary at least comprises action names and video names, and the music domain dictionary at least comprises action names and song names.
Further, the error correction rules at least include: action + video name and action + song name, where action + video name belongs to the video domain and action + song name belongs to the music domain.
Further, when performing word segmentation and entity extraction on the information input by the user in step C, the input is segmented into a verb and a noun, and these two words are extracted.
Further, step D specifically performs content matching on the extracted words in each dictionary and obtains the domain to which the extracted words belong according to the priorities of the dictionaries.
Further, step E specifically combines the extracted words according to the definition of the error correction rules, compares the domain to which they belong with the recognition result of the intention recognition module, and judges whether that recognition result is correct; if it is wrong, the wrong result is replaced with the correct recognition result and output; otherwise, the recognition result of the intention recognition module is output directly.
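As a concrete illustration of steps A to E, the following minimal Python sketch shows one way the custom dictionaries, the error correction rules, and the comparison against the module's result could fit together; the data structures, function names, and sample entries (DOMAIN_DICTS, correct_intent, "The Legend of XX", and so on) are illustrative assumptions and not part of the claimed method.
```python
# Illustrative sketch of steps A-E; names and data are hypothetical examples,
# not taken from the patent text.

# Step A: custom dictionaries for several domains, listed in priority order.
DOMAIN_DICTS = [
    ("video", {"action": {"watch", "play", "want to watch"},
               "noun": {"The Legend of XX"}}),
    ("music", {"action": {"listen to", "play"},
               "noun": {"XX Song"}}),
]

# Step B: error correction rule, an action plus a noun found in a domain
# dictionary means the input belongs to that domain.
def rule_domain(verb, noun):
    for domain, entries in DOMAIN_DICTS:          # higher-priority domains first
        if verb in entries["action"] and noun in entries["noun"]:
            return domain
    return None

# Steps C-E: the input is assumed to be already split into verb and noun; match
# it against the dictionaries and override the module's result only on disagreement.
def correct_intent(verb, noun, module_result):
    matched = rule_domain(verb, noun)             # steps C-D
    if matched is not None and matched != module_result:
        return matched                            # step E: replace the wrong result
    return module_result                          # otherwise keep the module's result

print(correct_intent("play", "The Legend of XX", "music"))  # -> 'video'
```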
Compared with the prior art, the invention has the following beneficial effects:
The intention recognition result error correction method provided by the invention corrects intentions that the model recognizes incorrectly by designing error correction rules combined with entity extraction, thereby improving the model recognition rate.
Drawings
FIG. 1 is a flow chart of an error correction method for an intention recognition result according to the present invention.
Detailed Description
The invention will be further elucidated and described with reference to the embodiments described hereinafter.
Embodiment 1:
As shown in FIG. 1, an intention recognition result error correction method, used for correcting the recognition result of an intention recognition module, comprises the following steps:
Step 1: define a custom dictionary.
The dictionary specifically comprises dictionaries for a plurality of domains; each domain dictionary records the information that serves as the recognition result for that domain, and the dictionaries can be given different priorities depending on the application domain.
If the method is applied to the television field, the dictionary at least includes a video domain dictionary and a music domain dictionary, and in the television field the priority of the video domain dictionary can be set higher than that of the music domain dictionary. The video domain dictionary at least contains action names (action) and video names (video), and the music domain dictionary at least contains action names (action) and song names (song). Specifically, an action name in the video domain dictionary generally denotes a word for a video-related action, such as "watch" or "play"; a video name in the video domain dictionary generally denotes the title of a film or video for which playable resources exist, such as "The Legend of XX" or "The Tale of XX"; an action name in the music domain dictionary generally denotes a word for a music-related action, such as "listen to" or "play"; and a song name in the music domain dictionary generally denotes the name of a piece of music for which playable resources exist, such as "XX Song".
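For illustration, one possible in-memory layout of such domain dictionaries is sketched below in Python; the field names, priorities, and sample entries are assumptions chosen for this example only.
```python
# Possible layout of the custom per-domain dictionaries (illustrative only;
# the field names, priorities, and sample entries are assumptions).
CUSTOM_DICTS = {
    "video": {
        "priority": 1,                                    # highest priority in the TV scenario
        "action": {"watch", "play", "want to watch"},     # video-related action words
        "video": {"The Legend of XX", "The Tale of XX"},  # video titles with playable resources
    },
    "music": {
        "priority": 2,
        "action": {"listen to", "play"},                  # music-related action words
        "song": {"XX Song"},                              # song names with playable resources
    },
}
```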
Step 2: define the error correction rules.
The error correction rules at least include: action + video name and action + song name, where action + video name belongs to the video domain and action + song name belongs to the music domain.
Typical expression 1: "I want to watch XX Jianghu"; the corresponding action is "want to watch" and the video name is "XX Jianghu".
Typical expression 2: "Play Transform XX"; the corresponding action is "play" and the video name is "Transform XX".
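A minimal sketch of how such rules could be encoded and applied is shown below; the tuple-based representation and the slot names ("action", "video", "song") are assumptions, not a format prescribed by the patent.
```python
# Sketch of the two error correction rules (the encoding is an assumption).
RULES = [
    (("action", "video"), "video"),   # action + video name -> video domain
    (("action", "song"), "music"),    # action + song name  -> music domain
]

def apply_rules(slots):
    """slots: the set of slot types found in the user input, e.g. {"action", "video"}."""
    for required, domain in RULES:
        if set(required) <= slots:    # all required slot types are present
            return domain
    return None                       # no rule applies

print(apply_rules({"action", "video"}))  # "I want to watch XX Jianghu" -> 'video'
print(apply_rules({"action", "song"}))   # "Play XX Song"               -> 'music'
```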
Step 3: perform word segmentation and entity extraction on the information input by the user.
Specifically, when the user input is segmented and entities are extracted, the input is divided into a verb and a noun, and these two words are extracted.
For example, if the user says "Play The Legend of XX", the word segmentation result is two words ("play" and "The Legend of XX"), from which two entities are obtained.
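The toy splitter below illustrates this verb/noun extraction on the example utterance; it is only a stand-in built on a fixed list of assumed action words, whereas a real system would use a proper word segmenter and entity extractor.
```python
# Toy verb/noun split for illustration; ACTION_WORDS is an assumed vocabulary and
# this stand-in does not replace a real word segmenter or entity extractor.
ACTION_WORDS = {"play", "watch", "want to watch", "listen to"}

def segment(text):
    for action in sorted(ACTION_WORDS, key=len, reverse=True):  # try longest actions first
        if text.lower().startswith(action):
            noun = text[len(action):].strip()
            return action, noun        # two words -> two entities (verb, noun)
    return None, text.strip()

print(segment("Play The Legend of XX"))  # ('play', 'The Legend of XX')
```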
Step 4: match the word segmentation results against the custom dictionary.
Specifically, content matching is performed on the extracted words in each dictionary, and the domain to which the extracted words belong is obtained according to the priorities of the dictionaries.
For example, matching the above word segmentation results against the custom dictionary finds the corresponding content in the video domain dictionary:
"play" matches an action and "The Legend of XX" matches a video name, which means the information input by the user belongs to the video domain.
If the same noun matches content in several domain dictionaries, the matching result from the dictionary with the highest priority is used as the final matching result.
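The following sketch illustrates this priority-based matching; the dictionary contents deliberately place the same title in both domains to show the tie-break, and all names and priorities are assumptions.
```python
# Sketch of step 4: look the extracted noun up in every domain dictionary and
# resolve ties by dictionary priority (data and priorities are assumptions).
CUSTOM_DICTS = {
    "video": {"priority": 1, "nouns": {"The Legend of XX"}},
    "music": {"priority": 2, "nouns": {"The Legend of XX", "XX Song"}},  # same title on purpose
}

def match_domain(noun):
    hits = [d for d, entry in CUSTOM_DICTS.items() if noun in entry["nouns"]]
    if not hits:
        return None
    # when several dictionaries contain the noun, the highest-priority one wins
    return min(hits, key=lambda d: CUSTOM_DICTS[d]["priority"])

print(match_domain("The Legend of XX"))  # 'video': the video dictionary outranks the music one
```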
Step 5: judge whether the recognition result of the intention recognition module is wrong by combining the matching result with the error correction rules; if it is wrong, replace the wrong recognition result with the correct recognition result and output it; otherwise, directly output the recognition result of the intention recognition module.
Specifically, the extracted words are combined according to the definition of the error correction rules, the domain to which they belong is compared with the recognition result of the intention recognition module, and it is judged whether that recognition result is correct; if it is wrong, the wrong result is replaced with the correct recognition result and output; otherwise, the recognition result of the intention recognition module is output directly.
In this embodiment, if the intention recognition module classifies the user input "Play The Legend of XX" as belonging to the music domain, the recognition result is wrong; it is corrected to the video domain and the corrected result is output, completing the error correction of the recognition result.
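The decision in this step can be sketched as the small comparison below, using the example just given; the function name and return convention are assumptions made for illustration.
```python
# Sketch of step 5: override the module's result only when the rule-derived
# domain disagrees with it (function name and convention are assumptions).
def finalize(dict_domain, module_domain):
    if dict_domain is not None and dict_domain != module_domain:
        return dict_domain            # module result judged wrong -> replaced
    return module_domain              # module result kept unchanged

print(finalize("video", "music"))     # "Play The Legend of XX": music is corrected to video
print(finalize(None, "weather"))      # no dictionary match -> keep the module's result
```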
It will be understood that the above embodiments are merely exemplary embodiments taken to illustrate the principles of the present invention, which is not limited thereto. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit and substance of the invention, and these modifications and improvements are also considered to be within the scope of the invention.

Claims (8)

1. An intention recognition result error correction method, used for correcting the recognition result of an intention recognition module, characterized by comprising the following steps:
A. defining a custom dictionary;
B. defining error correction rules;
C. performing word segmentation and entity extraction on the information input by the user;
D. matching the word segmentation results against the custom dictionary;
E. judging whether the recognition result of the intention recognition module is wrong by combining the matching result with the error correction rules; if it is wrong, replacing the wrong result with the correct recognition result and outputting it; otherwise, directly outputting the recognition result of the intention recognition module.
2. The intention recognition result error correction method according to claim 1, wherein the dictionary in step A includes dictionaries for a plurality of domains, and each domain dictionary records the information that serves as the recognition result for that domain.
3. The method of claim 2, wherein different dictionaries have different priorities.
4. The method as claimed in claim 3, wherein the dictionary comprises at least a video domain dictionary and a music domain dictionary, the video domain dictionary comprises at least an action name and a video name, and the music domain dictionary comprises at least an action name and a song name.
5. The intention recognition result error correction method according to claim 4, wherein the error correction rules at least include: action + video name and action + song name, where action + video name belongs to the video domain and action + song name belongs to the music domain.
6. The intention recognition result error correction method according to claim 5, wherein, when performing word segmentation and entity extraction on the information input by the user in step C, the input is segmented into a verb and a noun, and these two words are extracted.
7. The method as claimed in claim 6, wherein the step D is to match the contents of the extracted words in each dictionary and obtain the domain to which the extracted words belong according to the priorities of the dictionaries.
8. The intention recognition result error correction method according to claim 7, wherein step E combines the extracted words according to the definition of the error correction rules, compares the domain to which they belong with the recognition result of the intention recognition module, and judges whether that recognition result is correct; if it is wrong, the wrong result is replaced with the correct recognition result and output; otherwise, the recognition result of the intention recognition module is output directly.
CN201910853882.9A 2019-09-10 2019-09-10 Intention recognition result error correction method Pending CN110598214A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910853882.9A CN110598214A (en) 2019-09-10 2019-09-10 Intention recognition result error correction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910853882.9A CN110598214A (en) 2019-09-10 2019-09-10 Intention recognition result error correction method

Publications (1)

Publication Number Publication Date
CN110598214A true CN110598214A (en) 2019-12-20

Family

ID=68858557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910853882.9A Pending CN110598214A (en) 2019-09-10 2019-09-10 Intention recognition result error correction method

Country Status (1)

Country Link
CN (1) CN110598214A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136196A (en) * 2008-04-18 2013-06-05 上海触乐信息科技有限公司 Methods used for inputting text into electronic device and correcting error
CN105869634A (en) * 2016-03-31 2016-08-17 重庆大学 Field-based method and system for feeding back text error correction after speech recognition
CN106599278A (en) * 2016-12-23 2017-04-26 北京奇虎科技有限公司 Identification method and method of application search intention
CN107045496A (en) * 2017-04-19 2017-08-15 畅捷通信息技术股份有限公司 The error correction method and error correction device of text after speech recognition
CN107622054A (en) * 2017-09-26 2018-01-23 科大讯飞股份有限公司 The error correction method and device of text data
CN109657229A (en) * 2018-10-31 2019-04-19 北京奇艺世纪科技有限公司 A kind of intention assessment model generating method, intension recognizing method and device
CN109508376A (en) * 2018-11-23 2019-03-22 四川长虹电器股份有限公司 It can online the error correction intension recognizing method and device that update

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SZZACK: "NLP进化史系列之意图识别" (Intent Recognition, NLP Evolution History series), BLOG.CSDN.NET/ZENGNLP/ARTICLE/DETAILS/94657099 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115129841A (en) * 2022-06-27 2022-09-30 深圳集智数字科技有限公司 Intention identification method and device
CN115129841B (en) * 2022-06-27 2024-10-18 深圳须弥云图空间科技有限公司 Intention recognition method and device
CN116136957A (en) * 2023-04-18 2023-05-19 之江实验室 Text error correction method, device and medium based on intention consistency
CN116136957B (en) * 2023-04-18 2023-07-07 之江实验室 Text error correction method, device and medium based on intention consistency

Similar Documents

Publication Publication Date Title
US11610590B2 (en) ASR training and adaptation
US20180143956A1 (en) Real-time caption correction by audience
EP3005347A1 (en) Processing of audio data
JP4109185B2 (en) Video scene section information extraction method, video scene section information extraction device, video scene section information extraction program, and recording medium recording the program
CN104899190A (en) Generation method and device for word segmentation dictionary and word segmentation processing method and device
US20110093263A1 (en) Automated Video Captioning
CN106816151B (en) Subtitle alignment method and device
CN111539199B (en) Text error correction method, device, terminal and storage medium
CN110598214A (en) Intention recognition result error correction method
CN111681678A (en) Method, system, device and storage medium for automatically generating sound effect and matching video
Ohishi et al. Conceptbeam: Concept driven target speech extraction
WO2022143349A1 (en) Method and device for determining user intent
CN115393765A (en) Video subtitle translation method, device and equipment
Srinivasan et al. Analyzing utility of visual context in multimodal speech recognition under noisy conditions
CN114461366A (en) Multi-task model training method, processing method, electronic device and storage medium
CN106528715A (en) Audio content checking method and device
CN116074574A (en) Video processing method, device, equipment and storage medium
CN109192197A (en) Big data speech recognition system Internet-based
CN104464731A (en) Data collection device, method, voice talking device and method
CN114694629B (en) Voice data amplification method and system for voice synthesis
CN111681679B (en) Video object sound effect searching and matching method, system, device and readable storage medium
CN115862631A (en) Subtitle generating method and device, electronic equipment and storage medium
Virkar et al. Speaker diarization of scripted audiovisual content
CN110428668A (en) A kind of data extraction method, device, computer system and readable storage medium storing program for executing
KR102311947B1 (en) Mehtod and apparatus for generating preview of learning process

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191220)