CN115168534A - Intelligent retrieval method and device - Google Patents
- Publication number
- CN115168534A (application CN202210618683.1A)
- Authority
- CN
- China
- Prior art keywords
- text
- document
- audio
- information
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
- G06F16/334—Query execution
- G06F16/3344—Query execution using natural language analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computational Linguistics (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Human Computer Interaction (AREA)
- Mathematical Physics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiment of the disclosure discloses an intelligent retrieval method and apparatus. The method comprises: acquiring document information, where the document information comprises a text document and an audio document/video document containing voice information, and the text document and the audio document/video document have associated content; performing speech recognition on the voice information to obtain an audio text, and then correcting the audio text using the text content of the text document and/or the video document; establishing a position mapping relationship between the corrected audio text and the document information, where the position mapping relationship indicates the position, within the document information, of any character segment in the audio text; and, if text information to be retrieved is received, locating the text information to be retrieved to a target position in the document information based on the position mapping relationship. A short piece of text can thus be mapped to the corresponding position in an audio/video or text document.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to an intelligent retrieval method and apparatus.
Background
In classroom learning, a teacher usually distributes courseware to students as a PDF or PPT, and students may record audio or video of the teacher's lecture while studying.
In the related art, students usually cannot jump to a position in the audio or video recording based on text information during the learning process.
Disclosure of Invention
The main purpose of the present disclosure is to provide an intelligent retrieval method and apparatus.
In order to achieve the above object, according to a first aspect of the present disclosure, there is provided an intelligent retrieval method, comprising: acquiring document information, where the document information comprises a text document and an audio document/video document containing voice information, and the text document and the audio document/video document have associated content; performing speech recognition on the voice information to obtain an audio text, and then correcting the audio text using the text content of the text document and/or the video document; establishing a position mapping relationship between the corrected audio text and the document information, where the position mapping relationship indicates the position, within the document information, of any character segment in the audio text; and, if text information to be retrieved is received, locating the text information to be retrieved to a target position in the document information based on the position mapping relationship.
Optionally, the method further comprises: performing relevance grouping on the documents in the database according to a preset rule to obtain n document groups, where the text documents and the audio documents/video documents in each group have related content; and, after the corrected audio text is obtained, storing the corrected audio text into the corresponding document group.
Optionally, if text information to be retrieved is received, a document group matching the text information to be retrieved is determined based on that text information; and, within the document group, the text information to be retrieved is located to a target position of the document information based on the position mapping relationship.
Optionally, the establishing of a position mapping relationship between the corrected audio text and the document information includes: establishing a first position mapping relationship between the corrected audio text and the audio document/video document, where the first position mapping relationship indicates the position, in the audio document/video document, of any character segment in the audio text, and indicates the position, in the audio text, of any speech segment in the audio/video document.
Optionally, the establishing a position mapping relationship between the corrected audio text and the document information includes: and establishing a second position mapping relation between the corrected audio text and the text document, wherein the second position mapping relation indicates the position of any segment of characters in the audio text in the text document.
Optionally, the method further comprises: when the language of the received text to be retrieved is inconsistent with the language of the document information, translating the text to be retrieved; locating the text information to be retrieved to a target position of the document information through the translated text information; and/or broadcasting the content at the target position by voice.
According to a second aspect of the present disclosure, there is provided an intelligent retrieval apparatus, including: an acquisition unit configured to acquire document information, where the document information includes a text document and an audio document/video document containing voice information, the text document and the audio document/video document having associated content; a correction unit configured to correct the audio text using text content in the text document and/or the video document after speech recognition is performed on the voice information to obtain the audio text; a mapping unit configured to establish a position mapping relationship between the corrected audio text and the document information, where the position mapping relationship indicates the position of any character segment in the audio text within the document information; and a retrieval unit configured to, if text information to be retrieved is received, locate the text information to be retrieved to a target position of the document information based on the position mapping relationship.
Optionally, the apparatus further comprises: the grouping unit is configured to perform relevance grouping on the documents in the database through a preset rule to obtain n document groups, wherein the text documents and the audio documents/video documents in each group have relevant content; and the storage unit is configured to store the corrected audio text into the corresponding document group after the corrected audio text is obtained.
According to a third aspect of the present disclosure, a computer-readable storage medium is provided, which stores computer instructions for causing a computer to execute the intelligent retrieval method according to any one of the implementation manners of the first aspect.
According to a fourth aspect of the present disclosure, there is provided an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor to cause the at least one processor to perform the intelligent retrieval method of any one of the implementations of the first aspect.
In the intelligent retrieval method and apparatus of the embodiments of the disclosure, document information is acquired, the document information comprising a text document and an audio document/video document containing voice information, where the text document and the audio document/video document have associated content; speech recognition is performed on the voice information to obtain an audio text, which is then corrected using the text content of the text document and/or the video document; a position mapping relationship is established between the corrected audio text and the document information, indicating the position, within the document information, of any character segment in the audio text; and, if text information to be retrieved is received, it is located to a target position in the document information based on the position mapping relationship. A short piece of text can thus be mapped to the corresponding position in an audio/video or text document.
Drawings
To describe the technical solutions in the embodiments of the present disclosure or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present disclosure; persons of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an intelligent retrieval method according to an embodiment of the present disclosure;
FIGS. 2-6 are application schematic diagrams of the intelligent retrieval method according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those skilled in the art, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only some embodiments of the present disclosure, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above drawings are used to distinguish similar objects and are not necessarily used to describe a particular sequence or chronological order. It should be understood that data so used may be interchanged where appropriate, so that the embodiments of the present disclosure described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, so that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, system, article, or apparatus.
In the present disclosure, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present disclosure and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this disclosure will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present disclosure can be understood as a specific case by a person of ordinary skill in the art.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
According to an embodiment of the present disclosure, an intelligent retrieval method is provided, which includes the following steps 101 to 104:
step 101: acquiring document information, wherein the document information comprises a text document and an audio document/video document containing voice information; the text document, audio document/video document have associated content.
In this embodiment, the text document may be a PDF or Word file. Courseware PDFs and PPTs usually contain few characters but include visual content such as figures and formulas. The audio/video document "having associated content" means that it records a lecture delivered around the same PDF or Word material. For example, if the lesson topic is "addition of numbers," both the text document and the audio/video document revolve around that explanation.
Step 102: and after voice recognition is carried out on the voice information to obtain an audio text, correcting the audio text by using text contents in the text document and/or the video document.
In this embodiment, regardless of whether the lecture is captured in audio form or video form, the speech can be recognized through speech recognition technology. Because the recognition accuracy cannot reach 100%, the recognized audio text needs to be corrected.
During correction, content can be extracted from the text document and, if a video exists, from the video document via OCR. The characters in the audio text are then compared against the extracted content and corrected, including professional nouns. Correction could instead be performed through a web search, but that approach only suits relatively generic text and has low accuracy; correcting against the PDF, PPT, and video content that corresponds to the speech is clearly more relevant and more accurate.
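As a minimal sketch of this correction step — assuming a simple word-level fuzzy match against terms extracted from the courseware, with illustrative function names and an arbitrary similarity cutoff rather than the patent's actual algorithm:

```python
import difflib
import re

def build_vocabulary(document_texts):
    """Collect candidate terms from extracted courseware text
    (pdf/word/ppt content, or OCR output from video frames)."""
    vocab = set()
    for text in document_texts:
        vocab.update(re.findall(r"[A-Za-z][A-Za-z\-]+", text))
    return vocab

def correct_audio_text(audio_text, vocab, cutoff=0.8):
    """Replace each recognized word with the closest courseware term
    when the match is strong enough; otherwise keep the ASR output.
    Professional nouns misheard by ASR are the main target."""
    lower_to_term = {t.lower(): t for t in vocab}
    corrected = []
    for word in audio_text.split():
        match = difflib.get_close_matches(
            word.lower(), list(lower_to_term), n=1, cutoff=cutoff)
        corrected.append(lower_to_term[match[0]] if match else word)
    return " ".join(corrected)
```

For example, an ASR output "the forier transform," corrected against a slide containing "Fourier transform," yields "the Fourier transform."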
Referring to FIG. 2, which shows a schematic diagram of the above correction process.
Step 103: and establishing a position mapping relation between the corrected audio text and the document information, wherein the position mapping relation indicates the position of any segment of characters in the audio text in the document information.
In this embodiment, the positions of characters or character segments in the corrected audio text can be automatically aligned with positions in the document information; when the alignment is finished, the position mapping relationship is obtained.
As an optional implementation of this embodiment, establishing a position mapping relationship between the corrected audio text and the document information includes: establishing a first position mapping relationship between the corrected audio text and the audio/video document, where the first position mapping relationship indicates the position, in the audio/video document, of any character segment in the audio text.
In this alternative implementation, the position of a text segment in the audio text can be mapped to the corresponding position in the audio/video document. Referring to FIG. 4, taking audio position calibration as an example: by calibrating the mapping between the corrected audio text and the audio, one can quickly jump from any position in the audio to the corresponding text, or from any position in the audio text to the corresponding position in the audio.
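The first position mapping can be sketched as a sorted list of recognition segments, each pairing an audio timestamp with a character offset in the transcript; the `Segment` structure and binary-search lookups below are illustrative assumptions, not the patent's data model:

```python
import bisect
from dataclasses import dataclass

@dataclass
class Segment:
    start_sec: float   # segment start time within the audio/video document
    char_offset: int   # offset of the segment's first character in the audio text

class AudioTextMap:
    """Bidirectional position mapping between an audio track and its transcript."""

    def __init__(self, segments):
        self.segments = sorted(segments, key=lambda s: s.start_sec)
        self._times = [s.start_sec for s in self.segments]
        self._offsets = [s.char_offset for s in self.segments]

    def time_to_offset(self, t):
        """Audio position -> position of the corresponding text."""
        i = max(bisect.bisect_right(self._times, t) - 1, 0)
        return self.segments[i].char_offset

    def offset_to_time(self, off):
        """Audio-text position -> corresponding playback position."""
        i = max(bisect.bisect_right(self._offsets, off) - 1, 0)
        return self.segments[i].start_sec
```

Lookups in both directions are O(log n), which keeps the jump between audio and text instantaneous even for long lectures.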
As an optional implementation manner of this embodiment, the establishing a position mapping relationship between the corrected audio text and the document information includes: and establishing a second position mapping relation between the corrected audio text and the text document, wherein the second position mapping relation indicates the position of any segment of characters in the audio text in the text document.
In this alternative implementation, the position of a text segment in the audio text can be uniquely mapped to the corresponding position in the text document (as distinguished from existing positioning by text search, this mapping is unique and synchronized with the audio/video content). Referring to FIG. 5, the content of the text document (including characters and images) may be extracted, and the extracted content can then be matched position-by-position to the content of the audio text through a character-sequence comparison and search-matching algorithm. It will be appreciated that the text document and the audio text have the association described above (i.e., they have associated content). After the position mapping relationship between the audio text and the text document is established, a subsequent retrieval can find the mapped content in the audio text from the input text to be retrieved and simultaneously determine the mapped content at the corresponding position in the text document, achieving synchronized positioning.
Taking audio position calibration as an example again: by calibrating the position mapping relationship between the corrected audio text and the audio, the characters in the text that correspond to a certain position in the audio can be quickly located; conversely, from a certain position in the audio text, the corresponding position in the audio can be quickly located.
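One possible "character-sequence comparison and search-matching algorithm" for the second mapping is stdlib `difflib.SequenceMatcher`; the triple-based representation below is an assumption for illustration, not the patent's specified method:

```python
import difflib

def build_position_mapping(audio_text, document_text, min_block=4):
    """Align the corrected audio text with text extracted from the
    document and return (audio_offset, document_offset, length) triples
    for sufficiently long matching character runs."""
    sm = difflib.SequenceMatcher(None, audio_text, document_text, autojunk=False)
    return [(m.a, m.b, m.size)
            for m in sm.get_matching_blocks()
            if m.size >= min_block]

def audio_to_document(mapping, audio_offset):
    """Translate a position in the audio text to a position in the document."""
    for a, b, size in mapping:
        if a <= audio_offset < a + size:
            return b + (audio_offset - a)
    return None  # position falls outside any matched run
```

Because lecture speech paraphrases the slides rather than reading them verbatim, only the long matching runs are kept; unmatched stretches simply have no document anchor.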
Step 104: and if the text information to be retrieved is received, based on the position mapping relation, positioning the text information to be retrieved to the target position of the document information.
In this embodiment, after the position mapping relationship is established, a user may input text to be retrieved (which may be a short text segment) on an interactive interface. Upon receiving it, a system applying the method of this embodiment performs a text search over the audio texts in the database, determines the matching audio text, and then, from the position of the matched characters in the audio text, determines the position in the audio, video, or text document.
In this embodiment, by entering text a user can quickly retrieve the corresponding position in the audio text and thereby jump to the corresponding playing position in the pdf/word/ppt, audio, or video document, which can then be displayed immediately for the user to watch or listen to. The figures and formulas of the PDF, the speech of the audio, and the characters of the speech text thus jointly assist learning.
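The retrieval step can be sketched as a scan over the stored audio texts followed by a position lookup; the `corpus` structure below (each audio text paired with an offset-translation callable) is a hypothetical simplification of the database described in this embodiment:

```python
def locate_query(query, corpus):
    """corpus: list of (audio_text, offset_to_target) pairs, where
    offset_to_target maps a character offset in the audio text to a
    target position in the associated pdf/word/ppt, audio, or video
    document. Returns (corpus index, target position) for the first
    hit, or None when the query matches no stored audio text."""
    for idx, (audio_text, offset_to_target) in enumerate(corpus):
        off = audio_text.find(query)
        if off != -1:
            return idx, offset_to_target(off)
    return None
```

In a real system the inner `find` would be replaced by a full-text index, but the lookup through the position mapping stays the same.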
As an optional implementation manner of this embodiment, the method further includes: carrying out relevance grouping on the documents in the database through a preset rule to obtain n document groups, wherein the text documents and the audio documents/video documents in each group have relevant contents; and after the corrected audio text is obtained, storing the corrected audio text into a corresponding document group.
In this alternative implementation, if there are a large number of documents, they can be archived in groups so that associated documents fall into the same group. Referring to FIG. 3: the audio text is obtained through speech recognition and corrected; the text document (pdf/word/ppt) content is extracted; the text of the video content is obtained through OCR of the video courseware file; a text matching algorithm then produces a relevance-based archive. Grouped archiving improves retrieval efficiency.
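The patent does not specify the "preset rule" for relevance grouping; as one illustrative stand-in, documents could be grouped greedily by token-set (Jaccard) similarity of their extracted texts:

```python
def tokenize(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Similarity of two token sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_by_relevance(doc_texts, threshold=0.3):
    """Greedy relevance grouping: each document joins the first existing
    group whose representative is similar enough, otherwise it starts a
    new group. Returns lists of document indices."""
    groups = []  # (representative token set, member indices)
    for i, text in enumerate(doc_texts):
        toks = tokenize(text)
        for rep, members in groups:
            if jaccard(toks, rep) >= threshold:
                members.append(i)
                break
        else:
            groups.append((toks, [i]))
    return [members for _, members in groups]
```

The threshold is a tunable assumption; courseware, its recording transcript, and its OCR text share enough vocabulary that even this crude rule tends to co-locate them.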
As an optional implementation manner of this embodiment, if text information to be retrieved is received, a document group matched with the text information to be retrieved is determined based on the text information to be retrieved; and in the document group, positioning the text information to be retrieved to a target position of the document information based on the position mapping relation.
In this optional implementation, after the text information to be retrieved is input, the audio text matching it can be determined in each document group. If the text to be retrieved is short, several audio texts may match; the matches can then be sorted by matching degree and displayed through an interface so that the user chooses which document group to search. Once the best-matching document group is determined, the positions of the text to be retrieved in the audio/video document and in the text document can be determined from its position in the audio text. It will be appreciated that which type of document to locate into is configurable by the user.
As an optional implementation manner of this embodiment, the method further includes: when the language of the received text to be retrieved is inconsistent with the language of the document information, translating the text to be retrieved; positioning the text information to be retrieved to a target position of the document information through the translated text information; and/or performing voice broadcast on the content at the target position.
In this alternative implementation, the functions of translation and TTS (Text to Speech) are combined, so that learning assistance of users of different languages can be compatible.
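A minimal sketch of the language-mismatch check that would trigger translation before the position lookup; the CJK-range heuristic and the two-way labels below are assumptions — a production system would use a real language identifier:

```python
def contains_cjk(text):
    """Rough script check: does the text contain CJK unified ideographs?"""
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

def needs_translation(query, document_language):
    """Return True when the retrieval query must be translated before it
    can be located in the document information. Only distinguishes
    Chinese ('zh') from everything else ('other') for illustration."""
    query_language = "zh" if contains_cjk(query) else "other"
    return query_language != document_language
```

When this check fires, the query would be passed through a translation service before retrieval, and TTS could read out the located content in the user's language.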
Referring to FIG. 6, which shows a schematic structural diagram of the method of this embodiment: the effect of learning assistance is achieved, and audio/video and text-document courseware that contains few words but may contain images can be quickly located from a short piece of text.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than here.
According to an embodiment of the present disclosure, there is also provided a device for implementing the above intelligent retrieval method, the device including: an acquisition unit configured to acquire document information, where the document information includes a text document and an audio document/video document containing voice information, the text document and the audio document/video document having associated content; a correction unit configured to correct the audio text using text content in the text document and/or the video document after speech recognition is performed on the voice information to obtain the audio text; a mapping unit configured to establish a position mapping relationship between the corrected audio text and the document information, the position mapping relationship indicating the position of any character segment in the audio text within the document information; and a retrieval unit configured to, if text information to be retrieved is received, locate the text information to be retrieved to a target position of the document information based on the position mapping relationship.
As an optional implementation manner of this embodiment, the apparatus further includes: the grouping unit is configured to perform relevance grouping on the documents in the database through a preset rule to obtain n document groups, wherein the text documents and the audio documents/video documents in each group have relevant content; and the storage unit is configured to store the corrected audio text into the corresponding document group after the corrected audio text is obtained.
The embodiment of the present disclosure provides an electronic device, as shown in fig. 7, the electronic device includes one or more processors 71 and a memory 72, where one processor 71 is taken as an example in fig. 7.
The electronic device may further include: an input device 73 and an output device 74.
The processor 71, the memory 72, the input device 73 and the output device 74 may be connected by a bus or other means, as exemplified by the bus connection in fig. 7.
The processor 71 may be a Central Processing Unit (CPU). The processor 71 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 72, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the control methods in the embodiments of the present disclosure. The processor 71 executes various functional applications of the server and data processing by running non-transitory software programs, instructions and modules stored in the memory 72, i.e. implements the method of the above-described method embodiment.
The memory 72 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of a processing device operated by the server, and the like. Further, the memory 72 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 72 may optionally include memory located remotely from the processor 71, which may be connected to a network connection device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 73 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the processing device of the server. The output device 74 may include a display device such as a display screen.
One or more modules are stored in the memory 72, which when executed by the one or more processors 71 perform the method shown in FIG. 1.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware. The program can be stored in a computer-readable storage medium and, when executed, can include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory, a hard disk drive (HDD), or a Solid-State Drive (SSD); the storage medium may also comprise a combination of the above kinds of memories.
Although the embodiments of the present disclosure have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the present disclosure, and such modifications and variations fall within the scope defined by the appended claims.
Claims (10)
1. An intelligent retrieval method, comprising:
acquiring document information, wherein the document information comprises a text document and an audio document/video document containing voice information, and the text document and the audio document/video document have associated content;
after performing voice recognition on the voice information to obtain an audio text, correcting the audio text by using the text content in the text document and/or the video document;
establishing a position mapping relation between the corrected audio text and the document information, wherein the position mapping relation indicates the position of any segment of characters in the audio text in the document information;
and if the text information to be retrieved is received, based on the position mapping relation, positioning the text information to be retrieved to the target position of the document information.
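The retrieval flow of claim 1 can be illustrated with a minimal sketch: after the (assumed) recognition and correction steps have produced an audio text, a position mapping is built from each audio-text segment to its offset in the document information, and a query is located through that mapping. All names here (`build_position_mapping`, `locate`) are illustrative, not taken from the patent.

```python
def build_position_mapping(audio_text_segments, document):
    """Map each audio-text segment to its character offset in the document."""
    mapping = {}
    for segment in audio_text_segments:
        pos = document.find(segment)
        if pos != -1:
            mapping[segment] = pos
    return mapping

def locate(query, mapping):
    """Return the document position of the first segment containing the query."""
    for segment, pos in mapping.items():
        if query in segment:
            return pos + segment.index(query)
    return None

document = "intelligent retrieval maps audio text to document positions"
segments = ["intelligent retrieval", "document positions"]
mapping = build_position_mapping(segments, document)
print(locate("retrieval", mapping))  # → 12, the query's offset in the document
```

A real implementation would of course use approximate matching rather than exact substring search; this only shows the mapping-then-locating shape of the claim.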
2. The intelligent retrieval method of claim 1, wherein prior to obtaining document information, the method further comprises:
carrying out relevance grouping on the documents in the database through a preset rule to obtain n document groups, wherein the text documents and the audio documents/video documents in each group have relevant contents;
and after the corrected audio text is obtained, storing the corrected audio text into a corresponding document group.
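The grouping step of claim 2 can be sketched as follows. The preset rule is modeled here as a shared topic key, which is an assumption — the claim leaves the rule open; the corrected audio text is then stored into its matching group.

```python
from collections import defaultdict

def group_documents(documents, rule):
    """Group documents by the key the preset rule assigns to each one."""
    groups = defaultdict(list)
    for doc in documents:
        groups[rule(doc)].append(doc)
    return dict(groups)

def store_corrected_text(groups, key, corrected_text):
    """Store a corrected audio text into its corresponding document group."""
    groups.setdefault(key, []).append({"type": "corrected_audio_text",
                                       "content": corrected_text})

docs = [{"name": "lecture.pdf", "topic": "physics"},
        {"name": "lecture.mp4", "topic": "physics"},
        {"name": "notes.txt", "topic": "history"}]
groups = group_documents(docs, rule=lambda d: d["topic"])
store_corrected_text(groups, "physics", "corrected transcript of lecture.mp4")
print(len(groups["physics"]))  # → 3: two documents plus the stored transcript
```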
3. The intelligent retrieval method according to claim 2, wherein, if text information to be retrieved is received, a document group matching the text information to be retrieved is determined based on the text information to be retrieved;
and in the document group, positioning the text information to be retrieved to a target position of the document information based on the position mapping relation.
4. The intelligent retrieval method of claim 1 wherein establishing a location mapping relationship between the corrected audio text and the document information comprises:
establishing a first position mapping relation between the corrected audio text and the audio document/video document, wherein the first position mapping relation indicates the position of any segment of characters in the audio text in the audio document/video document, and indicates the position of any piece of speech in the audio document/video document in the audio text.
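The bidirectional first mapping of claim 4 can be sketched as an alignment between character spans in the corrected audio text and time spans in the audio/video document. The alignment pairs are assumed to be given (for example, by a speech recognizer's word timestamps); the patent does not fix a format, so this data structure is purely illustrative.

```python
class AudioTextAlignment:
    def __init__(self, pairs):
        # pairs: list of (text_start, text_end, time_start, time_end) spans
        self.pairs = sorted(pairs)

    def text_to_time(self, offset):
        """Position of a character offset in the audio/video timeline."""
        for ts, te, t0, t1 in self.pairs:
            if ts <= offset < te:
                return t0 + (offset - ts) / (te - ts) * (t1 - t0)
        return None

    def time_to_text(self, t):
        """Character offset in the audio text of a playback time."""
        for ts, te, t0, t1 in self.pairs:
            if t0 <= t < t1:
                return ts + int((t - t0) / (t1 - t0) * (te - ts))
        return None

# characters 0-9 span seconds 0-2; characters 10-19 span seconds 2-5
align = AudioTextAlignment([(0, 10, 0.0, 2.0), (10, 20, 2.0, 5.0)])
print(align.text_to_time(5))    # → 1.0 (midway through the first span)
print(align.time_to_text(3.5))  # → 15 (midway through the second span)
```

Linear interpolation inside each span is an assumption chosen for simplicity; any monotone alignment would serve the claim's purpose.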
5. The intelligent retrieval method of claim 1, wherein the establishing of the position mapping relationship between the corrected audio text and the document information comprises:
and establishing a second position mapping relation between the corrected audio text and the text document, wherein the second position mapping relation indicates the position of any segment of characters in the audio text in the text document.
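The second mapping of claim 5 links the corrected audio text back into the text document. Since the recognized text may still differ slightly from the document's wording, approximate matching is useful; `difflib` is used below as an assumed stand-in matcher, not something the patent specifies.

```python
import difflib

def second_position_mapping(audio_segments, text_document):
    """Map each audio-text segment to the closest-matching document position."""
    mapping = {}
    words = text_document.split()
    for segment in audio_segments:
        seg_len = len(segment.split())
        # candidate windows of the same word length as the segment
        candidates = [" ".join(words[i:i + seg_len])
                      for i in range(len(words) - seg_len + 1)]
        best = difflib.get_close_matches(segment, candidates, n=1, cutoff=0.6)
        if best:
            mapping[segment] = text_document.find(best[0])
    return mapping

doc = "the quick brown fox jumps over the lazy dog"
mapping = second_position_mapping(["quick braun fox"], doc)
print(mapping)  # {'quick braun fox': 4} — matched despite the misrecognition
```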
6. The intelligent retrieval method of claim 1, wherein the method further comprises:
when the language of the received text information to be retrieved is inconsistent with the language of the document information, translating the text information to be retrieved;
positioning the text information to be retrieved to a target position of the document information through the translated text information;
and/or performing voice broadcast on the content at the target position.
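The translate-then-locate step of claim 6 can be sketched as below. A toy word table stands in for a real translator (an assumption — the patent does not name one), and the translated query is then located in the document.

```python
TOY_DICTIONARY = {"suche": "search", "dokument": "document"}  # assumed sample

def translate_query(query, table):
    """Translate a query word by word, keeping unknown words unchanged."""
    return " ".join(table.get(word.lower(), word) for word in query.split())

def locate_in_document(query, document):
    """Return the character offset of the (translated) query, or None."""
    pos = document.find(query)
    return pos if pos != -1 else None

doc = "full-text search over a document archive"
translated = translate_query("Suche", TOY_DICTIONARY)
print(translated)                           # → search
print(locate_in_document(translated, doc))  # → 10
```

The optional voice broadcast of the located content would be a separate text-to-speech step and is not sketched here.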
7. An intelligent retrieval device, comprising:
an acquisition unit configured to acquire document information, wherein the document information comprises a text document and an audio document/video document containing voice information, and the text document and the audio document/video document have associated content;
a correction unit configured to, after performing voice recognition on the voice information to obtain an audio text, correct the audio text by using the text content in the text document and/or the video document;
a mapping unit configured to establish a position mapping relation between the corrected audio text and the document information, wherein the position mapping relation indicates the position of any segment of characters in the audio text in the document information;
and a retrieval unit configured to, if text information to be retrieved is received, locate the text information to be retrieved to the target position of the document information based on the position mapping relation.
8. The intelligent retrieval device of claim 7, wherein the device further comprises:
a grouping unit configured to perform relevance grouping on the documents in the database through a preset rule to obtain n document groups, wherein the text documents and the audio documents/video documents in each group have related content;
and a storage unit configured to store the corrected audio text into the corresponding document group after the corrected audio text is obtained.
9. A computer-readable storage medium storing computer instructions for causing a computer to perform the intelligent retrieval method of any one of claims 1-6.
10. An electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores a computer program executable by the at least one processor, the computer program being executable by the at least one processor to cause the at least one processor to perform the intelligent retrieval method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210618683.1A CN115168534A (en) | 2022-06-01 | 2022-06-01 | Intelligent retrieval method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115168534A true CN115168534A (en) | 2022-10-11 |
Family
ID=83482573
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115617957A (en) * | 2022-12-19 | 2023-01-17 | 铭台(北京)科技有限公司 | Intelligent document retrieval method based on big data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||