CN111259181A - Method and equipment for displaying information and providing information - Google Patents
- Publication number
- CN111259181A (application number CN201811468336.5A)
- Authority
- CN
- China
- Legal status
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/167—Audio in a user interface, e.g. using voice commands for navigating, audio feedback
Abstract
The embodiment of the application discloses a method and equipment for displaying and providing information. One embodiment of the method comprises: in response to receiving a comic content acquisition request, displaying the corresponding comic content; and in response to detecting that a user performs a preset operation, displaying identifiers of one or more groups of dubbing files, among at least one group of dubbing files for the text in the comic content, whose evaluation information meets a preset condition. The dubbing users of different groups of dubbing files are different; the identifier of each group of dubbing files comprises either a group identifier corresponding to all the dubbing files of the group or at least one file identifier each corresponding to one dubbing file in the group; and the identifier of each dubbing file is displayed in association with the text corresponding to that dubbing file. This embodiment can present the user with high-quality, personalized dubbing content originating from different dubbing users.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and equipment for displaying and providing information.
Background
With the rapid development of mobile internet technology, comic applications of all kinds have emerged one after another. Voiced comics exist in the prior art. A voiced comic is usually associated with audio that has been read aloud and recorded manually in advance; after a user browsing the comic enables the voiced mode, the pre-recorded audio is played directly.
Disclosure of Invention
The embodiment of the application provides a method and equipment for displaying and providing information.
In a first aspect, an embodiment of the present application provides a method for displaying information, including: in response to receiving a comic content acquisition request, displaying the corresponding comic content; and in response to detecting that a user performs a preset operation, displaying identifiers of one or more groups of dubbing files, among at least one group of dubbing files for the text in the comic content, whose evaluation information meets a preset condition, wherein the dubbing users of different groups of dubbing files are different, the identifier of each group of dubbing files comprises either a group identifier corresponding to all the dubbing files of the group or at least one file identifier each corresponding to one dubbing file in the group, and the identifier of each dubbing file is displayed in association with the text corresponding to that dubbing file.
In a second aspect, an embodiment of the present application provides a method for providing information, including: in response to receiving a file identification information acquisition request from a terminal device, feeding back to the terminal device identification information of one or more groups of dubbing files, among at least one group of dubbing files for the text in the requested comic content, whose evaluation information meets a preset condition; or, in response to receiving both a file identification information acquisition request and an evaluation information acquisition request from the terminal device, feeding back to the terminal device the identification information and evaluation information of at least one group of dubbing files for the text in the requested comic content.
In a third aspect, an embodiment of the present application provides a terminal device, where the terminal device includes: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application provides a service device, where the service device includes: one or more processors; a storage device, on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation manner of the second aspect.
In a fifth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
In a sixth aspect, the present application provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method as described in any implementation manner of the second aspect.
According to the method and device for displaying and providing information, the corresponding comic content is displayed in response to receiving a comic content acquisition request; then, in response to detecting that a user performs a preset operation, identifiers of one or more groups of dubbing files, among at least one group of dubbing files for the text in the comic content, whose evaluation information meets a preset condition are displayed. The dubbing users of different groups of dubbing files are different; the identifier of each group of dubbing files comprises either a group identifier corresponding to all the dubbing files of the group or at least one file identifier each corresponding to one dubbing file in the group; and the identifier of each dubbing file is displayed in association with the text corresponding to that dubbing file. In this way, high-quality, personalized dubbing content from different dubbing users can be displayed for the user.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a method for presenting information in accordance with the present application;
FIG. 3 is a schematic diagram of an application scenario of a method for presenting information according to the present application;
FIG. 4 is a flow diagram of yet another embodiment of a method for presenting information in accordance with the present application;
FIG. 5 is a flow diagram for one embodiment of a method for providing information, in accordance with the present application;
FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to some embodiments of the present application.
Detailed Description
The present application will be described in further detail below with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein merely illustrate the invention and do not limit it. It should also be noted that, for convenience of description, only the portions related to the invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the methods for presenting information and providing information of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 1011, 1012, 1013, a network 102, and a service device 103. The network 102 serves as a medium for providing communication links between the terminal devices 1011, 1012, 1013 and the service device 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 1011, 1012, 1013 to interact with the service device 103 via the network 102 to send or receive messages or the like, for example, to obtain identification information of the requested set or sets of dubbing files from the service device 103, or to obtain identification information and evaluation information of the requested at least one set of dubbing files from the service device 103.
The terminal devices 1011, 1012, 1013 may display the corresponding comic content in response to receiving a comic content acquisition request; then, if it is detected that the user performs a preset operation, they may display identifiers of one or more groups of dubbing files, among at least one group of dubbing files for the text in the comic content, whose evaluation information meets a preset condition.
The terminal devices 1011, 1012, 1013 may be hardware or software. When they are hardware, they may be various electronic devices having a display screen and supporting information interaction, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like. When they are software, they may be installed in the electronic devices listed above and implemented as multiple pieces of software or software modules, or as a single piece of software or software module; no particular limitation is imposed here.
The service device 103 may be a server that provides various services. For example, upon receiving a file identification information acquisition request from the terminal devices 1011, 1012, 1013, it may feed back identification information of one or more groups of dubbing files, among at least one group of dubbing files for the text in the requested comic content, whose evaluation information meets a preset condition. Alternatively, upon receiving both a file identification information acquisition request and an evaluation information acquisition request, it may feed back the identification information and evaluation information of at least one group of dubbing files for the text in the requested comic content.
The service device 103 may be hardware or software. When it is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When it is software, it may be implemented as multiple pieces of software or software modules (for example, to provide distributed services), or as a single piece of software or software module; no particular limitation is imposed here.
It should be noted that the method for presenting information provided in the embodiment of the present application may be performed by the terminal devices 1011, 1012, 1013, and the method for providing information may be performed by the service device 103.
It should be understood that the number of terminal devices, networks, and serving devices in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and serving devices, as desired for an implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a method for presenting information in accordance with the present application is shown. The method for presenting information is generally applied to terminal devices. The method for displaying information comprises the following steps:
In the present embodiment, the execution subject of the method for presenting information (e.g., a terminal device shown in fig. 1) may determine whether a comic content acquisition request has been received. Here, the comic content acquisition request may be a request generated by a user searching for a comic keyword in a comic search box; in general, the comic keyword may include, but is not limited to, a comic name, a comic chapter, a comic author, a comic style, and the like. The comic content acquisition request may also be a request generated by the user clicking a comic link, such as a comic title or a comic image linking to the comic content, or a request generated in another way, for example, by clicking a link such as "watch comics and do tasks".
In this embodiment, if a comic content acquisition request is received, the execution subject may display the corresponding comic content. Since the comic content acquisition request may include a comic keyword (including but not limited to a comic name, chapter, author, style, etc.), the corresponding comic content can be found and displayed through the comic keyword. As an example, if the request includes a comic name and a comic chapter, the execution subject may display the comic content of that chapter of the named comic.
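As a hypothetical sketch of this lookup (the catalog structure and function name below are illustrative, not part of the patent), resolving a request by its comic keywords can be as simple as a keyed lookup:

```python
# Illustrative only: a minimal catalog keyed by (comic name, chapter),
# mirroring the example where the request carries a name and a chapter.
CATALOG = {
    ("comic M3", "chapter 1"): "pages and text of comic M3, chapter 1",
}

def resolve_comic_request(keywords):
    """Look up comic content using the keywords carried in the comic
    content acquisition request; returns None when nothing matches."""
    return CATALOG.get((keywords.get("name"), keywords.get("chapter")))

print(resolve_comic_request({"name": "comic M3", "chapter": "chapter 1"}))
```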
In this embodiment, the execution subject may detect whether the user performs a preset operation. The preset operation may be an operation on a virtual button presented on the screen (e.g., a button for playing a dubbing file), such as a click or a drag; it may also be an operation on a preset area in the displayed comic content (e.g., a certain text area), such as a long press.
In this embodiment, if it is detected that the user performs a preset operation (for example, a click on a virtual button presented on the screen, or a long press on a text in the displayed comic content), the execution subject may display identifiers of one or more groups of dubbing files, among at least one group of dubbing files corresponding to the text in the comic content, whose evaluation information meets a preset condition.
In this embodiment, the evaluation information may be acquired from the service device. The service device may determine the evaluation information of a dubbing file according to other users' evaluations of the dubbing file, the clarity of the speech in the dubbing file, the degree of match between the dubbing file and the corresponding dubbed text, and the like. The evaluation information may be characterized as a score corresponding to the dubbing file, or as textual information evaluating the dubbing file, e.g., "good", "average", or "bad". If the evaluation information is characterized by a score, one or more groups of dubbing files with scores greater than a preset score threshold (e.g., 80 points) may be selected from the at least one group of dubbing files. If the evaluation information is characterized by textual information, one or more groups of dubbing files whose textual information is a preset text (e.g., "good") may be selected from the at least one group of dubbing files.
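The two forms of the preset condition can be sketched as follows; the threshold value, the label set, and the group structure are assumptions taken from the examples above, not a fixed part of the method.

```python
SCORE_THRESHOLD = 80                # preset score threshold from the example
GOOD_LABELS = {"good"}              # preset textual labels treated as passing

def select_groups(groups):
    """Return the groups of dubbing files whose evaluation information
    meets the preset condition. Each group carries an 'evaluation' field
    that is either a numeric score or a textual label, mirroring the two
    representations described in the text."""
    selected = []
    for group in groups:
        evaluation = group["evaluation"]
        if isinstance(evaluation, (int, float)):
            if evaluation > SCORE_THRESHOLD:
                selected.append(group)
        elif evaluation in GOOD_LABELS:
            selected.append(group)
    return selected

groups = [
    {"id": "dubbing-rainsleet", "evaluation": 92},
    {"id": "dubbing-b", "evaluation": 75},
    {"id": "dubbing-c", "evaluation": "good"},
    {"id": "dubbing-d", "evaluation": "bad"},
]
print([g["id"] for g in select_groups(groups)])
# → ['dubbing-rainsleet', 'dubbing-c']
```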
Here, the dubbing users of different groups of dubbing files are different, and the dubbing files within the same group generally come from the same dubbing user. The identifier of each group of dubbing files may include a group identifier corresponding to all dubbing files of the group, or at least one file identifier each corresponding to one dubbing file in the group. It should be noted that the group identifier may be an identifier of the dubbing user of the group of dubbing files, an identifier set by the dubbing user for the group, or an identifier set by default. As an example, the identifier of a group of dubbing files may be the group identifier "dubbing rainsleet", or the group may be characterized by three file identifiers "rainsleet-1", "rainsleet-2" and "rainsleet-3" corresponding respectively to the three dubbing files contained in the group.
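A minimal sketch of the two identifier forms, using the "rainsleet" example above; the dict layout and field names are assumptions for illustration only.

```python
# Hypothetical structure: one group of dubbing files may be identified
# either by a single group identifier or by per-file identifiers.
group = {
    "group_id": "dubbing rainsleet",          # identifier for the whole group
    "files": [                                # or one identifier per file
        {"file_id": "rainsleet-1", "text_id": "dialog-1"},
        {"file_id": "rainsleet-2", "text_id": "voiceover-1"},
        {"file_id": "rainsleet-3", "text_id": "dialog-2"},
    ],
}

def identifiers_to_display(group, use_group_id=True):
    """Return the identifiers to show: either the single group identifier
    or the per-file identifiers, per the two options in the text."""
    if use_group_id:
        return [group["group_id"]]
    return [f["file_id"] for f in group["files"]]

print(identifiers_to_display(group))
# → ['dubbing rainsleet']
```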
Here, the identifier of each dubbing file may be displayed in association with the text to which the dubbing file corresponds. For example, the identifier may be displayed near the text corresponding to the dubbing file, or in any other manner that makes clear to users which text the identifier corresponds to.
It should be noted that the text in the comic content may include, but is not limited to: dialog text between comic characters, voice-over text, and annotation text in the comic content. Dialog text between comic characters is typically displayed in a dialog box. Voice-over text in the comic content may include text describing the inner thoughts of a comic character. Annotation text in the comic content may include explanatory text for terms that appear in the comic dialog.
It should be further noted that, if a long press by the user on a certain text area in the displayed comic content is detected, the execution subject may display the identifiers of one or more groups of dubbing files corresponding to the text in that area; if a long press on a blank area in the displayed comic content is detected, the execution subject may display the identifiers of one or more groups of dubbing files corresponding to every text in the current display interface.
In some optional implementations of this embodiment, the text may include at least one text segment. In some cases, the comic content may include at least one of: at least one dialog text, at least one voice-over text, and at least one annotation text. Each dialog text, voice-over text, or annotation text may be treated as a text segment. The execution subject may detect whether the user performs a dubbing operation for dubbing a text segment, for example, selecting the text segment to be dubbed and then clicking a "dubbing" icon. If such a dubbing operation is detected, the execution subject may record the user's voice to generate the user's dubbing file for the dubbed text segment. Then, the execution subject may send the user's dubbing file and the identification information of the corresponding text segment to the service device in association with each other, so that the service device can evaluate the dubbing file. The identification information of a text segment is unique, and is used to locate the dubbed text segment among multiple text segments. The service device is typically an electronic device that stores text segments and the corresponding dubbing files and evaluates the dubbing files, generally determining the evaluation information according to factors such as other users' evaluations of the dubbing file, the clarity of the speech in it, and the degree of match between the dubbing file and the corresponding dubbed text.
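A minimal sketch of packaging the recorded audio together with the text segment's unique identifier before sending it to the service device; the field names and payload shape are assumptions, since the patent only requires that the file and the segment identifier be sent in association.

```python
def make_dubbing_submission(text_segment_id, audio_bytes, user_id):
    """Associate the user's recorded audio with the unique identifier of
    the dubbed text segment, so the service device can evaluate the
    dubbing file and later locate the segment among many."""
    return {
        "text_segment_id": text_segment_id,  # unique, locates the segment
        "user_id": user_id,                  # identifies the dubbing user
        "audio": audio_bytes,                # the recorded dubbing file
    }

submission = make_dubbing_submission("dialog-1", b"\x00\x01", "user-42")
print(sorted(submission))
# → ['audio', 'text_segment_id', 'user_id']
```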
In some optional implementations of this embodiment, a preset display icon (e.g., a "more" icon or an "…" icon) may be presented in the display interface of the comic content, which can be used to reveal identifiers of dubbing files other than those already displayed. In some cases, if none of the displayed identifiers interests the user, the user may obtain more identifiers through the display icon. The execution subject may detect whether the user triggers the display icon, for example, by clicking or pulling down on it. If such a trigger is detected, the execution subject may display the identifiers of the dubbing files other than those already displayed.
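The "reveal the rest" behavior can be sketched as a set difference; this is an illustrative helper, not an API from the patent.

```python
def expand_identifiers(all_ids, shown_ids):
    """Identifiers revealed when the user triggers the display icon:
    every identifier not already shown, in their original order."""
    shown = set(shown_ids)
    return [i for i in all_ids if i not in shown]

print(expand_identifiers(["dubbing-a", "dubbing-b", "dubbing-c"], ["dubbing-a"]))
# → ['dubbing-b', 'dubbing-c']
```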
In some optional implementations of this embodiment, the execution subject may detect whether the user performs a preset operation on the identifier of a dubbing file, for example, a long press on the identifier, or a click on a "rating" icon presented after a long press on the identifier. If such an operation is detected, the execution subject may receive the user's evaluation of the dubbing file targeted by the operation.
In some optional implementations of this embodiment, in response to detecting that the user performs a preset operation, the execution subject may display identifiers of one or more groups of dubbing files whose evaluation information meets a preset condition as follows. The execution subject detects whether the user performs a preset operation, for example, a click on a virtual button presented on the screen or a long press on the displayed comic content. If so, the execution subject sends a file identification information acquisition request to the service device; this request is used to acquire identification information of one or more groups of dubbing files, among at least one group of dubbing files for the text in the displayed comic content, whose evaluation information meets a preset condition. The identification information is unique, and both the terminal device and the service device can locate a dubbing file by its identification information. The execution subject then receives the identification information fed back by the service device, looks up the identifiers of the dubbing files corresponding to that identification information, and displays the identifiers of the one or more groups of dubbing files found.
In some optional implementations of this embodiment, in response to detecting that the user performs a preset operation, the execution subject may also display such identifiers as follows. The execution subject detects whether the user performs a preset operation, for example, a click on a virtual button presented on the screen or a long press on the displayed comic content. If so, the execution subject sends both a file identification information acquisition request and an evaluation information acquisition request to the service device. The former is used to acquire identification information of at least one group of dubbing files for the text in the displayed comic content (the identification information is unique, and both the terminal device and the service device can locate a dubbing file by it); the latter is used to acquire the evaluation information corresponding to the dubbing files. The execution subject then receives the identification information and evaluation information fed back by the service device and selects, from the at least one group of dubbing files, one or more groups whose evaluation information meets the preset condition. If the evaluation information is characterized by a score, the execution subject may select the groups whose scores are greater than a preset score threshold (e.g., 80 points); if it is characterized by textual information, the execution subject may select the groups whose textual information is a preset text (e.g., "good"). Finally, the execution subject may look up the identifiers corresponding to the identification information of the selected groups and display them.
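The two retrieval flows described above can be contrasted in a short sketch: in the first, the service device applies the preset condition before replying; in the second, the terminal receives identifiers and evaluations for all groups and filters locally. The stub class, method names, and threshold are illustrative assumptions, not API defined by the patent.

```python
class StubService:
    """Stand-in for the service device, returning canned data; in the
    patent this is a server reached over the network."""

    def get_prefiltered_identifiers(self, comic_id):
        # Flow 1: the server has already applied the preset condition.
        return ["dubbing-a", "dubbing-c"]

    def get_identifiers_with_evaluations(self, comic_id):
        # Flow 2: the server returns all groups plus their evaluations.
        return [
            {"id": "dubbing-a", "score": 92},
            {"id": "dubbing-b", "score": 70},
            {"id": "dubbing-c", "score": 85},
        ]

def fetch_prefiltered(service, comic_id):
    """First flow: the terminal only displays what the server selected."""
    return service.get_prefiltered_identifiers(comic_id)

def fetch_and_filter(service, comic_id, threshold=80):
    """Second flow: the terminal requests identifiers and evaluations,
    then applies the preset condition locally."""
    groups = service.get_identifiers_with_evaluations(comic_id)
    return [g["id"] for g in groups if g["score"] > threshold]

service = StubService()
print(fetch_prefiltered(service, "comic M3"))
# → ['dubbing-a', 'dubbing-c']
print(fetch_and_filter(service, "comic M3"))
# → ['dubbing-a', 'dubbing-c']
```

Both flows yield the same identifiers to display; they differ only in where the filtering by evaluation information happens.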
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for presenting information according to the present embodiment. In the application scenario of fig. 3, the terminal device 301 may determine whether a comic content acquisition request has been received; on detecting a click by the user on the comic title "comic M3" 302, it may display the content of the comic "comic M3", as indicated by icon 303. After that, the terminal device 301 may determine whether the user performs a preset operation; if it detects a long press by the user on a blank area in the displayed comic content 303, it may display the identifiers of three groups of dubbing files, among at least one group of dubbing files for texts such as "dialog content 1", "voice-over content 1", and "dialog content 2" in the comic content 303, whose scores are greater than eighty, as shown by icon 304.
According to the method provided by the embodiment of the application, by displaying identifiers of one or more groups of dubbing files, among at least one group of dubbing files for the text in the comic content, whose evaluation information meets a preset condition, high-quality, personalized dubbing content from different dubbing users can be displayed for the user.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a method for presenting information is shown. The method for presenting information is generally applied to terminal devices. The process 400 of the method for presenting information includes the steps of:
In this embodiment, the operations in step 401 to step 402 are substantially the same as the operations in step 201 to step 202, and are not described herein again.
In response to detecting that the user triggers a displayed identifier, a group of dubbing files or a single dubbing file is played based on the triggered identifier, step 403.
In this embodiment, the execution subject may detect whether the user triggers a displayed identifier. The triggering operation may be a click on the displayed identifier, or a click on a play icon presented after the identifier has been clicked.
In this embodiment, if it is detected that the user triggers a displayed identifier, the execution subject may play a group of dubbing files or a single dubbing file based on the triggered identifier. Specifically, if the triggered identifier is a group identifier, the execution subject may play the dubbing files corresponding to the dialog text in the group of dubbing files corresponding to the triggered group identifier; if the triggered identifier is a file identifier, the execution subject may play the dubbing file corresponding to the triggered file identifier.
In some optional implementations of this embodiment, the text may include a plurality of text segments. In some cases, the comic content may include multiple dialog texts, multiple voice-over texts, and multiple annotation texts; each dialog text, voice-over text, or annotation text may be treated as a text segment. If the identifier triggered by the user is a group identifier, the execution subject may play a group of dubbing files based on the triggered identifier as follows: the execution subject may determine a starting dubbing file from the group of dubbing files corresponding to the triggered group identifier, and then, beginning with the starting dubbing file, sequentially play the dubbing files in the same group. It should be noted that the order of a group of dubbing files is usually determined by the display order of the text segments corresponding to the dubbing files in the group; that is, the earlier a dubbing file's corresponding text segment is displayed, the earlier that dubbing file is played.
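The ordering and playback logic described above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the `DubbingFile` type, the field names, and the `playback_sequence` function are all hypothetical names introduced here for illustration.

```python
from dataclasses import dataclass

@dataclass
class DubbingFile:
    file_id: str
    display_order: int  # display order of the corresponding text segment

def playback_sequence(group, start_file_id):
    # Order the group by the display order of the corresponding text
    # segments, then play from the starting dubbing file onward.
    ordered = sorted(group, key=lambda f: f.display_order)
    start = next(i for i, f in enumerate(ordered) if f.file_id == start_file_id)
    return [f.file_id for f in ordered[start:]]
```

For example, if the starting dubbing file is the first-ranked file of the group, all files of the group are played in display order; if it corresponds to a later text segment, playback begins there and continues to the end of the group.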
In some optional implementations of this embodiment, the starting dubbing file may be the first-ranked dubbing file in the group of dubbing files corresponding to the triggered group identifier (generally, the dubbing file corresponding to the text segment whose display order is first). Alternatively, the starting dubbing file may be the dubbing file of the text segment displayed foremost in the current display interface.
In some optional implementations of this embodiment, the text may include a plurality of text segments. In some cases, the comic content may include multiple dialog texts, multiple voice-over texts, and multiple annotation texts; each dialog text, voice-over text, or annotation text may be treated as a text segment. If the identifier triggered by the user is a file identifier, the execution subject may play a group of dubbing files or a single dubbing file based on the triggered identifier as follows: the execution subject may play only the dubbing file corresponding to the triggered file identifier; or the execution subject may take the dubbing file corresponding to the triggered file identifier as the starting dubbing file and, beginning with the starting dubbing file, sequentially play the dubbing files in the same group. It should be noted that the order of a group of dubbing files is usually determined by the display order of the text segments corresponding to the dubbing files in the group; that is, the earlier a dubbing file's corresponding text segment is displayed, the earlier that dubbing file is played.
In some optional implementations of this embodiment, the execution subject may determine whether the text segment corresponding to the currently played dubbing file is presented in the current display interface. If not, the execution subject may scroll the current display interface to the comic content containing that text segment. In other words, the current display interface generally scrolls in step with the currently played dubbing file, so that when the user hears a dubbing file, the corresponding text segment is visible in the current display interface.
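The scroll-synchronization check described above can be sketched as follows; this is an illustrative sketch only, and the function and parameter names (`sync_display`, `scroll_to`) are hypothetical, not taken from the patent.

```python
def sync_display(visible_segment_ids, playing_segment_id, scroll_to):
    """Scroll only when the text segment corresponding to the currently
    played dubbing file is not already visible in the display interface."""
    if playing_segment_id not in visible_segment_ids:
        scroll_to(playing_segment_id)
        return True  # interface was scrolled
    return False     # segment already visible; no scroll needed
```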
As can be seen from fig. 4, compared with the embodiment shown in fig. 2, the flow 400 of the method for presenting information in this embodiment adds the step of playing a group of dubbing files or a single dubbing file based on the triggered identifier when it is detected that the user triggers a displayed identifier. The scheme described in this embodiment can therefore play the dubbing files in which the user is interested.
With further reference to fig. 5, a flow 500 of one embodiment of a method for providing information is shown. The method for providing information is generally applied to a service device. The process 500 of the method for providing information includes the steps of:
In the present embodiment, the execution subject of the method for providing information (e.g., the service device shown in fig. 1) may determine whether a request is received from a terminal device. The request may include a file identification information acquisition request and/or an evaluation information acquisition request. A file identification information acquisition request is used to acquire identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition, out of at least one group of dubbing files for the text in the requested comic content; such a request usually includes identifiers of the texts in the requested comic content. The identification information is unique, and both the terminal device and the service device can use the identification information of a dubbing file to look that file up. An evaluation information acquisition request is used to acquire the evaluation information corresponding to dubbing files.
In this embodiment, if a request from the terminal device is received in step 501, the execution subject may determine whether the received request is a file identification information acquisition request, for example by using a request identifier. If so, the execution subject may feed back to the terminal device the identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition, out of at least one group of dubbing files for the text in the comic content requested by the file identification information acquisition request.
In this embodiment, the execution subject may determine the evaluation information of a dubbing file according to factors such as other users' evaluations of the dubbing file, the clarity of the speech in the dubbing file, and the matching degree between the dubbing file and the corresponding dubbed text. The evaluation information may be represented as a score corresponding to the dubbing file, or as text information evaluating the dubbing file, e.g., "good", "average", or "bad".
In some optional implementations of this embodiment, the execution subject may feed back to the terminal device the identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition, out of at least one group of dubbing files for the text in the requested comic content, as follows. The execution subject may determine at least one group of dubbing files for the text in the comic content requested by the file identification information acquisition request; since the request generally includes the identifiers of the texts in the requested comic content, the execution subject may determine the at least one group of dubbing files corresponding to those identifiers. The execution subject may then obtain the evaluation information of the at least one group of dubbing files and select one or more groups from them based on the evaluation information. Specifically, if the evaluation information is represented by a score, the execution subject may select the groups whose score is greater than a preset score threshold (e.g., 80 points); if the evaluation information is represented by text information, the execution subject may select the groups whose text information is a preset text (e.g., "good"). The identification information of the selected one or more groups of dubbing files may then be fed back to the terminal device.
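The selection step described above can be sketched as follows. This is a minimal illustrative sketch under assumed data shapes, not the patented implementation; the function name `select_groups` and the `(ids, evaluation)` tuple layout are hypothetical.

```python
def select_groups(groups, score_threshold=80, preset_texts=("good",)):
    """Return the identification info of the groups whose evaluation
    satisfies the preset condition: a score above the threshold, or
    text information equal to a preset text."""
    selected = []
    for ids, evaluation in groups:
        if isinstance(evaluation, (int, float)):
            if evaluation > score_threshold:
                selected.append(ids)
        elif evaluation in preset_texts:
            selected.append(ids)
    return selected
```

Either representation of evaluation information (a numeric score or a text label) is handled by the same selection pass, matching the two cases described in the paragraph above.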
In this embodiment, if a request is received from the terminal device in step 501, the execution subject may determine whether the received request comprises both a file identification information acquisition request and an evaluation information acquisition request, for example by using a request identifier. If so, the execution subject may feed back to the terminal device the identification information and the evaluation information of at least one group of dubbing files for the text in the comic content requested by the file identification information acquisition request and the evaluation information acquisition request.
In this embodiment, the execution subject may determine the evaluation information of a dubbing file according to factors such as other users' evaluations of the dubbing file, the clarity of the speech in the dubbing file, and the matching degree between the dubbing file and the corresponding dubbed text. The evaluation information may be represented as a score corresponding to the dubbing file, or as text information evaluating the dubbing file, e.g., "good", "average", or "bad". If the evaluation information is represented by a score, after receiving the identification information and the evaluation information fed back by the execution subject, the terminal device may select, from the at least one group of dubbing files indicated by the identification information, one or more groups whose score is greater than a preset score threshold (e.g., 80 points). If the evaluation information is represented by text information, after receiving the identification information and the evaluation information fed back by the execution subject, the terminal device may select, from the at least one group of dubbing files, one or more groups whose text information is a preset text (e.g., "good").
In some optional implementations of this embodiment, the execution subject may receive, from the terminal device, a user's dubbing file and the identification information of the text segment corresponding to the dubbing file, and may then store the user's dubbing file in association with that identification information. The terminal device may detect whether the user performs a dubbing operation for dubbing a text segment. The dubbing operation may be an operation in which the user selects a text segment to be dubbed and then clicks a "dubbing" icon. If such a dubbing operation is detected, the terminal device may receive the user's speech to generate the user's dubbing file for the dubbed text segment. The terminal device may then send the user's dubbing file together with the identification information of the corresponding text segment to the execution subject, so that the execution subject stores the user's dubbing file.
In some optional implementations of this embodiment, the execution subject may evaluate the user's dubbing file to obtain evaluation information. As an example, the execution subject may evaluate the dubbing file according to factors such as other users' evaluations of the user's dubbing file and the clarity of the user's dubbing file.
In some optional implementations of this embodiment, the execution subject may evaluate the user's dubbing file as follows. The execution subject may recognize the dubbing text from the user's dubbing file, for example by converting the dubbing file into text through speech-to-text technology; speech-to-text conversion is a speech recognition process that converts spoken language into written language. Then, the matching degree between the dubbing file and the dubbed text may be determined as a first matching degree; the execution subject may determine this matching degree by an existing text similarity calculation method (for example, cosine similarity or edit distance). Finally, the user's dubbing file may be evaluated based on the first matching degree. As an example, if the evaluation information is represented by a score, the product of the first matching degree and a preset value may be used as the evaluation information of the user's dubbing file; if the evaluation information is represented by text information, the evaluation information corresponding to the first matching degree may be determined by using a preset correspondence table between first matching degrees and evaluation information.
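The edit-distance variant of the first matching degree, and the score-style evaluation as a product with a preset value, can be sketched as follows. This is an illustrative sketch only; the normalization of edit distance into a [0, 1] similarity and all function names are assumptions introduced here, and the speech-to-text step is assumed to have already produced `recognized_text`.

```python
def edit_distance(a, b):
    # Levenshtein distance by dynamic programming (row-by-row).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def first_matching_degree(recognized_text, dubbed_text):
    # Normalized similarity in [0, 1]; 1.0 means the texts are identical.
    if not recognized_text and not dubbed_text:
        return 1.0
    longest = max(len(recognized_text), len(dubbed_text))
    return 1.0 - edit_distance(recognized_text, dubbed_text) / longest

def evaluate(first_matching, preset_value=100):
    # Score-style evaluation: product of the matching degree and a preset value.
    return first_matching * preset_value
```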
In some optional implementations of this embodiment, the text may correspond to one or more speech features; a speech feature generally refers to a distinguishing characteristic of an audio signal, and the speech features of different people are usually not the same. The execution subject may evaluate the user's dubbing file as follows. The execution subject may extract one or more speech features of the user from the user's dubbing file, for example by using an existing speech feature extraction method such as Mel-Frequency Cepstral Coefficients (MFCC). The execution subject may then determine the matching degree between the user's speech features and the speech features corresponding to the dubbed text as a second matching degree; for example, the execution subject may take the ratio of the number of shared speech features to the number of the user's speech features as the second matching degree. Finally, the user's dubbing file may be evaluated based on the second matching degree. As an example, if the evaluation information is represented by a score, the product of the second matching degree and a preset value may be used as the evaluation information of the user's dubbing file; if the evaluation information is represented by text information, the evaluation information corresponding to the second matching degree may be determined by using a preset correspondence table between second matching degrees and evaluation information.
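The ratio-based second matching degree can be sketched as follows. This is an illustrative sketch under the assumption that the extracted speech features can be compared for equality (e.g., as discretized feature labels); real MFCC vectors would require a distance measure instead, and the function name is hypothetical.

```python
def second_matching_degree(user_features, text_features):
    """Ratio of the number of the user's speech features that also appear
    among the features corresponding to the dubbed text, to the total
    number of the user's speech features."""
    if not user_features:
        return 0.0
    shared = set(user_features) & set(text_features)
    return len(shared) / len(user_features)
```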
The method for providing information provided by the above embodiment of the application feeds back to the terminal device either the identification information of one or more groups of dubbing files whose evaluation information satisfies a preset condition, or the identification information and the evaluation information of at least one group of dubbing files. The terminal device can thus display high-quality dubbing content directly, or select high-quality dubbing content for display based on the evaluation information.
Referring now to FIG. 6, a block diagram of a computer system 600 suitable for implementing the electronic device of an embodiment of the present application is shown. The electronic device shown in fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a Central Processing Unit (CPU)601 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage section 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the system 600 are also stored. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a display such as a Liquid Crystal Display (LCD) and a speaker; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read out therefrom is installed into the storage section 608 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may exist separately without being assembled into the terminal device or the network device. The computer readable medium carries one or more programs which, when executed by the terminal device or the network device, cause the terminal device to: responding to the received cartoon content acquisition request, and displaying corresponding cartoon content; and in response to detecting that a user performs a preset operation, displaying identifications of one or more groups of dubbing files of which evaluation information meets a preset condition in at least one group of dubbing files of the text in the cartoon content, wherein the dubbing users of different groups of dubbing files are different, the identification of each group of dubbing files comprises group identifications corresponding to all the dubbing files of the group or at least one file identification respectively corresponding to at least one dubbing file in the group of dubbing files, and the identification of each dubbing file is displayed in association with the text corresponding to the dubbing file. 
Or cause the service device to: in response to receiving a file identification information acquisition request from a terminal device, feeding back identification information of one or more groups of dubbing files of which evaluation information meets a preset condition in at least one group of dubbing files of a text in the cartoon content requested by the file identification information acquisition request to the terminal device; or in response to receiving the file identification information acquisition request and the evaluation information acquisition request from the terminal device, feeding back the identification information and the evaluation information of at least one group of dubbing files of the text in the comic content requested by the file identification information acquisition request and the evaluation information acquisition request to the terminal device.
The foregoing description presents only preferred embodiments of the present application and an illustration of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention is not limited to technical solutions formed by the specific combinations of the above features, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the scope of the invention as defined by the appended claims, for example, technical solutions formed by interchanging the above features with (but not limited to) features having similar functions disclosed in the present application.
Claims (21)
1. A method for displaying information is applied to terminal equipment, and the method comprises the following steps:
responding to the received cartoon content acquisition request, and displaying corresponding cartoon content;
and displaying the identifications of one or more groups of dubbing files of which the evaluation information meets the preset condition in at least one group of dubbing files of the text in the cartoon content in response to the detection of the user executing the preset operation, wherein the dubbing users of different groups of dubbing files are different, the identification of each group of dubbing files comprises group identifications corresponding to all the dubbing files of the group or at least one file identification respectively corresponding to at least one dubbing file in the group of dubbing files, and the identification of each dubbing file is displayed in association with the text corresponding to the dubbing file.
2. The method of claim 1, wherein the text comprises at least one text segment, the method further comprising:
in response to detecting a dubbing operation performed by the user for dubbing the text segment, receiving speech of the user to generate a dubbing file for the text segment by the user;
and sending the dubbing file of the user and the identification information of the text fragment corresponding to the dubbing file to service equipment so that the service equipment evaluates the dubbing file of the user.
3. The method of claim 1, wherein the method further comprises:
in response to detecting the user triggering the presented identification, playing a group or one dubbing file based on the triggered identification.
4. The method of claim 3, wherein the text comprises a plurality of text segments, the triggered identification being the group identification; and
the playing of a group or a dubbing file based on the triggered identification comprises:
and determining a starting dubbing file from a group of dubbing files corresponding to the triggered group identifier, and playing the dubbing files in the same group as the starting dubbing file in sequence from the starting dubbing file, wherein the sequence of a group of dubbing files is determined based on the display sequence of text segments corresponding to the dubbing files in the group.
5. The method according to claim 4, wherein the starting dubbing file is the first dubbing file in the group of dubbing files corresponding to the triggered group identifier, or the dubbing file of the text segment whose display order is foremost among the currently displayed text segments.
6. The method of claim 3, wherein the text comprises a plurality of text segments, the triggered identification being the file identification; and
the playing of a group or a dubbing file based on the triggered identification comprises:
and taking the dubbing file corresponding to the triggered file identifier as an initial dubbing file, and playing the dubbing files in the same group as the initial dubbing file in sequence from the initial dubbing file, wherein the sequence of one group of dubbing files is determined based on the display sequence of the text segments corresponding to the dubbing files in the group.
7. The method according to one of claims 4-6, wherein the method further comprises:
and in response to determining that the text segment corresponding to the currently played dubbing file is not presented in the current display interface, scrolling the current display interface to the cartoon content containing the text segment corresponding to the currently played dubbing file.
8. The method according to claim 1, wherein a preset display icon is presented in the display interface of the cartoon content, and the display icon is used for displaying the identifiers of other dubbing files except the identifier of the displayed dubbing file; and
the method further comprises the following steps:
in response to detecting that the user triggers the presentation icon, presenting an identification of the other dubbing files in addition to the identification of the presented dubbing file.
9. The method according to one of claims 1 to 8, wherein the method further comprises:
and receiving the evaluation of the user on the dubbing file to which the preset operation aims in response to the fact that the user executes the preset operation on the identification of the dubbing file.
10. The method according to claim 1, wherein the displaying, in response to detecting that a user performs a preset operation, an identification of one or more groups of dubbing files of at least one group of dubbing files of text in the cartoon content whose evaluation information satisfies a preset condition includes:
sending a file identification information acquisition request to the service equipment in response to the detection that the user executes the preset operation;
receiving identification information of one or more groups of dubbing files of which evaluation information meets preset conditions in at least one group of dubbing files of the text in the cartoon content fed back by the service equipment;
and displaying the identification of the one or more groups of dubbing files based on the identification information.
11. The method according to claim 1, wherein the displaying, in response to detecting that a user performs a preset operation, an identification of one or more groups of dubbing files of at least one group of dubbing files of text in the cartoon content whose evaluation information satisfies a preset condition includes:
in response to the fact that the user executes preset operation, sending a file identification information acquisition request and an evaluation information acquisition request to the service equipment;
receiving identification information and evaluation information of at least one group of dubbing files of texts in the cartoon contents fed back by the service equipment;
selecting one or more groups of dubbing files of which the evaluation information meets the preset condition from at least one group of dubbing files;
and displaying the identification of the one or more groups of dubbing files based on the identification information of the one or more groups of dubbing files.
12. A method for providing information, applied to a service device, the method comprising:
in response to receiving a file identification information acquisition request from a terminal device, feeding back identification information of one or more groups of dubbing files of which evaluation information meets a preset condition from at least one group of dubbing files of a text in the cartoon content requested by the file identification information acquisition request to the terminal device; or
In response to receiving a file identification information acquisition request and an evaluation information acquisition request from a terminal device, feeding back identification information and evaluation information of at least one set of dubbing files of text in comic content requested by the file identification information acquisition request and the evaluation information acquisition request to the terminal device.
13. The method according to claim 12, wherein the feeding back, to the terminal device, identification information of one or more sets of dubbing files of which evaluation information satisfies a preset condition among at least one set of dubbing files of text in the comic content requested by the file identification information acquisition request includes:
determining at least one group of dubbing files of the text in the cartoon content requested by the file identification information acquisition request;
acquiring evaluation information of the at least one group of dubbing files, and selecting one or more groups of dubbing files from the at least one group of dubbing files based on the evaluation information;
and feeding back the identification information of the one or more groups of dubbing files to the terminal device.
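The claim-13 flow on the service device (determine the groups for the requested text, acquire their evaluations, filter, feed back the identifiers) can be sketched as follows; the in-memory index, score table, and threshold are illustrative assumptions, not part of the patent text:

```python
# Assumed data model: a text-to-groups index and a group-to-score table.
# A real service device would back these with a database.
DUBBING_INDEX = {"text-42": ["dub-a", "dub-b", "dub-c"]}   # text id -> group ids
EVALUATIONS = {"dub-a": 4.8, "dub-b": 2.5, "dub-c": 4.2}   # group id -> score

def handle_identification_request(text_id, min_score=4.0):
    """Return identifiers of dubbing-file groups whose evaluation meets the condition."""
    candidates = DUBBING_INDEX.get(text_id, [])
    return [gid for gid in candidates if EVALUATIONS.get(gid, 0.0) >= min_score]

print(handle_identification_request("text-42"))  # ['dub-a', 'dub-c']
```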
14. The method of claim 12, wherein the method further comprises:
receiving a dubbing file of a user and identification information of a text segment corresponding to the dubbing file, both sent by the terminal device;
and storing the dubbing file of the user in association with the identification information of the corresponding text segment.
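Claim 14's associated storage (the dubbing file persisted together with the identifier of the text segment it dubs, so it can later be retrieved by segment) can be sketched with an in-memory map; all names here are assumptions and a real service device would use persistent storage:

```python
from collections import defaultdict

# segment id -> list of dubbing payloads stored in association with it
dubbing_store = defaultdict(list)

def store_dubbing(segment_id, user_id, audio_bytes):
    """Store a user's dubbing file keyed by the text segment it corresponds to."""
    dubbing_store[segment_id].append({"user": user_id, "audio": audio_bytes})

store_dubbing("seg-7", "user-1", b"\x00\x01")
store_dubbing("seg-7", "user-2", b"\x02\x03")
print(len(dubbing_store["seg-7"]))  # 2
```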
15. The method of claim 14, wherein the method further comprises:
and evaluating the dubbing file of the user to obtain evaluation information.
16. The method of claim 15, wherein the evaluating the dubbing file of the user comprises:
recognizing dubbing text from the dubbing file of the user;
determining a matching degree between the recognized dubbing text and the dubbed text as a first matching degree;
and evaluating the dubbing file of the user based on the first matching degree.
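Claim 16 leaves the matching-degree calculation unspecified. One plausible sketch: assume a speech recognizer has already produced text from the user's audio, then score its similarity to the dubbed text with `difflib` as a stand-in measure (the patent does not mandate this metric):

```python
from difflib import SequenceMatcher

def first_matching_degree(recognized_text, dubbed_text):
    """Similarity in [0, 1] between recognized dubbing text and the dubbed text."""
    return SequenceMatcher(None, recognized_text, dubbed_text).ratio()

# Recognized text is assumed output of an out-of-scope speech recognizer.
score = first_matching_degree("to be or not to be", "to be, or not to be")
print(round(score, 2))  # 0.97
```

The evaluation information of claim 15 could then be derived from this first matching degree, e.g. by mapping it onto a score scale.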
17. The method of claim 15, wherein the text corresponds to one or more speech features; and
the evaluating the dubbing file of the user comprises:
extracting one or more voice features of the user from the dubbing file of the user;
determining a matching degree between one or more voice features of the user and one or more voice features corresponding to the dubbed text as a second matching degree;
and evaluating the dubbing file of the user based on the second matching degree.
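Claim 17 likewise does not fix a matching measure for voice features. Cosine similarity over the feature vectors is one common choice; the vectors and the measure below are assumptions serving as a sketch only:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Assumed voice-feature embeddings (e.g. pitch/timbre/energy); extraction
# from the dubbing file is out of scope here.
user_features = [0.9, 0.1, 0.4]
reference_features = [1.0, 0.0, 0.5]   # features associated with the dubbed text
second_matching_degree = cosine_similarity(user_features, reference_features)
print(second_matching_degree > 0.9)  # True
```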
18. A terminal device, comprising:
one or more processors;
a storage device having one or more programs stored thereon, wherein the one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-11.
19. A service device, comprising:
one or more processors;
a storage device having one or more programs stored thereon, wherein the one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 12-17.
20. A computer-readable medium, on which a computer program is stored which, when executed by a processor, carries out the method according to any one of claims 1-11.
21. A computer-readable medium, on which a computer program is stored which, when executed by a processor, carries out the method according to any one of claims 12-17.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811468336.5A CN111259181B (en) | 2018-12-03 | 2018-12-03 | Method and device for displaying information and providing information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111259181A true CN111259181A (en) | 2020-06-09 |
CN111259181B CN111259181B (en) | 2024-04-12 |
Family
ID=70953747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811468336.5A Active CN111259181B (en) | 2018-12-03 | 2018-12-03 | Method and device for displaying information and providing information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259181B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113282784A (en) * | 2021-06-03 | 2021-08-20 | 北京得间科技有限公司 | Audio recommendation method, computing device and computer storage medium for dialog novel |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120196260A1 (en) * | 2011-02-01 | 2012-08-02 | Kao Nhiayi | Electronic Comic (E-Comic) Metadata Processing |
CN103117057A (en) * | 2012-12-27 | 2013-05-22 | 安徽科大讯飞信息科技股份有限公司 | Application method of special human voice synthesis technique in mobile phone cartoon dubbing |
CN103186578A (en) * | 2011-12-29 | 2013-07-03 | 方正国际软件(北京)有限公司 | Processing system and processing method for sound effects of cartoon |
CN105302908A (en) * | 2015-11-02 | 2016-02-03 | 北京奇虎科技有限公司 | E-book related audio resource recommendation method and apparatus |
US20160378739A1 (en) * | 2015-06-29 | 2016-12-29 | International Business Machines Corporation | Editing one or more text files from an editing session for an associated text file |
CN106531148A (en) * | 2016-10-24 | 2017-03-22 | 咪咕数字传媒有限公司 | Cartoon dubbing method and apparatus based on voice synthesis |
CN106971415A (en) * | 2017-03-29 | 2017-07-21 | 广州阿里巴巴文学信息技术有限公司 | Multimedia caricature player method, device and terminal device |
CN107040452A (en) * | 2017-02-08 | 2017-08-11 | 浙江翼信科技有限公司 | A kind of information processing method, device and computer-readable recording medium |
CN107885855A (en) * | 2017-11-15 | 2018-04-06 | 福州掌易通信息技术有限公司 | Dynamic caricature generation method and system based on intelligent terminal |
CN107967104A (en) * | 2017-12-20 | 2018-04-27 | 北京时代脉搏信息技术有限公司 | The method and electronic equipment of voice remark are carried out to information entity |
US20180268820A1 (en) * | 2017-03-16 | 2018-09-20 | Naver Corporation | Method and system for generating content using speech comment |
JP2018169691A (en) * | 2017-03-29 | 2018-11-01 | 富士通株式会社 | Reproduction control device, cartoon data provision program, voice reproduction program, reproduction control program, and reproduction control method |
Similar Documents
Publication | Title | |
---|---|---|
CN107871500B (en) | Method and device for playing multimedia | |
CN110069608B (en) | Voice interaction method, device, equipment and computer storage medium | |
US20090327272A1 (en) | Method and System for Searching Multiple Data Types | |
US9472209B2 (en) | Deep tagging background noises | |
JP2019061662A (en) | Method and apparatus for extracting information | |
CN113596579B (en) | Video generation method, device, medium and electronic equipment | |
CN111767740B (en) | Sound effect adding method and device, storage medium and electronic equipment | |
CN109582825B (en) | Method and apparatus for generating information | |
JP2022088304A (en) | Method for processing video, device, electronic device, medium, and computer program | |
CN111986655B (en) | Audio content identification method, device, equipment and computer readable medium | |
CN107680584B (en) | Method and device for segmenting audio | |
US11750898B2 (en) | Method for generating target video, apparatus, server, and medium | |
CN111800671A (en) | Method and apparatus for aligning paragraphs and video | |
US20240061899A1 (en) | Conference information query method and apparatus, storage medium, terminal device, and server | |
JP2014513828A (en) | Automatic conversation support | |
CN114023301A (en) | Audio editing method, electronic device and storage medium | |
CN112182255A (en) | Method and apparatus for storing media files and for retrieving media files | |
CN113011169A (en) | Conference summary processing method, device, equipment and medium | |
CN113407775B (en) | Video searching method and device and electronic equipment | |
CN113923479A (en) | Audio and video editing method and device | |
CN111259181B (en) | Method and device for displaying information and providing information | |
CN112954453A (en) | Video dubbing method and apparatus, storage medium, and electronic device | |
CN111767259A (en) | Content sharing method and device, readable medium and electronic equipment | |
CN114697762B (en) | Processing method, processing device, terminal equipment and medium | |
CN113132789B (en) | Multimedia interaction method, device, equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||