CN110555127A - Multimedia content generation method and device
- Publication number: CN110555127A (application CN201810279654.0A)
- Authority: CN (China)
- Prior art keywords: multimedia content, feature vector, multimedia, vector group, generating
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present disclosure relates to a multimedia content generation method and apparatus. The method includes: acquiring a keyword of first multimedia content; acquiring a feature value corresponding to the keyword from a feature library, wherein the feature value is used for representing the semantics of the keyword; combining the feature values corresponding to the keyword into a first feature vector corresponding to the keyword; searching a material library for a feature vector group corresponding to the first multimedia content, wherein the feature vector group has a second feature vector that contains the first feature vector; and generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group. The method and apparatus can improve the accuracy of the generated multimedia content.
Description
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to a method and an apparatus for generating multimedia content.
Background
With the development of multimedia technology, users can perform secondary creation with existing materials (e.g., video, audio, and pictures) to obtain new multimedia content.
The secondary creation process typically comprises the following steps: according to the outline and synopsis of a script, the required materials are searched for with tools such as search engines, and the found materials are then edited, spliced, and synthesized in post-processing to form the new multimedia content.
Disclosure of Invention
In view of this, the present disclosure provides a method and an apparatus for generating multimedia content, which can improve the accuracy of the generated multimedia content.
According to an aspect of the present disclosure, there is provided a multimedia content generation method, applied to a terminal, including:
acquiring a keyword of first multimedia content;
acquiring a feature value corresponding to the keyword from a feature library, wherein the feature value is used for representing the semantics of the keyword;
combining the feature values corresponding to the keyword into a first feature vector corresponding to the keyword;
searching a material library for a feature vector group corresponding to the first multimedia content, wherein the feature vector group has a second feature vector that contains the first feature vector;
and generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group.
In one possible implementation, the method further includes:
splitting multimedia content to obtain a plurality of first multimedia contents;
and integrating the second multimedia contents corresponding to the first multimedia contents into third multimedia content corresponding to the multimedia content.
In a possible implementation manner, the generating, according to the multimedia material corresponding to the feature vector group, second multimedia content corresponding to the first multimedia content includes:
when a plurality of feature vector groups are found, determining the association degree of the first multimedia content and each feature vector group respectively;
Determining a set of associated feature vectors of the first multimedia content from the plurality of sets of feature vectors according to the degree of association;
And generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the associated feature vector group.
In one possible implementation, determining a degree of association between the first multimedia content and each of the feature vector groups includes:
For each feature vector group, determining the vector association degree of a second feature vector of the feature vector group and a corresponding first feature vector, wherein the second feature vector comprises the first feature vector;
And determining the association degree of the feature vector group and the first multimedia content according to the vector association degree.
In one possible implementation, the method further includes:
acquiring a selection operation for a multimedia material in the second multimedia content;
and generating fourth multimedia content corresponding to the multimedia content according to the multimedia material corresponding to the selection operation.
According to another aspect of the present disclosure, there is provided a multimedia content generating apparatus applied to a terminal, including:
the first acquisition module is used for acquiring a keyword of first multimedia content;
the second acquisition module is used for acquiring a feature value corresponding to the keyword from a feature library, wherein the feature value is used for representing the semantics of the keyword;
the processing module is used for combining the feature values corresponding to the keyword into a first feature vector corresponding to the keyword;
the searching module is used for searching a material library for a feature vector group corresponding to the first multimedia content, wherein the feature vector group has a second feature vector that contains the first feature vector;
and the generating module is used for generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group.
In one possible implementation, the apparatus further includes:
the splitting module is used for splitting multimedia content to obtain a plurality of first multimedia contents;
and the integration module is used for integrating the second multimedia contents corresponding to the first multimedia contents into third multimedia content corresponding to the multimedia content.
In one possible implementation manner, the generating module includes:
the first determining submodule is used for respectively determining the association degree of the first multimedia content and each feature vector group when a plurality of feature vector groups are found;
a second determining sub-module, configured to determine, according to the association degree, an associated feature vector group of the first multimedia content from the plurality of feature vector groups;
And the first generation submodule is used for generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the associated feature vector group.
In one possible implementation, the first determining sub-module is further configured to:
For each feature vector group, determining the vector association degree of a second feature vector of the feature vector group and a corresponding first feature vector, wherein the second feature vector comprises the first feature vector;
and determining the association degree of the feature vector group and the first multimedia content according to the vector association degree.
In one possible implementation, the apparatus further includes:
A third obtaining module, configured to obtain a selection operation for a multimedia material in the second multimedia content;
And the determining module is used for generating fourth multimedia content corresponding to the multimedia content according to the multimedia material corresponding to the selection operation.
According to another aspect of the present disclosure, there is provided a multimedia content generating apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the above-described method.
Thus, according to the multimedia content generation method and apparatus provided by the present disclosure, after acquiring the keyword of the first multimedia content, the terminal can acquire the feature values corresponding to the keyword from the feature library and combine all of them into the first feature vector. The terminal then searches the material library for a feature vector group having a second feature vector that contains the first feature vector, and generates second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group. Because the feature values represent semantics, the terminal can find feature vector groups whose keywords are the keyword itself or its synonyms, so the generated second multimedia content better conforms to the scene described by the first multimedia content, which improves the accuracy of the generated multimedia content.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a flowchart of a method of generating multimedia content according to an embodiment of the present disclosure.
Fig. 2 illustrates a flow chart of a method of generating multimedia content according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of a method of generating multimedia content according to an embodiment of the present disclosure.
Fig. 4 shows a flowchart of a method of generating multimedia content according to an embodiment of the present disclosure.
Fig. 5 illustrates a flowchart of a method of generating multimedia content according to an embodiment of the present disclosure.
Fig. 6 is a block diagram illustrating a multimedia content generating apparatus according to an embodiment of the present disclosure.
Fig. 7 is a block diagram illustrating a multimedia content generating apparatus according to an embodiment of the present disclosure.
Fig. 8 is a block diagram illustrating a generation apparatus 800 for multimedia content according to an example embodiment.
Detailed Description
Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate functionally identical or similar elements. While various aspects of the embodiments are presented in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flowchart of a multimedia content generating method according to an embodiment of the present disclosure, which is applied to a terminal. As shown in fig. 1, the method for generating multimedia content may include the steps of:
Step 101: obtaining a keyword of first multimedia content.
The first multimedia content may be text content created by a user, for example a drama script, a novel, a sketch script, or a crosstalk script; or the first multimedia content may be picture content, audio content, video content, or the like. The type of the first multimedia content is not specifically limited in the embodiments of the present disclosure.
For example, the terminal display interface may include a keyword input box, the user may input a keyword of the first multimedia content in the keyword input box, and the terminal obtains the keyword of the first multimedia content in response to the input operation of the user on the keyword input box.
As another example, the terminal may include a portal through which the user uploads the first multimedia content (e.g., in document, picture, audio, or video form). For example, when the first multimedia content is an uploaded text, after receiving it the terminal may perform a word segmentation operation on the text to obtain the keywords of the first multimedia content.
Or the first multimedia content may be an uploaded picture or video: the terminal may identify the content in the picture, or in each video frame, by image recognition technology and determine the identified content as the keywords corresponding to the first multimedia content.
Or the first multimedia content may be uploaded audio: the terminal may recognize the speech content in the audio by speech recognition technology and determine the recognized speech content as the keywords corresponding to the first multimedia content.
In a possible implementation manner, the terminal may modify the keyword corresponding to the first multimedia content in response to a modification operation of the keyword by the user. For example, the terminal may display a keyword corresponding to the first multimedia content on the display interface in response to a viewing operation of the user on the keyword, and may modify the keyword in response to a modification operation of the user on a certain keyword, or the terminal may add the keyword of the first multimedia content in response to an addition operation of the user on the keyword.
Step 102: acquiring a feature value corresponding to the keyword from a feature library, wherein the feature value is used for representing the semantics of the keyword.
The feature library may be a database storing the correspondence between keywords and feature values. A feature value is used for representing the semantics of a keyword: each keyword may correspond to at least one semantic meaning, and therefore to at least one feature value.
For example, the keywords obtained by the terminal include the keyword "fine and sunny". In one context, "fine and sunny" may carry the semantics "the weather is good", which corresponds to feature value 1; in another context it may carry the semantics "the mood is good", which corresponds to feature value 2. The terminal can therefore obtain the feature values corresponding to "fine and sunny" from the feature library: feature value 1 and feature value 2.
A feature value may be a combination of digits, for example: feature value 1 may be 1001 and feature value 2 may be 2002. In practice, feature values only need to distinguish different semantics; a feature value may also be a combination of characters, or of digits and characters. The present disclosure does not limit the form of feature values; in the embodiments of the present disclosure, feature values are described as digit combinations.
The correspondence between keywords and feature values in the feature library may be preset. For example, each word or phrase may correspond to a numerical value. Performing semantic analysis on a word or phrase yields the semantics corresponding to it (the semantics are themselves words or phrases); the numerical value corresponding to those semantics is obtained and determined as the feature value of the word or phrase, and the correspondence between the word and the feature value is recorded in the feature library. After the keyword of the first multimedia content is obtained, the keyword can thus be looked up in the feature library, and its feature values obtained from the stored correspondence.
Step 103: combining the feature values corresponding to the keyword into a first feature vector corresponding to the keyword.
The terminal combines all the feature values corresponding to each keyword into the first feature vector of that keyword. For example, the terminal obtains two keywords, "fine and sunny" and "pine and cypress", where "fine and sunny" corresponds to feature value 1 and feature value 2, and "pine and cypress" corresponds to feature value 3, feature value 4, and feature value 5. The terminal then forms the first feature vector (feature value 1, feature value 2) corresponding to "fine and sunny" and the first feature vector (feature value 3, feature value 4, feature value 5) corresponding to "pine and cypress".
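To make steps 102 and 103 concrete, the feature library can be pictured as a plain mapping from keywords to feature values. This is a minimal sketch only: the present disclosure does not prescribe any data structure, and all keywords, feature values, and names below (FEATURE_LIBRARY, first_feature_vector) are hypothetical.

```python
# Sketch of a feature library: keyword -> feature values of its semantics.
# Because feature values stand for semantics, synonymous keywords share
# values, which is what lets the later search match synonyms.
FEATURE_LIBRARY: dict[str, list[int]] = {
    "fine and sunny":   [1, 2],   # semantics: good weather (1), good mood (2)
    "pine and cypress": [3, 4, 5],
    "clear sky":        [1],      # shares value 1 with "fine and sunny"
}

def first_feature_vector(keyword: str) -> tuple[int, ...]:
    """Steps 102-103: fetch the feature values of a keyword and combine
    them into the first feature vector of that keyword."""
    return tuple(FEATURE_LIBRARY.get(keyword, ()))

# The two example keywords yield the two example first feature vectors.
assert first_feature_vector("fine and sunny") == (1, 2)
assert first_feature_vector("pine and cypress") == (3, 4, 5)
```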
Step 104: searching a material library for a feature vector group corresponding to the first multimedia content, wherein the feature vector group has a second feature vector that contains the first feature vector.
Illustratively, the material library may be used to store feature vector groups, multimedia materials, and the correspondence between feature vector groups and multimedia materials. A feature vector group may include a plurality of feature vectors, each corresponding to a plurality of feature values. A feature vector group corresponds to one multimedia content: the content has a plurality of keywords, each keyword corresponds to one feature vector, and the feature vectors corresponding to those keywords form the feature vector group.
The terminal searches the material library for a feature vector group having a second feature vector that contains the first feature vector. That the second feature vector contains the first feature vector may mean that, for each feature value in the first feature vector, the same feature value can be found in the second feature vector; that is, the keyword corresponding to the first feature vector and the keyword corresponding to the second feature vector may be considered synonyms or near-synonyms. In the case that the first multimedia content corresponds to a plurality of first feature vectors, the feature vector group has a plurality of second feature vectors respectively containing them, so the content described by the keywords of the feature vector group can be considered similar to the content described by the keywords of the first multimedia content.
For example, the first multimedia content has the two keywords "fine and sunny" and "pine and cypress"; the first feature vector corresponding to "fine and sunny" is (feature value 1, feature value 2), and the first feature vector corresponding to "pine and cypress" is (feature value 3, feature value 4, feature value 5). A feature vector group found by the terminal in the material library then has at least two second feature vectors: one containing feature value 1 and feature value 2, and one containing feature value 3, feature value 4, and feature value 5.
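Under the reading above, "the second feature vector contains the first feature vector" is a subset test on feature values, and the step-104 search keeps the groups that pass it for every keyword. The material-library layout and all names below are assumptions for illustration.

```python
# Hypothetical material library: each entry pairs a feature vector group
# (one second feature vector per keyword of the stored material) with
# its multimedia material.
MATERIAL_LIBRARY: list[dict] = [
    {"group": [{1, 2, 6}, {3, 4, 5}], "material": "clip_0001.mp4"},
    {"group": [{1, 2}, {7, 8}],       "material": "clip_0002.mp4"},
]

def contains(second: set[int], first: set[int]) -> bool:
    """Every feature value of the first feature vector is also found
    in the second feature vector."""
    return first <= second

def find_feature_vector_groups(first_vectors: list[set[int]]) -> list[dict]:
    """Step 104: keep the entries whose group has, for every first
    feature vector, some second feature vector containing it."""
    return [entry for entry in MATERIAL_LIBRARY
            if all(any(contains(second, first) for second in entry["group"])
                   for first in first_vectors)]

# Only clip_0001.mp4 matches both example first feature vectors.
hits = find_feature_vector_groups([{1, 2}, {3, 4, 5}])
assert [entry["material"] for entry in hits] == ["clip_0001.mp4"]
```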
Step 105: generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group.
For example, after determining the feature vector group corresponding to the first multimedia content, the terminal may look up the multimedia material corresponding to that feature vector group in the material library; the multimedia material may be one or more of text content, picture content, audio content, and video content.
Exemplarily, when the terminal determines that there is exactly one feature vector group corresponding to the first multimedia content, the multimedia material corresponding to that group can be used as the second multimedia content corresponding to the first multimedia content; when there are multiple feature vector groups, the terminal may combine the multimedia materials corresponding to the groups into a multimedia material set and determine the set as the second multimedia content corresponding to the first multimedia content.
In a possible implementation manner, when the number of feature vector groups found for the first multimedia content is less than a vector-group-number threshold, the terminal may additionally acquire the feature vector groups that have an association relationship with the found groups, combine the multimedia materials of the found groups and of the associated groups into a multimedia material set, and determine the set as the second multimedia content corresponding to the first multimedia content.
The vector-group-number threshold may be a preset number of feature vector groups; the embodiments of the present disclosure do not limit its value.
The association relationships between feature vector groups may be stored in the material library. They may be preset; alternatively, when the terminal acquires a multimedia content for the first time, after obtaining the feature vector groups corresponding to it, the terminal may establish an association between the feature vector group of that content (the group formed by the first feature vectors of its keywords) and the found feature vector groups, and store the association in the material library.
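One possible reading of this fallback, sketched with a hypothetical association table and an illustrative threshold; the disclosure does not fix either.

```python
VECTOR_GROUP_NUMBER_THRESHOLD = 3  # illustrative preset threshold

# Hypothetical association table: group id -> ids of associated groups.
GROUP_ASSOCIATIONS: dict[int, list[int]] = {0: [2], 1: [], 2: []}

def expand_groups(found_ids: list[int]) -> list[int]:
    """If fewer feature vector groups were found than the vector-group-
    number threshold, pad the result with the groups recorded as
    associated with the found ones."""
    if len(found_ids) >= VECTOR_GROUP_NUMBER_THRESHOLD:
        return found_ids
    expanded = list(found_ids)
    for group_id in found_ids:
        for assoc_id in GROUP_ASSOCIATIONS.get(group_id, []):
            if assoc_id not in expanded:
                expanded.append(assoc_id)
    return expanded

assert expand_groups([0, 1]) == [0, 1, 2]
```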
For example, the first multimedia content is a script. After acquiring the keywords of the script, the terminal may combine their feature values into first feature vectors and search for a feature vector group having second feature vectors that contain them. The keywords corresponding to those second feature vectors are synonyms or near-synonyms of the script's keywords, so the video material corresponding to the feature vector group can be used as the video content corresponding to the script. In this way, after acquiring a script, the terminal can generate the corresponding video content.
According to the multimedia content generation method provided by the present disclosure, the terminal may generate second multimedia content of the same type as the first multimedia content, that is, content similar to the first multimedia content and in the same form: for example, if the first multimedia content is a video, a similar video may be generated from it. Alternatively, the terminal may generate second multimedia content of a different type, that is, content similar to the first multimedia content but in a different presentation form: for example, if the first multimedia content is a script, a video matching the script may be generated from it.
In this way, according to the multimedia content generation method provided by the present disclosure, after acquiring the keyword of the first multimedia content, the terminal may acquire the feature values corresponding to the keyword from the feature library and combine all of them into the first feature vector. The terminal searches the material library for a feature vector group having a second feature vector that contains the first feature vector, and generates second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group.
Therefore, according to the multimedia content generation method provided by the embodiments of the present disclosure, after acquiring the keyword of the first multimedia content, the terminal can use the feature values representing semantics to find feature vector groups whose keywords are the keyword itself or its synonyms and near-synonyms, and generate second multimedia content according to the multimedia material corresponding to those groups, so that the generated second multimedia content better conforms to the scene described by the first multimedia content, which improves the accuracy of the generated multimedia content.
Fig. 2 illustrates a flow chart of a method of generating multimedia content according to an embodiment of the present disclosure.
In a possible implementation manner, referring to fig. 2, step 105 of generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group may be implemented by the following steps.
Step 1051: when a plurality of feature vector groups are found, respectively determining the association degree between the first multimedia content and each feature vector group.
When the terminal finds a plurality of feature vector groups, it can determine the association degree between each feature vector group and the first multimedia content. The association degree may be used to characterize the similarity between the content described by the feature vector group and the content described by the first multimedia content.
Fig. 3 shows a flowchart of a method of generating multimedia content according to an embodiment of the present disclosure.
In a possible implementation manner, determining the association degree between the first multimedia content and each feature vector group in step 1051 can be implemented by the following steps.
Step 10511: for each feature vector group, determining the vector association degree between each second feature vector of the feature vector group and the corresponding first feature vector, wherein the second feature vector contains the first feature vector.
The vector association degree can be used for characterizing the similarity between the first feature vector and the second feature vector. For example, the terminal may determine the vector association degree according to the proportion, within the second feature vector, of the feature values contributed by the first feature vector.
Illustratively, the first feature vector includes feature value 1, feature value 2, feature value 3, and feature value 4, and the second feature vector containing it includes feature value 1, feature value 2, feature value 3, feature value 4, and feature value 5. The feature values of the first feature vector account for 4 of the 5 values of the second feature vector, i.e., 80%, so the terminal determines the vector association degree between the two as 80%.
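This ratio can be sketched directly from the worked example (4 of the second vector's 5 feature values come from the first vector, giving 80%); the disclosure leaves other definitions of the vector association degree open.

```python
def vector_association(first: set[int], second: set[int]) -> float:
    """Vector association degree: the share of the second feature
    vector's values contributed by the first feature vector."""
    return len(first & second) / len(second)

# First vector has feature values 1-4, second has 1-5: 4/5 = 80%.
assert vector_association({1, 2, 3, 4}, {1, 2, 3, 4, 5}) == 0.8
```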
Step 10512: determining the association degree between the feature vector group and the first multimedia content according to the vector association degrees.
After determining the vector association degree between each first feature vector of the first multimedia content and the corresponding second feature vector of the feature vector group, the terminal may determine the association degree between the group and the first multimedia content from these vector association degrees.
For example, the terminal may take the average of the vector association degrees as the association degree between the feature vector group and the first multimedia content. For example: the first multimedia content includes 4 keywords whose first feature vectors have vector association degrees of 80%, 90%, 60%, and 70% with the corresponding second feature vectors; the association degree between the feature vector group and the first multimedia content is then 75%.
Alternatively, the terminal may determine the association degree according to a weight for each keyword together with the vector association degree of that keyword's first feature vector. For example: the first multimedia content includes 4 keywords whose vector association degrees are 80%, 90%, 60%, and 70%, where the first and second keywords are nouns, the third keyword is a verb, and the fourth is an adjective. Nouns and verbs carry a weight of 0.4 and adjectives a weight of 0.2, and a weight shared by several keywords of the same category is averaged over them. The association degree between the feature vector group and the first multimedia content is then ((0.4 × 80% + 0.4 × 90%) / 2) + 0.4 × 60% + 0.2 × 70% = 72%.
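Both aggregation options might be sketched as follows. The part-of-speech weights are the illustrative ones from the example; a category's weight is averaged over the keywords of that category, which reproduces the 72% above.

```python
def association_mean(vector_associations: list[float]) -> float:
    """Association degree as the plain average of the vector
    association degrees (the 75% example)."""
    return sum(vector_associations) / len(vector_associations)

def association_weighted(vector_associations: list[float],
                         pos_tags: list[str],
                         weights: dict[str, float]) -> float:
    """Association degree with a weight per part of speech, the weight
    of a category being averaged over its keywords (the 72% example)."""
    total = 0.0
    for tag in set(pos_tags):
        in_tag = [a for a, t in zip(vector_associations, pos_tags) if t == tag]
        total += weights[tag] * sum(in_tag) / len(in_tag)
    return total

assert abs(association_mean([0.8, 0.9, 0.6, 0.7]) - 0.75) < 1e-9
weighted = association_weighted([0.8, 0.9, 0.6, 0.7],
                                ["noun", "noun", "verb", "adjective"],
                                {"noun": 0.4, "verb": 0.4, "adjective": 0.2})
assert abs(weighted - 0.72) < 1e-9
```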
In this way, the multimedia content generation method provided by the present disclosure can determine the vector association degree between each first feature vector of the first multimedia content and the corresponding second feature vector in a feature vector group (that is, the similarity between each keyword of the first multimedia content and the corresponding keyword of the group), and from these vector association degrees can accurately determine the association degree between the group and the first multimedia content (the similarity between the content described by the group and the content described by the first multimedia content).
As shown in fig. 2, after determining the association degree of the first multimedia content with each feature vector group, the method may further include:
Step 1052: determining an associated feature vector group of the first multimedia content from the plurality of feature vector groups according to the association degrees.
For example, after determining the association degree between each feature vector group and the first multimedia content, the terminal may determine a preset number of the feature vector groups with the highest association degrees as the associated feature vector groups of the first multimedia content. For example, if the preset number is 3, the terminal sorts the groups by association degree in descending order and determines the top 3 as the associated feature vector groups of the first multimedia content.
Alternatively, after determining the association degrees, the terminal may determine the feature vector groups whose association degree is greater than a preset association degree as the associated feature vector groups of the first multimedia content. The preset association degree is set in advance; for example, if it is 80%, the groups with an association degree greater than 80% are the associated feature vector groups of the first multimedia content.
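The two selection rules of step 1052 might be sketched as follows, with the preset number and preset association degree taken from the examples above; the group labels are hypothetical.

```python
Scored = list[tuple[float, str]]  # (association degree, group label)

def associated_by_top_n(scored: Scored, preset_number: int = 3) -> list[str]:
    """Keep the preset number of feature vector groups with the
    highest association degrees."""
    ranked = sorted(scored, key=lambda pair: pair[0], reverse=True)
    return [group for _, group in ranked[:preset_number]]

def associated_by_threshold(scored: Scored, preset_degree: float = 0.8) -> list[str]:
    """Keep the feature vector groups whose association degree exceeds
    the preset association degree."""
    return [group for degree, group in scored if degree > preset_degree]

scored = [(0.72, "group_a"), (0.90, "group_b"), (0.81, "group_c"), (0.60, "group_d")]
assert associated_by_top_n(scored) == ["group_b", "group_c", "group_a"]
assert associated_by_threshold(scored) == ["group_b", "group_c"]
```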
Step 1053: generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the associated feature vector group.
For example, after determining the associated feature vector group corresponding to the first multimedia content, the terminal may generate the second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the associated feature vector group.
Therefore, according to the multimedia content generation method provided by the embodiments of the present disclosure, the terminal can determine the association degree between the first multimedia content and each corresponding feature vector group, determine the associated feature vector groups of the first multimedia content according to those association degrees, and generate the second multimedia content according to the multimedia material corresponding to the associated groups, so that the generated second multimedia content better conforms to the content described by the first multimedia content, improving the accuracy of generating the second multimedia content.
Fig. 4 shows a flowchart of a method of generating multimedia content according to an embodiment of the present disclosure.
In a possible implementation manner, referring to fig. 4, the method may further include the following steps:
Step 106: splitting multimedia content to obtain a plurality of first multimedia contents.
For example, the terminal may obtain the multimedia content in response to an input operation or upload operation of the user for the multimedia content, and after obtaining it, split it to obtain a plurality of first multimedia contents.
Exemplarily, if the multimedia content is text content, the terminal may perform semantic analysis on it and split it according to the scenes identified by the analysis, obtaining a plurality of first multimedia contents each corresponding to one scene; or the terminal may split the multimedia content by paragraphs; or the terminal may identify split markers inserted by the user in the multimedia content and divide it into a plurality of first multimedia contents accordingly. For multimedia content in video, picture, or audio form, the terminal performs image recognition or speech recognition on it to obtain the corresponding keywords, performs semantic analysis on those keywords, and splits the content according to the scenes identified by the analysis. The splitting manner of the multimedia content is not specifically limited in the embodiments of the present disclosure.
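For text content, the paragraph and split-marker strategies might be sketched as follows; the marker string is a hypothetical convention, and scene splitting by semantic analysis is beyond this sketch.

```python
import re

def split_multimedia_content(text: str, marker: str = "<split>") -> list[str]:
    """Step 106 for text content: prefer user-inserted split markers;
    otherwise fall back to blank-line paragraph boundaries."""
    parts = text.split(marker) if marker in text else re.split(r"\n\s*\n", text)
    return [part.strip() for part in parts if part.strip()]

script = "Scene in the forest.\n\nScene by the river."
assert split_multimedia_content(script) == ["Scene in the forest.",
                                            "Scene by the river."]
```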
Step 107: integrating the second multimedia contents corresponding to the first multimedia contents into third multimedia content corresponding to the multimedia content.
After splitting the multimedia content into a plurality of first multimedia contents, the terminal may generate the second multimedia content corresponding to each first multimedia content according to steps 101 to 105 above, and integrate the second multimedia contents corresponding to the first multimedia contents into the third multimedia content corresponding to the multimedia content.
In a possible implementation manner, after generating a third multimedia content corresponding to the multimedia content, the terminal may store a feature vector group formed by the first feature vectors of the multimedia content in the material library, and store the third multimedia content in the material library as a multimedia material corresponding to the feature vector group.
Illustratively, the multimedia content may be a script. The terminal can split the script into a plurality of first scripts, acquire the video content corresponding to each first script, and integrate those video contents into the finished video corresponding to the script. In this way, after acquiring a script, the terminal can generate the video content corresponding to it.
In this way, after obtaining the multimedia content input or uploaded by the user, the terminal can split it into a plurality of first multimedia contents, generate the second multimedia content corresponding to each in turn, and integrate them into the third multimedia content corresponding to the multimedia content. With this method, the user no longer needs to search for the materials of each scene one by one and then splice all the found materials into the third multimedia content; this improves the efficiency of generating the third multimedia content, and the generated third multimedia content conforms better to the content described by the multimedia content, which improves the accuracy of the generated multimedia content.
Fig. 5 illustrates a flowchart of a method of generating multimedia content according to an embodiment of the present disclosure.
In one possible implementation, referring to fig. 5, the method may further include the steps of:
Step 108: obtaining a selection operation for a multimedia material in the second multimedia content.
For example, when the terminal determines that a plurality of feature vector groups correspond to the first multimedia content, the terminal may generate the second multimedia content corresponding to the first multimedia content according to the multimedia materials corresponding to those feature vector groups, the second multimedia content including the multimedia material corresponding to each group.
After generating the second multimedia content corresponding to each first multimedia content, the terminal may, in response to a play operation of the user, display the second multimedia contents in sequence on the display interface. Illustratively, when first displaying the second multimedia content corresponding to a first multimedia content, the terminal may randomly display any multimedia material included in it, or may display the multimedia material corresponding to the feature vector group with the highest association degree with that first multimedia content.
Illustratively, when the terminal displays the second multimedia content corresponding to the first multimedia content, the display interface may include a selection control; in response to a trigger operation of the user on the selection control, the terminal may display the multimedia materials included in the second multimedia content on the display interface. The selection operation for a multimedia material of the second multimedia content may be a trigger operation such as clicking on the material.
Step 109: generating fourth multimedia content corresponding to the multimedia content according to the multimedia material corresponding to the selection operation.
The terminal can, in response to the selection operation of the user on a multimedia material, display the selected material on the display interface and determine it as the multimedia material corresponding to the second multimedia content.
After the selection operations for the multimedia materials of all the second multimedia contents are completed, the terminal may, in response to a confirmation operation of the user (for example, a trigger operation on a confirmation control displayed on the display interface), generate the fourth multimedia content corresponding to the multimedia content according to the multimedia material of each second multimedia content determined by the selection operations.
In a possible implementation manner, after determining the multimedia material corresponding to the second multimedia content of a first multimedia content, the terminal may store the feature vector group formed by the first feature vectors of that first multimedia content in the material library, with the selected multimedia material stored as the material corresponding to the group. Likewise, after the fourth multimedia content corresponding to the multimedia content is determined, the feature vector group formed by all the first feature vectors of the first multimedia contents may be stored in the material library, with the fourth multimedia content stored as the material corresponding to that group.
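Writing a confirmed result back into the material library, so that later queries can reuse it, might look like the following sketch (same hypothetical layout as the earlier material-library sketch):

```python
def store_material(material_library: list[dict],
                   first_vectors: list[set[int]],
                   material: str) -> None:
    """Record the first feature vectors of the source content as a new
    feature vector group whose material is the generated content."""
    material_library.append({"group": [set(v) for v in first_vectors],
                             "material": material})

library: list[dict] = []
store_material(library, [{1, 2}, {3, 4, 5}], "generated_0001.mp4")
assert library[0]["material"] == "generated_0001.mp4"
```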
In one possible implementation, the terminal may insert fifth multimedia content between the second multimedia contents of two first multimedia contents in response to an insertion operation of the user. Illustratively, the terminal determines the insertion position of the fifth multimedia content in response to the insertion operation, and acquires the fifth multimedia content and inserts it at that position in response to an upload operation of the user for the fifth multimedia content. The fifth multimedia content may be content stored locally by the user or content found through a search engine; for example, it may be transition content between the second multimedia contents of the two first multimedia contents.
The terminal can then generate the fourth multimedia content corresponding to the multimedia content according to the second multimedia contents of the first multimedia contents together with the fifth multimedia content, so that the generated fourth multimedia content is more coherent.
In this way, the terminal can, in response to the selection operations of the user, select the multimedia materials that best conform to each first multimedia content as its second multimedia content, and integrate the second multimedia contents corresponding to the first multimedia contents into the fourth multimedia content corresponding to the multimedia content, so that the generated fourth multimedia content conforms better to the content described by the multimedia content.
Fig. 6 is a block diagram illustrating a multimedia content generating apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the apparatus may include:
A first obtaining module 601, configured to obtain a keyword of a first multimedia content;
a second obtaining module 602, configured to obtain a feature value corresponding to the keyword from a feature library, where the feature value is used to represent a semantic meaning corresponding to the keyword;
A processing module 603, configured to combine the feature values corresponding to the keyword into a first feature vector corresponding to the keyword;
a searching module 604, configured to search a feature vector group corresponding to the first multimedia content from a material library, where the feature vector group has a second feature vector that includes the first feature vector;
the generating module 605 may be configured to generate a second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group.
fig. 7 is a block diagram illustrating a multimedia content generating apparatus according to an embodiment of the present disclosure.
In one possible implementation, referring to fig. 7, the apparatus may further include:
A splitting module 606, configured to split multimedia content to obtain a plurality of first multimedia contents;
The integration module 607 may be configured to integrate the second multimedia content corresponding to each of the first multimedia contents into the third multimedia content corresponding to the multimedia content.
In one possible implementation manner, referring to fig. 7, the generating module 605 may include:
The first determining sub-module 6051 may be configured to determine, when multiple feature vector groups are found, association degrees of the first multimedia content and the feature vector groups, respectively;
a second determining sub-module 6052, configured to determine an associated feature vector group of the first multimedia content from the plurality of feature vector groups according to the association degrees;
The first generating sub-module 6053 may be configured to generate, according to the multimedia material corresponding to the associated feature vector group, second multimedia content corresponding to the first multimedia content.
In one possible implementation, referring to fig. 7, the first determining sub-module 6051 may further be configured to:
For each feature vector group, determining the vector association degree of a second feature vector of the feature vector group and a corresponding first feature vector, wherein the second feature vector comprises the first feature vector;
And determining the association degree of the feature vector group and the first multimedia content according to the vector association degree.
In one possible implementation, referring to fig. 7, the apparatus may further include:
A third obtaining module 608, configured to obtain a selection operation for a multimedia material in the second multimedia content;
The determining module 609 may be configured to generate a fourth multimedia content corresponding to the multimedia content according to the multimedia material corresponding to the selecting operation.
Fig. 8 is a block diagram illustrating a generation apparatus 800 for multimedia content according to an example embodiment. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 8, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components (e.g., the display and keypad of the device 800); the sensor assembly 814 may also detect a change in position of the device 800 or a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the device 800 to perform the above-described methods.
The present disclosure may be a system, a method, and/or a computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can retain and store instructions for use by an instruction execution device. The computer-readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer-readable storage medium includes the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, and a mechanically encoded device, such as punch cards or raised structures in a groove having instructions recorded thereon, as well as any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., light pulses through a fiber-optic cable), or electrical signals transmitted through a wire.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can execute the computer-readable program instructions by utilizing state information of the instructions to personalize the circuitry, thereby implementing aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (12)
1. A method for generating multimedia content, applied to a terminal, the method comprising:
acquiring a keyword of first multimedia content;
acquiring, from a feature library, a feature value corresponding to the keyword, wherein the feature value represents the semantics of the keyword;
combining the feature values corresponding to the keywords into a first feature vector corresponding to the keywords;
searching a material library for a feature vector group corresponding to the first multimedia content, wherein the feature vector group comprises a second feature vector that includes the first feature vector; and
generating second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group.
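For orientation only, the following is a minimal Python sketch of the claim-1 pipeline, under stated assumptions: the feature library is a keyword-to-value map, a feature vector group is a list of second feature vectors paired with a material asset, and a second vector "includes" the first vector when every component of the first appears in it. All identifiers and values (FEATURE_LIBRARY, MATERIAL_LIBRARY, extract_keywords, the sample numbers) are hypothetical illustrations, not part of the claim.

```python
from typing import Dict, List, Tuple

# Hypothetical feature library: keyword -> feature value encoding its semantics.
FEATURE_LIBRARY: Dict[str, float] = {"pine": 0.91, "snow": 0.42, "mountain": 0.77}

# Hypothetical material library: each entry pairs a feature vector group
# (a list of second feature vectors) with a multimedia material asset.
MATERIAL_LIBRARY: List[Tuple[List[List[float]], str]] = [
    ([[0.91, 0.42], [0.77]], "winter_forest.mp4"),
    ([[0.91]], "pine_tree.jpg"),
]

def extract_keywords(first_content: str) -> List[str]:
    # Stand-in for the keyword-acquisition step; a real system would use a
    # tokenizer or keyword-extraction model here.
    return [w for w in first_content.lower().split() if w in FEATURE_LIBRARY]

def first_feature_vector(keywords: List[str]) -> List[float]:
    # Combine the per-keyword feature values into the first feature vector.
    return [FEATURE_LIBRARY[k] for k in keywords]

def includes(second: List[float], first: List[float]) -> bool:
    # One plausible reading of "a second feature vector that includes the
    # first": every component of the first appears in the second.
    return all(v in second for v in first)

def find_materials(first_vec: List[float]) -> List[str]:
    # Search the material library for feature vector groups having a second
    # vector that includes the first feature vector.
    return [asset for group, asset in MATERIAL_LIBRARY
            if any(includes(second, first_vec) for second in group)]

keywords = extract_keywords("pine under snow")              # -> ["pine", "snow"]
materials = find_materials(first_feature_vector(keywords))  # -> ["winter_forest.mp4"]
# `materials` feeds the final step that renders the second multimedia content.
```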
2. The method of claim 1, further comprising:
splitting multimedia content to obtain a plurality of pieces of first multimedia content; and
integrating the second multimedia content corresponding to each piece of first multimedia content into third multimedia content corresponding to the multimedia content.
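A hedged sketch of the claim-2 split/integrate flow, assuming text input split at sentence boundaries; `generate_second` stands in for the claim-1 pipeline above, and its output format is invented.

```python
from typing import List

def split_content(content: str) -> List[str]:
    # Split the source multimedia content into first-content segments;
    # sentence-level splitting is one plausible choice for text input.
    return [s.strip() for s in content.split(".") if s.strip()]

def generate_second(segment: str) -> str:
    # Placeholder for the claim-1 generation step.
    return f"<clip for '{segment}'>"

def integrate(second_contents: List[str]) -> str:
    # Integrate the per-segment second contents into the third content;
    # simple concatenation stands in for stitching the clips together.
    return " + ".join(second_contents)

third = integrate([generate_second(s) for s in split_content("Tall pine. Deep snow.")])
# -> "<clip for 'Tall pine'> + <clip for 'Deep snow'>"
```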
3. The method of claim 1, wherein generating the second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group comprises:
when a plurality of feature vector groups are found, determining a degree of association between the first multimedia content and each feature vector group;
determining an associated feature vector group of the first multimedia content from the plurality of feature vector groups according to the degrees of association; and
generating the second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the associated feature vector group.
4. The method of claim 3, wherein determining the degree of association between the first multimedia content and each feature vector group comprises:
for each feature vector group, determining a vector association degree between a second feature vector of the feature vector group and the corresponding first feature vector, wherein the second feature vector includes the first feature vector; and
determining the degree of association between the feature vector group and the first multimedia content according to the vector association degree.
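Claims 3 and 4 leave the association measure unspecified. The sketch below assumes cosine similarity between the first feature vector and a same-length prefix of each second feature vector, and takes the best second vector's score as the group's degree of association; both choices are illustrative assumptions, not the claimed method.

```python
import math
from typing import List

def cosine(a: List[float], b: List[float]) -> float:
    # Cosine similarity between two equal-length vectors; 0.0 for a zero norm.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def group_association(group: List[List[float]], first_vec: List[float]) -> float:
    # Per claim 4: derive the group's degree of association from the vector
    # association degrees of its second vectors against the first vector.
    # Truncating each second vector to the first vector's length is an
    # assumption made for illustration.
    return max(cosine(second[:len(first_vec)], first_vec) for second in group)

def pick_associated_group(groups: List[List[List[float]]],
                          first_vec: List[float]) -> List[List[float]]:
    # Per claim 3: keep the feature vector group with the highest degree.
    return max(groups, key=lambda g: group_association(g, first_vec))

groups = [[[0.91, 0.42]], [[0.05, 0.90]]]
best = pick_associated_group(groups, [0.91, 0.42])  # -> [[0.91, 0.42]]
```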
5. The method of claim 2, further comprising:
acquiring a selection operation for a multimedia material in the second multimedia content; and
generating fourth multimedia content corresponding to the multimedia content according to the multimedia material corresponding to the selection operation.
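Claim 5's post-selection step could look like the following sketch, where `second_contents` maps hypothetical segment IDs to the automatically chosen materials and the user's pick overrides one of them before re-integration; all names here are invented for illustration.

```python
from typing import Dict

def apply_selection(second_contents: Dict[str, str],
                    segment_id: str, material: str) -> str:
    # Swap in the user-selected material for that segment, then integrate
    # the result into the fourth multimedia content.
    updated = dict(second_contents)
    updated[segment_id] = material
    return " + ".join(updated.values())

fourth = apply_selection({"s1": "clip_a", "s2": "clip_b"}, "s2", "user_pick")
# -> "clip_a + user_pick"
```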
6. An apparatus for generating multimedia content, applied to a terminal, comprising:
a first acquisition module, configured to acquire a keyword of first multimedia content;
a second acquisition module, configured to acquire, from a feature library, a feature value corresponding to the keyword, wherein the feature value represents the semantics of the keyword;
a processing module, configured to combine the feature values corresponding to the keywords into a first feature vector corresponding to the keywords;
a searching module, configured to search a material library for a feature vector group corresponding to the first multimedia content, wherein the feature vector group has a second feature vector that includes the first feature vector; and
a generating module, configured to generate second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the feature vector group.
7. The apparatus of claim 6, further comprising:
a splitting module, configured to split multimedia content to obtain a plurality of pieces of first multimedia content; and
an integration module, configured to integrate the second multimedia content corresponding to each piece of first multimedia content into third multimedia content corresponding to the multimedia content.
8. The apparatus of claim 6, wherein the generating module comprises:
a first determining submodule, configured to determine, when a plurality of feature vector groups are found, a degree of association between the first multimedia content and each feature vector group;
a second determining submodule, configured to determine, according to the degrees of association, an associated feature vector group of the first multimedia content from the plurality of feature vector groups; and
a first generating submodule, configured to generate the second multimedia content corresponding to the first multimedia content according to the multimedia material corresponding to the associated feature vector group.
9. The apparatus of claim 8, wherein the first determining submodule is further configured to:
for each feature vector group, determine a vector association degree between a second feature vector of the feature vector group and the corresponding first feature vector, wherein the second feature vector includes the first feature vector; and
determine the degree of association between the feature vector group and the first multimedia content according to the vector association degree.
10. The apparatus of claim 7, further comprising:
a third acquisition module, configured to acquire a selection operation for a multimedia material in the second multimedia content; and
a determining module, configured to generate fourth multimedia content corresponding to the multimedia content according to the multimedia material corresponding to the selection operation.
11. An apparatus for generating multimedia content, comprising:
a processor; and
a memory for storing processor-executable instructions,
wherein the processor is configured to perform the method of any one of claims 1 to 5.
12. A non-transitory computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method of any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810279654.0A (CN110555127A) | 2018-03-30 | 2018-03-30 | Multimedia content generation method and device
Publications (1)
Publication Number | Publication Date
---|---
CN110555127A | 2019-12-10
Family
ID=68733613
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN201810279654.0A (CN110555127A, pending) | Multimedia content generation method and device | 2018-03-30 | 2018-03-30
Country Status (1)
Country | Link
---|---
CN (1) | CN110555127A (en)
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---
CN101398854A * | 2008-10-24 | 2009-04-01 | 清华大学 | Video fragment searching method and system
CN101673263A * | 2008-09-12 | 2010-03-17 | 未序网络科技(上海)有限公司 | Method for searching video content
CN105224521A * | 2015-09-28 | 2016-01-06 | 北大方正集团有限公司 | Key phrase extraction method, and method and device for obtaining related digital resources using it
CN106708929A * | 2016-11-18 | 2017-05-24 | 广州视源电子科技股份有限公司 | Video program searching method and device
US20170228470A1 * | 2013-02-07 | 2017-08-10 | Enigma Technologies, Inc. | Data system and method
2018-03-30: application CN201810279654.0A filed in China; published as CN110555127A; status: pending.
Similar Documents
Publication | Title
---|---
CN109089133B | Video processing method and device, electronic equipment and storage medium
CN108932253B | Multimedia search result display method and device
CN110121093A | Method and device for searching for a target object in a video
CN108038102B | Method and device for recommending expression images, terminal and storage medium
CN107515870B | Search method, apparatus, and search device
CN111695505B | Video processing method and device, electronic equipment and storage medium
CN109413478B | Video editing method and device, electronic equipment and storage medium
CN110858924B | Video background music generation method, device and storage medium
CN109145213A | Query recommendation method and device based on historical information
CN105404401A | Input processing method, apparatus and device
CN109918565B | Search data processing method and device, and electronic equipment
CN109063101B | Video cover generation method and device
CN111046210B | Information recommendation method and device, and electronic equipment
CN110232181B | Comment analysis method and device
CN110633715A | Image processing method, network training method and device, and electronic equipment
CN113987128A | Related article search method and device, electronic equipment and storage medium
CN109977390B | Method and device for generating text
CN109992754B | Document processing method and device
CN108241438B | Input method and input device
CN103970831A | Icon recommendation method and device
CN116543211A | Image attribute editing method and device, electronic equipment and storage medium
CN108255917B | Image management method and device, and electronic device
CN113128181B | Information processing method and device
CN110555127A | Multimedia content generation method and device
CN110620960B | Video subtitle processing method and device
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
2020-05-12 | TA01 | Transfer of patent application right | Applicant after: Alibaba (China) Co., Ltd., Room 508, Floor 5, Building 4, No. 699 Wangshang Road, Changhe Street, Binjiang District, Hangzhou, Zhejiang Province, 310052. Applicant before: Youku network technology (Beijing) Co., Ltd., Floor 5, Zones A and C, Block A, Sinosteel International Plaza, No. 8 Haidian Street, Haidian District, Beijing, 100080.
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-12-10