CN116450588A - Method, device, computer equipment and storage medium for generating multimedia file
- Publication number: CN116450588A
- Application number: CN202210010617.6A
- Authority: CN (China)
- Prior art keywords: rendering, data, multimedia file, replaceable, multimedia
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F16/173: Information retrieval; file systems; customisation support for file systems, e.g. localisation, multi-language support, personalisation
- G06F16/16: Information retrieval; file systems; file or folder operations, e.g. details of user interfaces specifically adapted to file systems
Abstract
The application relates to a multimedia file generation method, apparatus, computer device and storage medium. The method involves artificial intelligence and comprises: acquiring a multimedia file template generated based on an animation library plug-in, and extracting the replaceable element identifiers carried by the multimedia file template; acquiring target elements corresponding to the multimedia file production requirement, and performing replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target elements to obtain rendering data; calling a rendering program instance and performing rendering processing according to the rendering data and the multimedia data of the target elements to generate rendering screenshots; and generating a multimedia file from the rendering screenshots. The method enables the multimedia file template generated based on the animation library plug-in to be exported, popularized and applied; elements on the template can be replaced and modified directly according to actual requirements without producing the file again from scratch, which simplifies the production flow of the multimedia file and improves the data processing efficiency in the process of generating the multimedia file.
Description
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and apparatus for generating a multimedia file, a computer device, and a storage medium.
Background
With the development of computer technology and the wide application of the internet, more and more users acquire and transmit different types of information or data through the internet. Multimedia data is a common carrier for such information and may include images, audio data, video data and the like.
Conventionally, AE (Adobe After Effects, a tool for producing motion graphics) is often used to produce multimedia files. However, the AE production process is complex and the design and engineering cost is high: dedicated designers are required to produce templates, the produced animation video templates cannot be directly exported to other platforms or tools for use, and a finished multimedia file can only be obtained after production in AE; if a user needs to modify the multimedia file, the whole animation video template has to be produced again.
Therefore, the conventional approach of creating multimedia files with AE still suffers from a complex process and high design and engineering cost.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a method, an apparatus, a computer device, and a storage medium for generating a multimedia file, which can simplify the process of generating a multimedia file, improve the data processing efficiency in the process of generating a multimedia file, and reduce the cost of generating a file.
A method of multimedia file generation, the method comprising:
acquiring a multimedia file template generated based on an animation library plug-in, and extracting a replaceable element identifier carried by the multimedia file template;
acquiring a target element corresponding to a multimedia file production requirement, and respectively carrying out replacement processing on a replaceable element corresponding to the replaceable element identifier according to the target element to obtain rendering data;
calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot;
and generating a multimedia file according to each rendering screenshot.
A multimedia file generation apparatus, the apparatus comprising:
the multimedia file template acquisition module is used for acquiring a multimedia file template generated based on the animation library plug-in, and extracting a replaceable element identifier carried by the multimedia file template;
the rendering data generation module is used for acquiring target elements corresponding to the production requirements of the multimedia files, and respectively carrying out replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target elements to obtain rendering data;
the rendering processing module is used for calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot;
and the multimedia file generation module is used for generating a multimedia file according to each rendering screenshot.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
acquiring a multimedia file template generated based on an animation library plug-in, and extracting a replaceable element identifier carried by the multimedia file template;
acquiring a target element corresponding to a multimedia file production requirement, and respectively carrying out replacement processing on a replaceable element corresponding to the replaceable element identifier according to the target element to obtain rendering data;
calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot;
and generating a multimedia file according to each rendering screenshot.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a multimedia file template generated based on an animation library plug-in, and extracting a replaceable element identifier carried by the multimedia file template;
acquiring a target element corresponding to a multimedia file production requirement, and respectively carrying out replacement processing on a replaceable element corresponding to the replaceable element identifier according to the target element to obtain rendering data;
calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot;
and generating a multimedia file according to each rendering screenshot.
A computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring a multimedia file template generated based on an animation library plug-in, and extracting a replaceable element identifier carried by the multimedia file template;
acquiring a target element corresponding to a multimedia file production requirement, and respectively carrying out replacement processing on a replaceable element corresponding to the replaceable element identifier according to the target element to obtain rendering data;
calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot;
and generating a multimedia file according to each rendering screenshot.
In the above method, apparatus, computer device and storage medium for generating a multimedia file, the multimedia file template generated based on the animation library plug-in is acquired, the replaceable element identifiers carried by the multimedia file template are extracted, and the target elements corresponding to the multimedia file production requirement are acquired, so that the replaceable elements corresponding to the replaceable element identifiers can be replaced according to the target elements to obtain the rendering data. A rendering program instance is then called, rendering processing is performed according to the rendering data and the multimedia data of the target elements, and rendering screenshots are generated, so that the multimedia file is generated from the rendering screenshots. This enables the multimedia file template generated based on the animation library plug-in to be exported and further popularized and applied; at the same time, the user can directly replace and modify elements on the multimedia file template according to actual requirements, which enriches the content and effect of the generated multimedia file. Because the multimedia file does not need to be produced again from scratch when modification and adjustment are required, the production flow of the multimedia file is simplified, the data processing efficiency in the process of generating the multimedia file is improved, and the production cost of the multimedia file is reduced.
Drawings
FIG. 1 is an application environment diagram of a method of generating a multimedia file in one embodiment;
FIG. 2 is a flow chart of a method for generating a multimedia file in one embodiment;
FIG. 3 is a flowchart of a method for generating a multimedia file according to another embodiment;
FIG. 4 is a flowchart of a method for generating a multimedia file according to still another embodiment;
FIG. 5 is a real-time preview display interface of a multimedia file in one embodiment;
FIG. 6 is a schematic diagram of a rendering flow of a method for generating a multimedia file in one embodiment;
FIG. 7 is a flowchart illustrating a method for generating a multimedia file according to an embodiment;
FIG. 8 is a schematic diagram of advertising effectiveness of a multimedia file in one embodiment;
FIG. 9 is a block diagram of a multimedia file generation device in one embodiment;
FIG. 10 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The present application provides a method for generating multimedia files which relates to artificial intelligence technology. Artificial intelligence (AI) is a theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields and including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, automatic driving, intelligent transportation and other directions.
Computer vision (CV) is the science of studying how to make machines "see"; more specifically, it uses cameras and computers instead of human eyes to identify, track and measure targets, and further performs graphics processing so that the result becomes an image more suitable for human observation or for transmission to an instrument for detection. As a scientific discipline, computer vision studies the theories and technologies related to building artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision technologies typically include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, simultaneous localization and mapping, automatic driving and intelligent transportation, as well as common biometric technologies such as face recognition and fingerprint recognition.
With the research and progress of artificial intelligence technology, artificial intelligence is being researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, the internet of vehicles, automatic driving and intelligent transportation. It is believed that, with the development of technology, artificial intelligence will be applied in more fields and will be of increasing importance.
The method for generating a multimedia file provided by the present application relates to the computer vision technology of artificial intelligence and can be applied to the application environment shown in FIG. 1. The terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; the data storage system may be integrated on the server 104 or placed on a cloud or another network server. The server 104 obtains a multimedia file template generated based on the animation library plug-in and extracts the replaceable element identifiers carried by the multimedia file template. The terminal 102 is provided with AE software (Adobe After Effects, a tool for motion graphics design and for post-production compositing of multimedia files); when a designer designs a multimedia file template, the designer can call the animation library plug-in of the server 104 from the AE software and then generate the multimedia file template based on the animation library plug-in. The server 104 obtains the target elements corresponding to the multimedia file production requirement, performs replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target elements to obtain rendering data, then invokes a rendering program instance, performs rendering processing according to the rendering data and the multimedia data of the target elements to generate rendering screenshots, and generates the multimedia file from the rendering screenshots. The server 104 may further send the generated multimedia file to the terminal 102 for viewing by the user, or upload it to cloud storage for downloading and viewing by different users. The terminal 102 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, a vehicle-mounted terminal or a smart television. The server 104 may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. The terminal 102 and the server 104 may be connected directly or indirectly through wired or wireless communication, which is not limited herein.
In one embodiment, as shown in fig. 2, a method for generating a multimedia file is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
step S202, a multimedia file template generated based on the animation library plug-in is obtained, and an alternative element identifier carried by the multimedia file template is extracted.
In particular, the server is configured with an animation library plug-in, which may specifically be the Lottie plug-in, an open-source tool for adding animation effects to native applications. It can be understood that the terminal is provided with AE software, and when a designer designs the multimedia file template, the animation library plug-in of the server, i.e. the Lottie plug-in, can be called from the AE software, and the multimedia file template is then generated based on the Lottie plug-in. The Lottie plug-in includes different plug-in types with different corresponding functions.
In this embodiment, the AE software may call the Bodymovin plug-in of Lottie, i.e. an AE export plug-in for exporting the generated multimedia file template, so as to export the json file generated when the AE software calls the Lottie plug-in. JSON (JavaScript Object Notation) is a lightweight, text-based, readable file format, and the exported json file carries information such as the data structure, composition and execution logic of the current multimedia file.
Since the user needs to replace materials in the generated multimedia file template to obtain a multimedia file that meets the user's actual requirements, the json file generated when the AE software calls the Lottie plug-in carries pre-marked replaceable element identifiers. In this embodiment, the multimedia file may be a video file.
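As an illustration of this step, the following TypeScript sketch loads a Bodymovin-exported template json and collects the layer names that follow the replaceable-element naming convention described later in this document; the interfaces and the helper name are illustrative assumptions and not part of the patent.

```typescript
// Sketch only: assumes a Bodymovin/Lottie JSON export whose replaceable layers are
// named with the "<kind>-holder <n>[params]" convention described in this document.
import { readFileSync } from "fs";

interface LottieLayer { nm?: string; layers?: LottieLayer[] }
interface LottieTemplate { layers?: LottieLayer[]; assets?: { layers?: LottieLayer[] }[] }

// Matches e.g. "image-holder 1[width=200,height=200]" or "text-holder 2[min=0,max=6]".
const HOLDER_RE = /^(image|text|color|video)-holder (\d+)\[([^\]]*)\]$/;

export function extractReplaceableIds(templatePath: string): string[] {
  const tpl: LottieTemplate = JSON.parse(readFileSync(templatePath, "utf8"));
  const ids: string[] = [];
  const visit = (layers?: LottieLayer[]) => {
    for (const layer of layers ?? []) {
      if (layer.nm && HOLDER_RE.test(layer.nm)) ids.push(layer.nm); // replaceable element identifier
      visit(layer.layers); // precomp sub-layers, if any
    }
  };
  visit(tpl.layers);
  for (const asset of tpl.assets ?? []) visit(asset.layers);
  return ids;
}
```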
In one embodiment, before obtaining the multimedia file template generated based on the animation library plug-in and extracting the replaceable element identifier carried by the multimedia file template, the method further comprises:
acquiring predefined replaceable layer attribute data; obtaining replaceable element identifiers corresponding to different replaceable layers based on the configuration of the animation library plug-in according to the replaceable layer attribute data; and fusing each replaceable element identifier with the original file template after expansion processing to obtain the multimedia file template.
Specifically, the predefined replaceable layer attribute data is acquired and may specifically include replaceable picture layer attribute data, replaceable text layer attribute data, replaceable color layer attribute data and replaceable video layer attribute data, and the replaceable element identifiers corresponding to different replaceable layers are obtained based on the configuration of the animation library plug-in according to the replaceable layer attribute data.
The replaceable element identifiers corresponding to different replaceable layers specifically may include a replaceable picture element identifier corresponding to replaceable picture layer attribute data, a replaceable text element identifier corresponding to replaceable text layer attribute data, a replaceable color element identifier corresponding to replaceable color layer attribute data, and a replaceable video element identifier corresponding to replaceable video layer attribute data.
Further, the original file template is obtained, and each replaceable element identifier is fused with the original file template after expansion processing to obtain the multimedia file template; the fused multimedia file template carries a video layer, which the original file template does not possess, together with the replaceable video element identifier.
Step S204, obtaining target elements corresponding to the multimedia file production requirements, and respectively carrying out replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target elements to obtain rendering data.
Specifically, the target element corresponding to the multimedia file production requirement is obtained. The multimedia file production requirement may be input by the user or matched to different actual application scenarios, and the target element carried by the multimedia file production requirement is identified. The target element may include different element types, i.e. it may be one or a combination of several of a picture element, a text element, a video element and a color element.
Further, the replaceable element to be replaced is determined according to the target element, that is, the target element is matched against the replaceable element identifiers, and the replaceable element corresponding to the matched replaceable element identifier is replaced to obtain the rendering data.
For example, if the target elements corresponding to the multimedia file production requirement include a picture element and a video element, the picture element and the video element are matched against the replaceable element identifiers carried by the multimedia file template. The replaceable picture element identifier matching the picture element is determined and the replaceable element corresponding to the replaceable picture element identifier is replaced according to the picture element; likewise, the replaceable video element identifier matching the video element is determined and the replaceable element corresponding to the replaceable video element identifier is replaced according to the video element, so as to obtain the final rendering data.
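A minimal sketch of this replacement processing follows, assuming the target elements and identifiers take the forms described above; the types and the pairing strategy (first free identifier of the matching kind) are assumptions for illustration.

```typescript
// Sketch only: pairs each target element from the production requirement with the
// matching replaceable element identifier; the concrete JSON mutation depends on the
// template structure and is only indicated here.
type ElementKind = "image" | "text" | "color" | "video";

interface TargetElement { kind: ElementKind; url?: string; text?: string; color?: string }

export interface RenderingData {
  template: unknown;                        // the multimedia file template (Lottie JSON)
  replacements: Map<string, TargetElement>; // replaceable element identifier -> target element
}

export function buildRenderingData(
  template: unknown,
  replaceableIds: string[],
  targets: TargetElement[]
): RenderingData {
  const replacements = new Map<string, TargetElement>();
  for (const target of targets) {
    // An identifier such as "video-holder 1[...]" matches a target of kind "video".
    const id = replaceableIds.find(
      (i) => i.startsWith(`${target.kind}-holder`) && !replacements.has(i)
    );
    if (id) replacements.set(id, target); // at this stage the media is still referenced by URL
  }
  return { template, replacements };
}
```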
Step S206, calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot.
Specifically, the split sub-data is obtained by splitting the multimedia data corresponding to the target element, for example, if the target element includes a plurality of picture elements or a plurality of video elements, the plurality of picture elements or the plurality of video elements need to be split to obtain a plurality of sub-data, and corresponding task threads are respectively established based on the sub-data.
Further, by calling the rendering program instance, the task threads corresponding to the sub-data are executed in parallel to perform rendering processing, so that concurrent rendering is achieved and the rendering performance of the original animation library plug-in, i.e. the Lottie plug-in, is improved to meet actual requirements. Specifically, a rendering program instance is called, each task thread is executed in parallel, and rendering processing is performed according to the rendering data and the split sub-data, so that the corresponding rendering screenshots are generated.
The rendering program instance of the server executes the Lottie program to render the pictures, which are then captured frame by frame to obtain the rendering screenshots. Specifically, when the replaceable elements corresponding to the replaceable element identifiers are replaced according to the target elements in the earlier stage, the element replacement is performed on the json data carried by the multimedia file template, and at that point each element is in url form, i.e. it is not yet an actual data resource; during the subsequent rendering, rendering is performed according to the json data after element replacement, i.e. the actually downloaded multimedia data of the target elements needs to be rendered.
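The following sketch illustrates the splitting and parallel-task idea for the url-form materials: each sub-datum is downloaded in its own asynchronous task before rendering. Node.js 18+ (global fetch) is assumed; the function and file names are illustrative.

```typescript
// Sketch only: the url-form media referenced by the replaced elements is downloaded
// concurrently, one task per sub-datum, before the actual rendering is performed.
import { writeFile } from "fs/promises";
import { join } from "path";

export async function downloadTargetMedia(
  urls: string[],
  workDir: string
): Promise<string[]> {
  const tasks = urls.map(async (url, index) => {
    const res = await fetch(url);
    if (!res.ok) throw new Error(`download failed: ${url}`);
    const file = join(workDir, `material-${index}`);
    await writeFile(file, Buffer.from(await res.arrayBuffer()));
    return file; // local path used later by the rendering program instance
  });
  return Promise.all(tasks); // executed in parallel, mirroring the per-sub-data task threads
}
```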
In one embodiment, after the corresponding task threads are respectively established according to each piece of sub-data, the method further comprises:
calling a browser front-end framework running on the server side, and creating a rendering program instance corresponding to the task thread; the rendering program instance is headless browser code and is used to perform frame-by-frame screenshot and rendering processing on the rendering data and the multimedia data to obtain the rendering screenshots.
Specifically, a rendering program instance corresponding to the task thread, i.e. headless browser code, is newly created by calling the browser front-end framework running on the server side, i.e. the Node.js framework running on the server side, so that frame-by-frame screenshot and rendering processing can be performed on the rendering data and the multimedia data according to the headless browser code to obtain the rendering screenshots.
The headless browser code is used to take frame-by-frame screenshots of the Lottie rendering result. Since the traditional Lottie plug-in does not support server-side rendering, the headless browser code is newly created; its advantage is that no browser needs to be configured on a display device, the headless browser code can be called to perform rendering and screenshot processing on the server side, and the rendering screenshots can be obtained without using terminal equipment.
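One possible shape of such a rendering program instance is sketched below with Puppeteer and lottie-web running in a headless page; both package choices, the CDN URL and the stage size are assumptions rather than the patent's prescribed implementation.

```typescript
// Sketch only: a headless-browser rendering instance that loads the replaced rendering
// data into lottie-web and captures the stage frame by frame.
import puppeteer, { Page } from "puppeteer";

export async function renderFrames(
  animationData: Record<string, unknown>,
  outDir: string
): Promise<number> {
  const browser = await puppeteer.launch(); // headless by default
  const page: Page = await browser.newPage();
  await page.setContent(`<div id="stage" style="width:1280px;height:720px"></div>`);
  await page.addScriptTag({
    url: "https://cdnjs.cloudflare.com/ajax/libs/bodymovin/5.12.2/lottie.min.js", // assumed CDN
  });

  // Load the rendering data into a lottie-web player inside the page.
  const totalFrames: number = await page.evaluate((data) => {
    const anim = (window as any).lottie.loadAnimation({
      container: document.getElementById("stage"),
      renderer: "svg",
      loop: false,
      autoplay: false,
      animationData: data,
    });
    (window as any).__anim = anim;
    return anim.totalFrames;
  }, animationData);

  const stage = await page.$("#stage");
  for (let frame = 0; frame < totalFrames; frame++) {
    await page.evaluate((f) => (window as any).__anim.goToAndStop(f, true), frame);
    await stage!.screenshot({
      path: `${outDir}/frame-${String(frame).padStart(5, "0")}.jpeg`,
      type: "jpeg",
    });
  }

  await browser.close();
  return totalFrames;
}
```

In practice the browser would be launched once and kept resident, as described later for the rendering flow, rather than launched on every call.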
Step S208, generating a multimedia file according to each rendering screenshot.
Specifically, a screenshot synthesis tool is used to assemble and encode the rendering screenshots to obtain the multimedia file. The screenshot synthesis tool may be the FFmpeg tool, i.e. a tool for recording and converting digital audio and video data and turning it into streams; the rendering screenshots are assembled and encoded by the FFmpeg tool, and other media elements such as background music may further be added to obtain the final multimedia file.
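A sketch of this assembly step, invoking the FFmpeg tool from Node.js, is given below; the frame rate, codecs and file layout are assumed for illustration.

```typescript
// Sketch only: assembles the per-frame screenshots into a video with FFmpeg and mixes
// in background music as an additional media element.
import { execFile } from "child_process";
import { promisify } from "util";

const run = promisify(execFile);

export async function assembleVideo(framesDir: string, bgmPath: string, outPath: string) {
  await run("ffmpeg", [
    "-y",
    "-framerate", "30",                   // assumed template frame rate
    "-i", `${framesDir}/frame-%05d.jpeg`, // the frame-by-frame rendering screenshots
    "-i", bgmPath,                        // additional media element, e.g. background music
    "-c:v", "libx264",
    "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "-shortest",
    outPath,
  ]);
}
```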
In the above method for generating a multimedia file, the multimedia file template generated based on the animation library plug-in is acquired, the replaceable element identifiers carried by the multimedia file template are extracted, and the target elements corresponding to the multimedia file production requirement are acquired, so that the replaceable elements corresponding to the replaceable element identifiers can be replaced according to the target elements to obtain the rendering data. A rendering program instance is then called, rendering processing is performed according to the rendering data and the multimedia data of the target elements, and rendering screenshots are generated, so that the multimedia file is generated from the rendering screenshots. In this way, the multimedia file template generated based on the animation library plug-in can be exported and further popularized and applied, and the user can directly replace and modify elements on the multimedia file template according to actual requirements, which enriches the content and effect of the generated multimedia file. Because the multimedia file does not need to be produced again from scratch when modification and adjustment are required, the production flow of the multimedia file is simplified, the data processing efficiency in the process of generating the multimedia file is improved, and the production cost of the multimedia file is reduced.
In one embodiment, as shown in FIG. 3, a method for generating a multimedia file is provided, which specifically includes the following steps:
Step S302, predefined replaceable layer attribute data is acquired.
In particular, the predefined replaceable layer attribute data is acquired, and may specifically include replaceable picture layer attribute data, replaceable text layer attribute data, replaceable color layer attribute data, and replaceable video layer attribute data.
Further, the predefined replaceable layer attribute data is used to mark a layer as replaceable and may additionally carry attribute data such as the width and height of the layer. The predefined replaceable layer attribute data specifically includes the following naming conventions:
1) Replaceable picture layer attribute data
The replaceable picture layer naming applies to the lowest-level picture layer and follows the pattern "image-holder ?[]". The space in the middle of the name cannot be deleted, "?" is replaced with a sequence number starting from 1, and "[]" holds the extension parameters, which are separated by commas without spaces and written in the format parameter name=parameter value; "[]" may also be left empty.
For example, if there are two picture layers to be replaced, they are named "image-holder 1[]" and "image-holder 2[]" respectively, and the extension parameters inside "[]" carry information such as the width and height of the picture, the storage location, and other parameters.
2) Replaceable text layer attribute data
The replaceable text layer naming applies to the lowest-level text layer and follows the pattern "text-holder ?[min=min_length,max=max_length]". The space in the middle of the name cannot be deleted, "?" is replaced with a sequence number starting from 1, and "[]" holds the extension parameters, which are separated by commas without spaces and written in the format parameter name=parameter value; "[]" may also be left empty.
For example, two replaceable text layers may be named "text-holder 1[min=1,max=10]" and "text-holder 2[min=0,max=6]" respectively; the extension parameters inside "[]" describe the text length range, i.e. the minimum and maximum text length.
3) Replaceable color layer attribute data
The replaceable color layer naming applies to the lowest-level curtain/solid layer and follows the pattern "color-holder ?[]". The space in the middle of the name cannot be deleted, "?" is replaced with a sequence number starting from 1, and "[]" holds the extension parameters, which are separated by commas without spaces and written in the format parameter name=parameter value; "[]" may also be left empty.
For example, a single replaceable color layer may be named "color-holder 1[]"; the extension parameters inside "[]" carry information such as the specific color of the layer.
4) Replaceable video layer attribute data
The replaceable video layer naming applies to the lowest-level video layer and follows the pattern "video-holder ?[width=width,height=height]". The space in the middle of the name cannot be deleted, "?" is replaced with a sequence number starting from 1 (sequence number 1 indicates the primary video, the others are secondary videos), and "[]" holds the extension parameters, which are separated by commas without spaces and written in the format parameter name=parameter value; "[]" may also be left empty.
For example, a single replaceable video layer may be named "video-holder 1[width=1280,height=720,min=2,max=10]", where the video duration of the primary video determines the final duration of the scene, while the video duration of a secondary video does not affect the scene duration; the extension parameters inside "[]" carry information such as the specific duration of the video.
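The naming conventions above can be parsed mechanically; the following sketch shows one possible parser (the field names are illustrative assumptions).

```typescript
// Sketch only: parses a layer name that follows the conventions above, e.g.
// "video-holder 1[width=1280,height=720,min=2,max=10]".
export interface HolderInfo {
  kind: "image" | "text" | "color" | "video";
  index: number;                  // "?" replaced by a sequence number starting from 1
  params: Record<string, string>; // extension parameters, "name=value" separated by commas
}

export function parseHolderName(name: string): HolderInfo | null {
  const match = /^(image|text|color|video)-holder (\d+)\[([^\]]*)\]$/.exec(name);
  if (!match) return null;
  const params: Record<string, string> = {};
  for (const pair of match[3].split(",")) {
    const [key, value] = pair.split("=");
    if (key && value !== undefined) params[key.trim()] = value.trim();
  }
  return { kind: match[1] as HolderInfo["kind"], index: Number(match[2]), params };
}
```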
Step S304, obtaining the replaceable element identifiers corresponding to different replaceable layers based on the configuration of the animation library plug-in according to the replaceable layer attribute data.
Specifically, according to the replaceable layer attribute data, the replaceable element identifiers corresponding to different replaceable layers are obtained based on the configuration of the animation library plug-in. The replaceable element identifiers corresponding to different replaceable layers specifically may include a replaceable picture element identifier corresponding to replaceable picture layer attribute data, a replaceable text element identifier corresponding to replaceable text layer attribute data, a replaceable color element identifier corresponding to replaceable color layer attribute data, and a replaceable video element identifier corresponding to replaceable video layer attribute data.
The animation library plug-in may be the Lottie plug-in. The original file template exported by calling the Bodymovin plug-in of Lottie from the AE software is a static, finished video that does not support material replacement. By configuring the animation library plug-in based on the replaceable layer attribute data, the generated multimedia file template becomes a dynamic template, i.e. different elements in it can be dynamically replaced, and the user can replace elements of the multimedia file template with the required target elements to generate a multimedia file that meets the user's own requirements.
Further, in most current multimedia file production methods, designers use AE to produce a file and directly export and render it into a finished multimedia file, while the Lottie plug-in relies on a third-party plug-in (such as the Bodymovin plug-in) to export a json file for playback by a browser or other player. However, such an exported json file corresponds to a single finished multimedia file and cannot be directly used as a template to simplify file production. In contrast, based on the replaceable element identifiers corresponding to the different replaceable layers obtained by configuring the animation library plug-in according to the replaceable layer attribute data, the user can perform element replacement on the multimedia file template with the required target elements and generate a multimedia file that meets the user's own requirements.
Step S306, layer coding data corresponding to the video layer identification is obtained.
Specifically, the types of different layers, such as pictures, texts and colors, are distinguished in the encoded data of the Lottie plug-in. By adding the video layer identifier to the encoded data of the Lottie plug-in, a designer can identify the video layer identifier of the Lottie plug-in, and after successful identification, can further call the expanded video functions to perform operations such as editing the video layer and replacing elements.
Step S308, according to the layer coding data, video layer expansion processing is carried out on the original file template based on the animation library plug-in, and the original file template after the expansion processing is obtained.
Specifically, according to the layer coding data, and with the video layer identifier added to the encoded data of the Lottie plug-in, video layer expansion processing is performed on the original file template based on the animation library plug-in, so that a callable video function is added on top of the Lottie plug-in.
Step S310, fusing each replaceable element identifier with the original file template after expansion processing to obtain the multimedia file template.
Specifically, each replaceable element identifier, i.e. the replaceable picture element identifier corresponding to the replaceable picture layer attribute data, the replaceable text element identifier corresponding to the replaceable text layer attribute data, the replaceable color element identifier corresponding to the replaceable color layer attribute data and the replaceable video element identifier corresponding to the replaceable video layer attribute data, is fused with the original file template after expansion processing, i.e. the original file template with the video layer, to obtain the multimedia file template.
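A rough sketch of this fusion step follows; because the extended video layer is not part of the standard Lottie format, the fused template shape shown here is purely an assumption.

```typescript
// Sketch only: fuses the replaceable element identifiers with the expanded original
// template; the field names of the fused structure are illustrative.
interface MultimediaFileTemplate {
  lottie: Record<string, unknown>;  // original Bodymovin/Lottie export
  videoLayers: { nm: string }[];    // result of the video layer expansion processing
  replaceableIds: string[];         // fused replaceable element identifiers
}

export function fuseTemplate(
  originalTemplate: Record<string, unknown>,
  replaceableIds: string[]
): MultimediaFileTemplate {
  // Identifiers of the form "video-holder n[...]" correspond to the expanded video layers.
  const videoLayers = replaceableIds
    .filter((id) => id.startsWith("video-holder"))
    .map((nm) => ({ nm }));
  return { lottie: originalTemplate, videoLayers, replaceableIds };
}
```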
In the above method for generating a multimedia file, the predefined replaceable layer attribute data is acquired, and the replaceable element identifiers corresponding to different replaceable layers are obtained based on the configuration of the animation library plug-in according to the replaceable layer attribute data. The layer coding data corresponding to the video layer identifier is further acquired, video layer expansion processing is performed on the original file template based on the animation library plug-in according to the layer coding data to obtain the expanded original file template, and the replaceable element identifiers are then fused with the expanded original file template to obtain the multimedia file template. This realizes video layer expansion of the traditional animation library plug-in, so that it gains a video function, and the plug-in is configured according to the replaceable layer attribute data to obtain a multimedia file template carrying replaceable identifiers. A subsequent user can directly call the multimedia file template and the required target elements to perform element replacement and generate a multimedia file that meets the user's own requirements, which simplifies the production flow of the multimedia file, improves the data processing efficiency in the process of generating the multimedia file, and reduces production cost.
In one embodiment, as shown in fig. 4, a method for generating a multimedia file is provided, which specifically includes the following steps:
Step S402, a multimedia file template generated based on the animation library plug-in is obtained, and the replaceable element identifier carried by the multimedia file template is extracted.
Specifically, the server side is configured with an animation library plug-in, which may be the Lottie plug-in. It can be understood that the terminal is provided with AE software, and when a designer designs a multimedia file template, the designer can call the animation library plug-in of the server, i.e. the Lottie plug-in, from the AE software and then generate the multimedia file template based on the Lottie plug-in.
Since the user needs to replace materials in the generated multimedia file template to obtain a multimedia file that meets the user's actual requirements, the json file generated when the AE software calls the Lottie plug-in carries pre-marked replaceable element identifiers.
Step S404, obtaining target elements corresponding to the multimedia file production requirements, and respectively carrying out replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target elements to obtain rendering data.
Specifically, the target element corresponding to the multimedia file production requirement is obtained. The multimedia file production requirement may be input by the user or matched to different actual application scenarios, and the target element carried by the multimedia file production requirement is identified.
Further, according to the target element, determining the replaceable element to be replaced, namely according to the target element, respectively matching with the replaceable element identifier, and carrying out replacement processing on the replaceable element corresponding to the replaceable element identifier to obtain the rendering data.
Step S406, calling a player obtained according to the program element combination, and previewing the rendering data in real time.
Specifically, because the player provided by the native front-end browser cannot modify content in real time, a player assembled from program elements needs to be called to realize real-time preview of the rendering data. During preview playback, the content can be adjusted and modified in real time according to the user's requirements through the player assembled from program elements.
Real-time preview means that only the elements that appear at the current time are displayed, and an element disappears when the current time is outside its display time; during real-time playback, which elements should appear at the current time is calculated continuously. The definition of real-time preview therefore differs from that of real-time playback.
In one embodiment, as shown in FIG. 5, a real-time preview display interface of a multimedia file is provided. Referring to FIG. 5, the generated multimedia file can be previewed in real time, and while the file is being previewed, content replacement options are provided, including text replacement options and picture replacement options, such as modifying the font color and size of different texts, modifying and replacing the specific text content, and replacing different picture materials, all of which can be adjusted and modified in real time according to the user's requirements.
The positions of the picture materials can likewise be adjusted, and the number of picture materials can be increased or reduced according to actual requirements, without being limited to the picture materials 1 and 2 shown in FIG. 5.
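A browser-side sketch of such a real-time preview is shown below, assuming lottie-web as the player assembled from program elements; reloading the animation after each edit is one simple way to reflect modifications immediately (applyTextReplacement in the usage comment is a hypothetical helper).

```typescript
// Sketch only: a preview that reloads the lottie-web player whenever the user edits a
// replaceable element, so the modification is reflected in real time.
import lottie, { AnimationItem } from "lottie-web";

let current: AnimationItem | null = null;

export function previewRenderingData(container: HTMLElement, renderingData: any): void {
  current?.destroy(); // drop the previous preview instance
  current = lottie.loadAnimation({
    container,
    renderer: "svg",
    loop: true,
    autoplay: true,
    animationData: renderingData, // template json after element replacement
  });
}

// Usage (illustrative): re-render the preview after a text replacement option is applied.
// applyTextReplacement(renderingData, "text-holder 1[min=1,max=10]", "New title"); // hypothetical
// previewRenderingData(document.getElementById("preview")!, renderingData);
```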
Step S408, obtaining the preview effect corresponding to the rendering data of the real-time preview, and judging whether the preview effect meets the requirement of multimedia file production.
Specifically, while the rendering data is previewed in real time, the preview effect corresponding to the previewed rendering data is obtained in real time. The preview effect may include the display effect of the currently displayed interface, such as the page layout effect and the page creative effect, and may further include the delivery effect of inserted advertisement data, i.e. the delivery time point and duration of the advertisement. Whether the preview effect meets the multimedia file production requirement is then determined.
Further, whether the preview effect meets the multimedia file production requirement is determined by comparing the preview effect corresponding to the previewed rendering data, such as the display effect of the currently displayed interface (e.g. the page layout effect and the page creative effect) and the delivery effect of the inserted advertisement data, with the multimedia file production requirement.
In one embodiment, the preview effect corresponding to the rendering data of the real-time preview may directly detect the preview effect fed back by the user, that is, whether the user is satisfied with the display effect of the current rendering data, if the effect fed back by the user is satisfied, it indicates that the preview effect meets the requirement for making the multimedia file, and if not, it may also indicate that the preview effect does not meet the requirement for making the current multimedia file, and the multimedia file needs to be modified until the preview effect meets the requirement for making the multimedia file.
In step S410, when it is determined that the multimedia file production requirement is satisfied, a rendering instruction is triggered, and the rendering instruction is used to call a rendering program instance.
Specifically, when the requirement for making the multimedia file is met, a rendering instruction is triggered, an established rendering program instance is called and executed by responding to the rendering instruction, and rendering processing is performed according to the rendering data and the multimedia data of the target element.
The rendering program instance is called, so that task threads corresponding to all sub-data included in the multimedia data of the target element can be executed in parallel, rendering processing is performed, and the effect of concurrent rendering is achieved.
Step S412, a rendering program instance is called, rendering processing is carried out according to the rendering data and the multimedia data of the target element, and a rendering screenshot is generated.
Specifically, the multimedia data corresponding to the target element is split to obtain split sub-data, a rendering program instance is called, task threads corresponding to the sub-data are executed in parallel to perform rendering processing, concurrent rendering is achieved, rendering processing time is shortened, and rendering effects are improved.
Specifically, a Lottie program is executed to render a picture by calling a rendering program instance of a server, and then a screenshot is obtained frame by frame.
Step S414, generating a multimedia file according to each rendering screenshot.
Specifically, a screenshot synthesis tool is used to assemble and encode the rendering screenshots to obtain the multimedia file. The screenshot synthesis tool may be the FFmpeg tool, i.e. a tool for recording and converting digital audio and video data and turning it into streams; the rendering screenshots are assembled and encoded by the FFmpeg tool, and other media elements such as background music may further be added to obtain the final multimedia file.
In the above method for generating a multimedia file, the multimedia file template generated based on the animation library plug-in is acquired, the replaceable element identifiers carried by the multimedia file template are extracted, and the target elements corresponding to the multimedia file production requirement are acquired, so that the replaceable elements corresponding to the replaceable element identifiers can be replaced according to the target elements to obtain the rendering data. The rendering data is previewed in real time by calling the player assembled from program elements, the preview effect corresponding to the previewed rendering data is acquired, and whether the preview effect meets the multimedia file production requirement is determined. When the multimedia file production requirement is met, a rendering processing instruction is triggered, the rendering program instance is called, rendering processing is performed according to the rendering data and the multimedia data of the target elements, rendering screenshots are generated, and the multimedia file is then generated from the rendering screenshots. In this way, element replacement is performed on the multimedia file template generated from the animation library plug-in to obtain the corresponding rendering data, the rendering data is previewed in real time, and the user is given a way to modify and adjust the content, so that the final multimedia file is generated only once the multimedia file generation requirement is met. This avoids the situation where a generated file does not meet the requirement and has to be regenerated, makes multimedia file generation more flexible, improves the data processing efficiency in the process of generating the multimedia file, and reduces the cost of regenerating files.
In one embodiment, a method for generating a multimedia file is provided, which specifically includes the following steps:
acquiring a target element corresponding to the multimedia file production requirement, and respectively carrying out replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target element to obtain rendering data;
calling a front end frame of a browser running on a server end, and creating a rendering program instance corresponding to a task thread;
calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot;
and generating a multimedia file according to each rendering screenshot.
Specifically, the rendering data is obtained by acquiring target elements corresponding to the production requirement of the multimedia file, such as a picture element, a text element, a video element and the like, and respectively carrying out replacement processing on replaceable elements corresponding to the replaceable element identifiers according to the target elements.
Because the rendering processing needs to be implemented on the server side, a browser front-end framework running on the server side needs to be called; specifically, Puppeteer (a Node.js package used to control a headless Chrome browser) is used to create the rendering program instance corresponding to the rendering task thread to be executed. In this embodiment, the rendering program instance can be understood as headless browser code, which is used to perform frame-by-frame screenshot and rendering processing on the rendering data and the multimedia data to obtain the rendering screenshots.
Further, the multimedia data corresponding to the target elements, i.e. the materials that actually need to replace the replaceable elements in the multimedia file template, such as picture materials, text materials and video materials, is downloaded, and by calling the rendering program instance, rendering is performed according to the rendering data and the multimedia data of the target elements to generate the rendering screenshots.
Specifically, the multimedia data of the target elements is split to obtain split sub-data, corresponding task threads are established for each piece of sub-data, the task threads are executed in parallel by calling the rendering program instance, and rendering processing is performed according to the rendering data and the split sub-data, so that parallel rendering is realized and the corresponding rendering screenshots are generated. The final multimedia file is then obtained with a screenshot synthesis tool, for example by assembling and encoding the rendering screenshots with the FFmpeg tool and further adding other media elements such as background music.
Further, as shown in fig. 6, a rendering flow of the multimedia file generation method is provided, and referring to fig. 6, the rendering flow of the multimedia file generation method specifically includes the following components P1-P5:
P1, data processing: and acquiring target elements corresponding to the production requirements of the multimedia file, such as picture elements, text elements, video elements and the like, and respectively carrying out replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target elements based on the Node service processing program instance to obtain rendering data.
P2, creating a browser: the establishment of a rendering program instance, namely the generation of the encoding data of the headless browser, is realized by calling a front end frame of the browser running on a server side, which can be specifically a puppeter.
The headless browser corresponding to the headless-browser encoded data is a resident browser: rendering screenshots can be taken directly on the server side without configuring any display device, and the server and its worker processes can manage the same headless browser instance and keep it available, so that a new headless browser does not have to be created every time.
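One possible way to keep such a resident headless browser shared between rendering tasks is a lazily created singleton that relaunches only after a disconnect; this is a sketch under that assumption, not a detail of the disclosure:

```typescript
import puppeteer, { Browser } from "puppeteer";

let sharedBrowser: Browser | null = null;

// Return the resident headless browser, launching it only on first use or after it
// has disconnected, so every rendering task reuses the same instance.
export async function getSharedBrowser(): Promise<Browser> {
  if (sharedBrowser && sharedBrowser.isConnected()) {
    return sharedBrowser;
  }
  sharedBrowser = await puppeteer.launch({ headless: true, args: ["--no-sandbox"] });
  sharedBrowser.on("disconnected", () => {
    sharedBrowser = null; // allow the next caller to relaunch the browser
  });
  return sharedBrowser;
}
```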
P3, downloading resources: the multimedia data corresponding to the target element is downloaded.
P4, rendering screenshots: the multimedia data of the target element is split to obtain split sub-data, a corresponding task thread is established for each piece of sub-data, and the task threads are executed in parallel by calling the rendering program instance; rendering is performed according to the rendering data and the respective split sub-data, so that parallel rendering is achieved and the corresponding rendering screenshots are generated.
For static pictures in the multimedia file, for example still frames in a video file, the previous screenshot can be copied directly for as long as the picture does not change, and a new screenshot is only taken once the picture changes.
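The static-picture optimisation could be sketched as follows; the isFrameChanged check is a hypothetical helper, since this embodiment does not specify how frame changes are detected:

```typescript
import { promises as fs } from "fs";
import { Page } from "puppeteer";

// Capture frames one by one, but when the picture has not changed since the previous
// frame, copy the previous screenshot instead of taking a new one.
async function captureWithStaticReuse(
  page: Page,
  from: number,
  to: number,
  isFrameChanged: (frame: number) => Promise<boolean>
): Promise<void> {
  let lastPath: string | null = null;
  for (let frame = from; frame <= to; frame++) {
    const path = `frame_${String(frame).padStart(5, "0")}.jpg`;
    if (lastPath !== null && !(await isFrameChanged(frame))) {
      await fs.copyFile(lastPath, path); // still picture: reuse the previous screenshot
    } else {
      await page.evaluate((f) => (window as any).seekToFrame(f), frame); // seekToFrame assumed as above
      await page.screenshot({ path, type: "jpeg" });
    }
    lastPath = path;
  }
}
```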
P5, synthesizing files: the rendered screenshots are assembled and encoded with a screenshot synthesizing tool, for example the ffmpeg tool, to obtain the final multimedia file, which may be a video file.
The rendered screenshots are preferentially encoded as JPEG; PNG encoding is only used in scenes that require a transparent layer.
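As an illustration of the assembly and encoding step, the rendered screenshots could be handed to ffmpeg roughly as follows; the frame rate, frame naming pattern and background-music handling are assumptions, since the embodiment only names ffmpeg as the synthesizing tool:

```typescript
import { spawn } from "child_process";

// Assemble the rendered JPEG screenshots into an H.264 video file and, if provided,
// mix in background music, stopping at the shorter of the two streams.
function assembleVideo(framePattern: string, bgmPath: string | null, output: string): Promise<void> {
  const args = ["-y", "-framerate", "30", "-i", framePattern];
  if (bgmPath) {
    args.push("-i", bgmPath, "-shortest");
  }
  args.push("-c:v", "libx264", "-pix_fmt", "yuv420p", output);

  return new Promise((resolve, reject) => {
    const proc = spawn("ffmpeg", args);
    proc.on("error", reject);
    proc.on("close", (code) => (code === 0 ? resolve() : reject(new Error(`ffmpeg exited with code ${code}`))));
  });
}

// Example call: assembleVideo("frame_%05d.jpg", "bgm.mp3", "out.mp4");
```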
In this embodiment, the rendering data is obtained by acquiring the target element corresponding to the multimedia file production requirement and replacing the replaceable element corresponding to the replaceable element identifier with the target element. A rendering program instance corresponding to the task thread is created by calling the browser front-end framework running on the server side; the rendering program instance is then called to perform rendering according to the rendering data and the multimedia data of the target element and generate rendering screenshots, and the multimedia file is generated from the rendering screenshots. Both the generation of the rendering data and the invocation of the rendering program instance take place on the server side, so a multimedia file that meets the user's requirements can be generated directly, without relying on third-party terminal devices or configuring an additional display interface. This simplifies the multimedia file production flow, enriches the content and effects of the multimedia file, improves data processing efficiency during multimedia file generation, and reduces production cost.
In one embodiment, as shown in fig. 7, an overall flow of the multimedia file generation method is provided. Referring to fig. 7, the method specifically includes the following components A1 to A4:
A1, template management project (template design, designer side):
In the template management project part, a designer can generate a multimedia file template based on an animation library plug-in, namely the Lottie plug-in, in the AE (After Effects) software. The exported multimedia file template is subjected to video layer expansion processing, that is, it contains different layers such as picture, text and video layers, and supports replacement of picture, text and video material.
During template design, the predefined replaceable layer attribute data includes replaceable picture layer attribute data, replaceable text layer attribute data, replaceable color layer attribute data and replaceable video layer attribute data, so that material such as pictures, text, colors and video can be replaced.
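For concreteness, the predefined replaceable layer attribute data could be modelled along the following lines; the field names and example entries are illustrative assumptions, as the embodiment does not fix a schema:

```typescript
// Hypothetical schema for the predefined replaceable layer attribute data.
type ReplaceableLayerKind = "picture" | "text" | "color" | "video";

interface ReplaceableLayerAttributes {
  elementId: string;          // replaceable element identifier configured via the animation library plug-in
  kind: ReplaceableLayerKind; // which kind of material the layer accepts
  layerName: string;          // layer name as designed in the template
  defaultValue?: string;      // default material: resource address, text or color value
}

const replaceableLayers: ReplaceableLayerAttributes[] = [
  { elementId: "img_1", kind: "picture", layerName: "product_image" },
  { elementId: "txt_1", kind: "text", layerName: "headline", defaultValue: "Sample title" },
];
```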
A2, front-end application project (editing production, WEB service):
In the front-end application project part, the replaceable element identifiers corresponding to the different replaceable layers are obtained from the replaceable layer attribute data based on the configuration of the animation library plug-in; the target elements corresponding to the multimedia file production requirement are acquired, and the replaceable elements corresponding to the replaceable element identifiers are respectively replaced according to the target elements to obtain the rendering data. A player obtained by combining program elements is then called to preview the rendering data in real time.
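A sketch of how a replaceable picture element might be replaced and previewed in the front end, assuming the commonly documented Lottie JSON asset fields (id, u, p, e) and the lottie-web player; the mapping from replaceable element identifiers to asset ids is an assumption:

```typescript
import lottie, { AnimationItem } from "lottie-web";

// Replace the material of a replaceable image asset in the Lottie animation JSON,
// pointing it at the resource address of the target element.
function replaceImageAsset(animationData: any, assetId: string, resourceUrl: string): void {
  const asset = (animationData.assets ?? []).find((a: any) => a.id === assetId);
  if (asset) {
    asset.u = "";          // clear the relative asset directory
    asset.p = resourceUrl; // resource address of the target element
    asset.e = 1;           // treat the path as a URL/embedded resource rather than a local file
  }
}

// Preview the rendering data in real time in the front-end player.
function preview(container: HTMLElement, animationData: any): AnimationItem {
  return lottie.loadAnimation({
    container,
    renderer: "svg",
    loop: true,
    autoplay: true,
    animationData,
  });
}
```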
A3, rendering service project (automatic generation, rendering service):
In the rendering service project part, the preview effect corresponding to the rendering data previewed in real time is obtained, and it is judged whether the preview effect meets the multimedia file production requirement. When the requirement is met, a rendering processing instruction is triggered to call the rendering program instance, rendering is performed according to the rendering data and the multimedia data of the target element to generate rendering screenshots, and the rendering screenshots are assembled and encoded with the ffmpeg tool and synthesized into the multimedia file.
In this embodiment, the synthesized multimedia file may be a video file.
A4, multimedia file export project (multimedia file export + advertisement delivery):
In the multimedia file export project part, the multimedia file that has undergone element replacement and rendering is exported and delivered.
In one embodiment, as shown in fig. 8, a schematic diagram of the advertisement delivery benefit of multimedia files is provided. Referring to fig. 8, it can be seen that the multimedia file generation method is applied to creative tools such as a micro-film template video tool, a micro-film workbench and a micro-film automatic video-creative derivation tool to implement video rendering and video generation, which shortens the template delivery cycle and reduces both the template production cost and the production cost of video files.
In the automatic video-creative derivation service, once derived video creatives are enabled, the user incurs no video production cost, and the multimedia file template of the multimedia file generation method is used directly to generate video creatives or video material. Referring to fig. 8, the consumption of advertisements with video derivation enabled is higher than that of advertisements without it, and the effect is most notable in the integrated e-commerce, food and travel industries: the consumption of advertisements with video derivation enabled is 2.7 times that of advertisements without it in the food industry, 2.4 times in the travel industry, and 1.4 times on the integrated e-commerce platform.
According to the multimedia file generation method, the multimedia file template generated based on the animation library plug-in is acquired, the replaceable element identifiers carried by the multimedia file template are extracted, and the target elements corresponding to the multimedia file production requirement are acquired, so that the replaceable elements corresponding to the replaceable element identifiers can be respectively replaced according to the target elements to obtain the rendering data. The rendering program instance is then called to perform rendering according to the rendering data and the multimedia data of the target elements and generate rendering screenshots, from which the multimedia file is generated. The multimedia file template generated based on the animation library plug-in can thus be exported and widely reused, and users can directly replace and modify elements of the template according to their actual needs, which enriches the content and effects of the generated multimedia files. When modification or adjustment is needed, the multimedia file does not have to be produced again from scratch, which simplifies the production flow, improves data processing efficiency during multimedia file generation, and reduces production cost.
The present application further provides an application scenario that applies the above multimedia file generation method. Specifically, the application of the multimedia file generation method in this scenario is as follows:
The multimedia file generation method can be applied to creative tools such as the micro-film template video tool, the micro-film workbench and the micro-film automatic video-creative derivation tool to implement video rendering and video file generation.
The video template tool is an independent template-based video production tool within the micro-film video tool, with which a user can produce videos directly from video templates; the micro-film workbench is a workbench on which videos can be produced directly while advertising creatives are created in the delivery flow. The micro-film workbench provides rich intelligent creative video production capabilities, of which the video template is an important part, and an intelligent algorithm engine associated with the workbench recommends suitable video templates for producing video creatives. Automatic derivation of micro-film creatives refers to a product that intelligently synthesizes material such as pictures or videos in an advertisement with video templates into a new video creative and inserts it into the original advertisement for mixed delivery.
That is, the multimedia file generation method enables rapid production of video files and the insertion and delivery of advertisement data in video files. The web video template of the micro-film video tool provides a page on which the template material can be replaced for an advertisement, and the front-end player supports previewing the result of a modification in real time; the response from editing the content to previewing the effect is fast, for example 0.5 s, so real-time preview is highly interactive and modifications and adjustments can be made in real time.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, the order of execution is not strictly limited and the steps may be executed in other orders. Moreover, at least some of the steps in those flowcharts may comprise multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their order of execution is not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, a multimedia file generating apparatus is provided, which may be implemented as part of a computer device using software modules, hardware modules, or a combination of both. The apparatus specifically includes: a multimedia file template acquisition module 902, a rendering data generation module 904, a rendering processing module 906, and a multimedia file generation module 908, wherein:
The multimedia file template acquisition module 902 is configured to acquire a multimedia file template generated based on the animation library plug-in and extract the replaceable element identifiers carried by the multimedia file template.
The rendering data generating module 904 is configured to obtain a target element corresponding to the multimedia file production requirement, and perform replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target element, so as to obtain rendering data.
The rendering processing module 906 is configured to invoke a rendering program instance, perform rendering processing according to the rendering data and the multimedia data of the target element, and generate a rendering screenshot.
The multimedia file generation module 908 is configured to generate a multimedia file according to each rendering screenshot.
In the multimedia file generating apparatus, the multimedia file template generated based on the animation library plug-in is acquired and the replaceable element identifiers carried by the multimedia file template are extracted; the target elements corresponding to the multimedia file production requirement are acquired, and the replaceable elements corresponding to the replaceable element identifiers can then be respectively replaced according to the target elements to obtain the rendering data. The rendering program instance is further called to perform rendering according to the rendering data and the multimedia data of the target elements and generate rendering screenshots, from which the multimedia file is generated. The multimedia file template generated based on the animation library plug-in can thus be exported and widely reused, and users can directly replace and modify elements of the template according to their actual needs, which enriches the content and effects of the generated multimedia files; when modification or adjustment is needed, the multimedia file does not have to be produced again from scratch, which simplifies the production flow, improves data processing efficiency during multimedia file generation, and reduces production cost.

In one embodiment, the rendering processing module is further configured to:
Splitting the multimedia data of the target element to obtain split sub-data; respectively establishing corresponding task threads according to each sub data; and calling a rendering program instance, executing each task thread in parallel, and respectively performing rendering processing according to the rendering data and the split sub data to generate a corresponding rendering screenshot.
In one embodiment, the rendering data generation module is further configured to:
acquiring a target element corresponding to the multimedia file production requirement, and extracting resource address data carried by the target element; acquiring replaceable elements matched with the replaceable element identifiers; and respectively performing replacement processing on the replaceable elements according to the resource address data to obtain rendering data.
In one embodiment, there is provided a multimedia file generation apparatus, the apparatus further comprising:
the replaceable layer attribute data acquisition module is used for acquiring predefined replaceable layer attribute data; the replaceable layer attribute data comprises replaceable picture layer attribute data, replaceable text layer attribute data, replaceable color layer attribute data and replaceable video layer attribute data;
the configuration module is used for obtaining replaceable element identifiers corresponding to different replaceable layers based on the configuration of the animation library plug-in according to the replaceable layer attribute data;
and the fusion processing module is used for performing fusion processing on the replaceable element identifiers and the original file template after the expansion processing to obtain the multimedia file template.
In one embodiment, there is provided a multimedia file generation apparatus, the apparatus further comprising:
the layer coding data acquisition module is used for acquiring layer coding data corresponding to the video layer identification;
and the layer expansion processing module is used for carrying out video layer expansion processing on the original file template based on the animation library plug-in according to the layer coding data to obtain an original file template after expansion processing, wherein the original file template after expansion processing carries video layer data.
In one embodiment, there is provided a multimedia file generation apparatus, the apparatus further comprising:
the real-time preview module is used for calling a player obtained by combining program elements and previewing the rendering data in real time;
the judging module is used for acquiring the preview effect corresponding to the rendering data of the real-time preview and judging whether the preview effect meets the requirement of multimedia file production;
and the rendering processing instruction triggering module is used for triggering a rendering processing instruction when the requirement for making the multimedia file is met, wherein the rendering processing instruction is used for calling a rendering program instance.
In one embodiment, a multimedia file generation apparatus is provided, the apparatus further comprising a rendering program instance creation module for:
calling a browser front-end framework running on the server side, and creating a rendering program instance corresponding to the task thread; the rendering program instance is headless-browser encoded data and is used for performing frame-by-frame screenshot and rendering processing on the rendering data and the multimedia data to obtain the rendering screenshots.
For specific limitations of the multimedia file generation apparatus, reference may be made to the above limitations of the multimedia file generation method, which are not repeated here. Each module in the above multimedia file generation apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in or independent of a processor of the computer device in the form of hardware, or stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data such as multimedia file templates, replaceable element identifiers, target elements, rendering data, rendering screen shots, multimedia files and the like. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of generating a multimedia file.
It will be appreciated by those skilled in the art that the structure shown in fig. 10 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
In one embodiment, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the steps in the above-described method embodiments.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples represent only a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.
Claims (10)
1. A method of generating a multimedia file, the method comprising:
acquiring a multimedia file template generated based on an animation library plug-in, and extracting a replaceable element identifier carried by the multimedia file template;
acquiring a target element corresponding to a multimedia file production requirement, and respectively carrying out replacement processing on a replaceable element corresponding to the replaceable element identifier according to the target element to obtain rendering data;
calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot;
and generating a multimedia file according to each rendering screenshot.
2. The method of claim 1, wherein the invoking the instance of the rendering program to render the rendering data and the multimedia data of the target element to generate the rendering screenshot comprises:
splitting the multimedia data of the target element to obtain split sub-data;
respectively establishing corresponding task threads according to each sub data;
and calling a rendering program instance, executing each task thread in parallel, and respectively performing rendering processing according to the rendering data and the split sub-data to generate a corresponding rendering screenshot.
3. The method according to claim 1, wherein the obtaining the target element corresponding to the multimedia file production requirement, and according to the target element, respectively performing replacement processing on the replaceable elements corresponding to the replaceable element identifiers to obtain the rendering data, includes:
acquiring a target element corresponding to a multimedia file production requirement, and extracting resource address data carried by the target element;
acquiring replaceable elements matched with the replaceable element identifiers;
and respectively carrying out replacement processing on the replaceable elements according to the resource address data to obtain rendering data.
4. A method according to any one of claims 1 to 3, comprising, before said obtaining a multimedia file template generated based on an animation library plug-in and extracting a replaceable element identifier carried by said multimedia file template:
acquiring predefined replaceable layer attribute data; the replaceable layer attribute data comprises replaceable picture layer attribute data, replaceable text layer attribute data, replaceable color layer attribute data and replaceable video layer attribute data;
obtaining replaceable element identifiers corresponding to different replaceable layers based on the configuration of the animation library plug-in according to the replaceable layer attribute data;
and carrying out fusion processing on the replaceable element identifiers and the original file templates after the expansion processing to obtain the multimedia file templates.
5. The method according to claim 4, wherein before the fusion processing is performed on the replaceable element identifiers and the original file template after the expansion processing, the method further comprises:
acquiring layer coding data corresponding to a video layer identification;
and carrying out video layer expansion processing on the original file template based on the animation library plug-in according to the layer coding data to obtain an expanded original file template, wherein the expanded original file template carries video layer data.
6. A method according to any one of claims 1 to 3, wherein after the obtaining a target element corresponding to a requirement for producing a multimedia file, and performing replacement processing on replaceable elements corresponding to the replaceable element identifiers according to the target element, respectively, to obtain rendering data, the method further includes:
calling a player obtained according to the program element combination, and previewing the rendering data in real time;
acquiring a preview effect corresponding to the rendering data of the real-time preview, and judging whether the preview effect meets the manufacturing requirement of the multimedia file;
when the multimedia file production requirement is determined to be met, a rendering processing instruction is triggered, and the rendering processing instruction is used for calling a rendering program instance.
7. The method according to claim 2, further comprising, after said respectively creating corresponding task threads from each of said sub-data:
calling a browser front-end framework running on the server side, and creating a rendering program instance corresponding to the task thread; the rendering program instance is headless-browser encoded data and is used for performing frame-by-frame screenshot and rendering processing on the rendering data and the multimedia data to obtain rendering screenshots.
8. A multimedia file generation apparatus, the apparatus comprising:
the multimedia file template acquisition module is used for acquiring a multimedia file template generated based on the animation library plug-in, and extracting a replaceable element identifier carried by the multimedia file template;
the rendering data generation module is used for acquiring target elements corresponding to the production requirements of the multimedia files, and respectively carrying out replacement processing on the replaceable elements corresponding to the replaceable element identifiers according to the target elements to obtain rendering data;
the rendering processing module is used for calling a rendering program instance, performing rendering processing according to the rendering data and the multimedia data of the target element, and generating a rendering screenshot;
and the multimedia file generation module is used for generating a multimedia file according to each rendering screenshot.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210010617.6A CN116450588A (en) | 2022-01-05 | 2022-01-05 | Method, device, computer equipment and storage medium for generating multimedia file |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210010617.6A CN116450588A (en) | 2022-01-05 | 2022-01-05 | Method, device, computer equipment and storage medium for generating multimedia file |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116450588A true CN116450588A (en) | 2023-07-18 |
Family
ID=87124272
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210010617.6A Pending CN116450588A (en) | 2022-01-05 | 2022-01-05 | Method, device, computer equipment and storage medium for generating multimedia file |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116450588A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119094860A (en) * | 2024-08-27 | 2024-12-06 | 广州三七极耀网络科技有限公司 | A method, device, storage medium and equipment for replacing text content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||