
CN116962783A - Visual element processing method and device, computer equipment and storage medium

Info

Publication number: CN116962783A
Application number: CN202310473425.3A
Authority: CN (China)
Prior art keywords: mirror, sub, layer, visual, visual element
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 杜林林 (Du Linlin), 王文帅 (Wang Wenshuai)
Current and original assignee: Tencent Technology Shenzhen Co Ltd
Application CN202310473425.3A filed by Tencent Technology Shenzhen Co Ltd; published as CN116962783A.

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/431 Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/44012 Processing of video elementary streams, involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H04N21/44016 Processing of video elementary streams, involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H04N21/4402 Processing of video elementary streams, involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/84 Generation or processing of descriptive data, e.g. content descriptors

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application provides a visual element processing method and apparatus, a computer device, and a storage medium, belonging to the field of computer technology. The method comprises the following steps: displaying a sub-mirror editing interface of a video; determining a first layer in response to the first visual element being added in the second sub-mirror based on the sub-mirror editing interface; creating a first clip instance in an element track of the first visual element based on the first layer; and, for any moment corresponding to the second sub-mirror, traversing the element tracks of the visual elements already added in the video, and finding the visual elements in the second sub-mirror from the element set and displaying them based on the clip instances corresponding to the second sub-mirror in the traversed element tracks. The method achieves the purpose of displaying the same visual element in different sub-mirrors without copying the visual element to be added, reducing redundant data in the process of making the video.

Description

Visual element processing method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for processing a visual element, a computer device, and a storage medium.
Background
With the development of computer technology, more and more users are accustomed to making videos to share knowledge, daily life, and the like. When making a video, a user often adds a visual element such as a map to a sub-mirror of the video to reinforce the content being shared or to express a personal viewpoint. How to generate higher-quality video based on such visual elements is an important research focus in the art.
At present, in the process of making a video based on visual elements such as maps, when a visual element has been added to the sub-mirror corresponding to a certain time interval, using the same visual element in another sub-mirror requires reconstructing that element, so as to present the same visual element in different sub-mirrors; the video is then generated from the plurality of sub-mirrors.
However, because the content of the sub-mirrors in a video needs to be consistent, the same visual elements often appear in adjacent sub-mirrors, so the above solution requires reconstructing identical visual elements multiple times, which increases redundant data in the process of making the video.
Disclosure of Invention
The embodiment of the application provides a visual element processing method and apparatus, a computer device, and a storage medium. When a visual element that already exists in a previously created sub-mirror is added to a sub-mirror, the corresponding visual element can be found in the accumulated element set and displayed. This achieves the purpose of displaying the same visual element in different sub-mirrors without copying the visual element to be added; the operation is simple, and redundant data in the process of making the video is reduced. The technical scheme is as follows:
in one aspect, a method for processing a visual element is provided, the method comprising:
displaying a sub-mirror editing interface of a video, wherein the video comprises at least one first sub-mirror and a second sub-mirror, the creation time of the first sub-mirror is earlier than that of the second sub-mirror, a first visual element has been added in the first sub-mirror, the video is associated with an element set, and the element set comprises the visual elements added in the process of making the video;
determining a first layer in response to adding the first visual element in the second sub-mirror based on the sub-mirror editing interface, the first layer being a layer of the first visual element in the second sub-mirror;
creating a first clip instance in an element track of the first visual element based on the first layer, wherein the element track is used for carrying the clip instances of the first visual element existing in the video, the duration of the element track is equal to the duration of the video, and the first clip instance comprises an identification of the first visual element and the first layer;
and, for any moment corresponding to the second sub-mirror, traversing the element tracks of the visual elements already added in the video, and finding the visual elements in the second sub-mirror from the element set and displaying them based on the clip instances corresponding to the second sub-mirror in the traversed element tracks, wherein the clip instances corresponding to the second sub-mirror comprise the first clip instance.
In another aspect, an apparatus for processing visual elements is provided, the apparatus comprising:
a display module, configured to display a sub-mirror editing interface of a video, wherein the video comprises at least one first sub-mirror and a second sub-mirror, the creation time of the first sub-mirror is earlier than that of the second sub-mirror, a first visual element has been added in the first sub-mirror, the video is associated with an element set, and the element set comprises the visual elements added in the process of making the video;
a determining module, configured to determine a first layer in response to the first visual element being added in the second sub-mirror based on the sub-mirror editing interface, wherein the first layer is the layer of the first visual element in the second sub-mirror;
a first creating module, configured to create, based on the first layer, a first clip instance in an element track of the first visual element, where the element track is used to carry an existing clip instance of the first visual element in the video, and a duration of the element track is equal to a duration of the video, and the first clip instance includes an identifier of the first visual element and the first layer;
the display module is further configured to traverse an element track of the added visual element in the video at any time corresponding to the second sub-mirror, find the visual element in the second sub-mirror from the element set and display the visual element based on the clip instance corresponding to the second sub-mirror in the traversed element track, where the clip instance corresponding to the second sub-mirror includes the first clip instance.
In some embodiments, the display module includes:
a traversing unit, configured to traverse, for any moment corresponding to the second sub-mirror, the element tracks of the visual elements added in the video in the order in which the tracks were created;
a determining unit, configured to determine, from the element tracks of the visual elements added in the video, the clip instances covering the moment as the clip instances of the respective visual elements in the second sub-mirror;
and a display unit, configured to find the visual elements in the second sub-mirror from the element set and display them, based on the clip instances of the visual elements in the second sub-mirror.
In some embodiments, the display unit is configured to determine, for any visual element in the second sub-mirror, the layer of the visual element based on the clip instance of the visual element; and render and display the visual elements in sequence from bottom to top, based on the layers of the visual elements in the second sub-mirror and the element set.
In some embodiments, the display module is configured to, for an element track of any visual element that has been added in the video, sequentially detect, according to a timing sequence of clip instances in the element track, whether clip instances in the element track cover the time; in the event that any clip instance is detected to cover the time instant, the traversing of the element track is stopped.
In some embodiments, a third visual element has been added to the second sub-mirror, both the first visual element and the third visual element have been added to a third sub-mirror, the third sub-mirror being adjacent to the second sub-mirror, and the third sub-mirror having been created earlier than the second sub-mirror;
the determining module is configured to, in response to the first visual element being added in the second sub-mirror based on the sub-mirror editing interface, obtain the layer relationship between the first visual element and the third visual element in the third sub-mirror, and determine the first layer based on the layer relationship and a second layer of the third visual element in the second sub-mirror.
In some embodiments, the apparatus further comprises:
the second creation module is used for creating an element track of any visual element under the condition that the visual element is added for the first time;
the second creation module is further configured to reconstruct a layer track, where the layer track includes a layer clip instance, and the layer clip instance is configured to indicate layer rendering logic of each visualization element in the second sub-mirror;
and the display module is configured to, upon traversing to the layer track, find the visual elements in the second sub-mirror from the element set and display them based on the clip instances corresponding to the second sub-mirror in the traversed element tracks.
In some embodiments, the apparatus further comprises:
the updating module is used for updating the layer of a second visual element in the second sub-mirror based on the first layer of the first visual element in the second sub-mirror, wherein the second visual element is a visual element added in the second sub-mirror before the first visual element is added;
the updating module is further configured to update a clip instance of the second visual element in the element track of the second visual element.
In some embodiments, the apparatus further comprises:
the display module is used for responding to the selection operation of any visual element in the first sub-mirror and displaying a single-layer adjustment control of the visual element;
and the adjusting module is used for responding to the triggering operation of the single-layer adjusting control and adjusting a layer of the visual element according to the layer adjusting direction corresponding to the single-layer adjusting control.
In another aspect, a computer device is provided, the computer device including a processor and a memory for storing at least one segment of a computer program loaded and executed by the processor to implement operations performed by a method for processing a visual element in an embodiment of the application.
In another aspect, a computer readable storage medium having stored therein at least one segment of a computer program loaded and executed by a processor to perform operations as performed by a method of processing a visual element in an embodiment of the application is provided.
In another aspect, a computer program product is provided, comprising a computer program stored in a computer readable storage medium, the computer program being read from the computer readable storage medium by a processor of a computer device, the computer program being executed by the processor to cause the computer device to perform the method of processing a visual element provided in each of the above aspects or in various alternative implementations of each of the aspects.
The embodiment of the application provides a visual element processing method. In the process of making a video, when a visual element is added to a sub-mirror, a clip instance of the visual element in that sub-mirror can be created in the already created element track, according to the layer of the visual element in the sub-mirror. Because the element track carries all clip instances of the corresponding visual element in the video, the clip instance meeting the time condition can be found by traversing the element tracks of the visual elements added in the video. Because a clip instance contains the identification and layer of its visual element, even when a visual element from a previously created sub-mirror is added, the corresponding visual element can be found in the accumulated element set and displayed according to the identification in the clip instance corresponding to the sub-mirror. This achieves the purpose of displaying the same visual element in different sub-mirrors; the operation is simple, and redundant data in the process of making the video is reduced. In addition, because the layers of the same visual element differ between sub-mirrors, the visual element can be accurately displayed in each corresponding sub-mirror through the clip instance corresponding to that sub-mirror, improving the accuracy of the video.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of an implementation environment of a method for processing a visual element according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of processing a visualization element provided in accordance with an embodiment of the present application;
FIG. 3 is a flow chart of another method of processing a visualization element provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of a sub-mirror editing interface according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an element track provided in accordance with an embodiment of the present application;
FIG. 6 is a schematic diagram of a layer track provided according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another sub-mirror editing interface according to an embodiment of the present application;
FIG. 8 is a flow chart of adding, to a sub-mirror, a visual element that already exists in another sub-mirror, according to an embodiment of the present application;
FIG. 9 is a flow chart of adding, to a sub-mirror, a visual element that no other sub-mirror has, according to an embodiment of the present application;
FIG. 10 is a schematic diagram of a layer adjustment provided according to an embodiment of the present application;
FIG. 11 is a flow chart of adjusting a layer of a visual element provided in accordance with an embodiment of the present application;
FIG. 12 is a flow chart for deleting a visual element provided in accordance with an embodiment of the present application;
FIG. 13 is a block diagram of a processing apparatus for visualization elements provided in accordance with an embodiment of the present application;
FIG. 14 is a block diagram of another visualization element processing apparatus provided in accordance with an embodiment of the present application;
fig. 15 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The terms "first," "second," and the like in this disclosure are used for distinguishing between similar elements or items having substantially the same function and function, and it should be understood that there is no logical or chronological dependency between the terms "first," "second," and "n," and that there is no limitation on the amount and order of execution.
The term "at least one" in the present application means one or more, and the meaning of "a plurality of" means two or more.
It should be noted that, the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data for analysis, stored data, presented data, etc.), and signals related to the present application are all authorized by the user or are fully authorized by the parties, and the collection, use, and processing of the related data is required to comply with the relevant laws and regulations and standards of the relevant countries and regions. For example, the visual elements involved in the present application are all acquired with sufficient authorization.
In order to facilitate understanding, terms related to the present application are explained below.
Visual element: an object that can be loaded and rendered in the video editing process, such as a background picture, a character image, an object, or text.
Sub-mirror (split mirror): one shot of the video content, arranged as a single picture; generally, a continuous video is decomposed with one shot (mirror) as the unit.
Unity: refers to a cross-platform two-dimensional and three-dimensional game engine, which can develop cross-platform games.
Flutter: refers to a toolkit for building user interfaces that can be used to create high-performance, cross-platform mobile applications.
StartTime: the start time at which an item such as a sub-mirror or a video begins playing.
Seek: rendering the content of a sub-mirror at a certain point in time, based on a time axis in milliseconds. For example, Seek(1000) renders the content of the timeline at the 1000th millisecond. In the embodiment of the application, the visual elements of the sub-mirror at a certain point in time (moment) can be displayed by seeking to that moment in the video as triggered by the user.
Instance: an abstract concept referring to an object created from a class. In the embodiment of the application, a clip instance can be created for a visual element; the clip instance includes the identification of the visual element and its layer.
Layer: the hierarchy of a visual element in the view. In the embodiment of the application, the lowest layer level in the Unity system is 0.
Timeline (time axis): the time-axis component built into Unity, composed of tracks; each frame traverses all tracks, and the length of the timeline is determined by the duration of its tracks.
Track: the entity holding clips; each frame traverses all clips, and a track extends in length as clips are added.
Clip: the entity that plays content; a clip may carry content such as audio, characters, background pictures, objects, or text.
Element set: the parent container for loading all the visual elements; the first added visual element has layer 0, each later added element's layer increases by 1, and the larger the number, the higher the hierarchy. A minimal sketch of these concepts as data structures is given below.
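For orientation, the following is a minimal sketch of how the terms above might be modeled as data structures. It is not the patent's implementation: every name in it (VisualElement, ClipInstance, ElementTrack, ElementSet) is an assumption for illustration, and the sketch is written in TypeScript rather than in the Unity/Flutter environment the embodiments describe. Later sketches in this description reuse these types.

```typescript
// Hypothetical model of the terms defined above (not the patent's actual code).

// A visual element: an object that can be loaded and rendered during editing.
interface VisualElement {
  id: string;      // identification of the visual element
  content: string; // e.g. a background picture, character image, object, or text
}

// A clip instance records which element is shown, on which layer, over a time
// interval equal to the element's display duration in the sub-mirror.
interface ClipInstance {
  elementId: string;
  layer: number;   // hierarchy in the sub-mirror (FIG. 4 uses smaller = higher)
  startMs: number; // covered interval on the timeline, in milliseconds
  endMs: number;
}

// One element track per visual element; its duration equals the video's, and
// it carries all clip instances of that element in the video.
interface ElementTrack {
  elementId: string;
  clips: ClipInstance[]; // kept in time order
}

// The element set: parent container holding every visual element added while
// making the video, keyed by element identification.
type ElementSet = Map<string, VisualElement>;
```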
The visual element processing method provided by the embodiment of the application can be executed by a computer device. In some embodiments, the computer device is a terminal or a server. In the following, taking the case where the computer device is a terminal as an example, the implementation environment of the visual element processing method provided by the embodiment of the application is introduced. Fig. 1 is a schematic diagram of an implementation environment of a visual element processing method according to an embodiment of the application. Referring to fig. 1, the implementation environment includes a terminal 101 and a server 102. The terminal 101 and the server 102 can be directly or indirectly connected through wired or wireless communication, which is not limited by the present application.
In some embodiments, the terminal 101 is, but is not limited to, a smart phone, tablet, notebook computer, desktop computer, smart speaker, smart watch, smart voice-interaction device, smart home appliance, vehicle-mounted terminal, or the like. The terminal 101 runs an application program supporting video production. The application may be a clipping application, a game application, or a multimedia application, to which embodiments of the present application are not limited. Illustratively, the terminal 101 is a terminal used by a user. The user can make a video using the above application in the terminal 101. In the process of making the video, the user can add a visual element to any sub-mirror in the video. The terminal 101 displays the added visual element in the sub-mirror of the video in response to the element-adding operation triggered by the user.
Those skilled in the art will recognize that the number of terminals may be greater or smaller. For example, there may be only one terminal, or several tens or hundreds of terminals, or more. The embodiment of the application does not limit the number of terminals or the device type.
In some embodiments, the server 102 is an independent physical server, a server cluster or distributed system composed of a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN (Content Delivery Network), big data, and artificial intelligence platforms. The server 102 is used to provide background services for the application supporting video production. In some embodiments, the server 102 takes on the primary computing work and the terminal 101 the secondary computing work; alternatively, the server 102 takes on the secondary computing work and the terminal 101 the primary computing work; alternatively, a distributed computing architecture is used for collaborative computing between the server 102 and the terminal 101.
Fig. 2 is a flowchart of a processing method of a visualization element according to an embodiment of the present application, and referring to fig. 2, in an embodiment of the present application, an example of execution by a terminal is described. The processing method of the visual element comprises the following steps:
201. The terminal displays a sub-mirror editing interface of a video, the video comprising at least one first sub-mirror and a second sub-mirror, wherein the creation time of the first sub-mirror is earlier than that of the second sub-mirror, a first visual element has been added in the first sub-mirror, the video is associated with an element set, and the element set comprises the visual elements added in the process of making the video.
In the embodiment of the application, in the process of making the video, a user can divide the video into a plurality of sub-mirrors through the terminal, edit the sub-mirrors respectively, and finally synthesize them into the video. In the process of editing any sub-mirror, the terminal can add a visual element to the sub-mirror. The visual element may be a text map or a pattern map, etc., which is not limited by the embodiments of the present application. In the process of making a video, the visual elements added in the sub-mirrors of the video form an element set. The video includes at least one edited first sub-mirror and a second sub-mirror to be edited. The terminal displays the second sub-mirror to be edited in the sub-mirror editing interface of the video. The visual elements added in different sub-mirrors may be the same or different, which is not limited by the embodiment of the present application. The first sub-mirror includes a first visual element. The terminal may add the visual element added in the first sub-mirror to the second sub-mirror.
202. In response to the first visual element being added in the second sub-mirror based on the sub-mirror editing interface, the terminal determines a first layer, the first layer being the layer of the first visual element in the second sub-mirror.
In the embodiment of the application, the terminal can add the first visual element to the second sub-mirror in response to an element-adding operation triggered based on the sub-mirror editing interface. In the process of adding the first visual element to the second sub-mirror, the terminal determines the layer of the first visual element in the second sub-mirror, that is, the first layer. The embodiment of the application does not limit the value of the first layer. A layer refers to the hierarchy of a visual element in the view of the sub-mirror. Within the same sub-mirror, different visual elements have different layers.
203. The terminal creates a first clip instance in the element track of the first visual element based on the first layer, wherein the element track is used to carry the clip instances of the first visual element existing in the video, the duration of the element track is equal to the duration of the video, and the first clip instance comprises the identification of the first visual element and the first layer.
In the embodiment of the application, the terminal can create a corresponding element track for each added visual element. For any visual element, the duration of its element track is equal to the duration of the video, and the track accommodates the clip instances the corresponding visual element has created within the video. A clip instance of a visual element indicates the effect the element presents in a sub-mirror, and includes the identification of the visual element and its layer in the sub-mirror. The element track of the first visual element was created when the first visual element was first added to the video. In the process of adding the first visual element to the second sub-mirror, the terminal can create the first clip instance of the first visual element in the second sub-mirror in the already created element track, according to the layer of the first visual element in the second sub-mirror.
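As a hedged illustration of step 203, reusing the hypothetical types sketched after the term list: the function below creates a clip instance for the first visual element in its already existing element track, covering the second sub-mirror's time interval. createClipInstance and its parameters are assumptions, not the patent's API.

```typescript
// Sketch of step 203: create a clip instance in an existing element track.
function createClipInstance(
  track: ElementTrack,
  subMirror: { startMs: number; endMs: number },
  layer: number // the "first layer" determined in step 202
): ClipInstance {
  const clip: ClipInstance = {
    elementId: track.elementId, // identification of the first visual element
    layer,
    startMs: subMirror.startMs, // clip duration equals the element's display
    endMs: subMirror.endMs,     // duration in the sub-mirror
  };
  track.clips.push(clip);
  return clip;
}
```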
204. For any moment corresponding to the second sub-mirror, the terminal traverses the element tracks of the visual elements added in the video, and finds the visual elements in the second sub-mirror from the element set and displays them based on the clip instances corresponding to the second sub-mirror in the traversed element tracks, the clip instances corresponding to the second sub-mirror including the first clip instance.
In the embodiment of the application, the duration of a clip instance is equal to the display duration of the corresponding visual element in the sub-mirror. In the process of displaying the second sub-mirror, for any moment currently corresponding to the second sub-mirror, the terminal can traverse the element tracks of the visual elements added in the video to find the clip instances covering the current moment. The clip instances covering the current moment are the clip instances corresponding to the visual elements in the second sub-mirror. The terminal then displays the added visual elements in the second sub-mirror in the sub-mirror editing interface according to the identification and layer in each found clip instance. That is, the terminal can display the first visual element in the second sub-mirror according to the first clip instance.
The embodiment of the application provides a visual element processing method. In the process of making a video, when a visual element that exists in a previously created sub-mirror is added to a sub-mirror, a clip instance of the visual element in that sub-mirror can be created in the already created element track, according to the layer of the visual element in the sub-mirror. Because the element track carries all clip instances of the corresponding visual element in the video, the clip instance meeting the time condition can be found by traversing the element tracks of the visual elements added in the video. Because a clip instance contains the identification and layer of its visual element, even when a visual element from a previously created sub-mirror is added, the corresponding visual element can be found in the accumulated element set according to the identification in the clip instance corresponding to the sub-mirror, achieving the purpose of displaying the same visual element in different sub-mirrors without copying the visual element to be added; the operation is simple, and redundant data in the process of making the video is reduced. In addition, because the layers of the same visual element differ between sub-mirrors, the visual element can be accurately displayed in each corresponding sub-mirror through the clip instance corresponding to that sub-mirror, improving the accuracy of the video.
Fig. 3 is a flowchart of another visual element processing method according to an embodiment of the present application. Referring to fig. 3, the method is described as being executed by a terminal, and comprises the following steps:
301. The terminal displays a sub-mirror editing interface of a video, the video comprising at least one first sub-mirror and a second sub-mirror, wherein the creation time of the first sub-mirror is earlier than that of the second sub-mirror, a first visual element has been added in the first sub-mirror, the video is associated with an element set, and the element set comprises the visual elements added in the process of making the video.
In the embodiment of the application, the sub-mirror editing interface of the video is used for editing the sub-mirrors in the video. Here, "editing" may include operations such as adding a visual element or audio to a sub-mirror. The terminal can display the edited first sub-mirror in the sub-mirror editing interface, or display the second sub-mirror being edited. That is, the user can preview the edited first sub-mirror through the sub-mirror editing interface, or edit the second sub-mirror through it. The embodiment of the application does not limit the number of first sub-mirrors that have been edited. In the process of editing a sub-mirror, the terminal can add visual elements to the sub-mirror. When a visual element is added for the first time, the terminal places the visual element into the element set associated with the video, so that it can later be fetched from the element set and displayed. That is, in the process of editing any first sub-mirror, the terminal adds the first visual element to the first sub-mirror, and then places the first visual element into the element set associated with the video.
For example, fig. 4 is a schematic diagram of a sub-mirror editing interface according to an embodiment of the present application. Referring to fig. 4, the sub-mirror editing interface includes a display area and an editing area. The first sub-mirror is shown in the display area. The first sub-mirror includes four visual elements, namely a "table" map 401, a "2021" map 402, a "dog" map 403, and a "doctor" map 404. The editing area is used for editing the first sub-mirror, for example by adding a visual element to it. The layer of the "table" map is 4; the layer of the "2021" map is 3; the layer of the "dog" map is 1; the layer of the "doctor" map is 2. The smaller the layer number, the higher the layer. Accordingly, the "dog" map 403 is displayed above the "doctor" map 404; that is, the "dog" map 403 may occlude the display of the "doctor" map 404.
302. In response to the first visual element being added in the second sub-mirror based on the sub-mirror editing interface, the terminal determines a first layer, the first layer being the layer of the first visual element in the second sub-mirror.
In the embodiment of the application, in the process of editing the second sub-mirror, the terminal can add the same visual element as that in the first sub-mirror to the second sub-mirror, or can add a different visual element from that in the first sub-mirror to the second sub-mirror, and the embodiment of the application is not limited to this. In the case of adding a visualization element to the second sub-mirror, the terminal can determine the layer of the newly added visualization element. Accordingly, in the case of adding the same first visual element as in the first sub-mirror to the second sub-mirror, the terminal determines the first layer of the first visual element in the second sub-mirror.
In some embodiments, at least one second visual element has also been added to the second sub-mirror. A second visual element is a visual element added in the second sub-mirror before the first visual element was added. In the second sub-mirror, each second visual element corresponds to a layer. The terminal can adjust the layers of the second visual elements in the second sub-mirror according to the first layer of the newly added first visual element. Correspondingly, the terminal updates the layers of the second visual elements in the second sub-mirror based on the first layer of the first visual element in the second sub-mirror. In the scheme provided by the embodiment of the application, the layers of the other added visual elements in the sub-mirror are updated according to the layer of the newly added visual element. This avoids the situation where fixed layers would place multiple visual elements on the same layer and prevent them from being displayed accurately; and because the layers of all visual elements in the sub-mirror are adjusted as elements are added, all visual elements sit on different layers, which improves display accuracy and reduces the user's adjustment operations.
Optionally, based on the first layer of the first visual element in the second sub-mirror, the terminal may increment the layer of each second visual element in the second sub-mirror by one, as sketched below.
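A minimal sketch of that optional behavior, under the same hypothetical model: the new element takes the top layer and every element already in the sub-mirror is pushed one layer down. insertAtTopLayer is an assumed name, and the convention "smaller number = higher layer" follows FIG. 4.

```typescript
// Sketch: give the newly added element layer 1 and increment the layer of each
// clip instance already in the second sub-mirror by one.
function insertAtTopLayer(
  clipsInSubMirror: ClipInstance[], // clip instances of the second visual elements
  newClip: ClipInstance             // clip instance of the first visual element
): void {
  for (const clip of clipsInSubMirror) {
    clip.layer += 1; // update the layer recorded in the clip instance
  }
  newClip.layer = 1; // smaller number = higher layer in this convention
  clipsInSubMirror.push(newClip);
}
```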
In the process of determining the first layer, the first layer may be a preset value, for example, the terminal sets the layer of the newly added first visualization element in the second sub-mirror to be 1. The first layer may also be user-defined. Alternatively, the first layer may be determined based on the layer of the first visualization element in the previous sub-mirror of the second sub-mirror, which embodiments of the present application are not limited in this respect.
In some embodiments, the first layer may be determined based on the layer of the first visual element in the sub-mirror preceding the second sub-mirror. A third visual element has been added to the second sub-mirror. Both the first visual element and the third visual element were added in a third sub-mirror. The third sub-mirror is adjacent to the second sub-mirror, and the creation time of the third sub-mirror is earlier than that of the second sub-mirror. Correspondingly, in response to the first visual element being added in the second sub-mirror based on the sub-mirror editing interface, the terminal determines the first layer as follows: the terminal obtains the layer relationship between the first visual element and the third visual element in the third sub-mirror, and then determines the first layer based on that layer relationship and the second layer of the third visual element in the second sub-mirror. The layer relationship may be the above-below relationship between the first and third visual elements in the third sub-mirror, or the layer difference between them, which is not limited by the embodiment of the present application. Because adjacent sub-mirrors in a video generally have content continuity, the same visual elements usually appear in adjacent sub-mirrors and the layer relationship between them changes little or not at all; determining the layer of a visual element in the sub-mirror being edited from its layer in the previous sub-mirror therefore makes the determined layer more likely to meet the user's expectation, reduces subsequent layer adjustment operations, and can improve the efficiency of making the video.
For example, a "dog" map and a "doctor" map are included in the third partial mirror. The layer of the 'dog' map in the third split mirror is 3; the "doctor" map has a layer 2 in the third split mirror. It follows that the "doctor" map is on the upper layer of the "dog" map. In the case where the "doctor" map has been added to the second minute and the "doctor" map has a layer 1 in the second minute, the terminal may determine that the layer of the "dog" map is 2 in the next layer of the "doctor" map when the "dog" map is added to the second minute.
In some embodiments, the terminal may set the first layer of the newly added first visual element through Flutter (the toolkit for building user interfaces described above). Through Flutter, the terminal then traverses and updates the layers of the second visual elements in the second sub-mirror. The terminal then issues the first layer of the first visual element and the updated layers of the second visual elements to Unity (the engine), and Unity displays the visual elements based on their layers.
303. The terminal creates a first clip instance in the element track of the first visual element based on the first layer, wherein the element track is used to carry the clip instances of the first visual element existing in the video, the duration of the element track is equal to the duration of the video, and the first clip instance comprises the identification of the first visual element and the first layer.
In the embodiment of the application, the terminal creates element tracks with the visual element as the dimension, and creates clip instances with the time period as the dimension. That is, each visual element corresponds to one element track. When a visual element is added to a sub-mirror in the video for the first time, the terminal creates the element track of that visual element, and can then create a clip instance of the visual element in the element track. The duration of the clip instance is the display duration of the visual element in the sub-mirror. Because the first visual element was added in the first sub-mirror, whose creation time is earlier than that of the second sub-mirror, the element track of the first visual element already exists. Accordingly, in the process of adding the first visual element to the second sub-mirror, the terminal can create the first clip instance of the first visual element in the second sub-mirror in the element track of the first visual element, according to the first layer.
The element track of the first visual element also contains a second clip instance, namely the clip instance of the first visual element in the first sub-mirror. The second clip instance was created earlier than the first clip instance. Any clip instance comprises the identification of the corresponding visual element and its layer in the sub-mirror where it is located. That is, the first clip instance includes the identification of the first visual element and the first layer of the first visual element in the second sub-mirror; the second clip instance includes the identification of the first visual element and the layer of the first visual element in the first sub-mirror.
For example, fig. 5 is a schematic diagram of element tracks provided according to an embodiment of the present application. Referring to FIG. 5, the time interval of the first sub-mirror is 0:00 to 0:02. The first sub-mirror includes four visual elements: the "table" map, the "2021" map, the "dog" map, and the "doctor" map. In the first sub-mirror, the layer of the "table" map is 4; the layer of the "2021" map is 3; the layer of the "dog" map is 1; the layer of the "doctor" map is 2. The time interval of the second sub-mirror is 0:02 to 0:03. The second sub-mirror includes four visual elements: the "dog" map, the "doctor" map, the "fortune" map, and the "podium" map. In the second sub-mirror, the layer of the "dog" map is 2; the layer of the "doctor" map is 1; the layer of the "fortune" map is 3; the layer of the "podium" map is 4. Both the "dog" map and the "doctor" map can be regarded as the first visual element mentioned above. Taking the "doctor" map as an example, the terminal creates a "doctor-track" when adding the "doctor" map to the first sub-mirror, and creates clip instance 1 of the "doctor" map in the first sub-mirror in the "doctor-track". When the same "doctor" map is added to the second sub-mirror, the terminal creates clip instance 2 of the "doctor" map in the second sub-mirror in the already created "doctor-track".
In some embodiments, at least one second visual element has also been added to the second sub-mirror, i.e., a visual element added in the second sub-mirror before the first visual element was added. The terminal can adjust the layer of each second visual element in the second sub-mirror according to the first layer of the newly added first visual element. Since the layer of a second visual element in the second sub-mirror is recorded in its clip instance, the terminal updates the clip instance of the second visual element: that is, the terminal updates the clip instance of the second visual element in the element track of the second visual element based on the updated layer. In the scheme provided by the embodiment of the application, because the layer of a visual element is recorded in its clip instance and the element is displayed through the clip instance, the terminal updates the clip instance whenever the layer is updated, ensuring the accuracy of displaying the visual element.
In some embodiments, after receiving the information issued by Flutter, Unity in the terminal parses the data of the Add behavior in the information to obtain the first layer of the first visual element in the second sub-mirror. The terminal then creates the element track of the first visual element (if it does not yet exist) and creates the first clip instance of the first visual element in the second sub-mirror in that element track; the first clip instance includes the first layer. The terminal can also parse the data of the Modify behavior in the information to obtain the updated layer of a second visual element in the second sub-mirror. The terminal then finds the corresponding clip instance through the identification of the second visual element and the identification of the clip instance, and modifies the layer recorded in that clip instance. The Add behavior belongs to the editing operations performed on the sub-mirror.
304. For any moment corresponding to the second sub-mirror, the terminal traverses the element tracks of the visual elements added in the video in the order in which the tracks were created.
In the embodiment of the application, the terminal can display the sub-mirrors at different moments based on the time axis of the video. For any moment within the time interval of the second sub-mirror, the terminal traverses the element tracks of the visual elements added in the video, in the order in which the tracks were created, to find the clip instances covering that moment. For any element track, the terminal may stop traversing the track after checking every clip instance in it against the moment. Alternatively, since the duration of a clip instance equals the duration of its sub-mirror, any visual element has at most one clip instance per sub-mirror; during the traversal of any element track the terminal can therefore find at most one clip instance covering the moment, and may stop traversing the current track as soon as one covering clip instance is detected. The embodiment of the application does not limit the manner of traversing the element tracks.
In some embodiments, the terminal stops traversing the current element track upon detecting a clip instance covering the moment. Correspondingly, the terminal traverses the element tracks of the added visual elements in the video as follows: for the element track of any visual element added in the video, the terminal checks, in the time order of the clip instances in the track, whether each clip instance covers the moment; when any clip instance is detected to cover the moment, the terminal stops traversing that element track. In the scheme provided by the embodiment of the application, because at most one clip instance per track can cover the moment, the terminal stops traversing the current element track as soon as a covering clip instance is detected, without checking the subsequent clip instances, which reduces runtime cost and improves the efficiency of making the video.
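A hedged sketch of this early-stopping traversal of steps 304-305, reusing the hypothetical types above; findClipsAtMoment is an assumed name.

```typescript
// Sketch of steps 304-305: for a moment t within the second sub-mirror, walk
// the element tracks in creation order; within a track, scan clip instances in
// time order and stop at the first one covering t (a track holds at most one
// clip instance per sub-mirror).
function findClipsAtMoment(tracks: ElementTrack[], t: number): ClipInstance[] {
  const hits: ClipInstance[] = [];
  for (const track of tracks) {       // tracks in creation order
    for (const clip of track.clips) { // clip instances in time order
      if (clip.startMs <= t && t < clip.endMs) {
        hits.push(clip); // the element's clip instance in the current sub-mirror
        break;           // early stop: no second match is possible in this track
      }
    }
  }
  return hits;
}
```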
In some embodiments, the tracks associated with the video include, in addition to the element tracks described above, a layer track. The layer track is the most recently created track, so during traversal the terminal reaches the layer track last. The layer track indicates that all element tracks have been traversed and that the terminal can now render and display the visual elements according to the found clip instances covering the moment. The layer track contains a layer clip instance, which indicates the layer rendering logic of the visual elements in the second sub-mirror. The embodiment of the application does not limit the length of the time interval corresponding to the layer clip instance. In the scheme provided by the embodiment of the application, the layer track signals the terminal that traversal is finished, so that the logic for rendering the visual elements is executed only after all element tracks have been traversed, preventing visual elements from being omitted and ensuring the accuracy of the display.
For example, fig. 6 is a schematic diagram of a layer track according to an embodiment of the present application. Referring to fig. 6, the layer track sits below all the element tracks, being the most recently created track. During track traversal, the terminal traverses from top to bottom and stops after reaching the layer track. That is, the terminal traverses the tracks in the order "table-track", "2021-track", "dog-track", "doctor-track", "fortune-track", "podium-track", and finally the layer track. Taking "doctor-track" as an example, while traversing it the terminal checks whether clip instance 1 and clip instance 2 in "doctor-track" cover time T, and determines that clip instance 2 does. The terminal then records layer 1 from clip instance 2. The layer clip instance of the layer track indicates the layer rendering logic of the visual elements in the second sub-mirror, and the terminal renders and displays each visual element in the second sub-mirror based on that logic.
To ensure that the layer track is always the most recently created track, the terminal can update the layer track based on the element tracks. Correspondingly, for any visual element, when the visual element is added for the first time, the terminal creates the element track of the visual element and then reconstructs the layer track: if a layer track already exists, the terminal deletes the original layer track and its layer clip instance and creates new ones; if no layer track exists yet, the terminal creates the layer track and the layer clip instance after creating the element track. In the scheme provided by the embodiment of the application, reconstructing the layer track after each element track is created guarantees that the layer track is the most recently created track, so that during subsequent traversal the layer track can indicate whether all element tracks have been traversed, no visual element is omitted, and the accuracy of the display is ensured.
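A minimal sketch of this reconstruction, under the stated assumption that tracks are traversed in creation order; LayerTrack and rebuildLayerTrack are hypothetical names.

```typescript
// Hypothetical layer track: carries the layer clip instance that triggers
// layer-ordered rendering once every element track has been traversed.
interface LayerTrack {
  kind: "layer";
}

// Sketch: whenever an element track is created, discard any existing layer
// track and append a fresh one, so the layer track is always the most recently
// created track and therefore the last one visited.
function rebuildLayerTrack(tracks: (ElementTrack | LayerTrack)[]): void {
  const elementTracks = tracks.filter((tr) => !("kind" in tr)); // drop the old layer track
  tracks.length = 0;
  tracks.push(...elementTracks, { kind: "layer" }); // re-append the layer track last
}
```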
305. The terminal determines, from the element tracks of the visual elements added in the video, the clip instances covering the moment as the clip instances of the respective visual elements in the second sub-mirror.
In the embodiment of the application, while traversing the element tracks of the visual elements added in the video, the terminal finds, from the tracks associated with the video, the clip instances covering the moment. These clip instances are the clip instances of the visual elements in the second sub-mirror. The moment refers to any time corresponding to the second sub-mirror, that is, the time at which the element tracks are traversed in step 304. The moment may be a time designated by the user (seek) or a time reached during automatic playback of the video (play), which is not limited in the embodiment of the present application. In the subsequent step 306, the terminal displays the video content corresponding to the moment, that is, the visual elements of the second sub-mirror at that moment.
In some embodiments, the terminal may determine the moment based on a Timeline control (the Unity built-in Timeline component). The terminal then traverses all tracks associated with the video based on the moment, finding the clip instances covering the moment. The terminal may then record the layers and visual elements contained in those clip instances via a map (a container that stores data by key) provided by a layer processor (Processor). The map records data as key-value pairs: the key is the layer, and the value is the identifier of the visual element.
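As an illustrative sketch only, reusing the hypothetical ClipInstance and find_clip_at above, the map kept by the layer processor can be mimicked as a plain dictionary:

```python
from typing import Dict, List

def collect_layers(tracks: Dict[str, List[ClipInstance]], t: float) -> Dict[int, str]:
    # key: layer, value: identifier of the visual element, mirroring the
    # map kept by the layer processor.
    layer_map: Dict[int, str] = {}
    for clips in tracks.values():
        clip = find_clip_at(clips, t)  # at most one clip covers the moment
        if clip is not None:
            layer_map[clip.layer] = clip.element_id
    return layer_map
```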
306. The terminal finds the visual elements in the second sub-mirror from the element set based on the clip instances of the visual elements in the second sub-mirror, and displays them.
In the embodiment of the application, for the clip instance of any visual element in the second sub-mirror, the terminal finds the visual element from the element set associated with the video based on the identifier of the visual element in the clip instance. The terminal then displays the visual element in the second sub-mirror based on the layer recorded in the clip instance.
For example, fig. 7 is a schematic diagram of another sub-mirror editing interface according to an embodiment of the present application. Referring to fig. 7, a second sub-mirror is shown in the display area of the sub-mirror editing interface. The second sub-mirror includes four visual elements, namely a "dog" map 701, a "doctor" map 702, a "Fu" (good fortune) map 703, and a "podium" map 704. The layer of the "dog" map is 2; the layer of the "doctor" map is 1; the layer of the "Fu" map is 3; the layer of the "podium" map is 4. The smaller the layer number, the higher the layer. From top to bottom, the layer order of the visual elements is the "doctor" map 702, the "dog" map 701, the "Fu" map 703, and the "podium" map 704. A visual element on an upper layer may obscure the display of a visual element on a lower layer.
In some embodiments, the terminal can display the visual elements in the second sub-mirror in the order of their layers. Correspondingly, the terminal displays the visual elements in the second sub-mirror as follows: for any visual element in the second sub-mirror, the terminal determines the layer of the visual element based on its clip instance; then, based on the layers of the visual elements in the second sub-mirror and the element set, the terminal renders and displays the visual elements one by one from bottom to top. According to the scheme provided by the embodiment of the application, rendering the visual elements from the bottom layer upward means the bottom-most visual element is rendered first and the other visual elements are rendered progressively upward. This avoids the flicker that occurs when an upper-layer visual element is displayed first, improves the display effect, and makes the display of the visual elements more stable.
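A hedged sketch of this bottom-to-top rendering order follows, with print standing in for the actual rendering call; all names are hypothetical and build on the layer map above:

```python
from typing import Dict

def render_bottom_up(layer_map: Dict[int, str], element_set: Dict[str, str]) -> None:
    # A smaller layer number means a higher layer, so the bottom-most
    # element has the largest layer number; rendering in descending order
    # of layer number draws the bottom layer first and the top layer last.
    for layer in sorted(layer_map, reverse=True):
        element = element_set[layer_map[layer]]  # look up element by its identifier
        print(f"render layer {layer}: {element}")  # placeholder for the draw call
```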
In some embodiments, the terminal may use Unity to render each visual element based on its layer in the second sub-mirror. The terminal calculates, from the layer of each visual element in the second sub-mirror, a Unity system layer conforming to the Unity specification, and renders the visual element in the current sub-mirror through that Unity system layer. For any visual element in the second sub-mirror, the terminal subtracts the layer of the visual element from the number of visual elements in the current element set to obtain the Unity system layer of the visual element. Unity records layers for all the visual elements added in the video; while any one sub-mirror is displayed, some visual elements that were added only in other sub-mirrors still exist but are not displayed in the current sub-mirror. The layers of the visual elements represent the layer relationship among the visual elements that can be displayed and belong to custom layer logic.
For example, the second sub-mirror includes four visual elements, namely the "dog" map, the "doctor" map, the "Fu" map, and the "podium" map. Since the first sub-mirror was created before the second sub-mirror, the element set at this point includes six visual elements: the "table" map, the "2021" map, the "dog" map, the "doctor" map, the "Fu" map, and the "podium" map. That is, the number of visual elements in the element set is 6. Taking the "doctor" map as an example, the "doctor" map is the newly added visual element in the second sub-mirror. The terminal sets the layer of the "doctor" map in the second sub-mirror to 1 through the router. The terminal can then determine the Unity system layer of the "doctor" map as 6-1=5.
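The calculation reduces to a single subtraction; the sketch below only restates the arithmetic from the text and uses hypothetical names:

```python
def unity_system_layer(custom_layer: int, element_count: int) -> int:
    # Unity system layer = number of visual elements in the element set
    # minus the custom layer of the element in the current sub-mirror.
    return element_count - custom_layer

# Worked example from the text: 6 elements in the set, "doctor" map on
# custom layer 1 -> Unity system layer 5.
assert unity_system_layer(1, 6) == 5
```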
In some embodiments, the tracks associated with the video include a layer track in addition to the element tracks described above. During track traversal, the terminal executes the rendering logic of the visual elements after traversing to the layer track last. Correspondingly, upon traversing to the layer track, the terminal finds the visual elements in the second sub-mirror from the element set based on the clip instances corresponding to the second sub-mirror in the traversed element tracks, and displays them. The scheme provided by the embodiment of the application avoids missing visual elements and ensures the accuracy of the visual element display.
In order to describe more clearly the process of adding a visual element that already exists in another sub-mirror, the above process is described again with reference to the accompanying drawings. Fig. 8 is a flowchart of adding, to a sub-mirror, a visual element that already exists in another sub-mirror according to an embodiment of the present application. Referring to fig. 8: 801. The terminal notifies Unity through the router to add a new clip instance for the already-added first visual element, and issues the first layer of the first visual element in the second sub-mirror. 802. The terminal finds the element track of the first visual element through Unity and adds a new clip instance to that element track; the first layer is saved in the new clip instance. 803. The terminal modifies the clip instance of the second visual element in the second sub-mirror through Unity. 804. The terminal determines the moment through the Timeline control and traverses all tracks associated with the video based on the moment. 805. The terminal records, through the map provided by the layer processor (Processor), the layers and visual elements contained in the clip instances covering the moment. 806. The terminal renders and displays based on the recorded layers and visual elements.
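Step 802 can be sketched as follows, again using the hypothetical ClipInstance above; this is an illustration only, not the actual Unity-side implementation:

```python
from typing import Dict, List

def add_existing_element(tracks: Dict[str, List[ClipInstance]], element_id: str,
                         layer: int, start: float, end: float) -> None:
    # Step 802: find the element track of the already-added element and
    # append a new clip instance that saves its layer in the new sub-mirror.
    tracks[element_id].append(ClipInstance(element_id, layer, start, end))
    # (Step 803 would then update the clip instances of the other elements
    # already in the sub-mirror; that part is omitted here.)
```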
The terminal may also add, to a sub-mirror, a visual element that does not exist in any other sub-mirror, that is, a visual element added to a sub-mirror of the video for the first time. In order to describe more clearly the process of adding a visual element for the first time, the process is described below with reference to the accompanying drawings. Fig. 9 is a flowchart of adding, to a sub-mirror, a visual element not included in other sub-mirrors according to an embodiment of the present application. Referring to fig. 9: 901. The terminal notifies Unity through the router to add a fourth visual element, and issues the layer of the fourth visual element in the second sub-mirror. 902. The terminal creates an element track for the fourth visual element through Unity and creates a clip instance on that element track; the clip instance saves the layer of the fourth visual element in the second sub-mirror. 903. The terminal deletes the original layer track and layer clip instance. 904. The terminal creates a new layer track and layer clip instance. 905. The terminal determines the moment through the Timeline control and traverses all tracks associated with the video based on the moment. 906. The terminal records, through the map provided by the layer processor (Processor), the layers and visual elements contained in the clip instances covering the moment. 907. The terminal renders and displays based on the recorded layers and visual elements.
In some embodiments, the user may also adjust, through the terminal, the layer of a visual element in any sub-mirror. Correspondingly, the terminal adjusts the layer of a visual element as follows: for any visual element in the first sub-mirror, in response to a selection operation on the visual element, the terminal displays a single-layer adjustment control for the visual element; then, in response to a trigger operation on the single-layer adjustment control, the terminal adjusts the layer of the visual element in the layer adjustment direction corresponding to the single-layer adjustment control. The single-layer adjustment control may be a control for moving the element up one layer or a control for moving it down one layer, which is not limited in the embodiment of the present application. According to the scheme provided by the embodiment of the application, the single-layer adjustment control allows the user to adjust the layer of a visual element layer by layer, so that the granularity of a single adjustment is finer and the adjustment is more convenient for the user.
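A minimal sketch of the "move down one layer" adjustment on the hypothetical layer map above; recall that a larger layer number means a lower layer:

```python
from typing import Dict

def move_down_one_layer(layer_map: Dict[int, str], element_id: str) -> None:
    # Moving an element down one layer increases its layer number by one;
    # if another element occupies that layer, the two swap layers.
    current = next(l for l, e in layer_map.items() if e == element_id)
    below = current + 1
    if below in layer_map:
        layer_map[current], layer_map[below] = layer_map[below], layer_map[current]
    else:
        layer_map[below] = layer_map.pop(current)
```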
For example, fig. 10 is a schematic diagram of layer adjustment according to an embodiment of the present application. Referring to fig. 10, in response to a selection operation on the "doctor" map 1001, the terminal displays a layer adjustment control 1002 for the "doctor" map. The layer adjustment control 1002 may be displayed in the form of a bubble, which is not limited in the embodiment of the present application. Then, with the layer adjustment control 1002 triggered, the terminal displays a single-layer adjustment control 1003 for "move down one layer". In response to a trigger operation on the single-layer adjustment control 1003, the terminal moves the layer of the "doctor" map 1001 down one layer. While the "doctor" map 1001 is selected, the terminal may also display a delete control 1004, a rotate control 1005, and a mirror control 1006, which is not limited in the embodiment of the present application. The delete control 1004 is used to delete the visual element, the rotate control 1005 is used to rotate the visual element, and the mirror control 1006 is used to mirror the visual element.
In order to describe more clearly the process of adjusting the layer of a visual element, the above process is described again with reference to the accompanying drawings. Fig. 11 is a flowchart of adjusting the layer of a visual element according to an embodiment of the present application. Referring to fig. 11: 1101. The fifth visual element is selected through Unity on the terminal, and the single-layer adjustment control is triggered to adjust the layer of the fifth visual element. 1102. The terminal sends the layers of the visual elements being displayed in the element set to the router. 1103. The terminal updates the layers of the visual elements being displayed through the router and sends the updated layers to Unity. 1104. The terminal updates the clip instances of the visual elements being displayed through Unity. 1105. The terminal determines the moment through the Timeline control and traverses all tracks associated with the video based on the moment. 1106. The terminal records, through the map provided by the layer processor (Processor), the layers and visual elements contained in the clip instances covering the moment. 1107. The terminal renders and displays based on the recorded layers and visual elements.
In some embodiments, the user may also delete a visual element in a sub-mirror as needed. Correspondingly, in response to a deletion operation on any visual element, the terminal no longer displays that visual element. In order to describe more clearly the process of deleting a visual element, the process is described again with reference to the accompanying drawings. Fig. 12 is a flowchart of deleting a visual element according to an embodiment of the present application. Referring to fig. 12: 1201. The sixth visual element is selected through Unity on the terminal, and the delete control is triggered. 1202. The terminal sends the identifier of the sixth visual element and the identifier of its clip instance to the router. 1203. The terminal updates the layers of the visual elements being displayed through the router and sends the updated layers to Unity. 1204. The terminal updates the clip instances of the visual elements being displayed through Unity. 1205. The terminal determines the moment through the Timeline control and traverses all tracks associated with the video based on the moment. 1206. The terminal records, through the map provided by the layer processor (Processor), the layers and visual elements contained in the clip instances covering the moment. 1207. The terminal renders and displays based on the recorded layers and visual elements.
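A simplified sketch of the deletion flow on the hypothetical structures above; here deletion removes the element's clip instances entirely, whereas steps 1202-1204 update them per sub-mirror:

```python
from typing import Dict, List

def delete_element(tracks: Dict[str, List[ClipInstance]],
                   layer_map: Dict[int, str], element_id: str) -> None:
    # Remove the clip instances of the deleted element from its element
    # track, then drop it from the layer map so the next render pass no
    # longer displays it.
    tracks[element_id].clear()
    for layer in [l for l, e in layer_map.items() if e == element_id]:
        del layer_map[layer]
```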
The embodiment of the application provides a processing method for a visual element. When a visual element is added to a sub-mirror of a video during video production, a clip instance of the visual element in that sub-mirror can be created in the already-created element track according to the layer of the visual element in the sub-mirror. Because the element track carries all clip instances of the corresponding visual element in the video, the clip instance meeting the time condition can be found by traversing the element tracks of the visual elements added in the video. Because a clip instance contains the identifier and layer of its visual element, even when a visual element from a previously created sub-mirror is added, the corresponding visual element can be found from the formed element set according to the identifier in the clip instance corresponding to the sub-mirror and displayed. This achieves displaying the same visual element in different sub-mirrors with simple operations while reducing redundant data transmission during video production. In addition, because the layers of the same visual element differ across sub-mirrors, the visual element can be accurately displayed in each corresponding sub-mirror through the clip instance of the visual element corresponding to that sub-mirror, improving the accuracy of the video.
Fig. 13 is a block diagram of a processing apparatus for a visual element according to an embodiment of the present application. The apparatus is configured to perform the steps of the above processing method for a visual element. Referring to fig. 13, the apparatus includes: a display module 1301, a determination module 1302, and a first creation module 1303.
The display module 1301 is configured to display a sub-mirror editing interface of a video, where the video includes at least one first sub-mirror and a second sub-mirror, the creation time of the first sub-mirror is earlier than that of the second sub-mirror, a first visual element has been added in the first sub-mirror, the video is associated with an element set, and the element set includes the visual elements added in the process of making the video;
a determining module 1302, configured to determine, in response to adding the first visual element in the second sub-mirror based on the sub-mirror editing interface, a first layer, the first layer being a layer of the first visual element in the second sub-mirror;
the first creating module 1303 is configured to create, based on the first layer, a first clip instance in an element track of the first visual element, where the element track is used to carry an existing clip instance of the first visual element in the video, and a duration of the element track is equal to a duration of the video, and the first clip instance includes an identifier of the first visual element and the first layer;
The display module 1301 is further configured to traverse, for any time corresponding to the second sub-mirror, an element track of the added visual element in the video, find, from the element set, the visual element in the second sub-mirror based on the clip instance corresponding to the second sub-mirror in the traversed element track, and display, where the clip instance corresponding to the second sub-mirror includes the first clip instance.
In some embodiments, FIG. 14 is a block diagram of another processing device for visualizing elements provided in accordance with an embodiment of the application. Referring to fig. 14, a display module 1301 includes:
a traversing unit 13011, configured to traverse, for any time corresponding to the second sub-mirror, the element tracks of the added visual elements in the video according to the creation sequence of the tracks;
a determining unit 13012, configured to determine, from the element tracks of the visual elements added in the video, the clip instances covering the moment as the clip instances of the respective visual elements in the second sub-mirror;
and a display unit 13013 for finding and displaying the visual elements in the second sub-mirror from the element set based on the clipping instances of the visual elements in the second sub-mirror.
In some embodiments, with continued reference to fig. 14, the display unit 13013 is configured to determine, for any visual element in the second sub-mirror, the layer of the visual element based on the clip instance of the visual element; and, based on the layers of the visual elements in the second sub-mirror and the element set, render and display the visual elements sequentially from bottom to top.
In some embodiments, with continued reference to fig. 14, the display module 1301 is configured to, for the element track of any visual element added in the video, sequentially detect, in the time order of the clip instances in the element track, whether the clip instances in the element track cover the moment; and, in the case that any clip instance is detected to cover the moment, stop traversing the element track.
In some embodiments, a third visual element has been added in the second sub-mirror, the first visual element and the third visual element have been added in a third sub-mirror, the third sub-mirror is adjacent to the second sub-mirror, and the creation time of the third sub-mirror is earlier than that of the second sub-mirror;
with continued reference to fig. 14, the determining module 1302 is configured to acquire, in response to adding the first visual element in the second sub-mirror based on the sub-mirror editing interface, a layer relationship between the first visual element and the third visual element in the third sub-mirror; and determine the first layer based on the layer relationship and a second layer of the third visual element in the second sub-mirror.
In some embodiments, with continued reference to fig. 14, the apparatus further comprises:
a second creation module 1304, configured to create, for any visual element, an element track of the visual element if the visual element is added for the first time;
The second creating module 1304 is further configured to reconstruct a layer track, where the layer track includes a layer clip instance, and the layer clip instance is configured to indicate the layer rendering logic of each visual element in the second sub-mirror;
the display module 1301 is configured to find, based on the clip instance corresponding to the second sub-mirror in the traversed element track, and display a visual element in the second sub-mirror from the element set when traversing to the layer track.
In some embodiments, with continued reference to fig. 14, the apparatus further comprises:
the updating module 1305 is configured to update, based on a first layer of the first visual element in the second sub-mirror, a layer of the second visual element in the second sub-mirror, where the second visual element is a visual element that has been added in the second sub-mirror before the first visual element is added;
the updating module 1305 is further configured to update the clip instance of the second visual element in the element track of the second visual element.
In some embodiments, with continued reference to fig. 14, the apparatus further comprises:
the display module 1301 is configured to, for any one of the visualization elements in the first sub-mirror, respond to a selection operation on the visualization element, and display a single-layer adjustment control of the visualization element;
The adjustment module 1306 is configured to adjust a layer of the visualization element according to a layer adjustment direction corresponding to the single-layer adjustment control in response to a triggering operation on the single-layer adjustment control.
The embodiment of the application provides a processing apparatus for a visual element. When a visual element is added to a sub-mirror of a video during video production, a clip instance of the visual element in that sub-mirror can be created in the already-created element track according to the layer of the visual element in the sub-mirror. Because the element track carries all clip instances of the corresponding visual element in the video, the clip instance meeting the time condition can be found by traversing the element tracks of the visual elements added in the video. Because a clip instance contains the identifier and layer of its visual element, even when a visual element from a previously created sub-mirror is added, the corresponding visual element can be found from the formed element set according to the identifier in the clip instance corresponding to the sub-mirror and displayed. This achieves displaying the same visual element in different sub-mirrors with simple operations while reducing redundant data transmission during video production. In addition, because the layers of the same visual element differ across sub-mirrors, the visual element can be accurately displayed in each corresponding sub-mirror through the clip instance of the visual element corresponding to that sub-mirror, improving the accuracy of the video.
It should be noted that the division into the above functional modules is only used as an example when the processing apparatus for a visual element provided in the foregoing embodiment runs an application; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above. In addition, the processing apparatus for a visual element provided in the above embodiment belongs to the same concept as the embodiments of the processing method for a visual element; for its detailed implementation process, refer to the method embodiments, which are not repeated here.
Fig. 15 is a block diagram of a terminal 1500 according to an embodiment of the present application.
In general, the terminal 1500 includes: a processor 1501 and a memory 1502.
The processor 1501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1501 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1501 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1501 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1501 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1502 may include one or more computer-readable storage media, which may be non-transitory. Memory 1502 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1502 is used to store at least one computer program for execution by processor 1501 to implement the method of processing a visualization element provided by a method embodiment of the present application.
In some embodiments, the terminal 1500 may further optionally include: a peripheral interface 1503 and at least one peripheral device. The processor 1501, memory 1502 and peripheral interface 1503 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1503 via a bus, signal lines, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1504, a display 1505, a camera assembly 1506, audio circuitry 1507, and a power supply 1508.
A peripheral interface 1503 may be used to connect I/O (Input/Output) related at least one peripheral device to the processor 1501 and the memory 1502. In some embodiments, processor 1501, memory 1502, and peripheral interface 1503 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1501, the memory 1502, and the peripheral interface 1503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 1504 communicates with a communication network and other communication devices via electromagnetic signals, converting an electrical signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electrical signal. In some embodiments, the radio frequency circuit 1504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: the World Wide Web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1504 may also include NFC (Near Field Communication) related circuits, which is not limited in the present application.
The display screen 1505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1505 is a touch display screen, it also has the ability to collect touch signals at or above its surface. The touch signal may be input to the processor 1501 as a control signal for processing. At this point, the display screen 1505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1505, disposed on the front panel of the terminal 1500; in other embodiments, there may be at least two display screens 1505, respectively disposed on different surfaces of the terminal 1500 or in a folded design; in still other embodiments, the display screen 1505 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1500. The display screen 1505 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly shaped screen. The display screen 1505 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
The camera assembly 1506 is used to capture images or video. In some embodiments, the camera assembly 1506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so as to implement a background blurring function by fusing the main camera and the depth camera, panoramic shooting and VR (Virtual Reality) shooting functions by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, the camera assembly 1506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuitry 1507 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 1501 for processing, or inputting the electric signals to the radio frequency circuit 1504 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 1500. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1501 or the radio frequency circuit 1504 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuit 1507 may also include a headphone jack.
The power supply 1508 is used to power the various components in the terminal 1500. The power source 1508 may be alternating current, direct current, disposable battery, or rechargeable battery. When the power source 1508 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1500 also includes one or more sensors 1509. The one or more sensors 1509 include, but are not limited to: an acceleration sensor 1510, a gyro sensor 1511, a pressure sensor 1512, an optical sensor 1513, and a proximity sensor 1514.
The acceleration sensor 1510 may detect the magnitudes of accelerations on three coordinate axes of a coordinate system established with the terminal 1500. For example, the acceleration sensor 1510 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1501 may control the display screen 1505 to display the user interface in either a landscape view or a portrait view based on the gravitational acceleration signal collected by the acceleration sensor 1510. The acceleration sensor 1510 may also be used for acquisition of motion data of a game or user.
The gyro sensor 1511 may detect the body direction and rotation angle of the terminal 1500, and may cooperate with the acceleration sensor 1510 to collect the 3D motion of the user on the terminal 1500. Based on the data collected by the gyro sensor 1511, the processor 1501 may implement the following functions: motion sensing (for example, changing the UI according to a tilting operation by the user), image stabilization during shooting, game control, and inertial navigation.
The pressure sensor 1512 may be disposed on a side frame of the terminal 1500 and/or below the display 1505. When the pressure sensor 1512 is disposed on the side frame of the terminal 1500, a grip signal of the terminal 1500 by the user may be detected, and the processor 1501 performs a left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 1512. When the pressure sensor 1512 is disposed at the lower layer of the display screen 1505, the processor 1501 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 1505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 1513 is used to collect the ambient light intensity. In one embodiment, processor 1501 may control the display brightness of display screen 1505 based on the intensity of ambient light collected by optical sensor 1513. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1505 is turned up; when the ambient light intensity is low, the display luminance of the display screen 1505 is turned down. In another embodiment, the processor 1501 may also dynamically adjust the shooting parameters of the camera assembly 1506 based on the ambient light intensity collected by the optical sensor 1513.
A proximity sensor 1514, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1500. The proximity sensor 1514 is used to collect the distance between the user and the front of the terminal 1500. In one embodiment, when the proximity sensor 1514 detects a gradual decrease in the distance between the user and the front of the terminal 1500, the processor 1501 controls the display 1505 to switch from the on-screen state to the off-screen state; when the proximity sensor 1514 detects that the distance between the user and the front surface of the terminal 1500 gradually increases, the processor 1501 controls the display screen 1505 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 15 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
The embodiment of the application also provides a computer readable storage medium, in which at least one section of computer program is stored, the at least one section of computer program being loaded and executed by a processor of a computer device to implement the operations performed by the computer device in the processing method of the visual element of the embodiment. For example, the computer readable storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only Memory (Compact Disc Read-Only Memory, CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
Embodiments of the present application also provide a computer program product comprising a computer program stored in a computer readable storage medium. The processor of the computer device reads the computer program from the computer-readable storage medium, and the processor executes the computer program so that the computer device executes the processing method of the visual element provided in the above-described various alternative implementations.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers merely preferred embodiments of the present application and is not intended to limit the application; the protection scope of the application is subject to the appended claims.

Claims (12)

1. A method of processing a visual element, the method comprising:
a sub-mirror editing interface for displaying a video, wherein the video comprises at least one first sub-mirror and a second sub-mirror, the creation time of the first sub-mirror is earlier than that of the second sub-mirror, a first visual element is added in the first sub-mirror, the video is associated with an element set, and the element set comprises visual elements added in the process of manufacturing the video;
Determining a first layer in response to adding the first visual element in the second sub-mirror based on the sub-mirror editing interface, the first layer being a layer of the first visual element in the second sub-mirror;
creating a first clipping instance in an element track of the first visual element based on the first layer, wherein the element track is used for bearing clipping instances of the first visual element existing in the video, the duration of the element track is equal to the duration of the video, and the first clipping instance comprises an identification of the first visual element and the first layer;
and traversing the element track of the added visual element in the video at any time corresponding to the second sub-mirror, and finding and displaying the visual element in the second sub-mirror from the element set based on the clip instance corresponding to the second sub-mirror in the traversed element track, wherein the clip instance corresponding to the second sub-mirror comprises the first clip instance.
2. The method according to claim 1, wherein for any time corresponding to the second sub-mirror, traversing the element track of the added visual element in the video, based on the clip instance corresponding to the second sub-mirror in the traversed element track, finding and displaying the visual element in the second sub-mirror from the element set, including:
Traversing the element track of the added visual element in the video according to the creation sequence of the track at any time corresponding to the second sub-mirror;
determining that the clipping instance covering the moment is the clipping instance of each visual element in the second sub-mirror from the element track of the added visual element in the video;
and based on the clipping examples of the visual elements in the second sub-mirror, the visual elements in the second sub-mirror are found from the element set and displayed.
3. The method of claim 2, wherein the finding and displaying the visual elements in the second sub-mirror from the set of elements based on the clipped instances of the individual visual elements in the second sub-mirror comprises:
for any visual element in the second sub-mirror, determining a layer of the visual element based on a clipping instance of the visual element;
and based on the layers of the visual elements in the second sub-mirror and the element set, sequentially rendering and displaying the visual elements according to the sequence from bottom to top.
4. The method of claim 1, wherein traversing the element track of the added visual element in the video comprises:
For the element track of any added visual element in the video, sequentially detecting whether the clip instance in the element track covers the moment according to the time sequence of the clip instance in the element track;
in the event that any clip instance is detected to cover the time instant, the traversing of the element track is stopped.
5. The method of claim 1, wherein a third visual element has been added to the second sub-mirror, wherein the first visual element and the third visual element have been added to a third sub-mirror, wherein the third sub-mirror is adjacent to the second sub-mirror, and wherein the third sub-mirror is created earlier than the second sub-mirror;
the determining a first layer in response to adding the first visualization element in the second sub-mirror based on the sub-mirror editing interface includes:
responsive to adding the first visual element in the second sub-mirror based on the sub-mirror editing interface, acquiring a layer relationship between the first visual element and the third visual element in the third sub-mirror;
the first layer is determined based on the layer relationship and a second layer of the third visualization element in the second mirror.
6. The method according to claim 1, wherein the method further comprises:
for any visual element, creating an element track of the visual element under the condition that the visual element is added for the first time;
reconstructing a layer track, wherein the layer track comprises a layer clip instance, and the layer clip instance is used for indicating layer rendering logic of each visualization element in the second sub-mirror;
the step of finding and displaying the visualized elements in the second sub-mirror from the element set based on the clipping examples corresponding to the second sub-mirror in the traversed element track comprises the following steps:
and under the condition of traversing to the layer track, based on the clip instance corresponding to the second sub-mirror in the traversed element track, the visualized elements in the second sub-mirror are found from the element set and displayed.
7. The method according to claim 1, wherein the method further comprises:
updating a layer of a second visual element in the second sub-mirror based on a first layer of the first visual element in the second sub-mirror, wherein the second visual element is a visual element added in the second sub-mirror before the first visual element is added;
And updating the clipping instance of the second visual element in the element track of the second visual element.
8. The method according to claim 1, wherein the method further comprises:
for any visual element in the first sub-mirror, responding to the selection operation of the visual element, and displaying a single-layer adjustment control of the visual element;
and responding to the triggering operation of the single-layer adjustment control, and adjusting the layer of the visual element according to the layer adjustment direction corresponding to the single-layer adjustment control.
9. A processing apparatus for visualizing elements, the apparatus comprising:
the system comprises a display module, a video editing module and a display module, wherein the display module is used for displaying a sub-mirror editing interface of a video, the video comprises at least one first sub-mirror and a second sub-mirror, the creation time of the first sub-mirror is earlier than that of the second sub-mirror, a first visual element is added in the first sub-mirror, the video is associated with an element set, and the element set comprises visual elements added in the process of manufacturing the video;
the determining module is used for determining a first image layer in response to adding the first visual element into the second sub-mirror based on the sub-mirror editing interface, wherein the first image layer is an image layer of the first visual element in the second sub-mirror;
A first creating module, configured to create, based on the first layer, a first clip instance in an element track of the first visual element, where the element track is used to carry an existing clip instance of the first visual element in the video, and a duration of the element track is equal to a duration of the video, and the first clip instance includes an identifier of the first visual element and the first layer;
the display module is further configured to traverse an element track of the added visual element in the video at any time corresponding to the second sub-mirror, find the visual element in the second sub-mirror from the element set and display the visual element based on the clip instance corresponding to the second sub-mirror in the traversed element track, where the clip instance corresponding to the second sub-mirror includes the first clip instance.
10. A computer device, characterized in that it comprises a processor and a memory for storing at least one section of a computer program, which is loaded by the processor and which carries out the method of processing the visualisation element of any of the claims 1 to 8.
11. A computer-readable storage medium, characterized in that the computer-readable storage medium is used for storing at least one segment of a computer program for executing the method of processing a visualization element according to any one of claims 1 to 8.
12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements a method of processing a visualisation element according to any one of claims 1 to 8.