
CN115297271A - Video determination method and device, electronic equipment and storage medium - Google Patents

Video determination method and device, electronic equipment and storage medium

Info

Publication number
CN115297271A
Authority
CN
China
Prior art keywords
target
special effect
model
rendering
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210907884.3A
Other languages
Chinese (zh)
Inventor
刘佳成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210907884.3A priority Critical patent/CN115297271A/en
Publication of CN115297271A publication Critical patent/CN115297271A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/27 Server based end-user applications
    • H04N 21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2743 Video hosting of uploaded data from client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N 21/44012 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2621 Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure provide a video determination method and apparatus, an electronic device, and a storage medium. The method includes: in response to a special effect trigger operation, capturing a to-be-processed video frame that includes a target object; determining, according to the decoration type corresponding to a target special effect, a target rendering mode for rendering a special effect model onto an object model, wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types include a decoration type that wraps a target part and a decoration type that does not wrap the target part; and obtaining and displaying, based on the target rendering mode, a special effect video frame in which the target special effect is added to the target object. With the technical solution of the embodiments of the present disclosure, the target special effect is rendered onto the target part of the target object according to different decoration types, which enriches special effect videos, makes them more engaging, and improves the user experience.

Description

Video determination method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to image processing technologies, and in particular, to a video determining method and apparatus, an electronic device, and a storage medium.
Background
With the development of network technology, more and more applications have entered users' daily lives; in particular, software for shooting short videos has become deeply popular with users.
In the prior art, software developers can add various special effects to an application for users to apply while shooting videos. However, the special effects currently offered are very limited, both the quality of the videos and the richness of their content need further improvement, and special effects that users add to a video cannot interact with the video's content.
Disclosure of Invention
The present disclosure provides a video determination method and apparatus, an electronic device, and a storage medium, so that a target special effect is rendered onto a target part of a target object according to different decoration types, which enriches special effect videos, makes them more engaging, and improves the user experience.
In a first aspect, an embodiment of the present disclosure provides a video determining method, where the method includes:
in response to a special effect trigger operation, capturing a to-be-processed video frame that includes a target object;
determining, according to a decoration type corresponding to a target special effect, a target rendering mode for rendering a special effect model onto an object model; wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types include a decoration type that wraps a target part and a decoration type that does not wrap the target part;
and obtaining and displaying, based on the target rendering mode, a special effect video frame in which the target special effect is added to the target object.
In a second aspect, an embodiment of the present disclosure further provides a video determining apparatus, where the apparatus includes:
the system comprises a to-be-processed video frame acquisition module, a to-be-processed video frame acquisition module and a processing module, wherein the to-be-processed video frame acquisition module is used for responding to special effect triggering operation and acquiring a to-be-processed video frame comprising a target object;
the target rendering mode determining module is used for determining a target rendering mode for rendering the special effect model to the object model according to the decoration type corresponding to the target special effect; wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types comprise a decoration type of a wrapped target part and a decoration type of a non-wrapped target part;
and the special effect video frame display module is used for obtaining and displaying a special effect video frame which adds the target special effect to the target object based on the target rendering mode.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device, configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video determination method according to any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the video determination method according to any embodiment of the present disclosure.
According to the above technical solutions, a to-be-processed video frame including a target object is captured in response to a special effect trigger operation; the target rendering mode for rendering the special effect model onto the object model is then determined according to the decoration type corresponding to the target special effect; and finally, based on the target rendering mode, the special effect video frame in which the target special effect is added to the target object is obtained and displayed. The target special effect is thus added to the target object according to different decoration types, which enriches special effect videos, makes them more engaging, and improves the user experience.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Fig. 1 is a schematic flow chart of a video determining method provided in an embodiment of the present disclosure;
FIG. 2 is a special effects schematic diagram of a target special effect provided by an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of a video determining method provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a video determining apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more complete and thorough understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein is intended to be open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the type, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.
For example, in response to receiving an active request from a user, prompt information is sent to the user to explicitly remind the user that the requested operation will require acquiring and using the user's personal information. The user can then autonomously decide, according to the prompt information, whether to provide personal information to software or hardware such as an electronic device, application program, server, or storage medium that performs the operations of the technical solution of the present disclosure.
As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent, for example, via a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may carry a selection control by which the user chooses "agree" or "disagree" to decide whether to provide personal information to the electronic device.
It is understood that the above notification and user authorization process is only illustrative and not limiting, and other ways of satisfying relevant laws and regulations may be applied to the implementation of the present disclosure.
It will be appreciated that the data involved in the subject technology, including but not limited to the data itself, the acquisition or use of the data, should comply with the requirements of the corresponding laws and regulations and related regulations.
Before the technical solution is introduced, an application scenario is described by way of example. The technical solution of the embodiments of the present disclosure may be applied to any scene in which a special effect video needs to be generated, for example, when a user uploads a recorded multimedia data stream to a server corresponding to an application, or when a mobile terminal including a camera captures video pictures in real time. The application may detect the picture content of the captured video frames (i.e., the objects in the picture) and determine a target object; the target object in the video picture may be dynamic or static, and there may be one or more target objects. On this basis, when the application detects both that a target object exists in the video picture and that a target part associated with the target object is present, then, based on the scheme of the embodiments of the present disclosure, the pre-developed special effect that the user selected from the special effect package is handled according to its decoration type: the target special effect is added to the target part of the target object based on the target rendering mode to obtain a special effect video frame, and the sequentially generated special effect video frames are spliced into the special effect video.
Fig. 1 is a schematic flow chart of a video determination method provided in an embodiment of the present disclosure. The embodiment is applicable to situations where a current video frame is processed by application software to generate a special effect video. For example, when a target part of a target object is present in the current video frame, and it is detected that the user has selected a special effect from a special effect package and touched the target part, the application may process each video frame according to the scheme of this embodiment, so as to add the user-selected special effect to the target part and obtain the corresponding special effect video.
As shown in fig. 1, the method includes:
and S110, responding to the special effect trigger operation, and collecting the video frame to be processed comprising the target object.
The apparatus for executing the video determination method provided by this embodiment of the present disclosure may be integrated into application software that supports a special effect video processing function, and the software may be installed in an electronic device; optionally, the electronic device may be a mobile terminal, a PC terminal, or the like. The application software may be any software for processing images or videos; its specific form is not described further here, as long as it can implement image/video processing. It may also be a specially developed application that implements adding and displaying special effects, or software integrated into a corresponding page, so that a user can process special effect videos through the page integrated in the PC terminal.
In this embodiment, in application software or an application program supporting the special effect video processing function, a control for triggering the special effect may be developed in advance. When it is detected that a user triggers the control, the special effect trigger operation can be responded to, so as to capture the to-be-processed video frame including the target object.
It should be noted that the technical solution of this embodiment may be executed while a user shoots a video, i.e., a special effect video is generated in real time from the special effect item selected by the user and the captured footage; a video uploaded by the user may also serve as the raw data from which a special effect video is generated based on this embodiment's scheme.
In this embodiment, a user may shoot a video in real time with the camera of the mobile terminal, or actively upload a video through a control developed in advance in the application; the video shot in real time or actively uploaded by the user is thus the video to be processed. The video to be processed is then parsed by a pre-written program to obtain a plurality of to-be-processed video frames. In practical applications, when the special effect trigger operation is detected, the terminal device can capture the video facing the user in real time and use the captured frames as to-be-processed video frames. Accordingly, a to-be-processed video frame may include the target object, which may be a user, a pet, or a limb part associated with the user. It should be noted that the video may be captured regardless of whether a target object appears in the terminal device's viewfinder picture; when a target object is detected in the captured video, or in the video currently being captured, the video frames including the target object can be used as to-be-processed video frames and given special effect processing to obtain the corresponding special effect video frames. For example, if the target object is a leg, when the user's leg is detected in the viewfinder picture, the video frames in the current terminal device may be collected and used as to-be-processed video frames.
In practical applications, a target part corresponding to the target special effect can be preset; when a skeletal key point detection algorithm detects that a video frame of the to-be-processed video includes the target part, that frame can be used as the to-be-processed video frame and subjected to special effect processing.
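For illustration, the following is a minimal sketch (Python) of this frame-selection step. The detect_keypoints() helper and the body-part names it returns are assumptions; the text only specifies "a skeletal key point detection algorithm", not a concrete one.

```python
# Hypothetical sketch: keep only frames in which the preset target part
# (e.g. "leg") is detected by a skeletal key point detection algorithm.
def select_frames_to_process(frames, target_part, detect_keypoints):
    """detect_keypoints(frame) -> set of visible body-part names (assumed)."""
    return [frame for frame in frames if target_part in detect_keypoints(frame)]
```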
It should also be noted that, when obtaining to-be-processed video frames, the frames of an already shot video may be processed. For example, a target object corresponding to the shot video may be set in advance; when a video frame is detected to include the target object, the image corresponding to that frame may be used as a to-be-processed video frame, so that each frame of the video can subsequently be tracked and given special effect processing.
It should further be noted that there may be one or more target objects in the same shooting scene, and one or more target objects may be determined using the technical solution provided by the embodiments of the present disclosure.
S120, determine, according to the decoration type corresponding to the target special effect, the target rendering mode for rendering the special effect model onto the object model.
Wherein the effect model corresponds to a target effect and the object model corresponds to a target object. The model structure of the object model corresponds to the size and scale of the target object's limb parts, i.e. the object model may be constructed on the basis of the target object.
In this embodiment, the target special effect may be a special effect developed in advance and integrated into the application; for example, it may be a dynamic butterfly effect that can be added to an image, with a plurality of dynamic butterfly models in the effect. Accordingly, the decoration type describes the way the target special effect is added to the video frame. The decoration type may be arbitrary and optionally includes two types: a decoration type that wraps the target part and a decoration type that does not wrap it. The target part may be any part of the target object. It should be noted that when the special effect model, once rendered onto the object model, forms a closed-loop region around a position of the object model where special effect rendering can be performed, this is taken as the decoration type wrapping the target part, illustrated in fig. 2a for a leg; when the special effect model does not form a closed-loop region on such a part, or lands on the part at fixed points for decoration, this is taken as the decoration type not wrapping the target part, illustrated in fig. 2b for a leg. When the decoration type is the wrapping type, the target special effect is presented in the display interface surrounding the target part of the target object; if the decoration type is the non-wrapping type, the target special effect is added to the target part as small, local embellishments. The special effect model may be a pre-developed model associated with the target special effect; for example, when the target effect is a dynamic butterfly effect, the special effect model may be a butterfly model developed in advance in three-dimensional space.
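The mapping from decoration type to rendering mode that S230/S240 later formalize can be sketched as follows; the enum values and function names are illustrative, not identifiers from this disclosure.

```python
from enum import Enum, auto

class DecorationType(Enum):
    WRAPPING = auto()      # effect forms a closed loop around the target part (fig. 2a)
    NON_WRAPPING = auto()  # effect lands at fixed points, no closed loop (fig. 2b)

def target_rendering_mode(decoration: DecorationType) -> str:
    # Wrapping decorations bind effect vertices to the object model;
    # non-wrapping decorations land sub-effects on located anchor points.
    if decoration is DecorationType.WRAPPING:
        return "keypoint_binding"
    return "keypoint_positioning"
```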
In this embodiment, the object model may be a pre-constructed three-dimensional model associated with the target object. In practical applications, when the application detects a target object in the video picture, it may call an object model corresponding to the target object that was generated in advance or is generated in real time. It can be understood that the body and head of the target object are represented by the object model. For example, when the application detects a target object in the video picture, it can use a plurality of patches to construct, in real time, a 3D mesh reflecting each part of the user's body, and use this 3D mesh as the object model corresponding to the user. After the object model is constructed, the application can also label the model and associate it with the user as the target object; on this basis, if the application later detects the user's body in the video picture again, the constructed 3D mesh can be called directly as the object model. Those skilled in the art should understand that once the application has determined the corresponding object model for the target object, the user may edit and adjust the model according to actual requirements, further improving the accuracy with which special effects are subsequently mounted on the target object's body.
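A sketch of the label-and-reuse behavior just described, assuming a hypothetical build_mesh_from_patches() constructor; the caching scheme is one illustration of "directly calling the constructed 3D mesh", not a prescribed implementation.

```python
_object_model_cache = {}  # label -> previously constructed 3D mesh

def object_model_for(target_object_id, build_mesh_from_patches):
    """Build a patch-based 3D mesh on first detection, then reuse it."""
    if target_object_id not in _object_model_cache:
        _object_model_cache[target_object_id] = build_mesh_from_patches(target_object_id)
    return _object_model_cache[target_object_id]
```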
It should be noted that, after the to-be-processed video frame including the target object is captured, a special effect selection list or page may pop up, containing a plurality of special effects and their corresponding decoration types, so that the user can select among them to determine the target special effect and the corresponding decoration type.
Based on this, before determining a target rendering mode for rendering the special effect model onto the object model according to the decoration type corresponding to the target special effect, the method further includes: displaying at least one special effect to be selected in a video frame to be processed; and determining the target special effect and the decoration type of the target special effect based on the triggering operation of the at least one special effect to be selected.
In this embodiment, the target portion may be any one or more portions of the target object limb body. For example, the target site may be a leg, an arm, or a hand, etc.
In practical applications, after the to-be-processed video frame is obtained, a special effect selection list or page can pop up containing a plurality of candidate special effects. The user selects among the candidates through a trigger operation; when the user is detected to issue a confirmation instruction via a confirmation control, the currently selected candidate can be taken as the target special effect. After the target special effect is determined, a decoration type selection list associated with it can pop up, containing the decoration type wrapping the target part and the decoration type not wrapping it, each with a corresponding special effect rendering mode, so that the user can select the decoration type according to their own needs; the target special effect and its decoration type are thus finally determined. The advantage of this arrangement is that the target special effect and decoration type can be chosen according to the user's needs, yielding a special effect video that meets the user's personalized requirements.
Further, after the target special effect and the corresponding decoration type are determined, a corresponding special effect model and a target rendering mode for rendering the special effect model to the object model can be determined according to the target special effect and the decoration type, and therefore the special effect video frame is obtained.
S130, obtain and display, based on the target rendering mode, the special effect video frame in which the target special effect is added to the target object.
In this embodiment, after the target rendering mode is determined, the special effect model corresponding to the target special effect may be rendered onto the object model according to that mode, so as to obtain the special effect video frame in which the target special effect is added to the target object, and the frame is shown on the display interface. It should be noted that target objects fall into two types, dynamic and static. When the target object is static, the special effect remains static for display after being added to the target part; when the target object is dynamic, the special effect moves adaptively along with the target object after being added.
Optionally, obtaining a special effect video frame for adding a target special effect to the target object based on the target rendering manner includes: and rendering the target special effect to a target part of the target object based on the target rendering mode to obtain a special effect video frame corresponding to the video frame to be processed.
Specifically, after the target rendering mode is determined, the target special effect can be added to the target part of the target object according to that mode, yielding the special effect video frame corresponding to the to-be-processed video frame. For example, when the target special effect is a butterfly effect, its decoration type is the non-wrapping type, and the target part is a leg, rendering the target special effect onto the target part may mean displaying the butterfly effect on the display interface as a plurality of butterfly models embellishing the target object's leg, thereby obtaining the special effect video frame corresponding to the to-be-processed video frame. The advantage of this arrangement is that the target special effect can be rendered according to different target rendering modes, yielding special effect video frames that meet the user's personalized requirements.
In practical applications, the model construction parameters used when building an object model come from users of average build; that is, the object model may be a standard model reflecting the average of users' figure parameters. In real life, however, figures differ considerably between users, so the shapes of their target parts also differ considerably, which affects the special effect display. For example, target objects of different heights or weights may have noticeably different target part shapes, and so may target objects with different physiological characteristics.
Based on this, on the basis of above-mentioned technical scheme, still include: determining object attribute information of a target object and a target part for adding a target special effect; adjusting a current deformation parameter corresponding to the target part based on the deformation parameter corresponding to the object attribute information; and displaying the target part based on the adjusted deformation parameter.
The object attribute information may be information representing a physical or physiological characteristic of the target object; optionally, it may include height, weight, or other physiological attribute information. The deformation parameter may be a parameter, determined from the object attribute information, that deforms the target part in the display interface. It may take any value, optionally 1 or 0.8, and so on. It should be noted that different object attribute information corresponds to different deformation parameters, which can be adjusted according to the actual situation. For example, for a user of a given gender whose overall build is slender, in order to make the target part better match the user's actual figure, a BlendShape may be added as a deformation parameter, set to 1, and superimposed on each BlendShape used to construct the object model, thereby adjusting the current deformation parameter to reduce the difference in the target part's contour. The current deformation parameter may be the deformation parameter set when the application detects the special effect trigger and constructs the model corresponding to the target part.
In practical applications, after the application detects the object attribute information of the target object and the target part to which the target special effect is currently added, it determines the deformation parameter corresponding to the object attribute information and adjusts the target part's current deformation parameter accordingly, so that the target part can be displayed in the display interface based on the adjusted parameter. For example, when the target part is a leg, the leg shape currently displayed may appear long and thin when displayed with the adjusted deformation parameter. The benefit of this arrangement is that the target part can be locally adjusted so that its display better meets, or even exceeds, the user's expectations, improving the richness and interest of the resulting special effect video and the user experience.
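A hedged sketch of this adjustment follows. The attribute-to-weight table and the blendshapes_of() accessor are assumptions; the text only states that different attribute information corresponds to different deformation parameters (e.g. a slender build adding a BlendShape with weight 1).

```python
DEFORM_WEIGHTS = {"slender": 1.0, "average": 0.0, "stocky": 0.8}  # hypothetical values

def adjust_target_part(object_model, target_part, attribute):
    """Superimpose an extra deformation weight on the target part's BlendShapes."""
    extra = DEFORM_WEIGHTS.get(attribute, 0.0)
    for blendshape in object_model.blendshapes_of(target_part):  # assumed accessor
        blendshape.weight += extra  # adjust the current deformation parameter
    return object_model
```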
According to the technical solution of this embodiment, a to-be-processed video frame including the target object is captured in response to the special effect trigger operation; the target rendering mode for rendering the special effect model onto the object model is then determined according to the decoration type corresponding to the target special effect; and finally, based on the target rendering mode, the special effect video frame in which the target special effect is added to the target object is obtained and displayed. The target special effect is thus added to the target object according to different decoration types, enriching special effect videos, making them more engaging, and improving the user experience.
Fig. 3 is a schematic flowchart of a video determining method according to an embodiment of the present disclosure. On the basis of the foregoing embodiment, when the decoration types differ, the corresponding target rendering modes differ, so that different special effect video frames are obtained and displayed; and before the target rendering mode is determined, an object model corresponding to the target object may be determined, so that the object model and the special effect model can be processed according to the target rendering mode. For the specific implementation, reference may be made to the technical solution of this embodiment. Technical terms identical or corresponding to those in the above embodiments are not repeated herein.
As shown in fig. 3, the method specifically includes the following steps:
s210, responding to the special effect trigger operation, and collecting the video frame to be processed comprising the target object.
And S220, determining an object model corresponding to the target object based on the reference model, and processing the object model and the special effect model based on a target rendering mode.
In this embodiment, in the special effect development stage, in order to preview the display effect of the target special effect, a target object may be set in advance, and an object model may be constructed based on parameters associated with each limb part of this object; this model may be used as the reference model, and the object referenced when constructing it as the initially set object. It should be noted that the initially set object may be an object that actually exists in real life, or an object configured by the special effect developer, after big-data analysis, from the average figure parameters of most users.
In practical applications, because the body part parameters of different target objects differ, in order to make the object model fit the target object better and thus further improve the special effect display, the reference model can be adjusted according to each body part parameter of the target object, yielding the object model corresponding to the target object.
The reference model may be constructed based on at least one BlendShape, and the parameters corresponding to each BlendShape may be preset according to algorithm data. Specifically, the reference model may be constructed from at least one BlendShape whose parameters correspond to the respective limb parts of the initially set object; examples include a crotch joint BlendShape, a leg contour BlendShape, and a shoulder joint BlendShape. In this embodiment, the reference model may comprise 10 BlendShapes, each containing the deformation parameters of a different limb part.
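A minimal data sketch of such a reference model: a set of named BlendShapes, one per limb part, with preset weights. The dataclass layout and names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class BlendShape:
    name: str            # e.g. "crotch_joint", "leg_contour", "shoulder_joint"
    weight: float = 0.0  # preset deformation parameter for that limb part

@dataclass
class ReferenceModel:
    blendshapes: list = field(default_factory=lambda: [
        BlendShape("crotch_joint"),
        BlendShape("leg_contour"),
        BlendShape("shoulder_joint"),
        # ... roughly 10 BlendShapes in total, one per limb part
    ])
```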
Optionally, determining an object model corresponding to the target object based on the reference model includes: acquiring the attribute of a bone point corresponding to a target object; based on the skeletal point attributes and the reference model, an object model corresponding to the target object is determined.
The skeletal point attributes include skeletal point position information, skeletal point rotation information, and scaling information of the part to which the skeletal point belongs; all of these may be obtained based on the updated Avatar algorithm.
In this embodiment, the skeletal point attributes may be associated with the body type of the target object. In practical applications, when a target object is detected, the relative positions of the skeletal points of each limb part, i.e., the skeletal point position information, can be determined by a skeletal key point detection algorithm; from this, the shape and length of the corresponding bone chain can be determined, providing the basis for constructing the overall contour of the object model. Meanwhile, the rotation angle of each skeletal point (the skeletal point rotation information) and the size scaling of the part to which the point belongs (the scaling information) can also be determined by the skeletal point detection algorithm, and the thickness of the corresponding part can be determined from the scaling information.
In practical applications, after the skeletal point position information, rotation information, and part scaling information are obtained, they can be applied to each BlendShape of the object model to change the parameters of the corresponding BlendShape, thereby adapting to different target objects.
Specifically, after the to-be-processed frame including the target object is obtained, the skeletal point attributes of each limb part of the target object may be obtained by the skeletal key point detection algorithm, and each item of information in the skeletal point attributes compared against the reference model; the reference model is then adjusted based on the skeletal point attributes to obtain the object model corresponding to the target object, so that the object model and the special effect model can be processed based on the target rendering mode. The advantage of this arrangement is that the object model fits the target object better, and different target objects correspond to different object models, so special effects can be attached to the object model more snugly, improving the special effect display.
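A sketch of this adaptation step, assuming copy() and blendshapes_of() helpers on the model; how each skeletal point attribute feeds a BlendShape weight is an assumption (here, part thickness is driven by the part's scaling information).

```python
from dataclasses import dataclass

@dataclass
class BonePoint:
    position: tuple    # relative position: bone-chain shape and length
    rotation: tuple    # rotation angle of the skeletal point
    part_scale: float  # scaling of the owning part: its thickness

def build_object_model(reference_model, bone_points):
    """Adjust the reference model's BlendShapes from detected bone attributes."""
    model = reference_model.copy()                     # assumed helper
    for part, point in bone_points.items():
        for blendshape in model.blendshapes_of(part):  # assumed accessor
            blendshape.weight = point.part_scale       # illustrative mapping
    return model
```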
S230, if the decoration type is the decoration type wrapping the target part, determine that the target rendering mode for rendering the special effect model onto the object model is key point binding rendering.
In this embodiment, when the decoration type wraps the target special effect around the target part, in order to make the effect fit the target part closely and move when the part moves, thereby presenting a more realistic display, the special effect model may, when rendered onto the object model, be bound to key points on the object model. The special effect model is thereby fixed to the object model and rendered in the display interface to obtain the special effect video; this rendering mode can be called key point binding rendering.
In this embodiment, the object model may serve as a coarse model and the special effect model as a fine model, so key point binding rendering may also be described as the coarse model driving the fine model.
Optionally, the key point binding rendering includes: acquiring vertex position information of at least one vertex in the special effect model; determining at least two first position information associated with the vertex position information in the object model; and binding the vertex position information with the corresponding at least two first position information to perform key point binding rendering.
In this embodiment, both the special effect model and the object model are composed of at least one patch, where a patch is a mesh in application software supporting image rendering, and can be understood as the object that carries an image in that software. Each patch is formed by two triangles; counting front and back faces, a patch comprises 12 vertices, 6 on each face. On this basis, for the special effect model, the spatial position of each patch vertex constituting the model is determined in model space, i.e., the vertex position information. Since each vertex of the special effect model is bound in the same manner, one vertex suffices as an example: after its position is determined, it can be mapped into the object model, and the positions of the several nearest object model vertices, i.e., the first position information, can be determined. Further, since there is some distance between the special effect vertex and each of the corresponding first positions, a weight can be assigned according to each distance, for example, highest for the closest and lowest for the farthest; linear interpolation over these weights then computes the motion that the corresponding special effect vertex should undergo, thereby binding the vertex position information to the corresponding first position information. After every vertex of the special effect model is bound to its first position information in this way, key point binding rendering can be performed. The advantage of this arrangement is that the special effect model can be driven by the object model, so that it moves with the object model and fits it more closely.
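A sketch of this binding in 3D model space: each effect vertex is bound to its k nearest object-model vertices with inverse-distance weights, and the weighted (linearly interpolated) motion of those vertices then drives the effect vertex. k = 3 and the inverse-distance weighting are assumptions consistent with the description above.

```python
import numpy as np

def bind_effect_vertices(effect_verts, object_verts, k=3):
    """Per effect vertex, return indices and weights of its bound object vertices."""
    bindings = []
    for v in effect_verts:
        dists = np.linalg.norm(object_verts - v, axis=1)
        nearest = np.argsort(dists)[:k]            # the k closest object vertices
        inv = 1.0 / (dists[nearest] + 1e-8)        # closest vertex -> highest weight
        bindings.append((nearest, inv / inv.sum()))
    return bindings

def drive_effect_vertices(bindings, object_vert_offsets):
    """Move each effect vertex by the weighted motion of its bound vertices."""
    return np.array([w @ object_vert_offsets[idx] for idx, w in bindings])
```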
It should be noted that binding in model space by the positional relationship of vertices in the special effect model and the object model may cause binding errors, because when determining the first position information associated with a vertex, the vertex may become associated with vertices of other body parts of the object model. For example, when the special effect model is to be bound to the right leg of the object model, because the model's left and right thighs are essentially pressed together, at least one first position may fall on the left leg when determining the first position information on the object model.
To address this, the key point binding rendering process can be carried out in UV space, i.e., key point position binding is done in UV space. Specifically, the position on the object model where the special effect model is to be bound is determined; the UV values of the special effect model's vertices and of the vertices at the target position in the object model are set to similar values, while the UV values of vertices at other positions in the object model are set to values far from the special effect model's vertex UVs. When key point binding rendering is then performed, each special effect vertex is bound to its corresponding at least two first positions.
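The same nearest-neighbor search carried out in UV space, as just described: because the authoring step places the bind region's UVs near the effect vertices' UVs and all other body parts far away, the left/right-leg ambiguity of 3D distance disappears. A sketch under the same assumptions as the previous block.

```python
import numpy as np

def bind_in_uv_space(effect_uvs, object_uvs, k=3):
    """As bind_effect_vertices(), but distances are 2D UV-space distances."""
    bindings = []
    for uv in effect_uvs:
        dists = np.linalg.norm(object_uvs - uv, axis=1)
        nearest = np.argsort(dists)[:k]
        inv = 1.0 / (dists[nearest] + 1e-8)
        bindings.append((nearest, inv / inv.sum()))
    return bindings
```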
S240, if the decoration type is the decoration type not wrapping the target part, determine that the target rendering mode for rendering the special effect model onto the object model is key point positioning rendering.
In this embodiment, because the decoration type not wrapping the target part is characterized by not clinging to the part, and the special effect itself may carry skeletal animation, a decoration with concentrated contact points, rather than one exerting contact over a large area, may be adopted; this rendering manner aimed at concentrated contact points can serve as key point positioning rendering.
It should be noted that, for a target special effect of the non-wrapping decoration type, in order to adapt the special effect model to object models of different figures and to keep the anchor points accurate under the object model's different postures, the special effect model can be divided with each anchor point as a unit, and the correspondence between each special effect sub-model and the object model determined separately, thereby realizing key point positioning rendering.
Based on this, before the rendering based on the keypoint localization, the method further comprises the following steps: dividing the special effect model into a plurality of special effect submodels; for the plurality of special effect submodels, determining third position information of the current special effect submodel on the object model; at least one target vertex associated with the third location information and located on the object model is determined for the keypoint localization rendering.
Each special effect sub-model corresponds to a respective sub-effect. In this embodiment, one special effect model may be a combination of many small effect models; these can serve as the special effect sub-models, each corresponding to its own sub-effect. For example, when the special effect model consists of 20 dynamic butterfly models, each with its own motion track, each butterfly model may be taken as a special effect sub-model. The third position information may correspond to the expected position where the current sub-model finally rests on the object model. It should be noted that if the third position information corresponds directly to a target vertex on the object model, key point positioning rendering may be performed directly; if it falls within a region of the object model, the several nearest vertices, i.e., the target vertices, can be determined, and interpolation over these target vertices is used for key point positioning rendering.
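A sketch of this division step, assuming split_into_sub_models(), expected_resting_position, and nearest_vertices() helpers; the disclosure does not name these APIs.

```python
def prepare_sub_effects(effect_model, object_model, k=3):
    """Map each sub-model's expected resting position to nearby target vertices."""
    plan = []
    for sub in effect_model.split_into_sub_models():   # e.g. 20 butterfly models
        third_pos = sub.expected_resting_position       # "third position information"
        target_verts = object_model.nearest_vertices(third_pos, k)
        plan.append((sub, target_verts))
    return plan
```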
In practical applications, during key point positioning rendering, the position where a sub-model finally lands on the object model may differ from the third position information; the third position information can then be adjusted according to the position where the sub-model actually lands, realizing key point positioning rendering of the current sub-model.
Optionally, the keypoint localization rendering includes: and for each special effect sub-model, determining at least one target vertex associated with the current special effect sub-model, and displaying the current special effect sub-model to a position corresponding to the at least one target vertex after the current special effect sub-model moves according to a preset track.
The preset track may be a motion track preset by each special effect sub-model. Each special effect sub-model can stay on the object model after moving according to the corresponding motion trail.
In this embodiment, for each special effect sub-model, the position of the current sub-model on the object model may be determined through a pre-created PinToMesh plug-in, and the positions of the associated target vertices determined from that position; after the current sub-model moves along its preset trajectory, it is landed at the position corresponding to the at least one target vertex and displayed. The advantage of this arrangement is that the landing position of each sub-model after moving along its preset trajectory matches the expected position, keeping the anchor points accurate while adapting to different object models, and improving the display of the special effect model.
Optionally, displaying the current special effect sub-model to a position corresponding to at least one target vertex after moving according to a preset track, including: determining a target display position based on the at least one target vertex and the corresponding weight value; and controlling the current special effect sub-model to move according to a preset track and then displaying the special effect sub-model at a target display position.
In practical applications, since the position of the current sub-model on the object model may lie in a region formed by several vertices, the vertices near that position are taken as target vertices. Because the target vertices lie at different distances from the position, in order to determine the sub-model's final rendering position more accurately, a weight can be assigned to each target vertex according to its distance from the position to which the current sub-model maps on the object model; the target vertices and their weights are then interpolated to obtain a position, the target display position, which serves as the end point of the current sub-model. By adjusting the positional offset and rotational offset between the sub-model's display position on the object model and the target display position, the sub-model can move along its preset trajectory and finally land on the target display position for display. The advantage of this arrangement is that each special effect sub-model lands accurately on its target display position on the object model, improving the accuracy of key point positioning rendering, making the displayed special effect video more vivid, and enhancing the user experience.
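A sketch of this landing computation: the target display position is the weight-blended combination of the nearby target vertices, and the position/rotation offsets (assumed parameters, as is the play_trajectory() API) fine-tune where the sub-effect finally rests.

```python
import numpy as np

def target_display_position(target_verts, weights):
    """Interpolate the end point from target vertices (k, 3) and weights (k,)."""
    return weights @ target_verts

def land_sub_effect(sub_effect, target_verts, weights,
                    position_offset=np.zeros(3), rotation_offset=np.zeros(3)):
    end_point = target_display_position(target_verts, weights) + position_offset
    sub_effect.play_trajectory(end_point)  # assumed API: move along the preset track
    sub_effect.rotation = sub_effect.rotation + rotation_offset  # orientation tweak
    return sub_effect
```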
For example, the target special effect may be about 20 butterflies, with different preset tracks, attached to different positions on the leg; the special effect model then consists of 20 dynamic butterfly models, each a special effect sub-model. For each butterfly model, the UV value corresponding to the target display position where the butterfly finally rests on the object model is determined, and each butterfly model is positioned on the object model through these UV values. Those skilled in the art will understand that positioning by UV value can be implemented by the pre-created PinToMesh plug-in, whose principle is as follows: for each butterfly model, map its UV value onto the object model, determine the corresponding target vertex or the 3 target vertices closest to the mapped position after interpolation, and then unify the object model's model-space origin with the mapped position. In some cases, however, the butterfly model's final position on the object model is not the model-space origin of the object model, so positioning directly by UV values leaves the landing position mismatched with the expected position; in that case, flexible adjustment is possible by tuning the butterfly model's positional offset and rotational offset relative to the anchor point's UV value.
It should be noted that a target special effect corresponding to the decoration type that wraps the target part and a target special effect corresponding to the decoration type that does not wrap the target part may be added to the target object at the same time, yielding a special effect video in which the two special effects play simultaneously; alternatively, the special effects may be added to the target object separately, yielding special effect videos corresponding to the different target special effects.
S250, based on the target rendering mode, obtaining and displaying a special effect video frame in which the target special effect is added to the target object.
According to the technical solution of this embodiment, a to-be-processed video frame including the target object is collected in response to the special effect trigger operation, and an object model corresponding to the target object is determined based on the reference model so that the object model and the special effect model can be processed based on the target rendering mode. If the decoration type is a decoration type that wraps the target part, the target rendering mode for rendering the special effect model onto the object model is determined to be key point binding rendering; if the decoration type is a decoration type that does not wrap the target part, the target rendering mode is determined to be key point positioning rendering. Finally, a special effect video frame in which the target special effect is added to the target object is obtained and displayed based on the target rendering mode. Building an object model for each target object means that different target objects correspond to different object models, which improves the display effect of the object model and the special effect model and the user experience; determining different target rendering modes according to different decoration types yields different special effect video frames, enhancing the richness and interest of the special effect video.
Fig. 4 is a schematic structural diagram of a video determining apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus includes: a to-be-processed video frame acquisition module 310, a target rendering mode determination module 320, and a special effect video frame display module 330.
A to-be-processed video frame acquisition module 310, configured to acquire a to-be-processed video frame including a target object in response to a special effect trigger operation;
a target rendering mode determining module 320, configured to determine a target rendering mode for rendering the special effect model onto the object model according to the decoration type corresponding to the target special effect; wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types comprise a decoration type of a wrapped target part and a decoration type of a non-wrapped target part;
a special effect video frame display module 330, configured to obtain a special effect video frame that adds the target special effect to the target object based on the target rendering manner, and display the special effect video frame.
On the basis of the above technical solutions, the apparatus further includes: a to-be-selected special effect display module and a target special effect determination module.
The to-be-selected special effect display module is used for displaying at least one to-be-selected special effect in the to-be-processed video frame before the target rendering mode for rendering the special effect model to the object model is determined according to the decoration type corresponding to the target special effect;
and the target special effect determining module is used for determining the target special effect and the decoration type of the target special effect based on the triggering operation of the at least one special effect to be selected.
On the basis of the above technical solutions, the target rendering manner determining module 320 includes a key point binding rendering determining sub-module and a key point positioning rendering determining sub-module.
A key point binding rendering determination submodule, configured to determine, if the decoration type is a decoration type wrapping a target part, that the target rendering mode for rendering the special effect model onto the object model is key point binding rendering;
and the key point positioning rendering determining submodule is used for determining a target rendering mode for rendering the special effect model to the object model as key point positioning rendering if the decoration type is a decoration type of a non-wrapped target part.
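A minimal sketch of this dispatch, with hypothetical type and function names (the disclosure names only the two decoration types and the two rendering modes):

```python
from enum import Enum, auto

class DecorationType(Enum):
    WRAPS_TARGET_PART = auto()          # e.g. a glove that must deform with the hand
    DOES_NOT_WRAP_TARGET_PART = auto()  # e.g. butterflies that land on the leg

def choose_target_rendering(decoration_type: DecorationType) -> str:
    # A wrapped decoration has to follow every key point of the part, so it is
    # bound to them; a non-wrapped decoration only needs to be positioned on the part.
    if decoration_type is DecorationType.WRAPS_TARGET_PART:
        return "key_point_binding_rendering"
    return "key_point_positioning_rendering"
```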
On the basis of the above technical solutions, the apparatus further includes: an object model determination module.
An object model determining module, configured to determine, based on a reference model, an object model corresponding to the target object before determining the target rendering manner, so as to process the object model and the special effect model based on the target rendering manner; wherein the reference model corresponds to an initial setting object.
On the basis of the technical schemes, the object model determining module comprises a skeleton point attribute obtaining unit and an object model determining unit.
A bone point attribute obtaining unit, configured to obtain a bone point attribute corresponding to the target object; wherein the bone point attribute includes bone point position information, bone point rotation information, and scaling information of the part to which the bone point belongs;
an object model determination unit for determining an object model corresponding to the target object based on the bone point attributes and a reference model.
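The disclosure states only that the bone point attributes are applied, together with the reference model, to obtain the object model; the sketch below assumes a linear-blend-skinning style deformation, and every name and the weighting scheme are hypothetical:

```python
import numpy as np

def build_object_model(ref_vertices, bind_positions, bone_attrs, skin_weights):
    """Deform the reference model into the object model from bone point attributes."""
    ref = np.asarray(ref_vertices, dtype=float)     # (V, 3) reference-model vertices
    bind = np.asarray(bind_positions, dtype=float)  # (B, 3) bone positions in bind pose
    out = np.zeros_like(ref)
    for b, attr in enumerate(bone_attrs):           # one dict per bone point
        local = (ref - bind[b]) * attr["scale"]                # scale the belonging part
        moved = local @ attr["rotation"].T + attr["position"]  # rotate, then translate
        out += skin_weights[:, b:b + 1] * moved     # per-vertex blend across bone points
    return out
```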
On the basis of the technical schemes, the key point binding rendering determination submodule comprises a vertex position information acquisition unit, a first position information determination unit and a position information binding unit.
A vertex position information acquiring unit, configured to acquire vertex position information of each of at least one vertex in the special effect model;
a first position information determination unit configured to determine at least two pieces of first position information with which the vertex position information is associated in the object model;
and the position information binding unit is used for binding the vertex position information with at least two corresponding first position information so as to perform key point binding rendering.
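One way to read binding vertex position information to at least two pieces of first position information is a skinning-like association; the sketch below assumes nearest-key-point selection with inverse-distance weights, which the disclosure does not specify:

```python
import numpy as np

def bind_vertex(vertex_pos, object_keypoints, k=2):
    """Associate one special-effect-model vertex with its k nearest key points."""
    kp = np.asarray(object_keypoints, dtype=float)
    d = np.linalg.norm(kp - np.asarray(vertex_pos, dtype=float), axis=1)
    idx = np.argsort(d)[:k]            # at least two pieces of first position information
    w = 1.0 / (d[idx] + 1e-6)          # assumed inverse-distance weighting
    return idx, w / w.sum()

def follow(binding, object_keypoints):
    """Per frame: re-evaluate the vertex from the moved key points, so the
    wrapped decoration deforms together with the target part."""
    idx, w = binding
    kp = np.asarray(object_keypoints, dtype=float)
    return w @ kp[idx]
```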
On the basis of the above technical solutions, the target rendering mode is key point positioning rendering, and the apparatus further includes: a special effect model dividing module, a third position information determining module, and a target vertex determining module.
The special effect model dividing module is used for dividing the special effect model into a plurality of special effect sub-models before the key point positioning rendering; wherein the special effect sub-models correspond to respective sub-special effects;
the third position information determining module is used for determining third position information of the current special effect sub-model on the object model for the plurality of special effect sub-models;
and the target vertex determining module is used for determining at least one target vertex which is associated with the third position information and is positioned on the object model so as to perform key point positioning rendering.
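A sketch of this preprocessing step, assuming each sub-model carries a UV anchor and that its target vertices are found by a nearest-neighbour search in UV space (the disclosure does not state the search method, and all names below are hypothetical):

```python
import numpy as np

def prepare_sub_effects(anchor_uvs, object_uvs, n_targets=3):
    """Map each sub-model's anchor UV onto the object model (the 'third position
    information') and record the nearest object vertices as its target vertices."""
    obj_uv = np.asarray(object_uvs, dtype=float)   # (V, 2) UVs of object-model vertices
    prepared = []
    for uv in anchor_uvs:                          # one anchor per special effect sub-model
        d = np.linalg.norm(obj_uv - np.asarray(uv, dtype=float), axis=1)
        targets = np.argsort(d)[:n_targets]        # indices of the target vertices
        prepared.append({"anchor_uv": np.asarray(uv, dtype=float),
                         "target_vertices": targets})
    return prepared
```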
On the basis of the technical schemes, the key point positioning rendering determination submodule comprises a target vertex determination unit.
And the target vertex determining unit is used for determining, for each special effect sub-model, at least one target vertex associated with the current special effect sub-model, and displaying the current special effect sub-model, after it moves according to a preset track, at a position corresponding to the at least one target vertex.
On the basis of the technical scheme, the target vertex determining unit comprises a target display position determining subunit and a special effect sub-model control subunit.
A target display position determining subunit, configured to determine a target display position based on the at least one target vertex and the corresponding weight value;
and the special effect sub-model control sub-unit is used for controlling the current special effect sub-model to move according to a preset track and then display the current special effect sub-model at the target display position.
On the basis of the above technical solutions, the special effect video frame display module 330 includes a special effect video frame determination unit.
And the special effect video frame determining unit is used for rendering the target special effect to the target part of the target object based on the target rendering mode to obtain a special effect video frame corresponding to the video frame to be processed.
On the basis of the above technical solutions, the special effect video frame display module 330 includes an object attribute information determining unit, a current deformation parameter adjusting unit, and a target portion display unit.
An object attribute information determination unit configured to determine object attribute information of the target object and a target portion to which the target special effect is added;
a current deformation parameter adjusting unit, configured to adjust a current deformation parameter corresponding to the target portion based on a deformation parameter corresponding to the object attribute information;
and the target part display unit is used for displaying the target part based on the adjusted deformation parameter.
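The mapping from object attribute information to deformation parameters is not specified in the disclosure; the sketch below uses a hypothetical 'build' attribute and placeholder scale factors purely for illustration:

```python
def adjust_deformation(attribute_info, base_params):
    """Scale the deformation applied to the target part by the object's attributes."""
    # 'build' and the 0.8 / 1.2 factors are placeholders, not values from the
    # disclosure, which only says parameters follow object attribute information.
    factor = {"slim": 0.8, "average": 1.0, "broad": 1.2}.get(
        attribute_info.get("build", "average"), 1.0)
    return {name: value * factor for name, value in base_params.items()}
```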
On the basis of the technical schemes, the target part is any one or more parts on the body of the target object.
According to the technical solution of this embodiment, a to-be-processed video frame including the target object is collected in response to the special effect trigger operation; a target rendering mode for rendering the special effect model onto the object model is then determined according to the decoration type corresponding to the target special effect; finally, a special effect video frame in which the target special effect is added to the target object is obtained and displayed based on the target rendering mode. The target special effect is thus added to the target object according to different decoration types, which enhances the richness and interest of the special effect video and improves the user experience.
The video determining device provided by the embodiment of the disclosure can execute the video determining method provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the executing method.
It should be noted that the units and modules included in the apparatus are divided merely according to functional logic and are not limited to the above division, as long as the corresponding functions can be implemented; in addition, the specific names of the functional units are only used to distinguish one functional unit from another and do not limit the protection scope of the embodiments of the present disclosure.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. Referring now to fig. 5, a schematic diagram of an electronic device (e.g., the terminal device or the server in fig. 5) 500 suitable for implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be alternatively implemented or provided.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The electronic device provided by the embodiment of the present disclosure and the video determining method provided by the above embodiment belong to the same inventive concept, and technical details that are not described in detail in the embodiment can be referred to the above embodiment, and the embodiment has the same beneficial effects as the above embodiment.
The disclosed embodiments provide a computer storage medium having stored thereon a computer program that, when executed by a processor, implements the video determination method provided by the above-described embodiments.
It should be noted that the computer readable medium of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
responding to a special effect triggering operation, and acquiring a video frame to be processed comprising a target object;
determining a target rendering mode for rendering the special effect model to the object model according to the decoration type corresponding to the target special effect; wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types comprise a decoration type of a wrapped target part and a decoration type of a non-wrapped target part;
and obtaining and displaying a special effect video frame which adds the target special effect to the target object based on the target rendering mode.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first obtaining unit may also be described as "a unit that obtains at least two Internet Protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure [ example one ] there is provided a video determining method, the method comprising:
responding to a special effect triggering operation, and acquiring a video frame to be processed comprising a target object;
determining a target rendering mode for rendering the special effect model to the object model according to the decoration type corresponding to the target special effect; wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types include a decoration type of a wrapped target portion and a decoration type of a non-wrapped target portion;
and obtaining and displaying a special effect video frame which adds the target special effect to the target object based on the target rendering mode.
According to one or more embodiments of the present disclosure [ example two ] there is provided a video determining method, further comprising:
optionally, before determining a target rendering manner for rendering the special effect model onto the object model according to the decoration type corresponding to the target special effect, displaying at least one to-be-selected special effect in the to-be-processed video frame;
determining the target special effect and the decoration type of the target special effect based on the triggering operation of the at least one special effect to be selected.
According to one or more embodiments of the present disclosure [ example three ] there is provided a video determining method, further comprising:
optionally, if the decoration type is a decoration type wrapping a target part, determining that a target rendering mode for rendering the special effect model onto the object model is a key point binding rendering;
and if the decoration type is the decoration type of the non-wrapped target part, determining that a target rendering mode for rendering the special effect model to the object model is key point positioning rendering.
According to one or more embodiments of the present disclosure, [ example four ] there is provided a video determination method, further comprising:
optionally, before determining the target rendering manner, an object model corresponding to the target object is determined based on a reference model, so as to process the object model and the special effect model based on the target rendering manner; wherein the reference model corresponds to an initially set object.
According to one or more embodiments of the present disclosure, [ example five ] there is provided a video determination method, further comprising:
optionally, obtaining a bone point attribute corresponding to the target object; wherein the bone point attribute includes bone point position information, bone point rotation information, and scaling information of the part to which the bone point belongs;
determining an object model corresponding to the target object based on the bone point attributes and a reference model.
According to one or more embodiments of the present disclosure [ example six ] there is provided a video determining method, further comprising:
optionally, vertex position information of each of at least one vertex in the special effect model is obtained;
determining at least two first position information associated with the vertex position information in the object model;
and binding the vertex position information with at least two corresponding first position information to perform key point binding rendering.
According to one or more embodiments of the present disclosure, [ example seven ] there is provided a video determination method, the method further comprising:
optionally, before the key point positioning rendering, dividing the special effect model into a plurality of special effect sub-models; wherein the special effect sub-models correspond to respective sub-special effects;
for the plurality of special effect sub-models, determining third position information of the current special effect sub-model on the object model;
determining at least one target vertex associated with the third position information and located on the object model, so as to perform key point positioning rendering.
According to one or more embodiments of the present disclosure, [ example eight ] there is provided a video determination method, further comprising:
optionally, for each special effect sub-model, at least one target vertex associated with the current special effect sub-model is determined, and the current special effect sub-model is displayed, after moving according to a preset track, at a position corresponding to the at least one target vertex.
According to one or more embodiments of the present disclosure [ example nine ] there is provided a video determining method, further comprising:
optionally, determining a target display position based on the at least one target vertex and the corresponding weight value;
and controlling the current special effect sub-model to move according to a preset track and then displaying the current special effect sub-model at the target display position.
According to one or more embodiments of the present disclosure [ example ten ] there is provided a video determining method, the method further comprising:
optionally, based on the target rendering manner, rendering the target special effect to the target portion of the target object, so as to obtain a special effect video frame corresponding to the video frame to be processed.
According to one or more embodiments of the present disclosure, [ example eleven ] there is provided a video determination method, further comprising:
optionally, determining object attribute information of the target object and a target part to which the target special effect is added;
adjusting a current deformation parameter corresponding to the target part based on a deformation parameter corresponding to the object attribute information;
and displaying the target part based on the adjusted deformation parameters.
According to one or more embodiments of the present disclosure [ example twelve ] there is provided a video determination method, the method further comprising:
the target part is any one or more parts on the body of the target object.
According to one or more embodiments of the present disclosure, [ example thirteen ] provides a video determining apparatus, including:
the system comprises a to-be-processed video frame acquisition module, a to-be-processed video frame acquisition module and a processing module, wherein the to-be-processed video frame acquisition module is used for responding to special effect triggering operation and acquiring a to-be-processed video frame comprising a target object;
the target rendering mode determining module is used for determining a target rendering mode for rendering the special effect model to the object model according to the decoration type corresponding to the target special effect; wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types comprise a decoration type of a wrapped target part and a decoration type of a non-wrapped target part;
and the special effect video frame display module is used for obtaining and displaying a special effect video frame which adds the target special effect to the target object based on the target rendering mode.
The foregoing description is only illustrative of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (15)

1. A method for video determination, comprising:
responding to special effect trigger operation, and collecting a video frame to be processed comprising a target object;
determining a target rendering mode for rendering the special effect model to the object model according to the decoration type corresponding to the target special effect; wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types comprise a decoration type of a wrapped target part and a decoration type of a non-wrapped target part;
and obtaining and displaying a special effect video frame which adds the target special effect to the target object based on the target rendering mode.
2. The method of claim 1, wherein, before determining the target rendering mode for rendering the special effect model onto the object model according to the decoration type corresponding to the target special effect, the method further comprises:
displaying at least one special effect to be selected in the video frame to be processed;
determining the target special effect and the decoration type of the target special effect based on the triggering operation of the at least one special effect to be selected.
3. The method of claim 1, wherein the determining a target rendering mode for rendering the special effect model onto the object model according to the decoration type corresponding to the target special effect comprises:
if the decoration type is the decoration type wrapping a target part, determining that a target rendering mode for rendering the special effect model to the object model is key point binding rendering;
and if the decoration type is the decoration type of the non-wrapped target part, determining that a target rendering mode for rendering the special effect model to the object model is key point positioning rendering.
4. The method of claim 3, wherein, before determining the target rendering mode, the method further comprises:
determining an object model corresponding to the target object based on a reference model, and processing the object model and the special effect model based on the target rendering mode;
wherein the reference model corresponds to an initial setting object.
5. The method of claim 4, wherein determining the object model corresponding to the target object based on the reference model comprises:
acquiring a bone point attribute corresponding to the target object; the skeleton point attribute comprises skeleton point position information, skeleton point rotation information and scaling information of a part to which the skeleton point belongs;
determining an object model corresponding to the target object based on the bone point attributes and a reference model.
6. The method of claim 3 or 5, wherein the key point binding rendering comprises:
acquiring vertex position information of at least one vertex in the special effect model;
determining at least two first position information associated with the vertex position information in the object model;
and binding the vertex position information with the corresponding at least two first position information to perform key point binding rendering.
7. The method of claim 3, wherein the target rendering mode is key point positioning rendering, and before the key point positioning rendering, the method further comprises:
dividing the special effect model into a plurality of special effect submodels; wherein the special effect submodels correspond to respective sub-special effects;
for the plurality of special effect sub-models, determining third position information of the current special effect sub-model on the object model;
determining at least one target vertex associated with the third position information and located on the object model, so as to perform key point positioning rendering.
8. The method of claim 7, wherein the key point positioning rendering comprises:
and for each special effect sub-model, determining at least one target vertex associated with the current special effect sub-model, and displaying the current special effect sub-model to a position corresponding to the at least one target vertex after the current special effect sub-model moves according to a preset track.
9. The method of claim 8, wherein displaying the current special effect sub-model at a position corresponding to the at least one target vertex after it moves according to a preset track comprises:
determining a target display position based on the at least one target vertex and the corresponding weight value;
and controlling the current special effect sub-model to move according to a preset track and then displaying the current special effect sub-model at the target display position.
10. The method according to claim 1, wherein the obtaining a special effect video frame for adding the target special effect to the target object based on the target rendering manner comprises:
and rendering the target special effect to the target part of the target object based on the target rendering mode to obtain a special effect video frame corresponding to the video frame to be processed.
11. The method according to claim 1, wherein the obtaining and displaying a special effect video frame that adds the target special effect to the target object based on the target rendering manner further comprises:
determining object attribute information of the target object and a target part added with the target special effect;
adjusting a current deformation parameter corresponding to the target part based on a deformation parameter corresponding to the object attribute information;
and displaying the target part based on the adjusted deformation parameters.
12. The method of claim 1, wherein the target part is any one or more parts on the limbs or torso of the target object.
13. A video determination apparatus, comprising:
the system comprises a to-be-processed video frame acquisition module, a to-be-processed video frame acquisition module and a processing module, wherein the to-be-processed video frame acquisition module is used for responding to special effect triggering operation and acquiring a to-be-processed video frame comprising a target object;
the target rendering mode determining module is used for determining a target rendering mode for rendering the special effect model to the object model according to the decoration type corresponding to the target special effect; wherein the special effect model corresponds to the target special effect, the object model corresponds to the target object, and the decoration types comprise a decoration type of a wrapped target part and a decoration type of a non-wrapped target part;
and the special effect video frame display module is used for obtaining and displaying a special effect video frame which adds the target special effect to the target object based on the target rendering mode.
14. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device to store one or more programs,
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video determination method of any of claims 1-12.
15. A storage medium containing computer-executable instructions for performing the video determination method of any of claims 1-12 when executed by a computer processor.
CN202210907884.3A 2022-07-29 2022-07-29 Video determination method and device, electronic equipment and storage medium Pending CN115297271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210907884.3A CN115297271A (en) 2022-07-29 2022-07-29 Video determination method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210907884.3A CN115297271A (en) 2022-07-29 2022-07-29 Video determination method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115297271A true CN115297271A (en) 2022-11-04

Family

ID=83826352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210907884.3A Pending CN115297271A (en) 2022-07-29 2022-07-29 Video determination method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115297271A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111282277A (en) * 2020-02-28 2020-06-16 苏州叠纸网络科技股份有限公司 Special effect processing method, device and equipment and storage medium
CN111880709A (en) * 2020-07-31 2020-11-03 北京市商汤科技开发有限公司 Display method and device, computer equipment and storage medium
CN112218107A (en) * 2020-09-18 2021-01-12 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN114663555A (en) * 2022-02-14 2022-06-24 网易(杭州)网络有限公司 Model generation method and device and electronic equipment
CN114567805A (en) * 2022-02-24 2022-05-31 北京字跳网络技术有限公司 Method and device for determining special effect video, electronic equipment and storage medium
CN114445601A (en) * 2022-04-08 2022-05-06 北京大甜绵白糖科技有限公司 Image processing method, device, equipment and storage medium
CN114782593A (en) * 2022-04-24 2022-07-22 脸萌有限公司 Image processing method, image processing device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US11498003B2 (en) Image rendering method, device, and storage medium
US20130293686A1 (en) 3d reconstruction of human subject using a mobile device
WO2021094537A1 (en) 3d body model generation
CN109754464B (en) Method and apparatus for generating information
CN112035041B (en) Image processing method and device, electronic equipment and storage medium
US10818078B2 (en) Reconstruction and detection of occluded portions of 3D human body model using depth data from single viewpoint
CN114677386A (en) Special effect image processing method and device, electronic equipment and storage medium
CN112237739A (en) Game role rendering method and device, electronic equipment and computer readable medium
CN115690382A (en) Training method of deep learning model, and method and device for generating panorama
CN113327318B (en) Image display method, image display device, electronic equipment and computer readable medium
EP4332904A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN115984447A (en) Image rendering method, device, equipment and medium
CN114494658A (en) Special effect display method, device, equipment, storage medium and program product
CN111818265B (en) Interaction method and device based on augmented reality model, electronic equipment and medium
CN111833459B (en) Image processing method and device, electronic equipment and storage medium
CN114078181B (en) Method and device for establishing human body three-dimensional model, electronic equipment and storage medium
CN110189364B (en) Method and device for generating information, and target tracking method and device
CN114067030A (en) Dynamic fluid effect processing method and device, electronic equipment and readable medium
CN107657657A (en) A kind of three-dimensional human modeling method, device, system and storage medium
CN114782593A (en) Image processing method, image processing device, electronic equipment and storage medium
CN115297271A (en) Video determination method and device, electronic equipment and storage medium
US20240220406A1 (en) Collision processing method and apparatus for virtual object, and electronic device and storage medium
CN116228952A (en) Virtual object mounting method, device, equipment and medium
CN116360661A (en) Special effect processing method and device, electronic equipment and storage medium
CN115272145A (en) Image processing method, image processing device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination