
WO2021047420A1 - Method and device for rendering virtual gift special effects, and live broadcast system - Google Patents

Method and device for rendering virtual gift special effects, and live broadcast system

Info

Publication number
WO2021047420A1
Authority
WO
WIPO (PCT)
Prior art keywords
special effect
gift
live video
target
client
Prior art date
Application number
PCT/CN2020/112815
Other languages
English (en)
French (fr)
Inventor
杨克敏
陈杰
欧燕雄
Original Assignee
广州华多网络科技有限公司
Priority date
Filing date
Publication date
Application filed by 广州华多网络科技有限公司
Publication of WO2021047420A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/2187 Live feed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting

Definitions

  • the embodiments of the present application relate to the field of live broadcast technology, and in particular, to a method and device for rendering virtual gift special effects, a live broadcast system, and also to a computer device and a storage medium.
  • real-time video communication, such as live webcasts and video chat rooms, has become an increasingly popular form of entertainment.
  • the host user performs the live broadcast in the live broadcast room, and the viewer user watches the host's live broadcast process on the viewer client.
  • the audience user can choose a specific target special effect gift to give to the anchor, and add the target special effect gift to a specific position of the anchor screen according to the corresponding entertainment template to display the corresponding special effect.
  • in the existing gift special effect display method, the gift special effect is synthesized into the video frame by the host client, and the video frame containing the gift special effect is put into the video stream and transmitted to other host clients or audience clients for special effect display. However, this display method means the gift special effect can only be displayed in the live video playback area of the client, which limits the display effect of the special effect.
  • the purpose of this application is to solve at least one of the above technical shortcomings, in particular the problem that the gift special effect can only be displayed in the live video playback area of the client, which limits the special effect display effect.
  • an embodiment of the present application provides a method for rendering virtual gift special effects, including the following steps:
  • receiving live video stream data and a target special effect gift, and obtaining the combined location information of the live video and the target special effect gift from the live video stream data; wherein the combined location information includes the target position at which the target special effect gift, obtained based on the host client's recognition of the live video, is synthesized on the live video;
  • adding the target special effect gift to the live video for synthesis according to the combined location information to obtain a special effect frame image;
  • setting a special effect display area on the live window, and synchronously rendering the special effect frame image in the special effect display area during the process of playing the live video.
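  • purely as an illustrative sketch of these three client-side steps (the packet layout, the key-point grouping, and the NumPy image handling are assumptions rather than the claimed implementation; later sketches expand the individual steps):

```python
import json
import numpy as np

def obtain_position_info(packet: dict):
    """Step 1: split a received stream packet into the video frame and the
    synthesis position information carried alongside it (assumed layout)."""
    return packet["frame"], json.loads(packet["meta"])

def composite_gift(frame: np.ndarray, gift_rgba: np.ndarray, info: dict) -> np.ndarray:
    """Step 2: alpha-blend the gift image onto the frame at the target position
    given by the first contour key point of its characteristic area."""
    y, x = info["back"][0]
    h, w = gift_rgba.shape[:2]
    alpha = gift_rgba[:, :, 3:4].astype(np.float32) / 255.0
    out = frame.copy()
    region = out[y:y + h, x:x + w].astype(np.float32)
    out[y:y + h, x:x + w] = (alpha * gift_rgba[:, :, :3] + (1 - alpha) * region).astype(np.uint8)
    return out

def render_in_effect_area(effect_frame: np.ndarray) -> None:
    """Step 3: hand the special effect frame to the (larger) special effect
    display area that overlays the live video playback area."""
    print("effect frame ready:", effect_frame.shape)

# usage with one stand-in packet and one RGBA gift layer (e.g. "angel wings")
packet = {"frame": np.zeros((300, 400, 3), np.uint8),
          "meta": json.dumps({"back": [[160, 90]]})}
gift = np.zeros((64, 64, 4), np.uint8)
frame, info = obtain_position_info(packet)
render_in_effect_area(composite_gift(frame, gift, info))
```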
  • an embodiment of the present application provides a method for rendering virtual gift special effects, including the following steps:
  • receiving live video stream data sent by the host client, wherein the live video stream data includes the combined location information of the live video and the target special effect gift; and forwarding the live video stream data to the viewer client; wherein the viewer client adds the target special effect gift to the live video for synthesis according to the combined location information to obtain a special effect frame image, sets a special effect display area on the live window, and synchronously renders the special effect frame image in the special effect display area during the process of playing the live video.
  • an embodiment of the present application provides a rendering device for virtual gift special effects, including:
  • the information acquisition module is used to receive live video stream data and a target special effect gift, and obtain the combined location information of the live video and the target special effect gift from the live video stream data; wherein the combined location information includes the target position at which the target special effect gift, obtained based on the host client's recognition of the live video, is synthesized on the live video;
  • a special effect frame synthesis module configured to add the target special effect gift to the live video for synthesis according to the synthesis position information, to obtain a special effect frame image
  • the special effect frame rendering module is configured to set a special effect display area on the live window, and render the special effect frame image synchronously in the special effect display area during the process of playing the live video.
  • an embodiment of the present application provides a virtual gift special effect rendering device, including:
  • the video stream receiving module is configured to receive live video stream data sent by the host client; wherein the live video stream data includes the combined location information of the live video and the target special effect gift;
  • the video stream forwarding module is used to forward the live video stream data to the audience client; wherein the audience client adds the target special effect gift to the live video for synthesis according to the combined location information to obtain a special effect frame image, sets a special effect display area on the live window, and renders the special effect frame image synchronously in the special effect display area during the process of playing the live video.
  • an embodiment of the present application provides a live broadcast system, including a server, an anchor client, and an audience client, where the anchor client communicates with the audience client via the server through a network;
  • the server is configured to receive a virtual gift giving instruction sent by the viewer client, and send the gift instruction to the anchor client;
  • the anchor client is configured to receive the gift instruction and obtain a target special effect gift identifier; find the target special effect gift according to the target special effect gift identifier, and determine the characteristic area corresponding to the target special effect gift; determine the combined location information of the target special effect gift on the live video according to the characteristic area; and encode the combined location information and the live video into live video stream data and send it to the server;
  • the server is also used to forward the live video stream data to the viewer client;
  • the audience client is configured to receive the live video stream data and the target special effect gift, and obtain the combined location information of the live video and the target special effect gift from the live video stream data; add the target special effect gift to the live video for synthesis according to the combined location information to obtain a special effect frame image; and set a special effect display area on the live window and synchronously render the special effect frame image in the special effect display area during the process of playing the live video.
  • an embodiment of the present application provides a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor.
  • when the processor executes the program, the method for rendering virtual gift special effects described in any of the above embodiments is implemented.
  • an embodiment of the present application provides a storage medium containing computer-executable instructions, and when the computer-executable instructions are executed by a computer processor, they are used to execute the steps of the method for rendering virtual gift special effects described in any of the above embodiments.
  • according to the method and device for rendering virtual gift special effects, the live broadcast system, the computer device, and the storage medium described above, the live video stream data and the target special effect gift are received, and the combined location information of the live video and the target special effect gift is obtained from the live video stream data; the combined location information includes the target position at which the target special effect gift, obtained based on the host client's recognition of the live video, is synthesized on the live video; the target special effect gift is added to the live video for synthesis according to the combined location information to obtain a special effect frame image; and a special effect display area is set on the live broadcast window.
  • the special effect frame image is synchronously rendered in the special effect display area, so that the virtual gift special effect is not limited to the live video playback area of the client and can be rendered and displayed across the video playback area.
  • in this solution, the host client encodes and encapsulates the combined location information alongside the live video, and the audience client decodes the combined location information, which facilitates secondary editing of the virtual gift special effect display, so that the special effect of the target special effect gift can be accurately added to the target position of the live video.
  • FIG. 1 is a schematic diagram of a system framework of a method for rendering a virtual gift special effect provided by an embodiment
  • FIG. 2 is a schematic structural diagram of a live broadcast system provided by an embodiment
  • FIG. 3 is a flowchart of a method for rendering a virtual gift special effect provided by an embodiment
  • Figure 4 is a rendering effect diagram of a virtual gift in a live broadcast technology
  • FIG. 5 is an effect diagram of virtual gift rendering provided by an embodiment
  • FIG. 6 is a flowchart of a method for synthesizing and displaying a target special effect gift provided by an embodiment
  • Figure 7 is a composite effect diagram of a virtual gift in a live broadcast technology
  • FIG. 8 is another flowchart of a method for rendering a virtual gift special effect provided by an embodiment
  • FIG. 9 is a sequence diagram of a virtual gift giving process provided by an embodiment
  • FIG. 10 is a schematic structural diagram of a rendering device for virtual gift special effects provided by an embodiment
  • FIG. 11 is a schematic diagram of another structure of a virtual gift special effect rendering apparatus provided by an embodiment.
  • FIG. 1 is a schematic diagram of a system framework of a method for rendering a virtual gift special effect provided by an embodiment.
  • the system framework may include a server and a client.
  • the live broadcast platform on the server side may include multiple virtual live broadcast rooms and servers, etc., and each virtual live broadcast room correspondingly plays different live content.
  • Clients include a viewer client and a host client.
  • the host conducts live broadcast through the host client, and the viewer chooses to enter a virtual live room through the viewer client to watch the host live.
  • the audience client and the host client can enter the live broadcast platform through a live broadcast application (Application, APP) installed on the terminal device.
  • Application, APP live broadcast application
  • the terminal device may be a terminal such as a smart phone, a tablet computer, an e-reader, a desktop computer, or a notebook computer, which is not limited.
  • a server is a back-end server used to provide back-end services to terminal devices, and it can be implemented by an independent server or a server cluster composed of multiple servers.
  • the method for rendering virtual gift special effects is suitable for presenting virtual gifts and displaying virtual gift special effects during a live broadcast. For example, an audience member may present a virtual gift to a target anchor through the audience client, so that the special effect of the virtual gift is displayed on the host client where the target anchor is located and on multiple viewer clients; or a host may give a virtual gift to another host through the host client, so that the special effect of the virtual gift is displayed on the host clients of both the giving host and the receiving host and on multiple audience clients.
  • the following description takes, as an example, the case where the audience client presents a virtual special effect gift to the target anchor and the virtual gift special effect is rendered on the audience client.
  • FIG. 2 is a schematic structural diagram of a live broadcast system provided by an embodiment.
  • the live broadcast system 200 includes: an anchor client 210, an audience client 230, and a server 220.
  • the host client 210 communicates with the audience client 230 via the server 220 through a network.
  • the host client may be installed on a computer or on a mobile terminal such as a mobile phone or a tablet computer; similarly, the audience client may be installed on a computer or on a mobile terminal such as a mobile phone or a tablet computer.
  • the server 220 is configured to receive a virtual gift giving instruction sent by the viewer client 230, and send the gift instruction to the anchor client 210;
  • the host client 210 is configured to receive the gift instruction and obtain the target special effect gift identifier; find the target special effect gift according to the target special effect gift identifier, and determine the characteristic area corresponding to the target special effect gift; determine the combined location information of the target special effect gift on the live video according to the characteristic area; and encode the combined location information and the live video into live video stream data and send it to the server;
  • the server 220 is also used to forward the live video stream data to the viewer client 230;
  • the audience client 230 is configured to receive the live video stream data and the target special effect gift, and obtain the combined location information of the live video and the target special effect gift from the live video stream data; add the target special effect gift to the live video for synthesis according to the combined location information to obtain a special effect frame image; and set a special effect display area on the live broadcast window and synchronously render the special effect frame image in the special effect display area during the process of playing the live video.
  • FIG. 3 is a flowchart of a method for rendering a virtual gift special effect provided by an embodiment.
  • the rendering method of the virtual gift special effect is executed on a client, such as a viewer client.
  • This embodiment takes the viewer client as an example for description.
  • the rendering method of the virtual gift special effect may include the following steps:
  • S110 Receive live video stream data and a target special effect gift, and obtain synthetic location information of the live video and the target special effect gift from the live video stream data.
  • the combined location information includes the target position at which the target special effect gift, obtained by the host client recognizing the live video, is synthesized on the live video.
  • the user sends a virtual gift giving instruction to the server through the audience client.
  • the host client receives the gift instruction forwarded by the server, and obtains the live video and the characteristic area corresponding to the target special effect gift.
  • the characteristic area may be recognized by the host client according to the gift instruction, or it may be obtained by the server after receiving the gift instruction and then forwarded to the host client.
  • the following description takes, as an example, the case where the anchor client recognizes the characteristic area corresponding to the target special effect gift according to the gift instruction.
  • when the host client receives the virtual gift giving instruction sent by the viewer client, it obtains the live video of the live room where the target host is located, extracts the current video frame image from the live video, and processes the current video frame image according to the target special effect gift to identify the relevant information used to synthesize the target special effect gift, such as the synthesis position information of the characteristic area of the target special effect gift in the current video frame image. The synthesis position information is used to synthesize the target special effect gift to the target position of the current video frame image, where the characteristic area of the target special effect gift corresponds one-to-one to the target position of the current video frame image.
  • the synthesized position information may include: at least one of face information, human contour information, gesture information, and human bone information.
  • the synthesized position information may be represented by one or more key points of the person's outline, where each key point has a unique coordinate value in the current video frame image; according to the coordinate values of one or more of these key points, the target special effect gift can be added to the target position of the current video frame image.
  • the collection of key points of different person outlines corresponds to different human body information.
  • the face part of the current video frame image is identified, and the contour key points of the face part are extracted.
  • the face information may include 106 contour key points, each contour key point corresponding to a feature point of the face; each contour key point has a unique coordinate value, which indicates the position of the contour key point in the current video frame image.
  • the body contour includes 59 contour key points.
  • Each contour key point corresponds to the edge contour of each part of the human body.
  • the human skeleton includes 22 contour key points.
  • Each contour key point corresponds to the joint point of the human bone.
  • the coordinate value of each contour key point indicates its position in the current video frame image.
  • the characteristic area corresponding to the target special effect gift corresponds to the target position in the current video frame image.
  • for example, the characteristic area of the target special effect gift "angel wings" is the "back". The contour key points belonging to the "back" feature are identified and determined as the target contour points, and their coordinate values in the current video frame image determine the target position at which the target special effect gift is synthesized on the current video frame image. The target position may be the collection of the coordinate values of the target contour points, or the area formed by connecting the target contour points.
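  • as a minimal sketch of this mapping (the key-point coordinates, the gift configuration table, and the helper function are illustrative assumptions, not the recognition algorithm itself):

```python
from typing import Dict, List, Tuple

Point = Tuple[int, int]

# hypothetical recognition output: contour key points grouped by body feature
contour_keypoints: Dict[str, List[Point]] = {
    "face": [(120, 80), (150, 78), (180, 82)],   # a few of the 106 face key points
    "back": [(90, 160), (210, 158)],             # a few of the body contour key points
}

# hypothetical gift configuration: each gift identifier maps to its characteristic area
gift_feature_area = {"ID1648": "back"}           # e.g. "angel wings" attach to the back

def target_position(gift_id: str) -> List[Point]:
    """Return the target contour points for the gift; the target position may be
    this point set or the area enclosed by connecting the points."""
    feature = gift_feature_area[gift_id]
    return contour_keypoints[feature]

print(target_position("ID1648"))   # [(90, 160), (210, 158)]
```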
  • the composite location information and the live video are encoded and packaged to form live video stream data, so that the composite location information can be forwarded to the viewer client via the server along with the live video.
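  • a simplified sketch of this encoding and packaging step is shown below; the per-packet JSON layout and the frame index are assumptions made for illustration, since production systems typically carry such metadata as codec- or container-level side data:

```python
import json

def pack_frame(encoded_frame: bytes, frame_index: int, position_info: dict) -> dict:
    """Host side: bundle one encoded frame with its synthesis position information
    so that both reach the viewer client, via the server, together."""
    return {"index": frame_index,
            "frame": encoded_frame,
            "meta": json.dumps(position_info)}

def unpack_frame(packet: dict):
    """Viewer side: recover the frame and the matching position information;
    the frame index ties the metadata to the same frame image."""
    return packet["index"], packet["frame"], json.loads(packet["meta"])

pkt = pack_frame(b"<encoded-frame-bytes>", 42, {"back": [[90, 160], [210, 158]]})
print(unpack_frame(pkt)[2])   # {'back': [[90, 160], [210, 158]]}
```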
  • after the viewer client receives the live video stream data, it decodes the data to obtain the synthesized location information and the live video, and obtains the current video frame image from the live video.
  • the current video frame image used by the host client to identify the synthesized location information and the current video frame image obtained by the viewer client from the live video are the same frame image, although their resolution, size, color, and so on as displayed on the host client and the viewer client may differ.
  • the target special effect gift can be a special effect gift in a two-dimensional display form, or a special effect gift in a three-dimensional display form, that is, a three-dimensional special effect gift.
  • the target special effect gift is preferably a three-dimensional special effect gift, and the three-dimensional special effect gift is used to create a three-dimensional special effect, enhance the reality experience, and improve the rendering effect of the virtual gift special effect.
  • the audience client obtains the composite position information, determines the target position of the target special effect gift in the current video frame image of the live video according to the composite position information, adds the target special effect gift to the target position and synthesizes the current video frame image to obtain the special effect frame image.
  • the current video frame image may be one frame of video frame image or multiple frames of video frame image.
  • the current video frame image can be divided into a foreground image layer and a background image layer
  • the target special effect gift can include one or more virtual gift special effect layers.
  • the target special effect gift can be split to generate one or more virtual gift special effect layers corresponding to the target special effect gift; the target position of each virtual gift special effect layer on the foreground image layer or the background image layer is determined according to the synthesis position information, and the virtual gift special effect layers, the foreground image layer, and the background image layer are synthesized to obtain the special effect frame image.
  • each of the virtual gift special effect layers, the foreground image layer and the background image layer are synthesized and displayed in a priority order according to the synthesis position information.
  • the live broadcast window refers to the corresponding window when the live broadcast application is in the open state, and in the maximized state, it can occupy the entire terminal device screen.
  • a special effect display area is set on the live window, the special effect display area is set above the live video playback area, and the special effect display area is larger than the live video playback area, so that the special effect corresponding to the target special effect gift can be magnified and rendered.
  • the live video playing area refers to an area used to play the live video.
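  • the sketch below shows one way such an enlarged special effect display area could be laid out over the playback area; the rectangle representation and the 1.5x enlargement factor are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: int   # left
    y: int   # top
    w: int   # width
    h: int   # height

def effect_display_area(playback: Rect, scale: float = 1.5) -> Rect:
    """Return a special effect display area centered over the live video playback
    area and larger than it, so effects can extend past the video."""
    w, h = int(playback.w * scale), int(playback.h * scale)
    x = playback.x - (w - playback.w) // 2
    y = playback.y - (h - playback.h) // 2
    return Rect(x, y, w, h)

playback_area = Rect(x=100, y=100, w=400, h=300)
print(effect_display_area(playback_area))   # Rect(x=0, y=25, w=600, h=450)
```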
  • after the viewer client obtains the composite location information, it converts the composite location information according to the size of the special effect display area of the current viewer client, determines the target position of the target special effect gift on the current video frame image according to the converted composite location information, and adds the target special effect gift to the target position for synthesis.
  • for example, the host client recognizes the current video frame image at a resolution of 400*300, and the coordinate value of target contour point A in the obtained composite position information is (50, 50); the viewer client displays the same current video frame image at a resolution of 800*600, so the composite position information is converted correspondingly to obtain the coordinate value of the current target contour point A' as (100, 100).
  • the target special effect gift is added to the target position determined by the converted synthesis position information for synthesis.
  • the same current video frame image means that the content of the video frame image is the same, and other features such as resolution and image size may be different.
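  • a small sketch of this conversion, reusing the numbers from the example above (the helper function is an assumption for illustration):

```python
def convert_point(point, recognized_size, display_size):
    """Scale a contour key point from the resolution at which the host client
    recognized the frame to the resolution at which the viewer client displays it."""
    (x, y) = point
    (rw, rh), (dw, dh) = recognized_size, display_size
    return (x * dw / rw, y * dh / rh)

# target contour point A recognized at 400*300 and displayed at 800*600
print(convert_point((50, 50), (400, 300), (800, 600)))   # (100.0, 100.0)
```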
  • the special effect display of the special effect frame image and the live video playback occupy different threads, so that while one thread plays the live video, the other thread can synchronously render the special effect frame images to the special effect display area; video playback and special effect display are thus carried out simultaneously, which improves the display effect of the special effects.
  • the area of the special effect layer corresponding to the special effect gift that is blocked by the host character is made transparent, so that the display of special effects across the live video playback area does not affect the normal video playback of the live video playback area.
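  • the sketch below illustrates the two-thread arrangement with a shared queue; the queue-based hand-off and the frame counts are illustrative assumptions:

```python
import queue
import threading
import time

effect_frames: "queue.Queue[str]" = queue.Queue()

def play_live_video():
    """Thread 1: plays the live video in the live video playback area."""
    for i in range(3):
        print(f"playback thread: showing video frame {i}")
        time.sleep(0.03)   # placeholder frame interval

def render_effects():
    """Thread 2: synchronously renders special effect frames into the special effect
    display area; regions occluded by the host stay transparent so the video
    underneath remains visible."""
    for _ in range(3):
        print("effect thread: rendering", effect_frames.get())

threading.Thread(target=play_live_video).start()
worker = threading.Thread(target=render_effects)
worker.start()
for i in range(3):
    effect_frames.put(f"effect frame {i}")
worker.join()
```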
  • Figure 4 is a rendering effect diagram of a virtual gift in the related live broadcast technology.
  • as shown in Figure 4, an AR virtual gift can only be displayed in the live video playback area, and the display effect is limited; after adopting the technology of this application, the special effect can be displayed across the live video playback area, and a better special effect display effect is obtained.
  • FIG. 5 is an effect diagram of virtual gift rendering provided by an embodiment.
  • in this embodiment, a special effect display area is set over the live video playback area, the area of the special effect display area is larger than the live video playback area, and the virtual gift special effect is rendered to the special effect display area, so that the virtual gift special effect can be displayed across the live video area; as shown in Figure 5, the "angel wings" of the virtual special effect gift thus obtain a better special effect display effect.
  • in the virtual gift special effect rendering method, the live video stream data and the target special effect gift are received, and the combined location information of the live video and the target special effect gift is obtained from the live video stream data, wherein the combined location information includes the target position at which the target special effect gift, obtained based on the host client's recognition of the live video, is synthesized on the live video; the target special effect gift is added to the live video for synthesis according to the combined location information to obtain the special effect frame image; and a special effect display area is set on the live broadcast window.
  • the special effect frame image is rendered synchronously in the special effect display area, so that the virtual gift special effect is not limited to the live video playback area of the client and can be rendered and displayed across the video playback area, which improves the display effect of the virtual gift special effect.
  • in this solution, the host client encodes and encapsulates the combined location information alongside the live video, and the audience client decodes the combined location information, which facilitates secondary editing of the virtual gift special effect display.
  • FIG. 6 is a flowchart of a method for synthesizing and displaying a target special effect gift provided by an embodiment. As shown in FIG. 6, in an embodiment, adding the target special effect gift to the live video for synthesis according to the synthesized position information in step S120 to obtain a special effect frame image may include the following steps:
  • the current video frame image may be one frame of video frame image or multiple frames of video frame image.
  • S1202 Segment the current video frame image into a foreground image layer and a background image layer, and generate at least one virtual gift special effect layer according to the target special effect gift.
  • the existing algorithm can be used to compare the pixel values of the current video frame image, and the current video frame image is divided into the foreground area and the background area. For example, the area corresponding to the set of pixel points with a pixel value greater than a certain threshold is used as the foreground area, and the area corresponding to the set of pixel points whose pixel value is less than the threshold is regarded as the background area.
  • the foreground area and the background area are located in different image layers, where the image layer where the foreground area is located is the foreground image layer, and the image layer where the background area is located is the background image layer.
  • the foreground image layer may include an anchor character area in the live video
  • the background image layer may include a background area excluding the anchor character area in the live video.
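  • a toy sketch of such a threshold-based split on a grayscale frame is shown below; the NumPy masking and the fixed threshold are illustrative, and practical systems often use learned portrait segmentation instead:

```python
import numpy as np

def split_layers(gray_frame: np.ndarray, threshold: int = 128):
    """Split a frame into a foreground layer and a background layer by pixel value:
    pixels above the threshold go to the foreground layer (e.g. the anchor character),
    the rest go to the background layer; unused pixels are left at zero."""
    foreground = np.where(gray_frame > threshold, gray_frame, 0)
    background = np.where(gray_frame <= threshold, gray_frame, 0)
    return foreground, background

frame = np.array([[200, 40], [150, 90]], dtype=np.uint8)
fg, bg = split_layers(frame)
print(fg)   # the 200 and 150 pixels survive in the foreground layer
print(bg)   # the 40 and 90 pixels survive in the background layer
```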
  • the target special effect gift can be split to generate one or more virtual gift special effect layers corresponding to the target special effect gift.
  • for example, the "mask" gift has only one virtual gift special effect layer, while the "snowflake" gift can include multiple virtual gift special effect layers, such as the first snowflake on virtual gift special effect layer A, the second snowflake on virtual gift special effect layer B, and the third and fourth snowflakes on virtual gift special effect layer C.
  • the viewer client obtains the foreground image layer and the background image layer of the current video frame image, and the one or more virtual gift special effect layers corresponding to the target special effect gift; optionally, these can be processed and cached accordingly.
  • each of the virtual gift special effect layers, the foreground image layer and the background image layer are synthesized and displayed in sequence.
  • for example, the synthesized position information includes the positions A (50, 50), B (55, 60), and C (70, 100) of key points of the character outline, and the layers include foreground image layer a, background image layer b, virtual gift special effect layer c, virtual gift special effect layer d, and virtual gift special effect layer e; the composition order of the layers is b, c, a, d, and e, where c corresponds to position A, d corresponds to position B, and e corresponds to position C.
  • Figure 7 is a composite effect diagram of a virtual gift in the related live broadcast technology.
  • in the related live broadcast technology, the virtual gift is directly added to the live video, so that the virtual gift and the live video image overlap and obscure the anchor character, which affects the user's viewing; the technology of this application can avoid obscuring the anchor character, and a better special effect display effect can be obtained.
  • for example, the effect of adding a "mask" over the eyes of the anchor character shows that the target special effect gift can be synthesized into the target position of the current video frame image of the live video according to the contour characteristics of the human body, so that a better special effect display effect is obtained.
  • in an embodiment, step S1203, combining each of the virtual gift special effect layers with the foreground image layer and the background image layer in sequence according to the combining position information, may include the following steps:
  • S201 Determine the priority of each virtual gift special effect layer and the foreground image layer and the background image layer through the target special effect gift identifier.
  • the priority of each virtual gift special effect layer, the foreground image layer and the background image layer in the target special effect gift is preset.
  • the layers are synthesized in turn according to the priority, from high to low or from low to high.
  • the identifier of the target special effect gift carries the synthesis sequence between each virtual gift special effect layer and the foreground image layer and the background image layer corresponding to the target special effect gift.
  • the synthesized position information may correspond to the target position synthesized on the foreground image layer or the background image layer by one or more virtual gift special effect layers.
  • S202 Combine each of the virtual gift special effect layers with the foreground image layer and the background image layer according to the priority from high to low according to the combined position information to obtain a special effect frame image.
  • the identifier of the target special effect gift is 01
  • the corresponding virtual gift includes angel wings, feather 001, feather 002, and so on.
  • the special effect layers of angel wings, feather 001, feather 002, and anchor are special effect layer A, special effect layer B, special effect layer C, and special effect layer D, respectively.
  • the foreground image layer and the background image layer can be understood as special effect layers.
  • the special effects corresponding to this target special effects gift are: angel wings are added to the anchor’s back; feather 001 is added to the anchor’s arm, covering the corresponding area of the anchor’s arm; feather 002 is located on the anchor’s shoulder, and half of it is blocked by the anchor, and the other half is not. Obscured by the anchor.
  • the priority of each special effect layer is pre-configured.
  • the higher the priority of a special effect layer, the closer its position is to the bottom layer.
  • the priority of each special effect layer from high to low is: special effect layer C, special effect layer A, special effect layer D, and special effect layer B. Synthesizing according to priority from high to low means that the special effect layer D of the anchor is placed over the special effect layer C corresponding to feather 002 and the special effect layer A corresponding to angel wings, so that the angel wings grow from the anchor's back and the anchor partially occludes feather 002; the special effect layer B corresponding to feather 001 is then synthesized on top, producing the effect of feather 001 occluding the anchor's arm.
  • in each special effect layer, the area outside the object image is transparent or semi-transparent. For example, in the special effect layer of the angel wings, the area other than the angel wings is transparent, so that other special effect objects located under the angel wings layer can be displayed through this area.
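  • as a compact sketch of this priority-ordered compositing with transparent regions (the RGBA arrays, layer sizes, and helper names are assumptions; the ordering follows the angel wings and feather example above):

```python
import numpy as np

def over(canvas: np.ndarray, layer_rgba: np.ndarray) -> np.ndarray:
    """Paint one RGBA layer over the canvas (simple 'over' operator): opaque pixels
    cover what is below, transparent pixels leave it visible."""
    alpha = layer_rgba[:, :, 3:4].astype(np.float32) / 255.0
    return (alpha * layer_rgba[:, :, :3] + (1 - alpha) * canvas).astype(np.uint8)

def composite(layers_high_to_low_priority):
    """Composite layers in priority order from high to low: the highest priority
    layer is painted first and therefore sits closest to the bottom."""
    h, w = layers_high_to_low_priority[0].shape[:2]
    canvas = np.zeros((h, w, 3), np.uint8)
    for layer in layers_high_to_low_priority:
        canvas = over(canvas, layer)
    return canvas

size = (120, 160, 4)
feather_002 = np.zeros(size, np.uint8)   # special effect layer C (highest priority, bottom)
angel_wings = np.zeros(size, np.uint8)   # special effect layer A
anchor_fg   = np.zeros(size, np.uint8)   # special effect layer D (anchor foreground)
feather_001 = np.zeros(size, np.uint8)   # special effect layer B (lowest priority, painted
                                         # last, so it occludes the anchor's arm where opaque)
effect_frame = composite([feather_002, angel_wings, anchor_fg, feather_001])
print(effect_frame.shape)                # (120, 160, 3)
```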
  • in this embodiment, the viewer client obtains the combined position information of the live video and the target special effect gift from the live video stream data; the live video is divided into a foreground image layer and a background image layer, and at least one virtual gift special effect layer is generated according to the target special effect gift; and the virtual gift special effect layers, the foreground image layer, and the background image layer are synthesized and displayed in order according to the synthetic position information, so that the target special effect gift is synthesized to the predetermined target position according to the synthetic position information obtained from the silhouette of the person. In this way, some special effect layers of the target special effect gift obscure the anchor character in the video and some do not, which achieves the special effect of combining the virtual gift with the anchor character without affecting the display of the anchor in the video, and at the same time improves the display effect of the virtual gift special effect.
  • FIG. 8 is another flowchart of a method for rendering a virtual gift special effect provided by an embodiment.
  • the method for rendering a virtual gift special effect is applied to the server and can be executed by the server.
  • the rendering method of the virtual gift special effect may include the following steps:
  • S510 Receive live video stream data sent by the host client.
  • the live video stream data includes the combined location information of the live video and the target special effect gift.
  • the server receives the gift instruction of the virtual gift, forwards the gift instruction to the host client, and obtains the live video stream data sent by the host client.
  • the live video stream data is formed by encoding and packaging the combined location information and the live video after the host client recognizes the combined location information, so that the combined location information can be sent to the server along with the live video.
  • the server forwards the live video stream data to the audience client; the audience client adds the target special effect gift to the live video for synthesis according to the synthesis location information to obtain a special effect frame image, sets a special effect display area on the live broadcast window, and, in the process of playing the live video, renders the special effect frame image synchronously in the special effect display area.
  • the audience client obtains the composite position information, determines the target position of the target special effect gift in the current video frame image of the live video according to the composite position information, adds the target special effect gift to the target position and synthesizes the current video frame image to obtain the special effect frame image.
  • the current video frame image may be one frame of video frame image or multiple frames of video frame image.
  • the current video frame image can be divided into a foreground image layer and a background image layer
  • the target special effect gift can include one or more virtual gift special effect layers.
  • the target special effect gift can be split to generate one or more virtual gift special effect layers corresponding to the target special effect gift; the target position of each virtual gift special effect layer on the foreground image layer or the background image layer is determined according to the synthesis position information, and the virtual gift special effect layers, the foreground image layer, and the background image layer are synthesized to obtain the special effect frame image.
  • each of the virtual gift special effect layers, the foreground image layer and the background image layer are synthesized and displayed in a priority order according to the synthesis position information.
  • a special effect display area can be set on the live window, the special effect display area is set above the live video playback area, and the special effect display area is larger than the live video playback area, so that the special effect corresponding to the target special effect gift can be magnified and rendered to improve the special effect The effect of the show.
  • the live broadcast window refers to the corresponding window when the live broadcast application is in the open state, and in the maximized state, it can occupy the entire terminal device screen.
  • the special effect display of the special effect frame image and the live video playback occupy different threads, so that while one thread plays the live video, the other thread can synchronously render the special effect frame images to the special effect display area; video playback and special effect display are thus carried out simultaneously, which improves the display effect of the special effects.
  • the area of the special effect layer corresponding to the special effect gift that is blocked by the host character is made transparent, so that the display of special effects across the live video playback area does not affect the normal video playback of the live video playback area.
  • the method for rendering virtual gift special effects applied to the server receives the live video stream data sent by the host client, where the live video stream data includes the combined location information of the live video and the target special effect gift, and forwards the live video stream data to the audience client; the audience client adds the target special effect gift to the live video for synthesis according to the synthesis location information to obtain the special effect frame image, sets the special effect display area on the live broadcast window, and synchronously renders the special effect frame images in the special effect display area in the process of playing the live video, so that the virtual gift special effect is not limited to the live video playback area of the client but can be rendered and displayed across the video playback area.
  • before receiving the live video stream data sent by the host client in step S510, the following steps may be further included:
  • receiving the gift instruction of the virtual gift sent by the viewer client, and sending the gift instruction to the anchor client; wherein the anchor client obtains the target special effect gift identifier according to the gift instruction, finds the target special effect gift according to the target special effect gift identifier, determines the characteristic area corresponding to the target special effect gift, and determines the synthesized location information of the target special effect gift on the live video according to the characteristic area.
  • the user sends a virtual gift giving instruction to the server through the viewer client, and the host client receives the virtual gift giving instruction forwarded by the server, obtains the live video of the live room where the target host is located, extracts the current video frame image from the live video, and at the same time finds the target special effect gift according to the obtained target special effect gift identifier and determines its characteristic area.
  • the host client processes the current video frame image according to the target special effect gift, and identifies the relevant information used to synthesize the target special effect gift on the current video frame image, such as the synthesis position information of the characteristic area of the target special effect gift in the current video frame image, where the synthesis position information is used to synthesize the target special effect gift to the target position of the current video frame image, and the characteristic area of the target special effect gift corresponds one-to-one to the target position of the current video frame image.
  • FIG. 9 is a sequence diagram of the virtual gift giving process provided by an embodiment. In scenario 1, the audience presents a three-dimensional special effect gift "angel wings" to the anchor, and the corresponding identifier is ID1648; the main process can be as follows:
  • the audience client sends a gift-giving request to the server.
  • the audience user W sends a gift-giving request to the server through the audience client, where the virtual gift is ID1648.
  • the server performs business processing.
  • after the server receives the gift-giving request, it performs the corresponding business processing (such as deduction, etc.).
  • the server broadcasts gift-giving information.
  • after receiving the gift-giving information, the host client queries the virtual gift in order to identify the synthesized location information.
  • after the host client receives the broadcast of the gift-giving information, it queries the gift configuration according to the virtual gift ID 1648 and learns that the virtual gift is a three-dimensional special effect gift (such as an AI (Artificial Intelligence) gift).
  • the synthetic location information that needs to be recognized includes the face and back.
  • the host client starts to perform face recognition and background segmentation recognition.
  • the host client packs the synthesized location information into the live video stream for transmission.
  • the host client packs the synthesized location information (may be AI information) obtained by face recognition and background segmentation recognition into the live video stream, so that the synthesized location information is transmitted to the server along with the live video stream.
  • the server forwards the live video stream.
  • the server transmits the live video stream containing the synthesized location information to the audience client.
  • the audience client obtains the synthesized location information, and performs virtual gift synthesis and display.
  • the audience client decodes the synthesized location information from the live video stream, combines the synthesized location information with the virtual gift, and plays the special effect of angel wings: angel wings grow behind the anchor.
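  • the sketch below walks through this sequence as plain in-process function calls; the class and method names are assumptions made for illustration, and no real networking or recognition is involved:

```python
class Server:
    def __init__(self, host, viewers):
        self.host, self.viewers = host, viewers

    def on_gift_request(self, gift_id: str):
        print(f"server: business processing for {gift_id} (e.g. deduction)")
        self.host.on_gift_broadcast(gift_id)      # broadcast the gift-giving information

    def forward_stream(self, packet: dict):
        for viewer in self.viewers:               # forward the live video stream
            viewer.on_stream(packet)

class HostClient:
    def attach(self, server):
        self.server = server

    def on_gift_broadcast(self, gift_id: str):
        # ID1648 is configured as a three-dimensional (AI) gift whose synthesized
        # location information requires face and back recognition
        info = {"face": [[120, 80]], "back": [[90, 160]]}   # stand-in recognition result
        self.server.forward_stream({"frame": "<frame>", "meta": info, "gift": gift_id})

class ViewerClient:
    def send_gift(self, server, gift_id: str):
        server.on_gift_request(gift_id)

    def on_stream(self, packet: dict):
        print("viewer: compositing", packet["gift"], "at back points", packet["meta"]["back"])

host, viewer = HostClient(), ViewerClient()
server = Server(host, [viewer])
host.attach(server)
viewer.send_gift(server, "ID1648")   # angel wings grow behind the anchor
```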
  • in scenario 2, the audience presents a three-dimensional special effect gift "pet bird" to the anchor, and the corresponding identifier is ID1649; the main process can be as follows:
  • the audience client sends a gift-giving request to the server.
  • the audience user Q sends a gift-giving request to the server through the audience client, where the virtual gift is ID1649.
  • the server performs business processing
  • after the server receives the gift-giving request, it performs the corresponding business processing (such as deduction, etc.).
  • the server broadcasts gift-giving information.
  • after receiving the gift-giving information, the host client queries the virtual gift in order to identify the synthesized location information.
  • after the host client receives the broadcast of the gift-giving information, it queries the gift configuration according to the virtual gift ID 1649 and learns that the virtual gift is a three-dimensional special effect gift (such as an AI (Artificial Intelligence) gift).
  • the synthetic location information that needs to be recognized includes the face and the outline of the human body. Then the host client starts to perform face recognition and human contour recognition.
  • the host client packs the synthesized location information into the live video stream for transmission.
  • the host client packs the synthesized location information (which can be AI information) obtained by face recognition and human contour recognition into the live video stream, so that the synthesized location information is transmitted to the server along with the live video stream.
  • the server forwards the live video stream.
  • the server transmits the live video stream containing the synthesized location information to the audience client.
  • the audience client obtains the synthesized location information, and performs virtual gift synthesis and display.
  • the viewer client decodes the synthesized location information from the live video stream, combines the synthesized location information with the virtual gift, and plays the special effect of "pet bird”: the bird flies over the shoulder of the anchor from the area outside the video.
  • FIG. 10 is a schematic structural diagram of a rendering device for virtual gift special effects provided by an embodiment.
  • the rendering device for virtual gift special effects is applied to a client, such as a viewer client.
  • the device 100 for rendering virtual gift special effects may include: an information acquisition module 110, a special effect frame synthesis module 120 and a special effect frame display module 130.
  • the information obtaining module 110 is configured to receive live video stream data and a target special effect gift, and obtain the combined location information of the live video and the target special effect gift from the live video stream data; wherein the combined location information includes the target position at which the target special effect gift, obtained based on the host client's recognition of the live video, is synthesized on the live video;
  • the special effect frame synthesis module 120 is configured to add the target special effect gift to the live video for synthesis according to the synthesis position information to obtain a special effect frame image;
  • the special effect frame rendering module 130 is configured to set a special effect display area on the live broadcast window, and in the process of playing the live video, synchronously render the special effect frame image in the special effect display area.
  • the virtual gift special effect rendering device provided in this embodiment is applied to the client.
  • the information acquisition module 110 obtains the combined location information of the live video and the target special effect gift from the live video stream data by receiving the live video stream data and the target special effect gift;
  • the synthesis position information includes the target position of the target special effect gift synthesized on the live video based on the host client's recognition of the live video;
  • the special effect frame synthesis module 120 adds the target special effect gift to the live video for synthesis according to the synthesized position information and obtains the special effect frame image;
  • the special effect frame display module 130 sets a special effect display area on the live window and, during the process of playing the live video, renders the special effect frame image synchronously in the special effect display area, so that the virtual gift special effect is not limited to the live video playback area of the client and can be rendered and displayed across the video playback area.
  • the area of the special effect display area is greater than or equal to the live video playback area.
  • the special effect frame synthesis module 120 includes: a video frame acquisition unit, an image layer segmentation unit, and an image layer synthesis unit;
  • the video frame acquisition unit is used to obtain the current video frame image of the live video
  • the image layer segmentation unit is used to divide the current video frame image into a foreground image layer and a background image layer, and generate at least one virtual gift special effect layer according to the target special effect gift;
  • an image layer synthesis unit for synthesizing and displaying each virtual gift special effect layer, the foreground image layer and the background image layer in sequence according to the synthesis position information.
  • the image layer synthesis unit may include: a priority determination unit and a special effect frame synthesis unit;
  • the priority determining unit is used to determine the priority of each virtual gift special effect layer, the foreground image layer and the background image layer through the target special effect gift identifier;
  • the special effect frame synthesis unit is used to synthesize each virtual gift special effect layer, the foreground image layer, and the background image layer according to the combined position information in order of priority from high to low, to obtain a special effect frame image.
  • the target special effect gift is a special effect gift in the form of a three-dimensional display.
  • the synthesized position information includes at least one of face information, human contour information, gesture information, and human bone information.
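  • a skeletal sketch of how these modules might be organized in code is given below; the class and method names simply mirror the module names above and are assumptions rather than an actual implementation:

```python
class InformationAcquisitionModule:
    def obtain(self, stream_data: dict):
        """Receive the live video stream data and the target special effect gift, and
        extract the live video frame and the combined location information."""
        return stream_data["video"], stream_data["gift"], stream_data["meta"]

class SpecialEffectFrameSynthesisModule:
    def synthesize(self, video_frame, gift, position_info):
        """Add the target special effect gift to the live video frame at the target
        position to obtain a special effect frame image (placeholder)."""
        return {"frame": video_frame, "gift": gift, "at": position_info}

class SpecialEffectFrameRenderingModule:
    def render(self, effect_frame: dict, effect_area):
        """Render the special effect frame synchronously in the special effect display
        area while the live video plays (placeholder)."""
        print("rendering", effect_frame["gift"], "in area", effect_area)

class VirtualGiftEffectRenderingDevice:
    def __init__(self):
        self.info = InformationAcquisitionModule()
        self.synth = SpecialEffectFrameSynthesisModule()
        self.renderer = SpecialEffectFrameRenderingModule()

    def handle(self, stream_data: dict, effect_area):
        video, gift, meta = self.info.obtain(stream_data)
        self.renderer.render(self.synth.synthesize(video, gift, meta), effect_area)

device = VirtualGiftEffectRenderingDevice()
device.handle({"video": "<frame>", "gift": "angel wings", "meta": {"back": [[90, 160]]}},
              effect_area=(0, 25, 600, 450))
```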
  • FIG. 11 is a schematic diagram of another structure of a virtual gift special effect rendering apparatus provided by an embodiment.
  • the virtual gift special effect rendering apparatus is applied to the server.
  • the device 500 for rendering virtual gift special effects may include: a video stream receiving module 510 and a video stream forwarding module 520.
  • the video stream receiving module 510 is configured to receive live video stream data sent by the host client; wherein, the live video stream data includes the combined location information of the live video and the target special effect gift;
  • the video stream forwarding module 520 is configured to forward the live video stream data to an audience client; wherein the audience client adds the target special effect gift to the live video for synthesis according to the synthesis location information, and obtains A special effect frame image; a special effect display area is set on the live window, and the special effect frame image is synchronously rendered in the special effect display area during the process of playing the live video.
  • the device for rendering virtual gift special effects further includes: a gift instruction receiving module;
  • the gift instruction receiving module is configured to receive a virtual gift giving instruction sent by the viewer client and to send the giving instruction to the host client; wherein the host client obtains the target special effect gift identifier according to the giving instruction, finds the target special effect gift according to the target special effect gift identifier, determines the feature region corresponding to the target special effect gift, and determines the synthesis position information of the target special effect gift on the live video according to the feature region.
  • the rendering device for virtual gift special effects provided above can be used to execute the method for rendering virtual gift special effects provided by any of the foregoing embodiments, and has corresponding functions and beneficial effects.
  • An embodiment of the present application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor.
  • when the processor executes the program, the method for rendering virtual gift special effects according to any of the above embodiments is implemented.
  • the computer device may be a mobile terminal, a tablet computer, a personal computer, or a server.
  • the computer device provided above has corresponding functions and beneficial effects when executing the method for rendering virtual gift special effects provided by any of the above embodiments.
  • An embodiment of the present application also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute a method for rendering virtual gift special effects, including:
  • receiving live video stream data and a target special effect gift, and obtaining the live video and the synthesis position information of the target special effect gift from the live video stream data; wherein the synthesis position information includes the target position at which the target special effect gift is synthesized on the live video, obtained by the host client's recognition of the live video;
  • adding the target special effect gift to the live video for synthesis according to the synthesis position information to obtain a special effect frame image;
  • setting a special effect display area on the live window, and synchronously rendering the special effect frame image in the special effect display area while the live video is playing (an illustrative sketch of this viewer-side flow is given after this list).
  • Alternatively, the computer-executable instructions, when executed by a computer processor, are used to execute a method for rendering virtual gift special effects, including:
  • receiving live video stream data sent by the host client, wherein the live video stream data includes the live video and the synthesis position information of the target special effect gift; forwarding the live video stream data to the viewer client; wherein the viewer client adds the target special effect gift to the live video for synthesis according to the synthesis position information to obtain a special effect frame image, sets a special effect display area on the live window, and synchronously renders the special effect frame image in the special effect display area while the live video is playing.
  • The storage medium containing computer-executable instructions provided by the embodiments of the present application is not limited to the operations of the method for rendering virtual gift special effects described above; its instructions can also execute related operations in the method for rendering virtual gift special effects provided by any embodiment of the present application, and it has corresponding functions and beneficial effects.
  • The technical solution may be embodied as a computer software product stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method for rendering virtual gift special effects described in any embodiment of the present application.
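The viewer-side flow summarized in the storage-medium steps above (receive the stream data, extract the synthesis position information, composite the target special effect gift onto the frame, render the result in the special effect display area) can be illustrated with a minimal sketch. The packet layout, the JSON packaging of the position metadata, and all function and field names below are assumptions made for illustration only; the embodiments do not prescribe a concrete data format.

```python
import json
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class EffectLayer:
    name: str
    anchor_key: str                        # keypoint name in the synthesis position info
    pixels: List[Tuple[int, int]]          # pixel offsets relative to the anchor (illustrative)

def decode_stream_packet(packet: bytes) -> Tuple[dict, dict]:
    """Split one illustrative stream packet into the frame and its synthesis position info."""
    payload = json.loads(packet.decode("utf-8"))
    return payload["frame"], payload["positions"]

def composite(frame: dict, positions: Dict[str, list], layers: List[EffectLayer]) -> dict:
    """Add each virtual gift effect layer at the target position named by the position info."""
    out = dict(frame)
    drawn = []
    for layer in layers:
        ax, ay = positions[layer.anchor_key]
        drawn.extend((ax + dx, ay + dy) for dx, dy in layer.pixels)
    out["effect_pixels"] = drawn
    return out

def render_in_effect_area(effect_frame: dict) -> None:
    """Stand-in for drawing the special effect frame into the special effect display area."""
    print(f"frame {effect_frame['frame_id']}: {len(effect_frame['effect_pixels'])} effect pixels")

# Usage: one packet as the host client might have produced it (illustrative values only).
packet = json.dumps({"frame": {"frame_id": 42},
                     "positions": {"back": [120, 80], "face": [128, 40]}}).encode("utf-8")
frame, positions = decode_stream_packet(packet)
wings = EffectLayer("angel_wings", "back", [(-2, 0), (2, 0)])
render_in_effect_area(composite(frame, positions, [wings]))
```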

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Disclosed are a method and apparatus for rendering a virtual gift special effect, a live streaming system, a device, and a storage medium, belonging to the technical field of live streaming. The rendering method comprises: receiving live video stream data and a target special effect gift, and obtaining, from the live video stream data, the live video and the synthesis position information of the target special effect gift, wherein the synthesis position information includes the target position at which the target special effect gift is synthesized on the live video, obtained by the host client's recognition of the live video; adding the target special effect gift to the live video for synthesis according to the synthesis position information to obtain a special effect frame image; and setting a special effect display area on the live window and, while the live video is playing, synchronously rendering the special effect frame image in the special effect display area. With this technical solution, the virtual gift special effect is not limited to the client's live video playback area and can be rendered and displayed across the video playback area.

Description

Rendering method and apparatus for virtual gift special effects, and live streaming system
This application claims priority to Chinese patent application No. 201910859928.8, filed with the China Patent Office on September 11, 2019 and entitled "Rendering method and apparatus for virtual gift special effects, and live streaming system", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of this application relate to the technical field of live streaming, and in particular to a rendering method and apparatus for virtual gift special effects and a live streaming system, as well as a computer device and a storage medium.
Background
With the development of network technology, real-time video communication such as live streaming and video chat rooms has become an increasingly popular form of entertainment. During real-time video communication, giving gifts and displaying their special effects can increase the interaction between users.
For example, in a live streaming scenario, a host user streams in a live room and viewer users watch the host's stream through viewer clients. To increase the interaction between the host user and the viewer users, a viewer user can select a specific target special effect gift to give to the host; the target special effect gift is added to a specific position of the host's picture according to the corresponding entertainment template, and the corresponding special effect is displayed.
In the existing method for displaying gift special effects, the host client composites the gift special effect into video frames and places the video frames containing the gift special effect into the video stream, which is transmitted to other host clients or viewer clients for display. With such a display method, however, the gift special effect can only be displayed in the client's live video playback area, which limits the display effect of the special effect.
Summary
The purpose of this application is to solve at least one of the above technical defects, in particular the problem that a gift special effect can only be displayed in the client's live video playback area, which limits the display effect of the special effect.
第一方面,本申请实施例提供一种虚拟礼物特效的渲染方法,包括以下步骤:
接收直播视频流数据和目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;其中,所述合成位置信息包括基于主播客户端对所述直播视频进行识别得到的目标特效礼物合成在 所述直播视频上的目标位置;
根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;
在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
第二方面,本申请实施例提供一种虚拟礼物特效的渲染方法,包括以下步骤:
接收主播客户端发送的直播视频流数据;其中,所述直播视频流数据中包括直播视频和目标特效礼物的合成位置信息;
将所述直播视频流数据转发至观众客户端;其中,所述观众客户端根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
第三方面,本申请实施例提供一种虚拟礼物特效的渲染装置,包括:
信息获取模块,用于接收直播视频流数据和目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;其中,所述合成位置信息包括基于主播客户端对所述直播视频进行识别得到的目标特效礼物合成在所述直播视频上的目标位置;
特效帧合成模块,用于根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;
特效帧渲染模块,用于在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
第四方面,本申请实施例提供一种虚拟礼物特效的渲染装置,包括:
视频流接收模块,用于接收主播客户端发送的直播视频流数据;其中,所述直播视频流数据中包括直播视频和目标特效礼物的合成位置信息;
视频流转发模块,用于将所述直播视频流数据转发至观众客户端;其中,所述观众客户端根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
第五方面,本申请实施例提供一种直播系统,包括服务器、主播客户端和观众客户端,所述主播客户端经所述观众服务器与所述客户端通过网络进行通信连接;
所述服务器,用于接收所述观众客户端发送的虚拟礼物的赠送指令,并向所述主播客户端发送所述赠送指令;
所述主播客户端,用于接收所述赠送指令并获取目标特效礼物标识;根据所述目标特效礼物标识查找得到目标特效礼物,确定所述目标特效礼物所对应的特征区域;根据所述特征区域确定所述目标特效礼物在所述直播视频上的合成位置信息;将所述合成位置信息和所述直播视频编码成直播视频流数据发送至服务器;
所述服务器,还用于将所述直播视频流数据转发至所述观众客户端;
所述观众客户端,用于接收所述直播视频流数据和所述目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
第六方面,本申请实施例提供一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现如上述任一实施例所述的虚拟礼物特效的渲染方法的步骤。
第七方面,本申请实施例提供一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行如上述任一实施例所述虚拟礼物特效的渲染方法的步骤。
上述实施例提供的虚拟礼物特效的渲染方法和装置、直播系统、设备以及存储介质,通过接收直播视频流数据和目标特效礼物,从直播视频流数据中获取直播视频和目标特效礼物的合成位置信息;其中,合成位置信息包括基于主播客户端对直播视频进行识别得到的目标特效礼物合成在直播视频上的目标位置;根据合成位置信息将目标特效礼物添加到直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放直播视频的过程中,在特效展示区域同步渲染特效帧图像,以使得虚拟礼物特效并非仅局限于客户端的直播视频播放区域,能够跨视频播放区域 进行渲染和展示。
同时,相对于通过主播客户端或服务器直接将目标特效礼物合成到直播视频,再发送至各观众客户端以在观众客户端的直播视频播放区域播放虚拟礼物特效,该方案利用主播客户端在直播视频外将合成位置信息进行编码封装,在观众客户端解码得到合成位置信息,便于对虚拟礼物的效果展示进行二次编辑,以使得目标特效礼物的特效能够准确添加到直播视频的目标位置,同时实现对目标特效礼物特效层的拆分,使得某些特效层会遮挡视频中的主播人物,某些特效层不遮挡视频中的主播人物,以实现虚拟礼物与主播人物相结合的特效效果,又不影响视频中主播的展示,同时提高了虚拟礼物特效的展示效果。
本申请附加的方面和优点将在下面的描述中部分给出,这些将从下面的描述中变得明显,或通过本申请的实践了解到。
附图说明
本申请上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1是一实施例提供的虚拟礼物特效的渲染方法的系统框架示意图;
图2是一实施例提供的直播系统的结构示意图;
图3是一实施例提供的虚拟礼物特效的渲染方法的流程图;
图4是一种直播技术中虚拟礼物的渲染效果图;
图5是一实施例提供的虚拟礼物渲染的效果图;
图6是一实施例提供的目标特效礼物的合成展示方法的流程图;
图7是一种直播技术中虚拟礼物的合成效果图;
图8是一实施例提供的虚拟礼物特效的渲染方法的另一流程图;
图9是一实施例提供的虚拟礼物赠送过程的时序图;
图10是一实施例提供的虚拟礼物特效的渲染装置的结构示意图;
图11是一实施例提供的虚拟礼物特效的渲染装置的另一结构示意图。
具体实施方式
下面详细描述本申请的实施例,所述实施例的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功 能的元件。下面通过参考附图描述的实施例是示例性的,仅用于解释本申请,而不能解释为对本申请的限制。
本技术领域技术人员可以理解,除非特意声明,这里使用的单数形式“一”、“一个”、“所述”和“该”也可包括复数形式。应该进一步理解的是,本申请的说明书中使用的措辞“包括”是指存在所述特征、整数、步骤、操作、元件和/或组件,但是并不排除存在或添加一个或多个其他特征、整数、步骤、操作、元件、组件和/或它们的组。应该理解,当我们称元件被“连接”或“耦接”到另一元件时,它可以直接连接或耦接到其他元件,或者也可以存在中间元件。此外,这里使用的“连接”或“耦接”可以包括无线连接或无线耦接。这里使用的措辞“和/或”包括一个或更多个相关联的列出项的全部或任一单元和全部组合。
本领域技术人员应当理解,本申请所称的“客户端”、“应用”、“应用程序”以及类似表述的概念,是业内技术人员所公知的相同概念,是指由一系列计算机指令及相关数据资源有机构造的适于电子运行的计算机软件。除非特别指定,这种命名本身不受编程语言种类、级别,也不受其赖以运行的操作系统或平台所限制。理所当然地,此类概念也不受任何形式的终端所限制。
本技术领域技术人员可以理解,除非另外定义,这里使用的所有术语(包括技术术语和科学术语),具有与本申请所属领域中的普通技术人员的一般理解相同的意义。还应该理解的是,诸如通用字典中定义的那些术语,应该被理解为具有与现有技术的上下文中的意义一致的意义,并且除非像这里一样被特定定义,否则不会用理想化或过于正式的含义来解释。
为了更好地阐释本申请的技术方案,下面示出本方案的虚拟礼物特效的渲染方法所可以适用的应用环境。如图1所示,图1是一实施例提供的虚拟礼物特效的渲染方法的系统框架示意图,该系统框架可以包括服务端和客户端。位于服务端上的直播平台中可以包括多个虚拟直播间和服务器等,各个虚拟直播间对应播放不同的直播内容。客户端包括观众客户端和主播客户端,通常而言,主播通过主播客户端进行直播,观众通过观众客户端选择进入某一虚拟直播间观看主播进行直播。观众客户端和主播客户端可以通过安装在终端设备上的直播应用程序(Application,APP)进入直播 平台。
在本实施例中,终端设备可以为智能手机、平板电脑、电子阅读器、台式电脑或笔记本电脑等终端,对此并不做限定。服务器是用于为终端设备提供后台服务的后台服务器,可以用独立服务器或多个服务器组成的服务器集群来实现。
本实施例提供的虚拟礼物特效的渲染方法适用于在直播过程中赠送虚拟礼物并对虚拟礼物特效进行展示的情况,可以是观众通过观众客户端向目标主播赠送虚拟礼物,以在目标主播所处主播客户端和多个观众客户端展示虚拟礼物的特效,也可以是主播通过主播客户端向另一主播赠送虚拟礼物,以在赠送虚拟礼物的主播和接收虚拟礼物的主播所处的主播客户端以及多个观众客户端展示虚拟礼物的特效等。
下面以观众客户端向目标主播赠送虚拟特效礼物,在观众客户端渲染出虚拟礼物特效为例,对本方案进行示例性说明。
图2是一实施例提供的直播系统的结构示意图,如图2所示,该直播系统200包括:主播客户端210、观众客户端230和服务器220。主播客户端210经服务器220与观众客户端230通过网络进行通信连接。
在本实施例中,主播客户端可以是安装于计算机电脑上的主播客户端,也可以是安装于移动终端,如手机或平板电脑上的主播客户端;同理,观众客户端可以是安装于计算机电脑上的观众客户端,也可以是安装于移动终端,如手机或平板电脑上的观众客户端。
所述服务器220,用于接收所述观众客户端230发送的虚拟礼物的赠送指令,并向主播客户端210发送所述赠送指令;
所述主播客户端210,用于接收所述赠送指令并获取目标特效礼物标识;根据所述目标特效礼物标识查找得到目标特效礼物,确定所述目标特效礼物所对应的特征区域;根据所述特征区域确定所述目标特效礼物在所述直播视频上的合成位置信息;将所述合成位置信息和所述直播视频编码成直播视频流数据发送至服务器;
所述服务器220,还用于将所述直播视频流数据转发至所述观众客户端230;
所述观众客户端230,用于接收直播视频流数据和目标特效礼物,从 所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
图3是一实施例提供的虚拟礼物特效的渲染方法的流程图,该虚拟礼物特效的渲染方法执行于客户端,如观众客户端,本实施例以观众客户端为例进行说明。
具体的,如图3所示,该虚拟礼物特效的渲染方法可以包括以下步骤:
S110、接收直播视频流数据和目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息。
其中,所述合成位置信息包括基于主播客户端对所述直播视频进行识别得到的目标特效礼物合成在所述直播视频上的目标位置。
在实施例中,用户通过观众客户端向服务器发送虚拟礼物的赠送指令。主播客户端接收服务器转发的赠送指令,获取直播视频和目标特效礼物所对应的特征区域。可选的,特征区域可以是主播客户端根据赠送指令进行识别得到的,也可以是服务器接收到赠送指令进行识别得到再转发给主播客户端。本实施例以主播客户端根据赠送指令识别目标特效礼物所对应的特征区域为例进行说明。
主播客户端接收到观众客户端发送的虚拟礼物的赠送指令时,获取目标主播所在直播间的直播视频,从该直播视频中提取当前视频帧图像,根据目标特效礼物对当前视频帧图像进行处理,以提取出用于对目标特效礼物进行合成的相关信息,如目标特效礼物的特征区域在当前视频帧图像的合成位置信息。根据合成位置信息,可以将目标特效礼物合成到当前视频帧图像的目标位置,其中,目标特效礼物的特征区域与当前视频帧图像的目标位置一一对应。
可选的,合成位置信息可以包括:人脸信息、人体轮廓信息、手势信息和人体骨骼信息中的至少一者。在实施例中,合成位置信息可以由一个或多个人物轮廓关键点来表示,其中,每个人物轮廓关键点在当前视频帧图像中有唯一的坐标值,根据人物轮廓关键点的一个或多个坐标值可以得到目标特效礼物添加在当前视频帧图像的目标位置。
不同人物轮廓关键点的集合对应不同的人体信息。例如,识别出当前视频帧图像的人脸部位,提取人脸部位的轮廓关键点,在一实施例中,人脸信息可以包括106个轮廓关键点,每个轮廓关键点对应人脸的某一部位,每个轮廓关键点对应唯一的坐标值,表示该轮廓关键点在当前视频帧图像中的位置。同理,身体轮廓包括59个轮廓关键点,每个轮廓关键点对应人身体各部位的边缘轮廓,人体骨骼包括22个轮廓关键点,每个轮廓关键点对应人体骨骼关节点,每个轮廓关键点的坐标值表示在当前视频帧图像中的位置。
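The preceding paragraphs describe the synthesis position information as sets of contour keypoints (106 face contour points, 59 body contour points, 22 skeleton points), each with a unique coordinate in the current video frame, together with a correspondence between a gift's feature region and a subset of those points. A minimal sketch of such a container follows; the field names and the region-to-index mapping are assumptions made for illustration and are not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point = Tuple[int, int]  # (x, y) coordinate in the current video frame

@dataclass
class SynthesisPositionInfo:
    """Keypoint sets as described above: 106 face contour points, 59 body contour
    points, and 22 skeleton points, each with a coordinate in the current frame."""
    face: List[Point] = field(default_factory=list)
    body_contour: List[Point] = field(default_factory=list)
    skeleton: List[Point] = field(default_factory=list)

    def target_points(self, feature_region: str) -> List[Point]:
        """Return the keypoints corresponding to one feature region of a gift.

        The region-to-index mapping below is assumed purely for illustration; the
        description only states that such a correspondence exists.
        """
        region_to_indices: Dict[str, Tuple[str, range]] = {
            "face": ("face", range(0, 106)),
            "back": ("body_contour", range(20, 30)),   # assumed index range
        }
        source, indices = region_to_indices[feature_region]
        points = getattr(self, source)
        return [points[i] for i in indices if i < len(points)]

# Usage with a few illustrative body contour points.
info = SynthesisPositionInfo(body_contour=[(10 * i, 5 * i) for i in range(59)])
print(info.target_points("back")[:3])    # -> [(200, 100), (210, 105), (220, 110)]
```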
其中,目标特效礼物对应的特征区域与在当前视频帧图像的目标位置相对应。例如,目标特效礼物“天使翅膀”的特征区域为“背部”,从提取出来的人物轮廓关键点识别出属于“背部”特征的轮廓关键点,并确定为目标轮廓点,根据这些目标轮廓点在当前视频帧图像上的坐标值,确定目标特效礼物所在当前视频帧图像上进行合成的目标位置,其中,目标位置可以是目标轮廓点的坐标值的集合,也可以是目标轮廓点连线所形成的区域。
进一步的,主播观众端识别到合成位置信息后,将合成位置信息与直播视频进行编码打包形成直播视频流数据,使得合成位置信息能够跟随直播视频一起经服务器转发到观众客户端。
观众客户端接收到直播视频流数据后进行解码,得到合成位置信息和直播视频,并从直播视频中获取当前视频帧图像。需要说明的是,主播客户端用于识别合成位置信息所对应的当前视频帧图像和观众客户端从直播视频中获取的当前视频帧图像是同一帧图像,其展示在主播客户端和观众客户端的分辨率、尺寸和颜色等可以不同。
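As described above, the host client packages the recognized synthesis position information together with the live video into the stream data so that it travels with the frames through the server, and the viewer client decodes it back out. The description does not prescribe a container format (in practice, codecs such as H.264/HEVC can carry per-frame metadata in SEI messages); the sketch below simply length-prefixes a JSON metadata blob ahead of the encoded frame bytes, and the framing and all names are assumptions.

```python
import json
import struct
from typing import Tuple

def pack_frame(encoded_frame: bytes, positions: dict) -> bytes:
    """Host side: prepend a length-prefixed JSON metadata blob to the encoded frame bytes."""
    meta = json.dumps(positions).encode("utf-8")
    return struct.pack(">I", len(meta)) + meta + encoded_frame

def unpack_frame(packet: bytes) -> Tuple[dict, bytes]:
    """Viewer side: split one packet back into the position metadata and the frame bytes."""
    (meta_len,) = struct.unpack_from(">I", packet, 0)
    meta = json.loads(packet[4:4 + meta_len].decode("utf-8"))
    return meta, packet[4 + meta_len:]

# Round trip with illustrative values: the metadata follows the frame through the server.
packet = pack_frame(b"\x00\x01fake-frame-bytes",
                    {"face": [[128, 40]], "back": [[120, 80]]})
meta, frame_bytes = unpack_frame(packet)
assert meta["back"] == [[120, 80]] and frame_bytes.startswith(b"\x00\x01")
```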
其中,目标特效礼物可以是二维显示形式的特效礼物,也可以是三维显示形式的特效礼物,即三维特效礼物。在本实施例中,该目标特效礼物优选为三维特效礼物,通过三维特效礼物营造出立体特效,增强现实感受,提高虚拟礼物特效的渲染效果。
S120、根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像。
观众客户端获取到合成位置信息,根据合成位置信息确定目标特效礼 物在直播视频的当前视频帧图像的目标位置,将目标特效礼物添加到该目标位置上与当前视频帧图像进行合成,得到特效帧图像。其中,当前视频帧图像可以是一帧视频帧图像,也可以是多帧视频帧图像。
在实施例中,当前视频帧图像可以分割为前景图像层和背景图像层,目标特效礼物可以包括一个或多个虚拟礼物特效层,在实施例中,可以对目标特效礼物进行拆分,生成目标特效礼物所对应的一个或多个虚拟礼物特效层,根据合成位置信息确定每个虚拟礼物特效层在前景图像层或背景图像层上的目标位置,将各虚拟礼物特效层、前景图像层和背景图像层进行合成,得到特效帧图像。
在一实施例中,根据合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照优先级顺序进行合成并展示。
S130、在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
直播窗口是指直播应用处于开启状态下对应的窗口,处于最大化状态是可以占据整个终端设备屏幕。在本实施例中,在直播窗口上设置特效展示区域,特效展示区域设置于直播视频播放区域之上,特效展示区域大于直播视频播放区域,使得目标特效礼物所对应的特效能够能放大渲染出来,提高特效展示的效果。其中,直播视频播放区域是指用于播放直播视频的区域。
观众客户端获取到合成位置信息后,结合当前观众客户端进行特效展示区域的大小对合成位置信息进行相对应的换算,根据换算后的合成位置信息确定目标特效礼物在当前视频帧图像上的目标位置,将目标特效礼物添加到目标位置上进行合成。
例如,主播客户端识别到当前视频帧图像的分辨率大小为400*300,所得到的合成位置信息中目标轮廓点A的坐标值为(50,50),而观众客户端进行展示的同一当前视频帧图像的分辨率大小为800*600,相对应地对合成位置信息进行换算,得到当前目标轮廓点A’的坐标值为(100,100)。将目标特效礼物添加到经换算后的合成位置信息所确定的目标位置上进行合成。需要说明的是,同一当前视频帧图像是指视频帧图像的内容相同,其余特征如分辨率、图像尺寸等可以不同。
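The resolution conversion in the example above, where keypoint A recognized at (50, 50) in a 400*300 frame on the host client maps to A' at (100, 100) in the 800*600 rendering of the same frame on the viewer client, amounts to proportional scaling. A small sketch (the function name is assumed):

```python
from typing import Tuple

def scale_position(point: Tuple[int, int],
                   source_size: Tuple[int, int],
                   target_size: Tuple[int, int]) -> Tuple[int, int]:
    """Scale a keypoint from the host-side recognition resolution to the viewer-side
    display resolution of the same video frame."""
    x, y = point
    sw, sh = source_size
    tw, th = target_size
    return round(x * tw / sw), round(y * th / sh)

# The example from the description: 400*300 -> 800*600 maps point A (50, 50) to A' (100, 100).
assert scale_position((50, 50), (400, 300), (800, 600)) == (100, 100)
```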
在本实施例中,图像帧图像的特效展示和直播视频播放占用不同的线程,以使得其中一线程在播放直播视频过程中,另一线程能够将特效帧图像同步渲染到特效展示区域,以达到视频播放和特效展示同步进行,提高了特效的展示效果。
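A hedged sketch of the two-thread arrangement described above, in which one thread plays the live video while another synchronously renders the corresponding special effect frame into the special effect display area; the queue-based hand-off and all names are illustrative assumptions rather than the disclosed implementation.

```python
import queue
import threading
import time

frame_queue = queue.Queue()  # hands each played frame id to the effect-rendering thread

def play_live_video():
    """Playback thread: plays the live video and signals the effect thread per frame."""
    for frame_id in range(3):
        print(f"playing frame {frame_id} in the live video playback area")
        frame_queue.put(frame_id)
        time.sleep(0.03)          # stand-in for the real frame interval
    frame_queue.put(None)         # end-of-stream marker

def render_effects():
    """Effect thread: renders the matching special effect frame in the effect display area."""
    while (frame_id := frame_queue.get()) is not None:
        print(f"rendering special effect frame for frame {frame_id} in the effect display area")

playback = threading.Thread(target=play_live_video)
effects = threading.Thread(target=render_effects)
playback.start()
effects.start()
playback.join()
effects.join()
```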
需要说明的是,将特效礼物所对应的特效层中被主播人物遮挡的区域透明化,以使得跨直播视频播放区域的特效展示而不影响直播视频播放区域正常的视频播放。
如图4所示,图4是一种直播技术中虚拟礼物的渲染效果图,在该技术中,尤其是在AR(Augmented Reality,增强现实)虚拟特效礼物展示过程中,AR虚拟礼物只能现实在直播视频播放区域,展示效果不佳;而采用本申请的技术后,可以跨直播视频播放区域展示,可以得到更好的特效展示效果。
图5是一实施例提供的虚拟礼物渲染的效果图,如图5所示,在直播视频播放区域设置特效展示区域,在特效展示区域的面积大于直播视频播放区域,虚拟礼物特效渲染到特效展示区域,以使得虚拟礼物特效能够跨直播视频区域进行展示,如图5所示中的虚拟特效礼物中的“天使翅膀”,得到更好的特效展示效果。
本实施例提供的虚拟礼物特效的渲染方法,通过接收直播视频流数据和目标特效礼物,从直播视频流数据中获取直播视频和目标特效礼物的合成位置信息;其中,合成位置信息包括基于主播客户端对直播视频进行识别得到的目标特效礼物合成在直播视频上的目标位置;根据合成位置信息将目标特效礼物添加到直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放直播视频的过程中,在特效展示区域同步渲染特效帧图像,以使得虚拟礼物特效并非仅局限于客户端的直播视频播放区域,能够跨视频播放区域进行渲染和展示,提高了虚拟礼物特效的展示效果。
同时,相对于通过主播客户端或服务器直接将目标特效礼物合成到直播视频,再发送至各观众客户端以在观众客户端的视频区域播放虚拟礼物特效,该方案利用主播客户端在直播视频外将合成位置信息进行编码封装,在观众客户端解码得到合成位置信息,便于对虚拟礼物的效果展示进行二 次编辑。
为了使本技术方案更为清晰,更为便于理解,下面对本技术方案中的各个步骤的具体的实现过程和方式加以详细的描述。
图6是一实施例提供的目标特效礼物的合成展示方法的流程图,如图6所示,在一实施例中,步骤S120中的根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像,可以包括以下步骤:
S1201、获取所述直播视频的当前视频帧图像。
其中,当前视频帧图像可以一帧视频帧图像,也可以是多帧视频帧图像。
S1202、将所述当前视频帧图像分割为前景图像层和背景图像层,并根据目标特效礼物生成至少一个虚拟礼物特效层。
对当前视频帧图像进行背景分割处理。可以利用现有算法对当前视频帧图像的各像素值进行对比,将当前视频帧图像分割为前景区域和背景区域,如将像素值大于某一阈值的像素点的集合所对应的区域作为前景区域,将像素值小于某一阈值的像素点的集合所对应的区域作为背景区域。在实施例中,前景区域和背景区域分别位于不同的图像层,其中,前景区域所在的图像层为前景图像层,背景区域所在的图像层为背景图像层。
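The paragraph above notes that existing algorithms may be used and illustrates the split by comparing pixel values against a threshold (pixels above the threshold form the foreground region). The sketch below implements only that illustrative comparison on a toy grayscale frame; it is not the segmentation algorithm of any particular embodiment.

```python
from typing import List, Tuple

Pixel = Tuple[int, int]

def split_layers(frame: List[List[int]], threshold: int) -> Tuple[List[Pixel], List[Pixel]]:
    """Split a grayscale frame into foreground and background pixel sets by comparing
    each pixel value against a threshold (values above the threshold -> foreground)."""
    foreground, background = [], []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            (foreground if value > threshold else background).append((x, y))
    return foreground, background

# Tiny illustrative frame: the bright column stands in for the host figure.
frame = [[10, 200, 10],
         [10, 220, 10]]
fg, bg = split_layers(frame, threshold=128)
assert fg == [(1, 0), (1, 1)] and len(bg) == 4
```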
在一实施例中,前景图像层可以包括直播视频中的主播人物区域,背景图像层可以包括直播视频中除主播人物区域外的背景区域。另外,在实施例中,可以对目标特效礼物进行拆分,生成目标特效礼物所对应的一个或多个虚拟礼物特效层,例如,“面具”礼物只有一个虚拟礼物特效层,“雪花”礼物可以包括多个虚拟礼物特效层,如第一片雪花在虚拟礼物特效层A,第二片雪花在虚拟礼物特效层B,第三片和第四片雪花在虚拟礼物特效层C等。
观众客户端获取到当前视频帧图像的前景图像层、背景图像层以及目标特效礼物所对应的一个或多个虚拟礼物特效层。可选的,可以将其进行相应的处理并缓存起来。
S1203、根据所述合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照顺序进行合成并展示。
示例性的,合成位置信息包括人物轮廓关键点的位置A(50,50),B(55,60),C(70,100),各图层包括前景图像层a、背景图像层b、虚拟礼物特效层c、虚拟礼物特效层d和虚拟礼物特效层e,各图层的合成顺序为b、c、a、d和e,且c对应A的位置,d对应B的位置,e对应C的位置。
首先,将背景图像层b设置于底层,随后根据A位置将虚拟礼物特效层c与前景图像层a进行合成,接着根据B位置将虚拟礼物特效层d进行合成,最后根据C位置将虚拟礼物特效层e进行合成,以使得目标特效礼物中各部分均添加在当前视频帧图像所对应的目标位置上后,对合成目标特效礼物的当前视频帧图像进行展示。
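The compositing order in the example above (background layer b at the bottom, then effect layer c at position A, the foreground layer a, effect layer d at position B, and effect layer e at position C) can be sketched as painting layers bottom-to-top so that later layers cover earlier ones at their anchor points. The canvas representation and names below are simplifications assumed for illustration; real compositing would blend whole pixel regions rather than single anchor points.

```python
from typing import Dict, List, Tuple

Canvas = Dict[Tuple[int, int], str]

def composite_layers(order: List[Tuple[str, Tuple[int, int]]]) -> Canvas:
    """Paint layers bottom-to-top; a later (upper) layer overwrites earlier layers at its
    anchor point. This stands in for real per-pixel blending of whole layer regions."""
    canvas: Canvas = {}
    for layer_name, (x, y) in order:
        canvas[(x, y)] = layer_name
    return canvas

# The example from the description: order b, c, a, d, e, with effect layer c anchored at
# A (50, 50), d at B (55, 60), e at C (70, 100); b and a are full-frame layers at the origin.
order = [("background_b", (0, 0)), ("effect_c", (50, 50)), ("foreground_a", (0, 0)),
         ("effect_d", (55, 60)), ("effect_e", (70, 100))]
canvas = composite_layers(order)
assert canvas[(0, 0)] == "foreground_a"     # the foreground covers the background layer
assert canvas[(50, 50)] == "effect_c"       # effect layer c remains visible at position A
```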
如图7所示,图7是一种直播技术中虚拟礼物的合成效果图,在该技术中,尤其是在大特效礼物展示过程中,虚拟礼物直接添加到直播视频上,使得虚拟礼物和直播视频画面重叠,遮挡主播人物,影响用户观看;而采用本申请的技术后,可以避免遮挡主播人物,可以得到更好的特效展示效果。
继续参考图5,如图5所示,根据主播的人体背部轮廓信息,将主播人物所在前景图像层设置在“天使翅膀”所在特效层的上面,遮挡“天使翅膀”的设定区域,达到将“天使翅膀”添加到主播人物的背部的效果,根据主播的脸部轮廓信息,将“面具”所在特效层设置在主播人物所在前景图像层之上,遮挡主播人物的脸部设定区域,达到将“面具”添加到主播人物眼睛上的效果,使得目标特效礼物能够根据人体轮廓特征合成到直播视频的当前视频帧图像的目标位置,得到更好的特效展示效果。
进一步的,步骤S1203中的根据所述合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照顺序进行合成,可以包括以下步骤:
S201、通过目标特效礼物标识确定各所述虚拟礼物特效层与所述前景图像层和背景图像层的优先级。
在实施例中,预先设定目标特效礼物中各虚拟礼物特效层与前景图像层和背景图像层的优先级,在对虚拟礼物特效进行合成时,按照该优先级由高到低或由低到高依次进行合成。
可选的,目标特效礼物的标识携带有目标特效礼物所对应的各虚拟礼 物特效层与前景图像层和背景图像层之间的合成顺序。合成位置信息可以对应于一个或多个虚拟礼物特效层合成在前景图像层或背景图像层上的目标位置。
S202、根据所述合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照所述优先级由高到低进行合成,得到特效帧图像。
示例性的,目标特效礼物的标识为01,对应的虚拟礼物有天使翅膀、羽毛001、羽毛002等。相对应的,天使翅膀、羽毛001、羽毛002和主播(即前景图像层)所在特效层分别为特效层A、特效层B、特效层C和特效层D。为了便于说明和解释,前景图像层和背景图像层可以理解为特效层。
该目标特效礼物所对应的特效是:天使翅膀添加在主播的后背;羽毛001添加到主播的手臂,遮挡主播手臂相对应区域;羽毛002位于主播的肩膀,且一半被主播遮挡,另一半未被主播遮挡。
相对应的,预配置各特效层的优先级,在本实施例中,特效层的优先级越高其位置越靠近底层,其中,各特效层的优先级由高到低为:特效层C、特效层A、特效层D和特效层B。按照优先级由高到低依次合成,即:将主播所在特效层D设于羽毛001所对应的特效层C和天使翅膀所对应的特效层A之上,以使得产生主播后背产生天使翅膀、且主播遮挡一般羽毛002的效果,之后,将羽毛001所对应的特效层合成,以产生羽毛001遮挡主播手臂的效果。
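A minimal sketch of the priority configuration described above, assuming the gift identifier "01" maps to the layer order of the angel-wings example (priority from high to low: layer C, layer A, layer D, layer B, where a higher priority sits closer to the bottom). The lookup table and names are assumptions made for illustration.

```python
from typing import Dict, List

# Assumed configuration: the target special effect gift identifier ("01") carries the
# priority order of its effect layers and of the host foreground layer. A higher priority
# means closer to the bottom, so painting from high to low priority reproduces the example:
# the host covers the angel wings, while feather001 ends up covering the host's arm.
GIFT_LAYER_PRIORITY: Dict[str, List[str]] = {
    "01": ["feather002_C", "angel_wings_A", "host_foreground_D", "feather001_B"],
}

def painting_order(gift_id: str) -> List[str]:
    """Return the bottom-to-top painting order derived from the gift identifier."""
    return list(GIFT_LAYER_PRIORITY[gift_id])

stack = painting_order("01")
assert stack.index("host_foreground_D") > stack.index("angel_wings_A")   # host covers wings
assert stack.index("feather001_B") > stack.index("host_foreground_D")    # feather001 covers host
```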
还需要说明的是,在各个特效层中,在物象之外的区域呈透明或半透明状态,如在天使翅膀的特效层中,除了天使翅膀之外的区域呈透明状态,以使得位于天使翅膀的特效层之下的其他特效物象能够透过该区域显示出来。
本实施例提供的虚拟礼物特效的渲染方法,通过观众客户端从直播视频流数据中获取直播视频和目标特效礼物的合成位置信息;将直播视频分割为前景图像层和背景图像层,并根据目标特效礼物生成至少一个虚拟礼物特效层;根据合成位置信息将各虚拟礼物特效层与前景图像层和背景图像层按照顺序进行合成并展示,实现了根据人物轮廓等所得到的合成位置信息将目标特效礼物合成到既定的目标位置上,让目标特效礼物的某些特效层会遮挡视频中的主播人物,某些特效层不遮挡视频中的主播人物,以 实现虚拟礼物与主播人物相结合的特效效果,又不影响视频中主播的展示,同时提高了虚拟礼物特效的展示效果。
图8是一实施例提供的虚拟礼物特效的渲染方法的另一流程图,该虚拟礼物特效的渲染方法应用于服务端,可以由服务器来执行。
具体的,如图8所示,该虚拟礼物特效的渲染方法可以包括以下步骤:
S510、接收主播客户端发送的直播视频流数据。
其中,所述直播视频流数据中包括直播视频和目标特效礼物的合成位置信息。
服务器接收到虚拟礼物的赠送指令,将该赠送指令转发至主播客户端后,获取主播客户端发送的直播视频流数据。其中,直播视频流数据是由主播观众端识别到合成位置信息后,将合成位置信息与直播视频进行编码打包形成,以使得合成位置信息能够跟随直播视频一起发送至服务器。
S520、将所述直播视频流数据转发至观众客户端。
其中,所述观众客户端根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
观众客户端获取到合成位置信息,根据合成位置信息确定目标特效礼物在直播视频的当前视频帧图像的目标位置,将目标特效礼物添加到该目标位置上与当前视频帧图像进行合成,得到特效帧图像。其中,当前视频帧图像可以是一帧视频帧图像,也可以是多帧视频帧图像。
在实施例中,当前视频帧图像可以分割为前景图像层和背景图像层,目标特效礼物可以包括一个或多个虚拟礼物特效层,在实施例中,可以对目标特效礼物进行拆分,生成目标特效礼物所对应的一个或多个虚拟礼物特效层,根据合成位置信息确定每个虚拟礼物特效层在前景图像层或背景图像层上的目标位置,将各虚拟礼物特效层、前景图像层和背景图像层进行合成,得到特效帧图像。
在一实施例中,根据合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照优先级顺序进行合成并展示。
进一步的,可以在直播窗口上设置特效展示区域,特效展示区域设置 于直播视频播放区域之上,特效展示区域大于直播视频播放区域,使得目标特效礼物所对应的特效能够能放大渲染出来,提高特效展示的效果。其中,直播窗口是指直播应用处于开启状态下对应的窗口,处于最大化状态是可以占据整个终端设备屏幕。
在本实施例中,图像帧图像的特效展示和直播视频播放占用不同的线程,以使得其中一线程在播放直播视频过程中,另一线程能够将特效帧图像同步渲染到特效展示区域,以达到视频播放和特效展示同步进行,提高了特效的展示效果。
需要说明的是,将特效礼物所对应的特效层中被主播人物遮挡的区域透明化,以使得跨直播视频播放区域的特效展示而不影响直播视频播放区域正常的视频播放。
本实施例提供的虚拟礼物特效的渲染方法,应用于服务端,通过接收主播客户端发送的直播视频流数据;其中,直播视频流数据中包括直播视频和目标特效礼物的合成位置信息;将直播视频流数据转发至观众客户端;其中,观众客户端根据合成位置信息将目标特效礼物添加到直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放直播视频的过程中,在特效展示区域同步渲染特效帧图像,以使得虚拟礼物特效并非仅局限于客户端的直播视频播放区域,能够跨视频播放区域进行渲染和展示。
在一实施例中,在步骤S510的接收主播客户端发送的直播视频流数据之前,还可以包括以下步骤:
S500、接收观众客户端发送的虚拟礼物的赠送指令,并向主播客户端发送所述赠送指令。
其中,所述主播客户端根据所述赠送指令获取目标特效礼物标识;根据所述目标特效礼物标识查找得到目标特效礼物,确定所述目标特效礼物所对应的特征区域;根据所述特征区域确定所述目标特效礼物在所述直播视频上的合成位置信息。
在本实施例中,用户通过观众客户端向服务器发送虚拟礼物的赠送指令,主播客户端接收到经服务器转发的虚拟礼物的赠送指令,获取目标主播所在直播间的直播视频,从该直播视频中提取当前视频帧图像,同时根 据获取到的目标特效礼物表示查找目标特效礼物,确定其特征区域。主播客户端根据目标特效礼物对当前视频帧图像进行处理,在当前视频帧图像上识别出用于对目标特效礼物进行合成的相关信息,如目标特效礼物的特征区域在当前视频帧图像的合成位置信息,其中,合成位置信息用于将目标特效礼物合成到当前视频帧图像的目标位置,其中,目标特效礼物的特征区域与当前视频帧图像的目标位置一一对应。
为了更清楚地阐释本申请的技术方案,将结合若干场景下的示例,进一步进行说明。
场景一:参考图9,图9是一实施例提供的虚拟礼物赠送过程的时序图;该示例中,观众向主播赠送三维特效礼物“天使翅膀”,对应标识为ID1648,则其主要流程可以如下:
S11、观众客户端发出送礼请求到服务器。
观众用户W通过观众客户端发出送礼请求到服务器,其中,虚拟礼物为ID1648。
S12、服务器进行业务处理。
服务器收到送礼请求后,做相应的业务处理(如扣费等)。
S13、服务器广播送礼信息。
将观众用户W向主播赠送ID1648礼物的送礼信息广播到频道内的所有用户,包括主播客户端和观众客户端。
S14、主播客户端收到送礼信息后,查询该虚拟礼物,识别合成位置信息。
主播客户端收到送礼信息的广播后,根据虚拟礼物ID1648,查询礼物配置得到该虚拟礼物为三维特效礼物(如AI(Artificial Intelligence)礼物),需要识别的合成位置信息包括人脸和背部,则主播客户端启动进行人脸识别和背景分割识别。
S15、主播客户端将合成位置信息打包到直播视频流进行传输。
主播客户端将人脸识别和背景分割识别所得到的合成位置信息(可以为AI信息)打包到直播视频流中,使得合成位置信息跟随直播视频流一起传输到服务器。
S16、服务器转发该直播视频流。
服务器将包含有合成位置信息的直播视频流传输至观众客户端。
S17、观众客户端获取合成位置信息,进行虚拟礼物合成并展示。
观众客户端从直播视频流中解码得到合成位置信息,将合成位置信息与虚拟礼物相结合,播放出天使翅膀特效:主播背后生长出天使翅膀。
场景二:观众向主播赠送三维特效礼物“宠物小鸟”,对应标识为ID1649,则其主要流程可以如下:
S21、观众客户端发出送礼请求到服务器。
观众用户Q通过观众客户端发出送礼请求到服务器,其中,虚拟礼物为ID1649。
S22、服务器进行业务处理;
服务器收到送礼请求后,做相应的业务处理(如扣费等)。
S23、服务器广播送礼信息。
将观众用户Q向主播赠送ID1649礼物的送礼信息广播到频道内的所有用户,包括主播客户端和观众客户端。
S24、主播客户端收到送礼信息后,查询该虚拟礼物,识别合成位置信息。
主播客户端收到送礼信息的广播后,根据虚拟礼物ID1649,查询礼物配置得到该虚拟礼物为三维特效礼物(如AI(Artificial Intelligence)礼物),需要识别的合成位置信息包括人脸和人体轮廓,则主播客户端启动进行人脸识别和人体轮廓识别。
S25、主播客户端将合成位置信息打包到直播视频流进行传输。
主播客户端将人脸识别和人体轮廓识别所得到的合成位置信息(可以为AI信息)打包到直播视频流中,使得合成位置信息跟随直播视频流一起传输到服务器。
S26、服务器转发该直播视频流。
服务器将包含有合成位置信息的直播视频流传输至观众客户端。
S27、观众客户端获取合成位置信息,进行虚拟礼物合成并展示。
观众客户端从直播视频流中解码得到合成位置信息,将合成位置信息与虚拟礼物相结合,播放出“宠物小鸟”特效:小鸟从视频外区域飞到主播肩膀上。
以上示例仅用于辅助阐述本申请技术方案,其涉及的图示内容及具体流程不构成对本申请技术方案的使用场景的限定。
下面对虚拟礼物特效的渲染装置的相关实施例进行详细阐述。
图10是一实施例提供的虚拟礼物特效的渲染装置的结构示意图,该虚拟礼物特效的渲染装置应用于客户端,如观众客户端。如图10所示,该虚拟礼物特效的渲染装置100可以包括:信息获取模块110、特效帧合成模块120和特效帧展示模块130。
其中,信息获取模块110,用于接收直播视频流数据和目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;其中,所述合成位置信息包括基于主播客户端对所述直播视频进行识别得到的目标特效礼物合成在所述直播视频上的目标位置;
特效帧合成模块120,用于根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;
特效帧渲染模块130,用于在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
本实施例提供的虚拟礼物特效的渲染装置,应用于客户端,信息获取模块110通过接收直播视频流数据和目标特效礼物,从直播视频流数据中获取直播视频和目标特效礼物的合成位置信息;其中,合成位置信息包括基于主播客户端对直播视频进行识别得到的目标特效礼物合成在直播视频上的目标位置;特效帧合成模块120根据合成位置信息将目标特效礼物添加到直播视频进行合成,得到特效帧图像;特效帧展示模块130在直播窗口上设置展示特效展示区域,在播放直播视频的过程中,在特效展示区域同步渲染特效帧图像,以使得虚拟礼物特效并非仅局限于客户端的直播视频播放区域,能够跨视频播放区域进行渲染和展示。
在一实施例中,所述特效展示区域的面积大于或等于直播视频播放区域。
在一实施例中,特效帧合成模块120包括:视频帧获取单元、图像层分割单元以及图像层合成单元;
其中,视频帧获取单元,用于获取所述直播视频的当前视频帧图像; 图像层分割单元,用于将所述前视频帧图像分割为前景图像层和背景图像层,并根据目标特效礼物生成至少一个虚拟礼物特效层;图像层合成单元,用于根据所述合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照顺序进行合成并展示。
在一实施例中,图像层合成单元可以包括:优先级确定单元和特效帧合成单元;
其中,优先级确定单元,用于通过目标特效礼物标识确定各所述虚拟礼物特效层与所述前景图像层和背景图像层的优先级;特效帧合成单元,用于根据所述合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照所述优先级由高到低进行合成,得到特效帧图像。
在一实施例中,所述目标特效礼物为三维显示形式的特效礼物。
在一实施例中,所述合成位置信息包括:人脸信息、人体轮廓信息、手势信息和人体骨骼信息中的至少一者。
图11是一实施例提供的虚拟礼物特效的渲染装置的另一结构示意图,该虚拟礼物特效的渲染装置应用于服务端,如服务器。如图11所示,该虚拟礼物特效的渲染装置500可以包括:视频流接收模块510和视频流转发模块520。
其中,视频流接收模块510,用于接收主播客户端发送的直播视频流数据;其中,所述直播视频流数据中包括直播视频和目标特效礼物的合成位置信息;
视频流转发模块520,用于将所述直播视频流数据转发至观众客户端;其中,所述观众客户端根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
在一实施例中,虚拟礼物特效的渲染装置,还包括:赠送指令接收模块;
其中,赠送指令接收模块,用于接收观众客户端发送的虚拟礼物的赠送指令,并向主播客户端发送所述赠送指令;其中,所述主播客户端根据所述赠送指令获取目标特效礼物标识;根据所述目标特效礼物标识查找得 到目标特效礼物,确定所述目标特效礼物所对应的特征区域;根据所述特征区域确定所述目标特效礼物在所述直播视频上的合成位置信息。
上述提供的虚拟礼物特效的渲染装置可用于执行上述任意实施例提供的虚拟礼物特效的渲染方法,具备相应的功能和有益效果。
本申请实施例还提供一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时,实现如上述任一实施例中的虚拟礼物特效的渲染方法。
可选的,该计算机设备可以为移动终端、平板电脑、计算机电脑或服务器等。上述提供的计算机设备执行上述任一实施例提供的虚拟礼物特效的渲染方法时,具有相应的功能和有益效果。
本申请实施例还提供一种包含计算机可执行指令的存储介质,所述计算机可执行指令在由计算机处理器执行时用于执行一种虚拟礼物特效的渲染方法,包括:
接收直播视频流数据和目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;其中,所述合成位置信息包括基于主播客户端对所述直播视频进行识别得到的目标特效礼物合成在所述直播视频上的目标位置;
根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;
在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
或者,所述计算机可执行指令在由计算机处理器执行时用于执行一种虚拟礼物特效的渲染方法,包括:
接收主播客户端发送的直播视频流数据;其中,所述直播视频流数据中包括直播视频和目标特效礼物的合成位置信息;
将所述直播视频流数据转发至观众客户端;其中,所述观众客户端根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
当然,本申请实施例所提供的一种包含计算机可执行指令的存储介质, 其计算机可执行指令不限于如上所述的虚拟礼物特效的渲染方法操作,还可以执行本申请任意实施例所提供的虚拟礼物特效的渲染方法中的相关操作,且具备相应的功能和有益效果。
通过以上关于实施方式的描述,所属领域的技术人员可以清楚地了解到,本申请可借助软件及必需的通用硬件来实现,当然也可以通过硬件实现,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品可以存储在计算机可读存储介质中,如计算机的软盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(RandomAccess Memory,RAM)、闪存(FLASH)、硬盘或光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请任意实施例所述的虚拟礼物特效的渲染方法。
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
以上所述仅是本申请的部分实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。

Claims (13)

  1. 一种虚拟礼物特效的渲染方法,其特征在于,包括以下步骤:
    接收直播视频流数据和目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;其中,所述合成位置信息包括基于主播客户端对所述直播视频进行识别得到的目标特效礼物合成在所述直播视频上的目标位置;
    根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;
    在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
  2. 根据权利要求1所述的虚拟礼物特效的渲染方法,其特征在于,所述特效展示区域的面积大于或等于直播视频播放区域。
  3. 根据权利要求1所述的虚拟礼物特效的渲染方法,其特征在于,所述根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像的步骤包括:
    获取所述直播视频的当前视频帧图像;
    将所述前视频帧图像分割为前景图像层和背景图像层,并根据目标特效礼物生成至少一个虚拟礼物特效层;
    根据所述合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照顺序进行合成并展示。
  4. 根据权利要求3所述的虚拟礼物特效的渲染方法,其特征在于,所述根据所述合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照顺序进行合成的步骤包括:
    通过目标特效礼物标识确定各所述虚拟礼物特效层与所述前景图像层和背景图像层的优先级;
    根据所述合成位置信息将各所述虚拟礼物特效层与所述前景图像层和背景图像层按照所述优先级由高到低进行合成,得到特效帧图像。
  5. 根据权利要求1至4任一项所述的虚拟礼物特效的渲染方法,其特征在于,所述目标特效礼物为三维显示形式的特效礼物。
  6. 根据权利要求1至4任一项所述的虚拟礼物特效的渲染方法,其特 征在于,所述合成位置信息包括:人脸信息、人体轮廓信息、手势信息和人体骨骼信息中的至少一者。
  7. 一种虚拟礼物特效的渲染方法,其特征在于,包括以下步骤:
    接收主播客户端发送的直播视频流数据;其中,所述直播视频流数据中包括直播视频和目标特效礼物的合成位置信息;
    将所述直播视频流数据转发至观众客户端;其中,所述观众客户端根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
  8. 根据权利要求7所述的虚拟礼物特效的渲染方法,其特征在于,所述接收主播客户端发送的直播视频流数据之前,还包括以下步骤:
    接收观众客户端发送的虚拟礼物的赠送指令,并向主播客户端发送所述赠送指令;其中,所述主播客户端根据所述赠送指令获取目标特效礼物标识;根据所述目标特效礼物标识查找得到目标特效礼物,确定所述目标特效礼物所对应的特征区域;根据所述特征区域确定所述目标特效礼物在所述直播视频上的合成位置信息。
  9. 一种虚拟礼物特效的渲染装置,其特征在于,包括:
    信息获取模块,用于接收直播视频流数据和目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;其中,所述合成位置信息包括基于主播客户端对所述直播视频进行识别得到的目标特效礼物合成在所述直播视频上的目标位置;
    特效帧合成模块,用于根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;
    特效帧渲染模块,用于在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
  10. 一种虚拟礼物特效的渲染装置,其特征在于,包括:
    视频流接收模块,用于接收主播客户端发送的直播视频流数据;其中,所述直播视频流数据中包括直播视频和目标特效礼物的合成位置信息;
    视频流转发模块,用于将所述直播视频流数据转发至观众客户端;其中,所述观众客户端根据所述合成位置信息将所述目标特效礼物添加到所 述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
  11. 一种直播系统,其特征在于,包括服务器、主播客户端和观众客户端,所述主播客户端经所述服务器与所述观众客户端通过网络进行通信连接;
    所述服务器,用于接收所述观众客户端发送的虚拟礼物的赠送指令,并向所述主播客户端发送所述赠送指令;
    所述主播客户端,用于接收所述赠送指令并获取目标特效礼物标识;根据所述目标特效礼物标识查找得到目标特效礼物,确定所述目标特效礼物所对应的特征区域;根据所述特征区域确定所述目标特效礼物在所述直播视频上的合成位置信息;将所述合成位置信息和所述直播视频编码成直播视频流数据发送至服务器;
    所述服务器,还用于将所述直播视频流数据转发至所述观众客户端;
    所述观众客户端,用于接收所述直播视频流数据和所述目标特效礼物,从所述直播视频流数据中获取直播视频和所述目标特效礼物的合成位置信息;根据所述合成位置信息将所述目标特效礼物添加到所述直播视频进行合成,得到特效帧图像;在直播窗口上设置展示特效展示区域,在播放所述直播视频的过程中,在所述特效展示区域同步渲染所述特效帧图像。
  12. 一种计算机设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,其特征在于,所述处理器执行所述程序时实现如权利要求1-8任一项所述的虚拟礼物特效的渲染方法的步骤。
  13. 一种包含计算机可执行指令的存储介质,其特征在于,所述计算机可执行指令在由计算机处理器执行时用于执行如权利要求1-8任一项所述虚拟礼物特效的渲染方法的步骤。
PCT/CN2020/112815 2019-09-11 2020-09-01 虚拟礼物特效的渲染方法和装置、直播系统 WO2021047420A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910859928.8A CN110475150B (zh) 2019-09-11 2019-09-11 虚拟礼物特效的渲染方法和装置、直播系统
CN201910859928.8 2019-09-11

Publications (1)

Publication Number Publication Date
WO2021047420A1 true WO2021047420A1 (zh) 2021-03-18

Family

ID=68515628

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112815 WO2021047420A1 (zh) 2019-09-11 2020-09-01 虚拟礼物特效的渲染方法和装置、直播系统

Country Status (2)

Country Link
CN (1) CN110475150B (zh)
WO (1) WO2021047420A1 (zh)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113453033A (zh) * 2021-06-29 2021-09-28 广州方硅信息技术有限公司 直播间信息传送处理方法及其装置、设备与介质
CN113487709A (zh) * 2021-07-07 2021-10-08 上海商汤智能科技有限公司 一种特效展示方法、装置、计算机设备以及存储介质
CN113518215A (zh) * 2021-05-19 2021-10-19 上海爱客博信息技术有限公司 3d动态效果生成方法、装置、计算机设备和存储介质
CN113596561A (zh) * 2021-07-29 2021-11-02 北京达佳互联信息技术有限公司 视频流播放方法、装置、电子设备和计算机可读存储介质
CN113645476A (zh) * 2021-08-06 2021-11-12 广州博冠信息科技有限公司 画面处理方法、装置、电子设备及存储介质
CN113784161A (zh) * 2021-09-09 2021-12-10 广州方硅信息技术有限公司 用户标志传输方法及其装置、设备与介质
CN113824976A (zh) * 2021-09-03 2021-12-21 广州方硅信息技术有限公司 直播间内的进场秀显示方法、装置及计算机设备
CN113824982A (zh) * 2021-09-10 2021-12-21 网易(杭州)网络有限公司 一种直播方法、装置、计算机设备及存储介质
CN114025219A (zh) * 2021-11-01 2022-02-08 广州博冠信息科技有限公司 增强现实特效的渲染方法、装置、介质及设备
CN114125488A (zh) * 2021-12-09 2022-03-01 小象(广州)商务有限公司 一种直播中的虚拟礼物展示方法及系统
CN114245154A (zh) * 2021-11-29 2022-03-25 北京达佳互联信息技术有限公司 游戏直播间虚拟物品展示方法、装置及电子设备
CN114268810A (zh) * 2021-12-31 2022-04-01 广州方硅信息技术有限公司 直播视频显示方法、系统、设备及存储介质
CN114710681A (zh) * 2022-03-24 2022-07-05 广州方硅信息技术有限公司 多路直播展示控制方法及其装置、设备、介质
CN115209165A (zh) * 2021-04-08 2022-10-18 北京字节跳动网络技术有限公司 一种控制直播封面显示的方法及装置
CN115665463A (zh) * 2022-10-20 2023-01-31 广州方硅信息技术有限公司 直播礼物交互方法及其装置、设备、介质
CN115883860A (zh) * 2022-10-09 2023-03-31 北京达佳互联信息技术有限公司 虚拟空间的显示方法、装置、设备及存储介质
CN116016972A (zh) * 2022-12-29 2023-04-25 广州方硅信息技术有限公司 直播间美颜方法、装置、系统、存储介质以及电子设备
WO2023104007A1 (zh) * 2021-12-10 2023-06-15 北京字节跳动网络技术有限公司 视频特效包的生成方法、装置、设备及存储介质
CN116456131A (zh) * 2023-03-13 2023-07-18 北京达佳互联信息技术有限公司 特效渲染方法、装置、电子设备及存储介质
CN117376596A (zh) * 2023-12-08 2024-01-09 江西拓世智能科技股份有限公司 基于智能数字人模型的直播方法、装置及存储介质

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110475150B (zh) * 2019-09-11 2021-10-08 广州方硅信息技术有限公司 虚拟礼物特效的渲染方法和装置、直播系统
CN110536151B (zh) * 2019-09-11 2021-11-19 广州方硅信息技术有限公司 虚拟礼物特效的合成方法和装置、直播系统
CN110557649B (zh) * 2019-09-12 2021-12-28 广州方硅信息技术有限公司 直播交互方法、直播系统、电子设备及存储介质
CN111698523B (zh) * 2019-12-06 2021-11-12 广州方硅信息技术有限公司 文字虚拟礼物的赠送方法、装置、设备及存储介质
CN112991147B (zh) 2019-12-18 2023-10-27 抖音视界有限公司 图像处理方法、装置、电子设备及计算机可读存储介质
CN111343485B (zh) * 2020-01-17 2022-04-26 广州方硅信息技术有限公司 虚拟礼物展示的方法、装置、设备、系统及存储介质
CN113315924A (zh) * 2020-02-27 2021-08-27 北京字节跳动网络技术有限公司 图像特效处理方法及装置
CN111314663A (zh) * 2020-02-28 2020-06-19 青岛海信智慧家居系统股份有限公司 一种基于5g的智能虚拟窗系统
CN111464430B (zh) * 2020-04-09 2023-07-04 腾讯科技(深圳)有限公司 一种动态表情展示方法、动态表情创建方法及装置
CN111930326A (zh) * 2020-06-30 2020-11-13 西安万像电子科技有限公司 图像处理方法、设备及系统
CN111935505B (zh) * 2020-07-29 2023-04-14 广州华多网络科技有限公司 视频封面生成方法、装置、设备及存储介质
CN111957039A (zh) * 2020-09-04 2020-11-20 Oppo(重庆)智能科技有限公司 一种游戏特效实现方法、装置及计算机可读存储介质
CN112218107B (zh) * 2020-09-18 2022-07-08 广州虎牙科技有限公司 直播渲染方法和装置、电子设备及存储介质
CN112218108B (zh) * 2020-09-18 2022-07-08 广州虎牙科技有限公司 直播渲染方法、装置、电子设备及存储介质
CN112261290B (zh) * 2020-10-16 2022-04-19 海信视像科技股份有限公司 显示设备、摄像头以及ai数据同步传输方法
CN112348968B (zh) * 2020-11-06 2023-04-25 北京市商汤科技开发有限公司 增强现实场景下的展示方法、装置、电子设备及存储介质
CN112383788B (zh) * 2020-11-11 2023-05-26 成都威爱新经济技术研究院有限公司 一种基于智能ai技术的直播实时图像提取系统及方法
CN112533014B (zh) * 2020-11-26 2023-06-09 Oppo广东移动通信有限公司 视频直播中目标物品信息处理和显示方法、装置及设备
CN114640882B (zh) * 2020-12-15 2024-06-28 腾讯科技(深圳)有限公司 视频处理方法、装置、电子设备及计算机可读存储介质
CN112866562B (zh) * 2020-12-31 2023-04-18 上海米哈游天命科技有限公司 画面处理方法、装置、电子设备及存储介质
CN112929680B (zh) * 2021-01-19 2023-09-05 广州虎牙科技有限公司 直播间图像渲染方法、装置、计算机设备及存储介质
CN113038229A (zh) * 2021-02-26 2021-06-25 广州方硅信息技术有限公司 虚拟礼物播控、操控方法及其装置、设备与介质
CN113139913B (zh) * 2021-03-09 2024-04-05 杭州电子科技大学 一种面向人物肖像的新视图修正生成方法
WO2022193070A1 (zh) * 2021-03-15 2022-09-22 百果园技术(新加坡)有限公司 直播视频交互方法、装置、设备及存储介质
CN114501041B (zh) * 2021-04-06 2023-07-14 抖音视界有限公司 特效显示方法、装置、设备及存储介质
CN113315982B (zh) * 2021-05-07 2023-06-27 广州虎牙科技有限公司 一种直播方法、计算机存储介质及设备
CN113360034B (zh) * 2021-05-20 2024-11-08 广州博冠信息科技有限公司 画面显示方法、装置、计算机设备及存储介质
CN113395533B (zh) * 2021-05-24 2023-03-21 广州博冠信息科技有限公司 虚拟礼物特效显示方法、装置、计算机设备及存储介质
CN113382275B (zh) * 2021-06-07 2023-03-07 广州博冠信息科技有限公司 直播数据的生成方法、装置、存储介质及电子设备
CN113422980B (zh) * 2021-06-21 2023-04-14 广州博冠信息科技有限公司 视频数据处理方法及装置、电子设备、存储介质
CN113744135A (zh) * 2021-09-16 2021-12-03 北京字跳网络技术有限公司 图像处理方法、装置、电子设备及存储介质
CN114449305A (zh) * 2022-01-29 2022-05-06 上海哔哩哔哩科技有限公司 直播间中的礼物动画播放方法及装置
CN115022666B (zh) * 2022-06-27 2024-02-09 北京蔚领时代科技有限公司 一种虚拟数字人的互动方法及其系统
CN115442637A (zh) * 2022-09-06 2022-12-06 北京字跳网络技术有限公司 直播特效渲染方法、装置、设备、可读存储介质及产品
CN115484472B (zh) * 2022-09-23 2024-05-28 广州方硅信息技术有限公司 直播间特效播放及处理方法、装置、电子设备和存储介质
CN116156268A (zh) * 2023-02-20 2023-05-23 北京乐我无限科技有限责任公司 直播间的虚拟资源控制方法、装置、电子设备及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040226047A1 (en) * 2003-05-05 2004-11-11 Jyh-Bor Lin Live broadcasting method and its system for SNG webcasting studio
CN106658035A (zh) * 2016-12-09 2017-05-10 武汉斗鱼网络科技有限公司 一种特效礼物动态展示方法及装置
CN107483892A (zh) * 2017-09-08 2017-12-15 北京奇虎科技有限公司 视频数据实时处理方法及装置、计算设备
CN107613360A (zh) * 2017-09-20 2018-01-19 北京奇虎科技有限公司 视频数据实时处理方法及装置、计算设备
CN108391153A (zh) * 2018-01-29 2018-08-10 北京潘达互娱科技有限公司 虚拟礼物显示方法、装置及电子设备
CN110475150A (zh) * 2019-09-11 2019-11-19 广州华多网络科技有限公司 虚拟礼物特效的渲染方法和装置、直播系统
CN110493630A (zh) * 2019-09-11 2019-11-22 广州华多网络科技有限公司 虚拟礼物特效的处理方法和装置、直播系统
CN110536151A (zh) * 2019-09-11 2019-12-03 广州华多网络科技有限公司 虚拟礼物特效的合成方法和装置、直播系统

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE3318659A1 (de) * 1983-05-21 1984-11-22 Robert Bosch Gmbh, 7000 Stuttgart Verfahren und schaltungsanordnung zur trickueberblendung
CN106101736B (zh) * 2016-06-28 2019-02-22 广州华多网络科技有限公司 一种虚拟礼物的展示方法和系统
CN109035373B (zh) * 2018-06-28 2022-02-01 北京市商汤科技开发有限公司 三维特效程序文件包的生成及三维特效生成方法与装置
CN109191544A (zh) * 2018-08-21 2019-01-11 北京潘达互娱科技有限公司 一种贴纸礼物展示方法、装置、电子设备及存储介质
JP6523586B1 (ja) * 2019-02-28 2019-06-05 グリー株式会社 配信ユーザの動きに基づいて生成されるキャラクタオブジェクトのアニメーションを含む動画をライブ配信する動画配信システム、動画配信方法及び動画配信プログラム
CN109286835A (zh) * 2018-09-05 2019-01-29 武汉斗鱼网络科技有限公司 直播间互动元素显示方法、存储介质、设备及系统
CN110012352B (zh) * 2019-04-17 2020-07-24 广州华多网络科技有限公司 图像特效处理方法、装置及视频直播终端

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040226047A1 (en) * 2003-05-05 2004-11-11 Jyh-Bor Lin Live broadcasting method and its system for SNG webcasting studio
CN106658035A (zh) * 2016-12-09 2017-05-10 武汉斗鱼网络科技有限公司 一种特效礼物动态展示方法及装置
CN107483892A (zh) * 2017-09-08 2017-12-15 北京奇虎科技有限公司 视频数据实时处理方法及装置、计算设备
CN107613360A (zh) * 2017-09-20 2018-01-19 北京奇虎科技有限公司 视频数据实时处理方法及装置、计算设备
CN108391153A (zh) * 2018-01-29 2018-08-10 北京潘达互娱科技有限公司 虚拟礼物显示方法、装置及电子设备
CN110475150A (zh) * 2019-09-11 2019-11-19 广州华多网络科技有限公司 虚拟礼物特效的渲染方法和装置、直播系统
CN110493630A (zh) * 2019-09-11 2019-11-22 广州华多网络科技有限公司 虚拟礼物特效的处理方法和装置、直播系统
CN110536151A (zh) * 2019-09-11 2019-12-03 广州华多网络科技有限公司 虚拟礼物特效的合成方法和装置、直播系统

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115209165A (zh) * 2021-04-08 2022-10-18 北京字节跳动网络技术有限公司 一种控制直播封面显示的方法及装置
CN113518215A (zh) * 2021-05-19 2021-10-19 上海爱客博信息技术有限公司 3d动态效果生成方法、装置、计算机设备和存储介质
CN113453033A (zh) * 2021-06-29 2021-09-28 广州方硅信息技术有限公司 直播间信息传送处理方法及其装置、设备与介质
CN113487709A (zh) * 2021-07-07 2021-10-08 上海商汤智能科技有限公司 一种特效展示方法、装置、计算机设备以及存储介质
CN113596561A (zh) * 2021-07-29 2021-11-02 北京达佳互联信息技术有限公司 视频流播放方法、装置、电子设备和计算机可读存储介质
CN113596561B (zh) * 2021-07-29 2023-06-27 北京达佳互联信息技术有限公司 视频流播放方法、装置、电子设备和计算机可读存储介质
CN113645476A (zh) * 2021-08-06 2021-11-12 广州博冠信息科技有限公司 画面处理方法、装置、电子设备及存储介质
CN113645476B (zh) * 2021-08-06 2023-10-03 广州博冠信息科技有限公司 画面处理方法、装置、电子设备及存储介质
CN113824976A (zh) * 2021-09-03 2021-12-21 广州方硅信息技术有限公司 直播间内的进场秀显示方法、装置及计算机设备
CN113784161A (zh) * 2021-09-09 2021-12-10 广州方硅信息技术有限公司 用户标志传输方法及其装置、设备与介质
CN113784161B (zh) * 2021-09-09 2023-11-24 广州方硅信息技术有限公司 用户标志传输方法及其装置、设备与介质
CN113824982A (zh) * 2021-09-10 2021-12-21 网易(杭州)网络有限公司 一种直播方法、装置、计算机设备及存储介质
CN114025219A (zh) * 2021-11-01 2022-02-08 广州博冠信息科技有限公司 增强现实特效的渲染方法、装置、介质及设备
CN114025219B (zh) * 2021-11-01 2024-06-04 广州博冠信息科技有限公司 增强现实特效的渲染方法、装置、介质及设备
CN114245154B (zh) * 2021-11-29 2022-12-27 北京达佳互联信息技术有限公司 游戏直播间虚拟物品展示方法、装置及电子设备
CN114245154A (zh) * 2021-11-29 2022-03-25 北京达佳互联信息技术有限公司 游戏直播间虚拟物品展示方法、装置及电子设备
CN114125488A (zh) * 2021-12-09 2022-03-01 小象(广州)商务有限公司 一种直播中的虚拟礼物展示方法及系统
WO2023104007A1 (zh) * 2021-12-10 2023-06-15 北京字节跳动网络技术有限公司 视频特效包的生成方法、装置、设备及存储介质
CN114268810B (zh) * 2021-12-31 2024-02-06 广州方硅信息技术有限公司 直播视频显示方法、系统、设备及存储介质
CN114268810A (zh) * 2021-12-31 2022-04-01 广州方硅信息技术有限公司 直播视频显示方法、系统、设备及存储介质
CN114710681A (zh) * 2022-03-24 2022-07-05 广州方硅信息技术有限公司 多路直播展示控制方法及其装置、设备、介质
CN115883860A (zh) * 2022-10-09 2023-03-31 北京达佳互联信息技术有限公司 虚拟空间的显示方法、装置、设备及存储介质
CN115665463A (zh) * 2022-10-20 2023-01-31 广州方硅信息技术有限公司 直播礼物交互方法及其装置、设备、介质
CN116016972A (zh) * 2022-12-29 2023-04-25 广州方硅信息技术有限公司 直播间美颜方法、装置、系统、存储介质以及电子设备
CN116456131A (zh) * 2023-03-13 2023-07-18 北京达佳互联信息技术有限公司 特效渲染方法、装置、电子设备及存储介质
CN116456131B (zh) * 2023-03-13 2023-12-19 北京达佳互联信息技术有限公司 特效渲染方法、装置、电子设备及存储介质
CN117376596A (zh) * 2023-12-08 2024-01-09 江西拓世智能科技股份有限公司 基于智能数字人模型的直播方法、装置及存储介质
CN117376596B (zh) * 2023-12-08 2024-04-26 江西拓世智能科技股份有限公司 基于智能数字人模型的直播方法、装置及存储介质

Also Published As

Publication number Publication date
CN110475150B (zh) 2021-10-08
CN110475150A (zh) 2019-11-19

Similar Documents

Publication Publication Date Title
WO2021047420A1 (zh) 虚拟礼物特效的渲染方法和装置、直播系统
WO2021047430A1 (zh) 虚拟礼物特效的合成方法和装置、直播系统
WO2021047094A1 (zh) 虚拟礼物特效的处理方法和装置、直播系统
US10609334B2 (en) Group video communication method and network device
CN110465097B (zh) 游戏中的角色立绘显示方法及装置、电子设备、存储介质
WO2020211385A1 (zh) 图像特效处理方法、装置及视频直播终端
WO2022048097A1 (zh) 一种基于多显卡的单帧画面实时渲染方法
CN106713988A (zh) 一种对虚拟场景直播进行美颜处理的方法及系统
KR20160021146A (ko) 가상 동영상 통화 방법 및 단말
US11741616B2 (en) Expression transfer across telecommunications networks
US11451858B2 (en) Method and system of processing information flow and method of displaying comment information
US20220210484A1 (en) Method for processing live broadcast data, system, electronic device, and storage medium
US20220028157A1 (en) 3d conversations in an artificial reality environment
CN111464828A (zh) 虚拟特效显示方法、装置、终端及存储介质
US20140161173A1 (en) System and method for controlling video encoding using content information
US12010157B2 (en) Systems and methods for enabling user-controlled extended reality
WO2020258907A1 (zh) 虚拟物品的生成方法、装置及设备
US20170221174A1 (en) Gpu data sniffing and 3d streaming system and method
CN110958463A (zh) 虚拟礼物展示位置的检测、合成方法、装置和设备
CN113676720B (zh) 多媒体资源的播放方法、装置、计算机设备及存储介质
US20230396735A1 (en) Providing a 3d representation of a transmitting participant in a virtual meeting
KR20230153468A (ko) 3d 오브젝트의 스트리밍 방법, 장치, 및 프로그램
US20240297953A1 (en) Systems and methods for enabling user-controlled extended reality
EP4406632A1 (en) Image frame rendering method and related apparatus
US20220210520A1 (en) Online video data output method, system, and cloud platform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20863935

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20863935

Country of ref document: EP

Kind code of ref document: A1