
CN109905754B - Virtual gift receiving method and device and storage equipment - Google Patents

Info

Publication number: CN109905754B
Application number: CN201711311512.XA
Authority: CN (China)
Prior art keywords: virtual gift, anchor user, gift, virtual, anchor
Legal status: Active (assumed, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN109905754A (en)
Inventors: 杨琳, 彭放, 冯智超, 李安琪, 陈营
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN201711311512.XA
Publication of application CN109905754A, followed by grant and publication of CN109905754B

Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses a virtual gift receiving method, a virtual gift receiving device, and a storage device, wherein the virtual gift receiving method comprises the following steps: collecting a three-dimensional live-action image; receiving a virtual gift sent by a viewer client; rendering the virtual gift into the three-dimensional live-action image, and generating and displaying an augmented reality scene; tracking the gesture position of an anchor user; and determining whether the anchor user captured the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift. The embodiment of the invention makes gift receiving more engaging and enhances the interaction between users.

Description

Virtual gift receiving method and device and storage equipment
Technical Field
The embodiment of the invention relates to the technical field of live broadcast, in particular to a virtual gift receiving method and device and storage equipment.
Background
With the development of the internet, an anchor user can show talent to audiences by creating a live video room on a live video broadcast system, and during the performance the audiences can give virtual goods to the anchor user as gifts. In current live video broadcast systems, once a viewer presents a gift to the anchor user, the anchor user receives it by default; this way of receiving gifts is monotonous and uninteresting.
Disclosure of Invention
In view of this, embodiments of the present invention provide a virtual gift receiving method, device, and storage device, which can make gift receiving more engaging and enhance the interaction between users.
The virtual gift receiving method provided by the embodiment of the invention comprises the following steps:
collecting a three-dimensional live-action image;
receiving a virtual gift sent by a viewer client;
rendering the virtual gift into the three-dimensional live-action image, and generating and displaying an augmented reality scene;
tracking gesture positions of an anchor user;
determining whether the anchor user captured the virtual gift based on the gesture position of the anchor user and the display position of the virtual gift in the three-dimensional live-action image.
The virtual gift receiving device provided by the embodiment of the invention comprises:
the acquisition unit is used for acquiring a three-dimensional live-action image;
the receiving unit is used for receiving the virtual gift sent by the viewer client;
the rendering unit is used for rendering the virtual gift into the three-dimensional live-action image, generating an augmented reality scene, and displaying the augmented reality scene;
the tracking unit is used for tracking the gesture position of the anchor user;
a determination unit configured to determine whether the anchor user captures the virtual gift according to a gesture position of the anchor user and a display position of the virtual gift in the three-dimensional live-action image.
The embodiment of the invention also provides a storage device, wherein the storage device is used for storing a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the virtual gift receiving method provided by the embodiment of the invention.
In the embodiment of the invention, during live broadcasting, a virtual gift received from a viewer client is rendered into the three-dimensional live-action image of the anchor client, and an augmented reality scene is generated and displayed, giving the anchor user a strong sense of immersion. The gesture position of the anchor user is then tracked, and whether the anchor user captures the virtual gift is determined according to the gesture position of the anchor user and the display position of the virtual gift; that is, the anchor user must make certain gesture actions to receive the gift. In the embodiment of the invention, augmented reality technology is applied to the live broadcast scene and virtual gifts are received through gesture position recognition, which makes gift receiving more engaging and enhances the interaction between users.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a virtual gift receiving method according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating a virtual gift receiving method according to an embodiment of the present invention.
Fig. 3a is another flow chart of a virtual gift receiving method according to an embodiment of the present invention.
Fig. 3b is a schematic diagram illustrating the effect of receiving the gift by sliding the finger according to the embodiment of the present invention.
Fig. 4a is a schematic flow chart of a virtual gift receiving method according to an embodiment of the present invention.
Fig. 4b is a schematic diagram of an augmented reality scene generated by the embodiment of the present invention.
Fig. 4c is a schematic diagram illustrating the effect of receiving the gift by clicking with a finger according to the embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a virtual gift receiving device according to an embodiment of the present invention.
Fig. 6 is another schematic structural diagram of a virtual gift receiving device according to an embodiment of the present invention.
Fig. 7 is yet another schematic structural diagram of a virtual gift receiving device according to an embodiment of the present invention.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
In the description that follows, specific embodiments of the present application are described with reference to steps and operations performed by one or more computers, unless indicated otherwise. These steps and operations are at times referred to as being computer-executed: the computer's processing unit manipulates electronic signals that represent data in a structured form. This manipulation transforms the data or maintains it at locations in the computer's memory system, which may be reconfigured or otherwise altered in a manner well known to those skilled in the art. The data structures in which the data is maintained are physical locations of the memory that have particular properties defined by the data format. However, although the principles of the application are described in this context, this is not meant to be limiting; those of ordinary skill in the art will recognize that various of the steps and operations described below may also be implemented in hardware.
The term module, as used herein, may be considered a software object executing on the computing system. The various components, modules, engines, and services described herein may be viewed as objects implemented on the computing system. The apparatus and method described herein may be implemented in software, but may also be implemented in hardware, and are within the scope of the present application.
The terms "first", "second", and "third", etc. in this application are used to distinguish between different objects and not to describe a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or modules is not limited to only those steps or modules listed, but rather, some embodiments may include other steps or modules not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Because the existing way of receiving gifts in live broadcast is monotonous and uninteresting, the embodiment of the invention provides a virtual gift receiving method which can make gift receiving more engaging and enhance the interaction among users. The virtual gift receiving method provided by the embodiment of the invention can be implemented in a virtual gift receiving device; the virtual gift receiving device can be a client, such as a live broadcast APP, and the client can be installed in terminals such as mobile phones, tablet computers, and personal computers.
A specific implementation scenario of the virtual gift receiving method according to the embodiment of the present invention is shown in fig. 1 and includes terminals and a server, where the terminals include the terminal of the anchor user (i.e., the anchor terminal, on which a live broadcast client is installed) and the terminals of viewers watching the broadcast (i.e., viewer terminals, on which a live broadcast client is installed), and the server may be a live broadcast server.
The anchor terminal collects three-dimensional live-action images containing the anchor user for live broadcast and transmits the live broadcast data to the viewer terminals through the server. While watching the broadcast, viewers can give virtual gifts to the anchor user, and the viewer terminal sends each gifted virtual gift to the anchor terminal through the server. After receiving the virtual gift, the anchor terminal may render it into the three-dimensional live-action image, generate and display an augmented reality scene, and then track the gesture position of the anchor user to determine whether the anchor user captures the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift. For example, when the anchor user pokes a virtual gift, it is determined that the anchor user captured it.
Each embodiment is described in detail below; note that the order of description is not intended to limit any preferred order of the embodiments.
In this embodiment, the virtual gift receiving method provided by an embodiment of the present invention is described from the perspective of a virtual gift receiving device, which may be installed at the anchor terminal. As shown in fig. 2, the virtual gift receiving method of this embodiment includes the following steps:
and S101, acquiring a three-dimensional live-action image.
When the anchor user wants to start a live broadcast, the anchor terminal turns on the camera and collects a three-dimensional live-action image containing the anchor user. After the live broadcast starts, the anchor terminal encodes the acquired three-dimensional live-action image and sends it as live broadcast data to the live broadcast server, and the live broadcast server forwards the live broadcast data to the viewer terminals watching the broadcast.
The camera of this embodiment may be a three-dimensional (3D) camera, and the acquired three-dimensional live-action image includes the anchor user together with the anchor user's three-dimensional coordinate position, which may be represented as (x, y, z). Of course, the three-dimensional live-action image may also include other objects and their three-dimensional spatial coordinate positions, where the other objects include furniture, items, and the like around the anchor user.
In addition, after the live broadcast starts, the anchor terminal can also push a function notification message to the viewer terminals through the server according to the anchor terminal's function permissions, so as to notify viewers of the functions the anchor user supports during the live broadcast, such as a virtual gift function, allowing viewers to interact with the anchor user based on those supported functions.
For example, if the anchor terminal has an Augmented Reality (AR) function and a three-dimensional gesture recognition function, the anchor user can support the bubble gift function, and the anchor terminal may push a notification message to the viewer terminals through the server to notify viewers that the anchor user supports the bubble gift function. The bubble gift is a presentation manner of the virtual gift in which the virtual gift is displayed inside a bubble.
Augmented reality refers to a computer technology that, by computing the position and angle of camera images and applying image analysis, combines a virtual world with the real-world scene on screen and lets the two interact.
Three-dimensional gesture recognition refers to a computer technology for recognizing human gestures in three-dimensional space through mathematical algorithms, for example recognizing a swipe gesture, a tap gesture, and so on.
Step S102, receiving the virtual gift sent by the viewer client.
When a viewer wants to present a virtual gift to an anchor user, the viewer terminal may send a gift-presentation request to the server, where the request includes identification information of the viewer (e.g., the viewer's registered account and registered name), identification information of the virtual gift (e.g., the type, name, and number of the virtual gift), and identification information of the anchor user (e.g., the anchor user's registered account and registered name). The server checks the viewer's account balance to judge whether the viewer can purchase the virtual gift; if not, the server returns a presentation-failure notification to the viewer. If the virtual gift can be purchased, the fee is deducted and the virtual gift is sent to the corresponding anchor terminal, which receives it.
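To make the request flow concrete, the following is a minimal sketch of the gift-presentation request and the server-side balance check described above. The field names, prices, and the in-memory account store are illustrative assumptions; the patent does not specify a data model.

```python
from dataclasses import dataclass

@dataclass
class GiftRequest:
    viewer_id: str    # viewer's registered account (identification information)
    anchor_id: str    # anchor user's registered account
    gift_type: str    # type/name of the virtual gift
    quantity: int     # number of gifts presented

# Hypothetical in-memory stores standing in for the live broadcast server's data.
ACCOUNT_BALANCE = {"viewer_42": 500}
GIFT_PRICES = {"bubble": 10}

def handle_gift_request(req: GiftRequest) -> str:
    """Check the viewer's balance, deduct the fee, and forward the gift."""
    cost = GIFT_PRICES[req.gift_type] * req.quantity
    if ACCOUNT_BALANCE.get(req.viewer_id, 0) < cost:
        return "presentation_failed"        # notify the viewer of the failure
    ACCOUNT_BALANCE[req.viewer_id] -= cost  # deduct the fee
    return "forwarded_to_anchor"            # send the gift to the anchor terminal
```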
In a particular embodiment, the virtual gift given by the audience may be a dynamic virtual gift, meaning that the virtual gift can move in space, e.g., from one point in space to another. After receiving the virtual gift, the anchor terminal may obtain the motion trajectory parameter of the virtual gift according to the virtual gift's identification information; the motion trajectory parameter may include a series of three-dimensional coordinate positions of the virtual gift, from which a gift display trajectory can be generated. Different virtual gifts may have different motion trajectory parameters.
The motion trajectory parameter of the virtual gift may be obtained from the anchor terminal according to the identification information of the corresponding virtual gift, or from the server or the viewer terminal; this is not specifically limited herein.
The motion trajectory parameters of the virtual gift can be set by the anchor terminal, the server, or the viewer terminal according to the specific coordinate position of the anchor user in the three-dimensional live-action image. For example, the anchor terminal may establish a three-dimensional coordinate system with the position of the camera as the origin, obtain the coordinate positions of the anchor user's two eyes in that coordinate system, and select a plurality of coordinate positions in front of the anchor user's eyes to form the motion trajectory parameter of the virtual gift.
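As a rough illustration of this example, the sketch below builds a motion trajectory parameter as a series of (x, y, z) positions starting just in front of the anchor user's eyes, with the camera at the origin and z as depth. The point count, offsets, and drift model are invented for illustration.

```python
import random

def make_trajectory(eye_pos, num_points=60, depth_offset=0.5):
    """Sample a series of 3D coordinate positions floating in front of the
    anchor user's eyes (units in meters; all constants are illustrative)."""
    x, y, z = eye_pos
    z -= depth_offset  # start slightly in front of the eyes, toward the camera
    points = []
    for _ in range(num_points):
        x += random.uniform(-0.02, 0.02)   # small sideways drift
        y += random.uniform(0.005, 0.02)   # slow upward float
        z += random.uniform(-0.01, 0.01)   # slight depth wobble
        points.append((x, y, z))
    return points
```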
Step S103, rendering the virtual gift into the three-dimensional live-action image, and generating and displaying an augmented reality scene.
In a specific implementation, the anchor terminal may render the virtual gift into the three-dimensional live-action image along the corresponding gift display trajectory to generate an augmented reality scene, in which the virtual gift may display an animation effect.
For example, when the virtual gift is displayed in a bubble manner, the generated augmented reality scene may present a picture of bubbles floating in front of the anchor user. In practical applications, the virtual gift may also be displayed in other manners, for example as a cloud, which is not limited herein.
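One plausible way for the anchor terminal to animate a gift along its display trajectory frame by frame is sketched below; `composite_bubble` is a hypothetical stand-in for the actual AR compositing call, which the patent does not name.

```python
class BubbleGift:
    """A virtual gift advancing along its precomputed display trajectory."""

    def __init__(self, gift_id, trajectory):
        self.gift_id = gift_id
        self.trajectory = trajectory  # series of (x, y, z) positions
        self.step = 0

    @property
    def position(self):
        return self.trajectory[self.step]

    def advance(self):
        """Move one step along the trajectory; False once the last point is reached."""
        if self.step + 1 >= len(self.trajectory):
            return False
        self.step += 1
        return True

def composite_bubble(frame, position):
    # Placeholder for the real AR rendering call, which the patent does not name.
    pass

def render_frame(frame, gifts):
    """Composite every active gift over the live-action frame, then advance it."""
    for gift in list(gifts):
        composite_bubble(frame, gift.position)
        if not gift.advance():
            gifts.remove(gift)  # trajectory finished: remove the uncaptured gift
    return frame
```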
The audience may present multiple virtual gifts to the anchor user at one time, with the gifts appearing at random positions around the anchor user. Virtual gifts at different prices can be presented in different ways: for example, the higher the price of the virtual gift, the larger its display volume; as another example, the higher the price, the closer its display position is to the anchor user's eyes.
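A hedged sketch of the price-dependent presentation just described: scale the bubble's display volume with price, and spawn pricier bubbles closer to the anchor user's eyes. All constants are invented for illustration.

```python
def bubble_params(price):
    """Map a gift's price to a bubble radius and a spawn distance from the
    anchor user's eyes (meters; constants are illustrative, not from the patent)."""
    radius = 0.05 + 0.002 * min(price, 100)   # higher price -> larger bubble
    distance = max(0.2, 0.8 - 0.005 * price)  # higher price -> closer to the eyes
    return radius, distance
```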
While viewing the screen, the anchor user makes a corresponding capture gesture in space to capture the virtual gift.
In addition, before the anchor user starts capturing the virtual gift, the anchor terminal may generate an animation preview video of the virtual gift and play it first, so as to improve the anchor user's capture success rate.
Step S104, tracking the gesture position of the anchor user.
In a specific implementation, the anchor user's hand can be located in the three-dimensional live-action image, and the hand motion is then tracked to obtain the anchor user's gesture position. The acquired gesture position may be the position where the anchor user's finger slides or the position where the anchor user's finger clicks, expressed as a three-dimensional coordinate position.
Step S105, determining whether the anchor user captures the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift.
In a specific implementation, it may be determined whether the gesture position of the anchor user and the display position of the virtual gift intersect in space; if they intersect, it is determined that the anchor user captured the virtual gift, and if not, that the anchor user did not capture it.
Taking the case where the anchor user captures the virtual gift with a finger-click gesture as an example, the anchor terminal can acquire the three-dimensional coordinate position of the anchor user's fingertip at the moment of the click action, and acquire the current three-dimensional coordinate position of the virtual gift. Because the virtual gift has a certain volume, its current three-dimensional coordinate position is a set of three-dimensional coordinate positions, so it can be judged whether the current three-dimensional coordinate position of the anchor user's fingertip belongs to that set; if it does, it is determined that the gesture position of the anchor user and the display position of the virtual gift intersect, and hence that the anchor user captured the virtual gift.
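A simple stand-in for this set-membership test is a bounding-sphere check on the fingertip coordinate. Approximating the gift's occupied coordinates by a sphere is an assumption; the patent only requires testing whether the fingertip position belongs to the gift's current position set.

```python
def fingertip_hits_gift(fingertip, gift_center, gift_radius):
    """True if the fingertip's 3D coordinate falls inside the set of positions
    occupied by the gift, approximated here as a sphere around its center."""
    fx, fy, fz = fingertip
    gx, gy, gz = gift_center
    dist_sq = (fx - gx) ** 2 + (fy - gy) ** 2 + (fz - gz) ** 2
    return dist_sq <= gift_radius ** 2
```

On each tracked gesture sample, the anchor terminal would test the fingertip against every gift currently on screen and treat the first hit as a capture.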
After determining that the virtual gift was captured by the anchor user, the captured virtual gift may be removed from the three-dimensional live-action image.
In addition, if the anchor user has not captured the virtual gift by the time the virtual gift finishes moving, the virtual gift is removed from the three-dimensional live-action image after it moves to its last three-dimensional coordinate point.
Finally, after all the virtual gifts on the screen disappear, a capture result may be sent to the server, where the capture result includes identification information of the virtual gifts captured by the anchor user, identification information of the anchor user, and identification information of the viewers who presented the corresponding virtual gifts, so that the server performs fund settlement according to the capture result. For example, the amount of the virtual gift is deducted from the account of the corresponding viewer and added to the account of the corresponding anchor user. For virtual gifts not captured by the anchor user, the server does not actually charge the corresponding viewers.
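The capture result might be serialized as in the sketch below; the JSON layout is invented, but the three kinds of identification information match the paragraph above.

```python
import json

def build_capture_result(anchor_id, captured):
    """captured: list of (gift_id, viewer_id) pairs for gifts the anchor caught.
    Gifts absent from the list were not captured, so their viewers are not charged."""
    return json.dumps({
        "anchor_id": anchor_id,
        "captured_gifts": [
            {"gift_id": gift_id, "viewer_id": viewer_id}
            for gift_id, viewer_id in captured
        ],
    })
```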
In this embodiment, during live broadcasting, the received virtual gifts sent by the viewer client are rendered into the three-dimensional live-action image of the anchor client to generate an augmented reality scene, giving the anchor user a strong sense of immersion. The gesture position of the anchor user is then tracked, and whether the anchor user captures the virtual gift is determined from the gesture position of the anchor user and the display position of the virtual gift; that is, the anchor user must make certain gesture actions to receive the gift. In this embodiment, augmented reality technology is applied to the live broadcast scene and virtual gifts are received through gesture position recognition, which makes gift receiving more engaging and enhances the interaction between users.
The methods described above are illustrated in further detail by the following two embodiments.
Referring to fig. 3a, this embodiment is described taking the capture of a virtual gift by finger sliding as an example; the method of this embodiment includes:
step S201, collecting a three-dimensional live-action image.
The collected three-dimensional live-action image includes the anchor user together with the anchor user's three-dimensional coordinate position, which may be represented as (x, y, z); of course, it may also include the three-dimensional coordinate positions of other objects, for example furniture, items, and the like around the anchor user.
Step S202, receiving the virtual gift sent by the viewer client.
In the present embodiment, a case where a virtual gift given by a viewer is displayed in a bubble manner (hereinafter referred to as a bubble gift) will be described.
Step S203, obtaining the motion trajectory parameter of the virtual gift, and generating a gift display trajectory according to the motion trajectory parameter.
The motion trajectory parameters of the virtual gift may include a series of three-dimensional coordinate positions of the virtual gift, and a gift display trajectory may be generated according to the motion trajectory parameters of the virtual gift. The gift display trajectory of a bubble gift is a bubble floating trajectory. Different bubble gifts may have different motion trajectory parameters.
The motion trajectory parameter of the bubble gift may be obtained from the anchor terminal according to the identification information of the corresponding bubble gift, or from the server or the viewer terminal; this is not specifically limited herein.
The motion trajectory parameters of the bubble gifts can be set by the anchor terminal, the server, or the viewer terminal according to the specific coordinate position of the anchor user in the three-dimensional live-action image. For example, the anchor terminal may establish a three-dimensional coordinate system with the position of the camera as the origin, obtain the coordinate positions of the anchor user's two eyes in that coordinate system, and select a plurality of coordinate positions in front of the anchor user's eyes to form the motion trajectory parameter of the bubble gift.
Step S204, rendering the virtual gift into the three-dimensional live-action image along the gift display trajectory, and generating and displaying an augmented reality scene.
In a specific implementation, the anchor terminal may render the bubble gift into the three-dimensional live-action image along the corresponding gift display trajectory to generate an augmented reality scene, in which the bubble gift may display a floating motion effect.
The audience may present multiple bubble gifts to the anchor user at one time, with the bubble gifts appearing at random positions around the anchor user. Bubble gifts with different prices can be presented in different ways: for example, the higher the price of the bubble gift, the larger the bubble's display volume; as another example, the higher the price, the closer the display position is to the anchor user's eyes.
Step S205, tracking the finger sliding position of the anchor user.
In a specific implementation, the anchor user's hand can be located in the three-dimensional live-action image, and it is then detected whether the anchor user performs a finger-sliding action. Since the captured live-action image is three-dimensional, each point in the image has a three-dimensional coordinate position, i.e., values on the x, y, and z axes. If the anchor user performs a finger-sliding action, the x-axis and y-axis values of the finger (the fingertip point) change quickly while the z-axis value (the depth value) changes slowly, so whether the anchor user performs a finger-sliding action can be detected from how the anchor user's finger changes along the x, y, and z axes.
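This per-axis velocity rule can be sketched as a simple classifier over consecutive fingertip samples; the velocity thresholds are illustrative assumptions, not values from the patent.

```python
def is_swipe(prev, curr, dt, xy_fast=0.5, z_slow=0.1):
    """Classify a fingertip movement as a finger slide: the x/y velocity is
    high while the depth (z) velocity stays low. Positions are (x, y, z) in
    meters, dt in seconds; thresholds are illustrative."""
    vx = abs(curr[0] - prev[0]) / dt
    vy = abs(curr[1] - prev[1]) / dt
    vz = abs(curr[2] - prev[2]) / dt
    return (vx > xy_fast or vy > xy_fast) and vz < z_slow
```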
If the anchor user does not perform the finger sliding action, the processing can be ended; if the anchor user performs the finger sliding action, the finger sliding position of the anchor user can be acquired.
Step S206, determining whether the current sliding position of the anchor user's finger intersects the current display position of the virtual gift; if so, executing step S207, otherwise returning to step S205 and continuing to track the sliding position of the anchor user's finger.
Typically, the virtual gift has a certain volume, as shown for example in fig. 3b, so its display position is a set of three-dimensional coordinate positions. During capture, both the anchor user's finger and the virtual gift are moving, so it can be judged whether the current sliding position of the anchor user's finger belongs to the set of the virtual gift's current positions; if it does, it is determined that the current sliding position of the anchor user's finger intersects the current display position of the virtual gift.
When the current sliding position of the anchor user's finger and the current display position of the virtual gift intersect in space, the anchor user is determined to have captured the virtual gift. Thus, while capturing the virtual gift, the anchor user only needs to slide a finger in space and does not need to touch the terminal's screen.
Step S207, determining that the anchor user captured the virtual gift.
After determining that the virtual gift was captured by the anchor user, the captured virtual gift may be removed from the three-dimensional live-action image.
In addition, if the anchor user has not captured the virtual gift by the time the virtual gift finishes moving, the virtual gift is removed from the three-dimensional live-action image after it moves to its last three-dimensional coordinate point.
Step S208, sending the capture result to a server so that the server performs fund settlement according to the capture result.
Specifically, after all the virtual gifts on the screen disappear, a capture result may be sent to the server, where the capture result includes identification information of the virtual gifts captured by the anchor user, identification information of the anchor user, and identification information of the viewers who gave the corresponding virtual gifts, so that the server performs fund settlement according to the capture result. For example, the amount of the virtual gift is deducted from the account of the corresponding viewer and added to the account of the corresponding anchor user. For virtual gifts not captured by the anchor user, the server does not actually charge the corresponding viewers.
In this embodiment, augmented reality technology is applied to the live broadcast scene and virtual gifts are received by recognizing the sliding position of the anchor user's finger, which makes gift receiving more engaging and enhances the interaction between users.
Referring to fig. 4a, this embodiment is described taking the capture of a virtual gift by finger click as an example; the method of this embodiment includes:
and S301, acquiring a three-dimensional live-action image.
The collected three-dimensional live-action image includes the anchor user together with the anchor user's three-dimensional coordinate position, which may be represented as (x, y, z); of course, it may also include the three-dimensional coordinate positions of other objects, for example furniture, items, and the like around the anchor user.
Step S302, receiving the virtual gift sent by the viewer client.
In the present embodiment, a case where a virtual gift given by a viewer is displayed in a bubble manner (hereinafter referred to as a bubble gift) will be described.
Step S303, obtaining the motion trajectory parameters of the virtual gift, and generating a gift display trajectory according to the motion trajectory parameters.
The motion trajectory parameters of the virtual gift may include a series of three-dimensional coordinate positions of the virtual gift, and a gift display trajectory may be generated according to the motion trajectory parameters of the virtual gift. The gift display trajectory of a bubble gift is a bubble floating trajectory. Different bubble gifts may have different motion trajectory parameters.
The motion trajectory parameter of the bubble gift may be obtained from the anchor terminal according to the identification information of the corresponding bubble gift, or from the server or the viewer terminal; this is not specifically limited herein.
The motion trajectory parameters of the bubble gifts can be set by the anchor terminal, the server, or the viewer terminal according to the specific coordinate position of the anchor user in the three-dimensional live-action image. For example, the anchor terminal may establish a three-dimensional coordinate system with the position of the camera as the origin, obtain the coordinate positions of the anchor user's two eyes in that coordinate system, and select a plurality of coordinate positions in front of the anchor user's eyes to form the motion trajectory parameter of the bubble gift.
Step S304, rendering the virtual gift into the three-dimensional live-action image along the gift display trajectory, and generating and displaying an augmented reality scene.
In a specific implementation, the anchor terminal may render the bubble gift into the three-dimensional live-action image along the corresponding gift display trajectory to generate an augmented reality scene, in which the bubble gift may display a floating motion effect. The audience may present multiple bubble gifts to the anchor user at one time, with the bubble gifts appearing at random positions around the anchor user. Bubble gifts with different prices can be presented in different ways. In a specific embodiment, the generated augmented reality scene may be as shown in fig. 4b, with a plurality of bubble gifts forming the effect of bubble rain floating in front of the anchor user.
Step S305, tracking the finger click position of the anchor user.
In a specific implementation, the anchor user's hand can be located in the three-dimensional live-action image, and it is then detected whether the anchor user performs a finger-click action. Since the captured live-action image is three-dimensional, each point in the image has a three-dimensional coordinate position, i.e., values on the x, y, and z axes. If the anchor user performs a finger-click action, the x-axis and y-axis values of the finger (the fingertip point) change slowly while the z-axis value (the depth value) changes quickly, so whether the anchor user performs a finger-click action can be detected from how the anchor user's finger changes along the x, y, and z axes.
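Mirroring the swipe detector of the previous embodiment, a click shows slow x/y change and fast depth change between consecutive fingertip samples; again the thresholds are invented for illustration.

```python
def is_click(prev, curr, dt, xy_slow=0.1, z_fast=0.5):
    """Classify a fingertip movement as a finger click (poke): the x/y
    velocity stays low while the depth (z) velocity is high."""
    vx = abs(curr[0] - prev[0]) / dt
    vy = abs(curr[1] - prev[1]) / dt
    vz = abs(curr[2] - prev[2]) / dt
    return vx < xy_slow and vy < xy_slow and vz > z_fast
```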
If the anchor user does not perform the finger click action, the processing can be ended; if the anchor user performs a finger click action, the finger click position of the anchor user can be collected.
Step S306, determining whether the current click position of the anchor user's finger intersects the current display position of the virtual gift; if so, executing step S307, otherwise returning to step S305 and continuing to track the click position of the anchor user's finger.
Typically, the virtual gift has a certain volume, as shown for example in fig. 4c, so its display position is a set of three-dimensional coordinate positions. During capture, both the anchor user's finger and the virtual gift are moving, so it can be judged whether the current click position of the anchor user's finger belongs to the set of the virtual gift's current positions; if it does, it is determined that the current click position of the anchor user's finger intersects the current display position of the virtual gift.
When the current click position of the anchor user's finger and the current display position of the virtual gift intersect in space, the anchor user is determined to have captured the virtual gift. Thus, while capturing the virtual gift, the anchor user only needs to click (poke) in space with a finger and does not need to touch the terminal's screen.
Step S307, determining that the anchor user captured the virtual gift.
After determining that the virtual gift was captured by the anchor user, the captured virtual gift may be removed from the three-dimensional live-action image.
In addition, if the anchor user has not captured the virtual gift by the time the virtual gift finishes moving, the virtual gift is removed from the three-dimensional live-action image after it moves to its last three-dimensional coordinate point.
Step S308, sending the capture result to a server so that the server performs fund settlement according to the capture result.
Specifically, after all the virtual gifts on the screen disappear, a capture result may be sent to the server, where the capture result includes identification information of the virtual gifts captured by the anchor user, identification information of the anchor user, and identification information of the viewers who gave the corresponding virtual gifts, so that the server performs fund settlement according to the capture result. For example, the amount of the virtual gift is deducted from the account of the corresponding viewer and added to the account of the corresponding anchor user. For virtual gifts not captured by the anchor user, the server does not actually charge the corresponding viewers.
In this embodiment, augmented reality technology is applied to the live broadcast scene and virtual gifts are received by recognizing the position where the anchor user's finger clicks, which makes gift receiving more engaging and enhances the interaction between users.
In order to better implement the above method, an embodiment of the present invention further provides a virtual gift receiving device. As shown in fig. 5, the device of this embodiment includes an acquisition unit 401, a receiving unit 402, a rendering unit 403, a tracking unit 404, and a determining unit 405, as follows:
an acquisition unit 401, configured to acquire a three-dimensional live-action image;
a receiving unit 402, configured to receive a virtual gift sent by a viewer client;
a rendering unit 403, configured to render the virtual gift into the three-dimensional live-action image, generate an augmented reality scene, and display the augmented reality scene;
a tracking unit 404 for tracking a gesture position of the anchor user;
a determining unit 405, configured to determine whether the anchor user captures the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift.
In one embodiment, as shown in fig. 6, the apparatus further comprises:
the generating unit 406 is configured to obtain a motion trajectory parameter of the virtual gift, and generate a gift display trajectory according to the motion trajectory parameter;
the rendering unit 403 is specifically configured to render the virtual gift into the three-dimensional live-action image along the gift display trajectory, generate an augmented reality scene, and display the augmented reality scene.
In one embodiment, as shown in fig. 6, the determining unit 405 includes:
a first judging subunit 4051, configured to judge whether the current sliding position of the anchor user's finger intersects the current display position of the virtual gift;
a first determining subunit 4052, configured to determine that the anchor user captured the virtual gift when the current sliding position of the anchor user's finger intersects the current display position of the virtual gift.
In one embodiment, as shown in fig. 6, the determining unit 405 includes:
a second judging subunit 4054, configured to judge whether the current click position of the anchor user's finger intersects the current display position of the virtual gift;
a second determining subunit 4055, configured to determine that the anchor user captured the virtual gift when the current click position of the anchor user's finger intersects the current display position of the virtual gift.
In one embodiment, as shown in fig. 6, the apparatus further comprises:
a removing unit 407, configured to remove the virtual gift captured by the anchor user from the three-dimensional live-action image.
In one embodiment, as shown in fig. 6, the apparatus further comprises:
a sending unit 408, configured to send a capture result to a server, where the capture result includes identification information of a virtual gift captured by the anchor user, identification information of the anchor user, and identification information of a viewer presenting the corresponding virtual gift, so that the server performs fund settlement according to the capture result.
In one embodiment, as shown in fig. 6, the apparatus further comprises:
a pushing unit 409, configured to push a function notification message to the viewer client through a server, where the function notification message is used to notify a viewer of the viewer client that the anchor user supports a virtual gift function.
It should be noted that, when the virtual gift receiving device provided in the foregoing embodiment implements virtual gift receiving, only the division of the above functional modules is used for illustration, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the virtual gift receiving device and the virtual gift receiving method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
In this embodiment, during live broadcasting, the rendering unit renders the virtual gift sent by the viewer client and received by the receiving unit into the three-dimensional live-action image acquired by the acquisition unit, and generates and displays an augmented reality scene to give the anchor user a strong sense of immersion. The tracking unit then tracks the gesture position of the anchor user, and the determining unit determines whether the anchor user captures the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift; that is, the anchor user must make certain gesture actions to receive the gift. In this embodiment, augmented reality technology is applied to the live broadcast scene and virtual gifts are received through gesture position recognition, which makes gift receiving more engaging and enhances the interaction between users.
Accordingly, an embodiment of the present invention further provides a virtual gift receiving apparatus, which, as shown in fig. 7, may include a Radio Frequency (RF) circuit 501, a memory 502 including one or more computer-readable storage media, an input unit 503, a display unit 504, a sensor 505, an audio circuit 506, a Wireless Fidelity (WiFi) module 507, a processor 508 including one or more processing cores, and a power supply 509. Those skilled in the art will appreciate that the device configuration shown in fig. 7 does not constitute a limitation of the device, which may include more or fewer components than shown, combine some components, or arrange the components differently. Wherein:
the RF circuit 501 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, for receiving downlink information of a base station and then sending the received downlink information to the one or more processors 508 for processing; in addition, data relating to uplink is transmitted to the base station. In general, RF circuit 501 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 501 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 502 may be used to store software programs and modules, and the processor 508 executes various functional applications and data processing by operating the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 502 may also include a memory controller to provide the processor 508 and the input unit 503 access to the memory 502.
The input unit 503 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, in one particular embodiment, the input unit 503 may include a touch-sensitive surface as well as other input devices. The touch-sensitive surface, also referred to as a touch display screen or a touch pad, may collect touch operations by a user (e.g., operations by a user on or near the touch-sensitive surface using a finger, a stylus, or any other suitable object or attachment) thereon or nearby, and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 508, and can receive and execute commands sent by the processor 508. In addition, touch sensitive surfaces may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. The input unit 503 may include other input devices in addition to the touch-sensitive surface. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 504 may be used to display information input by or provided to the user and various graphical user interfaces of the terminal, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 504 may include a Display panel, and optionally, the Display panel may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch-sensitive surface may overlay the display panel, and when a touch operation is detected on or near the touch-sensitive surface, the touch operation is transmitted to the processor 508 to determine the type of touch event, and then the processor 508 provides a corresponding visual output on the display panel according to the type of touch event. Although in FIG. 7 the touch-sensitive surface and the display panel are two separate components to implement input and output functions, in some embodiments the touch-sensitive surface may be integrated with the display panel to implement input and output functions.
The device may also include at least one sensor 505, such as light sensors, motion sensors, and other sensors. In particular, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel according to the brightness of ambient light, and a proximity sensor that may turn off the display panel and/or backlight when the device is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured in the terminal, detailed description is omitted here.
Audio circuitry 506, a speaker, and a microphone may provide an audio interface between the user and the terminal. The audio circuit 506 may transmit the electrical signal converted from the received audio data to a speaker, and convert the electrical signal into a sound signal for output; on the other hand, the microphone converts the collected sound signal into an electric signal, which is received by the audio circuit 506 and converted into audio data, which is then processed by the audio data output processor 508, and then sent to, for example, another device via the RF circuit 501, or output to the memory 502 for further processing. The audio circuit 506 may also include an earbud jack to provide communication of peripheral headphones with the device.
WiFi belongs to short-distance wireless transmission technology, and the device can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 507, and provides wireless broadband internet access for the user. Although fig. 7 shows the WiFi module 507, it is understood that it does not belong to the essential constitution of the device, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 508 is a control center of the apparatus, connects various parts of the entire apparatus using various interfaces and lines, performs various functions of the terminal and processes data by operating or executing software programs and/or modules stored in the memory 502 and calling data stored in the memory 502, thereby performing overall monitoring of the apparatus. Optionally, processor 508 may include one or more processing cores; preferably, the processor 508 may integrate an application processor, which primarily handles operating systems, user interfaces, application programs, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 508.
The device also includes a power supply 509 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 508 via a power management system to manage charging, discharging, and power consumption management functions via the power management system. The power supply 509 may also include any component such as one or more dc or ac power sources, recharging systems, power failure detection circuitry, power converters or inverters, power status indicators, and the like.
Although not shown, the device may further include a camera, a bluetooth module, etc., which will not be described herein. Specifically, in this embodiment, the processor 508 in the apparatus loads the executable file corresponding to the process of one or more application programs into the memory 502 according to the following instructions, and the processor 508 runs the application programs stored in the memory 502, thereby implementing various functions:
collecting a three-dimensional live-action image;
receiving a virtual gift sent by a viewer client;
rendering the virtual gift into the three-dimensional live-action image, generating and displaying an augmented reality scene;
tracking gesture positions of an anchor user;
determining whether the anchor user captured the virtual gift based on the anchor user's gesture position and the display position of the virtual gift.
In some embodiments, after receiving the virtual gift sent by the viewer client, the processor 508 is further configured to perform the following steps:
obtaining the motion trail parameters of the virtual gift, and generating a gift display trail according to the motion trail parameters;
the processor 508 renders the virtual gift into the three-dimensional live-action image along the gift display trajectory, generates an augmented reality scene, and displays the scene.
In some embodiments, the gesture position of the anchor user is a finger sliding position of the anchor user, and when determining whether the anchor user captured the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift, processor 508 is specifically configured to perform the following steps:
judging whether the current sliding position of the anchor user's finger intersects the current display position of the virtual gift;
determining that the anchor user captured the virtual gift when the current sliding position of the anchor user's finger intersects the current display position of the virtual gift.
In some embodiments, the gesture position of the anchor user is a finger click position of the anchor user, and when determining whether the anchor user captured the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift, processor 508 is specifically configured to perform the following steps:
judging whether the current click position of the anchor user's finger intersects the current display position of the virtual gift;
determining that the anchor user captured the virtual gift when a current click position of a finger of the anchor user intersects a current display position of the virtual gift.
In some embodiments, after determining that the anchor user captured the virtual gift, processor 508 is further configured to perform the steps of:
removing the virtual gift captured by the anchor user from the three-dimensional live-action image.
In some embodiments, the processor 508 is further configured to perform the following steps:
and sending a capturing result to a server, wherein the capturing result comprises the identification information of the virtual gift captured by the anchor user, the identification information of the anchor user and the identification information of the audience presenting the corresponding virtual gift, so that the server performs fund settlement according to the capturing result.
In some embodiments, prior to receiving the virtual gift gifted by the viewer, the processor 508 is further configured to:
pushing, by a server, a function notification message to the viewer client, the function notification message for notifying a viewer of the viewer client that the anchor user supports a virtual gift function.
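The function notification message pushed through the server could be as simple as a flag keyed by the anchor's identifier; the message shape below is an assumption for illustration:

```python
def build_function_notification(anchor_id):
    """Build the message the server pushes to viewer clients to signal that
    this anchor supports the virtual gift capture function. The field names
    are illustrative assumptions."""
    return {
        "type": "function_notification",
        "anchor_id": anchor_id,
        "supports_virtual_gift": True,
    }
```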
During live broadcasting, the virtual gift receiving device of this embodiment renders the virtual gift received from the viewer client into the three-dimensional live-action image of the anchor client, and generates and displays an augmented reality scene, bringing a strong sense of immersion to the anchor user. The device then tracks the gesture position of the anchor user and determines, according to the gesture position of the anchor user and the display position of the virtual gift, whether the anchor user has captured the virtual gift; that is, the anchor user must perform certain gesture actions to receive the gift. By applying augmented reality technology to the live broadcast scene and receiving virtual gifts through gesture position recognition, the device of this embodiment increases the interest of gift receiving and enhances interaction between users.
An embodiment of the present application further provides a storage device, where the storage device stores a computer program, and when the computer program runs on a computer, the computer is caused to execute the virtual gift receiving method in any of the above embodiments, for example: collecting a three-dimensional live-action image; receiving a virtual gift sent by a viewer client; rendering the virtual gift into the three-dimensional live-action image, and generating and displaying an augmented reality scene; tracking a gesture position of an anchor user; and determining whether the anchor user captured the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift.
In the embodiment of the present application, the storage device may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
Each of the foregoing embodiments is described with its own emphasis; for parts that are not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
It should be noted that, for the virtual gift receiving method of the embodiments of the present application, a person of ordinary skill in the art will understand that all or part of the process of implementing the virtual gift receiving method may be completed by controlling related hardware through a computer program. The computer program may be stored in a computer-readable storage medium, such as a memory of an electronic device, and executed by at least one processor in the electronic device, and the execution process may include the process of the embodiment of the virtual gift receiving method. The storage medium may be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
For the virtual gift receiving device of the embodiments of the present application, the functional modules may be integrated into one processing chip, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a standalone product, it may also be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The virtual gift receiving method, device, and storage device provided in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are merely intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (15)

1. A virtual gift receiving method, comprising:
collecting a three-dimensional live-action image;
receiving a virtual gift sent by a viewer client;
rendering the virtual gift into the three-dimensional live-action image, and generating and displaying an augmented reality scene;
tracking a gesture position of an anchor user;
determining whether the anchor user captured the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift in the three-dimensional live-action image.
2. The virtual gift receiving method of claim 1, further comprising, after receiving the virtual gift sent by the viewer client:
obtaining motion trajectory parameters of the virtual gift, and generating a gift display trajectory according to the motion trajectory parameters;
rendering the virtual gift into the three-dimensional live-action image along the gift display trajectory, and generating and displaying an augmented reality scene.
3. The virtual gift receiving method of claim 2, wherein the gesture position of the anchor user is a finger sliding position of the anchor user, and determining whether the anchor user captured the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift comprises:
determining whether the current sliding position of the anchor user's finger intersects the current display position of the virtual gift;
determining that the anchor user captured the virtual gift when the current sliding position of the anchor user's finger intersects the current display position of the virtual gift.
4. The virtual gift receiving method of claim 2, wherein the gesture position of the anchor user is a finger click position of the anchor user, and determining whether the anchor user captured the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift comprises:
determining whether the current click position of the anchor user's finger intersects the current display position of the virtual gift;
determining that the anchor user captured the virtual gift when the current click position of the anchor user's finger intersects the current display position of the virtual gift.
5. The virtual gift receiving method of claim 3 or 4, further comprising, after determining that the anchor user captured the virtual gift:
removing the virtual gift captured by the anchor user from the three-dimensional live-action image.
6. The virtual gift receiving method of claim 1, further comprising:
sending a capture result to a server, wherein the capture result comprises identification information of the virtual gift captured by the anchor user, identification information of the anchor user, and identification information of the viewer who gifted the corresponding virtual gift, so that the server performs fund settlement according to the capture result.
7. The virtual gift receiving method of claim 1, further comprising, before receiving the virtual gift sent by the viewer client:
pushing, by a server, a function notification message to the viewer client, the function notification message being used to notify a viewer of the viewer client that the anchor user supports a virtual gift function.
8. A virtual gift receiving device, comprising:
an acquisition unit, configured to collect a three-dimensional live-action image;
a receiving unit, configured to receive a virtual gift sent by a viewer client;
a rendering unit, configured to render the virtual gift into the three-dimensional live-action image, and to generate and display an augmented reality scene;
a tracking unit, configured to track a gesture position of an anchor user;
a determining unit, configured to determine whether the anchor user captured the virtual gift according to the gesture position of the anchor user and the display position of the virtual gift in the three-dimensional live-action image.
9. The virtual gift receiving device of claim 8, wherein the device further comprises:
a generating unit, configured to obtain motion trajectory parameters of the virtual gift and generate a gift display trajectory according to the motion trajectory parameters;
wherein the rendering unit is specifically configured to render the virtual gift into the three-dimensional live-action image along the gift display trajectory, and to generate and display an augmented reality scene.
10. The virtual gift receiving device of claim 9, wherein the gesture position of the anchor user is a finger sliding position of the anchor user, and the determining unit comprises:
a first judging subunit, configured to determine whether the current sliding position of the anchor user's finger intersects the current display position of the virtual gift;
a first determining subunit, configured to determine that the anchor user captured the virtual gift when the current sliding position of the anchor user's finger intersects the current display position of the virtual gift.
11. The virtual gift receiving device of claim 9, wherein the gesture position of the anchor user is a finger click position of the anchor user, and the determining unit comprises:
a second judging subunit, configured to determine whether the current click position of the anchor user's finger intersects the current display position of the virtual gift;
a second determining subunit, configured to determine that the anchor user captured the virtual gift when the current click position of the anchor user's finger intersects the current display position of the virtual gift.
12. The virtual gift receiving device of claim 10 or 11, wherein the device further comprises:
a removing unit, configured to remove the virtual gift captured by the anchor user from the three-dimensional live-action image.
13. The virtual gift receiving device of claim 8, wherein the device further comprises:
a sending unit, configured to send a capture result to a server, wherein the capture result comprises identification information of the virtual gift captured by the anchor user, identification information of the anchor user, and identification information of the viewer who gifted the corresponding virtual gift, so that the server performs fund settlement according to the capture result.
14. The virtual gift receiving device of claim 8, wherein the device further comprises:
a push unit, configured to push a function notification message to the viewer client through a server, the function notification message being used to notify a viewer of the viewer client that the anchor user supports a virtual gift function.
15. A storage device storing a plurality of instructions, the instructions being adapted to be loaded by a processor to perform the method of any one of claims 1 to 7.
CN201711311512.XA 2017-12-11 2017-12-11 Virtual gift receiving method and device and storage equipment Active CN109905754B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711311512.XA CN109905754B (en) 2017-12-11 2017-12-11 Virtual gift receiving method and device and storage equipment

Publications (2)

Publication Number Publication Date
CN109905754A CN109905754A (en) 2019-06-18
CN109905754B (en) 2021-05-07

Family

ID=66942667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711311512.XA Active CN109905754B (en) 2017-12-11 2017-12-11 Virtual gift receiving method and device and storage equipment

Country Status (1)

Country Link
CN (1) CN109905754B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110703913B (en) * 2019-09-27 2023-09-26 腾讯科技(深圳)有限公司 Object interaction method and device, storage medium and electronic device
CN110837300B (en) * 2019-11-12 2020-11-27 北京达佳互联信息技术有限公司 Virtual interaction method and device, electronic equipment and storage medium
CN112752162B (en) * 2020-02-17 2024-03-15 腾讯数码(天津)有限公司 Virtual article presenting method, device, terminal and computer readable storage medium
CN111314730A (en) * 2020-02-25 2020-06-19 广州华多网络科技有限公司 Virtual resource searching method, device, equipment and storage medium for live video
CN111277890B (en) * 2020-02-25 2023-08-29 广州方硅信息技术有限公司 Virtual gift acquisition method and three-dimensional panoramic living broadcast room generation method
CN111586426B (en) * 2020-04-30 2022-08-09 广州方硅信息技术有限公司 Panoramic live broadcast information display method, device, equipment and storage medium
CN111541909A (en) * 2020-04-30 2020-08-14 广州华多网络科技有限公司 Panoramic live broadcast gift delivery method, device, equipment and storage medium
CN111541932B (en) * 2020-04-30 2022-04-12 广州方硅信息技术有限公司 User image display method, device, equipment and storage medium for live broadcast room
US11233973B1 (en) * 2020-07-23 2022-01-25 International Business Machines Corporation Mixed-reality teleconferencing across multiple locations
CN112383786B (en) * 2020-11-03 2023-03-07 广州繁星互娱信息科技有限公司 Live broadcast interaction method, device, system, terminal and storage medium
CN112367534B (en) * 2020-11-11 2023-04-11 成都威爱新经济技术研究院有限公司 Virtual-real mixed digital live broadcast platform and implementation method
CN112437338B (en) * 2020-11-24 2022-01-04 腾讯科技(深圳)有限公司 Virtual resource transfer method, device, electronic equipment and storage medium
CN112533053B (en) * 2020-11-30 2022-08-23 北京达佳互联信息技术有限公司 Live broadcast interaction method and device, electronic equipment and storage medium
CN113179413A (en) * 2021-03-15 2021-07-27 北京城市网邻信息技术有限公司 Information processing method and device, electronic equipment and storage medium
CN113329234B (en) * 2021-05-28 2022-06-10 腾讯科技(深圳)有限公司 Live broadcast interaction method and related equipment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012120098A (en) * 2010-12-03 2012-06-21 Linkt Co Ltd Information provision system
CN103941851A (en) * 2013-01-23 2014-07-23 青岛海信电器股份有限公司 Method and system for achieving virtual touch calibration
CN103246351A (en) * 2013-05-23 2013-08-14 刘广松 User interaction system and method
JP2016024682A (en) * 2014-07-22 2016-02-08 トモヤ 高柳 Content distribution system
CN106125937A (en) * 2016-06-30 2016-11-16 联想(北京)有限公司 A kind of information processing method and processor
CN106488327A (en) * 2016-09-21 2017-03-08 广州华多网络科技有限公司 Electronics present sends control method, device and its mobile terminal with charge free
CN106411877A (en) * 2016-09-23 2017-02-15 武汉斗鱼网络科技有限公司 Method and system for implementing gift giving in video live broadcasting process on basis of AR (Augmented Reality) technology
CN106981015A (en) * 2017-03-29 2017-07-25 武汉斗鱼网络科技有限公司 The implementation method of interactive present
CN107015655A (en) * 2017-04-11 2017-08-04 苏州和云观博数字科技有限公司 Museum virtual scene AR experiences eyeglass device and its implementation
CN107222754A (en) * 2017-05-27 2017-09-29 武汉斗鱼网络科技有限公司 Present gives Notification Method, device and server

Also Published As

Publication number Publication date
CN109905754A (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN109905754B (en) Virtual gift receiving method and device and storage equipment
CN107479784B (en) Expression display method and device and computer readable storage medium
CN108737904B (en) Video data processing method and mobile terminal
CN106803993B (en) Method and device for realizing video branch selection playing
CN111491197B (en) Live content display method and device and storage medium
CN106303733B (en) Method and device for playing live special effect information
CN106254910B (en) Method and device for recording image
CN108513671B (en) Display method and terminal for 2D application in VR equipment
CN110673770B (en) Message display method and terminal equipment
CN108920069B (en) Touch operation method and device, mobile terminal and storage medium
US20160133006A1 (en) Video processing method and apparatus
CN107908765B (en) Game resource processing method, mobile terminal and server
CN108958587B (en) Split screen processing method and device, storage medium and electronic equipment
CN105828160A (en) Video play method and apparatus
CN110087149A (en) A kind of video image sharing method, device and mobile terminal
CN111601139A (en) Information display method, electronic device, and storage medium
CN112044065A (en) Virtual resource display method, device, equipment and storage medium
CN112435069A (en) Advertisement putting method and device, electronic equipment and storage medium
CN109976629A (en) Image display method, terminal and mobile terminal
CN115390707A (en) Sharing processing method and device, electronic equipment and storage medium
CN110908757B (en) Method and related device for displaying media content
CN110750318A (en) Message reply method and device and mobile terminal
CN105513098B (en) Image processing method and device
CN111178306A (en) Display control method and electronic equipment
CN110888572A (en) Message display method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant