CN117636528A - Voting processing method, device, equipment and medium based on virtual reality space - Google Patents
Voting processing method, device, equipment and medium based on virtual reality space
- Publication number
- CN117636528A CN117636528A CN202210962527.7A CN202210962527A CN117636528A CN 117636528 A CN117636528 A CN 117636528A CN 202210962527 A CN202210962527 A CN 202210962527A CN 117636528 A CN117636528 A CN 117636528A
- Authority
- CN
- China
- Prior art keywords
- voting
- object information
- virtual reality
- information
- candidate
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C13/00—Voting apparatus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Abstract
Embodiments of the present disclosure relate to a voting processing method, device, equipment, and medium based on a virtual reality space. The method includes the following steps: while a target video is displayed in a virtual reality space, acquiring a plurality of corresponding candidate voting object information in response to an opening instruction of a voting stage; rendering and displaying the plurality of candidate voting object information at a target spatial position in the virtual reality space; in response to detecting a voting confirmation operation, updating the current vote-obtaining information of the selected candidate voting object information; and in response to acquiring an ending instruction of the voting stage, determining target voting object information among the plurality of candidate voting object information according to the corresponding current vote-obtaining information, and controlling the target voting object information to be displayed in a preset vote-winning state. Embodiments of the present disclosure thereby realize voting processing in the virtual reality space and expand the user's VR operation experience.
Description
Technical Field
The present disclosure relates to the technical field of virtual reality, and in particular to a voting processing method, device, equipment, and medium based on a virtual reality space.
Background
With the continuous development of social productivity and science and technology, demand for virtual reality (VR) technology from various industries keeps growing, and VR technology has made tremendous progress, gradually becoming a new field of science and technology.
At present, VR technology allows a user to watch virtual live broadcasts and other video content; for example, after putting on a VR device, the user enters a virtual live broadcast venue and watches the live content as if present in person.
In the prior art, however, VR only plays back the live video; it cannot support the other interactive functions that accompany live video in the real world. For example, it cannot meet the interaction requirement of letting users vote while a live video plays, as they can in the real world.
Disclosure of Invention
To solve, or at least partially solve, the above technical problems, the present disclosure provides a voting processing method, device, equipment, and medium based on a virtual reality space, which realize voting processing in the virtual reality space and expand the user's VR operation experience.
An embodiment of the present disclosure provides a voting processing method based on a virtual reality space, including the following steps: while a target video is displayed in a virtual reality space, acquiring a plurality of corresponding candidate voting object information in response to an opening instruction of a voting stage; determining a target spatial position in the virtual reality space, and rendering and displaying the plurality of candidate voting object information at the target spatial position; in response to detecting a voting confirmation operation, updating the current vote-obtaining information of the selected candidate voting object information; and in response to acquiring an ending instruction of the voting stage, determining target voting object information among the plurality of candidate voting object information according to the corresponding current vote-obtaining information, and controlling the target voting object information to be displayed in a preset vote-winning state.
The embodiment of the disclosure also provides a voting processing device based on the virtual reality space, which comprises: the acquisition module is used for responding to the starting instruction of the voting stage when the target video is displayed in the virtual reality space, and acquiring a plurality of corresponding candidate voting object information; the display module is used for determining a target space position in the virtual reality space, and rendering and displaying the plurality of candidate voting object information at the target space position; the vote obtaining updating module is used for updating the current vote obtaining information of the selected candidate voting object information in response to the detection of the voting confirmation operation; and the voting result processing module is used for determining target voting object information in the plurality of candidate voting object information according to the corresponding current voting information in response to the acquired instruction for ending the voting stage, and controlling the target voting object information to be displayed in a preset voting winning state.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing the processor-executable instructions; the processor is configured to read the executable instructions from the memory and execute the instructions to implement a virtual reality space-based voting processing method according to an embodiment of the disclosure.
The embodiments of the present disclosure also provide a computer-readable storage medium storing a computer program for executing the virtual reality space-based voting processing method as provided by the embodiments of the present disclosure.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
According to the voting processing scheme based on a virtual reality space provided by the embodiments of the present disclosure, while a target video is displayed in the virtual reality space, a plurality of corresponding candidate voting object information is acquired in response to an opening instruction of a voting stage; a target spatial position is determined in the virtual reality space, and the plurality of candidate voting object information is rendered and displayed at that position. In response to detecting a voting confirmation operation, the candidate voting object information corresponding to the operation is determined and its current vote-obtaining information is updated. Further, in response to an ending instruction of the voting stage, target voting object information is determined among the plurality of candidate voting object information according to the corresponding current vote-obtaining information, and the target voting object information is controlled to be displayed in a preset vote-winning state. Voting processing in the virtual reality space is thereby realized, and the user's VR operation experience is expanded.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a schematic view of an application scenario of a virtual reality device according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a voting processing method based on a virtual reality space according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a voting animation provided by an embodiment of the present disclosure;
fig. 4 is an application scenario schematic diagram of a voting confirmation operation provided in an embodiment of the present disclosure;
fig. 5 is a schematic diagram of a display scenario of current vote-obtaining information provided by an embodiment of the present disclosure;
FIG. 6 is a schematic view of a display scenario of a voting barrage model provided by an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a countdown display scenario provided in an embodiment of the present disclosure;
fig. 8 is a schematic diagram of a voting result display scenario provided in an embodiment of the present disclosure;
fig. 9 is a schematic view of a voting selection scenario provided by an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of another voting choice scenario provided by an embodiment of the present disclosure;
FIG. 11 is a schematic diagram of a voting choice state provided by an embodiment of the present disclosure;
FIG. 12 is a schematic diagram of display positions of a plurality of candidate voting models provided by embodiments of the present disclosure;
FIG. 13 is a schematic diagram of a target spatial location according to an embodiment of the disclosure;
fig. 14 is a schematic structural diagram of a voting processing device based on a virtual reality space according to an embodiment of the present disclosure;
fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments. Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that the modifiers "a," "an," and "a plurality of" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Some technical concepts or noun concepts referred to herein are described in association with:
the virtual reality device, the terminal for realizing the virtual reality effect, may be provided in the form of glasses, a head mounted display (Head Mount Display, HMD), or a contact lens for realizing visual perception and other forms of perception, but the form of the virtual reality device is not limited to this, and may be further miniaturized or enlarged as needed.
The virtual reality devices described in embodiments of the present disclosure may include, but are not limited to, the following types:
a computer-side virtual reality (PCVR) device performs related computation of a virtual reality function and data output by using a PC side, and an external computer-side virtual reality device realizes a virtual reality effect by using data output by the PC side.
The mobile virtual reality device supports setting up a mobile terminal (such as a smart phone) in various manners (such as a head-mounted display provided with a special card slot), performing related calculation of a virtual reality function by the mobile terminal through connection with the mobile terminal in a wired or wireless manner, and outputting data to the mobile virtual reality device, for example, watching a virtual reality video through an APP of the mobile terminal.
The integrated virtual reality device has a processor for performing the calculation related to the virtual function, and thus has independent virtual reality input and output functions, and is free from connection with a PC or a mobile terminal, and has high degree of freedom in use.
A virtual object is an object that interacts in a virtual scene: an object that is stationary, moves, or performs various actions in the virtual scene, such as the virtual person corresponding to a user in a live scene. It is controlled by a user or by a robot program (e.g., an artificial-intelligence-based robot program).
As shown in fig. 1, HMDs are relatively light, ergonomically comfortable, and provide high-resolution content with low latency. A sensor for detecting posture (such as a nine-axis sensor) is arranged in the virtual reality device to detect posture changes of the device in real time. When a user wears the device and the posture of the user's head changes, the real-time posture of the head is transmitted to the processor, which calculates the gaze point of the user's line of sight in the virtual environment. From the gaze point, the image within the user's gaze range (i.e., the virtual field of view) in the three-dimensional model of the virtual environment is computed and displayed on the display screen, giving the user the feeling of watching in a real environment.
In this embodiment, when a user wears the HMD device and opens a predetermined application, for example a live video application, the HMD device runs the corresponding virtual scene. The virtual scene may be a simulation of the real world, a semi-simulated scene, or a purely virtual scene; it may be two-dimensional, 2.5-dimensional, or three-dimensional, and the embodiments of the present application do not limit its dimensionality. For example, the virtual scene may include characters, sky, land, sea, and the like; the land may include environmental elements such as deserts and cities. The user may control a virtual object to move in the virtual scene, and may also interact with the controls, models, presented content, characters, and so on in the virtual scene by means of a handle device, bare-hand gestures, and the like.
To solve the above problems and realize an interactive voting function for users while videos such as an online concert are displayed in a virtual reality space, the embodiments of the present disclosure provide a voting processing method based on a virtual reality space. The method is described below with reference to specific embodiments.
Fig. 2 is a flow chart of a voting processing method based on a virtual reality space according to an embodiment of the disclosure, where the method may be performed by a voting processing device based on a virtual reality space, and the device may be implemented by using software and/or hardware, and may be generally integrated in an electronic device. As shown in fig. 2, the method includes:
Step 201: when a target video is displayed in a virtual reality space, corresponding candidate voting object information is acquired in response to an opening instruction of a voting stage.
The target video is any video currently played in the virtual scene, for example, may be an online concert, a live video, and the like.
It will be appreciated that, depending on scene requirements, a voting event may be initiated while the target video is being watched. Therefore, to implement the online voting interaction function, the method responds to the opening instruction of the voting stage so that the subsequent voting display processing can be carried out.
In one embodiment of the present disclosure, when a target video is displayed in a virtual reality space, a plurality of corresponding candidate voting object information is acquired in response to an opening instruction of the voting stage, where the opening instruction may be triggered according to scene needs. For example, in a live video playing scene, when it is detected that the host user performs a preset vote-opening action, the opening instruction of the voting stage is obtained; for another example, in the live scene, when the live broadcast time reaches a preset voting time, the opening instruction of the voting stage is obtained.
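Both triggers can be expressed as a minimal sketch, assuming a simple per-frame check; the action name and scheduled time below are illustrative assumptions, not values from this disclosure.

```python
# A minimal sketch of the "open voting stage" triggers described above.
# VOTE_OPEN_ACTION and SCHEDULED_VOTE_TIME_S are hypothetical values.
VOTE_OPEN_ACTION = "raise_hand"   # assumed preset host action
SCHEDULED_VOTE_TIME_S = 1200.0    # assumed preset voting time, seconds into the stream

def should_open_voting(host_action: str | None, stream_time_s: float) -> bool:
    """Fire when the host performs the preset action or the preset time arrives."""
    if host_action == VOTE_OPEN_ACTION:
        return True
    return stream_time_s >= SCHEDULED_VOTE_TIME_S
```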
The candidate voting object information includes, but is not limited to, one or more of a voting object image, a voting object name, voting object description information, and the like, where the voting object image may be in 2D or 3D form. The image intuitively shows the user what each candidate voting object is.
Step 202, determining a target space position in the virtual reality space, and rendering and displaying a plurality of candidate voting object information at the target space position.
In one embodiment of the present disclosure, a target spatial position is determined in a virtual reality space, and a plurality of candidate voting object information is rendered and displayed at the target spatial position, so that a corresponding user can see the plurality of candidate voting object information in the virtual reality device, thereby facilitating further execution of voting interaction operations and the like.
The target spatial position is generally within the viewing field of the user, and in different application scenarios, the manner of determining the target spatial position is different, and specific reference may be made to the subsequent embodiments, which are not described herein.
In one embodiment of the present disclosure, to make voting more engaging, before the plurality of candidate voting object information is rendered and displayed at the target spatial position, voting animation rendering information corresponding to the voting stage may also be acquired, and the corresponding voting animation may be played in the virtual reality space according to that information. The display path of the voting animation lies within the user's field of view.
It should be noted that, in this embodiment, the voting animation may be set according to the scene requirement, for example, in some possible embodiments, as shown in fig. 3, the voting animation may be an animation of an opening process of the music box, and after the music box is opened, a plurality of candidate voting object information is displayed.
In response to detecting the voting confirmation operation, the current vote-obtaining information of the selected candidate voting object information is updated 203.
The voting confirmation operation differs across application scenarios. In some possible examples, images of the user captured by a camera are recognized to obtain user gesture information; the user gesture information is matched against preset voting gesture information, and the voting confirmation operation is determined to be detected when the match succeeds. The user gesture information in this example may be one-hand or two-hand gesture information.
For example, as shown in fig. 4, when the preset voting gesture is a two-hand fist-making gesture, a hand image is captured, where the camera capturing the user's hand image may be located on the virtual reality device; when the two-hand fist-making gesture is recognized from the hand image, detection of the voting confirmation operation is confirmed.
In some possible examples, the voting control is preset at the virtual manipulation device (e.g., handle, etc.), and upon detecting that the voting control is triggered, a voting confirmation operation is detected.
In one embodiment of the present disclosure, upon detection of a voting confirmation operation, the current vote-obtaining information of the previously selected candidate voting object information is updated, where the current vote-obtaining information may be the total votes that the candidate voting object information has obtained up to the current moment. Each time a user is detected performing a voting confirmation operation, one unit vote is added to the total vote-obtaining information of the candidate voting object information. The device identifier of the virtual reality device performing each voting confirmation operation may also be acquired, so as to control the number of votes of each virtual reality device according to the device identifier, and so on.
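A minimal sketch of this tally update follows, assuming an in-memory store and a per-device vote cap; the cap value is an illustrative assumption, since the disclosure only says the device identifier is used to control the number of votes.

```python
# A minimal sketch of the vote-tally update described above.
from collections import defaultdict

MAX_VOTES_PER_DEVICE = 1  # assumed cap; the disclosure leaves the exact policy open

vote_totals: defaultdict[str, int] = defaultdict(int)      # candidate id -> total votes
votes_by_device: defaultdict[str, int] = defaultdict(int)  # device id -> votes cast

def confirm_vote(candidate_id: str, device_id: str) -> bool:
    """Add one unit vote unless the device has used up its quota."""
    if votes_by_device[device_id] >= MAX_VOTES_PER_DEVICE:
        return False
    votes_by_device[device_id] += 1
    vote_totals[candidate_id] += 1
    return True
```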
To make the current vote-obtaining information of each candidate voting object information more intuitive, the corresponding current vote-obtaining information can be displayed at an associated spatial position of each candidate voting object information. The current vote-obtaining information may be real-time; in a scenario where it is displayed according to a preset display period, it is to be understood as the vote-obtaining information as of the most recent display period.
The associated spatial position used to display the current vote-obtaining information can be set according to scene requirements, as long as the displayed information is visually associated with the corresponding candidate voting object information.
For example, as shown in fig. 5, an associated spatial position may be set below each candidate voting object information, the associated spatial position being in the same plane as the target spatial position at which the candidate voting object information is displayed, and only the value of the Y coordinate axis being different.
In the actual implementation process, the current vote-obtaining information includes one or more of characters, letters, and patterns; for example, with continued reference to fig. 5, it includes a progress bar identifying the number of votes, specific vote-count text information, and the like.
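The layout in fig. 5, where the associated position shares the candidate's plane and only the Y coordinate differs, can be sketched as follows; the offset value is an illustrative assumption.

```python
# A minimal sketch of placing the vote display directly below each candidate.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

VOTE_LABEL_Y_OFFSET = -0.3  # hypothetical offset, in scene units

def associated_position(candidate_pos: Vec3) -> Vec3:
    """Same plane as the candidate; only the Y coordinate differs."""
    return Vec3(candidate_pos.x, candidate_pos.y + VOTE_LABEL_Y_OFFSET, candidate_pos.z)
```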
In one embodiment of the present disclosure, to further enhance the sense of participation, after the candidate voting object information corresponding to the voting confirmation operation is determined, operation-initiating object information corresponding to the voting operation may be acquired. The operation-initiating object information may include the virtual user name of the wearer of the virtual reality device and indicates which user performed the voting operation. A voting barrage model is then generated from the operation-initiating object information and the candidate voting object information; the barrage model may be 2D or 3D, its style may be set according to scene requirements, and it contains both the operation-initiating object information and the candidate voting object information. The voting barrage model is displayed on the barrage layer of the target video.
The barrage layer of the target video may be an independent layer preset in front of the target video, and the voting barrage model is displayed at an idle position of that layer. In some possible embodiments, the virtual reality device that initiated a barrage model may highlight that model when displaying it, to indicate that it was initiated by the device's own user; the duration of the highlighting can be calibrated according to scene needs.
For example, as shown in fig. 6, after user A, corresponding to virtual reality device A, initiates a voting confirmation operation on candidate voting object B, a voting barrage model is generated. The barrage model is a 2D barrage component displaying "A cast a vote for B", and the barrage layer holding the component is displayed in front of the target video. In virtual reality device A, the component appears highlighted, so user A knows the vote succeeded, which improves the interaction experience of voting.
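A minimal sketch of generating such a barrage entry; the field names and display duration are illustrative assumptions, not part of this disclosure.

```python
# A minimal sketch of the voting barrage ("bullet comment") entry above.
from dataclasses import dataclass

@dataclass
class VotingBarrage:
    text: str
    highlight: bool           # True only on the device that initiated the vote
    duration_s: float = 5.0   # assumed display duration

def make_barrage(voter_name: str, candidate_name: str, is_initiator: bool) -> VotingBarrage:
    """E.g. make_barrage("A", "B", True).text == "A cast a vote for B"."""
    return VotingBarrage(f"{voter_name} cast a vote for {candidate_name}", is_initiator)
```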
Step 204, in response to obtaining the end instruction of the voting stage, determining target voting object information in the multiple candidate voting object information according to the corresponding current voting information, and controlling the target voting object information to be displayed in a preset voting winning state.
In one embodiment of the present disclosure, whether the voting stage has ended is also monitored. In some possible embodiments the voting stage has a preset duration; in this embodiment the remaining duration of the voting stage is tracked, and the ending instruction is obtained in response to detecting that the remaining duration has reached zero.
The duration of the voting stage may be calibrated according to the scene, for example 30 seconds. In this embodiment, to show the user how long the current voting stage lasts, a display position for a countdown layer is determined within the user's field of view in the virtual reality space, the countdown layer is rendered at that position, and the remaining duration is displayed in it as a countdown prompt.
For example, as shown in fig. 7, a countdown layer is displayed above the display of candidate voting object information in the virtual reality space, and the countdown information is displayed in the countdown layer.
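A minimal sketch of this remaining-duration check, assuming a monotonic clock and the 30-second example above.

```python
# A minimal sketch of the voting-stage countdown described above.
import time

VOTING_DURATION_S = 30.0  # the example duration given in the text

def remaining_duration(vote_start_s: float, now_s: float | None = None) -> float:
    now_s = time.monotonic() if now_s is None else now_s
    return max(0.0, VOTING_DURATION_S - (now_s - vote_start_s))

def voting_ended(vote_start_s: float) -> bool:
    """The ending instruction is issued once the remaining duration hits zero."""
    return remaining_duration(vote_start_s) == 0.0
```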
In one embodiment of the present disclosure, it is further detected whether a preset voting stage end operation is acquired, the voting stage end operation being initiated by a user having operation authority, and if the voting stage end operation is detected, an end instruction of the voting stage is acquired.
In one embodiment of the present disclosure, after the ending instruction of the voting stage is acquired, target voting object information is determined from the plurality of candidate voting object information according to the corresponding current vote-obtaining information; for example, the candidate voting object information with the highest current vote-obtaining information is determined as the target voting object information and displayed in the preset vote-winning state. That is, in this embodiment, the voting result at least includes the target voting object information in the preset vote-winning state.
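A minimal sketch of this selection; how ties are broken is not specified in the disclosure, so letting the first maximum win is an assumption.

```python
# A minimal sketch of the winner determination described above.
def determine_winner(vote_totals: dict[str, int]) -> str:
    """Candidate with the most votes; on a tie, the first maximum wins."""
    return max(vote_totals, key=vote_totals.get)

# usage: determine_winner({"A": 12, "B": 40, "C": 7}) returns "B"
```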
In some possible embodiments, a voting winning identifier may be displayed in a region where the target voting object information is located to indicate the target voting object information to vote winning, where the voting winning identifier includes any style such as characters, patterns, and animations, and is not listed here. For example, as shown in fig. 8, the vote winning flag may be a pattern of "/" symbols, or the like.
In some possible embodiments, the target voting object information may be displayed in a highlighted form to indicate that the target voting object information votes winning, or the like.
In one embodiment of the present disclosure, after the voting stage starts, that is, after the voting-stage opening instruction is received, if a viewing instruction for the target video is received, it may be determined whether the remaining duration of the current voting stage is greater than a preset duration threshold, where the threshold may be, for example, 3 seconds. Generally, if the remaining duration is below this threshold, accepting a vote from the newly arrived user could put pressure on the calculation of the voting result and delay its display.
In this case, if the remaining duration of the current voting stage is not greater than the preset duration threshold, the newly arrived user is not allowed to participate in the voting; instead, after the target voting object information is determined among the plurality of candidate voting object information according to the corresponding current vote-obtaining information, only the voting result is displayed to that user. Moreover, if a viewing instruction for the target video is received after the voting-stage opening instruction, the corresponding voting animation can be displayed first, the plurality of candidate voting object information rendered after the animation ends, and the voting result displayed once the voting stage is finished.
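A minimal sketch of the participation check, using the 3-second example above.

```python
# A minimal sketch of the late-joiner rule described above.
MIN_REMAINING_TO_VOTE_S = 3.0  # the example threshold given in the text

def may_participate(remaining_s: float) -> bool:
    """A late joiner may vote only if enough of the voting stage remains."""
    return remaining_s > MIN_REMAINING_TO_VOTE_S
```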
In one embodiment of the present disclosure, the voting results in the above embodiments are displayed after the voting stage ends, and the results at least include the target voting object information displayed in the preset vote-winning state. In this embodiment, the display duration of the voting results is counted, and when it reaches a preset display-duration threshold, the results are no longer displayed; they may fade out, disappear directly, or be removed in any other form.
In summary, in the voting processing method based on a virtual reality space according to the embodiments of the present disclosure, when a target video is displayed in the virtual reality space, a plurality of corresponding candidate voting object information is acquired in response to an opening instruction of a voting stage; a target spatial position is determined in the virtual reality space, and the plurality of candidate voting object information is rendered and displayed there. In response to detecting a voting confirmation operation, the corresponding candidate voting object information is determined and its current vote-obtaining information is updated. Then, in response to acquiring an ending instruction of the voting stage, target voting object information is determined among the plurality of candidate voting object information according to the corresponding current vote-obtaining information and is controlled to be displayed in a preset vote-winning state. Voting processing in the virtual reality space is thereby realized, and the user's VR operation experience is expanded.
Since interaction in a virtual reality space differs from the real world, the way candidate voting object information is selected in the virtual reality space also differs. How candidate voting object information is selected before the voting confirmation operation is detected is described below with reference to examples.
In one embodiment of the present disclosure, determining candidate voting object information corresponding to a voting confirmation operation includes:
capturing a hand image of the user within the field of view in the virtual reality space, recognizing user gesture information from the hand image, and determining the indication direction corresponding to that gesture information. For example, when the user gesture information is determined to be the preset voting gesture information and the preset voting gesture is a one-hand gesture, the hand key-point positions of the gesture are recognized, the included angle between a first preset finger and a second preset finger is determined from those key points, and the center line (bisector) of that included angle is taken as the indication direction corresponding to the user gesture information.
For example, as shown in fig. 9, when the preset voting gesture information is the gesture shown in the figure, the first preset finger is the thumb and the second preset finger is the index finger. The direction L1 of the thumb and the direction L2 of the index finger are recognized, the angle between L1 and L2 is taken as the included angle between the two fingers, and the center line L3 of that angle is used as the indication direction of the user gesture information.
For another example, when the user gesture information is determined to be the preset voting gesture information and the preset voting gesture is a two-hand gesture, as shown in fig. 10, a first indication direction L4 of the left hand and a second indication direction L5 of the right hand are recognized, each determined in the same way as in the one-hand case above. The intersection point O2 of L4 and L5 is found, the midpoint O1 between the left and right hands is determined, and the direction from O1 toward O2 is taken as the indication direction of the user gesture information.
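Both rules reduce to simple vector arithmetic; a minimal 2D sketch follows (3D is analogous), with all names illustrative.

```python
# A minimal sketch of the two indication-direction rules described above.
import math

Vec = tuple[float, float]

def normalize(v: Vec) -> Vec:
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def one_hand_direction(l1: Vec, l2: Vec) -> Vec:
    """Bisector of the thumb direction L1 and index-finger direction L2."""
    a, b = normalize(l1), normalize(l2)
    return normalize((a[0] + b[0], a[1] + b[1]))

def two_hand_direction(o1: Vec, o2: Vec) -> Vec:
    """Direction from the mid-hand point O1 toward the ray intersection O2."""
    return normalize((o2[0] - o1[0], o2[1] - o1[1]))
```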
In this embodiment, to intuitively display the current indication direction, an indication model corresponding to the indication direction may be displayed; the indication model may be a ray trajectory model or the like, the ray trajectory indicating the direction corresponding to the user gesture information.
Further, after determining the indication direction corresponding to the gesture information of the user, determining the candidate voting object information located in the indication direction as the selected candidate voting object information.
In one embodiment of the present disclosure, when a user operates through a virtual manipulation device (such as a handle), an operation direction indication model may be displayed in a virtual reality space, the operation direction indication model being used to indicate a current indication direction of the virtual manipulation device, wherein the operation direction indication model includes, but is not limited to, a ray trace model, etc., wherein an indication direction of the operation direction indication model may be controlled by the user through the operation handle.
In this embodiment, based on the current indication position of the operation direction indication model displayed in the virtual reality space, the candidate voting object information located at that indication position is determined as the selected candidate voting object information.
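Whichever way the indication direction is obtained (gesture or handle), selecting the candidate it points at can be sketched as a simple angular test; a real engine would use its own raycast, and the tolerance angle is an illustrative assumption.

```python
# A minimal sketch of ray-based candidate selection, in 2D for brevity.
import math

Vec = tuple[float, float]
SELECT_TOLERANCE_RAD = math.radians(10)  # assumed selection cone half-angle

def select_candidate(origin: Vec, direction: Vec,
                     candidates: dict[str, Vec]) -> str | None:
    """Return the id of the candidate nearest the ray, if within tolerance.
    `direction` is assumed to be unit-length."""
    best_id, best_angle = None, SELECT_TOLERANCE_RAD
    for cid, pos in candidates.items():
        to_c = (pos[0] - origin[0], pos[1] - origin[1])
        dist = math.hypot(to_c[0], to_c[1]) or 1e-9
        cos_a = max(-1.0, min(1.0, (to_c[0] * direction[0] + to_c[1] * direction[1]) / dist))
        angle = math.acos(cos_a)
        if angle < best_angle:
            best_id, best_angle = cid, angle
    return best_id
```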
Because the user's interaction in the virtual reality space is not a direct contact operation, in order to intuitively show which candidate voting object the user currently has selected, and thus improve the interactive experience, in one embodiment of the present disclosure the selected candidate voting object information may further be controlled to be displayed in a preset voting-selected state, where the style of that state can be set according to scene requirements.
In some possible examples, if three candidate voting object information items are included, each comprises a corresponding rendering layer and a candidate object model. When the first candidate object model is selected, the layer containing it is highlighted, as shown in fig. 11, and the candidate voting object information in that layer is enlarged by a certain proportion, so that the user can intuitively see that the first candidate object model is in the voting-selected state.
In summary, according to the voting processing method based on the virtual reality space, interaction of voting stages is performed based on display logic of the virtual reality space, so that voting operation in the virtual reality space is achieved.
Based on the above embodiment, the target spatial position is determined in the virtual reality space, and the target spatial position is located in the field of view of the user, thereby ensuring the visual experience of the user when voting.
In one embodiment of the present disclosure, a reference spatial position at which the target video is displayed is determined; the reference spatial position can be understood as the display position of the target video in the virtual reality space. The target spatial position is then determined from the reference spatial position: both lie in the direction of the user's line of sight, and the distance between the target spatial position and the user's eyes is smaller than that between the reference spatial position and the user's eyes. That is, as shown in fig. 12, the plurality of candidate voting models is displayed in front of the target video; and since both positions lie along the user's line of sight, the display follows the user's gaze.
In one embodiment of the present disclosure, the target spatial position may also be determined directly according to the user's line of sight direction, the target spatial position being located in the user's line of sight direction, and the target spatial position being at a distance from the user's eye that is less than the distance from the reference spatial position.
How the target spatial position is determined from the user's line-of-sight direction is described below with reference to the embodiments.
In one embodiment of the present disclosure, the center point position of the virtual reality space is determined; it lies at the center of the virtual reality panorama space, and its location depends on the shape of that space. After the center point position is determined, a preset radius distance is obtained. This radius may be preset according to the size of the virtual reality panorama space and generally does not exceed the distance from the center point to the surface of the panorama space.
In this embodiment, because the preset radius distance generally does not exceed the distance from the center point to the surface of the panorama space, the position reached by extending from the center point along the user's current line-of-sight direction by the preset radius is taken as the target spatial position. On one hand, this guarantees that the target spatial position lies inside the virtual space, ensuring the display effect; on the other hand, it keeps the target spatial position consistent with the user's line of sight, so that the candidate object models displayed there face the user's gaze, improving the viewing experience.
For example, as shown in fig. 13, the virtual reality space is a box-shaped cube space, the preset radius distance is R1, and the center point position of the virtual reality panorama space is O1. After the user's current line-of-sight direction is determined, the position reached by extending from O1 along that direction by R1 is taken as the target spatial position.
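A minimal sketch of this rule: start at the space center O1 and extend along the gaze direction by the preset radius R1. Coordinates and radius below are illustrative assumptions.

```python
# A minimal sketch of the target-position rule described above.
import math

Vec3 = tuple[float, float, float]

def target_position(center: Vec3, gaze: Vec3, radius: float) -> Vec3:
    """center + radius * normalized(gaze); gaze need not be unit-length."""
    n = math.sqrt(sum(g * g for g in gaze))
    x, y, z = (c + radius * g / n for c, g in zip(center, gaze))
    return (x, y, z)

# usage: target_position((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), 2.5) -> (0.0, 0.0, -2.5)
```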
In summary, according to the voting processing method based on the virtual reality space, the target space position of rendering of the candidate voting object information in the virtual reality panoramic space can be flexibly determined according to scene requirements, the target space position is guaranteed to follow the current sight direction of the user, movement of the candidate voting object information along the current sight direction of the user is achieved, and the visual display effect of the candidate voting object information is guaranteed.
In order to achieve the above embodiment, the present disclosure further provides a voting processing device based on a virtual reality space.
Fig. 14 is a schematic structural diagram of a voting processing device based on a virtual reality space according to an embodiment of the present disclosure, where the device may be implemented by software and/or hardware, and may be generally integrated in an electronic device to perform voting processing based on the virtual reality space. As shown in fig. 14, the apparatus includes: an acquisition module 1410, a display module 1420, a vote update module 1430, and a voting result processing module 1440, wherein,
An obtaining module 1410, configured to obtain, when a target video is displayed in a virtual reality space, corresponding multiple candidate voting object information in response to an opening instruction of a voting stage;
a display module 1420 configured to determine a target spatial position in the virtual reality space, and render and display a plurality of candidate voting object information at the target spatial position;
a vote update module 1430 for updating current vote information of the selected candidate voting object information in response to detecting a vote confirmation operation;
the voting result processing module 1440 is configured to determine target voting object information from a plurality of candidate voting object information according to corresponding current voting information in response to acquiring an end instruction of the voting stage, and control the target voting object information to be displayed in a preset voting winning state.
The voting processing device based on the virtual reality space provided by the embodiment of the disclosure may execute the voting processing method based on the virtual reality space provided by any embodiment of the disclosure, and has corresponding functional modules and beneficial effects of the execution method, which are not described herein.
To achieve the above embodiments, the present disclosure also proposes a computer program product comprising a computer program/instruction which, when executed by a processor, implements the virtual reality space based voting processing method in the above embodiments.
Fig. 15 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Referring now in particular to fig. 15, a schematic diagram of an electronic device 1500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 1500 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 15 is only one example and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 15, the electronic device 1500 may include a processor (e.g., a central processor, a graphics processor, etc.) 1501, which may perform various suitable actions and processes according to programs stored in a Read Only Memory (ROM) 1502 or programs loaded from a memory 1508 into a Random Access Memory (RAM) 1503. In the RAM 1503, various programs and data necessary for the operation of the electronic device 1500 are also stored. The processor 1501, the ROM 1502, and the RAM 1503 are connected to each other through a bus 1504. An input/output (I/O) interface 1505 is also connected to bus 1504.
In general, the following devices may be connected to the I/O interface 1505: input devices 1506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1507 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; memory 1508 including, for example, magnetic tape, hard disk, etc.; and a communication device 1509. The communication means 1509 may allow the electronic device 1500 to communicate wirelessly or by wire with other devices to exchange data. While fig. 15 illustrates an electronic device 1500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program comprising program code for performing the method shown in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1509, or from the memory 1508, or from the ROM 1502. The above-described functions defined in the virtual reality space based voting processing method of the embodiments of the present disclosure are performed when the computer program is executed by the processor 1501.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
when a target video is displayed in a virtual reality space, responding to an opening instruction of a voting stage, acquiring a plurality of corresponding candidate voting object information, determining a target space position in the virtual reality space, rendering and displaying the plurality of candidate voting object information in the target space position, responding to detection of a voting confirmation operation, determining candidate voting object information corresponding to the voting confirmation operation, updating current voting information of the candidate voting object information corresponding to the voting confirmation operation, further responding to an ending instruction of the voting stage, determining target voting object information in the plurality of candidate voting object information according to the corresponding current voting object information, and controlling the target voting object information to be displayed in a preset voting winning state. Therefore, voting processing in the virtual reality space is realized, and VR operation experience of a user is expanded.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the remote-computer case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by software or by hardware. The names of the units do not, in some cases, constitute a limitation on the units themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is merely an illustration of the preferred embodiments of the present disclosure and of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of the features described above, and also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, embodiments formed by substituting the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are example forms of implementing the claims.
Claims (14)
1. A voting processing method based on a virtual reality space, characterized by comprising the following steps:
when a target video is displayed in a virtual reality space, in response to a start instruction of a voting stage, acquiring a plurality of corresponding candidate voting object information;
determining a target space position in the virtual reality space, and rendering and displaying the plurality of candidate voting object information at the target space position;
in response to detecting a voting confirmation operation, updating current vote-obtaining information of the selected candidate voting object information;
and in response to acquiring an end instruction of the voting stage, determining target voting object information among the plurality of candidate voting object information according to the corresponding current vote-obtaining information, and controlling the target voting object information to be displayed in a preset voting winning state.
2. The method of claim 1, further comprising, prior to rendering and displaying the plurality of candidate voting object information at the target space position:
obtaining voting animation rendering information corresponding to the voting stage;
and displaying a corresponding voting animation in the virtual reality space according to the voting animation rendering information.
3. The method of claim 1, further comprising, when rendering and displaying the plurality of candidate voting object information at the target space position:
displaying the corresponding current vote-obtaining information at an associated spatial position of each candidate voting object information.
4. The method of claim 1, further comprising, prior to said detecting a voting confirmation operation:
identifying image information of the user captured by a camera to obtain user gesture information;
and matching the user gesture information with preset voting gesture information, and determining that the voting confirmation operation is detected when the matching succeeds.
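As a non-limiting sketch of the gesture matching in claim 4, the snippet below compares a recognized gesture feature vector against a preset voting gesture using cosine similarity. The feature encoding, the similarity measure, and the threshold are assumptions for illustration; the claim does not prescribe a particular matching algorithm.

```python
# Illustrative gesture matching; encoding and threshold are assumed values.
import math

# Preset voting gesture (e.g., a thumbs-up) as a normalized feature vector
# of hypothetical finger-joint features.
PRESET_VOTING_GESTURE = [0.95, 0.10, 0.05, 0.05, 0.05]
MATCH_THRESHOLD = 0.92  # cosine-similarity cutoff (assumption)


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def is_voting_confirmation(user_gesture):
    """Return True if the user's gesture matches the preset voting gesture."""
    return cosine_similarity(user_gesture, PRESET_VOTING_GESTURE) >= MATCH_THRESHOLD


# Example: gesture features extracted from one camera frame.
print(is_voting_confirmation([0.93, 0.12, 0.06, 0.04, 0.07]))  # True
```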
5. The method of any one of claims 1-4, further comprising, prior to said detecting a voting confirmation operation:
and determining the selected candidate voting object information.
6. The method of claim 5, wherein said determining the selected candidate voting object information comprises:
determining an indication direction corresponding to gesture information of the user, and determining candidate voting object information located in the indication direction as the selected candidate voting object information; or,
detecting a current indication position of an operation direction indication model displayed in the virtual reality space, and determining candidate voting object information located at the current indication position as the selected candidate voting object information.
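The first alternative of claim 6 can be pictured as a ray test: cast a ray from the hand along the indication direction and select the candidate whose display position lies nearest that ray. The following sketch is purely illustrative; the function names and the nearest-to-ray criterion are assumptions, not the claimed method itself.

```python
# Hypothetical nearest-to-ray candidate selection.
import math


def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)


def distance_to_ray(origin, direction, point):
    """Perpendicular distance from `point` to the ray (origin, direction)."""
    d = normalize(direction)
    op = tuple(p - o for p, o in zip(point, origin))
    t = max(0.0, sum(a * b for a, b in zip(op, d)))  # clamp behind origin
    closest = tuple(o + t * c for o, c in zip(origin, d))
    return math.dist(point, closest)


def pick_candidate(hand_pos, pointing_dir, candidate_positions):
    """Return the index of the candidate nearest the indicated direction."""
    return min(
        range(len(candidate_positions)),
        key=lambda i: distance_to_ray(hand_pos, pointing_dir, candidate_positions[i]),
    )


candidates = [(-1.0, 1.5, -2.0), (0.0, 1.5, -2.0), (1.0, 1.5, -2.0)]
print(pick_candidate((0.2, 1.2, 0.0), (0.0, 0.1, -1.0), candidates))  # 1
```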
7. The method of claim 6, further comprising, after said determining the selected candidate voting object information:
and controlling the selected candidate voting object information to be displayed in a preset voting selected state.
8. The method of claim 1, further comprising, prior to said responding to acquiring the end instruction of the voting stage:
detecting a remaining duration of the voting stage, and acquiring the end instruction in response to detecting that the remaining duration equals zero.
9. The method of claim 8, further comprising, prior to said acquiring the end instruction of the voting stage:
rendering a countdown layer in the virtual reality space, and displaying the remaining duration of the voting stage in the countdown layer.
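Claims 8 and 9 together describe a countdown: the remaining duration is shown in a countdown layer, and the end instruction is acquired once it reaches zero. A minimal sketch, assuming a simple polling loop and a hypothetical render_countdown_layer callback in place of the VR rendering pipeline:

```python
# Illustrative countdown loop; a real engine would tick per rendered frame.
import time


def run_voting_countdown(duration_s, render_countdown_layer, on_end):
    deadline = time.monotonic() + duration_s
    while True:
        remaining = max(0.0, deadline - time.monotonic())
        render_countdown_layer(remaining)  # show remaining time in the layer
        if remaining == 0.0:
            on_end()  # dispatch the end instruction of the voting stage
            break
        time.sleep(0.1)  # poll at ~10 Hz


run_voting_countdown(
    1.0,
    render_countdown_layer=lambda r: print(f"{r:4.1f}s left"),
    on_end=lambda: print("voting stage ended"),
)
```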
10. The method of claim 1, further comprising, after determining the candidate voting object information corresponding to the voting confirmation operation:
acquiring operation initiation object information corresponding to the voting confirmation operation;
generating a voting barrage model, wherein the voting barrage model comprises the operation initiation object information and the candidate voting object information corresponding to the voting confirmation operation;
and displaying the voting barrage model on a barrage layer of the target video.
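For claim 10, a voting barrage can be modeled as a small record combining the initiating user and the chosen candidate, which is then posted to the target video's barrage layer. The class and method names below are illustrative assumptions:

```python
# Hypothetical barrage (bullet-screen comment) model and layer.
from dataclasses import dataclass


@dataclass
class VotingBarrage:
    voter_name: str       # operation initiation object information
    candidate_name: str   # candidate voting object information

    def text(self) -> str:
        return f"{self.voter_name} voted for {self.candidate_name}"


class BarrageLayer:
    def __init__(self):
        self._queue = []

    def post(self, barrage: VotingBarrage) -> None:
        # A real implementation would animate the text across the video;
        # here we simply queue and print it.
        self._queue.append(barrage)
        print(barrage.text())


layer = BarrageLayer()
layer.post(VotingBarrage(voter_name="user_42", candidate_name="Option B"))
```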
11. The method of claim 1, wherein said determining a target space position in the virtual reality space comprises:
determining the target space position in the virtual reality space according to a current sight direction of the user,
wherein the target space position is located in the sight direction of the user, and a distance between the target space position and the user's eyes is smaller than a distance between a reference space position where the target video is located and the user's eyes.
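Claim 11 places the voting display on the user's gaze ray, nearer to the eyes than the video's reference position, so the candidates render in front of the video. A sketch under the assumption that "nearer" is implemented as a fixed fraction (here 0.6) of the video's distance:

```python
# Illustrative placement along the gaze ray; the 0.6 ratio is an assumption.
import math


def target_spatial_position(eye_pos, gaze_dir, video_distance, ratio=0.6):
    """Return a point on the gaze ray closer to the eyes than the video."""
    n = math.sqrt(sum(c * c for c in gaze_dir))
    unit = tuple(c / n for c in gaze_dir)
    d = video_distance * ratio  # strictly less than the video's distance
    return tuple(e + d * u for e, u in zip(eye_pos, unit))


# Video plane 3 m away along the gaze; the panel lands about 1.8 m away.
print(target_spatial_position((0.0, 1.6, 0.0), (0.0, 0.0, -1.0), 3.0))
```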
12. A virtual reality space-based voting processing device, comprising:
the acquisition module is used for, when a target video is displayed in the virtual reality space, acquiring a plurality of corresponding candidate voting object information in response to a start instruction of a voting stage;
the display module is used for determining a target space position in the virtual reality space, and rendering and displaying the plurality of candidate voting object information at the target space position;
the vote-obtaining updating module is used for updating current vote-obtaining information of the selected candidate voting object information in response to detecting a voting confirmation operation;
and the voting result processing module is used for, in response to acquiring an end instruction of the voting stage, determining target voting object information among the plurality of candidate voting object information according to the corresponding current vote-obtaining information, and controlling the target voting object information to be displayed in a preset voting winning state.
13. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the virtual reality space based voting processing method according to any one of claims 1-11.
14. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program for executing the virtual reality space based voting processing method according to any one of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210962527.7A CN117636528A (en) | 2022-08-11 | 2022-08-11 | Voting processing method, device, equipment and medium based on virtual reality space |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117636528A (en) | 2024-03-01
Family
ID=90020351
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210962527.7A Pending CN117636528A (en) | 2022-08-11 | 2022-08-11 | Voting processing method, device, equipment and medium based on virtual reality space |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117636528A (en) |
Similar Documents
Publication | Title
---|---
CN112051961A (en) | Virtual interaction method and device, electronic equipment and computer readable storage medium
US20230405475A1 (en) | Shooting method, apparatus, device and medium based on virtual reality space
CN115191006B (en) | 3D model for displayed 2D elements
CN110717993A (en) | Interaction method, system and medium of split type AR glasses system
US20240028130A1 (en) | Object movement control method, apparatus, and device
CN117636528A (en) | Voting processing method, device, equipment and medium based on virtual reality space
CN113703704A (en) | Interface display method, head-mounted display device and computer readable medium
US20240078734A1 (en) | Information interaction method and apparatus, electronic device and storage medium
CN117631810A (en) | Operation processing method, device, equipment and medium based on virtual reality space
CN117641025A (en) | Model display method, device, equipment and medium based on virtual reality space
WO2024131405A1 (en) | Object movement control method and apparatus, device, and medium
CN117632391A (en) | Application control method, device, equipment and medium based on virtual reality space
CN117572994A (en) | Virtual object display processing method, device, equipment and medium
CN117765207A (en) | Virtual interface display method, device, equipment and medium
CN117640919A (en) | Picture display method, device, equipment and medium based on virtual reality space
CN117784921A (en) | Data processing method, device, equipment and medium
CN118349105A (en) | Virtual object presentation method, device, equipment and medium
CN117376591A (en) | Scene switching processing method, device, equipment and medium based on virtual reality
CN117991889A (en) | Information interaction method, device, electronic equipment and storage medium
CN117991967A (en) | Virtual keyboard interaction method, device, equipment, storage medium and program product
CN117354484A (en) | Shooting processing method, device, equipment and medium based on virtual reality
CN115981544A (en) | Interaction method and device based on augmented reality, electronic equipment and storage medium
CN117994284A (en) | Collision detection method, collision detection device, electronic equipment and storage medium
CN117899456A (en) | Display processing method, device, equipment and medium of two-dimensional assembly
CN117826977A (en) | Interaction method, interaction device, electronic equipment, storage medium and computer program product
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination