CN114399614B - Three-dimensional display method and device for virtual object, electronic equipment and storage medium - Google Patents
- Publication number
- CN114399614B (application CN202111543763.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- image data
- view angle
- displayed
- observation point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
Abstract
The invention relates to a three-dimensional display method and device for a virtual object, an electronic device and a storage medium, wherein the method comprises the following steps: acquiring a view angle adjustment request for an object to be displayed, wherein the view angle adjustment request comprises an offset view angle relative to a current view angle; determining a target view angle of the object to be displayed according to the offset view angle and the current view angle; determining target image data corresponding to the target view angle according to the target view angle and pre-stored image data of the object to be displayed corresponding to each view angle; and displaying the target image data three-dimensionally. According to the method, images of the object to be displayed are acquired only once by the plurality of cameras, rather than at every view angle adjustment, so the computational load on the computer's GPU is reduced and data processing efficiency is improved.
Description
Technical Field
The invention relates to the technical field of computers and data processing, in particular to a three-dimensional display method and device for virtual objects, electronic equipment and a storage medium.
Background
With the rapid development of technology, human society has entered the information and knowledge society, bringing with it a huge amount of information data. Thanks to continuous innovation in digital storage media, digital information offers advantages over traditional physical storage for long-term preservation, and its applications have expanded into a wide range of fields. The digital protection and application of cultural relics and cultural heritage has become an increasingly important topic.
Cultural relic protection units and cultural and museum institutions at all levels have successively carried out digital acquisition of cultural resources, and applications of the digitized data have also been widely developed. Existing digital three-dimensional display of cultural resources is generally performed on a terminal based on naked-eye three-dimensional technology. Before display, a plurality of cameras must be arranged around the object to be displayed; during display, the plurality of cameras work simultaneously to collect a plurality of images of the object, which are then processed so that the object presents a three-dimensional effect on the terminal. This three-dimensional display method places extremely high demands on the computing capability of the computer's GPU, reduces data processing efficiency, and is difficult to miniaturize and integrate owing to the high graphics card requirements and high cost of the display equipment.
Disclosure of Invention
The invention aims to improve data processing efficiency by providing a three-dimensional display method and device for virtual objects, an electronic device and a storage medium.
In a first aspect, the present invention solves the above technical problems by providing the following technical solutions: a method of three-dimensional presentation of a virtual object, the method comprising:
Acquiring a view angle adjustment request aiming at an object to be displayed, wherein the view angle adjustment request comprises an offset view angle of the object to be displayed relative to a current view angle;
determining a target view angle of an object to be displayed according to the offset view angle and the current view angle;
Determining target image data corresponding to the target view angle according to the target view angle and the pre-stored image data of the object to be displayed corresponding to each view angle;
and carrying out three-dimensional display on the target image data.
The beneficial effects of the invention are as follows: when a user wants to adjust the viewing angle of an object to be displayed, a viewing angle adjustment request is initiated, which includes an offset viewing angle of the object relative to the current viewing angle. From the offset viewing angle and the current viewing angle, the adjusted viewing angle (the target viewing angle) can be determined. Because image data of the object to be displayed is stored in advance for each viewing angle, the target image data corresponding to the target viewing angle can then be determined from the pre-stored image data, without re-acquiring images from the cameras.
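The view-angle adjustment described above reduces to arithmetic plus a lookup. A minimal sketch, assuming angles are pre-captured at a fixed step and stored in a mapping (all names, the 5-degree step, and the file-name scheme are illustrative assumptions, not part of the patent):

```python
# Hypothetical sketch: view-angle images are captured once and stored;
# a view-angle adjustment then becomes a dictionary lookup instead of a
# multi-camera re-render. Assumes requested offsets land on stored angles.

def adjust_view(current_angle, offset_angle, prestored_images):
    """Return (target_angle, image) for an offset relative to the current angle."""
    target_angle = (current_angle + offset_angle) % 360  # wrap around a full turn
    image = prestored_images[target_angle]               # pre-captured 2D frame
    return target_angle, image

# One frame every 5 degrees around the object (illustrative density).
prestored = {a: f"frame_{a:03d}.png" for a in range(0, 360, 5)}
angle, frame = adjust_view(30, 60, prestored)  # -> 90, "frame_090.png"
```

A real system would snap the computed angle to the nearest stored angle; the nearest-observation-point search later in the description plays exactly that role in three dimensions.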
On the basis of the technical scheme, the invention can be improved as follows.
Further, the method comprises the following steps:
Establishing a space view finding model corresponding to an object to be displayed, wherein the space view finding model is a sphere generated by taking the position of the object to be displayed as a circle center and taking the distance from the circle center to a set main observation point as a radius, each circular ring of the space view finding model comprises a plurality of virtual observation points, each observation point corresponds to one view angle, the view angle corresponding to the main observation point is the set view angle, and the current view angle comprises the set view angle; the pre-stored image data of the object to be displayed corresponding to each view angle comprises the image data of the object to be displayed corresponding to the position of each observation point;
The determining the target viewing angle of the object to be displayed according to the offset viewing angle and the current viewing angle includes:
Determining the position of a target virtual observation point according to the offset view angle and the position of a current observation point corresponding to the current view angle, and taking the view angle corresponding to the target virtual observation point as a target view angle;
The determining the target image data corresponding to the target view angle according to the target view angle and the pre-stored image data of the object to be displayed corresponding to each view angle includes:
and determining the image data corresponding to the target virtual observation point according to the position of the target virtual observation point and the pre-stored image data of the object to be displayed corresponding to the position of each observation point.
The method has the advantages that the main observation point and the virtual observation points in the established spatial viewfinding model represent the cameras, and the image data of different viewing angles is the image data corresponding to the different observation points. Determining the target viewing angle is thus converted into determining the position of a virtual observation point, and determining the target image data corresponding to the target viewing angle is converted into determining the image data corresponding to the target virtual observation point. In the scheme of the invention, the target image data corresponding to the target viewing angle can therefore be accurately determined through the spatial viewfinding model.
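The spatial viewfinding model — a sphere of observation points centred on the object — can be sketched as follows. The ring count, points per ring, and the (ring, index) keying are illustrative assumptions; the patent only requires a sphere of rings of virtual observation points:

```python
import math

# Minimal sketch of the spatial viewfinding model: a sphere centred on the
# object, sampled as horizontal rings of virtual observation points, each of
# which corresponds to one viewing angle.

def build_viewpoints(radius, n_rings=9, points_per_ring=36):
    """Return {(ring, idx): (x, y, z)} positions on a sphere of given radius."""
    points = {}
    for r in range(n_rings):
        # polar angle strictly between 0 and pi, so the poles are excluded
        theta = math.pi * (r + 1) / (n_rings + 1)
        for i in range(points_per_ring):
            phi = 2 * math.pi * i / points_per_ring  # azimuth around the ring
            points[(r, i)] = (radius * math.sin(theta) * math.cos(phi),
                              radius * math.cos(theta),
                              radius * math.sin(theta) * math.sin(phi))
    return points

viewpoints = build_viewpoints(2.5)  # radius = distance to the main observation point
```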
Further, the image data of the object to be displayed corresponding to each observation point includes background-free image data and background-containing image data, the view angle adjustment request includes an object view angle adjustment request or an overall view angle adjustment request, and the overall view angle adjustment request refers to a request for adjusting both the view angle of the object to be displayed and the view angle of the background of the object to be displayed;
if the view angle adjustment request is an object view angle adjustment request, the target image data is background-free image data, and the three-dimensional display of the target image data comprises:
three-dimensional display is carried out on the background-free image data corresponding to the target visual angle;
If the viewing angle adjustment request is an overall viewing angle adjustment request, the target image data includes background-free image data and background-containing image data, and the three-dimensional display of the target image data includes:
fusing the background-free image data and the background-containing image data corresponding to the target visual angle to obtain fused image data;
and carrying out three-dimensional display on the fused image data.
The method has the advantages that, in this scheme, either the viewing angle of the object to be displayed alone can be adjusted, leaving the viewing angle of its background unchanged, or the viewing angles of the object and its background can be adjusted together, so that different adjustment requirements can be met. When the viewing angles of the object to be displayed and its background need to be adjusted together, the background-free image data and the background-containing image data corresponding to the target viewing angle can be fused to obtain fused image data, which contains both the background and the object to be displayed at the target viewing angle. The fused image data is then displayed three-dimensionally, so that the displayed image includes both the background and the object under the target viewing angle.
Further, determining the position of the target virtual viewpoint according to the offset viewing angle and the position of the current viewpoint corresponding to the current viewing angle includes:
Determining the position of a reference viewpoint according to the offset viewing angle and the position of the current viewpoint;
And determining a virtual observation point with the nearest distance to the position of the reference observation point in each virtual observation point in the space view model according to the position of the reference observation point, and taking the position of the virtual observation point with the nearest distance to the position of the reference observation point as the position of the target virtual observation point.
The method has the advantages that the position of the reference observation point can be determined according to the offset viewing angle and the position of the current observation point. The reference observation point may not coincide with any virtual observation point in the spatial viewfinding model, so the position of the target observation point can be accurately determined by comparing the distance between the reference observation point and each virtual observation point in the model.
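The nearest-point selection above is a plain nearest-neighbour search over the stored observation points. A hedged sketch, with illustrative position numbers and coordinates:

```python
import math

# Sketch: pick the stored virtual observation point nearest to the reference
# point computed from the offset angle. `viewpoints` maps a position number
# to an (x, y, z) tuple; the numbering is an illustrative assumption.

def nearest_viewpoint(reference, viewpoints):
    """Return (number, position) of the viewpoint closest to `reference`."""
    return min(viewpoints.items(),
               key=lambda item: math.dist(reference, item[1]))

viewpoints = {0: (1.0, 0.0, 0.0), 1: (0.0, 1.0, 0.0), 2: (0.0, 0.0, 1.0)}
num, pos = nearest_viewpoint((0.9, 0.1, 0.0), viewpoints)  # closest to point 0
```

With a sphere of densely spaced points a spatial index (e.g. a k-d tree) would replace the linear scan, but the linear scan shows the idea.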
Further, the determining, according to the position of the reference viewpoint, a virtual viewpoint having a closest distance from the position of the reference viewpoint in each virtual viewpoint in the spatial view model includes:
determining a target normal vector according to the position of the reference observation point and the position of the object to be displayed;
Determining each initial virtual observation point contained on a spherical surface perpendicular to the target normal vector from each virtual observation point of the space view model;
A virtual viewpoint closest to the target normal vector is determined from the initial virtual viewpoints.
The method has the advantages that, in determining the position of the target observation point, it is considered that the plane corresponding to the observer's viewing angle is generally perpendicular to the ground, and the target virtual observation point corresponding to the adjusted viewing angle generally lies on that plane. Therefore, the target normal vector can be determined from the position of the reference observation point and the position of the object to be displayed; this vector is generally parallel to the ground, and the spherical surface of the spatial viewfinding model perpendicular to the target normal vector is the plane corresponding to the observer's viewing angle. Determining the virtual observation point closest to the target normal vector from the initial virtual observation points both ensures the accuracy of determining the target virtual observation point and reduces the amount of data to be processed.
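One way to read this two-stage search: first restrict candidates to observation points near the reference point's height (the ring implied by the ground-parallel normal vector), then take the nearest candidate. The height tolerance and the fallback are illustrative assumptions, not specified by the patent:

```python
import math

# Hedged sketch of the two-stage search: filter to the same-height ring of
# candidate viewpoints, then pick the nearest one. The y component is taken
# as the vertical axis, matching the Y-axis convention used earlier.

def nearest_on_ring(reference, viewpoints, height_tol=0.05):
    """Nearest viewpoint among those near the reference point's height."""
    ring = {n: p for n, p in viewpoints.items()
            if abs(p[1] - reference[1]) <= height_tol}   # same-height candidates
    candidates = ring or viewpoints                       # fall back to all points
    return min(candidates.items(), key=lambda it: math.dist(reference, it[1]))

vp = {0: (1.0, 0.0, 0.0), 1: (0.7, 0.7, 0.0), 2: (0.0, 1.0, 0.0)}
num, pos = nearest_on_ring((0.9, 0.0, 0.1), vp)  # only point 0 is at height 0.0
```

The filtering step is what reduces the data processing amount: distances are computed only for the handful of points on the relevant ring rather than for the whole sphere.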
Further, the target virtual observation point includes at least two target virtual observation points, and determining target image data corresponding to the target virtual observation points according to the positions of the target virtual observation points and the pre-stored image data of the object to be displayed corresponding to the positions of the observation points, including:
Determining the image data corresponding to each target virtual observation point according to the position of each target virtual observation point in at least two target virtual observation points and the pre-stored image data of the object to be displayed corresponding to the position of each observation point;
Dividing the image data corresponding to each target virtual observation point into a plurality of sub-images;
and carrying out cross stitching on sub-images corresponding to the target virtual observation points to obtain image data to be displayed, and taking the image data to be displayed as the target image data.
The method has the advantages that when there are at least two target virtual observation points, the image data corresponding to each target virtual observation point can be divided into a plurality of sub-images, and the sub-images corresponding to the respective target virtual observation points are then cross-stitched to obtain the image data to be displayed, so that a good three-dimensional effect can finally be presented.
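Cross stitching of this kind typically means column interleaving: alternate columns are taken from each viewpoint image so the display's grating steers them to different eyes. A minimal sketch over toy column lists (real code would operate on pixel arrays; the two-view case is an illustrative assumption):

```python
# Sketch of cross stitching: cycle through the viewpoint images column by
# column, so view 0 supplies column 0, view 1 supplies column 1, and so on.

def interleave_views(views):
    """Interleave the columns of several equally sized viewpoint images."""
    width = len(views[0])
    return [views[i % len(views)][i] for i in range(width)]

left = ["L0", "L1", "L2", "L3"]
right = ["R0", "R1", "R2", "R3"]
stitched = interleave_views([left, right])  # ["L0", "R1", "L2", "R3"]
```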
Further, the fusing the background-free image data and the background-containing image data corresponding to the target viewing angle to obtain fused image data includes:
And superposing the image data without the background and the image data with the background to obtain fused image data.
The technical scheme has the beneficial effect that when the viewing angles of the object to be displayed and its background need to be adjusted together, the background-free image data and the background-containing image data corresponding to the target viewing angle can be superposed, and the superposed image data contains both the object to be displayed and the background.
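The superposition step can be sketched as a per-pixel overlay: wherever the background-free image has an opaque object pixel it wins, elsewhere the background shows through. The binary-alpha pixel representation is a simplifying assumption; real data would use full alpha compositing:

```python
# Hedged sketch of superposing background-free over background-containing
# image data. Foreground pixels are (r, g, b, a) tuples with a in {0, 1};
# background pixels are (r, g, b) tuples.

def overlay(foreground, background):
    """Place opaque foreground pixels over the background."""
    return [fg[:3] if fg[3] == 1 else bg
            for fg, bg in zip(foreground, background)]

fg = [(255, 0, 0, 1), (0, 0, 0, 0)]   # one opaque object pixel, one hole
bg = [(10, 10, 10), (20, 20, 20)]
fused = overlay(fg, bg)               # [(255, 0, 0), (20, 20, 20)]
```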
In a second aspect, the present invention further provides a three-dimensional display device for a virtual object, to solve the above technical problem, where the device includes:
In a third aspect, to solve the above technical problem, the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and capable of running on the processor; when the processor executes the computer program, the three-dimensional display method of the virtual object of the present application is implemented.
In a fourth aspect, the present application further provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to implement the three-dimensional exhibition method of the virtual object of the present application.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings that are required to be used in the description of the embodiments of the present invention will be briefly described below.
Fig. 1 is a schematic flow chart of a three-dimensional display method of a virtual object according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a spatial viewfinder model according to an embodiment of the present invention;
FIG. 3 is a schematic view of a sweet angle according to an embodiment of the present invention;
Fig. 4 is a flow chart of a method for displaying a virtual object according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of feedback information according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a digital information resource according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an environment resource according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a three-dimensional display system for virtual objects according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of observation points corresponding to initial moments and interactions, respectively, according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a three-dimensional display device for a virtual object according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The principles and features of the present invention are described below with examples given for the purpose of illustration only and are not intended to limit the scope of the invention.
The following describes the technical scheme of the present invention and how the technical scheme of the present invention solves the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present invention will be described below with reference to the accompanying drawings.
The scheme provided by the embodiment of the invention can be applied to any application scenario in which digital resources of a virtual object need to be displayed on a terminal. The scheme can be executed by any electronic device, for example an exhibition terminal for displaying a virtual object, on which an application program can be installed and run.
The embodiment of the invention provides a possible implementation. As shown in fig. 1, a flowchart of a three-dimensional display method of a virtual object is provided. The scheme can be executed by any electronic device, for example a terminal device, or jointly by the terminal device and a server (such as a GPU server). For convenience of description, the method provided by the embodiment of the invention is described below with a server as the execution body; as shown in the flowchart in fig. 1, the method may include the following steps:
Step S110, a view angle adjustment request for the object to be displayed is obtained, wherein the view angle adjustment request comprises an offset view angle of the object to be displayed relative to the current view angle.
Step S120, determining a target view angle of the object to be displayed according to the offset view angle and the current view angle.
Step S130, determining target image data corresponding to the target view angle according to the target view angle and the pre-stored image data of the object to be displayed corresponding to each view angle.
And step S140, performing three-dimensional display on the target image data.
According to the method, when a user wants to adjust the viewing angle of the object to be displayed, a viewing angle adjustment request is initiated, which includes the offset viewing angle of the object relative to the current viewing angle. The adjusted viewing angle (target viewing angle) is determined according to the offset viewing angle and the current viewing angle. Since image data of the object to be displayed is stored in advance for each viewing angle, the target image data corresponding to the target viewing angle can then be determined from the pre-stored image data.
The solution of the present invention is further described below with reference to the following specific embodiments, in which the three-dimensional display method of a virtual object may include the following steps:
Step S110, a view angle adjustment request for the object to be displayed is obtained, wherein the view angle adjustment request comprises an offset view angle of the object to be displayed relative to the current view angle.
The viewing angle adjustment request is a request by which a user wants to adjust the current viewing angle. The request may be generated based on a trigger operation of the user on a client interface of the terminal device, and the specific form of the trigger operation may be configured as needed; in practical use it may be a selection of a related trigger identifier at a specific operation position on the interface of an application program of the terminal device. The specific form of the trigger identifier may likewise be configured according to actual needs, for example a designated virtual button or an input box on the client interface. Specifically, a virtual button "XXX" displayed on the client interface may indicate a viewing angle offset, and the user clicking that button indicates that the user wants to adjust the current viewing angle.
The current viewing angle may be a default viewing angle, that is, a set viewing angle mentioned below, where the set viewing angle indicates an initial viewing angle corresponding to when the object is displayed through the terminal.
The offset viewing angle refers to an angle adjusted based on the current viewing angle, and the offset includes an adjustment of different angles to the object to be displayed, for example, 30 degrees left turn along the Y-axis (direction perpendicular to the ground) of the object to be displayed, 60 degrees right turn along the Y-axis of the object to be displayed, and so on.
The object to be displayed refers to a virtual object to be displayed, and may include articles such as cultural relics and artware.
Step S120, determining a target view angle of the object to be displayed according to the offset view angle and the current view angle.
The target viewing angle is the viewing angle adjusted according to the viewing angle adjustment request.
Step S130, determining target image data corresponding to the target view angle according to the target view angle and the pre-stored image data of the object to be displayed corresponding to each view angle.
The method comprises the steps of storing image data of an object to be displayed corresponding to each view angle in advance, and inquiring from the stored image data directly when the view angle adjustment is needed, so that the data processing amount is reduced. It should be noted that, for different objects to be displayed, each object to be displayed correspondingly stores image data corresponding to each view angle, the image data is two-dimensional image data, and the image data corresponding to each view angle can be obtained by shooting at each view angle through the image acquisition device.
In an alternative scheme of the present invention, when storing the image data corresponding to each view angle, each view angle and the image data corresponding to each view angle may be stored by means of an index.
And step S140, performing three-dimensional display on the target image data.
Three-dimensional display of the target image data can be realized by naked-eye three-dimensional technology, which is not described in detail here. The target terminal is a terminal capable of displaying three-dimensional data; for example, the target terminal may be a glass-grating naked-eye 3D display for displaying three-dimensional images, with the three-dimensional effect presented through the glass grating.
In an alternative aspect of the present invention, the method may further comprise:
Establishing a space view finding model corresponding to an object to be displayed, wherein the space view finding model is a sphere generated by taking the position of the object to be displayed as a circle center and taking the distance from the circle center to a set main observation point as a radius, each circular ring of the space view finding model comprises a plurality of virtual observation points, each observation point corresponds to one view angle, the view angle corresponding to the main observation point is the set view angle, and the current view angle comprises the set view angle; the pre-stored image data of the object to be displayed corresponding to each view angle comprises the image data of the object to be displayed corresponding to the position of each observation point;
determining a target view angle of the object to be displayed according to the offset view angle and the current view angle, including:
Determining the position of a target virtual observation point according to the offset view angle and the position of a current observation point corresponding to the current view angle, and taking the view angle corresponding to the target virtual observation point as a target view angle;
according to the target view angle and the pre-stored image data of the object to be displayed corresponding to each view angle, determining the target image data corresponding to the target view angle comprises the following steps:
and determining the image data corresponding to the target virtual observation point according to the position of the target virtual observation point and the pre-stored image data of the object to be displayed corresponding to the position of each observation point.
In the scheme of the invention, before the object to be displayed is displayed, a spatial viewfinding model of the object is established in advance. The main observation point is a reference observation point, and the viewing angle corresponding to it may be the viewing angle of the object's main view for an observer. The viewing angle corresponding to the main observation point may be used as the default viewing angle, i.e. the set viewing angle. When the object to be displayed is first displayed through the terminal, it may be displayed at the set viewing angle, in which case the current viewing angle is the set viewing angle.
In the spatial view finding model, the spatial view finding model comprises a plurality of observation points (a main observation point and a plurality of virtual observation points), the position of each observation point corresponds to one view angle, the pre-stored image data of the object to be displayed corresponding to each view angle can be represented by the image data corresponding to the position of each observation point, and when the target view angle and the target image data of the object to be displayed are determined, the determination can be performed based on the positions of all the observation points in the spatial view finding model. The position of each observation point can correspond to one image acquisition device, and the image data of the object to be displayed corresponding to each observation point can be acquired through the image acquisition device corresponding to each observation point.
When storing the image data of the object to be displayed corresponding to the positions of the observation points, the positions of the observation points can be numbered, and the positions of the observation points can be numbered according to the polar coordinate sequence corresponding to the observation points or according to the xyz coordinate sequence. For each viewpoint, the position number characterizes the position of that viewpoint. When the image data corresponding to the position numbers of the respective observation points are stored in the form of indexes, the image data corresponding to the respective position numbers may be stored with the respective position numbers as the index numbers in the indexes.
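The numbering-and-index scheme just described can be sketched as follows. The sequential numbering in (ring, azimuth-step) polar order and the file-name pattern are illustrative assumptions; the patent only requires that each observation point's position number serve as the index number for its image data:

```python
# Sketch of the index: each observation point gets a position number derived
# from its polar order, and image data is stored keyed by that number.

def build_index(n_rings, points_per_ring):
    """Map (ring, step) -> sequential position number in polar order."""
    return {(r, s): r * points_per_ring + s
            for r in range(n_rings) for s in range(points_per_ring)}

index = build_index(3, 4)
images = {num: f"view_{num:04d}.png" for num in index.values()}
frame = images[index[(1, 2)]]  # image captured at ring 1, azimuth step 2
```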
As an example, referring to the schematic diagram of the spatial view model shown in fig. 2, the spatial view model is a sphere, and includes a plurality of virtual observation points, and the distance between every two virtual observation points is the same, and each observation point corresponds to a position number.
In an alternative of the invention, the distance between every two observation points may be the same; the smaller the distance, the finer the division of the viewing angle. The distance between every two observation points is not greater than the sweet-spot interval, where a sweet-spot angle can be understood as a viewing angle: the maximum angle through which an observer can move, while the image remains unchanged, when viewing an image output by a grating filter on a naked-eye three-dimensional display (a display configured on the target terminal, through which the three-dimensional display effect can be realized). See in particular the sweet-spot angle diagram shown in fig. 3. The sweet-spot interval refers to the distance between the two positions corresponding to that maximum angle.
In an alternative scheme of the invention, the image data of the object to be displayed corresponding to each observation point comprises background-free image data and background-containing image data, and the view angle adjustment request comprises an object view angle adjustment request or an integral view angle adjustment request, wherein the integral view angle adjustment request refers to a request for adjusting both the view angle of the object to be displayed and the view angle of the background of the object to be displayed;
if the view angle adjustment request is an object view angle adjustment request, the target image data is background-free image data, and the three-dimensional display of the target image data comprises:
three-dimensional display is carried out on the background-free image data corresponding to the target visual angle;
If the viewing angle adjustment request is an overall viewing angle adjustment request, the target image data includes background-free image data and background-containing image data, and the three-dimensional display of the target image data includes:
fusing the background-free image data corresponding to the target visual angle and the background-containing image data to obtain fused image data;
and carrying out three-dimensional display on the fused image data.
The image data of the object to be displayed corresponding to each observation point comprises background-free image data and background-containing image data, wherein the background-containing image data does not comprise the object to be displayed itself. When image data corresponding to an object to be displayed is acquired, both the background-free image data and the background-containing image data need to be acquired, and both can be stored according to the position numbers of the observation points. The background-free image data and the background-containing image data corresponding to the target visual angle are then fused to obtain fused image data, and the fused image data is displayed three-dimensionally.
In an alternative scheme of the present invention, the fusing the background-free image data and the background-containing image data corresponding to the target viewing angle to obtain fused image data includes:
And superposing the image data without the background and the image data with the background to obtain fused image data.
The two types of image data are superposed, so that the superposed image data are still the image data corresponding to the target scene, comprising both the object to be displayed and the background.
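One plausible reading of this superposition is alpha compositing of the background-free (foreground) image over the background image. The sketch below illustrates that reading on plain pixel tuples; the pixel layout and function names are assumptions, and a real implementation would typically use an image library:

```python
def composite_pixel(fg_rgba, bg_rgb):
    """Alpha-blend one foreground RGBA pixel over one background RGB pixel."""
    r, g, b, a = fg_rgba
    alpha = a / 255.0
    return tuple(round(alpha * f + (1 - alpha) * bgc)
                 for f, bgc in zip((r, g, b), bg_rgb))

def fuse(foreground, background):
    """Superpose background-free image data on background-containing data."""
    return [[composite_pixel(f, b) for f, b in zip(frow, brow)]
            for frow, brow in zip(foreground, background)]

fg = [[(255, 0, 0, 255), (0, 0, 0, 0)]]   # opaque red, fully transparent
bg = [[(10, 20, 30), (10, 20, 30)]]
# The opaque pixel keeps the object; the transparent pixel shows background:
assert fuse(fg, bg) == [[(255, 0, 0), (10, 20, 30)]]
```

The fused rows can then be handed to whatever three-dimensional display pipeline the terminal uses.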
In an alternative scheme of the present invention, determining the position of the target virtual viewpoint according to the positions of the viewpoint corresponding to the offset viewing angle and the current viewing angle includes:
Determining the position of a reference viewpoint according to the offset viewing angle and the position of the current viewpoint;
And determining a virtual observation point with the nearest distance to the position of the reference observation point in each virtual observation point in the space view model according to the position of the reference observation point, and taking the position of the virtual observation point with the nearest distance to the position of the reference observation point as the position of the target virtual observation point.
The reference viewpoint is a viewpoint determined based on the offset viewing angle, and may not be any one of the viewpoints in the spatial view model, or may be one of the viewpoints in the spatial view model. The reference viewpoint and the current viewpoint may be on the same circle.
Alternatively, the distance between the position of the reference viewpoint and each virtual viewpoint in the spatial viewfinder model may be a euclidean distance.
Optionally, one implementation of determining the position of the reference viewpoint according to the offset viewing angle and the position of the current viewpoint is: determining an attitude offset vector according to the offset view angle and the position of the current observation point, and taking the position of the intersection point of the attitude offset vector and the spatial viewfinding model as the position of the reference observation point. The attitude offset vector is a vector determined by taking the position of the current observation point as a starting point according to the offset visual angle.
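The intersection of the attitude offset vector with the spherical viewfinding model can be computed with a standard ray-sphere intersection. This is a minimal sketch under the assumption that the sphere is centered at the origin (the position of the object to be displayed); the function name and coordinates are illustrative:

```python
import math

def ray_sphere_intersection(origin, direction, radius):
    """Return the farthest point where the ray from `origin` along
    `direction` meets the sphere |p| = radius, or None if it misses."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    # Solve |origin + t*direction|^2 = radius^2 for the largest t >= 0.
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    t = (-b + math.sqrt(disc)) / (2*a)
    return (ox + t*dx, oy + t*dy, oz + t*dz)

# A current viewpoint on the unit sphere, offset by the attitude vector,
# projects back onto the sphere:
p = ray_sphere_intersection((1.0, 0.0, 0.0), (-1.0, 1.0, 0.0), 1.0)
# p lies back on the unit sphere at (0.0, 1.0, 0.0)
```

The returned point plays the role of the reference observation point, from which the nearest stored virtual observation point is then selected.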
In an alternative aspect of the present invention, the determining, according to the position of the reference viewpoint, a virtual viewpoint having a closest distance from the position of the reference viewpoint in each virtual viewpoint in the spatial viewfinder model includes:
determining a target normal vector according to the position of the reference observation point and the position of the object to be displayed;
Determining each initial virtual observation point contained on a spherical surface perpendicular to the target normal vector from each virtual observation point of the space view model;
A virtual viewpoint closest to the target normal vector is determined from the initial virtual viewpoints.
The target normal vector is a vector between the position of the object to be displayed and the position of the reference observation point, and the spherical surface perpendicular to the target normal vector is the view field plane corresponding to the observer at the target view angle. In the process of determining the position of the target observation point, considering that the plane corresponding to the observer's view angle is generally perpendicular to the ground, the target virtual observation point corresponding to the adjusted view angle is also generally a virtual observation point on that plane. Based on the scheme of the invention, the target normal vector can be determined according to the position of the reference observation point and the position of the object to be displayed; the target normal vector is generally parallel to the ground, and the spherical surface perpendicular to the target normal vector in the spatial viewfinding model is the plane corresponding to the observer's view angle. The virtual observation point closest to the target normal vector is then determined from the initial virtual observation points, which not only ensures the accuracy of determining the target virtual observation point but also reduces the data processing amount.
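As a hedged illustration of "closest to the target normal vector", the distance from a candidate viewpoint to the line through the origin along the normal vector can be computed with a cross product. The helper names and candidate coordinates below are invented for the sketch:

```python
import math

def distance_to_line(point, direction):
    """Perpendicular distance from `point` to the line through the origin
    along `direction` (the target normal vector)."""
    px, py, pz = point
    dx, dy, dz = direction
    d_len = math.sqrt(dx*dx + dy*dy + dz*dz)
    # |point x direction| / |direction| gives the perpendicular distance.
    cx = py*dz - pz*dy
    cy = pz*dx - px*dz
    cz = px*dy - py*dx
    return math.sqrt(cx*cx + cy*cy + cz*cz) / d_len

def nearest_viewpoint(candidates, normal):
    """Pick the initial virtual viewpoint nearest the target normal vector."""
    return min(candidates, key=lambda p: distance_to_line(p, normal))

candidates = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
assert nearest_viewpoint(candidates, (0.0, 1.0, 0.0)) == (0.0, 1.0, 0.0)
```

Restricting `candidates` to the initial virtual observation points on the view field plane, as the scheme describes, keeps this search small.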
In an alternative scheme of the present invention, the target virtual observation point includes at least two target virtual observation points, and determining target image data corresponding to the target virtual observation points according to positions of the target virtual observation points and pre-stored image data of objects to be displayed corresponding to positions of the observation points includes:
Determining the image data corresponding to each target virtual observation point according to the position of each target virtual observation point in at least two target virtual observation points and the pre-stored image data of the object to be displayed corresponding to the position of each observation point;
Dividing the image data corresponding to each target virtual observation point into a plurality of sub-images;
and carrying out cross stitching on sub-images corresponding to the target virtual observation points to obtain image data to be displayed, and taking the image data to be displayed as the target image data.
The number of the determined target virtual observation points may be more than 1, that is, the target view angle corresponds to the view angles of a plurality of target virtual observation points. The image data corresponding to each target virtual observation point may be divided into a plurality of sub-images, and the sub-images corresponding to the target virtual observation points are cross-spliced to obtain the target image data corresponding to the target view angle. The number of sub-images corresponding to each target virtual observation point may be the same or different. The above-mentioned cross-splicing of sub-images refers to interleaving the sub-images into one image.
As an example, assuming that the number of target virtual observation points is 2, namely a target virtual observation point A and a target virtual observation point B, each corresponding to one complete image of the object to be displayed, the image corresponding to target virtual observation point A is divided into a plurality of sub-images, for example 10 sub-images a1 to a10, and the image corresponding to target virtual observation point B is divided into a plurality of sub-images, for example 10 sub-images b1 to b10. One way to cross-splice the 10 sub-images a1 to a10 with the 10 sub-images b1 to b10 is to splice the 20 sub-images together in the order a1, b1, a2, b2, ..., a10, b10, thereby obtaining the target image data; the target image data displayed under the action of the grating filter yields the three-dimensional display effect corresponding to the target view angle. It should be noted that the cross-splicing in the above example is merely one implementation, and the scheme of the invention is not limited to it; for example, cross-splicing may also be performed in the order a1, a2, b1, b2, a3, a4, b3, b4, ...
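The interleaving described above can be sketched in a few lines; the function name is illustrative and strings stand in for actual sub-image strips:

```python
def cross_stitch(sub_images_a, sub_images_b):
    """Interleave two equal-length lists of sub-images: a1, b1, a2, b2, ..."""
    stitched = []
    for a, b in zip(sub_images_a, sub_images_b):
        stitched.extend((a, b))
    return stitched

a = [f"a{i}" for i in range(1, 4)]
b = [f"b{i}" for i in range(1, 4)]
assert cross_stitch(a, b) == ["a1", "b1", "a2", "b2", "a3", "b3"]
```

Other interleaving orders, such as the a1, a2, b1, b2, ... variant mentioned above, would simply use a different grouping in the loop.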
After the target image data corresponding to the target viewing angle is determined, three-dimensional display is required to be performed through the terminal device, for example, display based on a naked-eye three-dimensional technology. In the prior art, a specific display device and a control device are needed for digital display of target image data, so that the requirements on the devices are high and universality is poor.
Based on this, referring to the flowchart shown in fig. 4, the present invention provides the following technical solution, so that the target image data may be applied to a common terminal device, where the solution specifically includes:
Step S210, acquiring a target terminal and terminal parameters of the target terminal;
step S220, determining a target application program corresponding to the target terminal according to the terminal parameters of the target terminal;
Step S230, obtaining configuration resources of the objects to be displayed, which correspond to the terminal parameters, wherein the configuration resources comprise digital information resources and display parameter resources, which correspond to the objects to be displayed;
step S240, the configuration resources and the target application program are sent to the target terminal, so that the target terminal can display the object to be displayed according to the configuration resources through the target application program.
The digitized information resource of the object to be displayed may include target image data of the object to be displayed.
The scheme is specifically described below:
step S210, obtaining a target terminal and terminal parameters of the target terminal.
The target terminal refers to a terminal for displaying an object to be displayed, and there can be one or a plurality of target terminals. A target terminal can be used to present digitized information of a virtual object, and the virtual object may be a cultural relic, an artwork, or the like.
The terminal parameters of the target terminal are parameters reflecting the display capability of the terminal, such as screen size, resolution, refresh rate, power supply requirements, operating system, hardware processing capability, whether or not interaction with the virtual object is possible, how interaction with the displayed virtual object is possible (for example, interaction with the displayed virtual object through a handle configured by the terminal, etc.), and the like.
The target terminal may be selected by a server or by a user, and the acquiring the target terminal may include: acquiring a deployment request of an administrator, wherein the deployment request comprises a first terminal identifier of a target terminal; determining a target terminal according to the first terminal identifier;
Or transmitting a broadcast message; receiving feedback information sent by at least one terminal based on the broadcast message, wherein the feedback information comprises second terminal identifiers of all terminals in the at least one terminal; and determining a target terminal according to the second terminal identifiers, wherein at least one terminal comprises the target terminal.
The deployment request refers to a request that an administrator wants to display an object to be displayed on a certain terminal. The request can be generated based on a trigger operation by the administrator on a client interface of the terminal device, and the specific form of the trigger operation is configured as needed; for example, it may be a trigger operation at a specific operation position on an interface of an application program of the terminal device, and in practical use may be a selection operation on a related trigger identifier. The specific form of the trigger identifier may likewise be configured according to actual needs, for example a designated virtual button or an input box on the client interface. Specifically, it may be a virtual button labeled "XXX" displayed on the client interface, where the administrator clicking the virtual button indicates that the administrator wants to view the digitized resource of the virtual object corresponding to "XXX".
Each terminal corresponds to a terminal identifier, and the terminals are distinguished through different terminal identifiers. The first terminal identifier is the terminal identifier corresponding to the target terminal, and a terminal identifier can consist of at least one of letters, numbers or other characters.
The broadcast message may be a message that does not have practical meaning, and is used to determine whether each terminal that is in communication connection with the server works normally, for example, whether the terminal keeps communication connection with the server, if the server can receive feedback information based on the broadcast message, the server proves that the terminal works normally, and the terminal can be used as a target terminal. Wherein the second terminal identification is used for identifying the identity of the terminal which can return feedback information to the server.
If the server does not receive the feedback information of the terminal, the server can indicate that the terminal works abnormally, for example, abnormal reminding information can be generated and sent to terminal equipment of an administrator, and the abnormal reminding information comprises terminal identifiers of terminals which do not send the feedback information.
In the alternative scheme of the invention, each terminal is polled through broadcast messages, namely, a broadcast message is sent once every set time interval. If feedback information based on the broadcast message is not received from any one of the at least one terminal within the set time, abnormality reminding information is generated and sent to the terminal equipment of the administrator, the abnormality reminding information comprising the terminal identifier of that terminal. The set time is longer than the set time interval, and the at least one terminal is a terminal in an activation list.
Here, the set time being longer than the set time interval means that at least one broadcast falls within the set time, so receiving no feedback within the set time implies that at least one broadcast went unanswered. As an example, if the set time is 3 minutes and the set time interval is 2 minutes, a broadcast message may be transmitted to the terminal once within the set time; if the set time is 2 minutes and the time interval is 1 minute, the broadcast message may be transmitted to the terminal twice within the set time.
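The timeout check underlying this polling can be sketched as follows. The function and parameter names are invented for illustration; a real server would drive this from its broadcast loop:

```python
import time

def overdue_terminals(last_feedback, timeout, now=None):
    """Return ids of terminals whose last feedback is older than `timeout`
    seconds; the broadcast interval is assumed shorter than `timeout`."""
    now = time.time() if now is None else now
    return [tid for tid, ts in last_feedback.items() if now - ts > timeout]

feedback = {"term-1": 100.0, "term-2": 40.0}
# With a 180 s timeout checked at t=250, term-2 (last seen 210 s ago)
# is flagged while term-1 (150 s ago) is not:
assert overdue_terminals(feedback, 180, now=250.0) == ["term-2"]
```

The flagged identifiers would then be packaged into the abnormality reminding information sent to the administrator's terminal equipment.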
In the alternative scheme of the invention, the terminal is determined to be abnormal when the feedback information is not received after the two broadcast messages are sent, namely, the server does not receive the feedback information twice, so that whether the terminal works abnormally or not can be determined more accurately.
In the alternative scheme of the invention, the terminals with abnormal work can be marked so as to distinguish which terminals can work normally and which terminals are terminals with abnormal work.
In the alternative scheme of the invention, whether a new terminal is added can also be detected through a broadcast message, and the specific implementation mode is as follows: the at least one terminal comprises a terminal in an active list and a terminal in an inactive list, the terminal in the active list represents a previously used terminal, i.e. a terminal displaying the digitized resources of the virtual object, and the terminal in the inactive list represents a new terminal. After the server sends the broadcast message to at least one terminal, if the feedback information of the terminal in the inactive list is received, the terminal can be put into the active list, and the terminal is marked as a normal working terminal.
In an alternative scheme of the present invention, for each terminal whose feedback information is received, each piece of feedback information includes a feedback information type identifier (hereinafter, a feedback information type identification number), through which it can be represented whether the feedback information is valid. Valid feedback information refers to feedback information including terminal parameters, and invalid feedback information refers to feedback information not including terminal parameters; a terminal corresponding to valid feedback information is taken as a target terminal, while a terminal corresponding to invalid feedback information is not. By further judging the feedback information in this way, the accuracy of determining the target terminal can be improved.
In an alternative scheme of the invention, the information sent by the terminal to the server is processed by a first thread and stored by a message receiving queue, the information sent by the user to the server is processed by a second thread and stored by a signaling control queue, the information sent by the server to the terminal is processed by a third thread, and the information sent to the terminal is stored by a message sending queue. In the solution of the present invention, the receiving feedback information sent by the at least one terminal based on the broadcast message includes: receiving feedback information of at least one terminal through a first thread, and storing the feedback information into a message receiving queue;
The obtaining the deployment request of the user includes: acquiring a deployment request of an administrator through a second thread, and storing the deployment request into a signaling control queue;
The sending the configuration resource and the target application program to the target terminal includes: transmitting the configuration resource and the target application program to the target terminal through a third thread;
the sending a broadcast message includes: the broadcast message is sent through the third thread, and the configuration resource, the target application, and the broadcast message are stored in a message send queue.
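The three-thread/three-queue layout described above can be sketched with Python's standard `queue` and `threading` modules. Thread bodies here are simplified placeholders, and all names are illustrative:

```python
import queue
import threading

receive_queue = queue.Queue()    # feedback from terminals (first thread)
signaling_queue = queue.Queue()  # deployment requests from users (second thread)
send_queue = queue.Queue()       # messages bound for terminals (third thread)

def receiver(raw_feedback):
    """First thread: push received terminal feedback onto the receive queue."""
    for msg in raw_feedback:     # stand-in for a network read loop
        receive_queue.put(msg)

def sender():
    """Third thread body: drain the send queue toward the terminals."""
    sent = []
    while not send_queue.empty():
        sent.append(send_queue.get())
    return sent

send_queue.put({"type": "broadcast"})
t = threading.Thread(target=receiver, args=([{"terminal": "t1"}],))
t.start()
t.join()
assert receive_queue.get() == {"terminal": "t1"}
assert sender() == [{"type": "broadcast"}]
```

Separating the three directions of traffic into their own queues lets each thread block independently without stalling the others.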
In an alternative of the present invention, if the target terminal is acquired based on the deployment request, the terminal parameters of the target terminal may be determined by the deployment request. If the target terminal is determined based on the broadcast message, the feedback information may further include a terminal parameter, that is, the terminal informs the server of the terminal parameter of the terminal through the feedback information.
In an alternative aspect of the present invention, if the target terminal is determined based on a broadcast message, the broadcast message includes protocol header information, the type of terminal allowed to respond, a broadcast response time limit, and an encryption mode identification number.
The protocol header information refers to protocol header information of a communication protocol between the server and the terminal, the type of the terminal allowing response refers to the type of the terminal allowing three-dimensional display of the virtual object, and the broadcasting response time limit refers to a time limit value of the feedback information required to be sent to the server by the terminal. The encryption mode identification number refers to the encryption mode of the broadcast message.
After receiving the broadcast message sent by the server, the terminal sends feedback information to the server based on the broadcast message; see the schematic structure diagram of the feedback information (which may also be referred to as reporting information) shown in fig. 5. In fig. 5, the feedback information includes at least one of a feedback information type identification number, a display terminal identification number, a display terminal type, a screen size identification number, a screen resolution identification number, and a man-machine interaction identification number.
In an alternative scheme of the invention, if the server receives feedback information fed back by the terminal, the terminal works normally; if no feedback information is received, the terminal works abnormally. Whether the terminal works normally can also be determined in another way, specifically according to the feedback information type identification number in the feedback information: different feedback information type identification numbers represent different working states, for example, a feedback information type identification number of 1 indicates that the terminal works normally and 0 indicates that the terminal works abnormally, and each feedback information type identification number can correspond to a field frame format.
The display terminal identification number refers to the terminal identifier of the terminal feeding back the feedback information and is used for characterizing the identity of the terminal. The display terminal type characterizes the type of the terminal feeding back the feedback information; for example, terminals may be divided according to operating system, with terminals running system A being one type and terminals running system B another type. The screen size identification number characterizes the size of the screen of the terminal feeding back the feedback information, and the screen resolution identification number characterizes its screen resolution. The man-machine interaction identification number characterizes the man-machine interaction accessories equipped on the terminal feeding back the feedback information, such as a handle, a light controller, a somatosensory sensor or a photoelectric sensor. Through the man-machine interaction identification number, the server or the user can be informed that the terminal feeding back the feedback information has man-machine interaction capability and the corresponding equipment.
Step S220, determining a target application program corresponding to the target terminal according to the terminal parameters of the target terminal.
According to the terminal parameters of the target terminal, the display capability of the target terminal can be known, and different terminal parameters are correspondingly configured with different application programs. In the alternative scheme of the invention, a first corresponding relation between each terminal parameter and each application program can be established, and then the target application program corresponding to the target terminal can be determined based on the first corresponding relation and the terminal parameters of the target terminal.
As an example, for example, the terminal parameter is an operating system, the application corresponding to the operating system a is an application a, and the application corresponding to the operating system B is an application B, and if the operating system of the target terminal is an operating system a, the target application is an application a.
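The "first corresponding relation" in the example above amounts to a lookup table. This is a minimal sketch in which the mapping contents and names are invented for illustration:

```python
# Hypothetical first correspondence between terminal parameters and
# application programs; entries are illustrative only.
FIRST_CORRESPONDENCE = {
    ("os", "A"): "application-a",
    ("os", "B"): "application-b",
}

def target_application(param_kind, param_value):
    """Look up the application configured for a terminal parameter,
    returning None when no application is registered for it."""
    return FIRST_CORRESPONDENCE.get((param_kind, param_value))

assert target_application("os", "A") == "application-a"
```

In practice the key could combine several parameters (e.g. screen size plus operating system), matching the note below that one application may serve multiple parameter types.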
It should be noted that, in practical application, the same application program may also correspond to different types of terminal parameters, for example, the terminal parameters of the target terminal include a screen size and an operating system, and the determined target application program may be an application program corresponding to the screen size and the operating system. The target application must be operable on the target terminal.
Step S230, obtaining configuration resources of the objects to be displayed, which correspond to the terminal parameters, wherein the configuration resources comprise digital information resources and display parameter resources, which correspond to the objects to be displayed.
The object to be displayed refers to a virtual object to be displayed and can comprise articles such as cultural relics and artworks, the digital information resource refers to a digital information resource of the object to be displayed, and the display parameter resource comprises at least one of environment resource, spatial attribute information of the resource and terminal layout frame information index. The terminal layout frame information refers to layout information corresponding to when the object to be displayed is displayed on the target terminal, for example, where the object to be displayed is located on the target terminal. The spatial attribute information of the resource refers to information such as the position, the size, the gesture and the like of the display object in the three-dimensional space when the display object is displayed on the display terminal. The environmental resource refers to environmental information such as a background, special effects, lamplight and the like which are correspondingly displayed when the display object is displayed on the terminal. The configuration resources correspond to terminal parameters of the target terminal, i.e. the configuration resources are presentable on the target terminal.
Different display objects correspond to different digital information resources, and different display objects can correspond to different display parameter resources or the same display parameter resources. The display parameter resource may be selected by a user, or may be configured by a server (i.e. a default display parameter resource is adopted), and if the display parameter resource is selected by the user, the deployment request may include a configuration resource identifier of the display parameter resource. In an alternative scheme of the present invention, the obtaining the configuration resource corresponding to the object to be displayed includes: and acquiring the display parameter resources corresponding to the objects to be displayed according to the configuration resource identifiers.
And if the second corresponding relation between each configuration resource identifier and each display parameter resource is established in advance, determining the display parameter resource corresponding to the object to be displayed based on the second corresponding relation and the configuration resource identifier corresponding to the object to be displayed.
Referring to fig. 6, a schematic structural diagram of a digitized information resource (which may also be referred to as a digitized information resource package of the object to be displayed) is shown, where the digitized information resource includes an ownership identifier, a resource number identifier, a keyword, a terminal adaptability identifier, a display digital element package, a display interaction application package and an extension identifier.
The ownership identifier refers to an ownership identifier, namely, a property owner indicating the digital information resource, and when a plurality of venues are operated in a combined mode, whether the digital information resource can be displayed across the venues or across regions is represented by the ownership identifier. The resource number identification refers to an identification of the digitized information resource by which the identity of the digitized information resource is characterized. Keywords refer to keywords used for conditional searches, which can be retrieved when the digitized information resources of the presentation object are retrieved. The terminal adaptability identifier refers to what terminal the digital information resource can run on, and is related to the terminal type and terminal parameters, if the digital information resource for acquiring the object to be displayed can be the digital information resource corresponding to the terminal adaptability identifier, the acquired digital information resource can be displayed through the target terminal. The display digital element package refers to display content of an object to be displayed, and can be formed by combining at least one of text, voice, pictures, video and three-dimensional models, namely the object to be displayed can be displayed in at least one of the text, voice, pictures, video and three-dimensional models.
The display interaction application package refers to a target application program corresponding to a target terminal, and the display interaction application package can directly run on the target terminal. The display interactive application package can be contained in the digital information resource, the target application program corresponding to the target terminal can be determined while the digital information resource is determined according to the terminal parameters of the target terminal, the display interactive application package can be stored in a database, and the target application program corresponding to the terminal parameters can be directly determined from the database according to the terminal parameters. According to terminal parameters, a third corresponding relation between each resource number identifier and each display interactive application package is established in advance, after a digital information resource is determined according to the terminal parameters of the target terminal, the target application program (display interactive application package) of the target terminal can be determined from the database according to the resource number identifier and the third corresponding relation of the digital information resource.
The extension identifier is an extension indication field that indicates whether new fields follow in the digitized information resource, which facilitates later extension of the resource. For example, after a system or program upgrade that adds a new field, the presence of this identifier allows the extended resource to be recognized, keeping new and old versions compatible.
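As a concrete illustration of the field layout described above, the digitized information resource package can be sketched as a simple record. All field names and the filtering helper here are illustrative assumptions, not the actual encoding used by the invention:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class DigitalInformationResource:
    """One digitized information resource package; field names are illustrative."""
    ownership_id: str               # property owner; governs cross-venue display
    resource_number_id: str         # identifier characterizing the resource's identity
    keywords: List[str]             # terms matched during conditional search
    terminal_adaptability_id: str   # which terminal type/parameters the resource suits
    display_element_package: dict   # text / voice / picture / video / 3-D model content
    extension_id: Optional[str] = None  # marks trailing fields added by newer versions


def adaptable_resources(resources, terminal_id):
    """Keep only resources whose adaptability identifier matches the target terminal."""
    return [r for r in resources if r.terminal_adaptability_id == terminal_id]
```

Under this sketch, matching a resource to a target terminal reduces to filtering on the terminal adaptability identifier.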
Optionally, the display parameter resource may further include a display manner, where the display manner includes at least one of a three-dimensional display manner and a two-dimensional display manner, and different display manners may be adopted to display the object to be displayed, so as to meet different display requirements. When the display mode is three-dimensional display, the target image data can be displayed in three dimensions through the target terminal.
Referring to the schematic structure of the environment resource (may also be referred to as an environment resource package) shown in fig. 7, the environment resource package includes a resource number identifier, a keyword, a terminal adaptability identifier, an environment digitizing element package, and an extension identifier.
Wherein, the resource number identifier is an identifier of the environment resource package that characterizes its identity. The keywords are terms used for conditional search, by which the environment resource package of each display object can be retrieved. The terminal adaptability identifier indicates which terminals the environment resource package can run on, and is related to the terminal type and terminal parameters; if the environment resource package acquired for the object to be displayed corresponds to the terminal adaptability identifier, the acquired package can be displayed through the target terminal. The environment digitizing element package carries the specific content of the environment parameters of the object to be displayed; the environment parameters may be at least one of background, special effects, lighting, and the like. The extension identifier is an extension indication field that indicates whether new fields follow the environment resource package, which facilitates later extension of the package; for example, after a system or program upgrade that adds a new field, the presence of this identifier allows the extended package to be recognized, keeping new and old versions compatible.
Step S240, the configuration resources and the target application program are sent to the target terminal, so that the target terminal can display the object to be displayed according to the configuration resources through the target application program.
The configuration resource and the target application program can be packaged into a data packet and sent to the target terminal, the target terminal analyzes the data packet to obtain the configuration resource and the target application program, then the target terminal operates the target application program, and displays the object to be displayed according to the configuration resource, namely, displays the object to be displayed according to the display parameter resource matched with the target terminal in the configuration resource.
In an alternative aspect of the present invention, the method may further comprise:
receiving a retrieval request of an administrator for a digitized information resource or an environment resource, where the retrieval request includes keywords; if it is a retrieval request for a digitized information resource, the included keywords are those corresponding to digitized information resources, and if it is a retrieval request for an environment resource, the included keywords are those corresponding to environment resources.
According to the keywords in the search request, the digital information resources or the environment resources corresponding to the keywords are searched from a database storing the digital information resources or the environment resources, and the searched digital information resources or environment resources are sent to terminal equipment of an administrator.
In an alternative aspect of the present invention, the method may further comprise:
receiving a storage request of an administrator for a digital information resource and/or an environment resource; according to the storage request, the digital information resource and/or the environment resource are stored in the corresponding database, wherein the digital information resource and the environment resource can be stored in the same database or different databases.
Based on the same principle as the method shown in fig. 4, the embodiment of the present invention further provides a three-dimensional display system of a virtual object, as shown in fig. 8, where the three-dimensional display system of a virtual object includes:
A display management server (the display management server shown in fig. 8), a display storage server (the display storage server shown in fig. 8) and at least one display terminal (display terminal 1 to display terminal N shown in fig. 8); the at least one terminal, the management server and the storage server are connected through a network (the IP network shown in fig. 8);
The management server is used for acquiring the target terminal and terminal parameters for the target terminal; determining a target application program corresponding to the target terminal according to the terminal parameters of the target terminal; acquiring configuration resources corresponding to the objects to be displayed from a storage server, wherein the configuration resources comprise digital information resources and display parameter resources corresponding to the objects to be displayed; the configuration resources and the target application program are sent to the target terminal, so that the target terminal displays the object to be displayed according to the configuration resources through the target application program;
the storage server is used for storing configuration resources corresponding to the objects to be displayed.
The three-dimensional display system of the virtual object follows the same implementation principle as the three-dimensional display method of the virtual object described above, which is not repeated here.
For a better description and understanding of the principles of the method provided by the present invention, the following description of the present invention is provided in connection with an alternative embodiment. It should be noted that, the specific implementation manner of each step in this specific embodiment should not be construed as limiting the solution of the present invention, and other implementation manners that can be considered by those skilled in the art based on the principle of the solution provided by the present invention should also be considered as being within the protection scope of the present invention.
In this example, a spatial view model corresponding to the object to be displayed needs to be established first. The space view finding model is a sphere generated by taking the position of an object to be displayed as the center of a circle and taking the distance from the center of the circle to a set main view point as the radius, each circular ring containing the main view point of the space view finding model comprises a plurality of virtual view points, each view point corresponds to one view angle, and the view angle corresponding to the main view point is the set view angle. According to the positions of the observation points, a position number is set for each observation point.
And then acquiring image data corresponding to the object to be displayed under different view angles according to the positions of all the observation points in the space view finding model, wherein the image data comprises background image data and background-free image data, and establishing an index according to the position numbers of all the observation points and the image data corresponding to all the observation points, wherein the index numbers in the index are the position numbers of all the observation points. And storing the image data and the index corresponding to each observation point in a local storage space.
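The construction of the spatial view model and of the position-number index can be sketched as follows. The ring and point counts and the rendering callback are illustrative assumptions; the invention only requires a sphere of numbered observation points with image data indexed by position number:

```python
import math


def build_view_model(center, main_viewpoint, rings=8, points_per_ring=16):
    """Generate virtual observation points on a sphere around the object.

    The sphere radius is the distance from the object position (circle centre)
    to the set main viewpoint; each ring carries several virtual observation
    points, and every point receives a position number used as the index key.
    """
    radius = math.dist(center, main_viewpoint)
    points = {}
    number = 0
    for i in range(rings):
        theta = math.pi * (i + 1) / (rings + 1)      # polar angle of this ring
        for j in range(points_per_ring):
            phi = 2 * math.pi * j / points_per_ring  # position along the ring
            points[number] = (
                center[0] + radius * math.sin(theta) * math.cos(phi),
                center[1] + radius * math.sin(theta) * math.sin(phi),
                center[2] + radius * math.cos(theta),
            )
            number += 1
    return radius, points


def build_index(points, render):
    """Index: position number -> image data rendered from that observation point."""
    return {number: render(position) for number, position in points.items()}
```

The resulting index can then be stored in the local storage space and queried directly by position number.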
At this time the current observation point is the main viewpoint, and the set view angle is taken as the current view angle; refer to the position schematic of each observation point shown on the left side of fig. 9, where the current view angle is the view angle corresponding to the main viewpoint (the main camera shown on the right side of fig. 9). As can be seen from the left side of fig. 9, at the initial time (the time corresponding to the set view angle), the current view angle is the view angle from the main viewpoint. The initial normal vector shown on the left side of fig. 9 is the vector between the position of the object to be displayed and the position of the main viewpoint; the plane corresponding to the current view angle (the virtual view field plane) is perpendicular to the initial normal vector, and the set view angle corresponding to the main viewpoint is the view angle corresponding to each virtual observation point contained on the sphere where the virtual view field plane intersects the spatial view model.
A view angle adjustment request for the object to be displayed is acquired, where the request includes an offset view angle relative to the current view angle. Referring to the position schematic of each observation point shown on the right side of fig. 9, at the interaction time (the time corresponding to the adjusted current view angle), the current view angle has changed. A pose offset vector is determined according to the offset view angle and the position of the main viewpoint, and the position of the intersection of the pose offset vector and the spatial view model is taken as the position of the reference viewpoint.
The target normal vector (temporary normal vector shown in the right side of fig. 9) is determined from the position of the reference viewpoint and the position of the object to be presented.
From the virtual observation points of the spatial viewfinder model, initial virtual observation points included on a spherical surface perpendicular to the target normal vector are determined. And determining a virtual observation point with the closest distance to the target normal vector from all the initial virtual observation points, taking the position of the initial virtual observation point with the closest distance to the target normal vector as the position of the target virtual observation point, wherein the view angle corresponding to the position of the target virtual observation point is the target view angle, namely the view angle corresponding to the current view angle after adjustment.
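The selection of the target virtual observation point can be sketched as a nearest-point-to-line search: the target normal vector runs from the object position toward the reference viewpoint, and the candidate whose distance to that line is smallest wins. This is a minimal illustration of the distance test, not the invention's exact procedure:

```python
import math


def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))


def norm(v):
    return math.sqrt(sum(x * x for x in v))


def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])


def nearest_viewpoint(points, center, reference):
    """Pick the virtual observation point closest to the target normal vector.

    The distance from a candidate point p to the line through `center` along
    the normal n = reference - center is |(p - center) x n| / |n|.
    """
    n = sub(reference, center)
    n_len = norm(n)
    best_number, best_dist = None, float("inf")
    for number, p in points.items():
        d = norm(cross(sub(p, center), n)) / n_len
        if d < best_dist:
            best_number, best_dist = number, d
    return best_number
```

The view angle associated with the returned position number is then taken as the target view angle.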
Determining target image data corresponding to the target view angle according to the target view angle and the pre-stored image data of the object to be displayed corresponding to each view angle; and carrying out three-dimensional display on the target image data.
According to the scheme provided by the present invention, in the first aspect, both two-dimensional and three-dimensional display modes are supported, meeting different display requirements. In the second aspect, the resource package of the object to be displayed is transmitted to the target terminal together with the application program, so the system is highly flexible, simpler to deploy, and supports more diversified display forms. In the third aspect, a graphics engine optimization method for display applications is provided for naked-eye three-dimensional display devices, greatly reducing the amount of computation, the system cost and the energy consumption.
Based on the same principle as the method shown in fig. 1, the embodiment of the present invention further provides a three-dimensional display device 20 of a virtual object, as shown in fig. 10, the three-dimensional display device 20 of a virtual object may include a request acquisition module 210, a target viewing angle determination module 220, a target image data determination module 230, and a display module 240, wherein:
a request obtaining module 210, configured to obtain a view angle adjustment request for an object to be displayed, where the view angle adjustment request includes an offset view angle relative to a current view angle;
a target view angle determining module 220, configured to determine a target view angle of the object to be displayed according to the offset view angle and the current view angle;
The target image data determining module 230 is configured to determine target image data corresponding to a target viewing angle according to the target viewing angle and pre-stored image data of an object to be displayed corresponding to each viewing angle;
And the display module 240 is configured to perform three-dimensional display on the target image data.
Optionally, the apparatus further comprises:
The space view finding model building module is used for building a space view finding model corresponding to an object to be displayed, wherein the space view finding model is a sphere generated by taking the position of the object to be displayed as a circle center and taking the distance from the circle center to a set main view point as a radius, each circular ring containing a main view point of the space view finding model comprises a plurality of virtual view points, each view point corresponds to one view angle, the view angle corresponding to the main view point is a set view angle, and the current view angle comprises the set view angle; the pre-stored image data of the object to be displayed corresponding to each view angle comprises the image data of the object to be displayed corresponding to the position of each observation point;
The target view angle determining module 220 is specifically configured to, when determining the target view angle of the object to be displayed according to the offset view angle and the current view angle: determining the position of a target virtual observation point according to the offset view angle and the position of a current observation point corresponding to the current view angle, and taking the view angle corresponding to the target virtual observation point as a target view angle;
the above-mentioned target image data determining module 230 is specifically configured to, when determining target image data corresponding to a target viewing angle according to the target viewing angle and the pre-stored image data of an object to be displayed corresponding to each viewing angle: and determining the image data corresponding to the target virtual observation point according to the position of the target virtual observation point and the pre-stored image data of the object to be displayed corresponding to the position of each observation point.
Optionally, the image data of the object to be displayed corresponding to each observation point includes background-free image data and background-containing image data, and the view angle adjustment request includes an object view angle adjustment request or an overall view angle adjustment request, where the overall view angle adjustment request refers to a request for adjusting both a view angle of the object to be displayed and a view angle of a background of the object to be displayed;
if the view angle adjustment request is an object view angle adjustment request, the target image data is background-free image data, and the display module 240 is specifically configured to:
three-dimensional display is carried out on the background-free image data corresponding to the target visual angle;
If the viewing angle adjustment request is an overall viewing angle adjustment request, the target image data includes background-free image data and background-containing image data, and the display module 240 is specifically configured to, when performing three-dimensional display on the target image data: fuse the background-free image data and the background-containing image data corresponding to the target viewing angle to obtain fused image data; and perform three-dimensional display on the fused image data.
Optionally, the target view determining module 220 is specifically configured to, when determining the position of the target virtual viewpoint according to the positions of the observation points corresponding to the offset view and the current view: determining the position of a reference viewpoint according to the offset viewing angle and the position of the current viewpoint; and determining a virtual observation point with the nearest distance to the position of the reference observation point in each virtual observation point in the space view model according to the position of the reference observation point, and taking the position of the virtual observation point with the nearest distance to the position of the reference observation point as the position of the target virtual observation point.
Optionally, the target view angle determining module 220 is specifically configured to, when determining, according to the positions of the reference view points, a virtual view point with a closest distance from the positions of the reference view points among the virtual view points in the spatial viewfinder model: determining a target normal vector according to the position of the reference observation point and the position of the object to be displayed; determining each initial virtual observation point contained on a spherical surface perpendicular to the target normal vector from each virtual observation point of the space view model; a virtual viewpoint closest to the target normal vector is determined from the initial virtual viewpoints.
Optionally, the target virtual viewpoint includes at least two target virtual viewpoints, and the target image data determining module 230 is specifically configured to, when determining target image data corresponding to the target virtual viewpoints according to the positions of the target virtual viewpoints and the pre-stored image data of the object to be displayed corresponding to the positions of the viewpoints: determining the image data corresponding to each target virtual observation point according to the position of each target virtual observation point in at least two target virtual observation points and the pre-stored image data of the object to be displayed corresponding to the position of each observation point; dividing the image data corresponding to each target virtual observation point into a plurality of sub-images; and carrying out cross stitching on sub-images corresponding to the target virtual observation points to obtain image data to be displayed, and taking the image data to be displayed as the target image data.
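The cross stitching of sub-images from at least two target virtual observation points can be sketched as column-wise interleaving. Real autostereoscopic panels use device-specific slicing patterns, so the simple column-modulo rule below is an assumption made for illustration:

```python
def cross_stitch(images):
    """Interleave the columns of several viewpoint images into one frame.

    images: list of equally sized 2-D pixel grids (lists of rows), one per
    target virtual observation point. Column c of the output frame is taken
    from viewpoint (c mod len(images)), i.e. the sub-images are cross
    stitched column by column into the image data to be displayed.
    """
    k = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [
        [images[c % k][r][c] for c in range(cols)]
        for r in range(rows)
    ]
```

With two viewpoints, even columns come from the first image and odd columns from the second, which is the interleaving a two-view lenticular display would then separate optically.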
Optionally, the display module 240 is specifically configured to, when fusing the background-free image data and the background-containing image data corresponding to the target viewing angle to obtain fused image data: and superposing the image data without the background and the image data with the background to obtain fused image data.
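The superposition of background-free and background-containing image data can be sketched as standard "over" compositing, assuming (this is an assumption, not stated in the source) that the background-free image carries an alpha channel:

```python
def fuse(foreground, background):
    """Superpose background-free (RGBA) pixels over the background (RGB) image.

    Where the foreground alpha is opaque the object pixel wins; where it is
    transparent the background shows through ('over' compositing).
    """
    out = []
    for frow, brow in zip(foreground, background):
        row = []
        for (fr, fgn, fb, fa), (br, bg, bb) in zip(frow, brow):
            a = fa / 255.0
            row.append((
                round(fr * a + br * (1 - a)),
                round(fgn * a + bg * (1 - a)),
                round(fb * a + bb * (1 - a)),
            ))
        out.append(row)
    return out
```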
The three-dimensional display device for a virtual object according to the embodiment of the present invention may execute the three-dimensional display method for a virtual object provided by the embodiment of the present invention, and its implementation principle is similar, and actions executed by each module and unit in the three-dimensional display device for a virtual object according to each embodiment of the present invention correspond to steps in the three-dimensional display method for a virtual object according to each embodiment of the present invention, and detailed functional descriptions of each module in the three-dimensional display device for a virtual object may be specifically referred to descriptions in the corresponding three-dimensional display method for a virtual object shown in the foregoing, which are not repeated herein.
Wherein, the three-dimensional exhibition device of the virtual object can be a computer program (including program code) running in a computer device, for example, the three-dimensional exhibition device of the virtual object is an application software; the device can be used for executing corresponding steps in the method provided by the embodiment of the invention.
In some embodiments, the three-dimensional display device for a virtual object provided by the embodiments of the present invention may be implemented by combining software and hardware. By way of example, the device may be a processor in the form of a hardware decoding processor that is programmed to execute the three-dimensional display method for a virtual object provided by the embodiments of the present invention; for example, the processor in the form of a hardware decoding processor may employ one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
In other embodiments, the three-dimensional display device for a virtual object provided by the embodiments of the present invention may be implemented in a software manner, and fig. 10 shows a three-dimensional display device for a virtual object stored in a memory, which may be software in the form of a program, a plug-in, or the like, and includes a series of modules including a request acquisition module 210, a target view angle determination module 220, a target image data determination module 230, and a display module 240, for implementing the three-dimensional display method for a virtual object provided by the embodiments of the present invention.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation on the module itself.
Based on the same principles as the methods shown in the embodiments of the present invention, there is also provided in the embodiments of the present invention an electronic device, which may include, but is not limited to: a processor and a memory; a memory for storing a computer program; a processor for executing the method according to any of the embodiments of the invention by invoking a computer program.
In an alternative embodiment, there is provided an electronic device, as shown in fig. 11, the electronic device 30 shown in fig. 11 includes: a processor 310 and a memory 330. Wherein the processor 310 is coupled to the memory 330, such as via a bus 320. Optionally, the electronic device 30 may further comprise a transceiver 340, and the transceiver 340 may be used for data interaction between the electronic device and other electronic devices, such as transmission of data and/or reception of data, etc. It should be noted that, in practical applications, the transceiver 340 is not limited to one, and the structure of the electronic device 30 is not limited to the embodiment of the present invention.
The processor 310 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application-Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various exemplary logic blocks, modules and circuits described in connection with this disclosure. The processor 310 may also be a combination that performs computing functions, e.g., a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 320 may include a path that communicates information between the components. Bus 320 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 320 may be classified into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 11, but this does not mean that there is only one bus or one type of bus.
Memory 330 may be, but is not limited to, a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 330 is used for storing application program codes (computer programs) for executing the inventive arrangements and is controlled to be executed by the processor 310. The processor 310 is configured to execute the application code stored in the memory 330 to implement what is shown in the foregoing method embodiments.
The electronic device shown in fig. 11 is only an example, and should not impose any limitation on the functions and application scope of the embodiment of the present invention.
Embodiments of the present invention provide a computer-readable storage medium having a computer program stored thereon, which when run on a computer, causes the computer to perform the corresponding method embodiments described above.
According to another aspect of the present invention, there is also provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the implementation of the various embodiments described above.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
It should be appreciated that the flow charts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer readable storage medium according to embodiments of the present invention may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer-readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above-described embodiments.
The above description is only illustrative of the preferred embodiments of the present invention and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the disclosure referred to in the present invention is not limited to the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of those technical features or their equivalents without departing from the spirit of the disclosure, for example solutions in which the above features are replaced by technical features with similar functions disclosed in (but not limited to) the present invention.
Claims (9)
1. A three-dimensional display method for a virtual object, characterized by comprising:
acquiring a view angle adjustment request for an object to be displayed, wherein the view angle adjustment request comprises an offset view angle relative to a current view angle;
determining a target view angle of the object to be displayed according to the offset view angle and the current view angle;
determining target image data corresponding to the target view angle according to the target view angle and pre-stored image data of the object to be displayed corresponding to each view angle; and
performing three-dimensional display of the target image data;
wherein the method further comprises:
establishing a spatial viewfinding model corresponding to the object to be displayed, wherein the spatial viewfinding model is a sphere generated by taking the position of the object to be displayed as the center and the distance from the center to a set main observation point as the radius; each ring of the spatial viewfinding model containing the main observation point comprises a plurality of virtual observation points, each observation point corresponds to one view angle, the view angle corresponding to the main observation point is a set view angle, and the current view angle comprises the set view angle; and the pre-stored image data of the object to be displayed corresponding to each view angle comprises image data of the object to be displayed corresponding to the position of each observation point;
the determining the target view angle of the object to be displayed according to the offset view angle and the current view angle comprises:
determining the position of a target virtual observation point according to the offset view angle and the position of a current observation point corresponding to the current view angle, and taking the view angle corresponding to the target virtual observation point as the target view angle; and
the determining the target image data corresponding to the target view angle according to the target view angle and the pre-stored image data of the object to be displayed corresponding to each view angle comprises:
determining the image data corresponding to the target virtual observation point according to the position of the target virtual observation point and the pre-stored image data of the object to be displayed corresponding to the position of each observation point.
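Read as an algorithm, the spatial viewfinding model of claim 1 amounts to sampling observation points on rings of a sphere centered on the object and mapping each point to a view angle. The sketch below illustrates one way this could work; the function names, the (azimuth, elevation) parameterization, and the sampling density are illustrative assumptions, not part of the patent:

```python
import math

def build_observation_points(center, radius, n_rings=8, n_per_ring=16):
    """Sample virtual observation points on a sphere around the object.

    Each ring of constant elevation holds several observation points,
    and every point corresponds to one view angle (azimuth, elevation),
    keyed here in whole degrees. Hypothetical sketch of claim 1's model.
    """
    cx, cy, cz = center
    points = {}
    for i in range(n_rings):
        # Elevations spread between the poles, excluding the poles themselves.
        elevation = math.pi * (i + 1) / (n_rings + 1) - math.pi / 2
        for j in range(n_per_ring):
            azimuth = 2 * math.pi * j / n_per_ring
            x = cx + radius * math.cos(elevation) * math.cos(azimuth)
            y = cy + radius * math.cos(elevation) * math.sin(azimuth)
            z = cz + radius * math.sin(elevation)
            key = (round(math.degrees(azimuth)), round(math.degrees(elevation)))
            points[key] = (x, y, z)
    return points

def target_view_angle(current, offset):
    """Combine the current view angle with the requested offset (degrees)."""
    azimuth = (current[0] + offset[0]) % 360
    elevation = max(-90.0, min(90.0, current[1] + offset[1]))
    return (azimuth, elevation)
```

With the defaults this yields 8 rings of 16 observation points, and an offset view angle of (20, 0) applied to a current view angle of (350, 0) wraps around to (10, 0).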
2. The method according to claim 1, wherein the image data of the object to be displayed corresponding to each observation point comprises background-free image data and background-containing image data; the view angle adjustment request comprises an object view angle adjustment request or an overall view angle adjustment request, the overall view angle adjustment request being a request for adjusting both the view angle of the object to be displayed and the view angle of the background of the object to be displayed;
if the view angle adjustment request is the object view angle adjustment request, the target image data is the background-free image data, and the three-dimensional display of the target image data comprises:
performing three-dimensional display of the background-free image data corresponding to the target view angle; and
if the view angle adjustment request is the overall view angle adjustment request, the target image data comprises the background-free image data and the background-containing image data, and the three-dimensional display of the target image data comprises:
fusing the background-free image data and the background-containing image data corresponding to the target view angle to obtain fused image data; and
performing three-dimensional display of the fused image data.
3. The method according to claim 1, wherein the determining the position of the target virtual observation point according to the offset view angle and the position of the current observation point corresponding to the current view angle comprises:
determining the position of a reference observation point according to the offset view angle and the position of the current observation point; and
determining, from the virtual observation points in the spatial viewfinding model, the virtual observation point closest to the position of the reference observation point, and taking the position of that closest virtual observation point as the position of the target virtual observation point.
4. The method according to claim 3, wherein the determining, from the virtual observation points in the spatial viewfinding model, the virtual observation point closest to the position of the reference observation point comprises:
determining a target normal vector according to the position of the reference observation point and the position of the object to be displayed;
determining, from the virtual observation points of the spatial viewfinding model, the initial virtual observation points contained on the spherical section perpendicular to the target normal vector; and
determining, from the initial virtual observation points, the virtual observation point closest to the target normal vector.
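The nearest-point search of claims 3 and 4 can be pictured as follows: the target normal vector points from the object's position (the sphere center) toward the reference observation point, and the candidate observation point whose direction best aligns with that vector is selected. This is a minimal sketch under that reading; the names and the cosine-similarity scoring are assumptions, not the patent's implementation:

```python
import math

def nearest_observation_point(reference, center, candidates):
    """Pick the virtual observation point closest to the reference point.

    Scores each candidate by the cosine between its direction from the
    sphere center and the target normal vector, then keeps the best.
    Illustrative sketch of claims 3-4, not the actual claimed code.
    """
    # Target normal vector: from the object's position to the reference point.
    nx, ny, nz = (reference[i] - center[i] for i in range(3))
    norm = math.sqrt(nx * nx + ny * ny + nz * nz)
    n = (nx / norm, ny / norm, nz / norm)

    def alignment(p):
        dx, dy, dz = (p[i] - center[i] for i in range(3))
        d = math.sqrt(dx * dx + dy * dy + dz * dz)
        return (dx * n[0] + dy * n[1] + dz * n[2]) / d  # cosine similarity

    return max(candidates, key=alignment)
```

In a full implementation the candidate set would first be narrowed to the observation points on the ring perpendicular to the target normal vector, as claim 4 recites, before this final comparison.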
5. The method according to claim 1, wherein the target virtual observation point comprises at least two target virtual observation points, and the determining the target image data corresponding to the target virtual observation points according to the positions of the target virtual observation points and the pre-stored image data of the object to be displayed corresponding to the position of each observation point comprises:
determining the image data corresponding to each of the at least two target virtual observation points according to the position of each target virtual observation point and the pre-stored image data of the object to be displayed corresponding to the position of each observation point;
dividing the image data corresponding to each target virtual observation point into a plurality of sub-images; and
cross-stitching the sub-images corresponding to the target virtual observation points to obtain image data to be displayed, and taking the image data to be displayed as the target image data.
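The cross-stitching of claim 5 resembles the strip interleaving used for lenticular or light-field 3D panels: each viewpoint image is cut into strips, and the displayed frame takes strips alternately from each viewpoint. The following is a hypothetical sketch of that idea (images modeled as lists of pixel rows; the strip scheme is an assumption, not the patent's method):

```python
def cross_stitch(images, n_strips=4):
    """Interleave vertical strips from several viewpoint images.

    Splits each image (a list of rows of pixels) into column strips and
    builds the output row by taking strip s from image s mod len(images),
    so adjacent strips come from different viewpoints.
    """
    width = len(images[0][0])
    strip_w = width // n_strips
    out = []
    for r in range(len(images[0])):
        row = []
        for s in range(n_strips):
            src = images[s % len(images)]  # alternate source image per strip
            row.extend(src[r][s * strip_w:(s + 1) * strip_w])
        out.append(row)
    return out
```

For example, stitching a one-row image of all 1s with a one-row image of all 2s (width 4, four strips) yields the interleaved row [1, 2, 1, 2].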
6. The method according to claim 2, wherein the fusing the background-free image data and the background-containing image data corresponding to the target view angle to obtain fused image data comprises:
superposing the background-free image data on the background-containing image data to obtain the fused image data.
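The superposition of claim 6 corresponds to ordinary "over" alpha compositing: where the background-free object image is opaque it covers the background, and where it is transparent the background shows through. A minimal per-pixel sketch, assuming RGBA foreground pixels, RGB background pixels, and matching dimensions (all assumptions for illustration):

```python
def superpose(foreground, background):
    """Overlay background-free (RGBA) pixels onto background (RGB) pixels.

    Implements the Porter-Duff "over" operator per pixel: the object's
    alpha channel decides how much of the background remains visible.
    """
    fused = []
    for f_row, b_row in zip(foreground, background):
        row = []
        for (fr, fg, fb, fa), (br, bg, bb) in zip(f_row, b_row):
            a = fa / 255.0
            row.append((round(fr * a + br * (1 - a)),
                        round(fg * a + bg * (1 - a)),
                        round(fb * a + bb * (1 - a))))
        fused.append(row)
    return fused
```

A fully opaque red object pixel replaces the background pixel, while a fully transparent one leaves the background pixel unchanged.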
7. A three-dimensional display apparatus for a virtual object, comprising:
a request acquisition module, configured to acquire a view angle adjustment request for an object to be displayed, wherein the view angle adjustment request comprises an offset view angle relative to a current view angle;
a target view angle determining module, configured to determine a target view angle of the object to be displayed according to the offset view angle and the current view angle;
a target image data determining module, configured to determine target image data corresponding to the target view angle according to the target view angle and pre-stored image data of the object to be displayed corresponding to each view angle; and
a display module, configured to three-dimensionally display the target image data;
wherein the apparatus further comprises:
a spatial viewfinding model establishing module, configured to establish a spatial viewfinding model corresponding to the object to be displayed, wherein the spatial viewfinding model is a sphere generated by taking the position of the object to be displayed as the center and the distance from the center to a set main observation point as the radius; each ring of the spatial viewfinding model containing the main observation point comprises a plurality of virtual observation points, each observation point corresponds to one view angle, the view angle corresponding to the main observation point is a set view angle, and the current view angle comprises the set view angle; and the pre-stored image data of the object to be displayed corresponding to each view angle comprises image data of the object to be displayed corresponding to the position of each observation point;
the target view angle determining module is specifically configured to: determine the position of a target virtual observation point according to the offset view angle and the position of a current observation point corresponding to the current view angle, and take the view angle corresponding to the target virtual observation point as the target view angle; and
the target image data determining module is specifically configured to: determine the image data corresponding to the target virtual observation point according to the position of the target virtual observation point and the pre-stored image data of the object to be displayed corresponding to the position of each observation point.
8. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1-6 when executing the computer program.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111543763.7A CN114399614B (en) | 2021-12-16 | 2021-12-16 | Three-dimensional display method and device for virtual object, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114399614A CN114399614A (en) | 2022-04-26 |
CN114399614B true CN114399614B (en) | 2024-11-05 |
Family
ID=81226483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111543763.7A Active CN114399614B (en) | 2021-12-16 | 2021-12-16 | Three-dimensional display method and device for virtual object, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114399614B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110400337A (en) * | 2019-07-10 | 2019-11-01 | 北京达佳互联信息技术有限公司 | Image processing method, device, electronic equipment and storage medium |
CN111888762A (en) * | 2020-08-13 | 2020-11-06 | 网易(杭州)网络有限公司 | Method for adjusting visual angle of lens in game and electronic equipment |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109242978B (en) * | 2018-08-21 | 2023-07-07 | 百度在线网络技术(北京)有限公司 | Viewing angle adjusting method and device for three-dimensional model |
CN110045827B (en) * | 2019-04-11 | 2021-08-17 | 腾讯科技(深圳)有限公司 | Method and device for observing virtual article in virtual environment and readable storage medium |
2021-12-16 | CN202111543763.7A | patent CN114399614B/en | active Active
Also Published As
Publication number | Publication date |
---|---|
CN114399614A (en) | 2022-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3629290B1 (en) | Localization for mobile devices | |
CN109829981B (en) | Three-dimensional scene presentation method, device, equipment and storage medium | |
US10863310B2 (en) | Method, server and terminal for information interaction | |
CN111984114B (en) | Multi-person interaction system based on virtual space and multi-person interaction method thereof | |
JP6530794B2 (en) | Spatial object search sorting method and cloud system | |
CN112101209B (en) | Method and apparatus for determining world coordinate point cloud for roadside computing device | |
US20150169186A1 (en) | Method and apparatus for surfacing content during image sharing | |
KR102111079B1 (en) | Display of objects based on multiple models | |
US20150140974A1 (en) | Supporting the provision of services | |
CN111766951A (en) | Image display method and apparatus, computer system, and computer-readable storage medium | |
US20140225921A1 (en) | Adding user-selected mark-ups to a video stream | |
KR102337209B1 (en) | Method for notifying environmental context information, electronic apparatus and storage medium | |
CN114398117A (en) | Virtual object display method and device, electronic equipment and computer storage medium | |
CN113377472A (en) | Account login method, three-dimensional display device and server | |
JP2023504956A (en) | Performance detection method, device, electronic device and computer readable medium | |
CN113483774A (en) | Navigation method, navigation device, electronic equipment and readable storage medium | |
KR101466132B1 (en) | System for integrated management of cameras and method thereof | |
US20150109328A1 (en) | Techniques for navigation among multiple images | |
CN108919951B (en) | Information interaction method and device | |
CN114399614B (en) | Three-dimensional display method and device for virtual object, electronic equipment and storage medium | |
CN109034214B (en) | Method and apparatus for generating a mark | |
CN108401163A (en) | A kind of method, apparatus and OTT operation systems for realizing VR live streamings | |
US12086920B1 (en) | Submesh-based updates in an extended reality environment | |
US20230066708A1 (en) | Identity information presentation method and apparatus, terminal, server, and storage medium | |
US20240157240A1 (en) | System and method for generating notifications for an avatar to conduct interactions within a metaverse |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |