CN111918114A - Image display method, image display device, display equipment and computer readable storage medium - Google Patents
- Publication number
- CN111918114A (application CN202010761742.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- interactive
- display
- virtual effect
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N21/431 — Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/44218 — Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
- H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
Abstract
Embodiments of the disclosure provide an image display method comprising: acquiring multiple frames of interactive images of interactive objects in a real scene through a plurality of first image acquisition devices; selecting a target interactive image from the multiple frames based on a specific selection condition; determining virtual effect data corresponding to a display object based on the target interactive image, and rendering the virtual effect data to obtain a virtual effect image; and displaying, on a display device, an augmented reality effect in which the virtual effect image is superimposed on the display object. Embodiments of the disclosure also provide an image display apparatus, a display device, and a computer-readable storage medium.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image display method and apparatus, a display device, and a computer-readable storage medium.
Background
At present, at large exhibitions such as historical-relic exhibitions, automobile shows, building displays at construction sites, or architectural-planning sand-table displays, visitors can often only see the physical exhibit; related information about the exhibit largely depends on a guide's explanation or a separate promotional video, so the display effect is not rich enough. In addition, when a display device currently presents an exhibit, a visitor can only control the displayed object by touching the display screen or clicking buttons on it, a control method that is cumbersome and inflexible.
Disclosure of Invention
The embodiment of the disclosure provides an image display method and device, a display device and a computer readable storage medium.
The technical scheme of the embodiment of the disclosure is realized as follows:
the embodiment of the present disclosure provides an image display method, including:
acquiring multiple frames of interactive images of interactive objects in a real scene through a plurality of first image acquisition devices;
selecting a target interactive image from the multiple frames of interactive images based on a specific selection condition;
determining virtual effect data corresponding to a display object based on the target interactive image, and rendering the virtual effect data to obtain a virtual effect image;
and displaying, on a display device, an augmented reality effect in which the virtual effect image is superimposed on the display object.
An embodiment of the present disclosure provides an image display device, including:
an acquisition unit configured to acquire multiple frames of interactive images of interactive objects in a real scene through the first image acquisition devices;
a selection unit configured to select a target interactive image from the multiple frames of interactive images based on a specific selection condition;
a processing unit configured to determine virtual effect data corresponding to a display object based on the target interactive image and render the virtual effect data to obtain a virtual effect image;
and a display unit configured to display an augmented reality effect in which the virtual effect image is superimposed on the display object.
The disclosed embodiments provide a display device comprising a camera, a display, a processor and a memory for storing a computer program capable of running on the processor;
the camera, the display, the processor and the memory are connected through a communication bus;
the processor, in combination with the camera and the display, implements the method provided by the embodiments of the present disclosure when running the computer program stored in the memory.
The disclosed embodiments also provide a computer-readable storage medium having a computer program stored thereon, the computer program being executed by a processor to implement the methods provided by the disclosed embodiments.
The embodiment of the disclosure has the following beneficial effects:
the image display method provided by the embodiment of the disclosure acquires multi-frame interactive images of interactive objects in a real scene through a plurality of first image acquisition devices; further, based on a specific selection condition, selecting a target interactive image from the multi-frame interactive images; determining virtual effect data corresponding to the display object based on the target interactive image, and rendering the virtual effect data to obtain a virtual effect image; and finally, displaying the augmented reality effect of the virtual effect image and the display object which are superposed on each other on the display equipment. Therefore, the display equipment acquires the interactive images through the camera and selects one target interactive image from the multiple interactive images to respond according to a certain strategy, so that the interactive objects can operate the display equipment without contacting, and the interaction flexibility and accuracy are improved; in addition, the display equipment can obtain the virtual effect data of the display object according to the target interactive image, and adds the virtual effect on the display object for display, so that the display effect of the image is enhanced, and the richness of image display is improved.
Drawings
Fig. 1-1 is a schematic diagram of an optional architecture of an image display system provided by an embodiment of the present disclosure;
Fig. 1-2 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure;
Fig. 2 is a flowchart of an image display method provided by an embodiment of the present disclosure;
Fig. 3-1 is a first schematic diagram of a display device provided by an embodiment of the present disclosure;
Fig. 3-2 is a second schematic diagram of a display device provided by an embodiment of the present disclosure;
Fig. 4-1 is a first schematic diagram of a display effect provided by an embodiment of the present disclosure;
Fig. 4-2 is a second schematic diagram of a display effect provided by an embodiment of the present disclosure;
Fig. 5 is a third schematic diagram of a display effect provided by an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an image display apparatus provided by an embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of a display device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure more clearly understood, the present disclosure is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not intended to limit the disclosure.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present disclosure belongs. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the disclosure.
Augmented Reality (AR) technology fuses virtual information with the real world: through an AR device, a user can view virtual effects superimposed on a real scene, for example a virtual tree superimposed on a real campus playground, or virtual flying birds superimposed in the sky. How naturally such virtual effects are fused with the real scene determines the quality of the presentation in an augmented reality scene.
Embodiments of the present disclosure provide an image display method, apparatus, device, and computer-readable storage medium that can improve the richness of image display and the flexibility of display control. The image display method is applied to a display device; exemplary applications of that display device are described below. The display device may be implemented as various types of terminals such as AR glasses, a notebook computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device). In the disclosed embodiments, the display device comprises a display screen implemented as a movable display screen: the screen may move along a preset sliding track, move on a movable sliding support, or be moved by a user holding the display device.
Next, an exemplary application in which the display device is implemented as a terminal is explained. In this case, the terminal can obtain virtual effect data for a real-scene object from a preset three-dimensional virtual scene in its internal storage, based on a display object in the real-scene image, and present an AR effect in which the virtual effect is superimposed on the display object in the real scene. Alternatively, the terminal can interact with a cloud server and obtain the virtual effect data from a preset three-dimensional virtual scene pre-stored on that server. Below, the image display system is described using the scenario in which the terminal obtains virtual effect data by interacting with a server and presents the AR image effect for a display object.
Referring to Fig. 1-1, a schematic diagram of an optional architecture of the image display system 100 provided by an embodiment of the present disclosure: to support a display application, a terminal 400 (terminals 400-1 and 400-2 are shown as examples) is connected to a server 200 through a network 300, where the network 300 may be a wide area network, a local area network, or a combination of the two. In a real display scene, such as a historical-relic display, a sand-table display, or a building display at a construction site, the terminal 400 may be a display device arranged on a preset slide rail, or a hand-held mobile phone with a camera.
The terminal 400 is configured to acquire a real-scene image at its current position through an image acquisition unit; determine virtual effect data matching a display object included in the real-scene image; render, using the virtual effect data, a virtual effect at the display position associated with the display object in the real-scene image; and show in the graphical interface 410 an augmented reality (AR) effect in which the virtual effect is superimposed on the real-scene image.
For example, when the terminal 400 is implemented as a mobile phone, a preset display application on the phone may be started; the application calls the camera to capture a real-scene image and initiates a data request to the server 200 based on the display object included in that image. After receiving the request, the server 200 determines the virtual effect data matching the display object from a preset virtual three-dimensional scene model pre-stored in the database 500 and returns it to the terminal 400. After obtaining the virtual effect data, the terminal 400 renders the virtual effect with a rendering tool and superimposes it on the target area of the display object in the real-scene image, obtaining an AR effect image that combines the virtual and the real; this AR effect image is finally displayed on the graphical interface of the terminal 400.
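The terminal-server exchange just described can be sketched as follows. Everything here is a hypothetical stand-in: `PRESET_SCENE_DB` plays the role of database 500's preset three-dimensional virtual scene model, `request_effect_data` stands in for the network request to server 200, and `render_effect` stands in for the rendering tool; none of these names come from the patent.

```python
# Stand-in for database 500's preset 3D virtual scene model (hypothetical keys/values).
PRESET_SCENE_DB = {
    "bronze_vessel": {"effect": "virtual tag", "text": "caliber 75.6 cm"},
}

def request_effect_data(display_object_id):
    # In the real system this would be a network request to server 200.
    return PRESET_SCENE_DB.get(display_object_id)

def render_effect(effect_data):
    # Stand-in for the rendering tool producing a virtual effect image.
    return f"{effect_data['effect']}[{effect_data['text']}]"

def show_ar(display_object_id):
    effect_data = request_effect_data(display_object_id)
    if effect_data is None:
        return "real scene only"            # no matching effect: show the plain image
    virtual_effect_image = render_effect(effect_data)
    # Superimpose the virtual effect on the display object's target area.
    return f"real scene + {virtual_effect_image}"

print(show_ar("bronze_vessel"))  # → real scene + virtual tag[caliber 75.6 cm]
```

The example reuses the "caliber 75.6 cm" label that appears later in the description of Fig. 4-2 as a concrete virtual tag.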
In some embodiments, the server 200 may be an independent physical server, may also be a server cluster or a distributed system formed by a plurality of physical servers, and may also be a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a network service, cloud communication, a middleware service, a domain name service, a security service, a CDN, and a big data and artificial intelligence platform. The terminal 400 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, and the like. The terminal and the server may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present disclosure is not limited thereto.
By way of example, the following illustrates an application scenario to which the embodiments of the present disclosure are applicable.
Fig. 1-2 is a schematic diagram of an application scenario provided by an embodiment of the present disclosure. As shown in Fig. 1-2, the display device may include a movable display screen 101 that can be arranged around a number of exhibits in an exhibition. A rear camera configured on the movable display screen 101 can photograph the exhibits, and the screen can display an exhibit together with a virtual effect on it. The virtual effect may be at least one of: introduction information for the exhibit, internal detail information, the exhibit's outline, and a virtual interpreter for the exhibit. The movable display screen 101 is also provided with a front camera that photographs an interactive object (such as a visitor) in front of it; the screen can then recognize instructions issued by the interactive object in the captured image, enabling the display and adjustment of the exhibit's virtual effect.
Based on the above image display system and application scenario, the image display method provided by the embodiments of the present disclosure is described with reference to the flowchart shown in Fig. 2; the method includes S210 to S240.
S210: acquiring multiple frames of interactive images of interactive objects in the real scene through a plurality of first image acquisition devices.
The image display method provided in the embodiment of the disclosure is applied to a display device, wherein a display screen of the display device is a movable screen. The display screen of the display device may move on a preset sliding track as shown in fig. 3-1, or may slide by being fixed on a movable sliding support as shown in fig. 3-2.
In the embodiments of the present disclosure, the display device may include a plurality of first image acquisition devices capable of capturing images of interactive objects in the real scene. An interactive object is any object that can exchange information with the display device, for example a visitor viewing the displayed content or a user of the terminal device.
In this embodiment, the display device may process the interactive image acquired by the first image acquisition device, identify information such as pose information and expression information of the interactive object in the interactive image, and respond to an interactive instruction (for example, an instruction to enlarge a size of a display object in the display device and switch a virtual effect of the display object) sent by the interactive object, thereby implementing information interaction between the interactive object and the display device.
In some embodiments of the present disclosure, the first image capturing device may be a front camera of the display device, or may also be a rear camera of the display device, or may also be a camera disposed on a side surface of the display device. The setting position of the first image acquisition device in the display equipment is not limited in the embodiment of the disclosure.
In some embodiments of the present disclosure, a plurality of first image capturing devices may correspond to a plurality of frames of interactive images one to one, that is, one first image capturing device may capture one frame of interactive image. In addition, in the embodiment of the present disclosure, each of the plurality of first image capturing devices may also correspond to a plurality of frames of interactive images, that is, one first image capturing device may capture a plurality of frames of interactive images.
It should be noted that, in the embodiment of the present disclosure, the first image capturing device may be a fixed camera or a movable camera. The embodiment of the present disclosure does not limit the type of the first image capturing device.
S220: selecting a target interactive image from the multiple frames of interactive images based on a specific selection condition.
In practical applications, several interactive objects around the display device may issue interactive instructions at the same time, so the display device cannot otherwise determine which interactive object's instruction to respond to.
Based on this, in the embodiment of the present disclosure, the display device may select one target interactive image from the collected multiple frames of interactive images according to a specific selection condition, so as to respond to the target interactive image. That is, the display apparatus may select an image satisfying a specific selection condition as a target interactive image from among the plurality of interactive images.
In some embodiments of the present disclosure, in order to improve the accuracy of the response, the display device may use factors such as image quality of an interactive object in the interactive image, and quality of a captured face as a specific selection condition for selecting the target interactive image.
In some embodiments of the present disclosure, the specific selection condition may include at least one of the following conditions:
the definition of the target interactive image is greater than a definition threshold;
the number of interactive objects in the target interactive image is greater than a first threshold;
the number of interactive objects in the target interactive image whose line of sight is on the display screen of the display device is greater than a second threshold;
the target interactive image matches a preset image.
The definition threshold, the first threshold, and the second threshold may be preset fixed values, or values determined dynamically while processing the multiple frames of interactive images; for example, the definition threshold may be the second-highest definition value among the frames. The disclosed embodiments are not limited in this regard.
For example, the display device may compare the frames and select the one with the highest definition as the target interactive image; or select the frame with the largest number of interactive objects; or select the frame with the largest number of interactive objects whose gaze is focused on the display screen; or select the one frame that matches a preset image.
In another example, the display device may select as the target interactive image the frame with both the highest definition and the largest number of interactive objects, or the frame with the highest definition and the largest number of interactive objects whose gaze is focused on the display screen.
In some embodiments of the present disclosure, when the specific selection condition includes two or more conditions, S220 (selecting the target interactive image from the multiple frames of interactive images based on the specific selection condition) may be implemented as follows:
S2201, determining the priority order of the conditions in the specific selection condition;
S2202, selecting the target interactive image from the multiple frames of interactive images according to that priority order.
It is to be understood that, when a plurality of selection conditions are included in a specific selection condition, the display apparatus may select a target interactive image from the plurality of frames of interactive images in order of priority of each condition.
For example, suppose the specific selection condition includes three conditions, in priority order from high to low: the definition of the target interactive image is greater than the definition threshold; the number of interactive objects in the target interactive image is greater than the first threshold; and the number of interactive objects whose gaze is on the display screen is greater than the second threshold. After acquiring the multiple frames of interactive images, the display device first applies the highest-priority condition and selects the frames whose definition exceeds the definition threshold. If exactly one frame qualifies, it becomes the target interactive image; if several frames qualify, the display device applies the second condition and, from those frames, selects the ones in which the number of interactive objects exceeds the first threshold.
Similarly, if only one frame passes the second condition, it becomes the target interactive image; otherwise the display device applies the third condition and, from the remaining frames, selects the one in which the number of interactive objects whose gaze is on the display screen exceeds the second threshold, and takes that frame as the target interactive image.
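The prioritized cascade of S2201/S2202 can be sketched as a chain of filters: each condition narrows the surviving frames, and the process stops as soon as exactly one frame remains. The frame fields, condition lambdas, and threshold values below are illustrative assumptions, not values from the disclosure.

```python
def cascade_select(frames, conditions):
    """Apply prioritized conditions (highest priority first); each condition
    filters the surviving frames, stopping when exactly one remains."""
    survivors = list(frames)
    for condition in conditions:
        filtered = [f for f in survivors if condition(f)]
        if len(filtered) == 1:
            return filtered[0]          # unique survivor: target interactive image
        if filtered:
            survivors = filtered        # narrow only if at least one frame passed
    return survivors[0] if survivors else None  # tie after all conditions: pick one

# Frames as dicts: definition, interactive-object count, gaze-on-screen count.
frames = [
    {"id": "A", "sharpness": 0.9, "objects": 3, "gazing": 1},
    {"id": "B", "sharpness": 0.8, "objects": 3, "gazing": 2},
    {"id": "C", "sharpness": 0.4, "objects": 5, "gazing": 4},
]
conditions = [                        # priority order, high to low
    lambda f: f["sharpness"] > 0.5,   # definition > definition threshold
    lambda f: f["objects"] > 2,       # object count > first threshold
    lambda f: f["gazing"] > 1,        # gaze count > second threshold
]
print(cascade_select(frames, conditions)["id"])  # → B
```

Frame C is eliminated by the definition condition despite having the most interactive objects; A and B tie on the first two conditions, and the gaze condition breaks the tie in favor of B.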
S230: determining virtual effect data corresponding to the display object based on the target interactive image, and rendering the virtual effect data to obtain a virtual effect image.
The display object refers to the image content corresponding to a real exhibit in the real-scene image. Specifically, the display device may capture the real scene through a second image acquisition device to obtain a real-scene image and present it on its display screen.
It should be noted that the second image capturing device is different from the first image capturing device mentioned in S210. The second image acquisition device is used for acquiring images corresponding to the display objects in the real scene, and the first image acquisition device is used for acquiring images corresponding to the interactive objects in the real scene. Illustratively, the first image acquisition device is a front camera of the display device, and the second image acquisition device is a rear camera of the display device.
On this basis, when the interactive object looks at the display screen of the display device, it can see the display object presented there; the interactive object can then issue an interactive instruction aimed at that display object, and the display device responds to the instruction by capturing the interactive image of the interactive object.
In some embodiments of the present disclosure, the display device may identify image content of the target interaction image, determine an interaction instruction (e.g., a particular gesture, or a particular posture) of the interaction object, and determine virtual effect data to present in response to the interaction instruction.
In some embodiments of the present disclosure, the display device may further determine a target first image capturing device that needs to respond according to the target interaction image, and then continue to capture consecutive image frames through the target first image capturing device, so as to identify an interaction instruction (e.g., a specific gesture track) of an interaction object in the consecutive image frames, and determine virtual effect data to be presented in response to the interaction instruction.
In the disclosed embodiment, the virtual effect data is a set of virtual image data, which may be rendering parameters for rendering a virtual effect by a rendering tool. A virtual effect may be understood as a virtual object that is represented in an image of a real scene.
In an embodiment of the present disclosure, the virtual effect data may include at least one of:
a rendering model of the display object, a virtual explanation object, a virtual object outline model, a virtual object detail model, and a virtual tag.
The rendering model of the display object is a three-dimensional virtual model of the display object constructed based on the image information and the depth information of the display object. The three-dimensional virtual model can present the object in the real scene at a 1:1 scale; that is, if the three-dimensional virtual model were placed into the world coordinate system in which the real scene is located, it would completely coincide with the target display object in the real scene.
The virtual explanation object refers to a virtual object that can interact with an interactive object located in front of the display device, such as a virtual interpreter and a virtual robot. For example, referring to a display interface diagram of an exemplary display device shown in fig. 4-1, the virtual explanation object may be a virtual interpreter 402 that explains a presentation object 401 in an image of a real scene.
The virtual object outline model is a virtual image that displays the outline of an object in the real scene image in a highlighted manner. For example, referring to the display interface diagram of an exemplary display device shown in fig. 4-1, the virtual object outline model may be a virtual contour line 403 outlining a display object 401 in the real scene image 400.
The virtual object detail model refers to the virtual detail display of the display object in the real scene image; for example, referring to the display interface schematic of an exemplary display device shown in fig. 4-2, the virtual object detail model may be a virtual detail presentation 405 inside a cultural relic 404 presented in the real scene image 400.
The virtual tag is used for displaying additional information of a display object in a real scene image; for example, referring to the display interface schematic of an exemplary display device shown in fig. 4-2, the virtual label may be detailed introduction information 406 corresponding to a cultural relic 404 shown in the real scene image, wherein the detailed introduction information may be "caliber 75.6 cm".
In the embodiment of the disclosure, after determining that the target interaction image is to be responded, the display device may acquire virtual effect data corresponding to the display object from the local storage space, or may acquire virtual effect data corresponding to the display object from a third-party device, for example, a cloud server. The embodiment of the present disclosure does not limit the manner of obtaining the virtual effect data.
S240, displaying, on the display device, an augmented reality effect in which the virtual effect image and the display object are superimposed on each other.
In the embodiment of the present disclosure, after the virtual effect data is rendered to obtain the virtual effect image, the display object in the real scene image and the virtual effect image may be superimposed, and the superimposed augmented reality effect may be displayed on the display device. In this way, when the user views the acquired real scene image on the display screen of the display device, the user can view the display object in the real scene image and the virtual effect superposed on the display object.
Therefore, the image display method provided by the embodiment of the present disclosure acquires multi-frame interactive images of interactive objects in a real scene through a plurality of first image acquisition devices; selects a target interactive image from the multi-frame interactive images based on a specific selection condition; determines virtual effect data corresponding to the display object based on the target interactive image and renders the virtual effect data to obtain a virtual effect image; and finally displays, on the display device, an augmented reality effect in which the virtual effect image and the display object are superimposed. In this way, the display device acquires interactive images through its cameras and, according to a certain strategy, selects one target interactive image from the multiple interactive images to respond to, so that the interactive object can operate the display device without touching it, which improves the flexibility and accuracy of the interaction. In addition, the display device can obtain the virtual effect data of the display object according to the target interactive image and superimpose the virtual effect on the display object for display, which enhances the display effect of the image and improves the richness of the image display.
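The overall flow S210–S240 can be sketched as a small pipeline. The stage functions are passed in as parameters because the disclosure does not fix any particular implementation for them; this is a structural illustration only:

```python
def display_pipeline(capture_all, select_target, determine_effect,
                     render, display):
    """Structural sketch of S210-S240: capture multi-frame interactive
    images, select one target image, determine and render the virtual
    effect data, then display the superimposed AR result. Every stage
    is an injected callable (an assumption of this sketch)."""
    frames = capture_all()                  # S210: first image devices
    target = select_target(frames)          # S220: specific selection condition
    effect_data = determine_effect(target)  # S230: virtual effect data
    effect_image = render(effect_data)      # S230: rendering
    return display(effect_image)            # S240: superimposed AR effect
```

Any concrete device would substitute camera drivers, an image-selection strategy, a recognition model, and a rendering tool for the placeholder callables.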
Based on the above embodiments, there are various ways of determining the virtual effect data corresponding to the display object based on the target interactive image in S230, and two ways are described in detail below: mode one and mode two.
In the first mode, S230 determines virtual effect data corresponding to the display object based on the target interactive image, and may be implemented in the following manner:
S2301, identifying image content of the target interactive image, and determining posture information of an interactive object in the target interactive image;
S2302, determining virtual effect data corresponding to the display object based on the posture information.
In the embodiment of the disclosure, after the target interactive image is selected and obtained, the display device may identify the image content of the target interactive image, determine an interactive instruction sent by the interactive object, and determine virtual effect data corresponding to the display object based on the interactive instruction.
In some embodiments of the present disclosure, the display device may determine the interaction instruction according to the recognized posture information of the interaction object. The gesture information may include gesture information and/or posture information.
Illustratively, the display device recognizes from the target interactive image that the interactive object is making an "OK" gesture, which is taken to mean that the interactive object expects to see the internal details of the display object in the current real scene image; the display device may therefore obtain the virtual object detail model corresponding to the current display object according to the "OK" gesture, so that the rendered virtual effect image can present the internal details of the display object.
Therefore, the embodiment of the present disclosure can respond quickly by identifying the image content of a single target interactive image.
In the second mode, S230 determines virtual effect data corresponding to the display object based on the target interactive image, and may also be implemented in the following manner:
S2301', determining a target first image acquisition device corresponding to the target interactive image based on the target interactive image;
S2302', acquiring at least two frames of images to be processed through the target first image acquisition device;
S2303', determining virtual effect data corresponding to the display object based on the at least two frames of images to be processed.
In practical applications, a dynamic interactive instruction of the interactive object cannot be recognized from a single target interactive image alone; an example is an interactive instruction by which the user enlarges the virtual tag. Therefore, the user's operation over a period of time needs to be determined from a plurality of images.
In the embodiment of the present disclosure, the display device may determine, according to the target interactive image, the first image acquisition device with the better acquisition effect. Here, the target first image acquisition device corresponding to the target interactive image is that first image acquisition device.
In some embodiments of the present disclosure, after determining the target first image capturing device, the display device controls the first image capturing device to continue capturing multiple frames of images to be processed, determines an interaction instruction according to posture change information of an interaction object in the multiple frames of images to be processed, and determines virtual effect data corresponding to the display object based on the interaction instruction.
Illustratively, the display device acquires two continuous frames of images to be processed through the target first image acquisition device, and determines that the palm of the interactive object slides rightwards by identifying and comparing the two frames of images to be processed, namely, the interactive object is considered to expect to switch the additional information of the display object in the current real scene image; based on this, the display device can obtain the virtual tag to be displayed corresponding to the current display object according to the posture change information of the interaction object sliding rightwards by the palm, so as to display the additional information of the display object.
Therefore, the target first image acquisition device to be responded to can be determined, and the multi-frame images acquired by that device can be responded to, so that the interactive instruction of the interactive object can be accurately determined and the accuracy of the response is improved.
Next, an exemplary manner of acquiring the presentation object is described.
In the embodiment of the present disclosure, before determining, based on the target interaction image, virtual effect data corresponding to the display object, the following steps may be further performed:
step a, acquiring a real scene image through a second image acquisition device;
b, identifying the image content of the real scene image to obtain a display object;
and c, displaying the display object on the display device.
In the embodiment of the present disclosure, the display device may acquire an image of the current real scene in real time through the second image acquisition apparatus. The real scene may be a building indoor scene, a street scene, a specific object, and the like, in which a virtual object can be superimposed, and the virtual object is superimposed in the real scene to present an augmented reality effect.
In the embodiment of the disclosure, the display device may identify image content in the real scene image, obtain a display object in the real scene image, and display the display object. Therefore, the interactive object can watch the display object from the display screen of the display device and interact with the display object to present the virtual effect of the display object and enhance the display effect of the display object.
Based on the foregoing embodiments, in some embodiments of the present disclosure, the display screen of the display device may be a transparent display screen or a non-transparent display screen.
When the display screen of the display device is a non-transparent display screen, a monocular camera or a binocular camera may be disposed on the back side of the non-transparent display screen (i.e., the side not disposed with the display screen) for collecting a display object facing the back side of the non-transparent display screen, and an augmented reality AR effect in which a real scene image corresponding to the display object and a virtual effect are superimposed is displayed through the display screen on the front side of the non-transparent display screen. Thus, the interactive object can be positioned on the front side of the non-transparent display screen, view the augmented reality AR effect of the real scene image and the virtual effect, and interact with the display object in the real scene image.
When the display screen of the display device is the transparent display screen, a monocular camera or a binocular camera can be arranged on one side of the transparent display screen and used for collecting the display object located on one side of the transparent display screen. The display device displays the virtual effect corresponding to the display object on the transparent screen by identifying the collected display object. Based on this, referring to the display interface schematic diagram shown in fig. 5, the interactive object may view the display object located behind the transparent display screen through the transparent display screen, and view the virtual effect superimposed on the display object from the transparent display screen. Thus, the augmented reality AR effect that the real scene and the virtual effect are overlapped is realized. Thus, the interactive object can be positioned at any side of the transparent display screen, view the augmented reality AR effect of the real scene image and the virtual effect, and interact with the display object in the real scene image.
Based on the foregoing embodiments, in some embodiments of the present disclosure, before the rendering of the virtual effect data to obtain the virtual effect image in S230, the following steps may also be performed:
step one, determining attribute information of an interactive object in the target interactive image based on the target interactive image;
step two, determining the display position of the virtual effect data based on the attribute information.
In some embodiments of the present disclosure, the attribute information may include at least one of:
the height information of the interactive object, the sight angle information of the interactive object and the position information of the interactive object in the target interactive image.
Correspondingly, rendering the virtual effect data in S230 to obtain a virtual effect image may be implemented in the following manner:
and rendering the virtual effect data to obtain a virtual effect image based on the display position of the virtual effect data.
That is, the display device may identify the image content of the target interactive image and determine the interactive object together with its height, line-of-sight angle, and position in the target interactive image, thereby determining the position at which the virtual effect data is displayed. In this way, the display position of the virtual effect data can be matched to the height, line-of-sight angle, and position of the interactive object, so that the interactive object views the virtual effect data with the best display effect.
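One simple way to turn those attributes into a display position is sketched below. The geometry (a fixed eye-height ratio and a flat projection of the gaze angle) is a deliberate simplification assumed for the example, not the disclosed computation:

```python
# Illustrative placement of the virtual effect from the interactive
# object's attributes: height, line-of-sight angle, and horizontal
# position in the target interactive image. All constants are assumptions.
import math

def effect_display_position(height_cm: float, gaze_angle_deg: float,
                            object_x: float) -> tuple[float, float]:
    """Place the effect at the viewer's horizontal position, with the
    vertical position raised/lowered along the gaze angle."""
    eye_height = height_cm * 0.93  # rough eye-height estimate (assumption)
    # Project the gaze angle over a nominal 100 cm viewing distance.
    y = eye_height + math.tan(math.radians(gaze_angle_deg)) * 100.0
    return (object_x, y)
```

With a level gaze the effect sits at the estimated eye height; looking up or down shifts it vertically, so a shorter viewer or a downward gaze moves the effect lower on the screen.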
Based on the foregoing embodiments, in some embodiments of the present disclosure, the acquiring, in S210, multiple frames of interactive images by multiple first image acquisition apparatuses includes:
acquiring multi-frame interactive images through a plurality of first image acquisition devices according to a preset time interval;
or,
and controlling a plurality of first image acquisition devices to acquire multi-frame interactive images under the condition that the current scene is detected to be changed.
In practical applications, the interactive objects around the display device may be in a changing state; therefore, in the embodiment of the present disclosure, the display device may continuously acquire multi-frame interactive images at a certain time interval and respond based on those images. In addition, when the current scene changes, the interactive object may be about to operate the display object in the real scene image; therefore, when detecting that the current scene has changed, the display device in the embodiment of the present disclosure may control the plurality of first image acquisition devices to acquire multi-frame interactive images and respond based on the acquired images. This improves the flexibility of the display device's response.
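The two capture triggers described above, a preset time interval or a detected scene change, reduce to one small predicate. Times are plain numbers here to keep the sketch self-contained:

```python
# Sketch of the two capture triggers: a preset time interval elapsing,
# or a detected change in the current scene. The time representation
# (seconds as floats) is an assumption of this sketch.
def should_capture(now: float, last_capture: float, interval: float,
                   scene_changed: bool) -> bool:
    """Capture when the preset interval has elapsed or the scene changed."""
    return scene_changed or (now - last_capture) >= interval
```

A device loop would call this each tick and, when it returns true, instruct the plurality of first image acquisition devices to acquire a new batch of interactive images.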
In some embodiments of the present disclosure, the augmented reality effect includes:
and displaying the virtual effect object corresponding to the virtual effect data in the preset range of the display object, wherein the display position of the virtual effect object is matched with the attribute information of the interactive object in the target interactive image.
That is to say, the embodiment of the present disclosure can add a virtual effect to the display object in the real scene image, enhancing the display effect of the target display object. Meanwhile, the display position of the virtual effect data can be matched to the height, line-of-sight angle, and position of the interactive object, so that the interactive object views the virtual effect data with the best display effect, improving the viewing experience of the interactive object.
Based on the foregoing embodiments, an embodiment of the present disclosure provides an image display apparatus, which may be applied to the display device described above, and fig. 6 is a schematic diagram of a composition structure of the image display apparatus provided in the embodiment of the present disclosure, as shown in fig. 6, where the apparatus 600 includes:
the acquisition unit 601 is configured to acquire multi-frame interactive images of interactive objects in a real scene through a plurality of first image acquisition devices;
a selecting unit 602, configured to select a target interactive image from the multiple frames of interactive images based on a specific selecting condition;
a processing unit 603, configured to determine, based on the target interaction image, virtual effect data corresponding to a display object, and render the virtual effect data to obtain a virtual effect image;
a display unit 604, configured to display an augmented reality effect in which the virtual effect image and the display object are superimposed.
In some embodiments of the present disclosure, the processing unit 603 is specifically configured to perform recognition processing on image content of the target interactive image, and determine posture information of an interactive object in the target interactive image; and determining virtual effect data corresponding to the display object based on the attitude information.
In some embodiments of the present disclosure, the processing unit 603 is specifically configured to determine, based on the target interaction image, a target first image capturing device corresponding to the target interaction image;
acquiring at least two frames of images to be processed through the target first image acquisition device;
and determining virtual effect data corresponding to the display object based on the at least two frames of images to be processed.
In some embodiments of the present disclosure, the acquiring unit 601 is further configured to acquire a real scene image through a second image acquiring device;
a processing unit 603, configured to identify image content of the real scene image, and obtain the display object;
a display unit 604, configured to display the display object on the display device.
In some embodiments of the present disclosure, the processing unit 603 is configured to determine, based on the target interaction image, attribute information of an interaction object in the target interaction image; determining a display position of the virtual effect data based on the attribute information; and rendering the virtual effect data to obtain a virtual effect image based on the display position of the virtual effect data.
In some embodiments of the present disclosure, the attribute information includes at least one of:
the height information of the interactive object, the sight angle information of the interactive object and the position information of the interactive object in the target interactive image.
In some embodiments of the present disclosure, the specific selection condition includes at least one of the following conditions:
the definition of the target interactive image is greater than a definition threshold value;
the number of the interactive objects in the target interactive image is greater than a first threshold value;
the number of the interactive objects in the target interactive image whose lines of sight are directed at the display screen of the display device is greater than a second threshold;
and the target interactive image is matched with a preset image.
In some embodiments of the present disclosure, when the specific selection condition includes more than two conditions, the selecting unit 602 is specifically configured to determine a priority order of each condition in the specific selection condition; and selecting a target interactive image from the multi-frame interactive images according to the priority sequence.
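The priority-ordered selection performed by the selecting unit 602 can be sketched as successive filtering: candidates are narrowed by each condition in priority order until one image remains or all conditions are exhausted. The filtering strategy below is one reasonable reading of the description, not the only possible one:

```python
# Hedged sketch of selecting a target interactive image when the specific
# selection condition includes several conditions with a priority order.
# "conditions" are predicates sorted by priority, highest first (assumption).
def select_target_image(frames: list, conditions: list):
    """Filter candidates by each condition in priority order; keep the
    previous candidate set when a condition would eliminate everything."""
    candidates = list(frames)
    for condition in conditions:
        passing = [f for f in candidates if condition(f)]
        if passing:
            candidates = passing
        if len(candidates) == 1:
            break
    return candidates[0] if candidates else None
```

For example, with sharpness given higher priority than the number of interactive objects, a sharp frame showing two people beats a sharper frame showing none.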
In some embodiments of the present disclosure, the acquiring unit 601 is specifically configured to acquire, by the multiple first image acquiring devices, the multiple frames of interactive images according to a preset time interval; or, under the condition that the current scene is detected to be changed, controlling the plurality of first image acquisition devices to acquire the plurality of frames of interactive images.
In some embodiments of the present disclosure, the augmented reality effect comprises:
and displaying a virtual effect object corresponding to the virtual effect data in the preset range of the display object, wherein the display position of the virtual effect object is matched with the attribute information of the interactive object in the target interactive image.
In some embodiments of the present disclosure, the display screen of the display device moves on a preset slide rail.
In some embodiments of the present disclosure, the display screen of the display device is a transparent display screen or a non-transparent display screen.
It should be noted that the above description of the embodiment of the apparatus, similar to the above description of the embodiment of the method, has similar beneficial effects as the embodiment of the method. For technical details not disclosed in the embodiments of the apparatus of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be noted that, in the embodiment of the present disclosure, if the image display method is implemented in the form of a software functional module and is sold or used as a standalone product, it may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a terminal, a server, etc.) to execute all or part of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program codes, such as a USB disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware and software.
Accordingly, the embodiment of the present disclosure further provides a computer storage medium, where computer-executable instructions are stored on the computer storage medium, and the computer-executable instructions are used to implement the steps of the image display method provided by the foregoing embodiments.
Accordingly, an embodiment of the present disclosure provides a display device, fig. 7 is a schematic structural diagram of the display device in the embodiment of the present disclosure, and as shown in fig. 7, the display device 700 includes: a camera 701 and a display 702;
a memory 703 for storing a computer program;
the processor 704 is configured to implement the steps of the image display method provided in the foregoing embodiment in combination with the camera 701 and the display 702 when executing the computer program stored in the memory 703.
The display device 700 further includes: a communication bus 705. The communication bus 705 is configured to enable connective communication between these components.
In the embodiment of the present disclosure, the display 702 includes, but is not limited to, a liquid crystal display, an organic light emitting diode display, a touch display, and the like, and the disclosure is not limited herein.
The above description of the computer device and storage medium embodiments is similar to the description of the method embodiments above, with similar beneficial effects as the method embodiments. For technical details not disclosed in the embodiments of the computer apparatus and storage medium of the present disclosure, reference is made to the description of the embodiments of the method of the present disclosure.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present disclosure, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure. The above-mentioned serial numbers of the embodiments of the present disclosure are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present disclosure.
In addition, all the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program codes, such as a removable Memory device, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Alternatively, the integrated unit of the present disclosure may be stored in a computer-readable storage medium if it is implemented in the form of a software functional module and sold or used as a separate product. Based on such understanding, the technical solutions of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the methods described in the embodiments of the present disclosure. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (15)
1. An image display method, characterized in that the method comprises:
acquiring multi-frame interactive images of interactive objects in a real scene through a plurality of first image acquisition devices;
selecting a target interactive image from the multi-frame interactive images based on a specific selection condition;
determining virtual effect data corresponding to a display object based on the target interactive image, and rendering the virtual effect data to obtain a virtual effect image;
and displaying the augmented reality effect of the virtual effect image and the display object which are superposed on each other on display equipment.
2. The method of claim 1, wherein determining virtual effect data corresponding to a presentation object based on the target interaction image comprises:
identifying the image content of the target interactive image, and determining the posture information of an interactive object in the target interactive image;
and determining virtual effect data corresponding to the display object based on the attitude information.
3. The method of claim 1, wherein determining virtual effect data corresponding to a presentation object based on the target interaction image comprises:
determining a target first image acquisition device corresponding to the target interactive image based on the target interactive image;
acquiring at least two frames of images to be processed through the target first image acquisition device;
and determining virtual effect data corresponding to the display object based on the at least two frames of images to be processed.
4. The method according to any one of claims 1 to 3, wherein before determining the virtual effect data corresponding to the presentation object based on the target interaction image, the method further comprises:
acquiring a real scene image through a second image acquisition device;
identifying the image content of the real scene image to obtain the display object;
and displaying the display object on the display equipment.
5. The method according to any one of claims 1 to 4, wherein before the rendering the virtual effect data to obtain a virtual effect image, the method further comprises:
determining attribute information of an interactive object in the target interactive image based on the target interactive image;
determining a display position of the virtual effect data based on the attribute information;
the rendering the virtual effect data to obtain a virtual effect image includes:
and rendering the virtual effect data based on the display position of the virtual effect data to obtain the virtual effect image.
6. The method of claim 5, wherein the attribute information comprises at least one of:
height information of the interactive object, line-of-sight angle information of the interactive object, and position information of the interactive object in the target interactive image.
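Claims 5 and 6 derive a display position for the virtual effect from the object's attributes (height, line-of-sight angle, image position). A sketch of one plausible mapping, anchoring the effect a fixed offset along the line of sight at the object's height; the formula is an assumption, not the patent's prescribed rule:

```python
import math

def effect_position(height_m, gaze_angle_deg, obj_xy, offset_m=0.5):
    """Place the effect `offset_m` ahead of the object along its gaze direction.

    height_m:       attribute: height of the interactive object (meters)
    gaze_angle_deg: attribute: line-of-sight angle in the ground plane
    obj_xy:         attribute: (x, y) position of the object
    """
    angle = math.radians(gaze_angle_deg)
    x = obj_xy[0] + offset_m * math.cos(angle)
    y = obj_xy[1] + offset_m * math.sin(angle)
    return (x, y, height_m)  # z anchored at the object's height
```

The resulting position would then be fed to the renderer, per the amended claim 5 ("rendering ... based on the display position").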
7. The method according to claim 1, wherein the specific selection condition comprises at least one of the following conditions:
the definition (sharpness) of the target interactive image is greater than a definition threshold;
the number of the interactive objects in the target interactive image is greater than a first threshold value;
the number of interactive objects in the target interactive image whose lines of sight fall on the display screen of the display device is greater than a second threshold value;
and the target interactive image is matched with a preset image.
8. The method according to claim 7, wherein, when the specific selection condition comprises two or more conditions, the selecting a target interactive image from the multi-frame interactive images based on the specific selection condition further comprises:
determining the priority order of each condition in the specific selection conditions;
and selecting a target interactive image from the multi-frame interactive images according to the priority sequence.
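Claims 7 and 8 together describe ranked filtering: apply the conditions in priority order, narrowing the candidate frames at each step. A sketch, where the example conditions (sharpness, object count) and their ordering are illustrative choices, not fixed by the claims:

```python
def select_by_priority(frames, conditions):
    """`conditions` is ordered highest-priority first; each maps frame -> bool.
    Narrow the candidate set condition by condition, but never to empty,
    so lower-priority conditions cannot override higher-priority ones."""
    candidates = list(frames)
    for cond in conditions:
        passing = [f for f in candidates if cond(f)]
        if passing:
            candidates = passing
    return candidates[0] if candidates else None

frames = [
    {"sharpness": 0.9, "faces": 1},
    {"sharpness": 0.8, "faces": 3},
]
target = select_by_priority(
    frames,
    [lambda f: f["sharpness"] > 0.5,   # priority 1: definition threshold
     lambda f: f["faces"] > 2],        # priority 2: enough interactive objects
)
```

Here both frames clear the sharpness threshold, so the second condition decides, and the frame with three interactive objects is selected.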
9. The method according to any one of claims 1 to 8, wherein the acquiring multi-frame interactive images of interactive objects in a real scene through a plurality of first image acquisition devices comprises:
acquiring the multi-frame interactive images through the plurality of first image acquisition devices at a preset time interval;
or,
and controlling the plurality of first image acquisition devices to acquire the multi-frame interactive images when a change in the current scene is detected.
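Claim 9's two capture triggers (elapsed interval, detected scene change) reduce to a simple predicate. A sketch; the timestamps and the external scene-change detector are toy assumptions:

```python
def should_capture(now, last_capture, interval_s, scene_changed):
    """Capture when the preset interval has elapsed OR the scene has changed.

    now, last_capture: timestamps in seconds (e.g. from time.monotonic())
    interval_s:        the preset time interval of claim 9
    scene_changed:     output of some scene-change detector (assumed given)
    """
    return scene_changed or (now - last_capture) >= interval_s
```

A capture loop would call this per tick and, on `True`, grab one frame from each first image acquisition device and reset `last_capture`.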
10. The method of any one of claims 1-9, wherein the augmented reality effect comprises:
displaying a virtual effect object corresponding to the virtual effect data within a preset range of the display object, wherein the display position of the virtual effect object matches the attribute information of the interactive object in the target interactive image.
11. The method according to any one of claims 1 to 10, wherein the display screen of the display device moves along a preset slide rail.
12. The method according to any one of claims 1-11, wherein the display screen of the display device is a transparent display screen or a non-transparent display screen.
13. An image display device characterized by comprising:
an acquisition unit, configured to acquire multi-frame interactive images of interactive objects in a real scene through a plurality of first image acquisition devices;
a selection unit, configured to select a target interactive image from the multi-frame interactive images based on a specific selection condition;
a processing unit, configured to determine virtual effect data corresponding to a display object based on the target interactive image, and to render the virtual effect data to obtain a virtual effect image;
and a display unit, configured to display an augmented reality effect in which the virtual effect image and the display object are superimposed on each other.
14. A display device comprising a camera, a display, a processor and a memory for storing a computer program operable on the processor;
the camera, the display, the processor and the memory are connected through a communication bus;
wherein the processor, when running the computer program stored in the memory in conjunction with the camera and the display, performs the steps of the method of any one of claims 1 to 12.
15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010761742.1A CN111918114A (en) | 2020-07-31 | 2020-07-31 | Image display method, image display device, display equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111918114A true CN111918114A (en) | 2020-11-10 |
Family
ID=73287475
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010761742.1A Pending CN111918114A (en) | 2020-07-31 | 2020-07-31 | Image display method, image display device, display equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111918114A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1060772A2 (en) * | 1999-06-11 | 2000-12-20 | Mixed Reality Systems Laboratory Inc. | Apparatus and method to represent mixed reality space shared by plural operators, game apparatus using mixed reality apparatus and interface method thereof |
US20140049558A1 (en) * | 2012-08-14 | 2014-02-20 | Aaron Krauss | Augmented reality overlay for control devices |
CN106873768A (en) * | 2016-12-30 | 2017-06-20 | 中兴通讯股份有限公司 | A kind of augmented reality method, apparatus and system |
CN107277494A (en) * | 2017-08-11 | 2017-10-20 | 北京铂石空间科技有限公司 | Three-dimensional display system and method |
CN109683701A (en) * | 2017-10-18 | 2019-04-26 | 深圳市掌网科技股份有限公司 | Augmented reality exchange method and device based on eye tracking |
CN109903129A (en) * | 2019-02-18 | 2019-06-18 | 北京三快在线科技有限公司 | Augmented reality display methods and device, electronic equipment, storage medium |
CN109918975A (en) * | 2017-12-13 | 2019-06-21 | 腾讯科技(深圳)有限公司 | A kind of processing method of augmented reality, the method for Object identifying and terminal |
CN110298924A (en) * | 2019-05-13 | 2019-10-01 | 西安电子科技大学 | For showing the coordinate transformation method of detection information in a kind of AR system |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112634773A (en) * | 2020-12-25 | 2021-04-09 | 北京市商汤科技开发有限公司 | Augmented reality presentation method and device, display equipment and storage medium |
CN112634773B (en) * | 2020-12-25 | 2022-11-22 | 北京市商汤科技开发有限公司 | Augmented reality presentation method and device, display equipment and storage medium |
WO2023045964A1 (en) * | 2021-09-27 | 2023-03-30 | 上海商汤智能科技有限公司 | Display method and apparatus, device, computer readable storage medium, computer program product, and computer program |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112037314A (en) | Image display method, image display device, display equipment and computer readable storage medium | |
WO2022022036A1 (en) | Display method, apparatus and device, storage medium, and computer program | |
US10055888B2 (en) | Producing and consuming metadata within multi-dimensional data | |
CN111833458B (en) | Image display method and device, equipment and computer readable storage medium | |
US20130135295A1 (en) | Method and system for a augmented reality | |
CN112684894A (en) | Interaction method and device for augmented reality scene, electronic equipment and storage medium | |
KR20140082610A (en) | Method and apaaratus for augmented exhibition contents in portable terminal | |
US11232636B2 (en) | Methods, devices, and systems for producing augmented reality | |
US11151791B2 (en) | R-snap for production of augmented realities | |
CN111880720B (en) | Virtual display method, device, equipment and computer readable storage medium | |
EP2707820A1 (en) | Method and apparatus for enabling virtual tags | |
CN110473293A (en) | Virtual objects processing method and processing device, storage medium and electronic equipment | |
CN106648098B (en) | AR projection method and system for user-defined scene | |
CN111679742A (en) | Interaction control method and device based on AR, electronic equipment and storage medium | |
CN111815780A (en) | Display method, display device, equipment and computer readable storage medium | |
US20230073750A1 (en) | Augmented reality (ar) imprinting methods and systems | |
CN111815782A (en) | Display method, device and equipment of AR scene content and computer storage medium | |
KR100957189B1 (en) | Augmented reality system using simple frame marker, and method therefor, and the recording media storing the program performing the said method | |
CN111918114A (en) | Image display method, image display device, display equipment and computer readable storage medium | |
CN112947756A (en) | Content navigation method, device, system, computer equipment and storage medium | |
CN103257703A (en) | Augmented reality device and method | |
CN113470190A (en) | Scene display method and device, equipment, vehicle and computer readable storage medium | |
US20230316675A1 (en) | Traveling in time and space continuum | |
US10409464B2 (en) | Providing a context related view with a wearable apparatus | |
Ha et al. | DigiLog Space: Real-time dual space registration and dynamic information visualization for 4D+ augmented reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20201110 | |