
CN113359986B - Augmented reality data display method and device, electronic equipment and storage medium - Google Patents

Augmented reality data display method and device, electronic equipment and storage medium

Info

Publication number
CN113359986B
Authority
CN
China
Prior art keywords
equipment
special effect
data
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110620483.5A
Other languages
Chinese (zh)
Other versions
CN113359986A (en
Inventor
田真
李斌
欧华富
王婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN202110620483.5A priority Critical patent/CN113359986B/en
Publication of CN113359986A publication Critical patent/CN113359986A/en
Application granted granted Critical
Publication of CN113359986B publication Critical patent/CN113359986B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/44Browsing; Visualisation therefor
    • G06F16/444Spatial browsing, e.g. 2D maps, 3D or virtual spaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/48Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/487Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using geographical or spatial information, e.g. location

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure provides an augmented reality data display method and apparatus, an electronic device, and a storage medium. The method includes: in response to an augmented reality (AR) device scanning a target information code on a bottle body, determining a target exhibition area to be visited; acquiring positioning information of the AR device; and, when it is determined based on the positioning information that the AR device has reached an explanation area corresponding to any preset knowledge point in the target exhibition area, playing audio explanation data matched with the preset knowledge point through the AR device and displaying first AR special effect data matched with the audio explanation data.

Description

Augmented reality data display method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of augmented reality, and in particular to an augmented reality data display method and apparatus, an electronic device, and a storage medium.
Background
Augmented reality (AR) technology integrates virtual information with the real world, drawing on technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing. It applies computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real world, where the two kinds of information complement each other, thereby augmenting the real world. It is therefore important to provide an effective AR data display method.
Disclosure of Invention
In view of this, the present disclosure provides at least one augmented reality data display method, apparatus, electronic device and storage medium.
In a first aspect, the present disclosure provides an augmented reality data display method, including:
in response to an augmented reality (AR) device scanning a target information code on a bottle body, determining a target exhibition area to be visited;
acquiring positioning information of the AR device;
and, when it is determined based on the positioning information that the AR device has reached an explanation area corresponding to any preset knowledge point in the target exhibition area, playing audio explanation data matched with the preset knowledge point through the AR device, and displaying first AR special effect data matched with the audio explanation data.
In the method, the target exhibition area to be visited is determined by the AR device scanning the target information code on the bottle body. Placing the target information code on the bottle body ties the code to a common object in the scene, so the AR device can enter the AR navigation process quickly and conveniently, improving the efficiency of AR data display.
Meanwhile, the AR device is controlled to play the audio explanation data corresponding to a preset knowledge point while displaying the first AR special effect data matched with the currently played audio. Using the first AR special effect data to supplement the audio explanation makes the explanation of the preset knowledge point clear and intuitive, improving its efficiency.
In a possible implementation manner, the determining the target exhibition area to be visited in response to the augmented reality AR device scanning the target information code on the bottle body includes:
in response to the AR device scanning the target information code on the bottle body, determining the target exhibition area to be visited and a destination position of the AR device within the target exhibition area;
and the method further includes:
generating navigation guidance information from the current position to the destination position based on the destination position and the current position of the AR device;
and displaying the navigation guidance information through the AR device.
Here, navigation guidance information from the current position to the destination position can be generated based on the determined destination position within the target exhibition area and the current position of the AR device, and displayed through the AR device, so that the AR device can reach the destination position by following it, improving the diversity and flexibility of AR data display.
In a possible implementation manner, the determining, in response to the AR device scanning the target information code on the bottle body, the target exhibition area to be visited and the destination position of the AR device within the target exhibition area includes:
in response to the AR device scanning the target information code on the bottle body, displaying resource pushing information through the AR device;
and, in response to a triggering operation on the resource pushing information, determining the destination position, corresponding to the resource pushing information, within the target exhibition area to be browsed, where the destination position is a position for collecting the target resource corresponding to the resource pushing information.
In this embodiment, the AR device may display the resource pushing information, for example winning information, in response to scanning the target information code on the bottle body, making the information display more flexible.
In a possible implementation manner, the audio explanation data corresponds to AR special effect data of multiple styles, and the method further includes:
determining the first AR special effect data matched with the audio explanation data from the multiple styles of AR special effect data corresponding to the audio explanation data, according to history record information of the AR device and/or user information of a target user to which the AR device belongs.
Here, the audio explanation data may correspond to multiple styles of AR special effect data, and the first AR special effect data matched with it may be selected from those styles according to the history record information of the AR device and/or the user information of the target user to which the AR device belongs. As a result, different AR devices may display different first AR special effect data when playing the same audio explanation data, and the same AR device may display different first AR special effect data when playing the same audio explanation data multiple times, improving the flexibility and diversity of the display.
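As an illustrative sketch only (the patent does not publish code, and the hash-free rotation rule, tag matching, and all names below are assumptions), the style selection described above might look like:

```python
# Hedged sketch: one possible way to pick the first AR special effect from
# multiple candidate styles using device history and/or user information.
# The patent only requires that history and/or user info drive the choice;
# the concrete rule below is invented for illustration.

def select_first_ar_effect(styles, play_count, user_tags=()):
    """Pick one of the candidate AR special-effect styles for an audio clip.

    styles     -- the multiple styles corresponding to the audio data
    play_count -- from the device history: times this audio was played
    user_tags  -- tags from the target user's profile, if any
    """
    # Prefer a style matching a user tag, if one exists.
    for style in styles:
        if any(tag in style for tag in user_tags):
            return style
    # Otherwise rotate through styles so repeated plays differ.
    return styles[play_count % len(styles)]
```

With this rule, two devices with different play histories (or different user tags) naturally display different effects for the same audio clip, which is the behavior the paragraph above describes.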
In a possible implementation manner, the displaying the first AR special effect data matched with the audio explanation data includes:
generating AR special effect data matched with the target user based on user information of the target user to which the AR device belongs;
and either controlling the AR device to sequentially display the first AR special effect data matched with the audio explanation data and the AR special effect data matched with the target user, or superimposing the AR special effect matched with the target user on the first AR special effect matched with the audio explanation data to generate second AR special effect data, and displaying the second AR special effect data through the AR device.
In this embodiment, AR special effect data matched with the target user may be generated based on the user information of the target user to which the AR device belongs, and second AR special effect data may then be generated from that data together with the first AR special effect data matched with the audio explanation data, so the generated second special effect data is unique to the user and the AR special effect data is enriched. Alternatively, the AR device may be controlled to sequentially display the first AR special effect data matched with the audio explanation data and the AR special effect data matched with the target user, improving the diversity of the displayed AR special effect data.
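The two presentation options above can be sketched as follows. This is a minimal illustration under assumed data structures (effects modeled as dicts of named layers); the patent does not specify how effects are represented or composited.

```python
# Sketch of the two options: sequential display, or superimposition into
# "second AR special effect data". Representing an effect as a dict of
# named layers is an illustrative assumption.

def display_sequentially(first_effect, user_effect):
    """Option 1: show the audio-matched effect, then the user-matched one."""
    return [first_effect, user_effect]

def superimpose(first_effect, user_effect):
    """Option 2: merge both into second AR special effect data, with the
    user-matched layers drawn on top of the audio-matched layers."""
    second_effect = dict(first_effect)
    second_effect.update(user_effect)
    return second_effect
```

Because the user-matched layers differ per user, the superimposed second effect is unique to each target user, which is the "uniqueness" property claimed above.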
In a possible embodiment, the method further comprises:
displaying the determined third AR special effect data through the AR device when the AR device reaches the destination position within the target exhibition area and/or the AR device has finished playing the audio explanation data corresponding to each preset knowledge point in the target exhibition area.
In a possible implementation manner, the playing, by the AR device, the audio explanation data matched with any preset knowledge point, and displaying the first AR special effect data matched with the audio explanation data, includes:
when it is determined based on the positioning information that the AR device meets the trigger display condition of the first AR special effect data, controlling the AR device to play the audio explanation data and display the first AR special effect data.
Here, the AR device is controlled to play the audio explanation data and display the first AR special effect data only when it meets the trigger display condition, avoiding a poor playback effect when the positioning of the AR device is unsuitable and improving the display effect of the AR special effect data.
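The patent leaves the trigger display condition abstract. As a hedged sketch, one plausible condition combines the device's distance to the knowledge point with how directly the device is facing it (both thresholds are assumptions):

```python
import math

# Illustrative sketch of a trigger display condition derived from
# positioning information: the device must be close enough to the target
# and roughly facing it. Thresholds and the condition itself are assumed.

def meets_trigger_condition(device_pos, device_heading_deg, target_pos,
                            max_distance=2.0, max_angle_deg=30.0):
    dx = target_pos[0] - device_pos[0]
    dy = target_pos[1] - device_pos[1]
    distance = math.hypot(dx, dy)
    # Bearing from device to target, measured like the heading (degrees).
    bearing = math.degrees(math.atan2(dy, dx)) % 360
    # Smallest signed angle between heading and bearing, folded to [-180, 180].
    angle_off = abs((bearing - device_heading_deg + 180) % 360 - 180)
    return distance <= max_distance and angle_off <= max_angle_deg
```

When this returns False, the guide-information branch described below (adjusting the device's positioning) would apply instead.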
In a possible embodiment, the method further comprises:
when it is determined based on the positioning information that the AR device does not meet the trigger display condition of the first AR special effect data, controlling the AR device to display guide information for adjusting the positioning of the AR device.
Here, when the AR device does not meet the trigger display condition of the AR special effect data, the AR device may be controlled to display guide information for adjusting its positioning, so that after adjustment the AR device can display the first AR special effect data clearly and intuitively, avoiding omission of preset knowledge points and improving the efficiency of their explanation.
In a possible implementation manner, the method is applied to a client application platform, where the client application platform is a Web-side application platform or an applet-side application platform.
For the effects of the apparatus, the electronic device, and the like described below, reference is made to the description of the method above; details are not repeated here.
In a second aspect, the present disclosure provides an augmented reality data presentation apparatus comprising:
a first determining module, configured to determine a target exhibition area to be visited in response to an augmented reality AR device scanning a target information code on a bottle body;
an acquisition module, configured to acquire positioning information of the AR device;
and a first display module, configured to, when it is determined based on the positioning information that the AR device has reached an explanation area corresponding to any preset knowledge point in the target exhibition area, play audio explanation data matched with the preset knowledge point through the AR device and display first AR special effect data matched with the audio explanation data.
In a possible implementation manner, when determining the target exhibition area to be visited in response to the augmented reality AR device scanning the target information code on the bottle body, the first determining module is configured to:
in response to the AR device scanning the target information code on the bottle body, determine the target exhibition area to be visited and a destination position of the AR device within the target exhibition area;
and the apparatus further includes a generation module, configured to:
generate navigation guidance information from the current position to the destination position based on the destination position and the current position of the AR device;
and display the navigation guidance information through the AR device.
In a possible implementation manner, when determining, in response to the AR device scanning the target information code on the bottle body, the target exhibition area to be visited and the destination position of the AR device within the target exhibition area, the first determining module is configured to:
in response to the AR device scanning the target information code on the bottle body, display resource pushing information through the AR device;
and, in response to a triggering operation on the resource pushing information, determine the destination position, corresponding to the resource pushing information, within the target exhibition area to be browsed, where the destination position is a position for collecting the target resource corresponding to the resource pushing information.
In a possible implementation manner, the audio explanation data corresponds to AR special effect data of multiple styles, and the apparatus further includes a second determining module, configured to:
determine the first AR special effect data matched with the audio explanation data from the multiple styles of AR special effect data corresponding to the audio explanation data, according to history record information of the AR device and/or user information of a target user to which the AR device belongs.
In a possible implementation manner, when displaying the first AR special effect data matched with the audio explanation data, the first display module is configured to:
generate AR special effect data matched with the target user based on user information of the target user to which the AR device belongs;
and either control the AR device to sequentially display the first AR special effect data matched with the audio explanation data and the AR special effect data matched with the target user, or superimpose the AR special effect matched with the target user on the first AR special effect matched with the audio explanation data to generate second AR special effect data, and display the second AR special effect data through the AR device.
In a possible embodiment, the apparatus further includes a second display module, configured to:
display the determined third AR special effect data through the AR device when the AR device reaches the destination position within the target exhibition area and/or the AR device has finished playing the audio explanation data corresponding to each preset knowledge point in the target exhibition area.
In a possible implementation manner, when playing, through the AR device, the audio explanation data matched with any preset knowledge point and displaying the first AR special effect data matched with the audio explanation data, the first display module is configured to:
when it is determined based on the positioning information that the AR device meets the trigger display condition of the first AR special effect data, control the AR device to play the audio explanation data and display the first AR special effect data.
In a possible embodiment, the apparatus further includes a third display module, configured to:
when it is determined based on the positioning information that the AR device does not meet the trigger display condition of the first AR special effect data, control the AR device to display guide information for adjusting the positioning of the AR device.
In a possible implementation manner, the apparatus is applied to a client application platform, where the client application platform is a Web-side application platform or an applet-side application platform.
In a third aspect, the present disclosure provides an electronic device, including: a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate over the bus when the electronic device runs, and the machine-readable instructions, when executed by the processor, perform the steps of the augmented reality data display method according to the first aspect or any one of its embodiments.
In a fourth aspect, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the augmented reality data display method according to the first aspect or any embodiment described above.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required for the embodiments are briefly described below. The drawings, which are incorporated in and constitute a part of the specification, show embodiments consistent with the present disclosure and, together with the description, serve to illustrate its technical solutions. It should be understood that the following drawings show only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; a person of ordinary skill in the art may derive other relevant drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of an augmented reality data presentation method according to an embodiment of the present disclosure;
FIG. 2 illustrates an interface schematic diagram of an AR device provided by embodiments of the present disclosure;
FIG. 3 illustrates an interface schematic diagram of an AR device provided by embodiments of the present disclosure;
FIG. 4 illustrates an interface schematic diagram of an AR device provided by embodiments of the present disclosure;
fig. 5 is a schematic diagram illustrating a trigger display condition of first AR special effect data in an augmented reality data display method according to an embodiment of the present disclosure;
FIG. 6 illustrates an interface schematic diagram of an AR device provided by embodiments of the present disclosure;
fig. 7 is a flowchart illustrating another augmented reality data presentation method according to an embodiment of the present disclosure;
fig. 8 shows a schematic architecture diagram of an augmented reality data presentation device provided by an embodiment of the present disclosure;
fig. 9 shows a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
Augmented reality (AR) technology integrates virtual information with the real world, drawing on technical means such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing. It applies computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real world, where the two kinds of information complement each other, thereby augmenting the real world. Accordingly, the embodiments of the present disclosure provide an augmented reality data display method and apparatus, an electronic device, and a storage medium.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
To facilitate understanding of the embodiments of the present disclosure, the augmented reality data display method disclosed herein is first described in detail. The execution subject of the method may be an AR device, that is, an intelligent device capable of supporting AR functions; AR devices include, but are not limited to, mobile phones, tablets, AR glasses, and the like.
Referring to fig. 1, a flowchart of an augmented reality data display method according to an embodiment of the disclosure is shown, where the method includes S101-S103, where:
S101, in response to an augmented reality AR device scanning a target information code on a bottle body, determining a target exhibition area to be visited;
S102, acquiring positioning information of the AR device;
S103, when it is determined based on the positioning information that the AR device has reached an explanation area corresponding to any preset knowledge point in the target exhibition area, playing audio explanation data matched with the preset knowledge point through the AR device, and displaying first AR special effect data matched with the audio explanation data.
In the method, the target exhibition area to be visited is determined by the AR device scanning the target information code on the bottle body. Placing the target information code on the bottle body ties the code to a common object in the scene, so the AR device can enter the AR navigation process quickly and conveniently, improving the efficiency of AR data display.
Meanwhile, the AR device is controlled to play the audio explanation data corresponding to a preset knowledge point while displaying the first AR special effect data matched with the currently played audio. Using the first AR special effect data to supplement the audio explanation makes the explanation of the preset knowledge point clear and intuitive, improving its efficiency.
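As a hedged sketch of S101-S103 (the patent publishes no code; the data model, the code-to-area mapping, and the circular explanation area below are illustrative assumptions):

```python
from dataclasses import dataclass

# Illustrative sketch only: all names (KnowledgePoint, EXHIBITION_AREAS,
# the explanation radius, etc.) are assumptions, not from the patent.

@dataclass
class KnowledgePoint:
    name: str
    position: tuple          # (x, y) in exhibition-hall coordinates
    radius: float            # extent of the explanation area
    audio: str               # audio explanation data
    ar_effect: str           # first AR special effect data

# S101: the target information code scanned from the bottle maps to an area.
EXHIBITION_AREAS = {
    "code_hops": [KnowledgePoint("hops", (2.0, 3.0), 1.5,
                                 "hops_audio.mp3", "hops_effect")],
}

def on_code_scanned(code: str):
    """S101: determine the target exhibition area to be visited."""
    return EXHIBITION_AREAS[code]

def on_position_update(area, position):
    """S102/S103: using the device's positioning information, check whether
    it has entered the explanation area of any preset knowledge point; if
    so, return the audio explanation data and matching AR special effect."""
    for kp in area:
        dx = position[0] - kp.position[0]
        dy = position[1] - kp.position[1]
        if (dx * dx + dy * dy) ** 0.5 <= kp.radius:
            return kp.audio, kp.ar_effect   # play audio, display AR effect
    return None
```

A real implementation would feed `on_position_update` from the device's positioning sensor or a visual-positioning pipeline rather than raw coordinates.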
S101 to S103 are specifically described below.
For S101:
the bottle body can be any bottle-shaped object such as a water bottle, a beer bottle, a beverage bottle and the like.
When the method is implemented, the AR equipment can be controlled to scan the target information code on the bottle body, and the target exhibition area to be visited corresponding to the AR equipment is determined in response to the AR equipment scanning the target information code. The target information code may be a two-dimensional code, a bar code, or other information code printed on the bottle body.
The target exhibition area can be the exhibition area corresponding to the target information code, and the target information codes on different bottle bodies can correspond to the same target exhibition area or different target exhibition areas. The target exhibition area can be set according to actual conditions. For example, the target exhibition area may be an exhibition area corresponding to any exhibition hall such as a museum, an exhibition hall, a commemorative hall, etc., or may be an exhibition area set in the exhibition hall, for example, the target exhibition area may be a calligraphy exhibition area or a painting exhibition area in a painting and calligraphy exhibition hall, etc.
In an alternative embodiment, determining the target exhibition area to be visited in response to the augmented reality AR device scanning the target information code on the bottle body may include: in response to the AR device scanning the target information code on the bottle body, determining the target exhibition area to be visited and a destination position of the AR device within the target exhibition area.
In implementation, different target information codes may correspond to different destination positions, and the destination position corresponding to a target information code can be set according to the actual situation. For example, when the target exhibition area corresponds to a beer exhibition hall, the destination position corresponding to a first target information code may be the hop production area in the hall, while the destination position corresponding to a second target information code may be the beer fermentation area.
The augmented reality data display method further comprises the following steps: generating navigation guidance information from the current location to the destination location based on the destination location and the current location of the AR device; and displaying the navigation guidance information through the AR equipment.
After the target exhibition area and the destination position of the AR device within it are determined in response to the AR device scanning the target information code on the bottle body, the current position of the AR device may be determined, and navigation guidance information from the current position to the destination position generated from the two. The current position of the AR device may be determined from a positioning sensor on the AR device, or by visual positioning based on the current scene image captured by the AR device and a three-dimensional scene model.
The navigation guidance information may then be presented by the AR device, so that the AR device can be moved to the destination position by following it. Referring to the interface schematic of an AR device shown in fig. 2, the navigation guidance information may be, for example, "move 10 meters forward, then turn right" and/or the navigation route map at the upper-left of fig. 2.
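One simple way to derive guidance like "move 10 meters forward, then turn right" from the two positions is sketched below. This is an assumption-laden illustration: a real system would route around walls using a map of the hall, and the message wording and thresholds are invented.

```python
import math

# Hedged sketch: straight-line navigation guidance from the current
# position and heading to the destination position. Thresholds (0.5 m
# arrival radius, 15-degree "straight ahead" cone) are assumptions.

def navigation_guidance(current, heading_deg, destination):
    dx = destination[0] - current[0]
    dy = destination[1] - current[1]
    distance = math.hypot(dx, dy)
    if distance < 0.5:
        return "You have arrived at the destination"
    # Direction to the destination, in the same convention as the heading.
    bearing = math.degrees(math.atan2(dy, dx)) % 360
    # Signed turn angle folded into [-180, 180]; positive means turn left
    # (with x pointing east and y pointing north).
    turn = (bearing - heading_deg + 180) % 360 - 180
    if abs(turn) < 15:
        return f"Move {distance:.0f} meters forward"
    side = "right" if turn < 0 else "left"
    return f"Turn {side}, then move {distance:.0f} meters forward"
```

Regenerating this message on every position update gives the turn-by-turn behavior shown in fig. 2.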
Here, the navigation guidance information from the current position to the destination position can be generated based on the determined destination position in the target exhibition area and the current position of the AR device, and the navigation guidance information is displayed through the AR device, so that the AR device can reach the destination position according to the navigation guidance information, and diversity and flexibility of AR data display are improved.
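The guidance-generation step above can be sketched as follows. This is a minimal illustrative helper, not the patent's implementation: it works on flat 2D coordinates and emits a single instruction string, whereas a real exhibition-hall system would route along walkable paths in the three-dimensional scene model; the function name and angle thresholds are assumptions.

```python
import math

def navigation_guidance(current, destination):
    """Generate simple guidance text from a current position to a
    destination position, both given as (x, y) coordinates in meters.
    Hypothetical helper for illustration only."""
    dx = destination[0] - current[0]
    dy = destination[1] - current[1]
    distance = math.hypot(dx, dy)
    # Bearing relative to the positive y-axis ("forward"), in degrees.
    bearing = math.degrees(math.atan2(dx, dy))
    if abs(bearing) < 30:
        direction = "forward"
    elif bearing >= 30:
        direction = "forward-right"
    else:
        direction = "forward-left"
    return f"move {distance:.0f} meters {direction}"

print(navigation_guidance((0.0, 0.0), (0.0, 10.0)))  # move 10 meters forward
```

In practice such instructions would be regenerated as the device's positioning information updates, so the displayed guidance stays consistent with the user's movement.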
In an alternative embodiment, the determining, in response to the AR device scanning the target information code on the bottle, the target exhibition area to be visited and the destination location of the AR device within the target exhibition area may include:
Step A1, in response to the AR device scanning the target information code on the bottle, displaying resource pushing information through the AR device;
Step A2, in response to a triggering operation for the resource pushing information, determining the destination position corresponding to the resource pushing information and located in the target exhibition area to be browsed; the destination position is a position for acquiring the target resource corresponding to the resource pushing information.
In implementation, the AR device may be controlled to scan the target information code on the bottle, and the resource pushing information is displayed through the AR device in response to the AR device scanning the target information code. For example, the resource pushing information may be winning information or the like. Referring to the interface schematic of an AR device shown in fig. 3, resource pushing information is displayed; for example, the resource pushing information may be "Congratulations, you have won a prize".
For example, voice data, touch screen operation and the like input by a user can be used as trigger operation for the resource pushing information, for example, the voice data can be "winning a prize", and the touch screen operation can be clicking a display position corresponding to the resource pushing information and the like. And responding to triggering operation for the resource pushing information, and determining a destination position which corresponds to the resource pushing information and is positioned in a target exhibition area to be browsed, wherein the destination position is a position for picking up target resources corresponding to the resource pushing information. For example, when the resource pushing information is winning information, the destination location may be a winning area set in the target exhibition area.
In this embodiment, the AR device may display the resource pushing information, for example, the winning information, in response to the AR device scanning the target information code on the bottle, so that the information display is more flexible.
For S102:
in an alternative implementation manner, the positioning information of the AR equipment can be determined according to the sensor arranged on the AR equipment, and the determining process of the positioning information of the AR equipment is simple and quick. For example, the sensor may be a global positioning system (Global Positioning System, GPS), inertial sensor (Inertial Measurement Unit, IMU), or the like.
In another alternative embodiment, the real-time scene image acquired by the AR device may be acquired; and determining positioning information of the AR equipment based on the real-time scene image and the three-dimensional scene model corresponding to the constructed target exhibition area.
In implementation, after the real-time scene image acquired by the AR device is obtained, feature points in the real-time scene image may be extracted and matched against the feature point cloud included in the three-dimensional scene model, so as to determine the positioning information of the AR device at the time the real-time scene image was acquired. The positioning information may include position information and/or orientation information; for example, the position information may be coordinate information of the AR device in a coordinate system corresponding to the three-dimensional scene model, and the orientation information may be the Euler angles corresponding to the AR device.
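The feature-matching step above can be illustrated with a deliberately simplified sketch: descriptors extracted from the live image are matched against the descriptors of the scene model's feature point cloud, and the matched 3D points give a coarse position estimate. A production visual-positioning system would instead solve a PnP (perspective-n-point) problem from the 2D-3D matches; all names and the centroid heuristic here are assumptions for illustration.

```python
import numpy as np

def coarse_locate(query_desc, model_desc, model_xyz, max_dist=0.5):
    """Match live-image feature descriptors (query_desc) against the
    scene model's point-cloud descriptors (model_desc) and return the
    centroid of the matched 3D points (model_xyz) as a coarse position
    estimate, or None when nothing matches. Illustrative only."""
    matches = []
    for d in query_desc:
        dists = np.linalg.norm(model_desc - d, axis=1)  # nearest neighbor
        i = int(np.argmin(dists))
        if dists[i] < max_dist:                         # reject weak matches
            matches.append(model_xyz[i])
    if not matches:
        return None
    return np.mean(matches, axis=0)
```

When no descriptor matches, the caller could fall back to the sensor-based positioning described in the previous embodiment.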
In practice, a three-dimensional scene model of the target exhibition area may be constructed according to the following steps: collecting multiple frames of scene images at different positions, different angles, and different times in the target exhibition area, and extracting feature points of each frame of scene image to obtain a point cloud set corresponding to each frame; and obtaining the feature point cloud corresponding to the target exhibition area by using the point cloud sets respectively corresponding to the multiple frames of scene images, where the feature point cloud corresponding to the target exhibition area forms the three-dimensional scene model.
Or acquiring scene videos corresponding to target exhibition areas at different positions, different angles and different times, acquiring multi-frame video frames from the acquired scene videos, extracting characteristic points of each frame of video frame, and obtaining a point cloud set corresponding to each frame of video frame; and obtaining a three-dimensional scene model corresponding to the target exhibition area by utilizing the point cloud sets respectively corresponding to the multi-frame video frames.
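The multi-frame fusion step described above can be sketched as merging the per-frame point cloud sets and de-duplicating points that land in the same voxel. This is a simplified stand-in for the patent's construction step; a real pipeline (e.g. structure-from-motion) would also estimate camera poses, and the voxel size is an assumed parameter.

```python
import numpy as np

def build_scene_point_cloud(frame_point_sets, voxel=0.1):
    """Merge per-frame feature point sets (each an (N, 3) array) into
    one scene-level point cloud, keeping one point per voxel so that
    overlapping frames do not duplicate features. Illustrative only."""
    merged = np.vstack(frame_point_sets)
    # Quantize coordinates to voxel indices and keep the first point per voxel.
    keys = np.round(merged / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(idx)]
```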
Here, the positioning information of the AR device can be determined more accurately by using the real-time scene image acquired by the AR device and the constructed three-dimensional scene model.
For S103:
The preset knowledge points may be any explainable knowledge points set in the target exhibition area. For example, when the target exhibition area is a painting exhibition hall, a preset knowledge point may be the name of any painting on exhibition, for example, "Mona Lisa", "Qingming Shanghai map", and the like. When the target exhibition area is a beer exhibition hall, the preset knowledge points may be steps in the beer-making process, for example, hop production, beer fermentation, and the like.
In implementation, the spatial position of each preset knowledge point and the explanation area corresponding to each preset knowledge point can be determined in the constructed three-dimensional scene model, wherein the spatial position and the explanation area corresponding to the preset knowledge point can be set according to an actual scene. For example, a preset knowledge point of hop production can be aimed at, an exhibition area of a hop production process in a target exhibition area can be used as an explanation area of the preset knowledge point, and a central position of the explanation area can be used as a spatial position of the preset knowledge point. For example, for a preset knowledge point of "Qingming Shanghai map", an exhibition area corresponding to "Qingming Shanghai map" in the target exhibition area may be used as an exhibition area corresponding to the preset knowledge point, and a position of "Qingming Shanghai map" in the target exhibition area may be determined as a spatial position of the preset knowledge point.
In implementation, whether the AR device is located in the explanation area of a preset knowledge point may be determined based on the explanation area corresponding to the preset knowledge point and the position information indicated by the positioning information of the AR device; if so, the audio explanation data corresponding to the preset knowledge point and the AR special effect data matched with the audio explanation data are acquired. The audio explanation data matched with the preset knowledge point is then played through the AR device, and the first AR special effect data matched with the audio explanation data is displayed.
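The area check above reduces to a point-in-region test against the device's position information. The sketch below assumes a rectangular explanation area for simplicity; in the three-dimensional scene model an explanation area could be an arbitrary polygon or volume.

```python
def in_explanation_area(position, area):
    """Return True when the AR device position (x, y) lies inside a
    rectangular explanation area given as (x_min, y_min, x_max, y_max).
    The rectangle is an assumed simplification for illustration."""
    x, y = position
    x_min, y_min, x_max, y_max = area
    return x_min <= x <= x_max and y_min <= y <= y_max
```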
The audio explanation data may be preset audio data for explaining the preset knowledge point; the first AR special effects data may be AR data comprising one or more of the following: video content, image content, text content. For example, when the preset knowledge point is "Mona Lisa", the audio interpretation data may be audio data for interpreting information such as author information and creation background of "Mona Lisa"; the first AR special effect data may be AR data of female figures included in mona lisa; alternatively, the first AR special effect data may be AR data of an author of "Mona Lisa", or the like. The first AR special effect data matched with the audio explanation data can be set according to the requirement; and the audio interpretation data may correspond to one kind of AR special effect data, or may correspond to a plurality of kinds of AR special effect data.
The display pose of the first AR special effect data can be set in the three-dimensional scene model according to requirements. For example, the display pose of the first AR special effect data may be located in the interpretation area of the preset knowledge point.
In implementation, the corresponding first AR special effect data may be matched with each audio interpretation data in advance, and after the audio interpretation data is acquired, the first AR special effect data matched with the audio interpretation data may be determined.
Or, after the audio explanation data corresponding to any preset knowledge point is obtained, the first AR special effect data matched with the audio explanation data can be determined in real time. For example, after audio explanation data corresponding to any preset knowledge point is obtained, a target keyword corresponding to the audio explanation data can be identified; or, text content of the audio explanation data can be determined, text content corresponding to the audio explanation data is identified, and target keywords included in the audio explanation data are determined; the target keyword may be a word related to a preset knowledge point. And determining the first AR special effect data matched with the audio explanation data based on the target keywords corresponding to the audio explanation data and the keywords corresponding to each first AR special effect data.
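The keyword-based matching described above can be sketched as a simple overlap score between the target keywords recognized from the audio explanation data and the keywords tagged on each candidate AR special effect. The catalog structure and plain set-overlap scoring are illustrative assumptions.

```python
def match_ar_effect(audio_keywords, effect_catalog):
    """Pick the AR special-effect entry whose keyword list overlaps
    most with the target keywords from the audio explanation data.
    effect_catalog maps effect id -> list of keywords (illustrative).
    Returns None when no effect shares any keyword."""
    best, best_score = None, 0
    for effect_id, effect_keywords in effect_catalog.items():
        score = len(set(audio_keywords) & set(effect_keywords))
        if score > best_score:
            best, best_score = effect_id, score
    return best
```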
When the method is implemented, when the AR equipment is controlled to play the audio explanation data, the AR equipment can be controlled to display the first AR special effect data related to the target keyword according to the target keyword of the audio explanation data which is played currently, so that the played first AR special effect data is consistent with the content of the played audio explanation data, and the preset knowledge points can be explained and displayed clearly.
In this embodiment, the audio explanation data corresponding to each preset knowledge point may be predetermined according to the following steps: acquiring audio explanation data to be processed corresponding to the target exhibition area; determining target keywords respectively matched with a plurality of preset knowledge points from the audio explanation data to be processed according to a preset knowledge point library; and respectively determining the audio explanation data corresponding to each preset knowledge point from the audio explanation data to be processed according to the play-time position of each target keyword in the audio explanation data to be processed.
When the method is implemented, the to-be-processed audio interpretation data corresponding to the target exhibition area can be generated according to the information of the plurality of knowledge points included in the target exhibition area, wherein the to-be-processed audio interpretation data can be complete audio data for respectively interpreting all the knowledge points. Determining target keywords respectively matched with a plurality of preset knowledge points from the audio explanation data to be processed according to a preset knowledge point library; or determining target keywords respectively matched with a plurality of preset knowledge points from text contents corresponding to the audio explanation data to be processed. And finally, segmenting the audio interpretation data to be processed according to the corresponding playing time position of each target keyword in the audio interpretation data to be processed, and determining the audio interpretation data corresponding to each preset knowledge point from a plurality of audio interpretation data obtained after segmentation.
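The segmentation step above can be sketched as follows, under the assumption that each target keyword's play-time position marks the start of that knowledge point's segment and the next keyword (or the end of the track) marks its end. The function name and the mapping shape are illustrative.

```python
def segment_audio_by_keywords(total_duration, keyword_times):
    """Split one long explanation track into per-knowledge-point
    (start, end) segments in seconds. keyword_times maps knowledge
    point name -> play-time position of its target keyword."""
    ordered = sorted(keyword_times.items(), key=lambda kv: kv[1])
    segments = {}
    for i, (point, start) in enumerate(ordered):
        # Segment ends where the next keyword begins, or at track end.
        end = ordered[i + 1][1] if i + 1 < len(ordered) else total_duration
        segments[point] = (start, end)
    return segments
```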
In an alternative embodiment, the audio interpretation data corresponds to AR special effects data having multiple styles, and the method further comprises: and determining first AR special effect data matched with the audio explanation data from multiple types of AR special effect data corresponding to the audio explanation data according to the history record information of the AR equipment and/or the user information of a target user to which the AR equipment belongs.
Here, the audio explanation data may correspond to multiple types of AR special effect data. The first AR special effect data matched with the audio explanation data may be determined from the multiple types according to the history information of the AR device and/or the user information of the target user to which the AR device belongs. In this way, different AR devices may display different first AR special effect data when playing the same audio explanation data, and the same AR device may display different first AR special effect data when playing the same audio explanation data multiple times, thereby improving the flexibility and diversity of the display of the first AR special effect data.
Wherein the history information of the AR device may include at least one of the following information: the number of times the AR device scans the information code, the device information of the AR device, the history visit record of the AR device and the like; the user information of the target user to which the AR device belongs includes at least one of: age, gender, occupation, personal preference data of the target user, etc.
For example, the tour level of the AR device may be determined according to the number of times the AR device scans the information code, where the number of times is greater, the level is higher, different levels may correspond to different types of AR effect data, for example, text content may be included in AR effect data corresponding to level one, image content may be included in AR effect data corresponding to level two, video content may be included in AR effect data corresponding to level three, and so on. And the first AR special effect data matched with the audio explanation data can be determined from multiple types of AR special effect data corresponding to the audio explanation data according to the times that the AR equipment scans the information codes.
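The level mapping in the example above can be sketched directly; the scan-count thresholds below are assumptions, since the text only states that more scans mean a higher level.

```python
def effect_style_for_scan_count(scan_count):
    """Map the number of information codes the AR device has scanned
    to a (tour level, AR effect content type) pair, mirroring the
    level one/two/three example: text, image, video. Thresholds are
    illustrative."""
    if scan_count >= 10:
        return 3, "video"
    if scan_count >= 5:
        return 2, "image"
    return 1, "text"
```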
The first AR special effect data matched with the audio interpretation data may also be determined from multiple types of AR special effect data corresponding to the audio interpretation data according to the historical tour record of the AR device. For example, if the history browse record of the AR device indicates that the AR device displays the AR special effect data corresponding to the type one multiple times, the AR special effect data corresponding to the type one may be determined as the first AR special effect data matched with the audio explanation data from the AR special effect data corresponding to the audio explanation data multiple types.
Or, the AR device may further indicate that the AR device displays the AR special effect data corresponding to the type one last time according to the history browse record of the AR device, and then from the AR special effect data of multiple types corresponding to the audio explanation data, other types of AR special effect data except the type one may be determined as the first AR special effect data matched with the audio explanation data.
For example, matching user information may be determined in advance for each AR special effect data. And determining the first AR special effect data matched with the audio explanation data from the AR special effect data of a plurality of types corresponding to the audio explanation data according to the user information of the target user. For example, if the target user is a female, the AR special effect data matched with the female may be determined as the first AR special effect data matched with the audio interpretation data from the multiple types of AR special effect data.
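Combining the history-record and user-information rules above, one hypothetical selection policy might look like the following: avoid the style shown last time, then prefer a style tagged for the user. The field names (`style`, `audiences`) and the preference order are assumptions, not from the patent text.

```python
def pick_effect(candidates, last_shown=None, user_gender=None):
    """Choose a first-AR-special-effect style for one piece of audio
    explanation data. candidates is a list of dicts with a "style"
    key and an optional "audiences" tuple of matched user groups."""
    # Rule from the history record: do not repeat last time's style,
    # unless that would leave nothing to choose from.
    pool = [c for c in candidates if c["style"] != last_shown] or candidates
    # Rule from the user information: prefer a style tagged for the user.
    for c in pool:
        if user_gender and user_gender in c.get("audiences", ()):
            return c["style"]
    return pool[0]["style"]
```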
In an alternative embodiment, the presenting the first AR special effects data matched with the audio explanation data may include:
step B1, generating AR special effect data matched with a target user based on user information of the target user to which the AR equipment belongs;
step B2, controlling the AR equipment to sequentially display first AR special effect data matched with the audio explanation data and AR special effect data matched with the target user;
and B3, overlapping the AR special effect matched with the target user and the first AR special effect matched with the audio explanation data to generate overlapped second AR special effect data, and displaying the second AR special effect data through the AR equipment.
In step B1, AR special effect data matched with the target user may be generated based on the user information of the target user to which the AR device belongs. For example, if the user information of the target user indicates gender: female, the AR special effect data may include an AR special effect of a cartoon character.
After the step B1 is executed, the step B2 may be executed, that is, the AR device may be controlled to display first AR special effect data matched with the audio explanation data, and then the AR device may be controlled to display AR special effect data matched with the target user; or, the AR device can be controlled to display the AR special effect data matched with the target user, and then the AR device can be controlled to display the first AR special effect data matched with the audio explanation data.
After the step B1 is executed, the step B3 may be executed, that is, the AR special effect matched by the target user and the first AR special effect matched by the audio explanation data may be superimposed, the superimposed second AR special effect data is generated, and the second AR special effect data is displayed through the AR device. Referring to an interface schematic of an AR device shown in fig. 4, there is shown in fig. 4 an AR special effect 41 matching a target user and a first AR special effect 42 matching audio interpretation data.
In this embodiment, the AR special effect data matched with the target user may be generated based on the user information of the target user to which the AR device belongs, and then the second special effect data matched with the target user of the AR device may be generated based on the AR special effect data matched with the target user and the first AR special effect data matched with the audio interpretation data, so that the generated second special effect data has uniqueness, and the AR special effect data is enriched. Or, the AR equipment can be controlled to sequentially display the first AR special effect data matched with the audio explanation data and the AR special effect data matched with the target user, so that the diversity of the displayed AR special effect data is improved.
In an alternative embodiment, the method further comprises: and displaying the determined third AR special effect data through the AR equipment under the condition that the AR equipment reaches the destination position in the target exhibition area and/or the AR equipment has played the audio explanation data corresponding to each preset knowledge point in the target exhibition area.
In this embodiment, when the AR device reaches a destination location in the target exhibition area and/or the AR device has played audio explanation data corresponding to each preset knowledge point in the target exhibition area, the end of the current navigation of the AR device may be determined, and then the determined third AR special effect data may be displayed by the AR device.
The third AR special effect data may be set according to actual situations, for example, the third AR special effect data may be AR special effect data related to the destination location, or may be AR special effect data summarizing the target exhibition area, etc.
In implementation, third AR special effect data of multiple styles can be preset, and then the third AR special effect data to be displayed can be determined from the third AR special effect data of multiple styles preset according to history record information of the AR equipment and/or user information of a target user to which the AR equipment belongs, and the AR equipment is controlled to display the determined third AR special effect data to be displayed.
Alternatively, the generated AR special effect matched with the target user and the third AR special effect may be superimposed to generate superimposed third AR special effect data, and the AR device is controlled to display the superimposed third AR special effect data.
In an optional implementation manner, the playing, by the AR device, the audio interpretation data matched with any preset knowledge point, and displaying the first AR special effect data matched with the audio interpretation data may include:
in a first mode, under the condition that the AR equipment meets the trigger display condition of the first AR special effect data based on the positioning information, the AR equipment is controlled to play the audio explanation data, and the first AR special effect data is displayed.
And in a second mode, under the condition that the AR equipment does not meet the trigger display condition of the first AR special effect data based on the positioning information, controlling the AR equipment to display guide information for guiding and adjusting the positioning information of the AR equipment.
Here, when the AR device meets the trigger display condition of the first AR special effect data, the AR device is controlled to play the audio explanation data and display the first AR special effect data, so as to avoid the situation that the play effect of the AR special effect data is poor when the positioning information of the AR device is unsuitable, and improve the display effect of the AR special effect data.
And when the AR equipment does not meet the trigger display condition of the first AR special effect data, the AR equipment can be controlled to display the guide information for guiding and adjusting the positioning information of the AR equipment so as to adjust the positioning information of the AR equipment according to the guide information, so that the adjusted AR equipment can display the first AR special effect data more clearly and intuitively, omission of preset knowledge points is avoided, and the explanation efficiency of the preset knowledge points is improved.
In implementation, a corresponding trigger display condition may be set for each first AR special effect data, for example, the trigger display condition may include a trigger display direction and/or a trigger display distance. When the trigger display condition comprises a trigger display direction, determining that the AR equipment meets the trigger display condition of the AR special effect data when the positioning information of the AR equipment indicates that the orientation of the AR equipment is located in the set trigger display direction. When the trigger display condition comprises a trigger display distance, and when the positioning information of the AR equipment indicates that the distance between the AR equipment and the display pose of the AR special effect data is smaller than a set first distance value and/or larger than a set second distance value, determining that the AR equipment meets the trigger display condition of the AR special effect data.
Referring to fig. 5, for the first AR special effect data in fig. 5, a trigger display direction and a trigger display distance corresponding to the first AR special effect data may be generated based on the display pose of the first AR special effect data. For example, when the orientation of the AR device is within the direction range corresponding to the trigger display direction, it is determined that the AR device meets the trigger display condition. Taking the first AR device 51 in fig. 5 as an example, when the orientation of the AR device is direction 511, the orientation is within the direction range corresponding to the set trigger display direction, and the AR device meets the trigger display condition; when the orientation of the AR device is direction 512, the AR device does not satisfy the trigger display condition.
For example, if the distance between the first AR device 51 and the display location of the first AR special effect data in fig. 5 is smaller than the set first distance value and larger than the set second distance value, the first AR device 51 satisfies the set trigger display distance. The distance between the second AR device 52 and the display position of the AR special effect data in fig. 5 is greater than the set first distance value, so the second AR device 52 does not satisfy the set trigger display distance.
When the trigger display condition comprises a trigger display direction and a trigger display distance, determining that the AR equipment meets the trigger display condition of the first AR special effect data when the orientation indicated by the positioning information of the AR equipment meets the trigger display direction condition and the position information indicated by the positioning information of the AR equipment meets the trigger display distance condition. And when the orientation indicated by the positioning information of the AR equipment does not meet the condition of triggering the display direction, and/or the position information indicated by the positioning information of the AR equipment does not meet the condition of triggering the display distance, determining that the AR equipment does not meet the condition of triggering the display of the first AR special effect data.
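The combined direction-and-distance condition above can be sketched as a single predicate. The 0.5 m / 5 m distance defaults follow the example given later for step S707; the 45° direction tolerance and all names are illustrative assumptions.

```python
import math

def meets_trigger_condition(device_pos, device_yaw_deg, effect_pos,
                            trigger_dir_deg, dir_tolerance_deg=45.0,
                            min_dist=0.5, max_dist=5.0):
    """Check the combined trigger display condition: the device
    orientation must fall within the trigger display direction range,
    and the distance to the effect's display pose must lie between
    the second (minimum) and first (maximum) distance values."""
    dx = effect_pos[0] - device_pos[0]
    dy = effect_pos[1] - device_pos[1]
    dist = math.hypot(dx, dy)
    # Smallest angular difference between device heading and trigger direction.
    diff = abs((device_yaw_deg - trigger_dir_deg + 180.0) % 360.0 - 180.0)
    return diff <= dir_tolerance_deg and min_dist <= dist <= max_dist
```

When the predicate is false, the system would fall through to the second mode and display guide information instead of the special effect.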
If the AR equipment meets the triggering display condition corresponding to the AR special effect data, the AR equipment is controlled to play the audio explanation data, and the first AR special effect data is displayed. If the AR equipment does not meet the triggering display condition corresponding to the first AR special effect data, generating guide information for guiding and adjusting the pose information of the AR equipment, displaying the guide information through the AR equipment, and adjusting the pose of the AR equipment through the guide information, so that the AR equipment can clearly display the AR special effect data.
Referring to an interface schematic diagram of an AR device shown in fig. 6, fig. 6 shows guide information, for example, the guide information may be "please move the AR device to the right, and view first AR special effect data corresponding to a preset knowledge point a".
In an alternative embodiment, the method is applied to a client application platform, and the client application platform is a Web page Web application platform or an applet application platform.
When the method is implemented, the method can be applied to a client application platform, and the client can be a Web application platform on the AR equipment or an applet application platform on the AR equipment. Alternatively, the client application platform may also be an application on the AR device for AR navigation.
In implementation, referring to fig. 7, the augmented reality data presentation method may include:
s701, responding to target information codes on the AR equipment scanning bottle body, and determining a target exhibition area to be visited.
S702, acquiring positioning information of the AR equipment.
S703, judging whether the AR equipment reaches the explanation area corresponding to any preset knowledge point in the target exhibition area based on the positioning information of the AR equipment.
And S704, if yes, acquiring audio explanation data corresponding to any preset knowledge point and first AR special effect data matched with the audio explanation data.
S705, it may be determined, based on the positioning information, whether the AR device meets a trigger display condition of the first AR special effect data.
S706, if yes, playing the audio explanation data matched with the preset knowledge point through the AR device, and displaying the first AR special effect data matched with the audio explanation data. For example, when the preset knowledge point is a historical figure, the audio explanation data of the preset knowledge point may be played together with first AR special effect data including an animation effect of the historical figure; for another example, when the target exhibition area is a beer-making factory and the preset knowledge point is hop production, the audio explanation data corresponding to hop production may be played together with first AR special effect data showing the production and fermentation process of hops.
S707, if not, the AR device may be controlled to display guide information for guiding the adjustment of the pose information of the AR device, so as to guide the AR device to display the first AR special effect data. For example, when the orientation of the AR device is within the trigger display direction corresponding to the first AR special effect data, and/or the distance between the AR device and the display pose of the first AR special effect data is in the range of 0.5 to 5 meters (inclusive), it is determined that the AR device meets the trigger display condition of the first AR special effect data.
And S708, displaying the determined third AR special effect data through the AR equipment when the AR equipment reaches the destination position in the target exhibition area and/or the AR equipment has played the audio explanation data corresponding to each preset knowledge point in the target exhibition area.
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same concept, the embodiment of the present disclosure further provides an augmented reality data display device, referring to fig. 8, which is a schematic architecture diagram of the augmented reality data display device provided by the embodiment of the present disclosure, including a first determining module 801, an obtaining module 802, and a first display module 803, specifically:
A first determining module 801, configured to determine a target exhibition area to be visited in response to the augmented reality AR device scanning a target information code on the bottle;
an obtaining module 802, configured to obtain positioning information of the AR device;
and the first display module 803 is configured to play audio explanation data matched with any preset knowledge point through the AR device and display first AR special effect data matched with the audio explanation data when it is determined that the AR device reaches an explanation area corresponding to any preset knowledge point in the target exhibition area based on the positioning information.
In a possible implementation manner, when determining the target exhibition area to be visited in response to the AR device scanning the target information code on the bottle body, the first determining module 801 is configured to:
in response to the AR device scanning the target information code on the bottle body, determine the target exhibition area to be visited and a destination position of the AR device in the target exhibition area;
the apparatus further comprises: a generating module 804, configured to:
generate navigation guidance information from the current location to the destination location based on the destination location and the current location of the AR device;
and display the navigation guidance information through the AR device.
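The navigation guidance step above can be illustrated with a minimal route search over a walkable-cell map. This is only a sketch under stated assumptions: the grid map, the breadth-first search, and the function name are all stand-ins; an actual AR guide would use the venue's indoor map and positioning service.

```python
from collections import deque


def generate_route(grid, start, goal):
    """Breadth-first search over a grid of walkable cells.
    grid: list of strings, '.' = walkable, '#' = blocked.
    start, goal: (row, col) tuples.
    Returns the shortest list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct the path by walking back
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # destination unreachable from the current location
```

The returned cell sequence would then be rendered by the AR device as the navigation guidance information (for example, as floor arrows overlaid on the camera view).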
In a possible implementation manner, when determining, in response to the AR device scanning the target information code on the bottle body, the target exhibition area to be visited and the destination position of the AR device within the target exhibition area, the first determining module 801 is configured to:
in response to the AR device scanning the target information code on the bottle body, display resource pushing information through the AR device;
and in response to a triggering operation on the resource pushing information, determine the destination position corresponding to the resource pushing information in the target exhibition area to be browsed, where the destination position is a position for acquiring the target resource corresponding to the resource pushing information.
In a possible implementation manner, the audio explanation data corresponds to AR special effect data of multiple styles, and the apparatus further includes a second determining module 805, configured to:
determine, according to history information of the AR device and/or user information of a target user to which the AR device belongs, the first AR special effect data matched with the audio explanation data from the multiple styles of AR special effect data corresponding to the audio explanation data.
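The style-selection logic above can be sketched as a small selection function. The scoring rule, the field names, and the style names here are illustrative assumptions; the text only specifies that the choice depends on the device's history and/or the target user's information.

```python
def pick_effect_style(candidate_styles, history, user_info):
    """Pick one special-effect style for a piece of audio explanation.
    candidate_styles: style names available for this audio clip.
    history: style names this AR device has already displayed.
    user_info: profile dict, e.g. {"age": 9}.
    Rule (invented for illustration): young users get the 'cartoon'
    style when offered; otherwise prefer a style not yet shown."""
    if user_info.get("age", 99) < 12 and "cartoon" in candidate_styles:
        return "cartoon"
    for style in candidate_styles:
        if style not in history:  # avoid repeating what was already shown
            return style
    return candidate_styles[0]    # fall back to the default (first) style
```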
In a possible implementation manner, when displaying the first AR special effect data matched with the audio explanation data, the first display module 803 is configured to:
generate AR special effect data matched with a target user based on user information of the target user to which the AR device belongs;
and control the AR device to sequentially display the first AR special effect data matched with the audio explanation data and the AR special effect data matched with the target user; or superimpose the AR special effect matched with the target user and the first AR special effect matched with the audio explanation data to generate superimposed second AR special effect data, and display the second AR special effect data through the AR device.
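The two presentation modes just described (sequential display, or superimposing into a second effect) can be modeled minimally as follows. Effects are represented here as dicts of named layers purely for illustration; the layer model and function names are assumptions.

```python
def sequential_playlist(first_effect, user_effect):
    """Mode 1: show the audio-matched effect, then the user-matched one."""
    return [first_effect, user_effect]


def superimpose(first_effect, user_effect):
    """Mode 2: overlay the user-matched effect on the first effect to
    form the second AR special effect data; layers from the user effect
    take precedence when names clash."""
    merged = dict(first_effect)
    merged.update(user_effect)
    return merged
```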
In a possible embodiment, the apparatus further includes a second display module 806, configured to:
display the determined third AR special effect data through the AR device when the AR device reaches the destination position in the target exhibition area and/or the AR device has finished playing the audio explanation data corresponding to each preset knowledge point in the target exhibition area.
In a possible implementation manner, when playing, through the AR device, the audio explanation data matched with any preset knowledge point and displaying the first AR special effect data matched with the audio explanation data, the first display module 803 is configured to:
control the AR device to play the audio explanation data and display the first AR special effect data when it is determined based on the positioning information that the AR device meets the trigger display condition of the first AR special effect data.
In a possible embodiment, the apparatus further includes a third display module 807, configured to:
control the AR device to display guide information for guiding adjustment of the pose information of the AR device when it is determined based on the positioning information that the AR device does not meet the trigger display condition of the first AR special effect data.
In a possible implementation manner, the method is applied to a client application platform, where the client application platform is a Web-side (web page) application platform or an applet-side application platform.
In some embodiments, the functions or modules included in the apparatus provided by the embodiments of the present disclosure may be used to perform the methods described in the foregoing method embodiments; for specific implementations, reference may be made to the descriptions of the foregoing method embodiments, which are not repeated here for brevity.
Based on the same technical concept, the embodiment of the disclosure also provides an electronic device. Referring to fig. 9, a schematic structural diagram of an electronic device 900 according to an embodiment of the disclosure includes a processor 901, a memory 902, and a bus 903. The memory 902 is configured to store execution instructions and includes an internal memory 9021 and an external memory 9022. The internal memory 9021 temporarily stores operation data in the processor 901 and data exchanged with the external memory 9022, such as a hard disk; the processor 901 exchanges data with the external memory 9022 through the internal memory 9021. When the electronic device 900 runs, the processor 901 and the memory 902 communicate through the bus 903, so that the processor 901 executes the following instructions:
in response to an augmented reality (AR) device scanning a target information code on a bottle body, determining a target exhibition area to be visited;
acquiring positioning information of the AR device;
and when it is determined based on the positioning information that the AR device has reached an explanation area corresponding to any preset knowledge point in the target exhibition area, playing audio explanation data matched with the preset knowledge point through the AR device, and displaying first AR special effect data matched with the audio explanation data.
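The three instructions above (scan the code on the bottle body, obtain positioning information, then play the matched audio and show the matched special effect) can be sketched end to end as follows. All classes, field names, and the rectangular "explanation area" model are hypothetical stand-ins for the AR device SDK and the venue's positioning and content services.

```python
class FakeARDevice:
    """Stand-in for the AR device: records what it plays and displays."""

    def __init__(self):
        self.played, self.shown = [], []

    def play(self, audio):
        self.played.append(audio)

    def display(self, effect):
        self.shown.append(effect)


def run_tour_step(scanned_code, locate, knowledge_points, device):
    """scanned_code: decoded payload of the target information code.
    locate: callable returning the device position (x, y).
    knowledge_points: dicts with 'area' = (xmin, ymin, xmax, ymax),
    plus 'audio' and 'effect' entries.
    Returns the knowledge point that was triggered, or None when the
    device is not inside any explanation area."""
    target_area = scanned_code["exhibition_area"]  # area to be visited
    x, y = locate()  # positioning information of the AR device
    for point in knowledge_points:
        xmin, ymin, xmax, ymax = point["area"]
        if xmin <= x <= xmax and ymin <= y <= ymax:
            device.play(point["audio"])      # matched audio explanation
            device.display(point["effect"])  # first AR special effect
            return point
    return None
```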
Furthermore, the embodiments of the present disclosure also provide a computer-readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the steps of the augmented reality data presentation method described in the above method embodiments. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure further provide a computer program product carrying program code, where the instructions included in the program code may be used to perform the steps of the augmented reality data presentation method described in the foregoing method embodiments; for details, reference may be made to the foregoing method embodiments, which are not repeated here.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems and apparatuses described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some communication interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing is merely a specific embodiment of the disclosure, but the protection scope of the disclosure is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope of the disclosure, which should all be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (11)

1. An augmented reality data display method, comprising:
in response to an augmented reality (AR) device scanning a target information code on a bottle body, determining a target exhibition area to be visited;
acquiring positioning information of the AR device;
when it is determined based on the positioning information that the AR device has reached an explanation area corresponding to any preset knowledge point in the target exhibition area, playing audio explanation data matched with the preset knowledge point through the AR device, and displaying first AR special effect data matched with the audio explanation data;
wherein the displaying first AR special effect data matched with the audio explanation data includes:
generating AR special effect data matched with a target user based on user information of the target user to which the AR device belongs;
and controlling the AR device to sequentially display the first AR special effect data matched with the audio explanation data and the AR special effect data matched with the target user; or superimposing the AR special effect matched with the target user and the first AR special effect matched with the audio explanation data to generate superimposed second AR special effect data, and displaying the second AR special effect data through the AR device.
2. The method of claim 1, wherein the determining a target exhibition area to be visited in response to the augmented reality AR device scanning the target information code on the bottle body comprises:
in response to the AR device scanning the target information code on the bottle body, determining the target exhibition area to be visited and a destination position of the AR device in the target exhibition area;
the method further comprises:
generating navigation guidance information from the current location to the destination position based on the destination position and the current location of the AR device;
and displaying the navigation guidance information through the AR device.
3. The method of claim 2, wherein the determining the target exhibition area to be visited and the destination position of the AR device within the target exhibition area in response to the AR device scanning the target information code on the bottle body comprises:
in response to the AR device scanning the target information code on the bottle body, displaying resource pushing information through the AR device;
and in response to a triggering operation on the resource pushing information, determining the destination position corresponding to the resource pushing information in the target exhibition area to be browsed, wherein the destination position is a position for acquiring the target resource corresponding to the resource pushing information.
4. The method according to any one of claims 1 to 3, wherein the audio explanation data corresponds to AR special effect data of multiple styles, and the method further comprises:
determining, according to history information of the AR device and/or user information of a target user to which the AR device belongs, the first AR special effect data matched with the audio explanation data from the multiple styles of AR special effect data corresponding to the audio explanation data.
5. A method according to any one of claims 1 to 3, wherein the method further comprises:
displaying the determined third AR special effect data through the AR device when the AR device reaches the destination position in the target exhibition area and/or the AR device has finished playing the audio explanation data corresponding to each preset knowledge point in the target exhibition area.
6. The method according to any one of claims 1 to 3, wherein the playing, through the AR device, audio explanation data matched with any preset knowledge point and displaying first AR special effect data matched with the audio explanation data comprises:
controlling the AR device to play the audio explanation data and display the first AR special effect data when it is determined based on the positioning information that the AR device meets the trigger display condition of the first AR special effect data.
7. A method according to any one of claims 1 to 3, wherein the method further comprises:
controlling the AR device to display guide information for guiding adjustment of the positioning information of the AR device when it is determined based on the positioning information that the AR device does not meet the trigger display condition of the first AR special effect data.
8. A method according to any one of claims 1 to 3, wherein the method is applied to a client application platform, the client application platform being a Web-side application platform or an applet-side application platform.
9. An augmented reality data display device, comprising:
a first determining module, configured to determine a target exhibition area to be visited in response to an augmented reality (AR) device scanning a target information code on a bottle body;
an obtaining module, configured to obtain positioning information of the AR device;
a first display module, configured to, when it is determined based on the positioning information that the AR device has reached an explanation area corresponding to any preset knowledge point in the target exhibition area, play audio explanation data matched with the preset knowledge point through the AR device and display first AR special effect data matched with the audio explanation data;
the first display module is further configured to, when displaying the first AR special effect data matched with the audio explanation data:
generate AR special effect data matched with a target user based on user information of the target user to which the AR device belongs;
and control the AR device to sequentially display the first AR special effect data matched with the audio explanation data and the AR special effect data matched with the target user; or superimpose the AR special effect matched with the target user and the first AR special effect matched with the audio explanation data to generate superimposed second AR special effect data, and display the second AR special effect data through the AR device.
10. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the augmented reality data presentation method of any one of claims 1 to 8.
11. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the augmented reality data presentation method according to any one of claims 1 to 8.
CN202110620483.5A 2021-06-03 2021-06-03 Augmented reality data display method and device, electronic equipment and storage medium Active CN113359986B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110620483.5A CN113359986B (en) 2021-06-03 2021-06-03 Augmented reality data display method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113359986A CN113359986A (en) 2021-09-07
CN113359986B true CN113359986B (en) 2023-06-20

Family

ID=77531802

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110620483.5A Active CN113359986B (en) 2021-06-03 2021-06-03 Augmented reality data display method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113359986B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113359984A (en) * 2021-06-03 2021-09-07 北京市商汤科技开发有限公司 Bottle special effect presenting method and device, computer equipment and storage medium
CN113899359B (en) * 2021-09-30 2023-02-17 北京百度网讯科技有限公司 Navigation method, device, equipment and storage medium
CN116257309A (en) * 2021-12-10 2023-06-13 北京字跳网络技术有限公司 Content display method, device, electronic apparatus, storage medium, and program product
CN116499489A (en) * 2023-03-16 2023-07-28 阿里巴巴(中国)有限公司 Man-machine interaction method, device, equipment and product based on map navigation application
CN117765839B (en) * 2023-12-25 2024-07-16 广东保伦电子股份有限公司 Indoor intelligent navigation method, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348969A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium
CN112348968A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018104834A1 (en) * 2016-12-07 2018-06-14 Yogesh Chunilal Rathod Real-time, ephemeral, single mode, group & auto taking visual media, stories, auto status, following feed types, mass actions, suggested activities, ar media & platform
US10477277B2 (en) * 2017-01-06 2019-11-12 Google Llc Electronic programming guide with expanding cells for video preview
AU2018227710B2 (en) * 2017-02-28 2022-03-17 Magic Leap, Inc. Virtual and real object recording in mixed reality device
CN107484120A (en) * 2017-06-21 2017-12-15 湖南简成信息技术有限公司 Intelligent guide method, tour guide device and equipment based on third party application
US20180345129A1 (en) * 2018-07-27 2018-12-06 Yogesh Rathod Display virtual objects within predefined geofence or receiving of unique code from closest beacon
CN109040289A (en) * 2018-08-27 2018-12-18 百度在线网络技术(北京)有限公司 Interest point information method for pushing, server, terminal and storage medium
CN110703922A (en) * 2019-10-22 2020-01-17 成都中科大旗软件股份有限公司 Electronic map tour guide method special for tourist attraction
CN111640171B (en) * 2020-06-10 2023-09-01 浙江商汤科技开发有限公司 Historical scene explanation method and device, electronic equipment and storage medium
CN111665945B (en) * 2020-06-10 2023-11-24 浙江商汤科技开发有限公司 Tour information display method and device
CN111640202B (en) * 2020-06-11 2024-01-09 浙江商汤科技开发有限公司 AR scene special effect generation method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112348969A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium
CN112348968A (en) * 2020-11-06 2021-02-09 北京市商汤科技开发有限公司 Display method and device in augmented reality scene, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A New Chapter in Public Service: Practice and Reflection on the Construction of the "Pocket Museum" of the Nantong Museum; Huang Jin; Museum Research (02); full text *
Research on Digital Library Service Innovation Based on AR Augmented Reality Technology; Wen Riqin; Journal of Library and Information Science in Agriculture (08); full text *

Also Published As

Publication number Publication date
CN113359986A (en) 2021-09-07

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant