
WO2020024909A1 - Positioning and tracking method, terminal device, and computer-readable storage medium - Google Patents

Positioning and tracking method, terminal device, and computer-readable storage medium

Info

Publication number
WO2020024909A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
time
terminal device
marker
image acquisition
Prior art date
Application number
PCT/CN2019/098200
Other languages
English (en)
French (fr)
Inventor
胡永涛
于国星
戴景文
Original Assignee
广东虚拟现实科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201810891134.5A (CN110794955B)
Priority claimed from CN201910642093.0A (CN110442235B)
Application filed by 广东虚拟现实科技有限公司
Priority to US16/687,699 (US11127156B2)
Publication of WO2020024909A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C19/00 Gyroscopes; Turn-sensitive devices using vibrating masses; Turn-sensitive devices without moving masses; Measuring angular rate using gyroscopic effects
    • G01C19/02 Rotary gyroscopes
    • G01C19/04 Details
    • G01C19/32 Indicating or recording means specially adapted for rotary gyroscopes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C19/00 Gyroscopes; Turn-sensitive devices using vibrating masses; Turn-sensitive devices without moving masses; Measuring angular rate using gyroscopic effects
    • G01C19/02 Rotary gyroscopes
    • G01C19/34 Rotary gyroscopes for indicating a direction in the horizontal plane, e.g. directional gyroscopes
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 Instruments for performing navigational calculations
    • G01C21/206 Instruments for performing navigational calculations specially adapted for indoor navigation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/163 Wearable computers, e.g. on a belt
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1686 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being an integrated camera
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/1633 Constructional details or arrangements of portable computers not specific to the type of enclosures covered by groups G06F1/1615 - G06F1/1626
    • G06F1/1684 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675
    • G06F1/1694 Constructional details or arrangements related to integrated I/O peripherals not covered by groups G06F1/1635 - G06F1/1675 the I/O peripheral being a single or a set of motion sensors for pointer control or gesture input obtained by sensing movements of the portable computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker

Definitions

  • the present application relates to the field of virtual reality technology, and more particularly, to a positioning and tracking method, terminal equipment, and computer-readable storage medium.
  • augmented reality (AR) is a technology that enhances the user's perception of the real world through information provided by computer systems.
  • the generated virtual object, scene or system prompt information is superimposed on the real scene to enhance or modify the perception of the real world environment or data representing the real world environment. Therefore, how to accurately and effectively perform positioning and tracking on a display device (such as a head-mounted display device, smart glasses, a smart phone, etc.) is an urgent problem to be solved.
  • This application proposes a positioning and tracking method, a terminal device, and a computer-readable storage medium.
  • an embodiment of the present application provides a positioning and tracking method, which is applied to a terminal device.
  • the terminal device includes a first image acquisition device and a second image acquisition device.
  • the method includes: obtaining, according to a first image containing a marker collected by the first image acquisition device, the relative position and posture information between the first image acquisition device and the marker to obtain first information; obtaining, according to a second image containing a target scene collected by the second image acquisition device, the position and posture information of the second image acquisition device within the target scene to obtain second information, wherein the marker and the terminal device are located in the target scene; and using the first information and the second information to obtain the position and posture information of the terminal device relative to the marker to obtain target information.
  • an embodiment of the present application provides a terminal device, including: a first image acquisition device for acquiring a first image containing a marker; a second image acquisition device for acquiring a second image containing a target scene; a memory storing one or more computer programs; and one or more processors. When the computer programs are executed by the one or more processors, the processors are caused to perform the following steps: obtaining, according to the first image, the relative position and attitude information between the first image acquisition device and the marker to obtain first information; obtaining, according to the second image, the position and posture information of the second image acquisition device in the target scene to obtain second information, wherein the marker and the terminal device are located in the target scene; and using the first information and the second information to obtain the position and attitude information of the terminal device relative to the marker to obtain target information.
  • an embodiment of the present application provides a positioning and tracking method.
  • the method includes: collecting an image containing a marker; identifying the marker in the image and acquiring first spatial position information; acquiring posture change information of a terminal device, the posture change information including position change information and attitude change information of the terminal device; acquiring second spatial position information of the terminal device according to the posture change information; and acquiring current position information of the terminal device based on the first spatial position information and/or the second spatial position information.
  • An embodiment of the present application provides a terminal device including a memory and a processor.
  • the memory stores a computer program.
  • when the computer program is executed by the processor, the processor performs the following steps: collecting an image containing a marker; identifying the marker in the image and acquiring first spatial position information; acquiring posture change information of the terminal device, the posture change information including position change information and attitude change information of the terminal device; acquiring second spatial position information of the terminal device according to the posture change information; and acquiring current position information of the terminal device based on the first spatial position information and/or the second spatial position information.
  • an embodiment of the present application provides a computer storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, the processor is caused to execute the method in the foregoing embodiments.
  • FIG. 1 is a schematic diagram of a positioning and tracking system in an embodiment
  • FIG. 2 is a block diagram of a terminal device in an embodiment
  • FIG. 3 is a flowchart of a positioning and tracking method according to an embodiment
  • FIG. 4 is an application scenario diagram of a positioning and tracking method according to an embodiment
  • FIG. 5 is a flowchart of a positioning and tracking method in another embodiment
  • FIG. 6 is a flowchart of a positioning and tracking method according to another embodiment
  • FIG. 7 is a schematic diagram of a relationship between a terminal device, a marker, and a target scene in an embodiment
  • FIG. 8 is a flowchart of acquiring position and attitude information of a terminal device relative to a marker according to an embodiment
  • FIG. 9 is a flowchart of updating prediction information according to first information at a first time in an embodiment
  • FIG. 10 is a flowchart of updating prediction information according to second information at a second time in an embodiment
  • FIG. 11 is an example diagram of acquiring position and attitude information of a terminal device relative to a marker in an embodiment
  • FIG. 13 is a flowchart of determining a first time in an embodiment
  • FIG. 14 is a component connection diagram of a terminal device in an embodiment.
  • a positioning and tracking system 10 provided in an embodiment of the present application includes a terminal device 100 and a marker 200.
  • the terminal device 100 may be a head-mounted display device, or may be a mobile device such as a mobile phone or a tablet.
  • the head-mounted display device may be an integrated head-mounted display device or a head-mounted display device connected to an external electronic device.
  • the terminal device 100 may also be a smart terminal, such as a mobile phone, connected to an external/plug-in head-mounted display device; that is, the terminal device 100 may serve as the processing and storage device of the head-mounted display device and be plugged into or connected to the external head-mounted display device.
  • the terminal device 100 is a head-mounted display device, and includes a processor 110 and a memory 120.
  • the memory 120 stores one or more application programs, which may be configured to be executed by the one or more processors 110; the one or more programs are used to perform the methods described in this application.
  • the processor 110 may include one or more processing cores.
  • the processor 110 uses various interfaces and lines to connect the various parts of the entire terminal device 100, and performs the various functions of the electronic device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120.
  • the processor 110 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
  • the processor 110 may integrate one or a combination of a central processing unit (CPU), a graphics processor (GPU), and a modem. Among them, the CPU mainly handles the operating system, user interface, and application programs; the GPU is responsible for rendering and drawing the displayed content; and the modem is used for wireless communication.
  • the modem may not be integrated into the processor 110, and may be implemented by a communication chip alone.
  • the memory 120 may include a random access memory, and may also include a read-only memory.
  • the memory 120 may be used to store instructions, programs, codes, code sets, or instruction sets.
  • the memory 120 may include a storage program area and a storage data area, where the storage program area may store instructions for implementing an operating system and instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.) , Instructions for implementing the following method embodiments, and the like.
  • the storage data area may also store data and the like created by the electronic device 100 during use.
  • the terminal device 100 may further include a camera 130 for capturing an image of a real object and a scene image of a target scene.
  • the camera 130 may be an infrared camera or a visible light camera, and the specific type is not limited.
  • the terminal device 100 may further include one or more of the following components: a display module, an optical module, a communication module, and a power source.
  • the display module may include a display control unit.
  • the display control unit is configured to receive a display image of the virtual content rendered by the processor, display the display image on the optical module, and enable the user to view the virtual content through the optical module.
  • the display module may be a display screen or a projection device, and is used to display an image.
  • the optical module can use an off-axis optical system or a waveguide optical system. After the display module displays the display image, the image can be projected to the user's eyes. The user can see the display image projected by the display module through the optical module.
  • the user can also observe the real environment through the optical module, and experience the visual effect of the virtual content superimposed on the real environment.
  • the communication module may be a module such as Bluetooth, WiFi, ZigBee, and the terminal device may communicate with the interactive device through the communication module to perform information and instruction interaction.
  • the power supply can supply power to the entire terminal equipment to ensure the normal operation of each component of the terminal equipment.
  • the marker may be any graphic or object having a recognizable feature mark.
  • the marker may be a pattern having a topological structure, where the topological structure refers to the connection relationship between the sub-markers and the feature points in the marker, but is not limited thereto.
  • an infrared filter may be provided outside the marker, and the marker is invisible to the user.
  • the camera may be an infrared camera that collects the image of the marker by emitting infrared light, which reduces the influence of visible light on the marker image and improves the accuracy of positioning and tracking.
  • the camera of the terminal device 100 may collect a marker image containing the marker 200.
  • the processor of the terminal device 100 obtains the marker image and related information, identifies and tracks the marker 200 contained in the marker image, and obtains the identity information of the marker 200 and the position and attitude relationship between the terminal device 100 and the marker 200.
  • an embodiment of the present application provides a positioning and tracking method, which is applied to a terminal device 100 and includes the following steps:
  • Step 310 Acquire an image containing a marker.
  • Step 320 Identify a marker in the image, and obtain first spatial position information.
  • the terminal device collects an image containing a marker through a camera, and identifies and tracks the marker in the image to obtain the position and posture information of the terminal device relative to the marker, and then obtains the position and posture information of the terminal device in the real scene, that is, the first spatial position information.
  • a plurality of markers may be discretely disposed at multiple positions in a real scene, and one of the markers may be set as a target marker near the entrance of the real scene (such as near the doorway of a room or the entrance to an area), that is, near the starting position where the user enters the scene.
  • the terminal device can collect an image containing the target marker through the camera to initially locate the position of the terminal device in the real scene; the position of the terminal device determined according to the target marker at this time is the initial position of the terminal device in the real scene.
  • Step 330 Acquire the posture change information of the terminal device.
  • when a user moves in a real scene, since markers are not set in all areas of the real scene, the camera of the terminal device may fail to capture an image containing a marker; in this case, the current position of the terminal device can be estimated from the posture change information of the terminal device.
  • the terminal device can obtain the position and attitude information of the terminal device in a real scene, that is, the second spatial position information, in real time through a visual-inertial odometry (VIO).
  • the terminal device can collect the scene image in real time through the camera, and calculate the position and posture information of the terminal device in the real scene according to the key points (or feature points) contained in the scene image.
  • a user may first detect a target marker located near the entrance of the real environment through the terminal device to locate the target marker and obtain the first spatial position information corresponding to the target marker; this first spatial position information can be used as a reference for calculating the position of the terminal device through VIO.
  • the posture change information of the terminal device relative to the first spatial position information corresponding to the target marker can be obtained in real time through VIO, so that the position of the terminal device, that is, the current location of the user in the real scene, can be calculated in real time.
  • Step 340 Acquire the second spatial position information of the terminal device according to the posture change information.
  • Step 350 Acquire the current location information of the terminal device based on the first spatial location information and / or the second spatial location information.
  • the acquired position information may be directly used as the current position information of the terminal device; the terminal device may also combine the first spatial position information and the second spatial position information to obtain the current position information of the terminal device.
  • when the camera of the terminal device collects an image containing a marker, the terminal device may directly use the first spatial position information as the current position information; when the camera does not collect an image containing a marker, only the second spatial position information obtained from VIO is available, and it may be directly used as the current position information; if the terminal device has both obtained the first spatial position information from a marker image and the second spatial position information from VIO, then, assuming that marker-based positioning is more accurate than VIO positioning, the more accurate first spatial position information can be selected as the current position information, or the first spatial position information and the second spatial position information can be combined, for example by a weighted calculation, to obtain the current position information of the terminal device; see the sketch below.
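  • the following is a minimal illustrative sketch (not taken from the patent) of the selection and weighting logic described above; the function name, the weight value, and the assumption that marker positioning is the more accurate source are all illustrative.

```python
import numpy as np

def fuse_position(marker_pos, vio_pos, marker_weight=0.7):
    """Return a current position estimate for the terminal device.

    marker_pos: 3-vector from marker recognition, or None if no marker image.
    vio_pos:    3-vector from visual-inertial odometry, or None if unavailable.
    """
    if marker_pos is not None and vio_pos is None:
        return np.asarray(marker_pos)      # only the marker is available
    if marker_pos is None and vio_pos is not None:
        return np.asarray(vio_pos)         # only VIO is available
    if marker_pos is None and vio_pos is None:
        raise ValueError("no position source available")
    # Both available: weighted combination, assuming the marker-based
    # position is the more accurate source (weights chosen for illustration).
    return marker_weight * np.asarray(marker_pos) + (1 - marker_weight) * np.asarray(vio_pos)

# Example: marker says (1.0, 0.0, 2.0), VIO says (1.1, 0.05, 2.02)
print(fuse_position([1.0, 0.0, 2.0], [1.1, 0.05, 2.02]))
```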
  • when the terminal device recognizes and tracks a marker (for example, at the location of user A or user B), it can display the virtual image corresponding to the marker and determine the position and posture information of the terminal device in the venue according to the marker; when the user wears the terminal device and moves through the museum, the camera of the terminal device sometimes cannot capture an image of a marker (for example, at the location of user C), and the terminal device can then obtain the posture change information relative to the first spatial position information in real time through VIO, so as to determine its position and attitude information in the venue.
  • the associated virtual image can be obtained according to the current position information obtained by VIO positioning, and rendered and displayed, so that a virtual image associated with the position and posture can also be displayed when the terminal device does not detect a marker.
  • the positioning and tracking method provided in the above embodiment can determine the position of the user by tracking the marker and, when the marker cannot be detected, calculate the position of the user from the posture change information of the terminal device; by combining marker-based positioning with VIO, the user position can be accurately obtained in real time, which improves the accuracy of indoor positioning and tracking.
  • a positioning and tracking method includes the following steps:
  • Step 510 Acquire an image containing a marker.
  • Step 520 Identify the marker in the image and obtain the first spatial position information.
  • Step 522 Identify the marker in the image, and obtain identity information of the marker.
  • the terminal device can identify the collected image containing the marker and obtain the identity information corresponding to the marker; different markers correspond to different identity information.
  • the identity information may be represented by one or more of numbers, letters, and symbols; the identity information of a marker can be set according to features such as the pattern color, shape, and topology of the marker, but is not limited thereto.
  • Step 524 Obtain the marker position information of the marker in the pre-stored map based on the identity information.
  • each marker corresponds to one piece of identity information and to its position information in the pre-stored map.
  • the position information of the marker in the pre-stored map can be obtained according to the identification information.
  • the pre-stored map may be a virtual map created and stored according to a real environment in advance, and the position of the marker in the pre-stored map refers to the actual position of the marker in the real environment.
  • the terminal device may store the identity information of multiple markers locally or on a server, and the marker position information corresponding to each marker, that is, the location of the marker, can be looked up locally or on the server according to the obtained identity information of the marker.
  • Step 526 Obtain information about the relative relationship between the terminal device and the marker.
  • the terminal device may obtain the relative relationship information between the terminal device and the marker according to the collected image containing the marker, where the relative relationship information includes relative position and posture information between the terminal device and the marker.
  • Step 528 Obtain first spatial position information of the terminal device in the pre-stored map based on the marked position information and the relative relationship information.
  • the terminal device may combine the position information of the marker in the pre-stored map and the relative relationship information between the terminal device and the marker to determine the position and posture information of the terminal device in the current real scene, that is, to obtain the first spatial position information, as illustrated in the sketch below.
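  • a minimal sketch of step 528 under the assumption that the marker's position in the pre-stored map and the terminal-marker relative relationship are both expressed as 4x4 homogeneous transforms; the symbols and numeric values are illustrative, not the patent's notation.

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_map_marker: pose of the marker in the pre-stored map (looked up by identity).
T_map_marker = pose(np.eye(3), [5.0, 2.0, 0.0])

# T_device_marker: relative relationship recovered from the marker image,
# i.e. the marker's pose expressed in the terminal-device frame.
T_device_marker = pose(np.eye(3), [0.0, 0.0, 1.5])

# First spatial position information: the device's pose in the pre-stored map.
T_map_device = T_map_marker @ np.linalg.inv(T_device_marker)
print(T_map_device[:3, 3])   # device position in the map, here [5. 2. -1.5]
```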
  • after step 522, step 530 and step 540 may further be included.
  • Step 530 Determine whether the marker is a target marker according to the identity information
  • Step 540 When the marker is the target marker, construct a virtual scene matching the pre-stored map based on the target marker.
  • after the terminal device obtains the identity information of the marker, it can check the identity information to determine whether the marker is a target marker.
  • when the marker is a target marker, a virtual scene corresponding to the pre-stored map can be constructed based on the target marker and displayed.
  • target markers can be placed at the boundaries of different realistic scenes of the AR / VR environment.
  • for example, in a multi-theme AR/VR museum there are multiple exhibition scenes such as ocean, grassland, and starry sky.
  • Target markers corresponding to the real scene can be set at the entrance of each real scene.
  • when the terminal device collects the target marker of the ocean-themed real scene, an ocean-related virtual scene can be constructed based on that target marker; when the user moves from the ocean-themed scene to the starry-sky-themed scene and the terminal device collects the target marker of the starry-sky-themed real scene, a starry-sky-related virtual scene can be constructed based on that target marker and replace the previous ocean-related virtual scene, and the relevant virtual scene is displayed to the user through the display module of the terminal device.
  • step 550 is further included.
  • Step 550 Obtain virtual object data corresponding to the marker, and display the virtual object corresponding to the marker according to the virtual object data.
  • the identity information of each marker corresponds one-to-one with its bound virtual object.
  • after the terminal device obtains the identity information of the marker, it can obtain virtual object data corresponding to the marker according to the identity information, and generate and display a virtual object according to the virtual object data and the position and posture information of the terminal device relative to the marker.
  • the virtual objects corresponding to the markers may be displayed alone or in combination with the aforementioned virtual scenes.
  • for example, markers are set next to the exhibits, and the terminal device can display virtual objects related to an exhibit by collecting images of its marker, such as a text introduction of the exhibit, related virtual animations, virtual mini-games, and the like, but is not limited thereto; the user can see the virtual object superimposed on the real scene through the terminal device, which enhances the sense of interaction.
  • Step 560 Acquire the posture change information of the terminal device.
  • step 560 may include steps 562, 564, and 566.
  • Step 562 Acquire a scene image containing key points.
  • key points can be points with obvious features in the image, such as the edges and corners of objects, which can be used to represent the location of a point in the real environment.
  • the scene image may include multiple frames of realistic scene images taken within a certain period of time, and each frame of the realistic scene image may contain multiple key points for positioning.
  • Step 564 Extract description vectors of key points in the current image.
  • by extracting the positions of the same key point in two adjacent frames of images, the terminal device can obtain the description vector of the key point from its position in the previous frame to its position in the adjacent next frame.
  • Step 566 Obtain posture change information of the terminal device based on the description vector.
  • after the terminal device extracts the description vector of a key point, it can use the time interval between the two adjacent frames and the modulus length and direction of the description vector to calculate the spatial displacement of the key point relative to the camera of the terminal device within that interval, thereby obtaining the position change information of the terminal device, as sketched below.
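  • a minimal sketch, under an assumed 2D pixel representation, of how the description vectors of matched key points between two adjacent frames can be turned into a rough motion estimate; the names and values are illustrative only.

```python
import numpy as np

def keypoint_motion(prev_pts, curr_pts, dt):
    """prev_pts, curr_pts: (N, 2) pixel positions of the same key points in
    two adjacent frames; dt: time interval between the frames (seconds)."""
    vectors = np.asarray(curr_pts) - np.asarray(prev_pts)   # description vectors
    mean_vec = vectors.mean(axis=0)                         # average displacement
    magnitude = np.linalg.norm(mean_vec)                    # modulus length
    direction = mean_vec / magnitude if magnitude > 0 else mean_vec
    pixel_velocity = magnitude / dt                         # pixels per second
    return mean_vec, direction, pixel_velocity

prev = [[100, 120], [200, 240], [320, 80]]
curr = [[103, 121], [203, 241], [323, 81]]
print(keypoint_motion(prev, curr, dt=1 / 30))
```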
  • an IMU (inertial measurement unit) may also be provided in the terminal device.
  • the posture change information of the terminal device can be obtained in real time through the IMU.
  • Step 570 Acquire the second spatial position information of the terminal device according to the posture change information.
  • Step 580 Acquire the current location information of the terminal device based on the first spatial location information and / or the second spatial location information.
  • step 590 may be further included.
  • Step 590 Generate a virtual screen according to the current position information.
  • the terminal device may generate and display a virtual screen corresponding to its current spatial position and posture information in the real scene; for example, in a VR/AR museum, a dynamic signpost guide line corresponding to the current position and attitude information may be displayed on the terminal device to guide the user to find the next marker (virtual exhibit).
  • the location information of the terminal device in the real scene and the virtual screen may be associated in advance, and the associated information may be stored locally or in the cloud of the terminal device.
  • steps 5110, 5120, 5130, and 5140 may be further included.
  • Step 5110 Acquire an image containing the new marker.
  • Step 5120 Identify the new marker and obtain the position information of the new marker in the pre-stored map.
  • Step 5130 Recalculate the first spatial position information of the terminal device in the pre-stored map according to the position information of the new marker.
  • Step 5140 Calibrate the posture change information of the terminal device based on the recalculated first spatial position information.
  • the VIO may drift in the process of continuously measuring its own posture change information.
  • the terminal device can calibrate the position change information of the VIO using the position information calculated from a newly collected marker, so that the posture change information restarts with the new marker as a reference, improving tracking accuracy.
  • as one method, the position information of the new marker can be obtained first, and the relative position and attitude relationship between the new marker and the target marker in the real scene can then be used to calibrate the posture change information; that is, the VIO information is still referenced to the target marker after calibration.
  • as another approach, the posture change information obtained by the VIO may be directly cleared and recalculated with the new target marker as the reference; alternatively, the position and attitude information of the terminal device relative to the new target marker and the relative position and attitude between the new target marker and the initial target marker (the target marker identified by the terminal device for the first time, generally set at the entrance of the venue) can be obtained, the position and attitude information of the terminal device relative to the initial target marker can be calculated, and the posture change information obtained by VIO can be calibrated accordingly.
  • in the latter method, the VIO follows a response curve and gradually calibrates its posture change information based on the marker data, instead of producing a sudden change in the picture displayed by the terminal device, so that the user has a better visual experience; a sketch of this gradual calibration is given below.
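  • a minimal sketch of the gradual "response curve" idea: blending the drifted VIO position toward the marker-derived position over several frames rather than snapping to it; the smoothing factor is an assumption made for illustration.

```python
import numpy as np

def calibrate_vio(vio_pos, marker_pos, alpha=0.1):
    """Move the VIO position estimate a fraction alpha toward the
    marker-derived position each frame (0 < alpha <= 1)."""
    return (1 - alpha) * np.asarray(vio_pos) + alpha * np.asarray(marker_pos)

vio = np.array([4.8, 2.3, 0.0])      # drifted VIO estimate
target = np.array([5.0, 2.0, 0.0])   # position recomputed from a new marker
for _ in range(10):                  # converges gradually, no sudden jump
    vio = calibrate_vio(vio, target)
print(vio)
```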
  • the positioning and tracking method provided in the foregoing embodiment displays dynamic virtual images in association with the current position change of the terminal device, which enhances immersion, and can calibrate the current posture change information when a new marker is collected, further improving the accuracy of positioning.
  • FIG. 6 another embodiment of the present application provides a positioning and tracking method, which is applied to a terminal device.
  • the method may include steps 610 to 630.
  • Step 610 Obtain the relative position and attitude information between the first image acquisition device and the marker according to the first image including the marker collected by the first image acquisition device to obtain the first information.
  • the terminal device is provided with a first image acquisition device for acquiring an image of the marker, and the position and posture information of the first image acquisition device relative to the marker can be determined based on the image, that is, six degrees of freedom of the first image acquisition device relative to the marker Information, including three translational degrees of freedom and three rotational degrees of freedom.
  • the three translational degrees of freedom are used to describe the X, Y, and Z coordinate values of the three-dimensional object.
  • the three rotational degrees of freedom include the pitch angle (Pitch), the roll angle (Roll), and the yaw angle (Yaw).
  • Step 620 Acquire the position and posture information of the second image acquisition device in the target scene according to the second image containing the target scene collected by the second image acquisition device to obtain the second information, where the marker and the terminal device are located within the target scene.
  • the terminal device is also provided with a second image acquisition device, which is used to acquire a scene image in which the target scene is in the visual range.
  • the terminal device and the marker are both located in the target scene.
  • the marker 102 and the terminal device 103 are both located in the target scene 101.
  • the first image acquisition device of the terminal device 103 is configured to acquire an image containing the marker 102, and the second image acquisition device is used to acquire an image of the target scene 101.
  • the terminal device may obtain position and posture information of the second image acquisition device in the target scene according to the collected scene images, and obtain the second information.
  • the terminal device can use the VIO calculation to obtain the second information, obtain the angular velocity and acceleration data of the terminal device through the inertial measurement unit, and combine the scene image collected by the second image acquisition device to obtain the position and location of the second image acquisition device in the target scene. Posture information.
  • Step 630 Use the first information and the second information to obtain the position and posture information of the terminal device relative to the marker to obtain target information.
  • the terminal device can obtain the position and posture information of the terminal device relative to the marker, that is, target information, based on the position and posture information of the first image acquisition device relative to the marker and the position and posture information of the second image acquisition device in the target scene.
  • the first information between the first image acquisition device and the marker may be used as the target information, or the second information of the second image acquisition device in the target scene may be used as the target information.
  • the terminal device may also comprehensively obtain the target information by combining the first information and the second information, for example, using the average value of the first information and the second information as the target information, or assigning different weights to the first information and the second information and weighting them; a sketch of such a weighted combination is given below.
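  • a minimal sketch of weighting two position-and-attitude estimates: positions are combined with weights, and attitudes (unit quaternions, assumed (w, x, y, z) order) are blended with a normalized weighted sum, which is only a reasonable approximation when the two attitudes are close; the weights and values are illustrative.

```python
import numpy as np

def blend_pose(p1, q1, p2, q2, w1=0.5):
    """Weighted combination of two pose estimates (position + unit quaternion)."""
    w2 = 1.0 - w1
    p = w1 * np.asarray(p1) + w2 * np.asarray(p2)
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    if np.dot(q1, q2) < 0:           # keep the quaternions in the same hemisphere
        q2 = -q2
    q = w1 * q1 + w2 * q2
    return p, q / np.linalg.norm(q)  # renormalize the blended quaternion

pos, quat = blend_pose([1.0, 0.0, 2.0], [1, 0, 0, 0],
                       [1.1, 0.1, 2.0], [0.999, 0.04, 0, 0], w1=0.6)
print(pos, quat)
```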
  • the terminal device may also obtain the position and posture information of the terminal device relative to the marker through the inertial measurement unit, and update that position and posture information with at least one of the first information and the second information, thereby obtaining the target information.
  • target information of a terminal device relative to a marker is obtained by using the first information acquired by the first image acquisition device and the second information acquired by the second image acquisition device, so that the positioning and tracking of the terminal device are more accurate.
  • using the first information and the second information to obtain the position and posture information of the terminal device relative to the marker to obtain the target information may include the following steps.
  • Step 810 Use the inertial measurement unit to obtain the predicted position and attitude information of the terminal device relative to the marker at different times, and obtain the predicted information at different times.
  • the inertial measurement unit may use a gyroscope to measure angle changes of three degrees of freedom of rotation of the terminal device, and use an accelerometer to measure displacements of the three degrees of freedom of movement of the terminal device.
  • the position change and attitude change of the terminal device can be accumulated by the inertial measurement unit to predict the position and attitude information of the terminal device relative to the marker at different times.
  • after the terminal device uses the inertial measurement unit to obtain the prediction information at a previous time, it can obtain the prediction information at the current time by integrating from the prediction information at the previous time, and use the prediction information at the current time as the position and attitude information of the terminal device relative to the marker at the current time; a dead-reckoning sketch is given below.
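  • a minimal dead-reckoning sketch of integrating inertial measurements from one prediction to the next; a planar, yaw-only model is used purely to keep the example short, whereas the device would integrate all three rotational and three translational degrees of freedom.

```python
import numpy as np

def predict_next(pos, yaw, vel, gyro_z, accel_xy, dt):
    """One integration step: previous prediction -> prediction at the next time."""
    yaw_new = yaw + gyro_z * dt                       # integrate angular rate
    c, s = np.cos(yaw_new), np.sin(yaw_new)
    accel_world = np.array([[c, -s], [s, c]]) @ np.asarray(accel_xy)
    vel_new = vel + accel_world * dt                  # integrate acceleration
    pos_new = pos + vel_new * dt                      # integrate velocity
    return pos_new, yaw_new, vel_new

pos, yaw, vel = np.zeros(2), 0.0, np.zeros(2)
for _ in range(100):                                  # 100 IMU samples at 100 Hz
    pos, yaw, vel = predict_next(pos, yaw, vel, gyro_z=0.1,
                                 accel_xy=[0.2, 0.0], dt=0.01)
print(pos, yaw)
```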
  • Step 820 When the first information at the first time is obtained, the prediction information at the first time is updated by using the first information to obtain the first prediction information to re-obtain the prediction information after the first time.
  • the terminal device acquires the relative position and attitude information between the first image acquisition device and the marker according to the first image acquired at the first time; that is, when the first information at the first time is acquired, the first information can be used to update the prediction information obtained by the inertial measurement unit at the first time to obtain the first prediction information, and the inertial measurement unit can then re-obtain the prediction information at each time after the first time based on the first prediction information.
  • an image including a marker may be collected by a first image acquisition device, and relative position and attitude information between the first image acquisition device and the marker may be acquired.
  • the initial rigid body relationship between the first image acquisition device and the inertial measurement unit can be used to convert the relative position and attitude information between the first image acquisition device and the marker into the relative position and posture information between the inertial measurement unit and the marker, that is, the initial position and attitude information of the terminal device relative to the marker, which serves as the initial prediction information of the inertial measurement unit.
  • the inertial measurement unit can predict the position and attitude information of the terminal device relative to the marker at different times.
  • if the first image acquisition device does not acquire the first image, the inertial measurement unit does not obtain the initial position and attitude information of the terminal device relative to the marker and may remain in a waiting state.
  • step 820 may include steps 822 to 826.
  • Step 822 Obtain a first rigid body relationship between the first image acquisition device and the inertial measurement unit.
  • the first rigid body relationship between the first image acquisition device and the inertial measurement unit refers to a structural placement relationship between the first image acquisition device and the inertial measurement unit, and the placement relationship may include the first image acquisition device and the inertial measurement Information such as the distance and orientation between the units, the placement relationship can be obtained through actual measurement, can also be obtained using the structural design value, or obtained through calibration.
  • the placement relationship can reflect the rotation amount and translation amount of the first image acquisition device relative to the inertial measurement unit (or of the inertial measurement unit relative to the first image acquisition device); the rotation amount and translation amount indicate the rotation angle and displacement required to make the spatial coordinates of the first image acquisition device coincide with the spatial coordinates of the inertial measurement unit, where the spatial coordinates of the first image acquisition device are a three-dimensional coordinate system established at the center point of the first image acquisition device, and the spatial coordinates of the inertial measurement unit are a three-dimensional coordinate system established at the center point of the inertial measurement unit; the spatial coordinates are not limited to being established at a center point.
  • Step 824 Obtain position and posture information of the inertial measurement unit relative to the marker according to the first information and the first rigid body relationship at the first moment.
  • the first image acquisition device and the inertial measurement unit are both disposed on the terminal device, and the pose of the first image acquisition device relative to the marker can be mapped to the pose of the inertial measurement unit relative to the marker through the first rigid body relationship between the first image acquisition device and the inertial measurement unit.
  • the terminal device can convert the relative position and posture information between the first image acquisition device and the marker at the first moment according to the first rigid body relationship, and obtain the position and posture information of the inertial measurement unit relative to the marker at the first moment.
  • Step 826 Update the prediction information at the first time by using the position and attitude information of the inertial measurement unit relative to the marker to obtain the first prediction information.
  • the terminal device obtains the position and posture information of the inertial measurement unit relative to the marker at the first moment through the first rigid body relationship transformation, and can use the position and posture information to update the prediction information at the first moment.
  • an information update parameter may be obtained from the position and posture information of the inertial measurement unit relative to the marker at the first moment and the prediction information at the first moment; the information update parameter may be the deviation between that position and attitude information and the prediction information, and the prediction information at the first time is updated based on the information update parameter.
  • the position and posture information of the inertial measurement unit relative to the marker at the first moment and the prediction information may also be weighted to obtain updated prediction information, and the weights of the weighted calculation may be set according to actual requirements.
  • the prediction information after the first time can be re-acquired according to the first prediction information at the first time: based on the first prediction information, the inertial measurement unit integrates the position and attitude changes of the terminal device at each time after the first time and re-obtains the prediction information at each of those times; a sketch of the update step is given below.
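  • a minimal sketch of one possible update step, assuming the information update parameter is the deviation between the marker-derived pose and the predicted pose and that it is applied with a fixed gain; the gain and numeric values are illustrative.

```python
import numpy as np

def update_prediction(predicted_pos, measured_pos, gain=0.8):
    """Correct the predicted position with the deviation from the measurement."""
    deviation = np.asarray(measured_pos) - np.asarray(predicted_pos)  # update parameter
    return np.asarray(predicted_pos) + gain * deviation               # first prediction info

pred = np.array([2.00, 1.00, 0.50])     # IMU prediction at the first time
meas = np.array([2.10, 0.95, 0.52])     # pose derived from the marker image
print(update_prediction(pred, meas))    # corrected value used for re-integration
```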
  • the terminal device may also update and correct the first rigid body relationship between the first image acquisition device and the inertial measurement unit, so that the first rigid body relationship is more accurate. Updating the first rigid body relationship may include steps (1) to (3).
  • Step (1) Use the first rigid body relationship and the first prediction information to predict the relative position and attitude information between the first image acquisition device and the marker to obtain the first attitude prediction information.
  • the terminal device may use the first rigid body relationship between the first image acquisition device and the inertial measurement unit to perform coordinate conversion on the first prediction information at the first moment, and recalculate the relative position and attitude information between the first image acquisition device and the marker to obtain the first attitude prediction information.
  • Step (2) Acquire an error between the first information at the first moment and the first attitude prediction information.
  • the terminal device may obtain errors between the first attitude prediction information at the first moment and the actually determined relative position and attitude information (ie, the first information) of the first image acquisition device and the marker.
  • the difference between the first information at the first moment and the first pose prediction information may be calculated and an absolute value may be taken to obtain an error between the first information and the first pose prediction information.
  • Step (3) Update the first rigid body relationship according to the error.
  • the error between the first information at the first moment and the first attitude prediction information mainly refers to the error between the actual value and the predicted value of the relative position and attitude information between the first image acquisition device and the marker.
  • the first rigid body relationship is updated according to the error between the first information and the first attitude prediction information to improve the accuracy of positioning and tracking; the smaller the error between the first information and the first attitude prediction information, the more accurate the first rigid body relationship.
  • the number of updates of the first rigid body relationship may be recorded, and it may be determined whether the number of updates is greater than a preset number; when the number of updates is greater than the preset number, the updating of the first rigid body relationship may be ended, as sketched below.
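  • a minimal sketch of limiting and applying updates to the first rigid body relationship; the correction rule, step size, and update limit are assumptions for illustration, not the patent's procedure.

```python
import numpy as np

def refine_first_rigid_body(t_cam_imu, predicted_cam_marker, measured_cam_marker,
                            update_count, max_updates=10, step=0.2):
    """Apply one small correction to the translation part of the first rigid
    body relationship, unless the preset number of updates has been reached."""
    if update_count >= max_updates:
        return np.asarray(t_cam_imu), update_count           # stop updating
    error = np.asarray(measured_cam_marker) - np.asarray(predicted_cam_marker)
    return np.asarray(t_cam_imu) + step * error, update_count + 1

t, n = refine_first_rigid_body([0.05, 0.00, 0.02],
                               predicted_cam_marker=[1.00, 0.00, 2.00],
                               measured_cam_marker=[1.02, 0.01, 1.99],
                               update_count=0)
print(t, n)
```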
  • Step 830 When the second information at the second time is obtained, use the second information to update the prediction information at the second time to obtain the second prediction information to re-obtain the prediction information after the second time.
  • the terminal device obtains the position and posture information of the second image acquisition device in the target scene according to the scene image collected by the second image acquisition device at the second moment, that is, the second information at the second moment, and uses the second information to update the prediction information of the inertial measurement unit at the second moment to obtain the second prediction information.
  • the prediction information after the second time may be reacquired according to the second prediction information at the second time, and the inertial measurement unit may integrate the second prediction information to obtain the prediction information at each time after the second time.
  • step 830 may include steps 832 to 836.
  • Step 832 Obtain a second rigid body relationship between the second image acquisition device and the inertial measurement unit.
  • the second rigid body relationship between the second image acquisition device and the inertial measurement unit refers to the structural placement relationship between the second image acquisition device and the inertial measurement unit; the placement relationship may include the rotation and displacement between the second image acquisition device and the inertial measurement unit, and can be obtained through actual measurement, from the structural design value, or through calibration.
  • the second rigid body relationship reflects the amount of rotation and translation required by the second image acquisition device relative to the inertial measurement unit or the inertial measurement unit relative to the second image acquisition device, and the amount of rotation and translation indicates the space of the second image acquisition device. The rotation and displacement required when the coordinates coincide with the spatial coordinates of the inertial measurement unit.
  • the spatial coordinates of the second image acquisition device are a three-dimensional coordinate system established by the center point of the second image acquisition device.
  • the spatial coordinates of the inertial measurement unit are a three-dimensional coordinate system established at the center point of the inertial measurement unit; the spatial coordinates are not limited to being established at a center point.
  • Step 834 Use the first rigid body relationship and the second rigid body relationship of the first image acquisition device and the inertial measurement unit to perform coordinate conversion on the second information at the second moment to obtain the position and attitude information of the inertial measurement unit relative to the marker.
  • the terminal device can obtain the third rigid body relationship between the first image acquisition device and the second image acquisition device according to the first rigid body relationship between the first image acquisition device and the inertial measurement unit and the second rigid body relationship between the second image acquisition device and the inertial measurement unit.
  • when the first image acquisition device acquires an image containing a marker, the relative position and attitude information between the first image acquisition device and the marker can be obtained from the image; coordinate conversion can then be performed on this relative position and posture information using the third rigid body relationship to obtain the relative position and posture information between the second image acquisition device and the marker, which can be used as the initial position and attitude information of the second image acquisition device relative to the marker.
  • when the second image acquisition device collects a scene image of the target scene, the position and posture information of the second image acquisition device in the target scene can be obtained from the scene image; based on the initial position and posture information of the second image acquisition device relative to the marker, the position and posture information of the second image acquisition device in the target scene can be converted into the relative position and posture information between the second image acquisition device and the marker, and the relative relationship between the inertial measurement unit and the marker at the second time can then be obtained according to the second rigid body relationship; a composition sketch is given below.
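  • a minimal sketch of composing the first and second rigid body relationships into the third rigid body relationship and using it to map a camera-to-marker pose from one image acquisition device to the other; 4x4 homogeneous transforms with illustrative values, not the patent's notation.

```python
import numpy as np

def pose(t):
    """4x4 homogeneous transform with identity rotation and translation t."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

T_imu_cam1 = pose([0.05, 0.00, 0.02])    # first rigid body relationship (camera 1 in IMU frame)
T_imu_cam2 = pose([-0.05, 0.00, 0.02])   # second rigid body relationship (camera 2 in IMU frame)

# Third rigid body relationship: second camera expressed in the first camera's frame.
T_cam1_cam2 = np.linalg.inv(T_imu_cam1) @ T_imu_cam2

# Convert a camera-1-to-marker pose into a camera-2-to-marker pose.
T_cam1_marker = pose([1.0, 0.0, 2.0])
T_cam2_marker = np.linalg.inv(T_cam1_cam2) @ T_cam1_marker
print(T_cam2_marker[:3, 3])
```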
  • Step 836 Use the position and attitude information of the inertial measurement unit relative to the marker to update the prediction information at the second time to obtain the second prediction information.
  • The terminal device can obtain the position and attitude information of the inertial measurement unit relative to the marker at the second time according to the first rigid body relationship, the second rigid body relationship, and the second information at the second time, and use this position and attitude information to update the prediction information at the second time, obtaining the second prediction information.
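  • The embodiment does not fix a particular update rule; the description elsewhere mentions an offset-based correction or a weighted combination as possible ways to update the prediction information. The sketch below shows a simple weighted blend of a predicted position with a measured one as one possible realization; the weight value is an assumption.

```python
import numpy as np

def blend_position(predicted, measured, measurement_weight=0.8):
    """Weighted update of a predicted 3-D position with a measured one.

    measurement_weight is illustrative only; in practice it would be tuned
    (or replaced by a proper filter) rather than hard-coded.
    """
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return measurement_weight * measured + (1.0 - measurement_weight) * predicted

# Example: the prediction drifted slightly; the measurement pulls it back.
updated = blend_position([1.02, 0.48, 0.31], [1.00, 0.50, 0.30])
```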
  • In some implementations, the terminal device may update the second rigid body relationship between the second image acquisition device and the inertial measurement unit through steps (a) to (c).
  • Step (a): The terminal device may use the second rigid body relationship between the second image acquisition device and the inertial measurement unit to perform coordinate conversion on the updated prediction information at the second time, and recalculate the position and attitude information of the second image acquisition device within the target scene at the second time, obtaining second attitude prediction information.
  • Step (b): The terminal device may obtain the error between the second attitude prediction information at the second time and the actually determined position and attitude information of the second image acquisition device within the target scene (i.e., the second information). In some embodiments, the difference between the second information at the second time and the second attitude prediction information may be calculated and its absolute value taken to obtain this error.
  • Step (c): The error between the second information at the second time and the second attitude prediction information is the error between the actual value and the predicted value of the position and attitude information of the second image acquisition device in the target scene; this error can be used to update the second rigid body relationship, improving the accuracy of positioning and tracking.
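  • One straightforward way to quantify such an error between a predicted pose and the actually determined pose is a translation distance plus a rotation angle, as in the sketch below; this metric is an assumption for illustration, not necessarily the exact error measure used by the embodiment.

```python
import numpy as np

def pose_error(T_predicted, T_measured):
    """Translation error (Euclidean distance) and rotation error (angle in radians)
    between two poses given as 4x4 homogeneous transforms."""
    translation_error = np.linalg.norm(T_measured[:3, 3] - T_predicted[:3, 3])
    R_rel = T_predicted[:3, :3].T @ T_measured[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)  # guard against numeric noise
    rotation_error = np.arccos(cos_angle)
    return translation_error, rotation_error
```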
  • Step 840 Use the predicted information at the current time as the target information.
  • the terminal device may use the prediction information of the current time obtained by the inertial measurement unit as the position and attitude information of the terminal device relative to the marker at the current time, that is, target information, and the prediction information at different times may be used as the target information at the corresponding time.
  • This embodiment provides a specific process for obtaining the target information in the positioning and tracking method, as illustrated in FIG. 11.
  • In FIG. 11, IMU denotes the prediction information of the terminal device relative to the marker obtained by the inertial measurement unit, tag denotes the position and attitude information obtained from the marker image, and VIO denotes the position and attitude information obtained through the VIO algorithm.
  • a1, a2, a3, and a4 are the prediction information of the inertial measurement unit at times T1, T2, T3, and T4, respectively; the prediction information at each later time can be obtained by integrating the prediction information at the previous time, where the integration refers to the integration of the acceleration and attitude angle measured by the inertial measurement unit.
  • The first image acquisition device acquires an image containing the marker at time T1 and obtains the first information; according to the rigid body relationship between the first image acquisition device and the inertial measurement unit, the first information can be converted into the position and attitude information b1 of the inertial measurement unit relative to the marker at time T1, and b1 is used to update the prediction information of the inertial measurement unit at time T1 to obtain a1'.
  • The inertial measurement unit can use the updated a1' at time T1 to re-perform integral prediction for each time after T1, obtaining prediction information a2' at time T2, a3' at time T3, and a4' at time T4.
  • The second image acquisition device acquires the second image of the target scene at time T2 and obtains the second information; using the second rigid body relationship between the second image acquisition device and the inertial measurement unit, the second information can be converted into the position and attitude information c1 of the inertial measurement unit relative to the marker at time T2, and the prediction information a2' at time T2 is updated according to c1 to obtain a2^.
  • The updated a2^ at time T2 can be used to re-perform integral prediction for each time after T2, obtaining prediction information a3^ at time T3 and a4^ at time T4.
  • The latest prediction information of the inertial measurement unit at each time can then be used as the position and attitude information of the terminal device relative to the marker at the corresponding time.
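  • The toy sketch below mirrors that timeline in one dimension: the state is propagated by integration between times, replaced by a corrected value when a marker-based (T1) or scene-based (T2) measurement arrives, and re-integrated afterwards. The constant time step, the placeholder accelerations, and the way the correction is formed are all assumptions made for illustration.

```python
def integrate(state, accel, dt):
    """Propagate a 1-D (position, velocity) state under constant acceleration."""
    pos, vel = state
    return pos + vel * dt + 0.5 * accel * dt * dt, vel + accel * dt

dt = 0.01
accels = {1: 0.2, 2: 0.1, 3: 0.0, 4: -0.1}   # placeholder IMU readings at T1..T4

# Pure integration gives a1..a4.
state, predictions = (0.0, 0.0), {}
for k in (1, 2, 3, 4):
    state = integrate(state, accels[k], dt)
    predictions[k] = state

# A marker-based correction b1 at T1 replaces the T1 prediction (-> a1'),
# and T2..T4 are re-integrated from the corrected state (-> a2', a3', a4').
b1 = (0.95 * predictions[1][0], predictions[1][1])   # stand-in for the converted first information
state = b1
for k in (2, 3, 4):
    state = integrate(state, accels[k], dt)
    predictions[k] = state
```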
  • The positioning and tracking method provided in the foregoing embodiment updates the prediction information at the target time points by introducing the first rigid body relationship and the second rigid body relationship of the first and second image acquisition devices relative to the inertial measurement unit, further ensuring the accuracy of the position and attitude information of the terminal device relative to the marker.
  • a positioning and tracking method is provided and applied to a terminal device.
  • the terminal device further includes a microprocessor and a processor.
  • the first image acquisition device is connected to the microprocessor.
  • the second image acquisition device is connected to the processor.
  • the method includes steps 1210 to 1260.
  • Step 1210 Obtain the relative position and attitude information between the first image acquisition device and the marker according to the first image including the marker collected by the first image acquisition device, and obtain the first information.
  • Step 1220 Obtain the position and attitude information of the second image acquisition device in the target scene according to the second image containing the target scene collected by the second image acquisition device, to obtain the second information, where the marker and the terminal device are located within the target scene.
  • Step 1230 Use the inertial measurement unit to obtain the predicted position and attitude information of the terminal device relative to the marker at different times, and obtain the predicted information at different times.
  • Step 1240 When the first information at the first time is obtained, the prediction information at the first time is updated using the first information to obtain the first prediction information, so as to re-obtain the prediction information after the first time.
  • step 1240 includes steps 1242 to 1248.
  • Step 1242 Obtain multiple interruption times by the processor.
  • the interruption time is the time when the first image acquisition device sends an interruption signal to the processor.
  • the connection relationship between the first image acquisition device, the second image acquisition device, the microprocessor, and the processor in the terminal device is shown in FIG. 14.
  • the first image acquisition device 401 is connected to the microprocessor.
  • The second image acquisition device 402 is connected to the processor, and the inertial measurement unit is also connected to the processor. Since the processor and the microprocessor are two independent pieces of hardware, each with its own clock system, the data of the processor and the microprocessor must be time-synchronized to ensure that the first information at the first time can be used to update the prediction information at the first time.
  • Each time the first image acquisition device acquires a first image containing the marker, it can send an interrupt signal to the processor, for example a GPIO (general-purpose input/output) interrupt signal, and the processor can record and store the time at which each interrupt signal is received. Because the delay between the time the processor receives the interrupt signal and the time the first image acquisition device sends it is small, it can be ignored, so the time at which the processor receives the interrupt signal can be taken as the time at which the first image acquisition device sends the interrupt signal, i.e., the interrupt time.
  • Acquiring images containing the marker is a process of continuously acquiring multiple frames; multiple exposures occur, and each exposure generates an interrupt, that is, one interrupt per frame, so the processor can obtain multiple interrupt times.
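  • On the processor side, recording the interrupt times can be as simple as appending a timestamp for every exposure interrupt, as in the hypothetical sketch below; the clock source and buffer size are assumptions.

```python
import time
from collections import deque

interrupt_times_ms = deque(maxlen=64)   # bounded history of recent interrupt times

def on_exposure_interrupt():
    """Hypothetical handler called once per frame exposure (e.g., on a GPIO edge)."""
    interrupt_times_ms.append(time.monotonic() * 1000.0)
```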
  • Step 1244 The receiving time is obtained by the processor, and the receiving time is the time when the processor receives the first image sent by the microprocessor.
  • The first image acquisition device acquires the first image, and the first image can be processed by the microprocessor, for example through imaging processing. The microprocessor can send the processed first image to the processor, and the processor can record the time at which each frame of the first image is received, that is, the receiving time of the first image.
  • Step 1246 Determine the first time by using the receiving time and the multiple interrupting times.
  • There is a certain delay ΔT in transmitting the first image from the first image acquisition device to the processor; the delay ΔT may include the processing time t1 and the transmission time t2 of the first image, i.e., ΔT = t1 + t2.
  • The processing time t1 refers to the time consumed by the microprocessor to process the first image; in one embodiment, t1 is related to the frame rate of the image sensor of the first image acquisition device, and the higher the frame rate, the shorter the processing time t1.
  • The transmission time t2 refers to the time required to transmit the first image from the microprocessor to the processor.
  • The processor may obtain the theoretical exposure time of the first image from the receiving time and the delay; the theoretical exposure time Ta may be the receiving time Tb minus the delay ΔT, i.e., Ta = Tb - ΔT.
  • The processor may store multiple interrupt times at which the first image acquisition device sent interrupt signals; it can calculate the difference between the theoretical exposure time and each interrupt time, determine whether each difference is smaller than a preset threshold, and take the interrupt time whose difference is smaller than the preset threshold as the time at which the first image acquisition device acquired the first image.
  • For example, the processor stores interrupt times Tc1, Tc2, Tc3, Tc4, ..., and can calculate the differences Δt1, Δt2, Δt3, Δt4, ... between the theoretical exposure time Ta and Tc1, Tc2, Tc3, Tc4, ...; it can then judge whether each difference is smaller than the preset threshold Th, and take the interrupt time whose difference is smaller than Th as the time at which the first image acquisition device acquired the first image.
  • When more than one interrupt time has a difference smaller than the preset threshold, the actual delay may be larger than the theoretical delay, so it can further be judged whether the interrupt time is earlier than the theoretical exposure time, and the interrupt time that is less than or equal to the theoretical exposure time is taken as the acquisition time. For example, if the receiving time Tb is 100 ms and the delay ΔT is 30 ms, the theoretical exposure time Ta is Tb - ΔT = 70 ms; the interrupt times Tc1, Tc2, Tc3, Tc4, and Tc5 recorded by the processor are 20 ms, 40 ms, 60 ms, 80 ms, and 100 ms, whose differences from Ta are 50 ms, 30 ms, 10 ms, 10 ms, and 30 ms; with a preset threshold Th = 15 ms, the candidates are Tc3 and Tc4, and comparing them with Ta and selecting the one that is less than or equal to Ta gives Tc3, so Tc3 = 60 ms is taken as the time at which the first image acquisition device acquired the first image, i.e., the first time is 60 ms.
  • After the processor obtains the time at which the first image acquisition device acquired the first image, it can obtain the prediction information corresponding to that time in order to update it.
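  • The sketch below implements the matching rule just described (Ta = Tb - ΔT, keep interrupt times whose difference from Ta is below the threshold, and prefer those not later than Ta); the function name is an assumption, and the sample numbers mirror the worked example above.

```python
def match_exposure_time(receive_ms, delay_ms, interrupt_times_ms, threshold_ms=15.0):
    """Pick the interrupt time corresponding to a received frame's exposure."""
    theoretical_ms = receive_ms - delay_ms                     # Ta = Tb - dT
    candidates = [tc for tc in interrupt_times_ms
                  if abs(theoretical_ms - tc) < threshold_ms]
    if not candidates:
        return None
    # The real delay tends to exceed the theoretical one, so prefer interrupts
    # at or before the theoretical exposure time.
    not_later = [tc for tc in candidates if tc <= theoretical_ms]
    pool = not_later or candidates
    return min(pool, key=lambda tc: abs(theoretical_ms - tc))

# Mirrors the example: Tb = 100 ms, dT = 30 ms, interrupts every 20 ms -> 60 ms.
first_time = match_exposure_time(100.0, 30.0, [20.0, 40.0, 60.0, 80.0, 100.0])
```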
  • Step 1248 Obtain prediction information at the first time, and use the first information at the first time to update the prediction information at the first time.
  • Step 1250 When the second information at the second time is obtained, use the second information to update the prediction information at the second time to obtain the second prediction information, and re-obtain the prediction information after the second time based on the second prediction information.
  • Step 1260 Use the predicted information at the current time as the target information.
  • The processor may also periodically send a time synchronization instruction to the microprocessor. The time synchronization instruction includes the clock time of the processor and is used to instruct the microprocessor to adjust its own clock time according to the clock time of the processor, so that the clocks of the processor and the microprocessor remain synchronized.
  • After the microprocessor receives the time synchronization instruction, it can calculate the time error between itself and the processor according to its current clock time, the processor's clock time, and the signal transmission delay between the processor and the microprocessor, and adjust its current clock time according to this error so that the clocks of the processor and the microprocessor remain synchronized.
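  • A minimal sketch of that adjustment, assuming the processor's clock value was sampled when the synchronization instruction was sent, so that it is one link delay old when it arrives; the function name and the millisecond units are assumptions.

```python
def synchronize_clock(mcu_clock_ms, processor_clock_ms, link_delay_ms):
    """Return the corrected microprocessor clock time and the estimated error."""
    expected_ms = processor_clock_ms + link_delay_ms   # where the MCU clock should be now
    time_error_ms = mcu_clock_ms - expected_ms
    return mcu_clock_ms - time_error_ms, time_error_ms

corrected_ms, error_ms = synchronize_clock(mcu_clock_ms=5003.0,
                                           processor_clock_ms=5000.0,
                                           link_delay_ms=1.0)
# corrected_ms == 5001.0, error_ms == 2.0
```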
  • the positioning and tracking method provided in the foregoing embodiment can implement data synchronization between the microprocessor and the processor to ensure the accuracy of the positioning and tracking results.
  • the present application provides a computer-readable storage medium.
  • the computer-readable medium stores program code, and the program code can be called by a processor to execute the method described in the foregoing embodiment.
  • the computer-readable storage medium may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM.
  • Optionally, the computer-readable storage medium includes a non-transitory computer-readable storage medium.
  • The computer-readable storage medium has storage space for program code that performs any of the method steps of the above methods; the program code can be read from, or written into, one or more computer program products.
  • The program code may, for example, be compressed in a suitable form.

Abstract

一种定位跟踪方法,根据采集的包含标记物的图像获取第一空间位置信息;根据采集的场景图像获取第二空间位置信息,并根据第一空间位置信息及第二空间位置信息中的至少一种对终端设备进行定位追踪。

Description

定位跟踪方法、终端设备及计算机可读取存储介质 技术领域
本申请涉及虚拟现实技术领域,更具体地,涉及一种定位跟踪方法、终端设备及计算机可读取存储介质。
背景技术
随着虚拟现实(Virtual Reality,VR)、增强现实(Augmented Reality,AR)技术的发展,以增强现实为例,增强现实是通过计算机系统提供的信息增加用户对现实世界感知的技术,其将计算机生成的虚拟物体、场景或系统提示信息叠加到真实场景中,来增强或修改对现实世界环境或表示现实世界环境的数据的感知。因此,如何准确有效的对显示装置(例如头戴显示装置、智能眼镜、智能手机等)进行定位跟踪是亟待解决的问题。
发明内容
本申请提出了一种定位跟踪方法、终端设备及计算机可读取存储介质。
第一方面,本申请实施例提供了一种定位跟踪方法,应用于终端设备,所述终端设备包括第一图像采集装置和第二图像采集装置,所述方法包括:根据所述第一图像采集装置采集的包含有标记物的第一图像,获取所述第一图像采集装置与所述标记物之间的相对位置及姿态信息,得到第一信息;根据所述第二图像采集装置采集的包含有目标场景的第二图像,获取所述第二图像采集装置在所述目标场景内的位置及姿态信息,得到第二信息,其中,所述标记物和终端设备位于所述目标场景内;及利用所述第一信息和所述第二信息获取所述终端设备相对所述标记物的位置及姿态信息,得到目标信息。
第二方面,本申请实施例提供了一种终端设备,包括:第一图像采集装置,用于采集的包含有标记物的第一图像;第二图像采集装置,用于采集的包含有目标场景的第二图像;存储器,存储有一个或多个计算机程序;一个或多个处理器;所述计算机程序被所述处理器执行时,使得所述处理器执行如下步骤:根据所述第一图像,获取所述第一图像采集装置与所述标记物之间的相对位置及姿态信息,得到第一信息;根据所述第二图像,获取所述第二图像采集装置在所述目标场景内的位置及姿态信息,得到第二信息,其中,所述标记物和终端设备位于所述目标场景内;及利用所述第一信息和所述第二信息获取所述终端设备相对所述标记物的位置及姿态信息,得到目标信息。
第三方面,本申请实施例提供了一种定位跟踪方法,所述方法包括:采集包含标记物的图像;识别所述图像中的标记物,并获取第一空间位置信息;获取终端设备的位姿变化信息,所述位姿变化信息包括所述终端设备的位置变化信息和姿态变化信息;根据所述位姿变化信息获取所述终端设备的第二空间位置信息;及基于所述第一空间位置信息和/或所述第二空间位置信息,获取所述终端设备的当前位置信息。
第四方法,本申请实施例提供了一种终端设备,包括存储器及处理器,所述存储器中存储有计算机程序,所述计算机程序被所述处理器执行时,使得所述处理器执行如下步骤:采集包含标记物 的图像;识别所述图像中的标记物,并获取第一空间位置信息;获取终端设备的位姿变化信息,所述位姿变化信息包括所述终端设备的位置变化信息和姿态变化信息;根据所述位姿变化信息获取所述终端设备的第二空间位置信息;及基于所述第一空间位置信息和/或所述第二空间位置信息,获取所述终端设备的当前位置信息。
第五方面,本申请实施例提供了一种计算机存储介质,其上存储有计算机程序,所述计算机程序被处理器运行时,使得所述处理器执行上述实施例中的方法。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为一个实施例中的定位跟踪系统示意图;
图2为一个实施例中终端设备的框图;
图3为一个实施例中定位跟踪方法的流程图;
图4为一个实施例中定位跟踪方法的应用场景图;
图5为另一个实施例中定位跟踪方法的流程图;
图6为又一个实施例中定位跟踪方法的流程图;
图7为一个实施例中终端设备、标记物以及目标场景的关系示意图;
图8为一个实施例中获取终端设备相对标记物的位置及姿态信息的流程图;
图9为一个实施例中根据第一时刻的第一信息对预测信息进行更新的流程图;
图10为一个实施例中根据第二时刻的第二信息对预测信息进行更新的流程图;
图11为一个实施例中获取终端设备相对标记物的位置及姿态信息的示例图;
图12为又一个实施例中定位跟踪方法的流程图;
图13为一个实施例中确定第一时刻的流程图;
图14为一个实施例中终端设备的部件连接图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
请参图1,本申请实施例提供的一种定位跟踪系统10,包括终端设备100以及标记物200。其中,终端设备100可以是头戴显示装置,也可以是手机、平板等移动设备。终端设备100为头戴显示装置时,头戴显示装置可以为一体式头戴显示装置,也可以是连接有外置电子装置的头戴显示装置。终端设备100还可以是与外接式/插入式头戴显示装置连接的手机等智能终端,即终端设备100可作为头戴显示装置的处 理和存储设备,插入或者接入外接式头戴显示装置,以在头戴显示装置中显示虚拟内容300。
请参图2,终端设备100为头戴显示装置,包括处理器110、存储器120,其中,存储器存储有一个或多个应用程序,可被配置为由一个或多个处理器110执行,一个或多个程序用于执行本申请所描述的方法。
处理器110可包括一个或者多个处理核。处理器110利用各种接口和线路连接整个终端设备100内的各个部分,通过运行或执行存储在存储器120内的指令、程序、代码集或指令集,以及调用存储在存储器120内的数据,执行电子设备100的各种功能和处理数据。处理器110可采用数字信号处理、现场可编程门阵列、可编程逻辑阵列中的至少一种硬件形式来实现。处理器110可集成中央处理器、图像处理器和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。其中,上述调制解调器也可不集成到处理器110中,单独通过一块通信芯片进行实现。
存储器120可包括随机存储器,也可包括只读存储器。存储器120可用于存储指令、程序、代码、代码集或指令集。存储器120可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现下述各个方法实施例的指令等。存储数据区还可存储电子设备100在使用中所创建的数据等。
在一些实施例中,终端设备100还可包括相机130,用于采集现实物体的图像以及采集目标场景的场景图像。相机130可为红外相机,也可为可见光相机,具体类型并不限定。
在一个实施例中,终端设备100还可包括如下一个或多个部件:显示模组、光学模组、通信模块以及电源。显示模组可包括显示控制单元,显示控制单元用于接收处理器渲染后的虚拟内容的显示图像,将该显示图像显示并投射至光学模组上,使用户能够通过光学模组观看到虚拟内容。其中,显示模组可以是显示屏或投射装置等,用于显示图像。光学模组可采用离轴光学系统或波导光学系统,显示模组显示的显示图像经光学模组后,能够被投射至用户的眼睛。用户通过光学模组可看到显示模组投射的显示图像。在一些实施方式中,用户还能够透过光学模组观察到现实环境,感受虚拟内容与现实环境叠加后的视觉效果。通信模块可是蓝牙、WiFi、ZigBee等模块,终端设备可通过通信模块与交互设备通信连接,以进行信息以及指令的交互。电源可为整个终端设备进行供电,保证终端设备各个部件的正常运行。
在一些实施例中,标记物可以是任意具有可识别特征标记的图形或物体,例如,标记物可以是具有拓扑结构的图案,拓扑结构是指标记物中的子标记物和特征点等之间连通关系,但不限于此。作为一种实施方式,标记物外可设置有红外滤光片,标记物对用户不可见,相机可以为红外相机,通过发射红外光线采集标记物的图像,以降低可见光对标记物图像的影响,提高定位追踪的准确性。
当标记物200在终端设备100的相机的视觉范围内时,终端设备100的相机可采集到包含有标记物200的标记物图像。终端设备100的处理器获取到标记物图像及相关信息,对标记物图像中包含的标记物200进行识别跟踪,获取到该标记物200的身份信息,及终端设备100相对标记物200之间的位置与姿态关系。
请参图3,本申请一个实施例提供一种定位跟踪方法,应用于终端设备100,包括以下步骤:
步骤310:采集包含标记物的图像。
步骤320:识别图像中的标记物,并获取第一空间位置信息。
终端设备通过相机采集包含标记物的图像,对该图像进行识别跟踪,获取终端设备相对于该标记物的位置和姿态信息,进而获取终端设备在现实场景中所在的位置及姿态信息,即第一空间位置信息。
在一些实施方式中,多个标记物可离散设置于现实场景中的多个位置,其中一个标记物可作为目标标记物设置于该现实场景的入口附近(比如房间的门口,或是区域的入口附近),即用户进入该场景中的起始位置附近。终端设备可通过相机采集包含该目标标记物的图像,以对终端设备在现实场景中所在的位置进行初始定位,此时根据该目标标记物确定的终端设备的位置即是终端设备在现实场景中的初始位置。
步骤330:获取终端设备的位姿变化信息。
在一些实施例中,用户在现实场景中移动时,由于现实场景中并非在所有区域都设置有标记物,当终端设备的相机无法采集到包含标记物的图像时,可通过获取终端设备相对于初始位姿的位姿变化信息,推算出终端设备当前的位置。
作为一种方式,终端设备可通过视觉惯性里程计(Visual-Inertial Odometry,VIO)实时获取终端设备在现实场景中的位置及姿态信息,即第二空间位置信息。终端设备可通过相机实时采集场景图像,根据场景图像中包含的关键点(或特征点)计算得到终端设备的在现实场景中的位置及姿态信息。用户在进入现实场景中时,可先通过终端设备检测到设置于现实环境的入口附近的目标标记物进行定位,并获取与该目标标记物对应的第一空间位置信息,该第一空间位置信息可以作为后续通过VIO计算终端设备的位置的基准;当用户继续在现实环境中移动时,可通过VIO实时获取终端设备相对于目标标记物对应的第一空间位置信息的位姿变化信息,即可实时计算出终端设备即用户当前在现实场景中所在的位置。
步骤340:根据位姿变化信息获取终端设备的第二空间位置信息。
步骤350:基于第一空间位置信息和/或第二空间位置信息,获取终端设备的当前位置信息。
作为一种方式,当终端设备仅获取第一空间位置信息和第二空间位置信息中的一种时,可将获取的位置信息直接作为终端设备的当前位置信息;终端设备也可结合第一空间位置信息及第二空间位置信息,得到终端设备的当前位置信息。
例如,终端设备在根据包含标记物的图像获得第一空间位置信息时,可直接将该第一空间位置信息作为终端设备的当前位置信息;当终端设备的相机没有采集到包含标记物的图像,仅根据VIO获取了第二空间位置信息,可直接将第二空间位置信息作为终端设备的当前位置信息;若终端设备既通过相机采集到包含标记物的图像,获取了第一空间位置信息,且利用VIO获取了第二空间位置信息,假设标记物定位相对于VIO定位更加精确,则可选择更为精确的第一空间位置信息作为终端设备的当前位置信息,也可融合获取的第一空间位置信息和第二空间位置信息,比如进行加权计算等,得到终端设备的当前位置信息。
如图4所示,在VR/AR博物馆场景中,由于标记物M的数量以及位置一般较为固定,且场馆的空间较大,标记物的分布较为分散,终端设备识别跟踪到标记物时(比如用户A或用户B所在的位置),可显示该标记物对应的虚拟图像,并根据该标记物对确定终端设备在场馆中的位置及姿态信息;当用户佩戴终端设备在博物馆中移动时,终端设备的相机有时候无法采集到标记物的图像(如用户C所在的位置),终端设备可通过VIO实时获取其相对于最新获取的第一空间位置信息的位姿变化信息,从而确定终端设备在场馆中的位置及姿态信息。
作为一种方式,当终端设备的相机无法采集到标记物的图像时,可根据利用VIO定位得到的当前位置信息获取关联的虚拟图像,对虚拟图像进行渲染并显示,实现终端设备在没有检测到标记物时,也可显示与位置及姿态相关联的虚拟图像。
上述的举例只是本申请提供的定位跟踪方法的部分实际应用,随着VR/AR技术的进一步发展与普及,本申请提供的定位跟踪方法可在更多的实际应用场景中发挥作用。
上述实施例提供的定位跟踪方法,可以通过标记物追踪确定用户的位置,并在无法检测到标记物时结合终端设备的位姿变化信息计算获取用户的位置,将基于标记物定位与VIO定位进行结合,可以实时准确获取用户位置,提高了室内定位跟踪的准确性。
请参图5,本申请另一实施例提供的一种定位跟踪方法,包括以下步骤:
步骤510:采集包含标记物的图像。
步骤520:识别图像中的标记物,并获取第一空间位置信息。
步骤522:识别图像中的标记物,并获取标记物的身份信息。
终端设备可对采集的包含标记物的图像进行识别,获取该标记物对应的身份信息,不同标记物对应不同的身份信息,该身份信息可用数字、字母、符号等中的一种或多种进行表示,标记物的身份信息可根据该标记物的图案颜色、形状、拓扑结构等特征的进行设定,但不限于此。
步骤524:基于身份信息,获取标记物在预存地图中的标记位置信息。
每个标记物对应一个身份信息,且该标记物在预存储地图中的位置信息对应。在获取标记物的身份信息后,可根据该身份信息获得与该标记物在预存储地图中的标记位置信息。作为一种方式,预存储地图可以是预先根据现实环境建立并存储的虚拟地图,标记物在预存地图中的位置是指该标记物在现实环境中的实际位置。
在一些实施例中,终端设备本地或服务器上可存储有多个标记物的身份信息,以及与每个标记物对应的标记位置信息,根据获取的标记物的身份信息在本地或服务器上可查找该标记物的位置。
步骤526:获取终端设备与标记物的相对关系信息。
终端设备根据采集的包含标记物的图像可获取终端设备与标记物的相对关系信息,其中,相对关系信息包括终端设备与标记物之间的相对位置和姿态信息。
步骤528:基于标记位置信息和相对关系信息,获取在终端设备在预存地图中的第一空间位置信息。
终端设备可结合标记物在预存储地图中的位置信息,及终端设备与该标记物的相对关系信息,确定终端设备在当前所处的现实场景中的位置及姿态信息,即获取第一空间位置信息。
在一个实施例中,在步骤522后,还包括步骤530和步骤540。
步骤530:根据身份信息判断标记物是否为目标标记物;
步骤540:当标记物为目标标记物时,基于目标标记物构建与预存地图匹配的虚拟场景。
终端设备获取标记物的身份信息后,可对该标记物的身份信息进行检测,判断其是否为目标标记物。当该标记物为目标标记物,可基于该目标标记物构建与预存地图对应的虚拟场景,并进行显示。
作为一种方式,可在AR/VR环境的不同现实场景的分界处放置不同的目标标记物。例如,在一个多主题的AR/VR博物馆中,具有海洋、草原、星空等多个展览场景,可在每个现实场景的入口分别设置与现实场景对应的目标标记物,当终端设备采集到以海洋为主题的现实场景入口的目标标记物后,可基于该目标标记物构建海洋相关的虚拟场景;当用户从海洋主题场景移动到星空主题场景时,终端设备采集到以星空为主题的现实场景入口的目标标记物后,可基于该目标标记物构建星空相关的虚拟场景并替换掉先前的海洋相关的虚拟场景,并通过终端设备的显示模组将相关的虚拟场景展示给用户。
在一个实施例中,在步骤520后,还包括步骤550。
步骤550:获取与标记物对应的虚拟对象数据,并根据虚拟对象数据显示与标记物对应的虚拟对象。
每个标记物的身份信息与其绑定的虚拟对象一一对应。终端设备获取标记物的身份信息,可根据该身份信息获得与该标记物对应的虚拟对象数据,根据该虚拟对象数据和终端设备相对标记物的位置及姿态信息生成并显示虚拟对象。作为一种方式,标记物对应的虚拟对象可单独进行显示,也可与前述的虚拟场景结合进行显示。例如,在AR/VR博物馆中,展览品旁设置有标记物,终端设备通过采集标记物的图像,可展示与展览品相关的虚拟对象,比如该展览品的文字介绍、相关的虚拟动画、虚拟的小游戏等,具体不限于此。用户通过终端设备可看到虚拟对象与现实场景叠加显示,增强互动感。
步骤560:获取终端设备的位姿变化信息。
在一些实施例中,步骤560可包括步骤562、步骤564和步骤566。
步骤562:获取包含关键点的场景图像。
关键点可以是图像中具有明显特征的点,例如图像中物体的边缘、角点等可用于表现现实环境中某个点所在位置的特征点。作为一种方式,场景图像可包含在某一时间段内拍摄的多帧现实场景图像,每帧现实场景图像中可含有多个用于定位的关键点。
步骤564:提取当前图像中的关键点的描述向量。
终端设备可通过提取相邻两帧图像中的同一关键点在两幅图像中的位置,即可获得该关键点从前一帧图像的上一位置到相邻下一帧图像的下一位置的描述向量。
步骤566:基于描述向量,获得终端设备的位姿变化信息。
终端设备在提取关键点的描述向量后,可获取相邻两帧图像拍摄的时间间隔,以及该描述向量的模长与方向,计算出该关键点相对于终端设备的摄像头在拍摄时间间隔内的空间位移,得到终端 设备的位置变化信息。终端设备中还可设置在IMU(Inertial measurement unit,惯性测量单元),并通过IMU实时获取终端设备的姿态变化信息。
步骤570:根据位姿变化信息获取终端设备的第二空间位置信息。
步骤580:基于第一空间位置信息和/或第二空间位置信息,获取终端设备的当前位置信息。
在一个实施例中,在步骤580后,还可包括步骤590。
步骤590:根据当前位置信息生成虚拟画面。
终端设备可生成并显示与当前在现实场景中的空间位置和姿态信息对应的虚拟画面。例如,在VR/AR博物馆内,可在终端设备显示与当前的位置及姿态信息对应的动态的路标指示线,以引导用户寻找下一个标记物(虚拟展品)。作为一种方式,可预先将终端设备在现实场景中的位置信息与虚拟画面进行关联,并将关联信息存储在终端设备的本地或云端。
在一个实施例中,在步骤580后,还可包括步骤5110、5120、5130和5140。
步骤5110:采集包含新标记物的图像。
步骤5120:识别新标记物,并获取新标记物在预存地图中的位置信息。
步骤5130:根据新标记物的位置信息,重新计算终端设备在预存地图中的第一空间位置信息。
步骤5140:基于重新计算的第一空间位置信息,校准终端设备的位姿变化信息。
在一些情况下,VIO在持续测量自身位姿变化信息的过程中可能出现偏差,终端设备可通过采集到的新标记物计算出的位置信息,校准VIO的位姿变化信息,使得通过VIO得到的位姿变化信息以该新标记物作为基准重新开始计算,提高跟踪精度。在其他的实施方式中,可先获取新标记物的位置信息,再计算出新标记物与现实场景中的目标标记物之间的相对位置及姿态关系,以校准位姿变化信息,即VIO的位姿变化信息在校准后还是以目标标记物为基准。
在一些实施例中,当终端设备采集到新的目标标记物时,可直接将VIO获取的位姿变化信息清零,并以该新的目标标记物为基准重新进行计算;作为另一种方式,还可获取终端设备相对于新的目标标记物的位置和姿态信息,以及该新的目标标记物和初始目标标记物(终端设备首次识别出的目标标记物,一般设置在场馆的入口处)的相对位置和姿态,计算出终端设备相对于初始目标标记物的位置和姿态信息,并对VIO获取的位姿变化信息进行校准。相对于前一种直接清零的方式,后一种方式的VIO具有一个响应曲线,会依据标记物的数据,逐渐对VIO的位姿变化信息进行校准,而不会在终端设备显示的画面上产生突变,使用户具有较好的视觉体验。
上述实施例提供的定位跟踪方法,通过终端设备的当前位置变化关联显示动态的虚拟图像,增强了沉浸感,且能够在采集到新标记物时校准当前的位姿变化信息,进一步提升定位的精确度。
请参图6,本申请又一个实施例中提供了一种定位跟踪方法,应用于终端设备,该方法可包括步骤610至步骤630。
步骤610:根据第一图像采集装置采集的包含有标记物的第一图像,获取第一图像采集装置与标记物之间的相对位置及姿态信息,得到第一信息。
终端设备上设置有用于采集标记物的图像的第一图像采集装置,根据该图像可确定第一图像采集装置相对标记物的位置及姿态信息,即第一图像采集装置相对标记物的六自由度信息,包括三个 平移自由度和三个旋转自由度,三个平移自由度用于描述三维对象X,Y,Z坐标值;三个旋转自由度包括俯仰角(Pitch)、横滚角(Roll)及横向角(Yaw)。
步骤620:根据第二图像采集装置采集的包含有目标场景的第二图像,获取第二图像采集装置在目标场景内的位置及姿态信息,得到第二信息,其中,标记物和终端设备位于目标场景内。
终端设备还设有第二图像采集装置,用于采集目标场景处于视觉范围内的场景图像。在一些实施方式中,终端设备与标记物均处于目标场景中,为了说明终端设备、标记物以及目标场景之间的关系,可参考图7,标记物102与终端设备103均位于目标场景101中,终端设备103的第一图像采集装置用于采集包含标记物102的图像,第二图像采集用于采集目标场景101的图像。
终端设备根据采集的场景图像可获取第二图像采集装置在目标场景内位置及姿态信息,得到第二信息。终端设备可利用VIO计算获得第二信息,通过惯性测量单元获取终端设备的角速度和加速度数据,结合第二图像采集装置采集的场景图像即可获取到第二图像采集装置在目标场景内的位置及姿态信息。
步骤630:利用第一信息和第二信息获取终端设备相对标记物的位置及姿态信息,得到目标信息。
终端设备根据第一图像采集装置相对标记物的位置及姿态信息,和第二图像采集装置在目标场景内的位置及姿态信息,可获取终端设备相对标记物的位置及姿态信息,即目标信息。在一个实施例中,因为第一图像采集装置和第二图像采集装置均安装于终端设备上,因此,可将第一图像采集装置与标记物之间的第一信息作为目标信息,也可将第二图像采集装置在目标场景内的第二信息作为目标信息。
为了使获取的目标信息更加准确有效,终端设备可结合第一信息和第二信息综合获取目标信息,例如,将第一信息和第二信息的平均值作为目标信息,或者分配不同的权重对第一信息和第二信息进行加权计算等。在一些实施方式中,终端设备也可通过惯性测量单元获取终端设备相对标记物的位置与姿态信息,利用第一信息和第二信息中的至少一种对该位置及姿态信息进行更新,进而得到目标信息。
上述实施例的定位跟踪的方法,通过第一图像采集装置获取的第一信息和第二图像采集装置获取的第二信息得到终端设备相对标记物的目标信息,使得对终端设备的定位跟踪更加精准。
在一个实施例中,如图8所示,上述利用第一信息和第二信息获取终端设备相对标记物的位置及姿态信息,得到目标信息,可包括如下步骤。
步骤810:利用惯性测量单元获取不同时刻下终端设备相对标记物的预测位置及姿态信息,得到不同时刻的预测信息。
在一些实施方式中,惯性测量单元可利用陀螺仪测量终端设备的三个旋转自由度的角度变化,并利用加速度计测量终端设备三个移动自由度的位移。通过惯性测量单元可对终端设备的位置变化及姿态变化进行累积,以预测在不同时刻下终端设备相对标记物的位置及姿态信息。终端设备利用惯性测量单元获取到前一时刻的预测信息后,可通过积分及前一时刻的预测信息,获取到当前时刻的预测信息,并将当前时刻的预测信息作为当前时刻终端设备相对标记物的位置及姿态信息。
步骤820:当获取到第一时刻的第一信息时,利用第一信息对第一时刻的预测信息进行更新,得 到第一预测信息,以重新获取在第一时刻之后的预测信息。
终端设备根据在第一时刻采集的第一图像获取到第一图像采集装置与标记物之间的相对位置及姿态信息,即获取到第一时刻的第一信息时,可利用该第一信息对第一时刻利用惯性测量单元获取的预测信息进行更新,得到第一预测信息,惯性测量单元可基于第一预测信息重新获取在第一时刻之后各个时刻的预测信息。
在一些实施方式中,当惯性测量单元处于初始状态时,可通过第一图像采集装置采集包含标记物的图像,获取第一图像采集装置与标记物之间的相对位置及姿态信息,利用第一图像采集装置与惯性测量单元之间的第一初始刚体关系对该第一图像采集装置与标记物之间的相对位置及姿态信息进行转化,得到惯性测量单元与标记物之间的相对位置及姿态信息,即终端设备相对标记物的初始位置及姿态信息,也即惯性测量单元的初始预测信息。基于该初始预测信息,惯性测量单元可对不同时刻下终端设备相对标记物的位置及姿态信息进行预测。当惯性测量单元处于初始状态时,第一图像采集装置没有采集到第一图像,惯性测量单元没有获取到终端设备相对所述标记物的初始位置及姿态信息,可一直处于等待状态。
在一些实施方式中,如图9所示,步骤820可包括步骤822至步骤826。
步骤822:获取第一图像采集装置与惯性测量单元的第一刚体关系。
第一图像采集装置与惯性测量单元的第一刚体关系指的是第一图像采集装置与惯性测量单元之间在结构上的摆放关系,该摆放关系可包括第一图像采集装置与惯性测量单元之间的距离、方位等信息,摆放关系可通过实际测量获取,也可利用结构设计值获取,或是通过标定得到。该摆放关系能够反映出第一图像采集装置相对惯性测量单元或者惯性测量单元相对第一图像采集装置的旋转量和平移量,该旋转量和平移量表示将第一图像采集装置的空间坐标与惯性测量单元的空间坐标进行重合时所需的旋转角度和位移位置,其中,第一图像采集装置的空间坐标是以第一图像采集装置的中心点建立的三维坐标系,惯性测量单元的空间坐标是以惯性测量单元的中心点建立的三维坐标系。其中,空间坐标并不限定于以中心点建立。
步骤824:根据第一时刻的第一信息和第一刚体关系获取惯性测量单元相对标记物的位置及姿态信息。
第一图像采集装置和惯性测量单元同时设置于终端设备上,通过第一图像采集装置与惯性测量单元的第一刚体关系可得到第一图像采集装置与标记物之间的映射关系。终端设备可根据第一刚体关系对第一时刻的第一图像采集装置与标记物之间的相对位置及姿态信息进行转化,得到第一时刻的惯性测量单元相对标记物的位置及姿态信息。
步骤826:利用惯性测量单元相对标记物的位置及姿态信息对第一时刻的预测信息进行更新,得到第一预测信息。
终端设备通过第一刚体关系转化得到第一时刻的惯性测量单元相对标记物的位置及姿态信息,可利用该位置及姿态信息对第一时刻的预测信息进行更新。作为一种具体实施方式,可根据第一时刻下惯性测量单元相对标记物的位置及姿态信息与第一时刻的预测信息获取信息更新参数,该信息更新参数可以是第一时刻的惯性测量单元相对标记物的位置及姿态信息与预测信息之间的偏差值, 基于该信息更新参数对第一时刻的预测信息进行更新。作为另一种实施方式,也可将第一时刻的惯性测量单元相对标记物的位置及姿态信息与预测信息进行加权计算,得到更新后的预测信息,加权计算的权重可根据实际设定。
在一些实施方式中,可根据第一时刻的第一预测信息重新获取第一时刻之后的预测信息,惯性测量单元可在第一预测信息的基础上,对终端设备在第一时刻之后的各个时刻的位置及姿态变化进行积分,重新获取第一时刻之后的各个时刻的预测信息。
在一些实施方式中,终端设备还可对第一图像采集装置与惯性测量单元之间的第一刚体关系进行更新、较正,使第一刚体关系更为准确。对第一刚体关系进行更新可以包括步骤(1)至步骤(3)。
步骤(1):利用第一刚体关系和第一预测信息预测第一图像采集装置与标记物之间的相对位置及姿态信息,得到第一姿态预测信息。
在一个实施例中,终端设备可利用第一图像采集装置与惯性测量单元的第一刚体关系对第一时刻的第一预测信息进行坐标转换,重新计算第一图像采集装置与标记物之间的相对位置及姿态信息,得到第一姿态预测信息。
步骤(2):获取第一时刻的第一信息与第一姿态预测信息之间的误差。
终端设备可获取第一时刻的第一姿态预测信息和实际确定的第一图像采集装置与标记物的相对位置及姿态信息(即第一信息)之间的误差。在一些实施例中,可计算第一时刻的第一信息与第一姿态预测信息的差值并取绝对值,得到第一信息与第一姿态预测信息之间的误差。
步骤(3):根据误差对第一刚体关系进行更新。
第一时刻的第一信息与第一姿态预测信息之间的误差主要指的是第一图像采集装置与标记物之间的相对位置及姿态信息的实际值与预测值之间的误差,可利用第一信息与第一姿态预测信息的误差对第一刚体关系进行更新,提高定位追踪的准确性。其中,第一信息与第一姿态预测信息之间的误差越小则表明第一刚体关系越准确,在一个实施例中,可获取第一刚体关系的更新次数,并判断该更新次数是否大于预设次数,当大于预设次数时,可结束第一刚体关系的更新。
步骤830:当获取到第二时刻的第二信息时,利用第二信息对第二时刻的预测信息进行更新,得到第二预测信息以重新获取在第二时刻之后的预测信息。
终端设备根据第二图像采集装置在第二时刻采集的场景图像,得到第二图像采集装置在目标场景内的位置及姿态信息,即第二时刻的第二信息,利用该第二信息对惯性测量单元在第二时刻的预测信息进行更新,可得到第二预测信息。在一些实施方式中,可根据第二时刻的第二预测信息重新获取第二时刻之后的预测信息,惯性测量单元可对第二预测信息进行积分,得到第二时刻之后各个时刻的预测信息。
在一个实施例中,如图10所示,步骤830可包括步骤832至步骤836。
步骤832:获取第二图像采集装置与惯性测量单元的第二刚体关系。
第二图像采集装置与惯性测量单元的第二刚体关系指的是第二图像采集装置与惯性测量单元之间在结构上的摆放关系,该摆放关系可包括第二图像采集装置与惯性测量单元之间的旋转和位移,摆放关系可通过实际测量获取,也可利用结构设计值获取,或是通过标定得到。第二刚体关系反映 出第二图像采集装置相对惯性测量单元或者将惯性测量单元相对第二图像采集装置所需要的旋转量和平移量,该旋转量和平移量表示将第二图像采集装置的空间坐标与惯性测量单元的空间坐标进行重合时所需旋转和位移,其中,第二图像采集装置的空间坐标是以第二图像采集装置的中心点建立的三维坐标系,惯性测量单元的空间坐标是以惯性测量单元的中心点建立的三维坐标系。其中,空间坐标并不限定于以中心点建立。
步骤834:利用第一图像采集装置与惯性测量单元的第一刚体关系和第二刚体关系对第二时刻的第二信息进行坐标转换,得到惯性测量单元相对标记物的位置及姿态信息。
终端设备可根据第一图像采集装置与惯性测量单元之间的第一刚体关系,以及第二图像采集装置与惯性测量单元之间的第二刚体关系,得到第一图像采集装置与第二图像采集装置之间的第三刚体关系。当第一图像采集装置采集到包含有标记物的图像,可根据该图像获取第一图像采集装置与标记物之间的相对位置及姿态信息,利用第三刚体关系可对该第一图像采集装置与标记物之间的相对位置及姿态信息进行坐标转换,得到第二图像采集装置与标记物之间的相对位置及姿态信息,可将该第二图像采集装置与标记物之间的相对位置及姿态信息作为第二图像采集装置相对标记物的初始位置及姿态信息。当第二图像采集装置在第二时刻采集到目标场景的场景图像,可根据该场景图像获取第二图像采集装置在目标场景内的位置及姿态信息,可基于第二图像采集装置相对标记物的初始位置及姿态信息,将该第二图像采集装置在目标场景内的位置及姿态信息转化为第二图像采集装置与标记物之间的相对位置及姿态信息,并根据第二刚体关系得到第二时刻的惯性测量单元与标记物之间的相对关系。
步骤836:利用惯性测量单元相对标记物的位置及姿态信息对第二时刻的预测信息进行更新,得到第二预测信息。
终端设备根据第一刚体关系、第二刚体关系以及第二时刻的第二信息,可获取第二时刻的惯性测量单元相对标记物的位置及姿态信息,利用该位置及姿态信息对第二时刻的预测信息进行更新,可得到第二预测信息。
在一些实施方式中,终端设备可对第二图像采集装置与惯性测量单元的第二刚体关系进行更新,包括步骤(a)至步骤(c)。
步骤(a):利用第二刚体关系和第二预测信息预测第二图像采集装置在目标场景内的位置及姿态信息,得到第二姿态预测信息。
在一个实施例中,终端设备可利用第二图像采集装置与惯性测量单元的第二刚体关系对第二时刻更新后的预测信息进行坐标转换,重新计算第二时刻的第二图像采集装置在目标场景内的位置及姿态信息,得到第二姿态预测信息。
步骤(b):获取第二时刻的第二信息与第二姿态预测信息之间的误差。
终端设备可获取第二时刻的第二姿态预测信息和实际确定的第二图像采集装置在目标场景内的位置及姿态信息(即第二信息)之间的误差。在一些实施例中,可计算第二时刻的第二信息与第二姿态预测信息的差值并取绝对值,得到第二时刻第二信息与第二姿态预测信息之间的误差。
步骤(c):根据误差对第二刚体关系进行更新。
第二时刻的第二信息与第二姿态预测信息之间的误差是指第二图像采集装置在目标场景内的位置及姿态信息的实际值与预测值之间的误差,可利用第二信息与第二姿态预测信息之间的误差对第二刚体关系进行更新,提高定位追踪的准确性。
步骤840:将当前时刻的预测信息作为目标信息。
终端设备可将通过惯性测量单元得到的当前时刻的预测信息作为当前时刻终端设备相对标记物的位置及姿态信息,即目标信息,不同时刻的预测信息可作为对应时刻的目标信息。
请参阅图11,本实施例提供了定位跟踪方法中获取目标信息的具体流程,IMU为利用惯性测量单元获取终端设备相对标记物的预测信息,tag为根据标记物图像获取位置及姿态信息,而VIO为通过VIO算法获取位置及姿态信息。a1、a2、a3、a4分别为惯性测量单元在T1、T2、T3、T4时刻的预测信息,后一时刻的预测信息均可根据前一时刻的预测信息进行积分得到,该积分是指惯性测量单元中加速度及姿态角等的积分。
第一图像采集装置在T1时刻采集到包含标记物的图像,并获取到第一信息,根据第一图像采集装置与惯性测量单元之间的刚体关系可将第一信息转化为T1时刻惯性测量单元相对标记物的位置及姿态信息b1,并利用b1对惯性测量单元在T1时刻的预测信息进行更新得到a1’。惯性测量单元可利用T1时刻更新后的a1’对T1时刻之后的各个时刻重新进行积分预测,得到T2时刻的预测信息a2’、T3时刻的预测信息a3’以及T4时刻的预测信息a4’等。第二图像采集装置在T2时刻采集到目标场景的第二图像,并获取到第二信息,利用第二图像采集装置与惯性测量单元之间的第二刚体关系可将该第二信息转化为T2时刻惯性测量单元相对标记物的位置及姿态信息c1,并根据c1对T2时刻的预测信息a2’进行更新,得到a2^,可利用T2时刻更新后的a2^对T2时刻之后的各个时刻重新进行积分预测,得到T3时刻的预测信息a3^以及T4时刻的预测信息a4^等。惯性测量单元在各个时刻的最新的预测信息即可作为对应时刻终端设备相对标记物的位置及姿态信息。
上述实施例提供的定位跟踪的方法通过引入第一图像采集装置和第二图像采集装置相对惯性测量单元的第一刚体关系和第二刚体关系来对目标时间点的预测信息进行更新,进一步保证终端设备相对标记物位置及姿态信息的准确性。
请参图12,在另一个实施例中,提供了一种定位跟踪方法,应用于终端设备,该终端设备还包括微处理器和处理器,第一图像采集装置与微处理器连接,第二图像采集装置与处理器连接。该方法包括步骤1210至步骤1260。
步骤1210:根据第一图像采集装置采集的包含有标记物的第一图像,获取第一图像采集装置与标记物之间的相对位置及姿态信息,得到第一信息。
步骤1220:根据第二图像采集装置采集的包含有目标场景的第二图像,获取第二图像采集装置在目标场景内的位置及姿态信息,得到第二信息,其中,标记物和终端设备位于目标场景内。
步骤1230:利用惯性测量单元获取不同时刻下终端设备相对标记物的预测位置及姿态信息,得到不同时刻的预测信息。
步骤1240:当获取到第一时刻的第一信息时,利用第一信息对第一时刻的预测信息进行更新,得到第一预测信息,以重新获取在第一时刻之后的预测信息。
在一个实施例中,请参图13,步骤1240包括步骤1242至步骤1248。
步骤1242:通过处理器获取多个中断时刻,中断时刻为第一图像采集装置向处理器发送中断信号的时刻。
在一个实施例中,终端设备中的第一图像采集装置、第二图像采集装置、微处理器以及处理器等设备的连接关系如图14所示,第一图像采集装置401与微处理器连接,第二图像采集装置402与处理器连接,同时惯性测量单元也与处理器连接。由于处理器和微处理器是两个独立的硬件,分别采用独立的时钟系统,因此需要对处理器和微处理器的数据进行时间同步,以保证第一时刻的第一信息可对第一时刻的预测信息进行更新。
第一图像采集装置每次采集到包含标记物的第一图像时,均可向处理器发送一个中断信号,例如,发送一个GPIO(General-purpose input/output,通用输入/输出)中断信号,处理器可记录并存储每次接收到该中断信号的时刻,由于处理器接收中断信号的时刻与第一图像采集装置发送中断信号的时刻之间延时较小,可忽略不记,因此可将处理器每次接收到该中断信号的时刻作为第一图像采集装置向处理器发送中断信号的时刻,即中断时刻。第一图像采集装置采集包含标记物的图像是连续采集多帧图像的过程,会发生多次曝光,每次曝光都会对应产生一次中断,即采集每一帧都会产生一个中断,处理器可以获取到多个中断时刻。
步骤1244:通过处理器获取接收时刻,接收时刻为处理器接收微处理器发送的第一图像的时刻。
第一图像采集装置采集到第一图像,可通过微处理器对第一图像进行处理,例如,成像处理等,微处理器可以将处理后的第一图像发送至处理器,处理器可以记录接收到每帧第一图像的时刻,即第一图像的接收时刻。
步骤1246:利用接收时刻和多个中断时刻确定第一时刻。
在一些实施方式中,第一图像采集装置将第一图像传输至处理器的过程中存在一定的延迟时长△T,该延迟时长可以包括第一图像的处理时长t1和传输时长t2。其中,处理时长t1指的是微处理器处理第一图像所消耗的时间,在一个实施例中,该处理时长t1与第一图像采集装置的图像传感器的帧率相关,图像传感器的帧率越长,则第一图像的处理时长t1越短。传输时长t2指的是第一图像从微处理器传输至处理器所需的时间,本实施例中延迟时长△T可以是处理时长t1与传输时长t2的总和,即△T=t1+t2。
处理器可以根据接收第一图像的接收时刻和延迟时长得到该第一图像的理论曝光时刻。在一个实施例中,第一图像的理论曝光时刻可以通过接收时刻减去延迟时长获取,即第一图像的理论曝光时刻Ta可以是接收时刻Tb与延迟时长△T之间的差值,即Ta=Tb-△T。
处理器可以存储有多个第一图像采集装置发送中断信号的中断时刻,可以计算第一图像的理论曝光时刻与每个中断时刻的差值,并判断理论曝光时刻每个中断时刻的差值是否小于预设阈值,并将差值小于预设阈值的中断时刻作为第一图像采集装置采集第一图像的时刻。
举例说明,处理器存储有多个中断时刻Tc1、Tc2、Tc3、Tc4……,可分别计算理论曝光时刻Ta与Tc1、Tc2、Tc3、Tc4……之间的差值△t1,△t2,△t3,△t4……,可判断差值△t1,△t2,△t3,△t4……是否小于预设阈值时Th,并将小于预设阈值时Th的差值对应的中断时刻作为第一图像采集 装置采集第一图像的时刻。
在一些实施方式中,当存在多个差值小于预设阈值的中断时刻时,由于实际的延迟时长可能比理论上的延迟时长大,因此可进一步判断中断时刻是否小于理论曝光时刻,并将小于理论曝光时刻的中断时刻作为第一图像采集装置采集第一图像的时刻。例如,处理器接收第一图像的接收时刻Tb为100ms(毫秒),延迟时长△T为30ms,则第一图像的理论曝光时刻Ta为Tb-△T=70ms。处理器记录的中断时刻Tc1、Tc2、Tc3、Tc4和Tc5分别为20ms、40ms、60ms、80ms、100ms,可分别计算中断时刻Tc1、Tc2、Tc3、Tc4和Tc5与理论曝光时刻Ta的差值为50ms、30ms、10ms、10ms、30ms,可设阈值Th=15ms,则差值小于阈值的中断时刻有Tc3、Tc4,可以进一步将理论曝光时刻Ta与Tc3、Tc4进行比对,并选取小于或等于Ta的中断时刻Tc3,可将Tc3作为第一图像采集装置采集该第一图像的时刻,即第一时刻为60ms。
处理器获取到第一图像采集装置采集第一图像的时刻后,可获取该时刻对应的预测信息,以对该预测信息进行更新。
步骤1248:获取第一时刻的预测信息,并利用第一时刻的第一信息对第一时刻的预测信息进行更新。
步骤1250:当获取到第二时刻的第二信息时,利用第二信息对第二时刻的预测信息进行更新,得到第二预测信息,并基于第二预测信息重新获取在第二时刻之后的预测信息。
步骤1260:将当前时刻的预测信息作为目标信息。
在一些实施方式中,处理器还可定时向微处理器发送时刻同步指令,该时刻同步指令包括处理器的时钟时刻,而时刻同步指令用于指示微处理器根据处理器的时钟时刻对微处理器的时钟时刻进行调整,以使处理器及微处理器的时钟保持同步。微处理器接收时刻同步指令后,可根据当前的时钟时刻、处理器的时钟时刻及处理器与微处理器之间的信号发送时延计算与处理器之间的时间误差,并根据时间误差调整当前的时钟时刻。
上述实施例提供的定位跟踪方法,可以实现微处理器与处理器之间数据的同步,以保证定位追踪结果的准确性。
在一个实施例中,本申请提供一种计算机可读存储介质,该计算机可读介质中存储有程序代码,程序代码可被处理器调用执行上述实施例中所描述的方法。
计算机可读存储介质可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质包括非易失性计算机可读介质(non-transitory computer-readable storage medium)。计算机可读存储介质具有执行上述方法中的任何方法步骤的程序代码的存储空间。这些程序代码可以从一个或者多个计算机程序产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码可以例如以适当形式进行压缩。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不驱使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (23)

  1. 一种定位跟踪方法,其特征在于,应用于终端设备,所述终端设备包括第一图像采集装置和第二图像采集装置,所述方法包括:
    根据所述第一图像采集装置采集的包含有标记物的第一图像,获取所述第一图像采集装置与所述标记物之间的相对位置及姿态信息,得到第一信息;
    根据所述第二图像采集装置采集的包含有目标场景的第二图像,获取所述第二图像采集装置在所述目标场景内的位置及姿态信息,得到第二信息,其中,所述标记物和终端设备位于所述目标场景内;及
    利用所述第一信息和所述第二信息获取所述终端设备相对所述标记物的位置及姿态信息,得到目标信息。
  2. 根据权利要求1所述的方法,其特征在于,所述终端设备还包括惯性测量单元;
    所述利用所述第一信息和所述第二信息获取所述终端设备相对所述标记物的位置及姿态信息,得到目标信息,包括:
    利用所述惯性测量单元获取不同时刻下所述终端设备相对所述标记物的预测位置及姿态信息,得到不同时刻的预测信息;
    当获取到第一时刻的第一信息时,利用所述第一信息对所述第一时刻的预测信息进行更新,得到第一预测信息,并基于所述第一预测信息重新获取在所述第一时刻之后的预测信息;
    当获取到第二时刻的第二信息时,利用所述第二信息对所述第二时刻的预测信息进行更新,得到第二预测信息,并基于所述第二预测信息重新获取在所述第二时刻之后的预测信息;及
    将当前时刻的预测信息作为目标信息。
  3. 根据权利要求2所述的方法,其特征在于,所述当获取到第一时刻的第一信息时,利用所述第一信息对所述第一时刻的预测信息进行更新,得到第一预测信息,包括:
    获取所述第一图像采集装置与所述惯性测量单元的第一刚体关系;
    根据第一时刻的第一信息和所述第一刚体关系获取所述惯性测量单元相对所述标记物的位置及姿态信息;及
    利用所述惯性测量单元相对所述标记物的位置及姿态信息对所述第一时刻的预测信息进行更新,得到第一预测信息。
  4. 根据权利要求3所述的方法,其特征在于,所述得到第一预测信息之后,包括:
    利用所述第一刚体关系和所述第一预测信息预测所述第一图像采集装置与所述标记物之间的相对位置及姿态信息,得到第一姿态预测信息;
    获取所述第一时刻的第一信息与所述第一姿态预测信息之间的误差;及
    根据所述误差对所述第一刚体关系进行更新。
  5. 根据权利要求2所述的方法,其特征在于,所述当获取到第二时刻的第二信息时,利用所述第二信息对所述第二时刻的预测信息进行更新,得到第二预测信息,包括:
    获取所述第二图像采集装置与所述惯性测量单元的第二刚体关系;
    利用所述第一图像采集装置与所述惯性测量单元的第一刚体关系和所述第二刚体关系对第二时刻的第二信息进行坐标转换,得到所述终端设备相对所述标记物的中间位置及姿态信息;及
    利用所述终端设备相对所述标记物的中间位置及姿态信息对所述第二时刻的预测信息进行更新,得到第二预测信息。
  6. 根据权利要求5所述的方法,其特征在于,所述得到第二预测信息之后,还包括:
    利用所述第二刚体关系和所述第二预测信息预测所述第二图像采集装置在所述目标场景内的位置及姿态信息,得到第二姿态预测信息;
    获取所述第二时刻的第二信息与所述第二姿态预测信息之间的误差;及
    根据所述误差对所述第二刚体关系进行更新。
  7. 根据权利要求2所述的方法,其特征在于,所述终端设备还包括微处理器和处理器,所述第一图像采集装置与所述微处理器连接,所述第二图像采集装置与所述处理器连接;
    所述当获取到第一时刻的第一信息时,利用所述第一信息对所述第一时刻的预测信息进行更新,包括:
    通过所述处理器获取多个中断时刻,所述中断时刻为所述第一图像采集装置向所述处理器发送中断信号的时刻;
    通过所述处理器获取接收时刻,所述接收时刻为所述处理器接收所述微处理器发送的所述第一图像的时刻;
    利用所述接收时刻和所述多个中断时刻确定第一时刻;及
    获取所述第一时刻的预测信息,并利用所述第一时刻的第一信息对所述第一时刻的预测信息进行更新。
  8. 根据权利要求7所述的方法,其特征在于,所述利用所述接收时刻和所述多个中断时刻确定第一时刻,包括:
    获取从所述第一图像采集装置采集所述第一图像到所述处理器接收所述第一图像的延迟时长,所述延迟时长是所述第一图像的处理时长和传输时长的总和;
    利用所述接收时刻和所述延迟时长获取所述第一图像的曝光时刻;
    计算所述曝光时刻与每个所述中断时刻的差值,并判断所述差值是否小于预设阈值;及
    将差值小于所述预设阈值的中断时刻作为第一时刻。
  9. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    通过所述处理器向所述微处理器发送时刻同步指令,所述时刻同步指令包括所述处理器的时钟时刻,所述时刻同步指令用于指示所述微处理器根据所述处理器的时钟时刻对所述微处理器的时钟时刻进行调整。
  10. 一种终端设备,其特征在于,包括:
    第一图像采集装置,用于采集的包含有标记物的第一图像;
    第二图像采集装置,用于采集的包含有目标场景的第二图像;
    存储器,存储有一个或多个计算机程序;
    一个或多个处理器;
    所述计算机程序被所述处理器执行时,使得所述处理器执行如下步骤:
    根据所述第一图像,获取所述第一图像采集装置与所述标记物之间的相对位置及姿态信息,得到第一信息;
    根据所述第二图像,获取所述第二图像采集装置在所述目标场景内的位置及姿态信息,得到第二信息,其中,所述标记物和终端设备位于所述目标场景内;及
    利用所述第一信息和所述第二信息获取所述终端设备相对所述标记物的位置及姿态信息,得到目标信息。
  11. 根据权利要求10所述的终端设备,其特征在于,所述终端设备还包括惯性测量单元;
    所述利用所述第一信息和所述第二信息获取所述终端设备相对所述标记物的位置及姿态信息,得到目标信息,包括:
    利用所述惯性测量单元获取不同时刻下所述终端设备相对所述标记物的预测位置及姿态信息,得到不同时刻的预测信息;
    当获取到第一时刻的第一信息时,利用所述第一信息对所述第一时刻的预测信息进行更新,得到第一预测信息,并基于所述第一预测信息重新获取在所述第一时刻之后的预测信息;
    当获取到第二时刻的第二信息时,利用所述第二信息对所述第二时刻的预测信息进行更新,得到第二预测信息,并基于所述第二预测信息重新获取在所述第二时刻之后的预测信息;及
    将当前时刻的预测信息作为目标信息。
  12. 根据权利要求11所述的终端设备,其特征在于,所述当获取到第一时刻的第一信息时,利用所述第一信息对所述第一时刻的预测信息进行更新,得到第一预测信息,包括:
    获取所述第一图像采集装置与所述惯性测量单元的第一刚体关系;
    根据第一时刻的第一信息和所述第一刚体关系获取所述惯性测量单元相对所述标记物的位置及姿态信息;及
    利用所述惯性测量单元相对所述标记物的位置及姿态信息对所述第一时刻的预测信息进行更新,得到第一预测信息。
  13. 根据权利要求12所述的终端设备,其特征在于,所述处理器在执行所述得到第一预测信息的步骤之后,还执行以下步骤:
    利用所述第一刚体关系和所述第一预测信息预测所述第一图像采集装置与所述标记物之间的相对位置及姿态信息,得到第一姿态预测信息;
    获取所述第一时刻的第一信息与所述第一姿态预测信息之间的误差;及
    根据所述误差对所述第一刚体关系进行更新。
  14. 根据权利要求11所述的终端设备,其特征在于,所述当获取到第二时刻的第二信息时,利用所述第二信息对所述第二时刻的预测信息进行更新,得到第二预测信息,包括:
    获取所述第二图像采集装置与所述惯性测量单元的第二刚体关系;
    利用所述第一图像采集装置与所述惯性测量单元的第一刚体关系和所述第二刚体关系对第二时刻的第二信息进行坐标转换,得到所述终端设备相对所述标记物的中间位置及姿态信息;及
    利用所述终端设备相对所述标记物的中间位置及姿态信息对所述第二时刻的预测信息进行更新,得到第二预测信息。
  15. 根据权利要求14所述的终端设备,其特征在于,所述处理器在执行所述得到第二预测信息的步骤之后,还执行以下步骤:
    利用所述第二刚体关系和所述第二预测信息预测所述第二图像采集装置在所述目标场景内的位置及姿态信息,得到第二姿态预测信息;
    获取所述第二时刻的第二信息与所述第二姿态预测信息之间的误差;及
    根据所述误差对所述第二刚体关系进行更新。
  16. 根据权利要求11所述的终端设备,其特征在于,所述终端设备还包括微处理器,所述第一图像采集装置与所述微处理器连接,所述第二图像采集装置与所述处理器连接;
    所述当获取到第一时刻的第一信息时,利用所述第一信息对所述第一时刻的预测信息进行更新,包括:
    获取多个中断时刻,所述中断时刻为所述第一图像采集装置向所述处理器发送中断信号的时刻;
    获取接收时刻,所述接收时刻为所述处理器接收所述微处理器发送的所述第一图像的时刻;
    利用所述接收时刻和所述多个中断时刻确定第一时刻;及
    获取所述第一时刻的预测信息,并利用所述第一时刻的第一信息对所述第一时刻的预测信息进行更新。
  17. 根据权利要求16所述的终端设备,其特征在于,所述利用所述接收时刻和所述多个中断时刻确定第一时刻,包括:
    获取从所述第一图像采集装置采集所述第一图像到所述处理器接收所述第一图像的延迟时长,所述延迟时长是所述第一图像的处理时长和传输时长的总和;
    利用所述接收时刻和所述延迟时长获取所述第一图像的曝光时刻;
    计算所述曝光时刻与每个所述中断时刻的差值,并判断所述差值是否小于预设阈值;及
    将差值小于所述预设阈值的中断时刻作为第一时刻。
  18. 一种计算机存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器运行时,使得所述处理器执行如下步骤:
    根据终端设备的第一图像采集装置采集的包含有标记物的第一图像,获取所述第一图像采集装置与所述标记物之间的相对位置及姿态信息,得到第一信息;
    根据终端设备的所述第二图像采集装置采集的包含有目标场景的第二图像,获取所述第二图像采集装置在所述目标场景内的位置及姿态信息,得到第二信息,其中,所述标记物和终端设备位于所述目标场景内;及
    利用所述第一信息和所述第二信息获取所述终端设备相对所述标记物的位置及姿态信息,得到目标信息。
  19. 根据权利要求18所述的计算机存储介质,其特征在于,所述利用所述第一信息和所述第二信息获取所述终端设备相对所述标记物的位置及姿态信息,得到目标信息,包括:
    利用终端设备的惯性测量单元获取不同时刻下所述终端设备相对所述标记物的预测位置及姿态信息,得到不同时刻的预测信息;
    当获取到第一时刻的第一信息时,利用所述第一信息对所述第一时刻的预测信息进行更新,得到第一预测信息,并基于所述第一预测信息重新获取在所述第一时刻之后的预测信息;
    当获取到第二时刻的第二信息时,利用所述第二信息对所述第二时刻的预测信息进行更新,得到第二预测信息,并基于所述第二预测信息重新获取在所述第二时刻之后的预测信息;及
    将当前时刻的预测信息作为目标信息。
  20. 根据权利要求19所述的计算机存储介质,其特征在于,所述当获取到第一时刻的第一信息时,利用所述第一信息对所述第一时刻的预测信息进行更新,得到第一预测信息,包括:
    获取所述第一图像采集装置与所述惯性测量单元的第一刚体关系;
    根据第一时刻的第一信息和所述第一刚体关系获取所述惯性测量单元相对所述标记物的位置及姿态信息;及
    利用所述惯性测量单元相对所述标记物的位置及姿态信息对所述第一时刻的预测信息进行更新,得到第一预测信息。
  21. 一种定位跟踪方法,其特征在于,所述方法包括:
    采集包含标记物的图像;
    识别所述图像中的标记物,并获取第一空间位置信息;
    获取终端设备的位姿变化信息,所述位姿变化信息包括所述终端设备的位置变化信息和姿态变化信息;
    根据所述位姿变化信息获取所述终端设备的第二空间位置信息;及
    基于所述第一空间位置信息和/或所述第二空间位置信息,获取所述终端设备的当前位置信息。
  22. 一种终端设备,其特征在于,包括存储器及处理器,所述存储器中存储有计算机程序,所述计算机程序被所述处理器执行时,使得所述处理器执行如下步骤:
    采集包含标记物的图像;
    识别所述图像中的标记物,并获取第一空间位置信息;
    获取终端设备的位姿变化信息,所述位姿变化信息包括所述终端设备的位置变化信息和姿态变化信息;
    根据所述位姿变化信息获取所述终端设备的第二空间位置信息;及
    基于所述第一空间位置信息和/或所述第二空间位置信息,获取所述终端设备的当前位置信息。
  23. 一种计算机存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器运行时,使得所述处理器执行如下步骤:
    采集包含标记物的图像;
    识别所述图像中的标记物,并获取第一空间位置信息;
    获取终端设备的位姿变化信息,所述位姿变化信息包括所述终端设备的位置变化信息和姿态变化信息;
    根据所述位姿变化信息获取所述终端设备的第二空间位置信息;及
    基于所述第一空间位置信息和/或所述第二空间位置信息,获取所述终端设备的当前位置信息。

