CN108693548A - Navigation method and system based on scene object recognition - Google Patents
Navigation method and system based on scene object recognition
- Publication number
- CN108693548A (application number CN201810480999.2A)
- Authority
- CN
- China
- Prior art keywords
- scene
- position information
- user
- target
- navigation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S19/00—Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
- G01S19/38—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
- G01S19/39—Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
- G01S19/42—Determining position
- G01S19/45—Determining position by combining measurements of signals from the satellite radio beacon positioning system with a supplementary measurement
Landscapes
- Engineering & Computer Science (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Navigation (AREA)
Abstract
An embodiment of the present invention provides a navigation method and system based on scene object recognition, including: identifying at least one scene object from a scene image of the position where a user is located; obtaining the position information of each of the at least one scene object from a preset database; and obtaining the position information of the user according to the position information of each of the at least one scene object. The position information of the user is obtained by identifying the position information of all scene objects in the scene image of the user's position, realizing positioning of the user. This solves the problem in satellite navigation technology that positioning is impossible or erroneous when the GPS signal is lost or weak, and further improves navigation precision.
Description
Technical Field
The embodiment of the invention relates to the technical field of navigation, in particular to a navigation method and a navigation system based on scene target identification.
Background
The popularization of navigation technology has greatly facilitated people's daily life and travel. Current navigation technology is mostly based on satellite positioning; for example, the GPS (Global Positioning System) constellation is composed of 21 working satellites and 3 spare satellites.
However, in many places where the GPS signal is absent or weak, the user's position cannot be determined, or is determined incorrectly. This causes great inconvenience in application scenarios that require high-precision navigation, such as walking.
Disclosure of Invention
Embodiments of the present invention provide a navigation method and system based on scene object recognition, which overcome, or at least partially solve, the above problems.
In one aspect, an embodiment of the present invention provides a navigation method based on scene object recognition, including:
identifying at least one scene target from a scene image of a position where a user is located;
acquiring position information of each scene target in the at least one scene target from a preset database;
and acquiring the position information of the user according to the position information of each scene target in the at least one scene target.
Further, before identifying at least one scene object from the scene image of the location where the user is located, the method further comprises:
and acquiring the scene image shot by the user through the mobile terminal at the position.
Further, after obtaining the location information of the user, the method further comprises:
and planning a navigation path for the user by using a preset path navigation algorithm and combining a satellite navigation technology according to the position information of the user.
Further, the scene objects include one or more of buildings, guideboards, landmark buildings, and doorplates.
Furthermore, the preset database stores a position information list of all scene targets acquired by satellite positioning technology, wherein each scene target corresponds one-to-one to its position information. Correspondingly,
the obtaining of the position information of each scene target in the at least one scene target from the preset database specifically includes:
and searching the position information corresponding to each scene target in the at least one scene target in the position information list.
Further, the obtaining the location information of the user according to the location information of each scene target in the at least one scene target specifically includes:
acquiring the position information of a scene image of the position of the user according to the position information of each scene target in the at least one scene target;
and taking the position information of the scene image of the position where the user is located as the position information of the user.
Further, the obtaining, according to the position information of each scene target in the at least one scene target, the position information of the scene image of the position where the user is located specifically includes:
when the number of acquired scene targets is one, using the position information of that scene target as the position information of the scene image of the position where the user is located; when two scene targets are acquired, taking the midpoint of the line connecting the two scene targets as the position information of the scene image of the position where the user is located; and when at least three scene targets are acquired, connecting the at least three scene targets in sequence and using the center position of the resulting polygon as the position information of the scene image of the position where the user is located.
In another aspect, an embodiment of the present invention provides a navigation system based on scene object recognition, including:
the scene target acquisition module is used for identifying at least one scene target from a scene image of a position where a user is located;
the scene target position information acquisition module is used for acquiring the position information of each scene target in the at least one scene target from a preset database;
and the user position information acquisition module is used for acquiring the position information of the user according to the position information of each scene target in the at least one scene target.
In a third aspect, an embodiment of the present invention provides a navigation device based on scene object recognition, including:
at least one processor, at least one memory, a communication interface, and a bus; wherein,
the processor, the memory and the communication interface complete mutual communication through the bus;
the communication interface is used for information transmission between the navigation device and the communication equipment of a display apparatus;
the memory stores program instructions executable by the processor, which when called by the processor are capable of performing the above-described methods.
A fourth aspect of the present invention provides a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the above method.
According to the navigation method and system based on scene target recognition provided by the embodiments of the present invention, the position information of the user is acquired by identifying the position information of all scene targets in the scene image of the user's position. This realizes positioning of the user, effectively solves the problem in satellite navigation technology that positioning is impossible or erroneous when the GPS signal is lost or weak, and further improves navigation precision.
Drawings
Fig. 1 is a flowchart of a navigation method based on scene object recognition according to an embodiment of the present invention;
fig. 2 is a block diagram of a navigation system based on scene object recognition according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a navigation device based on scene object recognition according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of a navigation method based on scene object recognition according to an embodiment of the present invention, and as shown in fig. 1, the method includes:
s1, identifying at least one scene target from the scene image of the position where the user is located;
s2, acquiring the position information of each scene target in the at least one scene target from a preset database;
and S3, acquiring the position information of the user according to the position information of each scene target in the at least one scene target.
In step S1, a scene target is a specific and unique marker in the scene, such as a building, a landmark building, a guideboard, or a doorplate. The position information of a scene target is generally fixed and known, so it can be stored in a preset database as preset information and retrieved when needed.
When a user navigates with an existing technology such as satellite navigation and positioning fails because the GPS signal is lost or weak, the user sends a captured image of the current scene to the back end. The back end processes the scene image with an image-processing algorithm to obtain the scene targets in the scene; at least one scene target is obtained, and it can be understood that the more scene targets are obtained, the more accurate the subsequently obtained position information of the user will be.
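The patent does not prescribe a particular image-processing algorithm for this step, so the following is only a sketch of the back-end interface; SceneTarget, the injected detector callable, and the 0.5 confidence threshold are assumptions of this sketch, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SceneTarget:
    name: str          # e.g. a guideboard or doorplate label returned by the recognizer
    confidence: float

def identify_scene_targets(
    image_bytes: bytes,
    detector: Callable[[bytes], List[Dict]],   # any recognizer returning [{"name": ..., "score": ...}]
    min_score: float = 0.5,
) -> List[SceneTarget]:
    """Step S1: run an injected recognizer over the uploaded scene image and keep
    every confidently identified building, guideboard, landmark or doorplate."""
    return [
        SceneTarget(d["name"], d["score"])
        for d in detector(image_bytes)
        if d["score"] >= min_score
    ]
```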
In step S2, the position information of all scene targets identified in step S1 is obtained by searching the preset database. It will be appreciated that this position information includes all the information required by existing navigation technologies such as satellite navigation.
In step S3, after the position information of all scene targets in the scene where the user is located has been obtained, the position of the user can be calculated from the position information of those scene targets, because the scene the user is able to photograph is generally only a short distance from where the user stands. The user is thereby positioned.
According to the navigation method based on scene target recognition provided by this embodiment of the present invention, the position information of the user is acquired by identifying the position information of all scene targets in the scene image of the user's position. This realizes positioning of the user, effectively solves the problem in satellite navigation technology that positioning is impossible or erroneous when the GPS signal is lost or weak, and further improves navigation precision.
On the basis of the above embodiment, before identifying at least one scene object from the scene image of the position where the user is located, the method further comprises:
and acquiring the scene image shot by the user through the mobile terminal at the position.
Specifically, the scene image of the position where the user is located can be shot with a mobile terminal carried by the user and then uploaded to the back end. Preferably, an application program implementing the method is installed on the user's mobile phone. When the GPS signal is unavailable or weak, the application prompts the user; the user then takes a picture with the phone and uploads the scene image to the back end through the application. The mobile terminal thus makes acquiring and uploading scene images convenient and fast.
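A minimal client-side sketch of this flow, assuming hypothetical take_photo and upload callables and an arbitrary satellite-count threshold for "weak GPS"; none of these names appear in the patent.

```python
def need_fallback(gps_fix) -> bool:
    """Treat a missing fix or fewer than 4 visible satellites as 'weak GPS';
    the threshold is illustrative only."""
    return gps_fix is None or gps_fix.get("satellites", 0) < 4

def on_location_update(gps_fix, take_photo, upload) -> None:
    """Prompt-and-upload flow: when satellite positioning is unusable, ask the
    user for a photo of the surroundings and send it to the back end."""
    if need_fallback(gps_fix):
        image_bytes = take_photo()   # user takes a picture when prompted
        upload(image_bytes)          # app uploads it for scene-target positioning
```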
On the basis of the above embodiment, after the location information of the user is acquired, the method further includes:
and planning a navigation path for the user by using a preset path navigation algorithm and combining a satellite navigation technology according to the position information of the user.
Specifically, after the user's position information is obtained, a navigation path is planned for the user by combining a path navigation algorithm with satellite navigation technology. It can be understood that both the preset path navigation algorithm and the satellite navigation technology are prior art and are not described again here.
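The "preset path navigation algorithm" is left unspecified; a plain Dijkstra shortest-path search over a road graph is one conventional choice and is sketched below purely as an illustration.

```python
import heapq
from typing import Dict, List, Tuple

def plan_route(road_graph: Dict[str, List[Tuple[str, float]]],
               start: str, goal: str) -> List[str]:
    """One possible 'preset path navigation algorithm': Dijkstra shortest path
    on a road graph mapping node -> [(neighbour, edge_length_in_metres), ...]."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length in road_graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + length, nxt, path + [nxt]))
    return []  # no route between start and goal
```

Here start would be the graph node nearest the position recovered from the scene targets, and goal the node nearest the destination.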
On the basis of the above embodiment, the scene object includes one or more of a building, a guideboard, a landmark building and a doorplate.
Specifically, buildings, guideboards, landmark buildings, doorplates, and the like are all specific and unique markers whose position information is fixed and known. When the scene targets in a scene image are identified, all types of scene targets should be identified in order to improve the accuracy of the user's position information.
On the basis of the above embodiment, the preset database stores a list of position information of all scene targets acquired by satellite positioning technology, wherein each scene target corresponds one-to-one to its position information. Correspondingly,
the obtaining of the position information of each scene target in the at least one scene target from the preset database specifically includes:
and searching the position information corresponding to each scene target in the at least one scene target in the position information list.
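A minimal sketch of this look-up, under the assumption that the position information list is held as an in-memory mapping from target name to latitude/longitude collected beforehand by satellite positioning; the entries below are placeholders, not data from the patent.

```python
from typing import Dict, List, Tuple

LatLon = Tuple[float, float]

# Placeholder entries; in the patent the list is collected in advance by
# satellite positioning, one entry per scene target.
POSITION_LIST: Dict[str, LatLon] = {
    "north-gate guideboard": (39.9151, 116.4040),
    "library building":      (39.9163, 116.4052),
}

def lookup_positions(target_names: List[str]) -> List[LatLon]:
    """Step S2: fetch the stored position of each recognized scene target,
    skipping any target that has no entry in the preset database."""
    return [POSITION_LIST[n] for n in target_names if n in POSITION_LIST]
```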
On the basis of the above embodiment, the acquiring the location information of the user according to the location information of each scene object in the at least one scene object specifically includes:
acquiring the position information of a scene image of the position of the user according to the position information of each scene target in the at least one scene target;
and taking the position information of the scene image of the position where the user is located as the position information of the user.
Specifically, the position information of the scene image is determined from the scene targets. Because the scene photographed by the user's mobile terminal is within a limited distance of the user, the position information of the scene image, although derived from the scene targets, can be taken as the position information of the user.
On the basis of the above embodiment, the obtaining, according to the position information of each scene target in the at least one scene target, the position information of the scene image of the position where the user is located specifically includes:
when the number of acquired scene targets is one, using the position information of that scene target as the position information of the scene image of the position where the user is located; when two scene targets are acquired, taking the midpoint of the line connecting the two scene targets as the position information of the scene image of the position where the user is located; and when at least three scene targets are acquired, connecting the at least three scene targets in sequence and using the center position of the resulting polygon as the position information of the scene image of the position where the user is located.
Specifically, depending on how many scene targets are acquired, the position information of the scene image is computed in different ways so that it is as accurate as possible. In particular, when at least three scene targets are acquired, the position of the center of the polygon obtained by connecting all the scene targets in sequence is used.
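These three cases collapse into a single computation if the polygon "center" is taken to be the arithmetic mean of the vertices, which is one possible reading since the patent does not specify which center is meant. A minimal sketch under that assumption:

```python
from typing import List, Tuple

def estimate_user_position(
    target_positions: List[Tuple[float, float]]
) -> Tuple[float, float]:
    """Step S3 per the embodiment: one target -> that target's position;
    two targets -> midpoint of their connecting line; three or more ->
    'center' of the polygon formed by connecting them in sequence, taken
    here as the mean of the vertices (an assumption of this sketch)."""
    if not target_positions:
        raise ValueError("at least one scene target position is required")
    n = len(target_positions)
    lat = sum(p[0] for p in target_positions) / n
    lon = sum(p[1] for p in target_positions) / n
    # For n == 1 the mean is the single position, and for n == 2 it is exactly
    # the midpoint, so all three cases reduce to one formula.
    return (lat, lon)
```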
Fig. 2 is a structural block diagram of a navigation system based on scene object recognition according to an embodiment of the present invention, and as shown in fig. 2, the system includes a scene object obtaining module 201, a scene object position information obtaining module 202, and a user position information obtaining module 203. Wherein:
the scene object obtaining module 201 is configured to identify at least one scene object from a scene image of a location where a user is located. The scene target position information obtaining module 202 is configured to obtain position information of each scene target in the at least one scene target from a preset database. The user location information obtaining module 203 is configured to obtain location information of the user according to location information of each scene target in the at least one scene target.
Specifically, the functions and operation flows of the modules in the navigation system based on scene object recognition in the embodiment of the present invention are in one-to-one correspondence with the method embodiments described above, and are not described herein again.
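For illustration only, the three modules of fig. 2 can be wired to the step sketches given earlier (identify_scene_targets, lookup_positions, estimate_user_position); the class and method names below are assumptions, not taken from the patent.

```python
from typing import Callable, Dict, List, Tuple

class SceneTargetNavigator:
    """Illustrative wiring of fig. 2: module 201 (scene target acquisition),
    module 202 (scene target position lookup) and module 203 (user position
    acquisition), built from the step sketches above."""

    def __init__(self, detector: Callable[[bytes], List[Dict]]):
        self.detector = detector

    def locate_user(self, image_bytes: bytes) -> Tuple[float, float]:
        targets = identify_scene_targets(image_bytes, self.detector)      # module 201
        positions = lookup_positions([t.name for t in targets])           # module 202
        return estimate_user_position(positions)                          # module 203
```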
According to the navigation system based on scene target recognition provided by this embodiment of the present invention, the position information of the user is obtained by identifying the position information of all scene targets in the scene image of the user's position. This realizes positioning of the user, effectively solves the problem in satellite navigation technology that positioning is impossible or erroneous when the GPS signal is lost or weak, and further improves navigation precision.
As shown in fig. 3, on the basis of the foregoing embodiments, an embodiment of the present invention further provides a navigation apparatus based on scene object recognition, including: at least one processor 301, at least one memory 302, a communication interface 303, and a bus 304. The processor 301, the memory 302 and the communication interface 303 communicate with one another through the bus 304; the communication interface 303 is used for information transmission between the navigation apparatus and a communication apparatus of a display device; the memory 302 stores program instructions executable by the processor 301, which the processor 301 calls to perform the method of fig. 1.
The logic instructions in the memory 302 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Embodiments of the present invention provide a non-transitory computer-readable storage medium, which stores computer instructions, where the computer instructions cause the computer to perform the methods provided by the above method embodiments, for example, the methods include: identifying at least one scene target from a scene image of a position where a user is located; acquiring position information of each scene target in the at least one scene target from a preset database; and acquiring the position information of the user according to the position information of each scene target in the at least one scene target.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A navigation method based on scene target recognition is characterized by comprising the following steps:
identifying at least one scene target from a scene image of a position where a user is located;
acquiring position information of each scene target in the at least one scene target from a preset database;
and acquiring the position information of the user according to the position information of each scene target in the at least one scene target.
2. The method of claim 1, wherein prior to identifying at least one scene object from the image of the scene where the user is located, the method further comprises:
and acquiring the scene image shot by the user through the mobile terminal at the position.
3. The method of claim 1, wherein after obtaining the location information of the user, the method further comprises:
and planning a navigation path for the user by using a preset path navigation algorithm and combining a satellite navigation technology according to the position information of the user.
4. The method of claim 1, wherein the scene objects include one or more of buildings, guideboards, landmark buildings, and doorplates.
5. The method according to claim 1, wherein the preset database stores a list of position information of all scene targets obtained by satellite positioning technology, wherein each scene target corresponds one-to-one to its position information. Correspondingly,
the obtaining of the position information of each scene target in the at least one scene target from the preset database specifically includes:
and searching the position information corresponding to each scene target in the at least one scene target in the position information list.
6. The method according to claim 1, wherein the obtaining the location information of the user according to the location information of each scene object in the at least one scene object specifically includes:
acquiring the position information of a scene image of the position of the user according to the position information of each scene target in the at least one scene target;
and taking the position information of the scene image of the position where the user is located as the position information of the user.
7. The method according to claim 6, wherein the obtaining, according to the position information of each scene object in the at least one scene object, the position information of the scene image at the position where the user is located specifically includes:
when the number of acquired scene targets is one, using the position information of that scene target as the position information of the scene image of the position where the user is located; when two scene targets are acquired, taking the midpoint of the line connecting the two scene targets as the position information of the scene image of the position where the user is located; and when at least three scene targets are acquired, connecting the at least three scene targets in sequence and using the center position of the resulting polygon as the position information of the scene image of the position where the user is located.
8. A navigation system based on scene object recognition, comprising:
the scene target acquisition module is used for identifying at least one scene target from a scene image of a position where a user is located;
the scene target position information acquisition module is used for acquiring the position information of each scene target in the at least one scene target from a preset database;
and the user position information acquisition module is used for acquiring the position information of the user according to the position information of each scene target in the at least one scene target.
9. A navigation device based on scene object recognition, comprising:
at least one processor, at least one memory, a communication interface, and a bus; wherein,
the processor, the memory and the communication interface complete mutual communication through the bus;
the communication interface is used for information transmission between the navigation device and the communication equipment of a display apparatus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1-7.
10. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810480999.2A CN108693548B (en) | 2018-05-18 | 2018-05-18 | Navigation method and system based on scene target recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810480999.2A CN108693548B (en) | 2018-05-18 | 2018-05-18 | Navigation method and system based on scene target recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108693548A true CN108693548A (en) | 2018-10-23 |
CN108693548B CN108693548B (en) | 2021-10-22 |
Family
ID=63846990
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810480999.2A Active CN108693548B (en) | 2018-05-18 | 2018-05-18 | Navigation method and system based on scene target recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108693548B (en) |
2018
- 2018-05-18: CN application CN201810480999.2A filed; granted as CN108693548B (status: Active)
Patent Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000221045A (en) * | 1999-01-29 | 2000-08-11 | Fujitsu Ten Ltd | Navigation device |
CN1569558A (en) * | 2003-07-22 | 2005-01-26 | 中国科学院自动化研究所 | Vision navigation method for a mobile robot based on image representation features |
US20110169947A1 (en) * | 2010-01-12 | 2011-07-14 | Qualcomm Incorporated | Image identification using trajectory-based location determination |
US20120320212A1 (en) * | 2010-03-03 | 2012-12-20 | Honda Motor Co., Ltd. | Surrounding area monitoring apparatus for vehicle |
CN102467739A (en) * | 2010-10-29 | 2012-05-23 | 夏普株式会社 | Image judgment device, image extraction device and image judgment method |
US9721388B2 (en) * | 2011-04-20 | 2017-08-01 | Nec Corporation | Individual identification character display system, terminal device, individual identification character display method, and computer program |
US20140267248A1 (en) * | 2013-03-14 | 2014-09-18 | Robert Bosch Gmbh | System And Method For Generation Of Shadow Effects In Three-Dimensional Graphics |
CN103424113A (en) * | 2013-08-01 | 2013-12-04 | 毛蔚青 | Indoor positioning and navigating method of mobile terminal based on image recognition technology |
CN103398717A (en) * | 2013-08-22 | 2013-11-20 | 成都理想境界科技有限公司 | Panoramic map database acquisition system and vision-based positioning and navigating method |
CN104299236A (en) * | 2014-10-20 | 2015-01-21 | 中国科学技术大学先进技术研究院 | Target locating method based on scene calibration and interpolation combination |
US20170138752A1 (en) * | 2015-06-19 | 2017-05-18 | Yakov Z. Mermelstein | Method and System for Providing Personalized Navigation Services and Crowd-Sourced Location-Based Data |
CN107404566A (en) * | 2016-05-19 | 2017-11-28 | 中国移动通信集团设计院有限公司 | Terminal scene determination method and device |
CN106125040A (en) * | 2016-06-15 | 2016-11-16 | 中建电子工程有限公司 | Method for improving the ability of a TOA wireless positioning system to resist the influence of moisture content changes |
CN108020225A (en) * | 2016-10-28 | 2018-05-11 | 大辅科技(北京)有限公司 | Map system and navigation method based on image recognition |
CN106776813A (en) * | 2016-11-24 | 2017-05-31 | 南华大学 | Rapid positioning and navigation method for large indoor venues based on the SSIFT algorithm |
CN106875442A (en) * | 2016-12-26 | 2017-06-20 | 上海蔚来汽车有限公司 | Vehicle positioning method based on image feature data |
CN107766432A (en) * | 2017-09-18 | 2018-03-06 | 维沃移动通信有限公司 | Data interaction method, mobile terminal and server |
CN107967457A (en) * | 2017-11-27 | 2018-04-27 | 全球能源互联网研究院有限公司 | Place recognition and relative positioning method and system adapting to visual feature changes |
Non-Patent Citations (1)
Title |
---|
陈夏兰: "Research on GNSS complementary positioning algorithms based on traffic image information", Proceedings of the 4th China Satellite Navigation Conference, Session S9: Integrated Navigation and New Navigation Algorithms *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109781115A (en) * | 2019-02-15 | 2019-05-21 | 陈炜 | Map reference identification system capable of improving positioning accuracy |
CN110567475A (en) * | 2019-09-19 | 2019-12-13 | 北京地平线机器人技术研发有限公司 | Navigation method, navigation device, computer readable storage medium and electronic equipment |
CN110567475B (en) * | 2019-09-19 | 2023-09-29 | 北京地平线机器人技术研发有限公司 | Navigation method, navigation device, computer readable storage medium and electronic equipment |
CN111256677A (en) * | 2020-01-22 | 2020-06-09 | 维沃移动通信(杭州)有限公司 | Positioning method, electronic device and storage medium |
CN111256677B (en) * | 2020-01-22 | 2022-05-17 | 维沃移动通信(杭州)有限公司 | Positioning method, electronic device and storage medium |
CN111811533A (en) * | 2020-07-06 | 2020-10-23 | 腾讯科技(深圳)有限公司 | Yaw determination method and device and electronic equipment |
CN112284396A (en) * | 2020-10-29 | 2021-01-29 | 的卢技术有限公司 | Vehicle positioning method suitable for underground parking lot |
CN112284396B (en) * | 2020-10-29 | 2023-01-03 | 的卢技术有限公司 | Vehicle positioning method suitable for underground parking lot |
CN113329121A (en) * | 2021-05-28 | 2021-08-31 | 维沃软件技术有限公司 | Operation execution method, operation execution device, electronic device, and readable storage medium |
CN114485617A (en) * | 2022-01-14 | 2022-05-13 | 京东方科技集团股份有限公司 | Navigation route generation method and device, storage medium and electronic equipment |
CN114485617B (en) * | 2022-01-14 | 2024-10-18 | 京东方科技集团股份有限公司 | Navigation route generation method and device, storage medium and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN108693548B (en) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108693548B (en) | Navigation method and system based on scene target recognition | |
CN111442722B (en) | Positioning method, positioning device, storage medium and electronic equipment | |
CN108051836B (en) | Positioning method, device, server and system | |
CN110246182B (en) | Vision-based global map positioning method and device, storage medium and equipment | |
CN107845114B (en) | Map construction method and device and electronic equipment | |
CN111623765B (en) | Indoor positioning method and system based on multi-mode data | |
CN110443850B (en) | Target object positioning method and device, storage medium and electronic device | |
CN105571583B (en) | User position positioning method and server | |
CN112689234B (en) | Indoor vehicle positioning method, device, computer equipment and storage medium | |
CN111750882A (en) | Method and device for correcting vehicle pose during initialization of navigation map | |
CN104949673A (en) | Target locating method and device based on non-visual perception information | |
CN111380515A (en) | Positioning method and device, storage medium and electronic device | |
CN112985419B (en) | Indoor navigation method and device, computer equipment and storage medium | |
CN114419590A (en) | High-precision map verification method, device, equipment and storage medium | |
CN111583338B (en) | Positioning method and device for unmanned equipment, medium and unmanned equipment | |
CN113219505A (en) | Method, device and equipment for acquiring GPS coordinates for vehicle-road cooperative tunnel scene | |
US11085992B2 (en) | System and method for positioning a terminal device | |
CN109788431B (en) | Bluetooth positioning method, device, equipment and system based on adjacent node group | |
CN113654548A (en) | Positioning method, positioning device, electronic equipment and storage medium | |
CN111723682A (en) | Method and device for providing location service, readable storage medium and electronic equipment | |
CN110264521A (en) | A kind of localization method and system based on binocular camera | |
CN110580275A (en) | Map display method and device | |
CN112802097B (en) | Positioning method, positioning device, electronic equipment and storage medium | |
CN112749577A (en) | Parking space detection method and device | |
CN114219907B (en) | Three-dimensional map generation method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||