CN113587917A - Indoor positioning method, device, equipment, storage medium and computer program product - Google Patents
- Publication number: CN113587917A
- Application number: CN202110858444.9A
- Authority
- CN
- China
- Prior art keywords
- floor
- positioning
- bluetooth
- indoor
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/005—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/20—Instruments for performing navigational calculations
- G01C21/206—Instruments for performing navigational calculations specially adapted for indoor navigation
Abstract
The present disclosure provides an indoor positioning method, an indoor positioning device, an electronic device, a computer-readable storage medium, and a computer program product, and relates to artificial-intelligence fields such as image recognition, high-precision navigation, augmented reality, and cloud services. The method comprises the following steps: acquiring a target image shot at the current indoor position; matching the target image against the image data of at least two candidate floors for a preset duration; in response to no matched positioning result being obtained within the preset duration, determining the target indoor floor using the Bluetooth positioning information accumulated during that duration; and determining, in the image data corresponding to the target indoor floor, the target indoor position corresponding to the target image. By comprehensively using a pure visual positioning algorithm and a Bluetooth visual positioning algorithm, the method obtains the final target indoor position in less time and with higher accuracy.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, in particular to artificial-intelligence technologies such as image recognition, high-precision navigation, augmented reality, and cloud services, and more particularly to an indoor positioning method and apparatus, an electronic device, a computer-readable storage medium, and a computer program product.
Background
Outdoor positioning mostly depends on the Global Positioning System (GPS) and some auxiliary inertial components, and such schemes are mature.
Compared with the outdoors, indoor space is relatively confined and GPS signal reception is obstructed. How to achieve high-precision, fast indoor positioning in buildings with complex spatial structures (such as multiple floors) is therefore a technical problem that those skilled in the art urgently need to solve.
Disclosure of Invention
The embodiment of the disclosure provides an indoor positioning method and device, electronic equipment, a computer readable storage medium and a computer program product.
In a first aspect, an embodiment of the present disclosure provides an indoor positioning method, including: acquiring a target image shot at the current indoor position; matching the target image against the image data of at least two candidate floors for a preset duration, the candidate floors being floors available for positioning queries; in response to no matched positioning result being obtained within the preset duration, determining the target indoor floor using the Bluetooth positioning information accumulated during that duration; and determining, in the image data corresponding to the target indoor floor, the target indoor position corresponding to the target image.
In a second aspect, an embodiment of the present disclosure provides an indoor positioning device, including: a target image acquisition unit configured to acquire a target image shot at the current indoor position; a candidate-floor matching and positioning unit configured to match the target image against the image data of at least two candidate floors for a preset duration, the candidate floors being floors available for positioning queries; a floor Bluetooth positioning unit configured to determine, in response to no matched positioning result being obtained within the preset duration, the target indoor floor using the Bluetooth positioning information accumulated during that duration; and a target indoor position determination unit configured to determine, in the image data corresponding to the target indoor floor, the target indoor position corresponding to the target image.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of indoor positioning as described in any one of the implementations of the first aspect when executed.
In a fourth aspect, the disclosed embodiments provide a non-transitory computer-readable storage medium storing computer instructions for enabling a computer to implement an indoor positioning method as described in any implementation manner of the first aspect when executed.
In a fifth aspect, the embodiments of the present disclosure provide a computer program product comprising a computer program, which when executed by a processor is capable of implementing the indoor positioning method as described in any implementation manner of the first aspect.
The indoor positioning method provided by the embodiment of the present disclosure comprises: first, acquiring a target image shot at the current indoor position; then, matching the target image for a preset duration against the image data of at least two candidate floors available for positioning queries; next, if no matched positioning result is obtained within the preset duration, determining the target indoor floor using the Bluetooth positioning information accumulated during that duration; finally, determining, in the image data corresponding to the target indoor floor, the target indoor position corresponding to the target image.
The method combines a pure visual positioning algorithm, which obtains the target indoor position purely by matching the target image against the image data of at least two candidate floors, with a Bluetooth visual positioning algorithm, which first determines the target indoor floor from Bluetooth positioning information and then matches the target image against that floor's image data. That is, it first exploits the higher accuracy of the pure visual algorithm, and when no positioning result has been obtained after the preset duration, it falls back on the Bluetooth positioning information accumulated in the meantime to obtain floor information that is as accurate as possible, so that the target indoor position is finally obtained in less time and with higher accuracy.
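The fallback logic described above can be sketched as follows. All stage implementations (`match_floors`, `collect_bluetooth`, `floor_from_bluetooth`) are hypothetical callables not specified by the disclosure, and for simplicity the sketch runs the Bluetooth collection before the visual matching rather than concurrently, as a real implementation would:

```python
def indoor_position(target_image, candidate_floors, preset_s,
                    match_floors, collect_bluetooth, floor_from_bluetooth):
    """Sketch of the claimed control flow. match_floors(image, floors, timeout)
    returns a position or None; collect_bluetooth accumulates Bluetooth
    positioning information for preset_s seconds; floor_from_bluetooth maps
    that information to a floor identifier."""
    bt_info = collect_bluetooth(preset_s)              # accumulated alongside matching
    pos = match_floors(target_image, candidate_floors, preset_s)
    if pos is not None:
        return pos                                     # pure visual result arrived in time
    floor = floor_from_bluetooth(bt_info)              # fall back to Bluetooth floor
    return match_floors(target_image, [floor], None)   # match only that floor's image data
```

The two `match_floors` calls differ only in the floor set searched, mirroring the observation later in the disclosure that the pure visual algorithm and the second half of the Bluetooth visual algorithm are essentially identical.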
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture to which the present disclosure may be applied;
fig. 2 is a flowchart of an indoor positioning method provided in an embodiment of the present disclosure;
fig. 3 is a flowchart of a method for determining a target indoor floor in an indoor positioning method provided by an embodiment of the present disclosure;
fig. 4 is a flowchart of another method for determining a target indoor floor in an indoor positioning method provided by an embodiment of the present disclosure;
fig. 5 is a flowchart of another indoor positioning method provided in the embodiments of the present disclosure;
fig. 6 is a flowchart of an indoor navigation method provided in an embodiment of the present disclosure;
fig. 7 is a block diagram of an indoor positioning apparatus provided in the embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an electronic device suitable for performing an indoor positioning method according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the relevant laws and regulations, necessary security measures are taken, and public order and good customs are not violated.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the indoor positioning method, apparatus, electronic device, and computer-readable storage medium of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a mobile terminal 101, a bluetooth terminal 102, a network 103, and a server 104. The network 103 is used to provide a medium for communication links between the mobile terminal 101, the bluetooth terminal 102 and the server 104. Network 103 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may interact with the bluetooth terminal 102 using the mobile terminal 101 and interact with the server 104 through the network 103 to receive or transmit messages and the like. The mobile terminal 101, the bluetooth terminal 102, and the server 104 may be installed with various applications for implementing information communication therebetween, such as a bluetooth visual positioning application, a pure visual positioning application, an indoor positioning application, and the like.
The mobile terminal 101, the bluetooth terminal 102 and the server 104 may be hardware or software. When the mobile terminal 101 is hardware, it may be various portable and movable electronic devices with a display screen and a camera, including but not limited to a smart phone, a tablet computer, a laptop computer, etc.; when the mobile terminal 101 is software, it may be installed in the electronic device listed above, and it may be implemented as multiple software or software modules, or may be implemented as a single software or software module, and is not limited herein. When the bluetooth terminal 102 is hardware, it may be various electronic devices embedded with bluetooth communication components; when the bluetooth terminal 102 is software, it can be installed in these electronic devices. When the server 104 is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or may be implemented as a single server; when the server is software, the server may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module, which is not limited herein.
The server 104 may provide various services through various built-in applications. Taking an indoor positioning application that provides an indoor positioning service as an example, the server 104 may achieve the following when running it: first, acquire from the mobile terminal 101 through the network 103 a target image that the user shot at the current indoor position with the mobile terminal 101; then, match the target image for a preset duration against the stored image data of at least two candidate floors; next, if no matched positioning result is obtained within the preset duration, determine the target indoor floor using the Bluetooth positioning information accumulated during that duration (received from the interaction data between the mobile terminal 101 and the indoor-deployed Bluetooth terminal 102); finally, determine the target indoor position corresponding to the target image from the stored image data corresponding to the target indoor floor.
Since the indoor positioning in the above manner not only needs to occupy more computing resources and stronger computing power, but also needs to store a large amount of image data of each floor in advance, the indoor positioning method provided in the following embodiments of the present disclosure is generally executed by the server 104 having stronger computing power, more computing resources and larger data storage capacity, and accordingly, the indoor positioning device is generally also disposed in the server 104. However, it should be noted that when the mobile terminal 101 also has the calculation capability, calculation resource and storage capability that meet the requirements, the mobile terminal 101 may also complete the above calculations performed by the server 104 through the indoor positioning application installed thereon, and then output the same result as the server 104. Especially, when there are multiple terminal devices with different computing capabilities, but the indoor positioning application determines that the terminal device has a strong computing capability and a large amount of computing resources are left, the terminal device can execute the above operations, so as to appropriately reduce the computing pressure of the server 104, and accordingly, the indoor positioning device can also be installed in the mobile terminal 101. In such a case, the exemplary system architecture 100 may also not include the server 104 and the network 103.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring to fig. 2, fig. 2 is a flowchart of an indoor positioning method according to an embodiment of the disclosure, where the process 200 includes the following steps:
step 201: acquiring a target image shot at the indoor current position;
this step is intended to acquire, by an execution subject (e.g., the server 104 shown in fig. 1) of the indoor positioning method, a target image that is taken by a user at a current indoor position through an intelligent mobile terminal (e.g., the mobile terminal 101 shown in fig. 1) equipped with a camera assembly.
The target image may be an image the user captures at the current indoor position in any direction, but to speed up matching as much as possible, it may instead be an image of a landmark object on the current floor, such as a shop signboard on the 4th floor of a shopping mall or a mascot sculpture on the 7th floor.
Step 202: matching the target image against the image data of at least two candidate floors for a preset duration;
On the basis of step 201, this step is intended to have the execution body match the target image in the image data of at least two candidate floors, performing the matching operation for a preset duration. The number of candidate floors may equal the total number of floors or be smaller (for example, because some floors are not open to ordinary users).
The positioning method that matches the target image only against the image data of at least two candidate floors and tries to determine the indoor position from the matching result is generally called a pure visual positioning algorithm, i.e., as the name suggests, an algorithm that positions purely from the content of visual images. This step can therefore also be expressed as matching the target image with a pure visual positioning algorithm for the preset duration.
It should be noted that the matching the pure visual positioning algorithm performs over the image data of at least two candidate floors is not necessarily ordered; that is, it need not finish one floor's image data before moving on to the next, and may instead match image data drawn at random across all floors. This can speed up the output of a matching result in some cases and slow it down in others.
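As an illustration, the timed, unordered matching described above might look like the following sketch, where image matching is reduced to descriptor equality purely for demonstration (the function name and data layout are assumptions, not part of the disclosure):

```python
import random
import time

def pure_visual_match(target_desc, floor_images, timeout_s):
    """Match a target descriptor against candidate-floor image data in random
    cross-floor order until a hit is found or the preset duration expires."""
    pool = [(floor, img) for floor, imgs in floor_images.items() for img in imgs]
    random.shuffle(pool)                      # unordered matching across floors
    deadline = time.monotonic() + timeout_s
    for floor, img in pool:
        if time.monotonic() > deadline:
            return None                       # preset duration elapsed without a result
        if img == target_desc:                # stand-in for real image matching
            return (floor, img)
    return None
```

Returning `None` on timeout is exactly the condition that triggers the Bluetooth fallback of step 203.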
Step 203: in response to no matched positioning result being obtained within the preset duration, determining the target indoor floor using the Bluetooth positioning information accumulated during that duration;
In this step, for the case where no matched positioning result has been output by the pure visual positioning algorithm within the preset duration, the execution body starts determining the target indoor floor where the user is currently located using the Bluetooth positioning information accumulated during the preceding preset duration. The implicit point is that collection and accumulation of Bluetooth positioning information begins at the same moment the matching operation of step 202 is started with the pure visual positioning algorithm.
The Bluetooth positioning information is information from which the position of the user's mobile terminal indoors can be determined, obtained from Bluetooth signals emitted by indoor installations or read over Bluetooth. It is accumulated for the preset duration to avoid the floor-positioning errors that relying on only instantaneously received Bluetooth information can cause.
The Bluetooth positioning information in the present disclosure is collected only where the owner of the Bluetooth device concerned has authorized the emission of its Bluetooth signal or the reading of positioning information over Bluetooth, in compliance with the relevant requirements.
Step 204: in the image data corresponding to the target indoor floor, a target indoor position corresponding to the target image is specified.
On the basis of step 203, this step is intended to have the execution body match the target image only against the image data corresponding to the target indoor floor and determine, from the matching result, the target indoor position corresponding to the target image. The target indoor position comprises at least the floor and the specific position within that floor.
That is, the implementation provided in steps 203 and 204 can be regarded as an improvement on the standard Bluetooth visual positioning algorithm, which determines the floor information from the Bluetooth positioning information of the first second alone, then matches the target image against the image data corresponding to that floor, and is used independently rather than in cooperation with other positioning algorithms. Step 203 can also be viewed as supplying floor information to the pure visual positioning algorithm so as to narrow the image data to be matched for further positioning, since the technical means of the pure visual positioning algorithm and the second half of the Bluetooth visual positioning algorithm are practically identical.
The indoor positioning method provided by the embodiment of the present disclosure combines a pure visual positioning algorithm, which obtains the target indoor position purely by matching the target image against the image data of at least two candidate floors, with a Bluetooth visual positioning algorithm, which determines the target indoor floor from Bluetooth positioning information and then matches the target image against that floor's image data. It first exploits the higher accuracy of the pure visual algorithm, and when no positioning result has been obtained after the preset duration, it uses the Bluetooth positioning information accumulated in the meantime to obtain floor information that is as accurate as possible, so that the target indoor position is finally obtained in less time and with higher accuracy.
To aid the understanding of how step 203 determines the target indoor floor from the accumulated Bluetooth positioning information, the present disclosure also provides two different implementations through fig. 3 and fig. 4, wherein the process 300 shown in fig. 3 includes the following steps:
step 301: reading floor information from a positioning Bluetooth tag preset in a floor;
step 302: and in response to the fact that the same floor information is read from the positioning Bluetooth tags exceeding the preset number within the preset time length, determining the floor corresponding to the floor information as a target indoor floor.
This embodiment make full use of promptly sets up the bluetooth label for location in different floors in advance, and the floor information that useful bluetooth mode can be read is saved in this bluetooth label for location, then promotes the accuracy of floor information through length and predetermined quantity in predetermineeing, because can read the bluetooth label for location in two adjacent floors simultaneously in some topography.
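A minimal sketch of this threshold check, assuming the readings have already been accumulated over the preset duration into a mapping from tag id to the floor it reported (the names and the at-least-threshold semantics are illustrative only):

```python
from collections import Counter

def floor_from_tags(tag_readings, min_tags):
    """Process-300 sketch: the floor reported by at least min_tags distinct
    positioning Bluetooth tags within the preset duration wins; otherwise
    no floor can be decided yet."""
    if not tag_readings:
        return None
    votes = Counter(tag_readings.values())    # distinct tags voting per floor
    floor, count = votes.most_common(1)[0]
    return floor if count >= min_tags else None
```

Requiring agreement across several distinct tags is what suppresses the adjacent-floor ambiguity mentioned above.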
The process 400 shown in FIG. 4 includes the following steps:
step 401: collecting a transmitted Bluetooth signal from Bluetooth equipment arranged in a floor;
step 402: reading floor setting information from a Bluetooth device which continuously sends a horizontal Bluetooth signal within a preset time;
the horizontal direction is less than 90 degrees with the horizontal line that floor ground pointed, and the bluetooth signal that sends from the bluetooth equipment of the last floor or next floor that sets up in the floor of place is removed carelessly through the horizontal direction promptly, guarantees as far as possible that the bluetooth signal that is used for distinguishing floor setting information all sends from the bluetooth equipment of the same floor.
Step 403: and determining the floor corresponding to the floor setting information as a target indoor floor.
Unlike the process 300 shown in fig. 3, the present embodiment provides a manner of determining a target indoor floor by using bluetooth devices existing on a current floor and attempting to read floor setting information from the bluetooth devices according to a preset duration and a direction of a bluetooth signal. The method is more suitable for indoor scenes in which positioning Bluetooth tags are not specially arranged on each floor.
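Process 400 might be sketched as below, with observations assumed to be tuples of device id, floor setting information, signal elevation angle relative to the floor's horizontal plane, and timestamp (all names and the angle convention are assumptions for illustration):

```python
def floor_from_devices(observations, window_s):
    """Keep only devices whose signals stayed horizontally oriented
    (|elevation| < 90 degrees) across at least window_s seconds of
    continuous observation, then take the floor setting information
    of such a device."""
    by_device = {}
    for dev, floor, elev, ts in observations:
        by_device.setdefault(dev, []).append((elev, ts, floor))
    for dev, readings in by_device.items():
        span = max(t for _, t, _ in readings) - min(t for _, t, _ in readings)
        if span >= window_s and all(abs(e) < 90 for e, _, _ in readings):
            return readings[0][2]             # floor info from a same-floor device
    return None
```

Devices whose signals arrive from directly above or below (elevation magnitude of 90 degrees) are excluded, matching the filtering rationale of step 402.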
Referring to fig. 5, fig. 5 is a flowchart of another indoor positioning method provided in an embodiment of the present disclosure, where the process 500 includes the following steps:
step 501: acquiring a target image shot at the indoor current position;
step 502: collecting Bluetooth positioning information by using a Bluetooth visual positioning algorithm;
step 503: matching target images in the image data of at least two candidate floors by using a pure visual positioning algorithm;
as shown in fig. 5, step 502 and step 503 are executed simultaneously, and it should be noted that the bluetooth visual positioning algorithm executed in step 502 only starts part of the function of collecting bluetooth positioning information.
Step 504: judging whether the pure visual positioning algorithm has obtained a positioning result after the duration T1; if so, executing step 509; otherwise, executing steps 505 and 507 simultaneously;
This step is intended to have the execution body judge whether the pure visual positioning algorithm has obtained a positioning result after the duration T1, and to select a processing branch according to the judgment result.
Since the pure visual positioning algorithm must match the image data of at least two candidate floors, it usually takes longer but yields a more accurate result; if it can output a result within the set duration T1, the subsequent steps of the Bluetooth visual positioning algorithm need not be started.
Step 505: determining the target indoor floor by using the Bluetooth positioning information accumulated in the T1 time period;
this step is based on the determination result of step 504 being that the pure visual positioning algorithm does not obtain a positioning result after the time duration of T1, and is intended to enable the subsequent execution operation of the bluetooth visual positioning algorithm by the execution subject, that is, to determine the target indoor floor using the bluetooth positioning information accumulated during the time duration of T1.
Step 506: matching the target image in the image data of the target indoor floor using the Bluetooth visual positioning algorithm;
On the basis of step 505, this step is intended to have the execution body match the target image in the image data of the target indoor floor using the Bluetooth visual positioning algorithm, attempting to obtain a positioning result by that algorithm.
Step 507: continuing to match the target image in the image data of the remaining candidate floors using the pure visual positioning algorithm;
This step likewise follows from the determination in step 504 that the pure visual positioning algorithm obtained no positioning result after T1, and is intended to have the execution body continue matching the target image in the image data of the remaining, not-yet-matched candidate floors using the pure visual positioning algorithm.
As shown in fig. 5, steps 505 and 507 share the trigger condition that the pure visual positioning algorithm obtained no positioning result after T1, so the two are triggered simultaneously and race to see which can output a positioning result first. The race is used because it is unknown how much matching the pure visual positioning algorithm completed during the preceding T1; under the unordered, random matching scheme it may still output a positioning result sooner than the Bluetooth visual positioning algorithm.
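The race between steps 505/506 and 507 can be expressed with standard concurrency primitives; the sketch below assumes both branches are packaged as zero-argument callables and only makes a best-effort attempt to cancel the losing branch:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def race(pure_visual, bluetooth_visual):
    """Run the remaining pure visual matching and the Bluetooth visual
    matching concurrently; return the name and result of whichever
    finishes first (the first-obtained matching result of step 508)."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(pure_visual): "pure_visual",
                   pool.submit(bluetooth_visual): "bluetooth_visual"}
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()                        # best effort: a running branch cannot be cancelled
        first = next(iter(done))
        return futures[first], first.result()
```

A production system would additionally apply the reliability arbitration described below rather than always taking the first finisher.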
Step 508: generating the positioning result from whichever matching result is obtained first;
On the basis of steps 506 and 507, this step is intended to have the execution body generate the positioning result from the matching result obtained first, following the time-minimization principle in the absence of other requirements.
Particularly, the pure visual positioning algorithm is more reliable than the positioning result output by the Bluetooth visual positioning, so that when the pure visual positioning algorithm is earlier than the Bluetooth visual positioning algorithm to obtain the target indoor position, the target indoor position obtained by the pure visual positioning algorithm can be used as the final positioning result, and the Bluetooth visual positioning algorithm is stopped executing.
In addition, if the bluetooth visual positioning algorithm is earlier than the pure visual positioning algorithm to obtain the target indoor position, the error return frequency corresponding to the target indoor position obtained by the bluetooth visual positioning algorithm can be counted at the moment, namely the error return frequency represents the feedback frequency of the user for the target indoor position given at the moment; and then setting a waiting time with a corresponding size according to the error return frequency, wherein the waiting time is used for selecting the target indoor position obtained by the pure visual algorithm as a final positioning result when the Bluetooth visual positioning algorithm is earlier than the target indoor position obtained by the pure visual algorithm but the early time is not longer than the waiting time. Namely, when the time advantage of the Bluetooth visual positioning algorithm is less than that of the pure visual positioning algorithm, the pure visual positioning algorithm with higher reliability is still adopted to output the result.
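The selection rule described here can be sketched as follows; the function names, the argument layout, and the linear mapping from error return frequency to waiting duration are illustrative assumptions, not part of the disclosure:

```python
def select_final_result(bt_result, bt_time, pv_result, pv_time, wait_duration):
    """Pick the final positioning result from the two racing algorithms.

    bt_result / bt_time: Bluetooth visual positioning result and arrival time (s).
    pv_result / pv_time: pure visual positioning result and arrival time (s).
    wait_duration: tolerance derived from the error return frequency.
    """
    if pv_time <= bt_time:
        # Pure visual positioning finished first: it is the more reliable
        # algorithm, so its result is final and the Bluetooth branch stops.
        return pv_result
    if bt_time + wait_duration >= pv_time:
        # Bluetooth finished first, but its lead does not exceed the waiting
        # duration: still prefer the more reliable pure visual result.
        return pv_result
    # Bluetooth's lead is large enough to justify returning its result.
    return bt_result


def wait_duration_from_error_frequency(error_return_freq, base_s=1.0):
    # Assumed mapping: the more often users report the Bluetooth-derived
    # position as wrong, the longer we wait for the pure visual result.
    return base_s * (1.0 + error_return_freq)
```

For example, with a 1-second waiting duration, a Bluetooth result arriving at 6.7 s does not displace a pure visual result arriving at 7 s.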
Step 509: and outputting a positioning result.
This step applies when the determination in step 504 is that the pure visual positioning algorithm did obtain a positioning result within the duration T1, so the obtained positioning result can be output directly.
Unlike the above embodiments, this embodiment provides a combined mode of use for two independently running visual positioning algorithms, striving to output a positioning result that is as accurate as possible as quickly as possible.
Based on any of the above embodiments, after the target indoor position is determined, an indoor navigation service may further be provided according to the indoor positioning result. One implementation, given by way of example and not limitation, is the flow 600 shown in fig. 6, which includes the following steps:
step 601: responding to the received indoor navigation request, and taking the target indoor position as a navigation starting point;
step 602: acquiring shooting time of a target image and inertial motion information during the period of initiating an indoor navigation request;
step 603: and generating current indoor navigation information according to the navigation starting point and the inertial motion information.
That is, in this embodiment the target indoor position determined from the target image serves as the navigation starting point; the navigation is then dead-reckoned from the shooting time of the target image and the user's inertial motion information recorded while the indoor navigation request was being initiated, completing the current indoor navigation information corresponding to the request. Specifically, the inertial motion information may be obtained from an IMU (Inertial Measurement Unit) or through VIO (Visual-Inertial Odometry).
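The dead-reckoning step can be sketched minimally as follows, assuming the inertial motion information has already been reduced to per-step (heading, step length) samples; the function name and input format are illustrative assumptions:

```python
import math

def advance_start_point(start_xy, motion_samples):
    """Dead-reckon the navigation start point forward in the floor plane.

    start_xy: (x, y) position matched from the target image, in metres.
    motion_samples: (heading_rad, step_length_m) tuples covering the span
        from the shooting time of the target image to the navigation request.
    """
    x, y = start_xy
    for heading, step in motion_samples:
        # Each sample moves the estimate one step along its heading.
        x += step * math.cos(heading)
        y += step * math.sin(heading)
    return (x, y)
```

The same propagation applies whether the samples come from an IMU step detector or from VIO pose deltas.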
To deepen understanding, the present disclosure further provides a specific implementation in a concrete application scenario. Suppose a user is casually browsing an eight-floor shopping center and, following a message from a friend, agrees to have lunch at store X on the 7th floor. Upon receiving the message, the user photographs the current surroundings with a smartphone and tries to start indoor navigation to store X.
To achieve the above object, the following steps can be performed:
1) the user uploads the target image captured with the smartphone to an indoor positioning application;
2) the indoor positioning application uploads the target image to a cloud server configured with the pure visual positioning algorithm for full matching (matching in the image data of all candidate floors), and meanwhile tries to collect Bluetooth positioning signals from the surroundings;
3) when, after 5 seconds, the indoor positioning application has still not received a positioning result from that server, it determines from the Bluetooth positioning information received during those 5 seconds that the floor is the 5th floor, and notifies the server configured with the Bluetooth visual positioning algorithm of this floor information for targeted matching;
4) at 6.7 seconds the indoor positioning application receives the positioning information "middle position of the 5th floor" returned by the server configured with the Bluetooth visual positioning algorithm, and at 7 seconds it receives the positioning information "middle position of the 4th floor" returned by the server configured with the pure visual positioning algorithm;
5) since the time difference of 0.3 seconds is less than the preset 1 second, the indoor positioning application selects the "middle position of the 4th floor" as the final indoor positioning result;
6) the indoor positioning application sends the "middle position of the 4th floor" to the indoor navigation application, which ultimately guides the user to store X on the 7th floor.
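The client-side orchestration of this scenario can be sketched as below; the RPC helpers, their signatures, and the thread-based layout are assumptions made for illustration:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def locate(target_image, collect_bluetooth_floor, pure_visual_rpc,
           bt_visual_rpc, timeout_s=5.0):
    """Return (winning_algorithm, positioning_result)."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        # Step 2): full match on the pure visual server, all floors.
        pv = pool.submit(pure_visual_rpc, target_image)
        done, _ = wait([pv], timeout=timeout_s)
        if pv in done:
            return ("pure_visual", pv.result())
        # Step 3): no result after timeout_s; fix the floor from the
        # Bluetooth signals accumulated meanwhile, then start the targeted
        # Bluetooth visual match. Both branches now race (steps 4-5).
        floor = collect_bluetooth_floor()
        bt = pool.submit(bt_visual_rpc, target_image, floor)
        done, _ = wait([pv, bt], return_when=FIRST_COMPLETED)
        winner = pv if pv in done else bt  # prefer pure visual on a tie
        name = "pure_visual" if winner is pv else "bt_visual"
        return (name, winner.result())
```

The waiting-duration tie-break from the earlier embodiments would be layered on top of this selection; it is omitted here for brevity.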
With further reference to fig. 7, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of an indoor positioning apparatus, which corresponds to the method embodiment shown in fig. 2, and which can be applied in various electronic devices.
As shown in fig. 7, the indoor positioning apparatus 700 of the present embodiment may include: a target image acquisition unit 701, a candidate floor matching and positioning unit 702, a floor Bluetooth positioning unit 703, and a target indoor position determination unit 704. The target image acquisition unit 701 is configured to acquire a target image captured at the current indoor position; the candidate floor matching and positioning unit 702 is configured to match the target image in the image data of at least two candidate floors for a preset duration, the candidate floors being floors available for positioning queries; the floor Bluetooth positioning unit 703 is configured to determine the target indoor floor using the Bluetooth positioning information accumulated during the preset duration, in response to no matching positioning result being obtained within that duration; the target indoor position determination unit 704 is configured to determine, in the image data corresponding to the target indoor floor, the target indoor position corresponding to the target image.
In the indoor positioning apparatus 700 of the present embodiment, the specific processing of the target image acquisition unit 701, the candidate floor matching and positioning unit 702, the floor Bluetooth positioning unit 703, and the target indoor position determination unit 704, together with its technical effects, can refer to the related descriptions of steps 201 to 204 in the embodiment corresponding to fig. 2, and is not repeated here.
In some optional implementations of this embodiment, the floor bluetooth location unit 703 may be further configured to:
reading floor information from positioning Bluetooth tags pre-deployed on the floors;
and in response to the same floor information being read from more than a preset number of positioning Bluetooth tags within the preset duration, determining the floor corresponding to that floor information as the target indoor floor.
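That decision can be sketched as follows; the function name, the reading format, and the default threshold are illustrative assumptions:

```python
from collections import Counter

def floor_from_tags(tag_readings, min_tags=3):
    """Decide the target indoor floor from positioning Bluetooth tags.

    tag_readings: (tag_id, floor_info) pairs read within the preset duration.
    min_tags: the preset number of tags that must agree (strictly exceeded).
    """
    per_floor = Counter()
    seen = set()
    for tag_id, floor in tag_readings:
        # Count distinct tags per floor so one repeated tag cannot outvote.
        if (tag_id, floor) not in seen:
            seen.add((tag_id, floor))
            per_floor[floor] += 1
    if not per_floor:
        return None
    floor, votes = per_floor.most_common(1)[0]
    return floor if votes > min_tags else None
```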
In some optional implementations of this embodiment, the floor bluetooth location unit 703 may be further configured to:
collecting transmitted Bluetooth signals from Bluetooth devices arranged on the floors;
reading floor setting information from a Bluetooth device that continuously transmits a horizontal Bluetooth signal within the preset duration, where a horizontal direction is one whose included angle with the horizontal line of the floor ground is less than 90 degrees;
and determining the floor corresponding to the floor setting information as the target indoor floor.
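A sketch of this variant, assuming the collected signals have already been grouped per device together with their measured angles; the names and the input format are illustrative assumptions:

```python
def floor_from_horizontal_devices(device_samples):
    """Pick the floor from devices whose signal stayed horizontal.

    device_samples: maps device id to (floor_setting, [angle_deg, ...]),
        each angle being the measured included angle with the floor's
        horizontal line (input format is an assumption of this sketch).
    """
    for device_id, (floor_setting, angles) in device_samples.items():
        # "Continuously horizontal": every sample within the preset
        # duration satisfies the < 90 degree condition.
        if angles and all(angle < 90.0 for angle in angles):
            return floor_setting
    return None
```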
In some optional implementations of this embodiment, the indoor positioning apparatus 700 may further include:
a continued matching control unit, configured to control the pure visual positioning algorithm to continue matching in the image data of the remaining candidate floors based only on the target image, and to control the Bluetooth visual positioning algorithm to continue matching, based on the target image, in the image data corresponding to the target indoor floor;
and a final positioning result determination unit, configured to, in response to the pure visual positioning algorithm obtaining the target indoor position earlier than the Bluetooth visual positioning algorithm, take the target indoor position obtained by the pure visual positioning algorithm as the final positioning result and stop execution of the Bluetooth visual positioning algorithm.
In some optional implementations of this embodiment, the indoor positioning apparatus 700 may further include:
an error return frequency counting unit, configured to count, in response to the Bluetooth visual positioning algorithm obtaining the target indoor position earlier than the pure visual positioning algorithm, an error return frequency corresponding to the target indoor position obtained by the Bluetooth visual positioning algorithm;
and a waiting duration setting unit, configured to set a waiting duration of a corresponding size according to the error return frequency, the waiting duration being used for selecting the target indoor position obtained by the pure visual positioning algorithm as the final positioning result when the Bluetooth visual positioning algorithm obtains the target indoor position earlier than the pure visual positioning algorithm but by no more than the waiting duration.
In some optional implementations of this embodiment, the indoor positioning apparatus 700 may further include:
a navigation start point determination unit configured to take the target indoor position as a navigation start point in response to the received indoor navigation request;
an inertial motion information acquisition unit configured to acquire a photographing time of a target image and inertial motion information during initiation of an indoor navigation request;
and a current indoor navigation information generating unit configured to generate current indoor navigation information according to the navigation starting point and the inertial motion information.
This embodiment is the apparatus counterpart of the above method embodiment. The indoor positioning apparatus it provides integrates a pure visual positioning algorithm, which obtains the target indoor position by matching the target image against the image data of at least two candidate floors, with a Bluetooth visual positioning algorithm, which first determines the target indoor floor using Bluetooth positioning information and then matches the target image against the image data of that floor. That is, the more accurate results of the pure visual positioning algorithm are exploited first; when no positioning result is obtained within the preset duration, floor information that is as accurate as possible is derived from the Bluetooth positioning information accumulated during that period, so that the target indoor position is ultimately obtained both faster and more accurately.
According to an embodiment of the present disclosure, the present disclosure also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions, when executed by the at least one processor, enable the at least one processor to implement the indoor positioning method described in any of the above embodiments.
According to an embodiment of the present disclosure, there is also provided a readable storage medium storing computer instructions which, when executed, enable a computer to implement the indoor positioning method described in any of the above embodiments.
The embodiments of the present disclosure provide a computer program product, which when executed by a processor is capable of implementing the indoor positioning method described in any of the embodiments above.
FIG. 8 illustrates a schematic block diagram of an example electronic device 800 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read-Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host — a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in conventional physical hosts and Virtual Private Server (VPS) services.
According to the technical solution of the embodiments of the present disclosure, a pure visual positioning algorithm, which matches the target image in the image data of at least two candidate floors to obtain the target indoor position, is integrated with a Bluetooth visual positioning algorithm, which first determines the target indoor floor from Bluetooth positioning information and then matches the target image in the image data of that floor. That is, the more accurate results of the pure visual positioning algorithm are exploited first; when no positioning result is obtained within the preset duration, floor information that is as accurate as possible is derived from the Bluetooth positioning information accumulated during that period, so that the target indoor position is ultimately obtained both faster and more accurately.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (15)
1. An indoor positioning method, comprising:
acquiring a target image shot at the indoor current position;
matching the target image in the image data of at least two candidate floors for a preset duration; wherein the candidate floors are floors available for positioning queries;
in response to the fact that the matched positioning result cannot be obtained within the preset time length, determining the target indoor floor by utilizing Bluetooth positioning information accumulated within the preset time length;
and determining a target indoor position corresponding to the target image in the image data corresponding to the target indoor floor.
2. The method of claim 1, wherein the determining the target indoor floor using the accumulated bluetooth positioning information for the preset time period comprises:
reading floor information from a positioning Bluetooth tag preset in a floor;
and in response to the fact that the same floor information is read from the positioning Bluetooth tags exceeding the preset number within the preset time length, determining the floor corresponding to the floor information as the target indoor floor.
3. The method of claim 1, wherein the determining the target indoor floor using the accumulated bluetooth positioning information for the preset time period comprises:
collecting a transmitted Bluetooth signal from Bluetooth equipment arranged in a floor;
reading floor setting information from the Bluetooth device that continuously transmits a horizontal Bluetooth signal within the preset duration; wherein a horizontal direction is one whose included angle with the horizontal line of the floor ground is less than 90 degrees;
and determining a floor corresponding to the floor setting information as the target indoor floor.
4. The method of claim 1, further comprising:
controlling a pure visual positioning algorithm to continue to perform matching in the image data of the remaining candidate floors based only on the target image, and controlling a Bluetooth visual positioning algorithm to continue to perform matching in the image data corresponding to the target indoor floor based on the target image;
and in response to the pure visual positioning algorithm obtaining the target indoor position earlier than the Bluetooth visual positioning algorithm, taking the target indoor position obtained by the pure visual positioning algorithm as a final positioning result, and stopping execution of the Bluetooth visual positioning algorithm.
5. The method of claim 4, further comprising:
in response to the Bluetooth visual positioning algorithm obtaining the target indoor position earlier than the pure visual positioning algorithm, counting an error return frequency corresponding to the target indoor position obtained by the Bluetooth visual positioning algorithm;
and setting a waiting duration of a corresponding size according to the error return frequency; wherein the waiting duration is used for selecting the target indoor position obtained by the pure visual positioning algorithm as the final positioning result when the Bluetooth visual positioning algorithm obtains the target indoor position earlier than the pure visual positioning algorithm but by no more than the waiting duration.
6. The method of any of claims 1-5, further comprising:
responding to the received indoor navigation request, and taking the target indoor position as a navigation starting point;
acquiring shooting time of the target image and inertial motion information during the period of initiating the indoor navigation request;
and generating current indoor navigation information according to the navigation starting point and the inertial motion information.
7. An indoor positioning device comprising:
a target image acquisition unit configured to acquire a target image photographed at a current position in a room;
a candidate floor matching and positioning unit configured to match the target image in the image data of at least two candidate floors for a preset duration; wherein the candidate floors are floors available for positioning queries;
a floor Bluetooth positioning unit configured to determine a target indoor floor using Bluetooth positioning information accumulated within the preset time period in response to failing to obtain a matching positioning result within the preset time period;
a target indoor position determination unit configured to determine a target indoor position corresponding to the target image in the image data corresponding to the target indoor floor.
8. The apparatus of claim 7, wherein the floor bluetooth location unit is further configured to:
reading floor information from a positioning Bluetooth tag preset in a floor;
and in response to the fact that the same floor information is read from the positioning Bluetooth tags exceeding the preset number within the preset time length, determining the floor corresponding to the floor information as the target indoor floor.
9. The apparatus of claim 7, wherein the floor bluetooth location unit is further configured to:
collecting a transmitted Bluetooth signal from Bluetooth equipment arranged in a floor;
reading floor setting information from the Bluetooth device that continuously transmits a horizontal Bluetooth signal within the preset duration; wherein a horizontal direction is one whose included angle with the horizontal line of the floor ground is less than 90 degrees;
and determining a floor corresponding to the floor setting information as the target indoor floor.
10. The apparatus of claim 7, further comprising:
a continuation-matching control unit configured to control a pure visual positioning algorithm to continue matching in image data of remaining candidate floors based only on the target image, and to control a bluetooth visual positioning algorithm to continue matching in image data corresponding to the target indoor floor based on the target image;
a final positioning result determination unit configured to, in response to the pure visual positioning algorithm obtaining the target indoor position earlier than the Bluetooth visual positioning algorithm, take the target indoor position obtained by the pure visual positioning algorithm as a final positioning result and stop execution of the Bluetooth visual positioning algorithm.
11. The apparatus of claim 7, further comprising:
an error return frequency counting unit configured to count, in response to the Bluetooth visual positioning algorithm obtaining the target indoor position earlier than the pure visual positioning algorithm, an error return frequency corresponding to the target indoor position obtained by the Bluetooth visual positioning algorithm;
a waiting duration setting unit configured to set a waiting duration of a corresponding size according to the error return frequency; wherein the waiting duration is used for selecting the target indoor position obtained by the pure visual positioning algorithm as the final positioning result when the Bluetooth visual positioning algorithm obtains the target indoor position earlier than the pure visual positioning algorithm but by no more than the waiting duration.
12. The apparatus of any of claims 7-11, further comprising:
a navigation start point determination unit configured to take the target indoor position as a navigation start point in response to the received indoor navigation request;
an inertial motion information acquisition unit configured to acquire a photographing time of the target image and inertial motion information during initiation of the indoor navigation request;
a current indoor navigation information generating unit configured to generate current indoor navigation information according to the navigation start point and the inertial motion information.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the indoor positioning method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the indoor positioning method of any one of claims 1-6.
15. A computer program product comprising a computer program which, when executed by a processor, implements an indoor positioning method according to any one of claims 1-6.
Priority Applications (1)
CN202110858444.9A (filed 2021-07-28): Indoor positioning method, device, equipment, storage medium and computer program product
Publications (1)
CN113587917A, published 2021-11-02
Family
ID=78251224
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110858444.9A Pending CN113587917A (en) | 2021-07-28 | 2021-07-28 | Indoor positioning method, device, equipment, storage medium and computer program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113587917A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113923596A (en) * | 2021-11-23 | 2022-01-11 | 中国民用航空总局第二研究所 | Indoor positioning method, device, equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105025439A (en) * | 2015-06-26 | 2015-11-04 | 上海汇纳信息科技股份有限公司 | Indoor positioning system, applied database, indoor positioning method and indoor positioning device |
KR20160022095A (en) * | 2014-08-19 | 2016-02-29 | 엘지전자 주식회사 | Mobile terminal for determining indoor position using local area communication, method for controlling the mobile terminal, and server therefore |
CN111583343A (en) * | 2020-06-17 | 2020-08-25 | 深圳市商汤科技有限公司 | Visual positioning method and related device, equipment and storage medium |
CN111814752A (en) * | 2020-08-14 | 2020-10-23 | 上海木木聚枞机器人科技有限公司 | Indoor positioning implementation method, server, intelligent mobile device and storage medium |
CN112634366A (en) * | 2020-12-23 | 2021-04-09 | 北京百度网讯科技有限公司 | Position information generation method, related device and computer program product |
WO2021139590A1 (en) * | 2020-01-06 | 2021-07-15 | 三个机器人公司 | Indoor localization and navigation apparatus based on bluetooth and slam, and method therefor |
-
2021
- 2021-07-28 CN CN202110858444.9A patent/CN113587917A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160022095A (en) * | 2014-08-19 | 2016-02-29 | 엘지전자 주식회사 | Mobile terminal for determining indoor position using local area communication, method for controlling the mobile terminal, and server therefore |
CN105025439A (en) * | 2015-06-26 | 2015-11-04 | 上海汇纳信息科技股份有限公司 | Indoor positioning system, applied database, indoor positioning method and indoor positioning device |
WO2021139590A1 (en) * | 2020-01-06 | 2021-07-15 | 三个机器人公司 | Indoor localization and navigation apparatus based on bluetooth and slam, and method therefor |
CN111583343A (en) * | 2020-06-17 | 2020-08-25 | 深圳市商汤科技有限公司 | Visual positioning method and related device, equipment and storage medium |
CN111814752A (en) * | 2020-08-14 | 2020-10-23 | 上海木木聚枞机器人科技有限公司 | Indoor positioning implementation method, server, intelligent mobile device and storage medium |
CN112634366A (en) * | 2020-12-23 | 2021-04-09 | 北京百度网讯科技有限公司 | Position information generation method, related device and computer program product |
Non-Patent Citations (2)
Title |
---|
刘鹏 (Liu Peng): "Application and Research of Visual Mapping Technology in Indoor Positioning and Guidance in Smart Hospitals", 中国数字医学 (China Digital Medicine), no. 10, 31 October 2018 (2018-10-31) * |
张立东 (Zhang Lidong); 孙煜 (Sun Yu); 万明俊 (Wan Mingjun): "Bluetooth-Based Indoor Positioning and Navigation for Urban Rail Transit and Its Applications", 城市轨道交通研究 (Urban Mass Transit Research), no. 05, 31 May 2019 (2019-05-31) * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113923596A (en) * | 2021-11-23 | 2022-01-11 | 中国民用航空总局第二研究所 | Indoor positioning method, device, equipment and medium |
CN113923596B (en) * | 2021-11-23 | 2024-01-30 | 中国民用航空总局第二研究所 | Indoor positioning method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11961297B2 (en) | Terminal device, information processing device, object identifying method, program, and object identifying system | |
US11023717B2 (en) | Method, apparatus, device and system for processing commodity identification and storage medium | |
CN109784177A (en) | Missing crew's method for rapidly positioning, device and medium based on images match | |
EP2672401A1 (en) | Method and apparatus for storing image data | |
CN112291473B (en) | Focusing method and device and electronic equipment | |
CN113392794B (en) | Vehicle line crossing identification method and device, electronic equipment and storage medium | |
CN113361458A (en) | Target object identification method and device based on video, vehicle and road side equipment | |
WO2021003692A1 (en) | Algorithm configuration method, device, system, and movable platform | |
CN113587917A (en) | Indoor positioning method, device, equipment, storage medium and computer program product | |
CN112559884A (en) | Method and device for hooking panorama and interest point, electronic equipment and storage medium | |
CN113587928B (en) | Navigation method, navigation device, electronic equipment, storage medium and computer program product | |
CN110751853A (en) | Parking space data validity identification method and device | |
CN111696134A (en) | Target detection method and device and electronic equipment | |
CN116189249A (en) | Image recognition method and device of intelligent equipment, intelligent equipment and electronic equipment | |
CN114407024B (en) | Position leading method, device, robot and storage medium | |
CN112560726B (en) | Target detection confidence determining method, road side equipment and cloud control platform | |
CN113654548A (en) | Positioning method, positioning device, electronic equipment and storage medium | |
CN111866366A (en) | Method and apparatus for transmitting information | |
CN118644548A (en) | Method, device, electronic equipment and storage medium for determining object position | |
CN115223068A (en) | Object duplication removing method and device based on video stream and object identification system | |
CN116894894A (en) | Method, apparatus, device and storage medium for determining motion of avatar | |
CN115713614A (en) | Image scene construction method and device, electronic equipment and storage medium | |
CN116434552A (en) | Non-motor vehicle lane snapshot method and device, electronic equipment and storage medium | |
CN117496113A (en) | Feature detection method, device, equipment and storage medium | |
CN114202636A (en) | Generation method and device of passable road model in subway station and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||